AI Is Great at First-Order Thinking
At last week’s government roundtable, AI was on full display—complete with the major hype that seems to follow it everywhere right now. The enthusiasm is real, and so is the disruption. There’s no denying that AI has the potential to reshape work, particularly at the entry level, where routine research, analysis, and basic business functions are already being automated. The question isn’t whether redundancies will happen—they will—but how leaders, policymakers, and businesses choose to manage the transition.
Because here’s the thing: AI is already a remarkable first-order research tool. It can scan oceans of information, summarize patterns, and give us a starting point faster than any analyst ever could. Used correctly, it saves time, reduces friction, and surfaces insights that might otherwise stay buried.
But most AI today still operates at the first-order level. It answers “what is out there?” rather than “what does this mean in context?” And because its outputs reflect the biases in its training data, it can replicate, or even amplify, blind spots.
The real frontier isn’t first-order answers. It’s second-order thinking—the ability to ask:
- What are the implications of this information?
- How does this insight connect to the messy realities of human judgment, incentives, and risk?
- Where might the “obvious” answer fail in practice?
Second-order understanding is what strategy, leadership, and nuanced business decisions demand. It requires perspective, context, and the humility to weigh trade-offs. That’s still very human terrain—at least for now.
The question isn’t whether AI can research. It can.
The real question is whether AI can reason deeply enough to grasp the second-order effects that make or break business decisions.
Until then, AI is the apprentice. The strategist still has to be human.