What If We’re Just Pattern Matchers Too?
The Illusion of Knowing: what Apple’s latest AI paper gets right, and what it might miss
Apple’s recent paper, The Illusion of Thinking, lands like a mic drop in the AI world.
Its core thesis? That Large Language Models (LLMs) aren’t actually reasoning; they’re simulating it. Through a brilliant set of puzzle-based benchmarks, Apple’s researchers demonstrate how LLMs collapse under real compositional complexity. As problems get harder, the models start to hallucinate steps, ignore algorithms they’ve been given, and sometimes even put less effort into reasoning, producing shorter reasoning traces despite having token budget to spare.
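To make “compositional complexity” concrete: Tower of Hanoi, one of the puzzle families the paper draws on, has an optimal solution whose length doubles with every added disk, so a solver has to chain exponentially many correct moves without slipping once. A minimal sketch of that blow-up (my own illustration, not the paper’s code):

```python
def hanoi_moves(n, source="A", target="C", spare="B"):
    """Optimal move sequence for n-disk Tower of Hanoi; its length is 2**n - 1."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, source, spare, target)    # park the n-1 smaller disks on the spare peg
        + [(source, target)]                         # move the largest disk to the target
        + hanoi_moves(n - 1, spare, target, source)  # re-stack the smaller disks on top of it
    )

for n in (3, 5, 8, 10):
    print(f"{n} disks -> {len(hanoi_moves(n))} moves")   # 7, 31, 255, 1023
```

At three disks that’s 7 moves; at ten it’s already 1,023, and that is the kind of regime where the paper reports accuracy collapsing.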
But here’s my reaction after reading it:
Yes, the paper is compelling.
Yes, the findings are rigorous.
And yet, I believe it draws its conclusions too quickly. Humans are complex, messy, and beautiful, not just biological machines, and our own thinking is a far blurrier benchmark than the paper assumes.
What Is “Real” Thinking, Anyway?
The claim that “LLMs only simulate thinking” assumes we know what thinking actually is. But do we?
Cognitive science, neuroscience, and philosophy haven’t cracked that code. We don’t yet have a complete model of how the human brain produces reasoning, decisions, or insight. In fact, the deeper we dig into the brain, the more we discover just how much of our cognition is pattern-based, context-dependent, and deeply heuristic. We’re guessing too, just with confidence built from experience, and we lean on mental shortcuts far more than we realize.
🌀 Humans Might Be Pattern Matchers Too
Nobel laureate Daniel Kahneman’s research on System 1 and System 2 thinking is instructive here.
System 1 is fast, intuitive, and emotional: essentially a high-powered pattern matcher trained by experience.
System 2 is slow, effortful, and logical, but also lazy and error-prone; it avoids engaging unless absolutely necessary.
In most of our daily lives, it’s System 1 that’s running the show. We guess what someone’s about to say. We fill in ambiguous cues. We jump to conclusions before our conscious brain has even caught up.
Sound familiar?
It should, because it’s not far off from how LLMs work. They’re trained to predict the next token from linguistic patterns in massive corpora of human text, and in many cases that’s enough to generate surprisingly human-like responses. Does that make them intelligent? Maybe not in the strong sense, but it does make them relatable.
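As a toy illustration of that objective (my own, and nothing like a production model): even a crude bigram counter “predicts” by matching surface patterns it has seen before. LLMs do the same kind of next-token prediction, just with vastly richer representations and far longer context.

```python
from collections import Counter, defaultdict

# A toy bigram "model": pure pattern matching over a tiny corpus. Real LLMs learn
# far richer representations over far more context, but the training objective is
# the same flavor: given what came before, predict the next token.
corpus = "we guess what comes next and we fill in the gaps before we know it".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most often seen after `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("we"))   # 'guess' (ties break by the order patterns were counted)
```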
The Real Illusion: That We Aren’t Simulating Too
The title of Apple’s paper, The Illusion of Thinking, applies just as well to us.
We like to believe we reason logically and deliberately. But the data tells us otherwise. Cognitive biases, framing effects, false memories, confabulation: we’re not exactly paragons of rationality. We simulate. We improvise. We get it right enough, often enough, to survive and adapt.
So when we critique LLMs for being “just pattern matchers,” we should ask: Compared to what?
Where Apple Is Absolutely Right
This doesn’t mean we should ignore the paper’s findings. Apple’s team surfaces a critical weakness in current LLM architectures: their inability to scale reasoning reliably. They collapse under multi-step logic. They don’t “know what they know.” They can say a lot without saying anything useful. In high-stakes domains like healthcare, finance, and law, we need far more robust, transparent, and grounded systems.
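If you want to see that scaling weakness for yourself, one simple probe (sketched here with a placeholder ask_model stub standing in for whatever LLM client you use; none of this is the paper’s harness) is to ask for full move sequences on the Hanoi puzzle from earlier and validate them with an exact simulator as the disk count grows:

```python
import re

def valid_hanoi_solution(moves, n):
    """Simulate a proposed move list and check that it legally solves n-disk Hanoi."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}  # disk n at the bottom of peg A
    for src, dst in moves:
        if not pegs.get(src):                               # unknown peg or nothing to move
            return False
        disk = pegs[src][-1]
        if dst not in pegs or (pegs[dst] and pegs[dst][-1] < disk):
            return False                                    # larger disk placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n, 0, -1))               # everything re-stacked on C

def ask_model(prompt: str) -> str:
    """Placeholder: wire this up to whatever LLM client you actually use."""
    raise NotImplementedError

def reasoning_scaling_probe(sizes=range(3, 11)):
    """Check whether answers stay valid as the required number of steps doubles."""
    results = {}
    for n in sizes:
        prompt = (
            f"Solve Tower of Hanoi with {n} disks on pegs A, B, C "
            f"(all disks start on A and must end on C). "
            f"List the moves as 'A->C', one per line."
        )
        try:
            moves = re.findall(r"([ABC])\s*->\s*([ABC])", ask_model(prompt))
            results[n] = valid_hanoi_solution(moves, n)
        except NotImplementedError:
            results[n] = None                               # stub not wired up yet
    return results
```

The interesting output isn’t any single pass or fail but the shape of the curve: if validity drops sharply past some disk count, you’re watching the collapse the paper describes.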
What Comes Next: Tools in Our Own Image
Instead of treating LLMs as flawed versions of human thinkers, we might see them as part of a new frontier: tools shaped in our own image, brilliant, imperfect, and driven by shortcuts that often work. The real challenge isn’t to make them perfect, but to understand how they work, so we can learn to work with them. That’s the path to meaningful coexistence.