The first may have felt something like this: a thick card book of ABCs in your preschool library.
Page 1 peppered with shiny red balls: A is for apple, crunchy to bite.
Then golden crescents bunched together: B is for banana, yellow and ripe.
This 'figuring it out' approach is a top-down way to learn written language. Here, visual and contextual cues play to existing associations, allowing the child to intuit the result. Think of it as a predictive model of language. You see a picture, you know the context, you predict the word. The range of possible answers is given to you before you figure it out.
Or perhaps you experienced the second approach: glued to a chair, painstakingly 'sounding out' letters until you'd mapped each one to its sound correctly. A-P-P-L-E. Think of this as a rule-based approach, learning language from the bottom up.
In the absence of colourful cues, you're faced with a set of raw data: each character is assigned a sound, and the sounds are blended together through error correction rather than inference. While slower and certainly less fun, this approach requires each element to be validated before the conclusion can form. Only by piecing the sounds together and understanding their meaning do you arrive at the apple.
Both schools represent opposing camps in a centuries-old battle: the inferential "Whole Language" versus the rule-based "Phonics" approach. The battleground? The education system. The foot soldiers? Neuroscientists and educators. The casualty? Careful thinking.
A key thing to bear in mind here is that, unlike speaking, which develops through exposure and exchange, reading is a human invention, requiring the brain to physically rewire itself. If you put two toddlers on a desert island, they wouldn't stay silent; they would eventually invent a "pidgin" language. It’s unlikely, however, that they would ever form an alphabet. Speech is an instinct. Reading is a hack.
In a recent article for The Atlantic, educator and writer Julia Fisher suggests that early exposure to the "Whole Language" approach has baked in, or at least set a precedent for, a habit of reverse reasoning from our earliest learning. We tend to form an early reading of a situation first, then assemble the workings afterward, drawing conclusions from context rather than facts. We see the apple, then spell it.
While this pedagogy is receding from classrooms, the style of reasoning it encourages is familiar well beyond them. Much of professional life involves judgment under uncertainty. We sense a direction, form a view, and make a call. Often, this is necessary. Time is limited, and decisions have to be made. But increasingly, we're sidelining true discovery in favour of a process of confirmation.
Think of writing a report, article, or brief. A spark forms early, a central claim or potential route. The rest of the work becomes a matter of support. Once a conclusion has begun to cohere, the work shifts from finding out to making sense.
It's not difficult to imagine how large language models might amplify this tendency. AI excels at completing our thoughts. Lay down any half-baked hunch, and it will supply coherence, structure, justification, and evidence. What it rarely does, unless deliberately pushed, is introduce friction: the awkward fact, the twist in the tale, the loose thread.
Two traits stand out. First, its conventionality: the organising principle of AI is plausibility. It is strongest at what is most typical and likely; details that might force a rethink are less readily surfaced. Second, its agreeableness: give it a starting thought and, within reason, it will work to support it.
The result is a tool that smooths outliers and avoids confrontation, but genuine discovery rarely arrives as reinforcement. Copernicus' proposition that the Earth moved around the sun made existing data appear messy, interrupting a long chain of self-affirming reasoning and demanding a rethink of motion and calculation. Fleming didn't set out to discover antibiotics; he paid special attention to a failed experiment—a mouldy petri dish—and followed the inconvenience rather than discarding it.
In both cases, discoveries emerged from contradictory, not confirmatory, evidence. Predictable and pandering as language models can be, such inconvenient details are precisely what they are least well-equipped to surface.
True discovery is not so tidy, and if we want to maintain a 'bottom-up' way of developing ideas, we must invite AI to trouble our presumptions, not affirm them. One way to do this is to stop treating AI as a single, helpful voice and start using it as a room full of awkward ones. Staging opposition, however tiresome, is the best way to bring the unexpected into your thinking. Test your hunch against an artificial focus group: a sceptical regulator, a risk-averse customer, or a competitor with different incentives.
The goal here is not accuracy; the challenging perspectives needn't be right. The point is to reintroduce friction at moments when it would otherwise have been steamrolled. As a culture, we have spent decades legislating the "Whole Language" guessing game out of our schools. Now, we spend significant sums to automate it in our offices.
None of this is to deny AI's value as a partner in thinking. But a partner that instinctively agrees is not always healthy. Sometimes what's needed is radical candour, not reassurance. To lead effectively in an AI-saturated market, we must stop asking these models to guess the word from the picture. Instead, let's encourage them to contend, to 'sound out', to discover. Though slower and less fluent, this is where true understanding is found: one letter at a time.
