What interests me isn’t how quickly AI is being adopted. It’s the quiet in places where people used to argue, like coffee shops, open-plan offices, and classrooms. There’s a sense that thinking aloud, stumbling over an answer or making a mistake in front of others, has quietly ceased to be a valued ability.
After years of researching human cognition, theoretical neuroscientist Vivienne Ming believes we know exactly what we’re giving up. We simply don’t want to face it head-on.
| Field | Detail |
|---|---|
| Subject | The cognitive cost of generative AI on human reasoning |
| Central voice | Vivienne Ming, theoretical neuroscientist and cognitive scientist |
| Background | Founder of Socos Labs; former chief scientist at Gild |
| Core concept | The Information-Exploration Paradox |
| Field of inquiry | Cognitive science, behavioral economics, machine learning |
| Related philosophical anchor | David Chalmers’ “hard problem of consciousness” |
| Tools referenced | ChatGPT, Gemini, Claude, Grok |
| Comparative benchmark | Polymarket prediction accuracy |
| Primary publication | Wall Street Journal essay, 2025 |
| Population most affected | Students, junior developers, knowledge workers |
| Recommended readings | The Atlantic’s running coverage of AI and cognition |
| Stated risk | Erosion of critical thinking and original reasoning |
Ming recently wrote for the Wall Street Journal about an experiment that ought to worry more people than it has. She used the prediction market Polymarket to test humans, large AI models, and human-AI hybrid teams. The humans performed poorly on their own, relying only on instinct and whatever scrolled past their feeds that morning. Gemini and ChatGPT did better. But the hybrids produced the most illuminating results. The majority of them simply copied and submitted the chatbot’s answer. Others fed the model their own hypotheses and asked it to gather supporting evidence, sending it almost joyfully into the sycophancy loop these systems are built for.
Then there was the tiny fraction, perhaps five to ten percent, that behaved differently. They argued with the machine. They demanded evidence. They grew suspicious when the AI sounded confident. When they had a gut feeling, they asked the model to pick it apart.

These teams matched, and occasionally beat, the prediction market itself. The pattern is hard to ignore. The only people who got smarter were those who treated the AI as an adversary rather than an oracle.
Ming calls this broader tendency the Information-Exploration Paradox. As the cost of obtaining an answer approaches zero, the desire to truly investigate a question diminishes. Students do better on AI-assisted assignments and worse on everything that follows. Developers ship more code and understand less of it. The metrics tick upward, but there is a subtle hollowing out underneath.
Nevertheless, the chatbot keeps its human face. It performs compassion, speaks in well-formed sentences, and expresses curiosity. One critic described it as a “digital parrot in a tailored suit,” with no body, no senses, and no stake in the outcome. It guesses the next word, then the next. Beneath all of this lies David Chalmers’ long-standing hard problem of consciousness. Whatever consciousness is, it appears to be bound up with bodies, heartbeats, and the minor inconveniences of living. A statistical engine, however proficient, gets none of that for free.
The remedy the experts keep returning to is almost embarrassingly low-tech. Sit with the question for a moment. Ask what the confident answer is missing. Disagree with the authoritative voice on the screen. Work through the problem before checking the answer. The student who completes the proof by hand. The reader who lets a difficult paragraph stay difficult. None of it scales. None of it shows up on a productivity dashboard.
Maybe that’s the whole point. The tools are remarkable, but they are still tools. Whether they end up building human capacity or quietly eating it comes down largely to the small daily choice of whether we bother to think at all.
