Human Still Required in the Age of Cognitive Surrender
There’s a growing body of research warning us about something called cognitive surrender, and if you’ve been working in schools lately, you’ve likely already seen it. A recent study from Wharton introduces the idea of System 3—AI as a third form of cognition—and defines cognitive surrender as the moment when “the user accepts the AI’s response without critical evaluation, substituting it for their own reasoning.” This is not simply a matter of using a tool more efficiently. It represents a shift in who is actually doing the thinking.
What makes this moment so important is that it validates a tension many educators have been feeling but struggling to name. In Human Still Required, I wrote, “When speed becomes the measure of success, thinking gets compressed. When output becomes easier, judgment gets outsourced.” (p. 3) The research now gives us empirical backing for that observation. It shows that when people engage with AI, they are more likely to adopt its answers—even when those answers are incorrect—and their confidence increases regardless of accuracy. In other words, the more seamless the tool becomes, the more likely we are to surrender the very cognitive work we claim to value.
This is why it is critical to understand that AI did not introduce this problem; it revealed it. As I argue early in the book, “AI didn’t break school. It revealed what was already fragile.” (p. 7) For decades, schools have operated within systems that rewarded completion, compliance, and efficiency. As I note, “When assignments ask students to summarize, restate, or replicate information, AI can complete them instantly… the task was never asking for much thinking to begin with.” (p. 11) AI’s ability to finish such tasks instantly doesn’t undermine learning; it exposes that those tasks never demanded much learning in the first place. The research on cognitive surrender simply extends this realization: when thinking is optional, people will opt out of it.
What should concern educators most is not whether students are using AI, but how that use is shaping their cognitive habits. The deeper issue is not academic dishonesty; it is cognitive atrophy. As I write, “The real risk isn’t cheating. It’s atrophy.” (p. 59) The research echoes this concern by describing cognitive surrender as a “relinquishing of cognitive control,” where individuals stop engaging in deliberative reasoning altogether. Over time, this has profound implications. When learners consistently accept answers without questioning them, bypass the struggle of making sense of information, and default to the first fluent response, they are not just completing tasks more efficiently—they are practicing not thinking.
This moment also forces a long-overdue reckoning with how we define evidence of learning. For years, schools have relied on polished outputs—essays, projects, presentations—as proxies for understanding. But as I argue in the book, “If AI can do it instantly, it was never evidence of learning in the first place.” (p. 79) The research reinforces this point in a different way. It shows that human accuracy becomes dependent on AI accuracy: when the system is correct, performance improves; when it is wrong, performance declines. Human reasoning is no longer stabilizing outcomes—it is being supplanted by the system’s output. This should cause us to rethink not just how students complete work, but what we are actually measuring when we assess it.
If there is a path forward, it lies in reclaiming what has always mattered but has too often been overlooked. In Human Still Required, I make the shift explicit: “We are not assessing what AI can produce. We are assessing what people can explain, defend, and revise.” (p. 75) This is where the research and the practice align. If AI is now capable of generating answers, then the value of human contribution must shift to judgment, reasoning, and the ability to interrogate those answers. The classroom must become a place where thinking is made visible, where students are expected not just to produce, but to justify, critique, and adapt their ideas.
Ultimately, this is not a conversation about tools; it is a conversation about responsibility. The research makes clear that as AI becomes more embedded in our cognitive processes, the line between human and machine reasoning begins to blur. But that does not absolve us of accountability. As I write, “Judgment is the work… Tools can inform decisions—but responsibility cannot be delegated.” (p. 100) AI can draft, suggest, and analyze, but it cannot carry context, values, or consequences. Those remain fundamentally human responsibilities.
This is the tension educators must now hold. AI is not going away, nor should it. It has the potential to support thinking, extend it, and even improve it when used well. But without intentional design, it will just as easily replace it. The rise of cognitive surrender is not inevitable, but it is predictable. And if we do not respond by redesigning learning around explanation, judgment, and accountability, we risk building systems where students appear more capable than ever while becoming less capable of independent thought.
The future of education will not be defined by how well students use AI. It will be defined by whether they continue to think in a world where they no longer have to. And that is why, now more than ever, human is still required.