In These Strange New Minds: How AI Learned to Talk and What It Means, Christopher Summerfield, a Professor of Cognitive Neuroscience at Oxford, invites us into a deep and often disquieting meditation on the shifting boundary between human and machine intelligence. With a cool, clinical eye—and a surprising dose of humility—he confronts a question that has moved from speculative fiction to the urgent now: if machines can reason, what does that say about us?
Summerfield, who leads Oxford’s Human Information Processing Lab, is particularly well positioned to take on the task. His research bridges the cognitive, computational and neural sciences—disciplines that are increasingly merging into a tangled whole. His new book feels like a dispatch from that frontier, a place where the familiar architecture of the mind is being remodeled in real time, often without blueprints.
It is difficult to overstate the timeliness of Summerfield’s intervention. In an era of GPT-4s, NorthPole chips and multimodal models that claim to see, hear and reason, the popular imagination has lurched from AI euphoria to existential dread. Summerfield refuses to indulge either impulse. Instead, he offers something rarer: an honest reckoning with what today’s AI can—and, crucially, cannot—do.
His verdict on the most contentious topic—whether AI systems “reason”—is refreshingly direct. Yes, he says, they can reason within certain specialized domains. In some tasks, in fact, they outperform most humans.
It is not a new claim, but Summerfield sharpens it by tracing the lineage of machine reasoning, from the Logic Theorist of 1956, which surprised its makers by producing proofs they had not anticipated, to today's models that generate code and essays from the loosest of instructions. The question running beneath that history is whether any of this is really reasoning, or just ever-better mimicry.
Yet Summerfield is no naive booster. He is keenly aware of the brittle underpinnings of AI: its lack of embodied experience, its transaction-bound memory, its eerie hollowness. An AI may produce a convincing essay on grief, but it does not grieve. It may predict what a conscious being would say, but it does not truly think. Summerfield wields analogy with precision: a computer simulation of a hurricane, he reminds us, is not wet or windy. Likewise, no matter how coherent an AI's self-reports of "being conscious" may sound, they remain empty echoes of its training data.
At moments, the writing takes on a sweeping historical perspective. Reflecting on the rise of AI as a custodian of human knowledge, Summerfield observes:
By the fifteenth century, the printing press had come clattering into existence, kickstarting the mass dissemination of ideas. Over the past thirty years, the internet has made a large fraction of all human knowledge discoverable, allowing anyone patient enough to click through a galaxy of half-truths, vitriol and indecipherable memes. In that simpler, bygone world, human understanding advanced as people read, listened, observed—and conveyed their thoughts to others by tongue, pen or keyboard. In that more innocent world, humans were the sole custodians of everything that was known. But quite incredibly, that world is passing into history at this very moment.
In Summerfield’s view, we are plunging into a future in which AI systems not only manage the collective memory of humanity, but also reason about it, generating new insights, theories and creative outputs—all capabilities that were previously the exclusive domain of people. The technological revolution, he argues, is not merely quantitative; it is qualitative.
The prose itself is crisp, occasionally rising into something almost lyrical when Summerfield describes the mind’s intricacies. He has a talent for making technical distinctions—say, between procedural and episodic memory, or between symbolic and statistical reasoning—feel essential rather than pedantic. Yet he never loses sight of the stakes: what we choose to believe about AI will shape not only our technologies, but also our self-conceptions as a species.
The most provocative sections look at how AI is changing the way scientists think about the mind. Summerfield suggests that machines which solve problems without following explicit symbolic rules may force us to rethink how human reasoning works, too. For decades, the working assumption was that intelligence required step-by-step logic; modern AI models succeed instead by training on vast amounts of data and learning statistical patterns, which hints that our own thinking may work more like theirs than we supposed.
This is a notion at once thrilling and unsettling: thrilling because it suggests that intelligence is more accessible than we once thought; unsettling because it hints that our minds, too, might be less special and mystical than we would prefer.
Yet, if Summerfield has a melancholy side, it is always tempered by pragmatism. Consciousness, he argues, remains elusive—not merely for AI, but for cognitive science as well. He reminds us, with a scholar’s dry wit, that we still struggle to assess consciousness in octopuses, let alone in neural networks. Better, for now, to leave metaphysics aside and focus on more pressing matters: governance, transparency, accountability.
This is the other, quieter agenda of These Strange New Minds. Summerfield is acutely aware that AI is not merely a philosophical curiosity; it is a commercial juggernaut, racing ahead with few brakes. His calls for stronger public oversight and democratic engagement with AI’s trajectory are never didactic. Instead, they are woven into the fabric of the book, the background music to every discussion of algorithmic prowess.
Summerfield’s vision of the future is neither utopian nor dystopian. It is something rarer in the current discourse: sober, contingent, open-ended. AI, he seems to suggest, is not an alien force, but a distorted mirror of the world—one that reflects both our brilliance and our blindness. If we look carefully, and act wisely, we might yet shape that reflection into something worth carrying forward.
Verdict: In These Strange New Minds, Christopher Summerfield offers a lucid, thoughtful and deeply humane exploration of the boundary between human and machine intelligence, and of what that frontier reveals about our own minds. It is essential reading for anyone who wants to understand not merely where AI is headed, but what it is showing us about ourselves.