
One consequence of artificial intelligence is how it has revealed differences and similarities between human cognition and computational processes. “The brain is a computer” is a pervasive metaphor in cognitive science, since we process, store, and retrieve information in ways that sometimes feel mechanical, even computerized. Yet AI and humans engage with the world and use language in fundamentally distinct ways. Among the most noted differences is the peculiar phenomenon of “hallucinations”: AI-generated text that presents false or misleading information as fact.
But humans hallucinate too. We just gave it a different name: the Mandela Effect. I think the Mandela Effect, not simple errors or lies, is the human cognate of AI hallucinations.
The Mandela Effect describes a collective misremembering of historical facts or cultural artifacts. Named after the shared false memory that Nelson Mandela died in prison in the 1980s, the phenomenon is widespread and seemingly cross-cultural. One famous example is the belief that actor Sinbad starred in a 1990s movie called Shazaam. In reality, no such film exists. The memory is likely an amalgamation of associations: Sinbad’s name, his comedic persona, and a vague recollection of similar movies like Kazaam, starring Shaquille O’Neal. In these moments, the brain substitutes related but incorrect details, producing a memory that feels vivid yet doesn’t correspond to reality.
AI hallucinations follow a similar pattern. Generative AI systems compose text by predicting words based on associations learned from vast datasets. But sometimes, those associations fail to align with facts. The result is text that sounds plausible and coherent but is demonstrably false.
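The failure mode described above can be illustrated with a deliberately tiny sketch: a toy bigram model (a hypothetical example for this essay, far simpler than any production language model) that learns which words follow which, then generates text by walking those associations. Because the model only knows co-occurrence, not truth, cross-wired associations can yield fluent sentences that no training sentence ever asserted.

```python
import random
from collections import defaultdict

# A toy "training corpus" mixing true but overlapping associations.
corpus = [
    "sinbad starred in a comedy",
    "shaq starred in kazaam",
    "sinbad is a comedian",
]

# Learn bigram associations: each word maps to the words seen after it.
bigrams = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a].append(b)

def generate(start, length=5, seed=0):
    """Walk learned associations word by word. The output is always
    locally plausible (every step was seen in training), but the
    sentence as a whole is never checked against any fact."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("sinbad"))
```

Depending on the random walk, the model can emit something like “sinbad starred in kazaam”: every adjacent word pair was learned from a true sentence, yet the composite claim is false. That, in miniature, is the associative slippage shared by AI hallucinations and the Mandela Effect.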
We know that hallucinations are an inevitable consequence of how generative AI works, since it produces text from associations and probabilities. But watching AI hallucinate also illuminates the mystery of the Mandela Effect in humans and showcases the limitations of associative reasoning. Humans and AI alike depend on patterns and connections to make sense of the world. But when those connections are incomplete or misaligned, the result can be a version of reality that is “almost correct but not quite.”
This raises questions about how we navigate a world increasingly mediated by AI-generated content. If we rely on tools that describe reality through association, we risk adopting a “Mandelian lens” on the world—one where things are close to the truth but subtly warped. Over time, these small inaccuracies could compound, shifting our collective understanding of reality in ways we may not even notice.
The analogy underscores a broader caution about generative AI. While these systems can assist with countless tasks, they do so by replicating human-like cognitive shortcuts, including our susceptibility to error. Recognizing these parallels helps us remain critical users of AI and reminds us of the human fallibility it often mirrors. Perhaps the most important lesson is that AI can reveal shortcomings in our own cognition by exhibiting errors that resemble our own. Our perception of truth—whether biological or computational—is shaped not just by what is real, but by how we connect the dots.