I hope AI doesn’t just become an elaborate advertising tool, like social media

For the last 50 years, the most significant technological advances have been in communication technology. We’ve developed incredible social media platforms, digital tools, and mobile technologies that have connected nearly everyone in the world in unprecedented ways. It is now easier than ever to communicate across vast distances, through diverse media, with anyone, anywhere.

This evolution in communication technology stands in contrast to the prior era of technological progress, which focused primarily on mechanical innovations. Before the Internet, breakthroughs were largely in the mechanical and physical domains—electricity, appliances, plumbing, transportation, and medical technologies. These advances transformed the physical and material conditions of life, whereas the digital and communication technologies of the last half-century have primarily transformed the social and symbolic: how we share, transmit, and receive information.

So where does AI fit within this framework? On the surface, AI seems to belong to the realm of communication technology, given its reliance on digital infrastructure and its ability to process, analyze, and generate information. However, AI is not just another faster, flashier app, nor is it simply an incremental improvement in how we connect or communicate. AI represents a qualitative shift—different not in degree but in kind. It’s not merely a tool to expand or refine existing communication technologies; it introduces entirely new capabilities: the ability to generate, interpret, and act on information autonomously, at scale, and with unprecedented sophistication.

This distinction raises an important question: What will we do with this power? Historically, the revolution in communication technology has largely been co-opted for one purpose: advertising. Social media, search engines, streaming platforms—all these breakthroughs have, at their core, been monetized by perfecting the art of targeting and persuading audiences to buy, click, or consume. The immense potential of communication technology has often been reduced to serving the interests of advertisers, prioritizing profit over other possible uses.

Will AI follow this same trajectory? Will its vast power be funneled into refining micro-targeted advertising and making marketing even more efficient? Or can its unique capabilities be directed toward broader, more meaningful purposes? AI has the potential to reshape education, healthcare, governance, and creative expression in ways that go far beyond commercial exploitation. But realizing this potential will depend on whether its deployment is driven by ethical considerations and the desire for collective good—or by the same profit motives that have shaped the digital landscape over the past half-century.

AI represents a turning point in the history of communication technology. It is not just a tool for transmitting or refining messages; it has the capacity to generate new knowledge, discover patterns we cannot see, and even challenge human creativity and decision-making. The real question, then, is whether we will seize this moment to redefine the role of communication technology in society or let it become yet another means to sell more products more effectively. The answer will determine whether AI becomes a transformative force for good or merely the next iteration of a decades-old advertising machine.

The Central Tendencies of the Rhetoric of AI

As artificial intelligence increasingly generates the written and published text we consume, it’s worth considering the consequences on both individual and societal levels. On the micro level—the everyday use of AI in writing—I suspect the changes will be subtle but meaningful. Individual writing abilities are likely to improve, as AI tools act as an accessible public option for crafting coherent prose. Just as autocorrect has quietly raised the baseline for grammatical accuracy in text messages and online posts, AI tools will make polished, coherent writing available to more people, effectively raising the “floor” of writing ability.

On the macro level, however, the implications are more profound. To understand this, let’s consider three primary dimensions of rhetoric: syntax, vocabulary, and tropes. These dimensions encompass how sentences are structured (syntax), which words are chosen and how they’re used (vocabulary), and the creative use of rhetorical devices like metaphors or antithesis (tropes). Since AI operates by analyzing and replicating patterns in language datasets, its writing reflects the statistical tendencies of its training data. In other words, AI-generated text is governed by the same central tendencies—mean, median, and mode—that define any dataset.

Syntax: The Median Sentence

AI-generated syntax will likely gravitate toward a median level of complexity. Sentences will neither be overly elaborate nor starkly simplistic but will instead reflect the middle level of grammatical intricacy found in its training data. This tendency could lead to a homogenization of sentence structure, with AI producing text that feels competent but not particularly varied or daring in its syntax.

Vocabulary: The Modal Words

Vocabulary choices in AI writing are often dictated by the most common words and phrases in its dataset—the mode. This preference for the most frequent linguistic elements means AI text can sometimes feel generic or boilerplate, favoring safe, widely used terms over more distinctive or idiosyncratic language. While this might ensure accessibility, it also risks a flattening of linguistic diversity, where rarer or less conventional words are underused.

Tropes: The Mean Creativity

When it comes to rhetorical tropes, AI tends toward the mean—a sort of average level of creativity. It might generate metaphors or analogies that are effective but lack the originality or boldness that characterizes the most memorable human writing. The result is a tendency toward competent but predictable creativity, rather than the kind of transformative or disruptive innovation that pushes rhetorical boundaries.
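To make the statistical framing concrete, here is a minimal sketch of how mean, median, and mode might be computed over text. Everything in it is an illustrative assumption: the four-sentence corpus is made up, and the proxies (sentence length for syntactic complexity, word frequency for vocabulary, word length for style) are stand-ins, not measures any production model actually uses.

```python
from collections import Counter
from statistics import mean, median

# A toy corpus standing in for a model's training data (made-up sentences).
corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "AI tools will raise the floor of writing ability.",
    "Language models predict the next word from prior context.",
    "Patterns in the data shape the prose a model produces.",
]

# Crude tokenization: lowercase words with trailing punctuation stripped.
sentences = [[w.strip(".,").lower() for w in s.split()] for s in corpus]
words = [w for sent in sentences for w in sent]

# Median syntax: sentence length in words as a proxy for grammatical complexity.
print("median sentence length:", median(len(s) for s in sentences))

# Modal vocabulary: the most frequent words dominate generation.
print("most common words:", Counter(words).most_common(3))

# Mean style: average word length as a crude stand-in for stylistic tendency.
print("mean word length:", round(mean(len(w) for w in words), 2))
```

Real language models never compute these statistics explicitly, of course; the point is only that generation weighted by frequency pulls output toward the center of its corpus.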

Language as Dataset

If AI treats language as a dataset, it inevitably inherits the statistical biases and patterns inherent in that dataset. While central tendencies like mean, median, and mode are useful for operationalizing numerical datasets, their application to language introduces a new set of challenges. Syntax, vocabulary, and rhetorical tropes may become increasingly tethered to these statistical norms, creating a gravitational pull toward a homogenized style of writing.

This is not to suggest that all AI-generated text will be devoid of creativity or variety. Rather, the concern lies in how the ubiquity of AI writing might influence broader linguistic and rhetorical trends. Will the prevalence of AI-generated text subtly shift our expectations of what “good writing” looks like? Will it amplify certain linguistic conventions while marginalizing others? These are questions worth monitoring as AI continues to shape the ways we write, think, and communicate.

If language becomes tethered to the central tendencies of AI’s datasets, the consequences extend beyond mere stylistic homogenization. They touch on the very dynamism of human expression—the outliers, the deviations, the unexpected turns of phrase that make language vibrant and uniquely human. Monitoring these tendencies isn’t just about understanding AI’s capabilities; it’s about preserving the richness of language itself.

AI Hallucinates; Humans Experience the Mandela Effect

One consequence of artificial intelligence is that it has revealed differences and similarities between human cognition and computational processes. “The brain is a computer” is a pervasive metaphor in cognitive science, since we process, store, and retrieve information in ways that sometimes feel mechanical, even computerized. Yet AI and humans engage with the world and use language in fundamentally distinct ways. Among the most noted differences is the peculiar phenomenon of “hallucinations”: AI-generated text that contains false or misleading information presented as fact.

But humans hallucinate too. We just named it something else: the Mandela Effect. I think the Mandela Effect, not simple errors or lies, is the human cognate of AI hallucinations.

The Mandela Effect describes a collective misremembering of historical facts or cultural artifacts. Named after a collective false memory of Nelson Mandela dying in prison in the 1980s, the phenomenon is widespread and seemingly cross-cultural. One famous example is the belief that actor Sinbad starred in a 1990s movie called Shazaam. In reality, no such film exists. This memory is likely an amalgamation of associations: Sinbad’s name, his comedic persona, and a vague recollection of similar movies like Kazaam starring Shaquille O’Neal. In these moments, the brain substitutes related but incorrect details, resulting in a memory that feels vivid yet doesn’t correspond to reality.

AI hallucinations follow a similar pattern. Generative AI systems compose text by predicting words based on associations learned from vast datasets. But sometimes, those associations fail to align with facts. The result is text that sounds plausible and coherent but is demonstrably false.
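As a toy illustration of that failure mode, consider the sketch below: a tiny bigram model with made-up association strengths. Real systems use neural networks over enormous vocabularies, but the mechanism is the same in spirit: the next word is sampled by association, and nothing in the loop checks facts. The word pairs and weights are hypothetical.

```python
import random

random.seed(0)  # make the sketch reproducible

# Hypothetical association strengths learned from co-occurrence, not from facts.
next_word = {
    ("sinbad", "starred"): {"in": 1.0},
    ("starred", "in"): {"shazaam": 0.4, "kazaam": 0.35, "houseguest": 0.25},
}

def generate(seed_words, steps):
    words = list(seed_words)
    for _ in range(steps):
        dist = next_word.get(tuple(words[-2:]))
        if dist is None:
            break  # no learned association to follow
        options, weights = zip(*dist.items())
        # Sample the next word in proportion to association strength.
        words.append(random.choices(options, weights=weights)[0])
    return " ".join(words)

print(generate(["sinbad", "starred"], steps=2))
# Can print "sinbad starred in shazaam": fluent, associative, and false.
```

The nonexistent title can win simply because its component associations are individually strong, which is exactly the amalgamation at work in the Mandela Effect.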

We know that hallucinations are inevitable in AI, given that it generates text via associations and probabilities. But observing AI hallucinate also illuminates the mystery of the Mandela Effect in humans and showcases the limitations of associative reasoning. Humans and AI alike depend on patterns and connections to make sense of the world. But when those connections are incomplete or misaligned, the result can be a version of reality that is “almost correct but not quite.”

This raises questions about how we navigate a world increasingly mediated by AI-generated content. If we rely on tools that describe reality through association, we risk adopting a “Mandelian lens” on the world—one where things are close to the truth but subtly warped. Over time, these small inaccuracies could compound, shifting our collective understanding of reality in ways we may not even notice.

The analogy underscores a broader caution about generative AI. While these systems can assist with countless tasks, they do so by replicating human-like cognitive shortcuts, including our susceptibility to error. Recognizing these parallels helps us remain critical users of AI and reminds us of the human fallibility it often mirrors. Perhaps the most important lesson is that AI can reveal shortcomings in our own cognition by exhibiting errors that resemble our own. Our perception of truth—whether biological or computational—is shaped not just by what is real, but by how we connect the dots.