Rhetoric After Search: Composition in the Age of AI

A palimpsest: earlier writing partially erased beneath new text, a fitting image for how cultures layer one mode of communication over another.

When a society’s main way of communicating changes, the culture changes with it. The shift from oral storytelling to writing reorganized how people made and judged ideas. As Eric Havelock notes, orality tends to place ideas side by side (parataxis), while writing lets us order and rank them (hypotaxis). Later, the internet unsettled those hierarchies by putting everything in the same feed. Now generative AI pushes a new turn: instead of going out to find information, we ask a model to synthesize it for us in real time.

The shift from a largely static print culture (books, journals, newspapers) to the dynamic, hyperlink-laced world of the internet (posts, tweets, comments, videos, remixes) is instructive. If print stabilized hypotaxis—codified hierarchies of knowledge—the internet reintroduced powerful currents of parataxis, the flattening of ideas. Feeds place headlines, memes, and research side by side; comments appear co-present with reported stories; search results level institutions and hobby blogs into a single scroll. The effect isn’t a simple “reversion” to orality, but a hybrid: an always-on, text-heavy environment that nonetheless rewards immediacy, performance, and identity signals. We might call this the era of networked parataxis or feed culture. Authority did not vanish, but it was continuously jostled—ranked, re-ranked, and sometimes drowned—by the drumbeat of the new.

Now another shift is underway: from the internet as a place we go to an intelligence we bring to us. Generative AI reframes the web not as a destination but as a substrate for on-demand synthesis. Instead of clicking outward into a maze of links, we prompt, and the system composes a provisional text from learned patterns: a palimpsest of the internet, re-generated each time. In this sense, the interface transitions from navigation to conversation, from retrieval of artifacts to production of fresh, if probabilistic, prose.

What does this do to our rhetorical environment?

First, generative systems appear to restore hypotaxis, but of a different kind. Where the feed set items side by side (parataxis), AI models arrange them within a single, coherent utterance. Citations, definitions, warrants, and transitions arrive pre-braided, often with a competence that flatters the eye. Call it synthetic hypotaxis. Yet because the underlying process is statistical and unobserved, it risks performing coherence without guaranteeing evidence. The prose feels orderly; the epistemology may be wobbly. We are handed an essay when we might have needed a bibliography.

Second, generative AI re-centers dialogue as the controlling framework for knowledge work. Search terms give way to prompts, and prompts invite follow-ups, refinements, and counterfactuals. The standard unit of knowledge work becomes a conversation. This recovers something like the agility of oral exchange—call-and-response, iterative clarification—while living in a textual medium. In practice, this hybrid looks like scripted orality: improvisational yet instantly transcribed, searchable, editable, and archivable.

Third, the locus of authorship drifts. With the internet, we cited and linked; with AI models, we consult and compose. The user becomes a curator-designer, someone who specifies constraints, tones, examples, and audiences, while the model performs the heavy lifting of first-pass drafting and rephrasing. Our artifacts increasingly feel like bricolage: human intention wrapped around machine-generated scaffolds, tuned by promptcraft and revision.

Likely effects of the shift

Positive

  • Acceleration of synthesis. Students and researchers can pull together working overviews in minutes, explore counterpositions, and translate among registers or languages. This lowers the activation energy for inquiry and can widen participation.
  • Adaptive scaffolding. Models can perform as low-stakes tutors or writing partners, offering just-in-time explanations, outlines, and examples that match a learner’s current academic level.
  • Access workarounds. For people blocked by jargon, gatekept PDFs, or unfamiliar discourse conventions, generative AI can paraphrase, summarize, or simulate genres they need to enter.

Negative

  • Source erasure and credit drift. The move from links to syntheses obscures provenance. Without strong citation norms and tools, authority blurs and labor disappears into “the model.”
  • Confident misstatements. Synthetic hypotaxis can launder uncertainty; tidy paragraphs can mask speculative claims (or hallucinations) behind elegantly connective prose.
  • Homogenization of style. Fluency becomes formulaic. If everyone leans on the same engines, we risk a median voice—competent, placid, and forgettable—unless we deliberately cultivate voice.
  • Skill atrophy. If we outsource invention, arrangement, and revision too early or too often, we can lose the slow muscle of drafting, comparing sources, and building warrants from evidence.

Neutral/ambivalent

  • New genres, new shibboleths. Prompts, system messages, and “prompt-sets” become shareable teaching artifacts; AI marginalia (notes explaining how output was shaped) may emerge as a norm. These could deepen transparency, or become ritual theater.
  • Assessment realignment. If first drafts are cheap, assessment shifts toward process evidence (versions, notes, prompts), oral defenses, and situated tasks. This can improve authenticity but demands more from instructors.
  • Attention economics. Conversation-first tools reduce tab-hopping, but they also reward rapid iteration. Some users will become more focused; others will live in an endless loop of “one more prompt.”
  • Institutional enclosure. Organizations will build bespoke models and walled knowledge bases. That can improve reliability for local use while narrowing horizons and reinforcing house orthodoxies.

So what do we call this era?

If the internet cultivated networked parataxis, generative AI installs a layer of synthetic hypotaxis, or structured language on demand. I’m partial to naming it consultative literacy (to stress the dialogic nature), or generative rhetoric (to mark how invention and arrangement are becoming collaborative). Whatever we call it, the practical task is the same: pair the speed and plasticity of AI with disciplined habits of citation, verification, and style. In other words, keep the conviviality of the feed and the rigor of the page, and teach writers to orchestrate both.

The culture will follow the mode. As we move from going out into the web to inviting the web to speak through us, our work becomes less about locating information and more about shaping it: specifying constraints, testing outputs, insisting on sources, and cultivating voice. That is both the promise and the peril of an age where every prompt yields a fresh, provisional world.

I hope AI doesn’t just become an elaborate advertising tool, like social media

For the last 50 years, the most significant technological advances have been in communication technology. We’ve developed incredible social media platforms, digital tools, and mobile technologies that have connected nearly everyone in the world in unprecedented ways. It is now easier than ever to communicate across vast distances, through diverse mediums, with anyone, anywhere.

This evolution in communication technology stands in contrast to the prior era of technological progress, which focused primarily on mechanical innovations. Before the Internet, breakthroughs were largely in the mechanical and physical domains—electricity, appliances, plumbing, transportation, and medical technologies. These advances transformed the physical and material conditions of life, whereas the digital and communication technologies of the last half-century have primarily transformed the social and symbolic: how we share, transmit, and receive information.

So where does AI fit within this framework? On the surface, AI seems to belong to the realm of communication technology, given its reliance on digital infrastructure and its ability to process, analyze, and generate information. However, AI is not just another faster, flashier app, nor is it simply an incremental improvement in how we connect or communicate. AI represents a qualitative shift—different not in degree but in kind. It’s not merely a tool to expand or refine existing communication technologies; it introduces entirely new capabilities: the ability to generate, interpret, and act on information autonomously, at scale, and with unprecedented sophistication.

This distinction raises an important question: What will we do with this power? Historically, the revolution in communication technology has largely been co-opted for one purpose: advertising. Social media, search engines, streaming platforms—all these breakthroughs have, at their core, been monetized by perfecting the art of targeting and persuading audiences to buy, click, or consume. The immense potential of communication technology has often been reduced to serving the interests of advertisers, prioritizing profit over other possible uses.

Will AI follow this same trajectory? Will its vast power be funneled into refining micro-targeted advertising and making marketing even more efficient? Or can its unique capabilities be directed toward broader, more meaningful purposes? AI has the potential to reshape education, healthcare, governance, and creative expression in ways that go far beyond commercial exploitation. But realizing this potential will depend on whether its deployment is driven by ethical considerations and the desire for collective good—or by the same profit motives that have shaped the digital landscape over the past half-century.

AI represents a turning point in the history of communication technology. It is not just a tool for transmitting or refining messages; it has the capacity to generate new knowledge, discover patterns we cannot see, and even challenge human creativity and decision-making. The real question, then, is whether we will seize this moment to redefine the role of communication technology in society or let it become yet another means to sell more products more effectively. The answer will determine whether AI becomes a transformative force for good or merely the next iteration of a decades-old advertising machine.

AI Hallucinates; Humans Experience the Mandela Effect

One consequence of artificial intelligence is that it has revealed differences and similarities between human cognition and computational processes. “The brain is a computer” is a pervasive metaphor in cognitive science, since we process, store, and retrieve information in ways that sometimes feel mechanical, even computerized. Yet AI and humans engage with the world and use language in fundamentally distinct ways. Among the most noted differences is the peculiar phenomenon of “hallucinations”: AI-generated text that contains false or misleading information presented as fact.

But humans hallucinate too. We just named it something else: the Mandela Effect. I think the Mandela Effect, not simple errors or lies, is the human cognate of AI hallucinations.

The Mandela Effect describes a collective misremembering of historical facts or cultural artifacts. Named after a collective false memory of Nelson Mandela dying in prison in the 1980s, the phenomenon is widespread and seemingly cross-cultural. One famous example is the belief that actor Sinbad starred in a 1990s movie called Shazaam. In reality, no such film exists. This memory is likely an amalgamation of associations: Sinbad’s name, his comedic persona, and a vague recollection of similar movies like Kazaam starring Shaquille O’Neal. In these moments, the brain substitutes related but incorrect details, resulting in a memory that feels vivid yet doesn’t correspond to reality.

AI hallucinations follow a similar pattern. Generative AI systems compose text by predicting words based on associations learned from vast datasets. But sometimes, those associations fail to align with facts. The result is text that sounds plausible and coherent but is demonstrably false.

We know that hallucinations are an inevitability of AI, given that it generates text via associations and probabilities. But observing AI hallucinate also illuminates the mystery of the Mandela Effect in humans and showcases the limitations of associative reasoning. Humans and AI alike depend on patterns and connections to make sense of the world. But when those connections are incomplete or misaligned, the result can be a version of reality that is “almost correct but not quite.”

This raises questions about how we navigate a world increasingly mediated by AI-generated content. If we rely on tools that describe reality through association, we risk adopting a “Mandelian lens” on the world—one where things are close to the truth but subtly warped. Over time, these small inaccuracies could compound and accumulate, shifting our collective understanding of reality in ways we may not even notice.

The analogy underscores a broader caution about generative AI. While these systems can assist with countless tasks, they do so by replicating human-like cognitive shortcuts, including our susceptibility to error. Recognizing these parallels helps us remain critical users of AI and reminds us of the human fallibility it often mirrors. Perhaps the most important lesson is that AI can reveal to us shortcomings in our own cognition by resembling and exhibiting our own errors. Our perception of truth—whether biological or computational—is shaped not just by what is real, but by how we connect the dots.

Artificial Intelligence is Artificial

The rise of large language models (LLMs) like ChatGPT has led many to believe we’ve entered an era of artificial general intelligence (AGI). Their remarkable fluency with language—arguably one of the most defining markers of human intelligence—fuels this perception. Language is deeply intertwined with thought, shaping and reflecting the way we conceptualize the world. But we must not mistake linguistic proficiency for genuine understanding or consciousness.

LLMs operate through a process that is, at its core, vastly different from human cognition. Our thoughts originate from lived experience, encoded into language by our conscious minds. LLMs, on the other hand, process language in three distinct steps:

  1. Text is translated into numerical data: words and phrases are broken into tokens, and each token is assigned a numerical representation.
  2. These numbers are plotted within a vast multidimensional space, representing relationships between words.
  3. The model uses these relationships to generate new numerical representations, which are then retranslated into human-readable text.
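The three steps above can be sketched in miniature. Everything here is invented for illustration (the three-word vocabulary, the three-dimensional vectors, the cosine-similarity scoring); real models learn embeddings with thousands of dimensions from massive corpora:

```python
import math

# Step 1: text -> numbers. A tiny invented vocabulary of word vectors.
vocab = {
    "park":   [0.9, 0.1, 0.3],
    "garden": [0.8, 0.2, 0.4],
    "stock":  [0.1, 0.9, 0.2],
}

def cosine(a, b):
    # Step 2: relationships between words are angles/distances
    # in a shared multidimensional space.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def most_related(word):
    # Step 3: select a new representation by similarity and
    # translate it back into a human-readable word.
    others = [w for w in vocab if w != word]
    return max(others, key=lambda w: cosine(vocab[word], vocab[w]))

print(most_related("park"))  # "garden" sits closer to "park" than "stock" does
```

The point of the sketch is that nothing in it touches meaning: "park" relates to "garden" only as one list of numbers relates to another.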

This process is an intricate and compute-intensive simulation of language use, not an emulation of human thought. It’s modeling all the way down. The magic of such models lies in their mathematical nature—computers excel at calculating probabilities and relationships at scales and efficiencies humans cannot match. But this magic comes at the cost of true understanding. LLMs grasp the syntax of language—the rules and patterns governing how words appear together—but they remain blind to semantics, the actual meaning derived from human experience.

Take the phrase “public park.” ChatGPT “knows” the term only because it has been trained on vast amounts of text where those two words frequently co-occur. The model assigns probabilities to their appearance in relation to other words, which helps it predict and generate coherent sentences. Humans, by contrast, understand “public park” semantically. We draw on lived experience—walking on grass, seeing children play, or reading a sign that designates a space as public. Our understanding is grounded in sensory and conceptual knowledge of the world, not in statistical associations.
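The co-occurrence statistics behind this can be sketched with a toy bigram count. The miniature corpus below is invented for illustration; real models estimate such associations over billions of tokens and far richer contexts than adjacent words:

```python
from collections import Counter

# A tiny invented corpus, pre-tokenized into words.
corpus = (
    "the public park was crowded . "
    "a public park near the river . "
    "the public library closed early ."
).split()

# Count how often each word follows "public".
following = Counter(
    corpus[i + 1] for i, w in enumerate(corpus[:-1]) if w == "public"
)
total = sum(following.values())
probs = {w: n / total for w, n in following.items()}
print(probs)  # "park" follows "public" in 2 of 3 occurrences
```

From counts like these, "park" becomes the likeliest continuation of "public" without the system ever encountering grass, children, or signage.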

(Image caption: “Finally, fluffiness is quantified along a scale”)

This difference is critical. What humans and computers do may appear similar, but they are fundamentally distinct. LLMs imitate language use so convincingly that it can seem like they think as we do. But this is an illusion. Computers do not possess consciousness. They don’t experience the world through sensory input; they process language data, which is itself encoded information. From input to output, their entire function involves encoding, decoding, and re-encoding data, without ever bridging the gap to experiential understanding.

To extend the analogy: a language model understands the geography of words the way a GPS system represents the geography of the world. A GPS system can map distances, show boundaries, and indicate terrain, but it is not the world itself. It’s a tool, a representation—useful, precise, but fundamentally distinct from the lived reality it depicts. To say AI is intelligent in the way humans are is like saying Google Maps has traveled the world and been everywhere on its virtual globe. That is true in a narrow sense (a fleet of camera-equipped Google cars has indeed crawled the earth collecting imagery for Street View), but it is not literally true.

As we marvel at the capabilities of LLMs, we must remain clear-eyed about their limitations. Their proficiency with language reflects the sophistication of their statistical models, not the emergence of thought. Understanding this distinction is crucial as we navigate an era where AI tools increasingly shape our communication and decision-making.

The surprising power of n=2

We are enmeshed in data every day. It shapes our decisions, informs our perspectives, and drives much of modern life. Often, we wish for more data; rarely do we wish for less.

Yet there are moments when all we have is a single datapoint. And what can we do with just one? One datapoint offers almost nothing. It is isolated, contextless, and inert—a fragment of information without relationship or meaning. One datapoint might as well be no datapoint.

But two datapoints? That’s transformative. Moving from one to two is not just an incremental improvement; it is a fundamental shift. Your dataset has doubled in size, a 100% increase. More importantly, with two datapoints, you can begin to make connections. You can compare and combine, correlate and coordinate.
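The jump from one to two can be made concrete with a worked example: one point tells you a value, but two points determine a slope and an intercept, a line you can extrapolate from. (The points here are arbitrary, chosen only for illustration.)

```python
def line_through(p1, p2):
    # Two points determine a line; one point cannot.
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

slope, intercept = line_through((1, 3), (4, 9))
print(slope, intercept)  # 2.0 1.0 -- a trend emerges only at n=2
```

With a single point, slope is undefined; the second point is what converts an isolated value into a direction.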

From Isolation to Interaction

Consider the possibilities unlocked by having two datapoints rather than one. A single name—first or last—is practically useless; it cannot identify a person. But a full name—two datapoints—suddenly carries weight. It situates someone in a specific context, distinguishing them from others and enabling meaningful identification.

The same holds true for testimony. A single witness to a crime might not provide enough perspective to reconstruct what happened. Their account could be unreliable, incomplete, or subjective. But with two witnesses, we gain a second perspective. Their testimonies can corroborate or contradict each other, offering a deeper understanding of the event.

Or think about computation. A solitary binary digit—0 or 1—cannot do much. It is a static state. But introduce a second binary digit, and the world changes. With two bits, you unlock four possible combinations (00, 01, 10, 11), the foundation of all logical computation. Every computer, no matter how powerful, builds its intricate systems of thought from this basic doubling.
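The doubling described above is easy to verify in code; this brief sketch simply enumerates the states n bits can represent:

```python
from itertools import product

def bit_states(n):
    # Every combination of n binary digits: 2**n states in all.
    return ["".join(bits) for bits in product("01", repeat=n)]

print(bit_states(1))  # ['0', '1']
print(bit_states(2))  # ['00', '01', '10', '11']
```

One bit is a static either/or; the second bit is what first makes room for structure, since the four two-bit states can encode the full truth table of a logic gate.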

The Exponential Power of Pairing

Why is the shift from one to two so significant? It is not simply the doubling of data, but the transition from isolation to interaction. A single datapoint cannot create relationships, patterns, or meaning. It is static. Two datapoints, however, introduce dynamics. They allow for comparison and combination, for movement between states, for a framework within which meaning can emerge.

This leap—from one to two—is the smallest step toward creating systems of knowledge. Science relies on comparisons to establish causality. A single experimental result is meaningless without a control group to measure it against. Literature and language depend on dualities—protagonist and antagonist, question and answer, speaker and audience. Even human vision is based on the comparison of binocular inputs; it is our two eyes that allow us to perceive depth.

AI and the Power of Two

The transformative power of n=2 is most recently demonstrated in the operation of generative AI. At its core, generative AI depends on the interaction of two distinct but interdependent datasets: the training data and the user’s prompt. The training data serves as the foundation—a vast repository of language patterns, structures, and examples amassed from diverse sources. This data alone, however, is inert; it is an immense collection of information without activation or direction. Similarly, a prompt—a fragment of input text provided by a user—is meaningless without context. It is a solitary datapoint, incapable of producing anything on its own.

When these two datasets combine, however, the true power of AI is unlocked. The training data provides a rich, multidimensional context, while the prompt activates specific pathways within that context, directing the AI to generate meaningful output. This dynamic interaction transforms static data into a creative process. Much like the leap from one to two datapoints, the relationship between the training data and the prompt enables the emergence of patterns, coherence, and utility. Without the prompt, the AI remains silent; without the training data, the prompt is purposeless. Together, they form a system capable of producing complex and contextually relevant language.

This relationship between training data and prompts underscores the profound significance of pairing, the power of n=2. The interaction between these two elements mirrors a broader principle: meaning arises not from isolation, but from connection. Just as two witnesses can construct a fuller account of an event, and two binary digits can enable computation, the union of training data and prompts enables AI to simulate human-like language and reasoning, creating systems that are both dynamic and generative. The leap from one to two here is not just a quantitative doubling—it is a qualitative transformation that makes the impossible possible.

Building Toward Complexity

Two is not the end point; it is the beginning. Once we have two datapoints, we can imagine three, then four, and so on, building increasingly complex systems. But we should not overlook the profound importance of the leap from one to two. It is the first and most crucial step toward understanding—toward the ability to identify patterns, make connections, and draw conclusions.

N=2 is the minimum threshold for meaning, the simplest structure capable of supporting complexity. From two datapoints, entire worlds of logic, creativity, and understanding can unfold.