Rhetoric After Search: Composition in the Age of AI

A palimpsest: earlier writing partially erased beneath new text, a fitting image for how cultures layer one mode of communication over another.

When a society’s main way of communicating changes, the culture changes with it. The shift from oral storytelling to writing reorganized how people made and judged ideas. As Eric Havelock notes, orality tends to place ideas side by side (parataxis), while writing lets us order and rank them (hypotaxis). Later, the internet unsettled those hierarchies by putting everything in the same feed. Now generative AI pushes a new turn: instead of going out to find information, we ask a model to synthesize it for us in real time.

The shift from a largely static print culture (books, journals, newspapers) to the dynamic, hyperlink-laced world of the internet (posts, tweets, comments, videos, remixes) is instructive. If print stabilized hypotaxis—codified hierarchies of knowledge—the internet reintroduced powerful currents of parataxis, the flattening of ideas. Feeds place headlines, memes, and research side by side; comments appear co-present with reported stories; search results level institutions and hobby blogs into a single scroll. The effect isn’t a simple “reversion” to orality, but a hybrid: an always-on, text-heavy environment that nonetheless rewards immediacy, performance, and identity signals. We might call this the era of networked parataxis or feed culture. Authority did not vanish, but it was continuously jostled—ranked, re-ranked, and sometimes drowned—by the drumbeat of the new.

Now another shift is underway: from the internet as a place we go to an intelligence we bring to us. Generative AI reframes the web not as a destination but as a substrate for on-demand synthesis. Instead of clicking outward into a maze of links, we prompt, and the system composes a provisional text from learned patterns, a palimpsest of the internet regenerated with each query. In this sense, the interface shifts from navigation to conversation, from the retrieval of artifacts to the production of fresh, if probabilistic, prose.

What does this do to our rhetorical environment?

First, generative systems appear to restore hypotaxis, but of a different kind. Where the feed set items side by side (parataxis), AI models arrange them within a single, coherent utterance. Citations, definitions, warrants, and transitions arrive pre-braided, often with a competence that flatters the eye. Call it synthetic hypotaxis. Yet because the underlying process is statistical and unobserved, it risks performing coherence without guaranteeing evidence. The prose feels orderly; the epistemology may be wobbly. We are handed an essay when we might have needed a bibliography.

Second, generative AI re-centers dialogue as the controlling framework for knowledge work. Search terms give way to prompts, and prompts invite follow-ups, refinements, and counterfactuals. The standard unit of knowledge work becomes a conversation. This recovers something like the agility of oral exchange—call-and-response, iterative clarification—while living in a textual medium. In practice, this hybrid looks like scripted orality: improvisational yet instantly transcribed, searchable, editable, and archivable.

Third, the locus of authorship drifts. With the internet, we cited and linked; with AI models, we consult and compose. The user becomes a curator-designer, someone who specifies constraints, tones, examples, and audiences, while the model performs the heavy lifting of first-pass drafting and rephrasing. Our artifacts increasingly feel like bricolage: human intention wrapped around machine-generated scaffolds, tuned by promptcraft and revision.

Likely effects of the shift

Positive

  • Acceleration of synthesis. Students and researchers can pull together working overviews in minutes, explore counterpositions, and translate among registers or languages. This lowers the activation energy for inquiry and can widen participation.
  • Adaptive scaffolding. Models can perform as low-stakes tutors or writing partners, offering just-in-time explanations, outlines, and examples that match a learner’s current academic level.
  • Access workarounds. For people blocked by jargon, gatekept PDFs, or unfamiliar discourse conventions, generative AI can paraphrase, summarize, or simulate genres they need to enter.

Negative

  • Source erasure and credit drift. The move from links to syntheses obscures provenance. Without strong citation norms and tools, authority blurs and labor disappears into “the model.”
  • Confident misstatements. Synthetic hypotaxis can launder uncertainty; tidy paragraphs can mask speculative claims (or hallucinations) behind elegantly connective prose.
  • Homogenization of style. Fluency becomes formulaic. If everyone leans on the same engines, we risk a median voice—competent, placid, and forgettable—unless we deliberately cultivate voice.
  • Skill atrophy. If we outsource invention, arrangement, and revision too early or too often, we can lose the slow muscle of drafting, comparing sources, and building warrants from evidence.

Neutral/ambivalent

  • New genres, new shibboleths. Prompts, system messages, and “prompt-sets” become shareable teaching artifacts; AI marginalia (notes explaining how output was shaped) may emerge as a norm. These could deepen transparency, or become ritual theater.
  • Assessment realignment. If first drafts are cheap, assessment shifts toward process evidence (versions, notes, prompts), oral defenses, and situated tasks. This can improve authenticity but demands more from instructors.
  • Attention economics. Conversation-first tools reduce tab-hopping, but they also reward rapid iteration. Some users will become more focused; others will live in an endless loop of “one more prompt.”
  • Institutional enclosure. Organizations will build bespoke models and walled knowledge bases. That can improve reliability for local use while narrowing horizons and reinforcing house orthodoxies.

So what do we call this era?

If the internet cultivated networked parataxis, generative AI installs a layer of synthetic hypotaxis, or structured language on demand. I’m partial to naming it consultative literacy (to stress the dialogic nature), or generative rhetoric (to mark how invention and arrangement are becoming collaborative). Whatever we call it, the practical task is the same: pair the speed and plasticity of AI with disciplined habits of citation, verification, and style. In other words, keep the conviviality of the feed and the rigor of the page, and teach writers to orchestrate both.

The culture will follow the mode. As we move from going out into the web to inviting the web to speak through us, our work becomes less about locating information and more about shaping it: specifying constraints, testing outputs, insisting on sources, and cultivating voice. That is both the promise and the peril of an age where every prompt yields a fresh, provisional world.

The Central Tendencies of the Rhetoric of AI

As artificial intelligence increasingly generates the written and published text we consume, it’s worth considering the consequences at both the individual and the societal level. On the micro level—the everyday use of AI in writing—I suspect the changes will be subtle but meaningful. Individual writing abilities are likely to improve, as AI tools act as an accessible public option for crafting coherent prose. Just as autocorrect has quietly raised the baseline for grammatical accuracy in text messages and online posts, AI tools will make polished, coherent writing accessible to more people, effectively raising the “floor” of writing ability.

On the macro level, however, the implications are more profound. To understand this, let’s consider three primary dimensions of rhetoric: syntax, vocabulary, and tropes. These dimensions encompass how sentences are structured (syntax), which words are chosen and how they’re used (vocabulary), and the creative use of rhetorical devices like metaphors or antithesis (tropes). Since AI operates by analyzing and replicating patterns in language datasets, its writing reflects the statistical tendencies of its training data. In other words, AI-generated text is governed by the same central tendencies—mean, median, and mode—that define any dataset.

Syntax: The Median Sentence

AI-generated syntax will likely gravitate toward a median level of complexity. Sentences will neither be overly elaborate nor starkly simplistic but will instead reflect the middle level of grammatical intricacy found in its training data. This tendency could lead to a homogenization of sentence structure, with AI producing text that feels competent but not particularly varied or daring in its syntax.

Vocabulary: The Modal Words

Vocabulary choices in AI writing are often dictated by the most common words and phrases in its dataset—the mode. This preference for the most frequent linguistic elements means AI text can sometimes feel generic or boilerplate, favoring safe, widely used terms over more distinctive or idiosyncratic language. While this might ensure accessibility, it also risks a flattening of linguistic diversity, where rarer or less conventional words are underused.

Tropes: The Mean Creativity

When it comes to rhetorical tropes, AI tends toward the mean—a sort of average level of creativity. It might generate metaphors or analogies that are effective but lack the originality or boldness that characterizes the most memorable human writing. The result is a tendency toward competent but predictable creativity, rather than the kind of transformative or disruptive innovation that pushes rhetorical boundaries.
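The three tendencies above can be made concrete with a toy sketch. This is an illustration only, not how any production language model actually works: the miniature corpus is invented, and the metrics (sentence length as a proxy for syntactic complexity, word frequency for vocabulary) are deliberately crude stand-ins.

```python
import statistics
from collections import Counter

# Invented toy corpus standing in for training data (illustrative only).
corpus = [
    "The cat sat on the mat.",
    "Rain fell softly over the quiet harbor town.",
    "She wrote, revised, and wrote again until the draft finally cohered.",
    "Good.",
]

# Syntax: the MEDIAN sentence length, a crude proxy for grammatical complexity.
lengths = [len(sentence.split()) for sentence in corpus]
median_length = statistics.median(lengths)  # 7.0 for this toy corpus

# Vocabulary: the MODAL (most frequent) words.
words = [w.strip(".,").lower() for s in corpus for w in s.split()]
modal_words = Counter(words).most_common(3)  # "the" dominates, as in most English text

# Tropes: the MEAN, here just the average sentence length standing in
# for "average creativity" (the essay's metaphor, not a real metric).
mean_length = statistics.mean(lengths)  # 6.5 for this toy corpus

print(median_length, mean_length, modal_words)
```

Even in four sentences, the pull toward the center is visible: the modal vocabulary is function words like “the,” and the median and mean sit comfortably between the one-word outburst and the elaborate eleven-word sentence.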

Language as Dataset

If AI treats language as a dataset, it inevitably inherits the statistical biases and patterns inherent in that dataset. While central tendencies like mean, median, and mode are useful for operationalizing numerical datasets, their application to language introduces a new set of challenges. Syntax, vocabulary, and rhetorical tropes may become increasingly tethered to these statistical norms, creating a gravitational pull toward a homogenized style of writing.

This is not to suggest that all AI-generated text will be devoid of creativity or variety. Rather, the concern lies in how the ubiquity of AI writing might influence broader linguistic and rhetorical trends. Will the prevalence of AI-generated text subtly shift our expectations of what “good writing” looks like? Will it amplify certain linguistic conventions while marginalizing others? These are questions worth monitoring as AI continues to shape the ways we write, think, and communicate.

If language becomes tethered to the central tendencies of AI’s datasets, the consequences extend beyond mere stylistic homogenization. They touch on the very dynamism of human expression—the outliers, the deviations, the unexpected turns of phrase that make language vibrant and uniquely human. Monitoring these tendencies isn’t just about understanding AI’s capabilities; it’s about preserving the richness of language itself.

Science is a process of elimination

Science affirms truth not by direct assertion but by negation. If someone were to ask a scientist, “How are you?” the response, in scientific terms, could not simply be “good.” A more fitting answer would be “not bad” or something equivalent. This distinction highlights a fundamental characteristic of the scientific method: it avoids direct affirmation in favor of ruling out alternatives. Science is not in the business of making unequivocal positive statements about reality but instead progresses by systematically eliminating what is not true.

This framework resembles the process of elimination in a multiple-choice test. For example, when scientists seek to answer a complex question such as “What causes cancer?” they rarely pinpoint a singular, definitive cause from the outset. Instead, they proceed by excluding possibilities—narrowing the field of potential answers by identifying what doesn’t cause cancer. Over time, these negations lead to an indirect approximation of truth.

In rhetorical terms, this mode of expression aligns with litotes, a figure of speech characterized by understatement. Litotes operates by asserting something indirectly, through the negation of its opposite. For instance, saying “not bad” rather than “good” captures a nuanced, precise meaning. Similarly, the scientific method uses this rhetorical approach to articulate findings, allowing for a careful and measured representation of truth that avoids overstatement.

The Rhetoric of Science

The rhetoric of science, a field dedicated to studying how scientists communicate and persuade, reveals that this litotic approach pervades the language of scientific inquiry. Scientists primarily communicate through their studies, and these studies often present findings in a litotic manner. Rather than offering unequivocal proof, scientific studies partially affirm hypotheses by negating competing explanations. In this way, scientific discourse functions less as a mechanism for declaring truths and more as a process of reducing uncertainty.

For example, scientific studies do not state definitively, “This is the cause,” but instead provide evidence that rejects—or fails to reject—the null hypothesis. This distinction underscores the probabilistic nature of scientific claims: science rarely deals in absolutes. Instead, it evaluates competing possibilities, gradually narrowing the scope of uncertainty by eliminating incorrect answers.
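The reject / fail-to-reject logic can be sketched as a minimal significance test. The scenario (a possibly biased coin), the one-sided tail, and the 0.05 threshold are conventional textbook assumptions chosen for illustration, using only the standard library:

```python
from math import comb

def upper_tail_p_value(heads: int, flips: int, p: float = 0.5) -> float:
    """Probability of seeing at least `heads` heads in `flips` flips
    of a fair coin: the one-sided tail under the null hypothesis."""
    return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
               for k in range(heads, flips + 1))

# Null hypothesis: the coin is fair (p = 0.5).
# Observation: 9 heads in 10 flips.
p_value = upper_tail_p_value(9, 10)  # 11/1024, about 0.011

# Science speaks in litotes: we never conclude "the coin is biased,"
# only that we reject, or fail to reject, the null hypothesis.
alpha = 0.05
verdict = "reject the null" if p_value < alpha else "fail to reject the null"
```

Note what the verdict does and does not say: even when the p-value is tiny, the affirmative claim (“this coin is biased toward heads”) is never stated directly; the finding is expressed entirely as a negation of the null.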

Science as Litotic Language

Contrary to popular perceptions of science as a taxonomic system that definitively names and categorizes truths, the language of science is fundamentally litotic. It constructs meaning by naming what something is not, rather than what it is. Each scientific study contributes a single datapoint that refines understanding by rejecting potential errors. Through this iterative process, science approaches truth indirectly, never declaring certainty but instead offering probabilities.

This litotic mode of expression reflects a broader reality: in science, certainty is a moving target, perpetually replaced by degrees of confidence. By articulating truths through negation, science not only mirrors the logic of litotes but also exemplifies its rhetorical precision. In doing so, it avoids the pitfalls of overstatement while offering a uniquely rigorous path to knowledge.

Human versus Artificial Intelligence

From my appearance on NPR’s The Academic Minute

Humans think in words, AI in numbers. The revolutionary Large Language Model ChatGPT works like a round of Family Feud: it answers our questions with only the likeliest responses, as determined by probability distributions. Is this “intelligence”? How should we understand truth in a world where words are assigned numbers, like the points in a Family Feud survey?  

We often think of science as taxonomic, but it’s not really. Scientific classification is negative and imperfect; it names by ruling out. Science says a mammal is an animal that doesn’t lay eggs. But what about the platypus? 

In rhetoric this is known as litotes, a rhetorical device in which something is affirmed by negating its opposite, like if you ask how I’m doing and I respond, “not bad.” Paradoxically, this rhetorical approach can offer greater accuracy while granting less detail.  

Science is litotical. Frequently accurate but insufficiently detailed, scientific studies are limited to two types of negative findings: they either reject or fail to reject a hypothesis. This kind of knowledge is useful in a laboratory, but the real world has platypuses. Truth in the real world is more than the difference of everything it’s not.

AI is similar, for now at least. AI doesn’t name; it affirms by negation. ChatGPT sees the world as a multiple-choice question, and it responds through a process of elimination. Humans, meanwhile, fill in the blanks. We confront uncertainty not by calculating probabilities but by consulting wisdom.

Each word generated by an AI represents the rejection of an alternative; artificial intelligence is fueled by probability rather than possibility. That’s a new world. Before it’s here, we should remember that human intelligence isn’t confined to the artificial horizon between rejecting and failing to reject hypotheses, and that our wisdom is deeper than its syntax.  
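The Family Feud analogy above can be sketched as a toy next-word chooser: given a probability distribution over candidate answers, the system “affirms” one word by implicitly rejecting every alternative. The distribution below is invented for illustration; real models compute distributions over tens of thousands of tokens from learned parameters.

```python
# Invented "survey board" distribution for the next word after
# "How are you?" (illustrative only; probabilities sum to 1).
next_word_probs = {
    "good": 0.41,
    "fine": 0.27,
    "okay": 0.18,
    "not bad": 0.09,
    "transcendent": 0.05,
}

# Greedy decoding: affirm the likeliest word.
chosen = max(next_word_probs, key=next_word_probs.get)

# Every word not chosen is an alternative the model eliminated,
# probability rather than possibility.
rejected = sorted(set(next_word_probs) - {chosen})
```

Greedy decoding is the simplest strategy; real systems often sample from the distribution instead, but either way each emitted word is the survivor of a field of rejected candidates.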

Assonance and brand names

What’s in a name? Assonance. Assonance, of course, is the repetition of vowel sounds in successive words. I’ve long theorized, without any real research to either confirm or deny,* that one way to create a memorable or pleasant name is to use assonance. Incidentally, this mostly applies to brand and band/album names, I’ve found.** For example, the fifth studio (and third best) album by The National, a band I like, is titled “High Violet.” Now, there could be some artistic rationale for the title, but there is no corresponding track name and as far as I can tell no mention in any lyrics of either “high” or “violet.” At the risk of oversimplification, I’m left to conclude that, on some level, The National decided on High Violet because it sounds cool. Whether or not that’s why they arrived at that name is irrelevant, because it does “sound cool.” Why? Because the long i in “high” assonates with that in “violet.”

At least, that’s the only reason I can find for pairing High with Violet. Moreover, other, non-assonant but equally conceptually distant word pairs don’t sound as cool or pleasant or memorable. Consider: does Low Violet work as well? What about Quick Violet, Wet Violet, Brown Violet, Dumb Violet, Credible Violet, Expensive Violet? Now consider these assonant alternatives: Dry Violet, My Violet, Sky Violet, Why Violet, Like Violet, Shy Violet, White Violet. The second set preserves the same pleasance as High Violet, presumably through assonance.

But maybe you disagree with me or you don’t like assonance, yet you still want to pair your words in some subtle, playful way. You’re left with three other common phonological maneuvers: rhyme, consonance, and alliteration. Unfortunately, all three are too easy, too campy, too childish. Assonance achieves a more sophisticated attention to phonological detail. It suggests rhyme, but doesn’t go so far as to rhyme for you; it repeats open and round vowel sounds, not harsh, quick consonants (consonance); and unlike alliteration it doesn’t occur at the beginning of words, something you might find in a tongue twister or nursery rhyme. No, assonance is adult.

Assonance can occur within a single word, like Nirvana. And single word names/titles are trending hard, last I checked, especially for restaurants and bands.*** But what I’m diagnosing here is more the deliberate assonating of two or more conceptually unrelated words to create some ambiguously pleasant aura; that assonance, then, becomes the only connection between the words. Iconic band names frequently employ it: Led Zeppelin, Creedence Clearwater Revival, AC/DC, Lynyrd Skynyrd, Joy Division, Rolling Stones, and so on. And corporate America confirms my thesis too. When their names consist of more than one word, corporate brands love assonance. Out of the 50 most profitable brands, only 10 have names consisting of more than one word (though the one-worders often inter-assonate, like Toyota and Microsoft), but 6 out of those 10 use assonance. Examples include Coca-Cola, General Electric, Wal-mart, Home Depot. In general, brand names tend to have less conceptual distance between the name and the product they offer (Home Depot is a home improvement store, after all) than artistic projects, which involve layers of interpretive distance. But that proves my point further: even when constructing a practical brand name, using assonance can make your name/title that much more memorable. What if Home Depot were called Home Warehouse?

Maybe I will tell my students that I am renaming this Rhetorical Device Thursday to Assonate Day.

*I only do this with silly theories of no importance. I promise I don’t make a habit of willful ignorance.

**This is because over the last 7 years if I’ve ever been trying to name something, it’s either a band I’ve been in (all of which have had terrible names, maybe with the exception of Mote) or a hypothetical Brewery Zack and I will start and the subsequent beers we will brew. The name of one of the last beers I brewed used assonance: it was called “Hot Gold,” a phrase I got from Toni Morrison’s Sula.

***Bands also have a fixation on one word plurals (e.g. Battles).