Artificial Intelligence is Artificial

The rise of large language models (LLMs) like ChatGPT has led many to believe we’ve entered an era of artificial general intelligence (AGI). Their remarkable fluency with language—arguably one of the most defining markers of human intelligence—fuels this perception. Language is deeply intertwined with thought, shaping and reflecting the way we conceptualize the world. But we must not mistake linguistic proficiency for genuine understanding or consciousness.

LLMs operate through a process that is, at its core, vastly different from human cognition. Our thoughts originate from lived experience, encoded into language by our conscious minds. LLMs, on the other hand, process language in three distinct steps, sketched in code below:

  1. Text is split into tokens, and each token is mapped to a numerical identifier.
  2. These numbers are embedded as vectors in a vast multidimensional space, where proximity represents statistical relationships between words.
  3. The model uses these relationships to compute probabilities and generate new numerical representations, which are then retranslated into human-readable text.
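
To make these steps concrete, here is a deliberately toy sketch in Python. Nothing in it reflects a production system: the vocabulary, the embedding values, and the scoring rule are all invented for illustration, and a real model learns billions of parameters rather than using a handful of random vectors.

```python
import numpy as np

# Step 1: translate text into numbers (a toy "tokenizer").
vocab = {"the": 0, "public": 1, "park": 2, "is": 3, "open": 4}

def tokenize(text):
    return [vocab[word] for word in text.lower().split()]

# Step 2: place each token in a multidimensional space (toy embeddings).
# Real models learn these vectors from data; here they are random.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8))  # 8 dimensions, not thousands

# Step 3: use relationships in that space to pick a likely next token,
# then translate the number back into text.
def next_word(text):
    context = embeddings[tokenize(text)].mean(axis=0)  # summarize the context
    scores = embeddings @ context                      # similarity to each word
    probs = np.exp(scores) / np.exp(scores).sum()      # softmax -> probabilities
    inverse = {i: w for w, i in vocab.items()}
    return inverse[int(np.argmax(probs))]

print(next_word("the public"))  # emits whichever toy word scores highest
```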

This process is an intricate and compute-intensive simulation of language use, not an emulation of human thought. It’s modeling all the way down. The magic of such models lies in their mathematical nature—computers excel at calculating probabilities and relationships at scales and efficiencies humans cannot match. But this magic comes at the cost of true understanding. LLMs grasp the syntax of language—the rules and patterns governing how words appear together—but they remain blind to semantics, the actual meaning derived from human experience.

Take the phrase “public park.” ChatGPT “knows” the term only because it has been trained on vast amounts of text where those two words frequently co-occur. The model assigns probabilities to their appearance in relation to other words, which helps it predict and generate coherent sentences. Humans, by contrast, understand “public park” semantically. We draw on lived experience—walking on grass, seeing children play, or reading a sign that designates a space as public. Our understanding is grounded in sensory and conceptual knowledge of the world, not in statistical associations.
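
Here is a minimal sketch of that statistical "knowing," with an invented miniature corpus standing in for the model's billions of training sentences. Counting how often "park" follows "public" yields a conditional probability, and that kind of association, scaled up enormously, is all the model has.

```python
from collections import Counter

# A made-up miniature corpus standing in for vast training data.
corpus = (
    "we met at the public park . the public park is open . "
    "the public library is quiet . children play in the park ."
).split()

# Count how often each word follows "public".
followers = Counter(
    nxt for cur, nxt in zip(corpus, corpus[1:]) if cur == "public"
)
total = sum(followers.values())
for word, count in followers.items():
    print(f"P({word!r} | 'public') = {count}/{total}")
# The model's "knowledge" of a public park is this kind of statistic,
# not any experience of grass, signs, or children playing.
```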

This difference is critical. What humans and computers do may appear similar, but they are fundamentally distinct. LLMs imitate language use so convincingly that it can seem like they think as we do. But this is an illusion. Computers do not possess consciousness. They don’t experience the world through sensory input; they process language data, which is itself encoded information. From input to output, their entire function involves encoding, decoding, and re-encoding data, without ever bridging the gap to experiential understanding.

Consider an analogy: a language model understands the geography of words the way a GPS system represents the geography of the world. A GPS system can map distances, show boundaries, and indicate terrain, but it is not the world itself. It is a tool, a representation: useful, precise, but fundamentally distinct from the lived reality it depicts. To say AI is intelligent in the way humans are is like saying Google Maps has traveled the world and been everywhere on its virtual globe. That is true only in a loose sense: a fleet of camera-equipped Google cars has indeed crawled the earth collecting imagery for Street View, but the map itself has experienced nothing.

As we marvel at the capabilities of LLMs, we must remain clear-eyed about their limitations. Their proficiency with language reflects the sophistication of their statistical models, not the emergence of thought. Understanding this distinction is crucial as we navigate an era where AI tools increasingly shape our communication and decision-making.

The surprising power of n=2

We are enmeshed in data every day. It shapes our decisions, informs our perspectives, and drives much of modern life. Often, we wish for more data; rarely do we wish for less.

Yet there are moments when all we have is a single datapoint. And what can we do with just one? One datapoint offers almost nothing. It is isolated, contextless, and inert—a fragment of information without relationship or meaning. One datapoint might as well be no datapoint.

But two datapoints? That’s transformative. Moving from one to two is not just an incremental improvement; it is a fundamental shift. Your dataset has doubled in size, a 100% increase. More importantly, with two datapoints, you can begin to make connections. You can compare and combine, correlate and coordinate.

From Isolation to Interaction

Consider the possibilities unlocked by having two datapoints rather than one. A single name—first or last—is practically useless; it cannot identify a person. But a full name—two datapoints—suddenly carries weight. It situates someone in a specific context, distinguishing them from others and enabling meaningful identification.

The same holds true for testimony. A single witness to a crime might not provide enough perspective to reconstruct what happened. Their account could be unreliable, incomplete, or subjective. But with two witnesses, we gain a second perspective. Their testimonies can corroborate or contradict each other, offering a deeper understanding of the event.

Or think about computation. A solitary binary digit—0 or 1—cannot do much. It is a static state. But introduce a second binary digit, and the world changes. With two bits, you unlock four possible combinations (00, 01, 10, 11), the foundation of all logical computation. Every computer, no matter how powerful, builds its intricate systems of thought from this basic doubling.
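
A quick illustration of that doubling in Python; the snippet simply enumerates bit patterns, showing how each additional bit multiplies the number of representable states:

```python
from itertools import product

# One bit has two states; each additional bit doubles the count.
for n in (1, 2, 3, 4):
    states = ["".join(bits) for bits in product("01", repeat=n)]
    print(f"{n} bit(s): {len(states)} states -> {states}")
# 1 bit(s): 2 states -> ['0', '1']
# 2 bit(s): 4 states -> ['00', '01', '10', '11']
# ...
```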

The Exponential Power of Pairing

Why is the shift from one to two so significant? It is not simply the doubling of data, but the transition from isolation to interaction. A single datapoint cannot create relationships, patterns, or meaning. It is static. Two datapoints, however, introduce dynamics. They allow for comparison and combination, for movement between states, for a framework within which meaning can emerge.

This leap from one to two is the smallest step toward creating systems of knowledge. Science relies on comparisons to establish causality: a single experimental result is meaningless without a control group to measure it against. Literature and language depend on dualities: protagonist and antagonist, question and answer, speaker and audience. Even human vision is built on the comparison of binocular inputs; it is our two eyes that allow us to perceive depth.

AI and the Power of Two

The transformative power of n=2 is most recently demonstrated in the operation of generative AI. At its core, generative AI depends on the interaction of two distinct but interdependent datasets: the training data and the user’s prompt. The training data serves as the foundation—a vast repository of language patterns, structures, and examples amassed from diverse sources. This data alone, however, is inert; it is an immense collection of information without activation or direction. Similarly, a prompt—a fragment of input text provided by a user—is meaningless without context. It is a solitary datapoint, incapable of producing anything on its own.

When these two datasets combine, however, the true power of AI is unlocked. The training data provides a rich, multidimensional context, while the prompt activates specific pathways within that context, directing the AI to generate meaningful output. This dynamic interaction transforms static data into a creative process. Much like the leap from one to two datapoints, the relationship between the training data and the prompt enables the emergence of patterns, coherence, and utility. Without the prompt, the AI remains silent; without the training data, the prompt is purposeless. Together, they form a system capable of producing complex and contextually relevant language.
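
A hedged sketch of that pairing, using a toy bigram generator in Python. The corpus and the prompt are both invented; the point is structural: the table built from training data is inert until a prompt selects a starting point within it.

```python
import random
from collections import defaultdict

# "Training data": a tiny invented corpus. On its own it produces nothing.
corpus = "the park is public and the park is open and the air is free".split()

# Build a bigram table: for each word, the words observed to follow it.
table = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    table[cur].append(nxt)

# "Prompt": a single word. On its own it produces nothing either.
def generate(prompt, length=6, seed=0):
    random.seed(seed)
    word, out = prompt, [prompt]
    for _ in range(length):
        if word not in table:              # dead end: no pattern to follow
            break
        word = random.choice(table[word])  # training data chooses the next step
        out.append(word)
    return " ".join(out)

print(generate("the"))  # the prompt activates pathways laid down by the corpus
```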

This relationship between training data and prompts underscores the profound significance of pairing, the power of n=2. The interaction between these two elements mirrors a broader principle: meaning arises not from isolation, but from connection. Just as two witnesses can construct a fuller account of an event, and two binary digits can enable computation, the union of training data and prompts enables AI to simulate human-like language and reasoning, creating systems that are both dynamic and generative. The leap from one to two here is not just a quantitative doubling—it is a qualitative transformation that makes the impossible possible.

Building Toward Complexity

Two is not the end point; it is the beginning. Once we have two datapoints, we can imagine three, then four, and so on, building increasingly complex systems. But we should not overlook the profound importance of the leap from one to two. It is the first and most crucial step toward understanding—toward the ability to identify patterns, make connections, and draw conclusions.

n=2 is the minimum threshold for meaning, the simplest structure capable of supporting complexity. From two datapoints, entire worlds of logic, creativity, and understanding can unfold.

Science is a process of elimination

Science affirms truth not by direct assertion but by negation. If someone were to ask a scientist, “How are you?” the response, in scientific terms, could not simply be “good.” A more fitting answer would be “not bad” or something equivalent. This distinction highlights a fundamental characteristic of the scientific method: it avoids direct affirmation in favor of ruling out alternatives. Science is not in the business of making unequivocal positive statements about reality but instead progresses by systematically eliminating what is not true.

This framework resembles the process of elimination in a multiple-choice test. For example, when scientists seek to answer a complex question such as “What causes cancer?” they rarely pinpoint a singular, definitive cause from the outset. Instead, they proceed by excluding possibilities—narrowing the field of potential answers by identifying what doesn’t cause cancer. Over time, these negations lead to an indirect approximation of truth.

In rhetorical terms, this mode of expression aligns with litotes, a figure of speech characterized by understatement. Litotes operates by asserting something indirectly, through the negation of its opposite. For instance, saying “not bad” rather than “good” captures a nuanced, precise meaning. Similarly, the scientific method uses this rhetorical approach to articulate findings, allowing for a careful and measured representation of truth that avoids overstatement.

The Rhetoric of Science

The rhetoric of science, a field dedicated to studying how scientists communicate and persuade, reveals that this litotic approach pervades the language of scientific inquiry. Scientists primarily communicate through their studies, and these studies often present findings in a litotic manner. Rather than offering unequivocal proof, scientific studies partially affirm hypotheses by negating competing explanations. In this way, scientific discourse functions less as a mechanism for declaring truths and more as a process of reducing uncertainty.

For example, scientific studies do not state definitively, “This is the cause,” but instead provide evidence that rejects—or fails to reject—the null hypothesis. This distinction underscores the probabilistic nature of scientific claims: science rarely deals in absolutes. Instead, it evaluates competing possibilities, gradually narrowing the scope of uncertainty by eliminating incorrect answers.
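
Here is a sketch of that "fails to reject" phrasing in code, using SciPy's two-sample t-test on invented data. The measurements are made up; the point is only the shape of the conclusion: the test never affirms the hypothesis, it only rejects or declines to reject the null.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Invented measurements: a control group and a treatment group.
control = rng.normal(loc=10.0, scale=2.0, size=50)
treatment = rng.normal(loc=11.0, scale=2.0, size=50)

# Null hypothesis: the two groups share the same mean.
t_stat, p_value = stats.ttest_ind(control, treatment)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null hypothesis.")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis.")
# Either way, the statement is a negation: nothing is directly affirmed.
```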

Science as Litotic Language

Contrary to popular perceptions of science as a taxonomic system that definitively names and categorizes truths, the language of science is fundamentally litotic. It constructs meaning by naming what something is not, rather than what it is. Each scientific study contributes a single datapoint that refines understanding by rejecting potential errors. Through this iterative process, science approaches truth indirectly, never declaring certainty but instead offering probabilities.

This litotic mode of expression reflects a broader reality: in science, certainty is a moving target, perpetually replaced by degrees of confidence. By articulating truths through negation, science not only mirrors the logic of litotes but also exemplifies its rhetorical precision. In doing so, it avoids the pitfalls of overstatement while offering a uniquely rigorous path to knowledge.