From my appearance on NPR’s The Academic Minute
Humans think in words, AI in numbers. The revolutionary large language model ChatGPT works like a round of Family Feud: it answers our questions with only the likeliest responses, as determined by a probability distribution. Is this “intelligence”? How should we understand truth in a world where words are assigned numbers, like the points in a Family Feud survey?
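To make the Family Feud analogy concrete, here is a minimal sketch in Python. The words and scores below are invented for illustration; a real model like ChatGPT weighs tens of thousands of candidate tokens, but the move is the same: take the likeliest response and discard the rest.

```python
# A toy "survey" of possible next words, scored like Family Feud points.
# These candidates and their probabilities are invented for illustration.
next_word_probs = {
    "good": 0.45,
    "fine": 0.30,
    "okay": 0.15,
    "transcendent": 0.10,
}

def pick_next_word(probs):
    """Answer like Family Feud: return the top-scoring response,
    implicitly rejecting every alternative."""
    return max(probs, key=probs.get)

print(pick_next_word(next_word_probs))  # prints "good"
```

Notice that “transcendent” never gets said, however apt it might be; a low number on the board is as good as no answer at all.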
We often think of science as taxonomic, but it’s not, really. Scientific classification is negative and imperfect; it names by ruling out. Science says a mammal is an animal that doesn’t lay eggs. But what about the platypus?
In rhetoric this is known as litotes: affirming something by negating its opposite, as when you ask how I’m doing and I answer, “not bad.” Paradoxically, this approach can offer greater accuracy while granting less detail.
Science is litotical. Frequently accurate but insufficiently detailed, scientific studies are limited to two types of negative findings: they either reject or fail to reject a hypothesis. This kind of knowledge is useful in a laboratory, but the real world has platypuses. Truth in the real world is more than what’s left after subtracting everything it’s not.
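Those two negative findings can be seen in code. Here is a minimal sketch using a standard two-sample t-test; the measurements are invented, and SciPy is an assumed dependency.

```python
from scipy import stats

# Hypothetical body-temperature readings (in °C) for two groups of
# animals; the numbers are invented for illustration.
group_a = [36.9, 37.1, 36.8, 37.0, 37.2]
group_b = [32.1, 31.8, 32.4, 31.9, 32.0]

# A significance test permits exactly two findings, both negative.
result = stats.ttest_ind(group_a, group_b)
if result.pvalue < 0.05:
    print("Reject the null hypothesis: the groups differ.")
else:
    print("Fail to reject the null hypothesis.")
# There is no third branch: the test never affirms what either group is.
```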
AI is similar, for now at least. AI doesn’t name; it affirms by negation. ChatGPT sees the world as a multiple-choice question and responds through a process of elimination. Humans, meanwhile, fill in the blanks. We confront uncertainty not by calculating probabilities but by consulting wisdom.
Each word generated by an AI represents the rejection of every alternative; artificial intelligence is fueled by probability rather than possibility. That’s a new world. Before it arrives, we should remember that human intelligence isn’t confined to the artificial horizon between rejecting and failing to reject hypotheses, and that our wisdom is deeper than its syntax.