Blog

Automated Writing Technology: Generative AI, ChatGPT

In grad school (2015-2020) I focused my dissertation research on what I termed “automated writing technologies.” Such technology fascinated me because it represented a nexus of the linguistic and the numeric; these computer programs let me explore both alphabetic and quantitative languages at once. Specifically, my project tested the ability of computers to evaluate human-generated text, asking whether automated writing evaluation could, at the time, provide effective formative feedback on the rhetorical dimensions of prose writing (metaphor, irony, humor, analogy) rather than on the more rote aspects of grammatical correctness. My dissertation findings, while limited, were not promising.

Large Language Models (LLMs) were around during this time, but they were quite rudimentary, about as good as the predictive text on our smartphones, limited to accurately predicting one or two words at a time. In 2017, however, researchers from Google published “Attention Is All You Need,” a revolutionary paper introducing the Transformer architecture, which serves as the foundation of generative AI and supplies the “T” in GPT.
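For readers curious what “predicting one word at a time” actually means, here is a deliberately toy sketch in Python: a bigram counter in the spirit of smartphone predictive text. It illustrates the general idea only; GPT-class models use neural networks and the Transformer’s attention mechanism, not raw word counts, and every name below is my own invention for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words tend to follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

Predict one word, append it, repeat: that loop is, at a cartoonish level of abstraction, what generative text models do. The Transformer’s innovation was learning which earlier words to “attend” to when making each prediction.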

By the time I finished grad school in 2020, OpenAI’s second-generation LLM, GPT-2, was available to the public. Now, less than three years later, we have OpenAI’s GPT-3 and ChatGPT, as well as other proprietary AI text generators from Google, IBM, Facebook, and more. This area of technology has advanced rapidly, and I am fortunate to have graduated when I did, just in time to ride this wave.

While I am not a computer scientist, and don’t pretend to be one despite my attempts to read and understand some of the primary papers from this field, I bring a background in rhetoric and writing assessment to the study of generative AI generally and Large Language Models specifically. My experience in writing pedagogy and assessment and in the history of rhetoric prepares me to contribute productively to the scholarly conversations about these breakthrough technologies.

I urge other humanities and liberal arts scholars to join this conversation. Many of us in the liberal arts have much to contribute to decisions about how to incorporate generative AI into educational and professional institutions, decisions we need to start thinking about sooner rather than later. I also think we should make an effort to learn some of the more technical aspects of LLMs in order to be conversant with the programmers and companies designing them.

To that end, I will be updating this site with future research regarding LLMs, GPT, and generative AI more broadly, considering both theoretical questions (does AI use rhetoric in the same way we do?) and practical applications (how can we use this technology to better teach writing and thinking?).

I recently gave a talk at my university about what LLM technology is and how we might work with rather than against it in our classrooms. I also published an op-ed in the Dallas Morning News on Thursday, 23 February 2023, on the same topic: how generative AI represents an opportunity to re-think and re-define how we teach writing. Finally, I gave an interview to The Texas Standard, a Texas public radio program, on why I’m not as anxious about generative AI’s impact on higher education as some others in my field. The segment aired around 10:30 a.m. on Wednesday, 22 February 2023.

Stay tuned for more updates.

The Hypothetical: Between Classroom Lecture and Discussion

A perennial debate among teachers, seasoned and new, concerns how much lecture versus discussion to feature in class. The debate is especially relevant to teachers of liberal arts subjects, since the content of such courses is not always conducive to rote learning techniques. Liberal arts subjects often require the completion of previously assigned reading, and even if enough students have read to engage in fruitful discussion, there remains the risk of devolving into debate rather than dialogue. To complicate matters further, lectures and class discussions are frequently, and falsely, pitted against one another, viewed as binary opposites and sometimes filtered through a political lens that codes the former as conservative and the latter as progressive. Consider the image of a professor lecturing to a class of students dutifully taking notes versus a classroom of circled desks where students and teacher alike engage in a dynamic conversation.

I believe there are more and less appropriate times and subjects for either lectures or discussions, and I don’t buy that either has a particularly salient political character; after all, even the teacher discussing texts in a circle still assigns grades at the end of the term. But there are challenges to both lectures and class discussions that I think frustrate new teachers in particular. Lectures aren’t especially engaging unless done well, which comes with time and practice. Discussions, on the other hand, can intimidate students to the point of non-participation, especially if the topic is particularly controversial, and the skill of facilitating and nourishing them is underrated and quite challenging, another pedagogical virtue that develops only with experience. Additionally, I find neither mode particularly effective for beginning a class. Opening with a lecture (especially in the early morning) risks students nodding off, since their engagement with lectured material is entirely determined by their individual commitment. Conversely, jumping straight into a discussion often fails to take off, as any teacher can attest, since students can be reluctant to break the ice.

Enter the hypothetical. The hypothetical is exactly what it sounds like: a contrived example or problem related to the day’s concepts is posed to students, who must reason through its various frictions. Incorporating hypotheticals into liberal arts classes is particularly effective because the hypothetical occupies a middle ground between lecture and discussion, combining the best elements of both. Like a lecture, a hypothetical allows the instructor discretion in guiding students more or less forcefully toward the concepts intended for learning. And like a discussion, it engages students’ creative and critical thinking muscles, prompting them to respond and participate. Hypotheticals are a great icebreaker to boot; completing the reading is not required to participate, since the hypothetical’s contrived parameters supply enough content on their own. Nor does it risk scaring students away from offering controversial opinions, since its hypothetical nature provides a comfortable space for intellectual exploration.

Here is one hypothetical I offer students in one of my first-year composition courses: Imagine you are serving on a jury. The only evidence presented at trial establishes a 99% probability that the defendant committed the crime, meaning there is a 1% chance of convicting an innocent person. Do you vote to convict?

Here’s what typically happens when I pose it. The majority of students almost immediately answer “no,” which allows me to oppose the class consensus to encourage deeper thinking. For instance, I point out that only a 1% chance of a wrongful conviction is quite good, especially considering current estimates put the US wrongful conviction rate somewhere between 2% and 10%. I then ask which kind of evidence would persuade them to convict, to which they usually respond “eyewitness testimony,” “DNA evidence,” and “video footage.” All of which, I suggest, are likely less than 99% accurate. So what gives, I press.

I then switch to another hypothetical, this time involving blood pressure medication, to which the entire class always answers “yes.” (As a side note, a great technique is to juxtapose two structurally similar hypotheticals that nevertheless induce students to opposite conclusions; the cognitive dissonance is generative.)

“So what’s the difference?” I ask. In my experience, students usually reason through many of the differences pretty well: the stakes of sending an innocent person to prison differ from those of leaving blood pressure untreated; the former involves deciding someone else’s fate and the latter your own; and there’s a matter of “trust” when it comes to doctors that feels distinct from the consequentialism of the judicial system.

The larger point I make is this: The American judicial system (in theory, at least) does not treat the probability that someone committed a crime as admissible evidence, let alone as knowledge of guilt. (Thank God Minority Report is just a movie.) Even if there’s only a 1% chance you didn’t do it, you are presumed innocent until proven guilty. In American medical science, however, the probability of a drug’s effectiveness, inferred from carefully controlled experimental trials, is considered knowledge. In fact, inferring the effectiveness of medical treatments from samples is the only way to know anything in medicine at all; it is impossible to “witness” or testify to the evidence of medical efficacy, at least rigorously and systematically. These hypotheticals therefore elegantly demonstrate the difference between two types of information: observational knowledge, obtained and verified through experience, and probabilistic belief, inferred through studies of manipulated samples.

In the larger context of the course, we are discussing the nature of knowledge: how it is we can say we know something. The goal of this class period is to demonstrate that knowledge is context-dependent; what counts as knowledge in a medical trial is not what counts as knowledge in a court of law. Does that mean knowledge is whatever we say it is? No, it means that certain domains (medicine, law, science) have developed their own rules governing how knowledge in that domain is understood and counted. Awareness of these differences in how we derive knowledge is something I discuss in my first-year composition courses because I believe the idea is essential to understand before reading and writing academic research at the college level (your mileage may vary). The hypothetical helps students intuit the primary takeaway of the lesson, the constructed nature of knowledge, without me lecturing at them about it.

The sum of everything

Do you all ever think about infinity? It’s one of those things unique to human cognition and thus sort of unavoidable, like object permanence or episodic foresight. We are forced to confront the infinite as a matter of course. Yet to ponder the infinite is paradoxical, for thinking of it limits the limitless.

The concept of infinity comes out of mathematics, of course, but I think it transcends the disciplinary borders typically ascribed to it. Infinity has more in common with faith and religion than with math. The infinite and God share qualities of omnipotence, and, as in the ontological argument for the existence of God, if we can conceive of infinity in our minds (a watered-down version), then it must exist in a purer form elsewhere. To imagine the infinite implies that an even more infinite infinity exists outside of us.

Outside of math class, infinity is historically contingent, repackaged by different cultures to reflect the specific historical conditions surrounding them. In Ancient Greece, when thinkers were just beginning to grapple with the concept, infinity was first inferred from the banal. The infinite presented itself in such mundane tasks as walking from point A to point B, a distance containing infinitely many halfway points, or in the shapeless water of a river, infinitely turning in on itself so that one can “never step in the same river twice.”

For us today, the infinite reveals itself in myriad other ways, consumer choices chief among them. I often find myself mindlessly scrolling through Netflix’s seemingly infinite offerings, squandering two hours of my day. I am similarly paralyzed by the infinite consumer choices aggregated on Amazon or on display at grocery stores, so much so that I often don’t purchase whatever it was I needed in the first place. I also experience a flavor of the infinite when I think about voting. While election votes are not technically infinite, there are enough of them relative to my single vote that political outcomes feel infinitely distant and detached from my one blue ballot cast in a red state, a snowflake in an avalanche, as a popular analogy goes.

While there are important mathematical operations involving infinity (it’s a crucial part of calculus, after all), I think infinity has more impact conceptually than arithmetically. The infinite is, in other words, more rhetorical than numerical. Like a drinking glass, the concept gives shape to the aqueous nature of existing in an infinitely expanding, timeless universe; it names a sensation commonly felt among individual people living infinite and singular lives in a godless world of billions. Many abstractions short-circuit our cognitive console, but naming them, at least temporarily, cools it down. We can’t really experience the purely infinite, but we can name it. We can know it’s out there.

Knowing something exists is a prerequisite for ignoring it. And maybe it’s time to start ignoring the infinite. Calculus has taught us many things, but perhaps most importantly it resolves part of infinity’s paradoxical nature. We now know that the infinite is only one side of the equation. You can divide a number in half infinitely, but add all those divisions up and you get back to the whole number you started with (the little sum below makes this precise). In our culture and our politics and our lives, we are too fixated on one side of the equation—the side with infinite divisions. The other side remains a single self-contained unit, a finite value that the infinitesimally small divisions must add up to, even if there are an infinite number of them. I think we all stand to benefit from a greater focus on the whole rather than the infinitely divided parts.
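For the mathematically inclined, the halving example is a convergent geometric series, and the essay’s claim compresses into one line of notation:

$$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; \sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; 1$$

However many times you halve what remains, the pieces can never sum past the single whole you started with.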

We will become robots before robots become us


Part of the overall argument of my dissertation is that the true threat of automated technology, AI, robots, etc., is not necessarily that robots are becoming dangerously human-like, such that we might soon face a Matrix-style machine revolution, but rather that, by increasing our interaction with automated, robotic technology, we ourselves become more robotic.

I have never felt this sensation more acutely than when trying to call any kind of customer service line these days. It’s almost impossible to talk to an actual human being without first wading through a byzantine series of automated voice prompts. “Please describe your problem in a few words,” demands an affectless voice. I then proceed to ramble incoherently, in a totally unnatural cadence, until the machine says, “I’m sorry, I don’t understand.” It’s maddening not only because you’re not solving your problem but also because you don’t feel, or even sound, like yourself. Seriously: next time you’re interacting with an automated voice system, pay attention to the way you speak. It sounds more like typing words into a search bar than anything resembling human dialogue; it is devoid of grammatical connector words, just a series of keywords rattled off in a frustrated tone. Never mind that the reason for the call in the first place is not easily summarized in simple sentences. Interestingly, a problem I can’t solve by Googling, and that therefore requires a phone call, somehow never seems to fit into the simplistic categories the robotic voice offers me. Quelle surprise. But understanding poor descriptions of complex problems that defy easy categorization is something humans are very good at!

Speaking with an automated phone directory literally brings us down to the robot’s discursive level and forces us to talk and communicate as a robot does, because as humans our default social mode is to maintain equilibrium with our interlocutors. We subconsciously converge on the lowest common denominator in a communicative exchange. But robots can’t change their communicative register, so the only way to achieve mutual intelligibility with a robot is to speak robotically ourselves. I find the whole experience of automated customer service incredibly frustrating, and illustrative of the real threat automation poses, at least in the short term: it’s not that the robots will become like us, it’s that we will become like the robots. And of course, once we have normalized robotic human behavior, it will be much easier for the differences between man and machine to cease to exist.

Who are the wise?

The Thinker, University of Louisville

Where do we turn for wisdom in contemporary culture? Eclipsed by science, religion no longer wields the authority it once did. Our captains of industry offer nothing in the way of wisdom, as their unfathomable wealth makes their station in life unrelatable to most people. Companies offer no wisdom, only pandering and half-baked apologies for their involvement in various scandals. Celebrities are no wiser than we are, having been exposed, thanks to social media, as just as boring and sad as the rest of us. Who holds wisdom today?

There is something: the wisdom of the crowd. In many ways, crowd wisdom has come to fill the void previously occupied by the authoritative wisdom of political leaders and philosophers. In 2020, the “wisdom of the crowd” means the internet. And it’s true: the internet is full of wisdom. However methodologically flawed polling can be, I ultimately find the internet’s ability to aggregate public opinion on a topic and quantify it along a consensus scale a useful (and beautifully modern) kind of wisdom.

It’s important to distinguish wisdom from knowledge. Take user review metrics. Metrics like a movie’s Rotten Tomatoes score or a restaurant’s average Yelp star count offer us data of a certain variety. We can’t say these metrics constitute knowledge, however. If the new Batman movie has a 99% rating on Rotten Tomatoes, do we know it is good? In most contexts, knowledge requires firsthand experience or observation. But if I scour dozens of product reviews on Amazon before I purchase a new keyboard, I am equipped with some kind of information about the product I didn’t have before. This information isn’t “knowledge” but rather wisdom, or judgment, about how to navigate a world oversaturated with too many movies and restaurants and gizmos and gadgets, and it is a kind of wisdom the internet supplies well.

Yet this is practical wisdom. The practical wisdom offered by the internet is all well and good for the low-stakes struggles of discovering new movies and locating hole-in-the-wall restaurants, but what about questions of greater significance? Where, today, might we turn for wisdom about deeper human quandaries? What about existential wisdom? Who has advice on how to live authentically during a once-in-a-century pandemic? How do we derive meaning from life when the entire US west coast is on fire? How should we live in a country whose democratic institutions are crumbling? All the gods are dead, after all, and our president is Donald Trump. Authorities have been demoted. No one is steering the ship.

To be sure, the wisdom of the crowd (read: the internet) has proven capable of addressing some of our anxieties, and could potentially solve future problems. Wikipedia, a testament to the power of collective action and a standing counterexample to the supposed omnipotence of the profit motive, functions as a kind of existential pain balm in its radical democratization of knowledge. But the Wikipedia model seems to me the exception and not the rule of the internet. In most other online ventures, the way that crowd wisdom is aggregated and repackaged in order to sell stuff or attract clicks deserves scrutiny.

The practical wisdom of the crowd is also by its nature anonymous, and therefore limited in its ability to address questions of existential significance. The existential wisdom that I, and I’m sure many others, crave right now requires a personal, individual character. Crowd wisdom, on the other hand, is depersonalized, extracted by averaging together the opinions of many, meaning that its very process minimizes the impact of individual outlier data points; crowd wisdom is a resistant statistic.
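To put a hypothetical number on that dilution: if 9,999 people rate a movie five stars and a single dissenter rates it one star, the average barely registers the dissent:

$$\bar{x} = \frac{9{,}999 \times 5 + 1 \times 1}{10{,}000} = 4.9996$$

One outlier among ten thousand moves the aggregate by four ten-thousandths of a star.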

That’s all good, and perhaps even preferable, for yielding an average movie score, but diluting existential wisdom by averaging it neutralizes one person’s wisdom with another’s thoughtlessness. And the challenges facing America today are anything but average; America is quite literally an outlier in many of the most important metrics: gun deaths, covid-19 cases, per-capita health insurance spending, etc. It seems to me that a country determined to persist as an outlier may require outlier wisdom as a counterbalance. At any rate, sound judgment, prudence, and virtue—the ingredients of wisdom—are not traits easily expressed in aggregate.

A lot of things feel like they’re nearing their end these days: the US west coast, in-person education, democracy, the American dream. We know this. But knowledge simply won’t suffice, and neither will aggregate, practical crowd wisdom. Existential wisdom is what we need in such times. We already know how bad things are; the question is what we will do about them.

Dissertation excerpt: the politics of writing education

From Chapter 5, “The Age of Automation,” in The Android English Teacher: Writing Education in the Age of Automation

The unique challenge of teaching writing serves as an instructive test case for education writ large. Writing embodies a series of paradoxes. It is both a science and an art, a technical skillset and a creative outlet. It is essential to every academic field—the medium through which scholarship transpires—yet its teaching is treated as a mere service to the university. Writing is inherently social, a communicative act between author and audience, and also a process that unfolds for long stretches in solitude. Writing is heavily mediated by technology, but also a fundamentally human endeavor. Finally, writing—and especially its teaching—is simultaneously progressive and conservative, associated with both radical expressivist disciplines and a traditional, prescriptivist and civics-oriented education. One of the main challenges for liberal arts scholars and writing educators is successfully balancing these contradictions, which become heightened in the age of automation.

An automated writing education fails to strike a balance between these divisions as human teachers do, and instead aligns with one side of each. With automated writing education, writing is only a science, a skill, and a nonsocial act; it is a lifeless interaction between a writer and a preprogrammed algorithm, a rote reproduction of conventions to be marked right or wrong. At root, writing is not concerned with being “right” or “wrong,” but with effective communication. The automation of writing education reduces the nuance of negotiating effective communication between author and audience to a formulaic transmission of agreed-upon conventions between a word manager and a machine. What stands to happen to writing education in the age of automation is not simply a change in the way writing is taught, but a redefinition of what writing is.

Whether automated writing education will come to be defined as politically progressive or conservative remains to be seen. As higher education itself undergoes a redefinition amid a public health pandemic and technological progress, its political valence has grown more significant. Pew Research (Parker, 2019) has for years shown a growing partisan divide in views of higher education. Choice of academic major and course content, as well as the value of a degree, have become politically charged in a way they never were before. Writing occupies a peculiar dual position within this politicization: conservatives believe writing is an essential component of education and decry college students’ alleged declining writing ability, yet they simultaneously view the very departments that teach writing as part of a domineering leftwing culture on campuses.

As universities and colleges grapple with remote and virtual learning configurations in the coming years, I fear these growing political fault lines will become ammunition in those debates. Educational technology companies, sensing an opportunity to get a foot in the door on campuses, may enlist progressive political rhetoric to push their products, and arguments about virtual learning or automated educational technology may end up being more about political allegiances than about the pedagogical effectiveness of the tools. If that happens, we should think critically about the pedagogical repercussions of employing virtual, and potentially automated, educational products and services independent of such rhetoric, as best we can.

Impressive-sounding claims of academic personalization and customization, combined with a progressive framing of pandemic-related social distancing and educational “access,” will continue to escalate as education turns more and more virtual. My great fear is that, out of a commanding paranoia of being perceived as “conservative,” liberal-minded educators will thoughtlessly accept, even advocate for, corporate-led education reforms that are nominally and symbolically progressive but deeply and structurally reactionary. Many of the arguments that preserve our autonomy as writing educators have the potential to sound conservative, and perhaps some of them even are conservative in a definitional sense. “Conservatism” has become so radioactive that people forget there are many things worth “conserving”; I believe the in-person teaching of writing is worth conserving, for instance.

We must not fear being perceived as conservative if we push back against that rhetoric. In fact, much of our work and livelihood as educators revolves around conserving certain elements of the current educational model that would be irreparably disrupted by the unilateral welcoming of endless technocratic reform. If we are so afraid of being perceived as conservative that we align with nominally progressive educational reforms that beget reactionary consequences, we will hand technology companies with empty progressive branding the power to significantly redefine for us what higher education looks like.

Covid-19 by the numbers


Historians of science debate it, but many consider the first example of “true science”—defined as the effort to numerically describe natural world phenomena—to be the ancient Greeks’ calculation of the ratios of musical tone intervals. The Greeks discovered that a string of the same thickness and tension as another but twice its length vibrated at a frequency an octave lower, meaning the ratio of an octave is 2:1. The ratio of a perfect fourth was calculated to be 4:3, a perfect fifth 3:2, and a tone (whole step) 9:8.
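These ratios also compose neatly: stacking two intervals multiplies their ratios, so the whole tone falls out as the gap between the fifth and the fourth:

$$\frac{3}{2} \div \frac{4}{3} = \frac{3}{2} \times \frac{3}{4} = \frac{9}{8}$$

Part of what presumably captivated the Greeks is that a whole step could be derived arithmetically rather than merely heard.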

This process of quantifying intervallic harmony came to structure the Greeks’ entire way of thinking, extending beyond the strings of the lyre all the way to the heavens. They soon formed an early version of astronomy in which the planets were thought to exist in various degrees of harmony with one another. Sometimes consonant and other times dissonant, the planets orbited at differing harmonic intervals above or below one another. Venus and Earth were a minor third apart.

This is hailed as scientific thinking—translating reality into numbers, identifying patterns, and inferring broad conclusions. But describing the distance between Earth and Venus as a minor third, we now know, is not science but poetry. While the ratios have proved correct and useful for musical tones, they are a useless framework for analyzing astronomical patterns. Numbers have that kind of seductive power. Their seeming objectivity projects a certainty that is both comforting and dangerous.

I’ve been thinking about the ancient Greeks as we continue to battle Covid-19. We are deep in the process of describing this novel virus using our own modern quantitative language. The pandemic has brought with it a dizzying array of numbers to decipher: total confirmed cases, the running death toll, the case fatality rate, R0 (the basic reproduction number), the growth rate, daily test totals, the seven-day case average, the incubation period, six feet of social distancing, and at-risk age ranges are just some of the most common metrics. Obviously, vital research is being conducted using these quantities, and this is not meant as an anti-intellectual or conspiratorial screed. But some days I feel like we are the ancient Greeks, staring at the sky and charting the planets using major and minor musical scales.

Almost every day, I visit the CDC Covid Data Tracker, which collects all these numbers and more. The interactive graphs and quantitative specificity exude an air of authority. Yet, crucially, each of these metrics is an approximation, a best guess. We will never know the true totals and rates. No matter how many decimal places we account for and log-adjusted rates we calculate, numbers are crude, blunt instruments that can only ever describe the vague contours of the pandemic.

Now that we are beginning to open the country back up, I worry that our obsession with numbers may be leveraged for callous justifications. Although numbers help us stay vigilant, they can also be wielded to justify reckless action. Blind confidence in numbers has the potential to transform a rubric of compassion and caution into a cost-benefit ledger of risk versus reward, inoculating us against the horror of death by cloaking it in impersonal percentages.

I think about my personal relationship to the Covid-19 numbers, how they factor into my current state of mind. Normally, when I’m dealing with quantities, the smaller the number, the more intimate, and the less abstract, it is. I’m able to see things few in number as themselves, not as quantities. “The death of one man is a tragedy; the death of a million men is a statistic,” as they say. But this pandemic is peculiar. As the numbers add up and the death toll rises, I feel less distant from it all. Instead I feel the pandemic closing in on me; the mounting case figures feel like quicksand I’m sinking into.

I also think about the political significance of the numbers. We’ve passed 135,000 deaths now, and confirmed cases are once again rising. Who’s to blame for all this senseless death? The president? Governors? Mayors? Individual Americans? Will anyone be tried for crimes? I happen to believe the more we focus on the personal transgressions of individual Americans, the more we are distracted from holding the true perpetrators—politicians—accountable. They must pay us to fight this thing, or else we won’t fight it; not because we are bad people, but because without actual relief such as paid leave, unemployment benefits, healthcare, and mortgage/rent suspension, fighting the virus by staying home amounts to a different kind of death—financial ruin.

Finally, I think about the numbers yet to enter the equation. When there is a vaccine, a new number will emerge as the most salient: how many people will get it and who? And if I know America, someone will be turning a huge profit. I fear the calculus to come.

Numbers are prisms. Held right, so the light hits at the correct angle, and a whole spectrum becomes visible. Held wrong, or stowed away in darkness, and the prism is empty and blank, nothing inside. Numbers are better than nothing, and I believe all policy decisions going forward should draw on the best numbers we’ve got. Once we do get a handle on this thing, though, I hope we are careful not to confuse harmony with astronomy.

Playing chess with Death

I’ve been “social distancing” now for six days, which means I’ve had time to watch several movies. One of them was Ingmar Bergman’s 1957 film The Seventh Seal, which I found surprisingly uplifting despite its taking place in Europe during the Black Plague (fitting, I know). Beyond the thematic parallels between the Black Plague and today’s Covid-19 pandemic, the movie offers a simple lesson about how we might live our lives with death weighing so heavily on our minds.

In the opening scene, the war-weary knight Antonius Block challenges Death incarnate (a mysterious man in a hooded black robe) to a game of chess. The stakes are high: if the knight wins, Death will let him go; otherwise, he dies. The movie proceeds to follow Block as he travels across the country to reunite with his wife, all the while his metaphorical chess match with Death continues in the background.

We learn a lot about Block on his journey. He questions God, lacks faith in anything, and struggles to find meaning in life. Faced with the sudden prospect of dying, the nihilistic Block confesses to a priest (who is actually Death in disguise) something most of us probably feel on some level: he wishes to perform one meaningful deed before he dies, which would ostensibly give his life significance. Like most of us probably would, he initially imagines this deed must be a grand one because, in his calculus, only a deed of truly heroic magnitude could counterbalance the profound existential torment of living in a plague-stricken, Godless, and careless world. What good is planting one tree while the whole forest burns?

But he learns, eventually, that’s not how life works. Existential dread is not counterbalanced any more effectively by dramatic gestures or heroic performances than by the small activity of living everyday life. It is life’s seemingly trivial moments that best soothe our deepest anxieties: simply laughing with a friend can effectively ward off the specter of death, at least temporarily. In a moment of explicit optimism, Block picnics with a young couple and their child. He finally stops distressing over big questions relating to God and death and faith and happily embraces the present communion:

I shall remember this moment: the silence, the twilight, the bowl of strawberries, the bowl of milk. Your faces in the evening light. Mikael asleep, Jof with his lyre. I shall try to remember our talk. I shall carry this memory carefully in my hands as if it were a bowl brimful of fresh milk. It will be an adequate sign to me, and it will be enough for me.

Block continues on his journey to see his wife, with Death following close behind. Block of course finally loses the chess match with Death. But before he does, he briefly cheats Death: he purposefully knocks over the pieces on the board. Distracted by putting the pieces back in place, Death fails to notice that Block has just given his friends the opportunity to escape Death, for the time being. “What did you gain by this reprieve?” Death asks. “A great deal,” says Block.

And that’s it. That’s the lesson. We are all playing a losing game of chess with Death. We can strategize all we want, and we can look 3, 4, 5, 20 moves ahead. But it always ends the same. For all of us. We can play to gratify our own sense of competition, hopelessly trying to outwit Death. We can agonize over the unanswerable questions Death raises, callowly believing knowledge somehow functions as a soothing cosmic ointment. Or we can cheat Death one day at a time, little by little.

Cheating Death can take many forms, at different times, some of them heroic but most of them banal. Right now, cheating Death is simple: it means staying in, so you’re not a vector of transmission. Even if you’re not worried about losing your chess match soon, staying in means others might escape Death for a bit longer. It’s a small reprieve, staying in, but right now there’s a great deal to gain in doing it.

Other People


This movie absolutely excels in its realistic details. There are no major metaphors, and it doesn’t try to offer some fancy, overwrought soliloquy on the nature of sickness, death, and loss; it’s a movie that reports on its subject in unsettling but clarifying detail, like putting on glasses for the first time after years of squinting at the chalkboard. Ironically, it’s the honest singularity of depicting one family’s struggle that makes the themes resonate on a more general level, evoking the big, common human emotions surrounding death. Cancer, for many, is something that happens to “other people,” but the success of this movie is that its sincerity renders cancer not so “othered.”

The plot sometimes veers into attempts at comedic relief that don’t quite hit, but the performances of Plemons and Shannon mostly make up for that. Joanne giving David his own “birch trees” moment by reminding him he needs only to “see his sisters” when he misses her is one of the most heartbreaking scenes I’ve seen in recent years. Because, again, it’s not overwrought; it’s real, sincere, specific, and true.

A movie without a metaphor

“We can’t lose.”

Mississippi Grind captures a feeling of casual recklessness that, when embraced in youth, is edifying and exciting. When that same recklessness is embraced by a gambling addict in his mid-40s, it’s despairing. But, somehow, the movie manages to make Gerry’s middle-aged, addiction-riddled spiral feel like nothing more than youthful recklessness. I think this is largely due to the reassuring presence of Gerry’s counterpart, Ryan Reynolds’s mysterious character.

I mostly enjoyed the absolute lack of metaphor in the movie. It is a simple story, brutal at times, but without posturing or lecturing. A guy who needs money and can’t help himself drives to New Orleans to hit it big. Nothing is a proxy for something else. The narrative simply unfolds, telling itself without the ironic winking and nudging that seems to accompany every other movie today. Maybe you can only get away with such an utter lack of metaphor or symbolism when the primary thematic components are greed, loss, curiosity, and companionship—fundamental elements of the human experience.