Automated Writing Technology: Generative AI, ChatGPT

In grad school (2015-2020) I focused my dissertation research on what I termed “automated writing technologies.” The technology fascinated me because it sits at the nexus of the linguistic and the numeric, letting me explore both alphabetic and quantitative languages at once. Specifically, my project tested the ability of computers to evaluate human-generated text, asking whether automated writing evaluation could, at the time, provide effective formative feedback on the rhetorical dimensions of prose (metaphor, irony, humor, analogy) rather than only the more rote aspects of grammatical correctness. My dissertation findings, while limited, were not promising.

Large Language Models (LLMs) existed during this time, but they were quite rudimentary: about as good as the predictive text on our smartphones, limited to accurately predicting one or two words at a time. In 2017, however, researchers at Google published “Attention Is All You Need,” a revolutionary paper introducing the Transformer architecture, which serves as the foundation of today’s generative AI and supplies the “T” in GPT.
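For readers who want a glimpse of the machinery, the core of the Transformer is an operation the paper calls scaled dot-product attention: each word’s query is scored against every other word’s key, and those scores weight a blend of the words’ values. Below is a minimal NumPy sketch of that single operation; the toy shapes and variable names are my own illustration, and real models add learned projections, multiple attention heads, and masking.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per the 2017 paper.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each query attends to each key
    weights = softmax(scores)        # each row sums to 1
    return weights @ V               # weighted blend of the value vectors

# Toy example: 3 tokens, each represented as a 4-dimensional vector.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```

Unlike the one-word-ahead predictive texting mentioned above, this mechanism lets a model weigh every word in a passage against every other word at once, which is a large part of why Transformer-based models scale so much better.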

By the time I finished grad school in 2020, OpenAI’s second-generation LLM, GPT-2, was available to the public. Now, less than three years later, we have OpenAI’s GPT-3 and ChatGPT, along with other proprietary AI text generators from Google, IBM, Facebook, and others. This area of technology has advanced rapidly, and I feel fortunate to have graduated just in time to ride the wave.

While I am not a computer scientist, and do not pretend to be one despite my attempts to read and understand some of the field’s primary papers, I bring a background in rhetoric and writing assessment to the study of generative AI generally and Large Language Models specifically. My experience in writing pedagogy, writing assessment, and the history of rhetoric prepares me to contribute productively to the scholarly conversations about these breakthrough technologies.

I urge other humanities and liberal arts scholars to join this conversation. Many of us in the liberal arts have much to contribute to decisions about how to incorporate generative AI into educational and professional institutions, and we need to start thinking about those decisions sooner rather than later. We should also make an effort to learn some of the more technical aspects of LLMs so that we can be conversant with the programmers and companies designing them.

To that end, I will be updating this site primarily with future research regarding LLMs, GPT, and generative AI more broadly, considering both theoretical questions (does AI use rhetoric in the same way we do?) and practical applications (how can we use this technology to better teach writing and thinking?).

I recently gave a talk at my university about what LLM technology is and how we might work with it, rather than against it, in our classrooms. I also published an op-ed in the Dallas Morning News on Thursday, 23 February 2023, on the same topic: how generative AI represents an opportunity to rethink and redefine how we teach writing. Finally, I gave an interview to The Texas Standard, a Texas public radio program, on why I am not as anxious about generative AI’s impact on higher education as some others in my field. The segment aired around 10:30 a.m. on Wednesday, 22 February 2023.

Stay tuned for more updates.