The Hypothetical: Between Classroom Lecture and Discussion

A perennial debate among teachers seasoned and new concerns how much lecture versus discussion to feature in class. The debate is especially relevant to teachers of liberal arts subjects, since the content of such courses is not always conducive to rote learning techniques. Liberal arts subjects often require the completion of previously assigned reading, and even if enough students read to engage in fruitful discussion there remains the risk of devolving into debate rather than dialogue. To complicate matters further, lectures and class discussions are frequently, and falsely, pitted against one another, viewed as binary opposites and sometimes filtered through a political lens which codes the former as conservative and the latter as progressive. Consider the image of a professor lecturing to a class of students dutifully taking notes versus a classroom of circled desks with students and teacher alike engaging in a dynamic conversation.

I believe there are more or less appropriate times and subjects for either lectures or discussions, and I don’t buy that either has a particularly salient political character; after all, even the teacher discussing texts in a circle still assigns grades at the end of the term. But there are challenges to both lectures and class discussions that I think frustrate new teachers in particular. Lectures aren’t especially engaging unless done well, which comes with time and practice. On the other hand, discussions can intimidate students to the point of non-participation, especially if the topic of discussion is particularly controversial. Also, the skill of facilitating and nourishing discussions is underrated and quite challenging, another pedagogical virtue that comes with time and practice. Additionally, I find neither mode particularly effective for beginning a class. Opening with a lecture (especially in the early morning) risks students nodding off to sleep, as their engagement with lectured material is entirely determined by their individual commitment. Conversely, jumping into a discussion right away often fails to take off, as any teacher can attest, since students can be reluctant to break the ice.

Enter the hypothetical. The hypothetical is exactly what it sounds like: a contrived example or problem related to the day’s concepts, posed to students who must reason through its various frictions. Incorporating hypotheticals into liberal arts classes is particularly effective, as the hypothetical occupies a middle ground between lecture and discussion, combines the best elements of both, and requires no prior reading. Like a lecture, a hypothetical allows the instructor discretion in guiding students more or less forcefully toward the concepts intended for learning. And like a discussion, it engages students’ creative and critical thinking muscles, prompting them to respond and participate. Hypotheticals make a great icebreaker to boot: completing the reading is not required to deliberate a hypothetical, since its contrived parameters supply enough content for participation. Nor does a hypothetical risk deterring students from offering controversial opinions, since its invented nature provides a comfortable space for intellectual exploration.

Here is one hypothetical I offer students in one of my first-year composition courses: You are a juror in a criminal trial. Based on everything presented, you believe there is a 99% chance the defendant committed the crime. Do you vote to convict?

Here’s what typically happens when I pose it. The majority of students almost immediately answer “no,” which allows me to oppose the class consensus to encourage deeper thinking. For instance, I point out that only a 1% chance of a wrongful conviction is quite good, especially considering current estimates put the US wrongful conviction rate somewhere between 2% and 10%. I then ask what kind of evidence would persuade them to convict, to which they usually respond “eyewitness testimony,” “DNA evidence,” and “video footage.” All of which, I suggest, are likely less than 99% accurate. So what gives, I press.

I then switch to another hypothetical, this time involving blood pressure medication, to which the entire class always answers “yes.” (As a side note, a great technique is to juxtapose two structurally similar hypotheticals that nevertheless induce students to opposite conclusions; the cognitive dissonance is generative.)

“So what’s the difference?” I ask. Students usually reason many of the differences pretty well, in my experience: The stakes are different for sending an innocent person to prison compared to leaving blood pressure untreated; the former involves deciding someone else’s fate and the latter your own; and there’s a matter of “trust” when it comes to doctors that feels distinct from the consequentialism of the judicial system.

The larger point I make is this: The American judicial system (in theory, at least) does not count the mere probability that someone committed a crime as admissible evidence, as knowledge of guilt. (Thank God Minority Report is just a movie.) Even if there’s only a 1% chance you didn’t do it, you are presumed innocent until proven guilty. In American medical science, however, the probability of a drug’s effectiveness, inferred from carefully controlled experimental trials, is considered knowledge. In fact, inferring the effectiveness of medical treatments from samples is the only way to know anything in medicine at all; it is impossible to “witness” or testify to the evidence of medical efficacy, at least rigorously and systematically. These hypotheticals therefore elegantly demonstrate the difference between two types of information: observational knowledge, obtained and verified through experience, and probabilistic belief, inferred through studies of manipulated samples.

In the larger context of the course, we are discussing the nature of knowledge, how it is we can say we know something. The goal of this class period is to demonstrate how knowledge is context-dependent; what counts as knowledge in a medical trial is not what counts as knowledge in a court of law. Does that mean knowledge is whatever we say it is? No, it means that certain domains (medicine, law, science) have developed their own rules that govern how knowledge in that domain is understood and counted. Awareness that there are differences in how we derive knowledge is a concept I discuss in my first-year composition courses because I believe the idea is essential to understand before reading and writing academic research at the college level (your mileage may vary). This hypothetical helps students to intuit the primary takeaway of the lesson, the constructed nature of knowledge, without me lecturing at them about it.

Dissertation excerpt: the politics of writing education

From Chapter 5: The Age of Automation in The Android English Teacher: Writing Education in the Age of Automation

The unique challenge of teaching writing serves as an instructive test case for education writ large. Writing embodies a series of paradoxes. It is both a science and an art, a technical skillset and a creative outlet. It is essential to every academic field—the medium through which scholarship transpires—yet its teaching is treated as a mere service to the university. Writing is inherently social, a communicative act between author and audience, and also a process that unfolds for long stretches in solitude. Writing is heavily mediated by technology, but also a fundamentally human endeavor. Finally, writing—and especially its teaching—is simultaneously progressive and conservative, associated with both radical expressivist disciplines and a traditional, prescriptivist and civics-oriented education. One of the main challenges for liberal arts scholars and writing educators is successfully balancing these contradictions, which become heightened in the age of automation.

An automated writing education fails to strike a balance between these divisions as human teachers do, and instead aligns with one side in each. With automated writing education, writing is only a science, a skill, and a nonsocial act; it is a lifeless interaction between a writer and a preprogrammed algorithm, a rote reproduction of conventions to be marked right or wrong. At root, writing is not concerned with being “right” or “wrong,” but with effective communication. The automation of writing education reduces the nuance of negotiating effective communication between author and audience to a formulaic transmission of agreed-upon conventions between a word manager and a machine. What stands to happen to writing education in the age of automation is not simply a change in the way writing is taught, but a redefinition of what writing is.

Whether automated writing education will come to be defined as politically progressive or conservative remains to be seen. As higher education itself undergoes a redefinition amid public health pandemics and technological progress, its political valence has grown more significant. Pew Research (Parker, 2019) has for years shown a growing partisan divide in views of higher education (Figure 6). Choice of academic majors and course content, as well as the value of a degree, have become politically charged as never before. Writing occupies a peculiar dual position within this politicization: conservatives believe writing is an essential component of education and decry college students’ alleged declining writing ability, yet simultaneously view the very departments that teach writing as part of a domineering leftwing culture on campuses.

As universities and colleges grapple with remote and virtual learning configurations in the coming years, I fear these growing political fault lines will become ammunition in those debates. Educational technology companies, sensing an opportunity to get their foot in the door on campuses, may enlist progressive political rhetoric to push their products, and arguments about virtual learning or automated educational technology may end up being more about political allegiances than about the pedagogical effectiveness of the tools. Whatever the rhetoric, we should think critically, as best we can, about the pedagogical repercussions of employing virtual, and potentially automated, educational products and services.

Impressive-sounding claims of academic personalization and customization, combined with a progressive framing of pandemic-related social distancing and educational “access,” will continue to escalate as education turns more and more virtual. My great fear is that out of a commanding paranoia of being perceived as “conservative,” liberal-minded educators will thoughtlessly accept, even advocate for, corporate-led education reforms that are nominally and symbolically progressive but deeply and structurally reactionary. Many of the arguments that preserve our autonomy as writing educators have the potential to sound conservative, and perhaps some of them even are conservative in a definitional sense. “Conservatism” has become so radioactive that people forget there are many things worth “conserving”; I believe the in-person teaching of writing is worth conserving, for instance.

We must not fear being perceived as conservative if we push back against that rhetoric. In fact, much of our jobs and livelihoods as educators revolve around conserving certain elements of the current educational model that would be irreparably disrupted by the unilateral welcoming of endless technocratic reform. If we are so afraid of being perceived as conservative that we align with nominally progressive educational reforms that beget reactionary consequences, that could give technology companies with empty progressive branding the power to significantly redefine for us what higher education looks like.

the student gaze

Have you ever seen Dead Poets Society, Stand and Deliver, Freedom Writers, or Dangerous Minds? They are all movies about school, and they all follow a basic formula: a troubled group of students is whipped into shape through the innovative, if unorthodox, pedagogy of a maverick teacher, ultimately renewing our hope in the “transformative power” of education.

Despite the feel-good pathos, these movies come under immense and perennial scrutiny, particularly for their glurge. A common criticism takes aim at their depiction, and praise, of what is actually a regressive “solution” to the big problems of our school system: a Hero Teacher.

What’s so wrong with a Hero Teacher? Well, it’s a brutally austere, hyperindividualist, and inherently unscalable solution to systemic problems that extend far beyond schools. When the Hero Teacher saves the day, educational reform becomes in the minds of viewers the responsibility of individual educators, not of the state and federal governments that control the coffers. It should come as no surprise that I agree with criticism of these films’ sycophancy to such an impoverished vision of educational reform, but the criticism is also legion, so I’m not going to add to it here.

Instead, I want to talk about another problem with the Hero Teacher movie. I call it The Ralph Waldo Emerson Problem. Emerson is often credited with this famous definition of success:

“To know even one life has breathed easier because you have lived. This is to have succeeded.”

I would argue Emerson’s idea of success forms the scale along which teachers are evaluated in the Hero Teacher movie. Hero Teacher movies always follow one student’s, or one class’s, journey from troubled to top of the class, and with a bar as low as the Emerson scale to clear we always end up declaring the Hero Teacher a wild success. The success is considered that much greater when the teacher helps a particularly “troubled” group of students.

But successful teaching doesn’t work the way Emerson posits. Teaching is not about helping one kid or even one class. Teaching is not a two-hour movie; it’s a lifelong career, a thankless and underpaid one at that. There’s never just one class, or one student, but hundreds and thousands, year after year. Yet the audience never gets that long-term perspective. By meeting Emerson’s facile criteria for success, these films ultimately fall prey to what I call The Student Gaze.

The Student Gaze borrows from Laura Mulvey’s notable concept, The Male Gaze. The Male Gaze refers to the idea that movies and literature are overwhelmingly narrated from a masculine, heterosexual perspective. This perspective assumes many things. For one, female characters are inherently sexualized and prescribed roles according to a straight male logic. The Gaze permeates all levels of the cinematic or literary experience, from the camera shots to the characters in the narrative to the audience consuming the media: all are invited to absorb the narrative from a male point of view. Many critics believe The Male Gaze is an example of how media contributes to a larger culture that constrains women along sexual and traditionally peripheral gender role lines.

Not unlike The Male Gaze, The Student Gaze invites the characters in and the audience of movies about education to adopt the perspective of the students rather than the teacher. By the end of the movie, the audience “graduates” and moves on to the next year just like the students they watch in the movie, with the teacher occupying nothing more than a brief stop on the way to something greater. In reality, that teacher is not done with anything, and they have “succeeded” only in a limited sense, because they have new students coming in, they have bills to pay, a family to feed, and so on.

I’d like to see more movies about education told from the teacher’s perspective, instead of the students’, so the audience can see teaching from a different angle. As long as the audience is invited to identify with the students in these movies, teaching will remain mystical and provisional in the popular imagination. Teaching will therefore not be legitimized as a skilled career that requires long hours and devoted practice but instead will be mythologized and viewed as a one-time charity donation from a faceless hero with seemingly no family or life or ambitions of their own. The teacher becomes someone who comes into your life for one year, one class, or one theater sitting at a time to help you move on to bigger things. By mythologizing teaching in this way, it’s easier not to give teachers their due.

The Hero Teacher movie doesn’t help anyone who’s not a teacher understand that teaching is a grueling career, and until teaching is treated as the legitimate career it is, teachers will struggle to secure fair wages and sufficient respect from the public. After all, heroes don’t do their job for a salary; they do it out of some ineffable impetus for good. I’d rather be seen as a hard-working, skilled teacher than a hero. Let’s stop viewing teachers from the perspective of students.

“I can’t teach writing in only one semester”

One complaint I’ve consistently heard (and myself made) during all my years teaching college writing is that one or two semesters is not enough time to teach writing. One finding I’ve repeatedly come across in all my years poring over educational research is that the largest source of variance in student writing performance is the interaction between student and task. In other words, the fewer the tasks assigned in your course (assignments, papers, tests, etc.), the higher the variation in student performance, which means the less reliably (consistently) your class measures whatever it is we conceive of as “writing ability.” This seems intuitive to me, and it is consistently found in the literature. On average, performance on one assignment is basically nonpredictive of performance on another, even in the same course and even for the same student. A student can write an amazing editorial and absolutely bomb a research paper. Does that make your course an unreliable measurement of writing ability? Maybe.

The logical response to this is simple: just assign many more and wildly different tasks in your course to better cover the vast domain of writing ability and thus reduce performance variance. But, alas, that’s where the time constraint comes in. It’s virtually impossible to assign more than four major assignments a semester. Are four writing assignments enough to reliably capture writing ability? No way. Not given the infinite number of genres and writing tasks out there and our evolving definition of writing ability.

So then, what if we increased the number of assignments, but made them smaller and spent less time on each? 10 small writing assignments a semester, instead of four major ones? Would 10 assignments more reliably capture writing ability and minimize our measurement error? Statistically, yes. Intuitively, I think yes, too. But I understand the resistance to this idea. There is value, I think, in longer, more in-depth writing assignments. I bet most freshman college students haven’t written a paper longer than 10 pages, and at some point they absolutely should write one. (I think multiple.)

But I wonder if that value can be realized in a freshman writing class. What is the realistic purpose of a one-semester freshman college writing class after all? If our time, and thus our measurement instrument, is narrowed to one semester, maybe we should break up the cognitive trait we intend to measure into smaller chunks, since the whole construct can never be reliably captured in a semester with four major assignments. It’s like we’re trying to measure several miles with four yardsticks. If we’re only going to get one (or at most two) semesters, maybe we should adjust the use of our narrowed instrument accordingly, by using it on more, smaller, varied tasks. Then instead of measuring miles with yardsticks we’ll at least be measuring yards with rulers.

What are grades?



Should teachers rank-order their students against one another?

The fancy way to ask this: Do we want norm-referenced grades? As opposed to criterion-referenced grades?

Norm-referenced means grades are assigned relative to a class average, while criterion-referenced means grades are assigned relative to a set of objective outcomes. Put another way: Do we care about what order the runners finish the race in? Or do we just want everyone to finish the race? What is a race?

Norm-referencing is common at the highest levels of educational and professional attainment, like in law or medical school. First in your class. Fifth in your class. Last in your class. Criterion-referencing, meanwhile, is more common in lower-stakes situations, such as a driving test. As long as you meet some level of competence, it doesn’t matter how your driving test score compares to that of your neighbor, you both get driver’s licenses. (Curiously, the Bar Exam and certain Medical Exams are criterion-referenced, even if prior schooling for the students taking them isn’t.)
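The two schemes can be made concrete with a short sketch (my own illustration; the 90/80/70/60 cutoffs and the quintile scheme are invented for the example): criterion-referenced grading checks each score against fixed standards, while norm-referenced grading deals letters out by rank within the class.

```python
def criterion_grade(score, cutoffs=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    """Criterion-referenced: a grade depends only on fixed standards,
    so any number of students can earn the same letter."""
    for cutoff, letter in cutoffs:
        if score >= cutoff:
            return letter
    return "F"

def norm_grades(scores):
    """Norm-referenced: letters are assigned by rank within the class,
    so someone always finishes last, however strong the class is."""
    ranked = sorted(scores, reverse=True)
    letters = "ABCDF"
    letter_for = {}
    for rank, score in enumerate(ranked):
        # each fifth of the class gets the next letter down; ties share the better one
        letter_for.setdefault(score, letters[min(rank * 5 // len(ranked), 4)])
    return [letter_for[s] for s in scores]

scores = [93, 91, 89, 88, 87]  # a strong class, tightly bunched
print([criterion_grade(s) for s in scores])  # ['A', 'A', 'B', 'B', 'B']
print(norm_grades(scores))                   # ['A', 'B', 'C', 'D', 'F']
```

With a strong, tightly bunched class, the criterion scheme happily awards As and Bs to everyone, while the norm scheme still forces someone into the F slot: every driver can pass the driving test, but only one student can be first in the class.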

Aside from law and medicine, school grades represent somewhat of a middle ground between pure ranking and broad licensure or credentialing. School grades reflect performance relative both to established criteria and to the performance of other students in the class. Think about it: we’ve all at times done A work or C work or D work, which means there are objective tiers of performance criteria. Theoretically, nothing prevents multiple, or even all, students from achieving the same grade, if they perform at roughly the same level. Yet this almost never happens. And grades, though tiered, are still ranked; the A tier is obviously better than the C tier. You can’t say you just care about everyone finishing the race if you rank-grade their performance. So what are grades?

My answer? Depends on the class. Some classes are indeed more like driving tests, and ranking students in such classes serves little purpose if we’re just trying to get students driving. But other classes perhaps do benefit from a more granular assessment of performance, like the rank-ordering of law and medical school students. Surgery isn’t driving.

My–and I suspect many others’–gut says ranking is bad, inherently hierarchical, discouraging, and inegalitarian. Yet, at the same time, I fear the service model of education comports all too well with a credentialing-style approach to grades, where students expect to just “finish the race” with little thought to how much learning actually occurs along the way.

Teach metaphor not grammar

At least once a semester since I started grad school (seven semesters now) I find myself embroiled in a familiar debate: to teach grammar or not to teach grammar. The majority of people outside my field and outside the world of education broadly would laugh that this is even a question, because to most people, of course, a writing teacher teaches grammar. Every time my job comes up in conversation with a non-teacher there’s inevitably a side comment made about watching their grammar around me. You can set your watch by it. In fact, to most people, grammar’s all I should teach. This is of course wrong, my field will eagerly point out. Writing is more than grammar; good grammar does not equal good writing; and traditional, decontextualized grammar instruction–in the form of grammar handbooks or sentence diagramming–is largely ineffective as a pedagogical approach to teaching writing and usage to young, learning writers.

However, I’ve always felt that because the grammar jokes persist, because grammar is so strongly associated with writing and teaching in general, it simply can’t be ignored. In a sense, these people are right. I’ve thus argued, every semester this debate comes up, that we absolutely need to teach grammar. It’s irresponsible to ignore something so heavily weighted and so widely rewarded in our culture. Yes, we ought to live in a society that doesn’t disproportionately reward a certain interpretation of correct grammar, but we don’t. The question, then, is not should we teach grammar but how should we teach grammar.

In recent semesters I’ve concluded one effective way to teach grammar is to teach metaphor. The first unit of my course introduces Kenneth Burke’s four “master tropes” of language–metaphor, metonymy, synecdoche, and irony, the latter three each a species of the first. Now, in most composition circles, it’s currently fashionable to critique the teaching of writing at the sentence level; I might be laughed at by serious compositionists for emphasizing such dull linguistic nuances and sentence level tropes as metaphor and irony instead of spending valuable class time on having students explore their identity or something similar through their writing. I happen to believe assignments dealing with personal identity can (though not always do) institutionalize identities in a way that doesn’t sit well with me, but that’s a different story. The real resistance to sentence-level writing pedagogy is that people associate it with sentence diagramming and rote grammar drills, which, many claim (and some studies support), are ineffective strategies for teaching writing.

But while sentence diagramming and decontextualized, rote grammar drills do in fact fail to teach writing, we must not confuse the method of pedagogical delivery for its content. Which is to say: the problem with attention to the sentence level is not the sentence level content itself; it’s the way we present the sentence level content. A big part of teaching writing, I’ve learned, is changing students’ perception of what writing and language is and does in the first place, an undoing of the view of language as solely grammatical, a product of the rote grammar exercises that program us to think of language as merely a set of rules to feverishly follow. Coming at the sentence level from a different angle, then–that of metaphor broadly and its various species, metonymy, synecdoche, irony–can help students re-think the content of the sentence level in productive and, more importantly, novel ways. Language becomes less a system of rules to anxiously follow and more a vast toolbox for describing and externalizing some feeling inside you. Language becomes fun. Not to mention, teaching language as metaphorical, rather than simply “grammatical,” is a lot more intuitive to students.

To be sure, emphasizing language as metaphorical as opposed to grammatical doesn’t remove the rules of grammar; it transforms them. Foregrounding writing as metaphorical actually gets at grammatical concepts through a backdoor. It reframes the rules so they’re less about prohibition and more about assistance. It’s like crushing up your medicine and sprinkling it in pudding. Making students write deliberately metaphorical prose forces them to use more sophisticated grammatical constructions without even realizing it. For example, early in the semester I often assign the task of writing an extended metaphor for attending college, and by asking students not only to describe a complex event (attending college) but to do so metaphorically, I force them to reach for new kinds of tools in the toolbox of language to help construct their metaphor. Whether the students are conscious of it or not, these new tools contain various grammatical complexities, which then filter out into their writing more generally. After they write, we can chat about what kind of grammatical affectations are present, how changing them might change the sentence or the metaphor as a whole, and so on.

I think we often teach grammar with the hope that if students just memorize all the rules, then they can write beautiful and grammatically complex sentences. But the rules of grammar are unintuitive, endless, and inherently restrictive (they are, in fact, rules). Meanwhile, metaphors, or the four types I’ve briefly discussed, are generative. They’re not prohibitions, but sparks, suggestions. What’s more likely to enable you to write complex sentences, memorizing a bunch of rules or playing a game?

It’s not a perfect system and I’m still tweaking it, but it’s been a good move for me, and it helps resolve some of the anxieties I have about teaching grammar in Freshman Composition.