If we transfer what we learn, then we need a map

This past weekend I traveled to the College English Association 2018 conference in St. Pete, Florida, to give a talk about “learning transfer.” Learning transfer, often simply “transfer” in the education literature, is the idea that when we talk about education broadly and learning something specifically, what we really mean is the ability to transfer knowledge learned in one context (the classroom, for example) to another (the office). It’s the idea that we have only really learned something when we can successfully move that knowledge into a new context.

As far as theories of learning go, I think transfer is fairly mainstream and intuitive. Of course, the particular metaphor of “transfer” has both affordances and limitations, as all metaphors do. Some critics offer “generalization” or “abstraction” as more appropriate metaphors, and there might be a case to be made for those. But as long as the theory of transfer prevails I think we first need to get some things straight. This is what my talk was about.

If learning is concerned with transferring, and thus moving, between two places like the classroom and the office, then we must have a map to help us navigate that move. In the humanities, the literature on transfer disproportionately focuses on the vehicle of transfer, not a map to guide us through the landscape the vehicle traverses. The vehicle is of course literacy: reading and writing. We painstakingly focus on honing and developing the best kinds of writing prompts and essay projects, as well as assigning the most thought-provoking reading material. And this is good. If we try to move, or transfer, from point A to B but our vehicle is broken or old or slow or unfit for the terrain, we can’t go anywhere. We like to say that our students don’t just learn to write, but they write to learn as well. Literacy is a vehicle for all knowledge. All disciplines understand this. It’s why we write papers in all our classes, not just English.

But no matter how luxurious our transfer vehicles (our writing assignments and reading requirements) are, if we don’t know where we’re going, how to get there, or what obstacles lie in our way, then it doesn’t much matter. So how do we map the abstract terrain of education and cognition? Here are two simple statistics that can help us: R-Squared and Effect Size, or Cohen’s d.

R-Squared

The R-Squared statistic is commonly reported when using regression analysis. At its simplest, R-Squared is a proportion (a number between 0 and 1) that tells you how much of the variance, or change, in one quantitative variable can be explained by another. An example: suppose you’re a teacher who assigns four major papers each semester, and you’re interested in which of the essays best predicts, or explains, student performance in the class overall. For this, you’d regress final course grades on performance on one of the assignments. Some percentage of the variance in your Y variable (final course grades) will be explained by your X variable (performance on one of your assignments). This can give you a (rough) idea of which of your assignments most comprehensively tests the outcomes your whole course is designed to measure. (R-Squared proportions are often low but still instructive.)
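To make this concrete, here is a minimal Python sketch with invented grades (the `essay1` and `final` numbers are made up for illustration). For a simple one-predictor regression, R-Squared equals the squared Pearson correlation between the two variables:

```python
def r_squared(x, y):
    """R-Squared for a simple linear regression of y on x:
    the squared Pearson correlation between the two variables."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov ** 2 / (var_x * var_y)

# Invented data: grades on the first essay and final course grades
essay1 = [72, 85, 90, 65, 78, 88, 95, 70]
final = [75, 82, 91, 68, 74, 90, 93, 72]

print(r_squared(essay1, final))  # a high proportion here: essay1 explains most of the variance
```

Running the same calculation for each of the four essays and comparing the proportions would show which assignment is the best single predictor of the final grade.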

If we extend the transfer/map metaphor, R-Squared is like the highways on a map–it can help us find more or less efficient routes, or combinations of routes, to get where we want to go.

Effect Size, or Cohen’s d


Effect Size is a great statistic because its units are standard deviations. This means Effect Sizes can be reported and compared across studies, which is why they are often used in meta-analyses, one of the most powerful research techniques we have. At the risk of oversimplification, Effect Size is important because it takes statistical significance one step further. Many fret over whether a result in an experiment, like the test of an educational intervention, is statistically significant. This just means: if we observe a difference between the control and experimental groups, what is the likelihood that the difference is due to chance alone? If that likelihood falls below a certain threshold (which we set arbitrarily, usually 5%, but that’s a different discussion), then we say the result is statistically significant: very likely real and not due to chance. However, knowing that a difference is statistically real doesn’t tell us much else about it, like how big it is. This is where Effect Sizes come in.
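One way to see what “due to chance” means is a permutation test: pool everyone’s scores, randomly reshuffle them into two groups many times, and count how often chance alone produces a difference at least as large as the one observed. Here is a hypothetical sketch (all scores invented):

```python
import random

def permutation_p_value(treated, control, n_iter=10_000, seed=0):
    """Estimate how often a random regrouping of the pooled scores
    yields a mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(treated) / len(treated) - sum(control) / len(control))
    pooled = list(treated) + list(control)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        a, b = pooled[:len(treated)], pooled[len(treated):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_iter

# Invented scores with a clear group difference
experimental = [90, 85, 88, 92, 87, 91]
control = [70, 72, 68, 75, 71, 69]

p = permutation_p_value(experimental, control)  # well below the usual 0.05 threshold
```

If `p` falls below the chosen threshold, we call the difference statistically significant, which is exactly the judgment the paragraph above describes.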

An Effect Size can tell us how much of a difference our intervention makes. An example: suppose you develop a new homework reading technique and test it against a control group by comparing the two groups’ performance on a later reading comprehension test. If the group means (experimental vs. control) differ significantly, great! But don’t stop there. You can also calculate the Effect Size to see just how much they differ, and, as mentioned, Effect Size units are standard deviations. So you’re able to say something like: my new homework reading technique is so effective that a student testing at the 50th percentile in a different class will test about one standard deviation higher in my course, around the 84th percentile.
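The calculation itself is short. Here is a sketch using invented test scores: Cohen’s d is the difference in group means divided by the pooled standard deviation, and the percentile conversion uses the standard normal CDF from Python’s standard library:

```python
import statistics as st

def cohens_d(treated, control):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    v1, v2 = st.variance(treated), st.variance(control)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (st.mean(treated) - st.mean(control)) / pooled_sd

# Invented reading-comprehension scores
new_technique = [82, 88, 79, 91, 85, 87, 84, 90]
old_technique = [78, 83, 75, 87, 81, 84, 80, 86]

d = cohens_d(new_technique, old_technique)  # about one standard deviation
percentile = st.NormalDist().cdf(d) * 100   # roughly the 84th percentile
```

With these made-up numbers, d comes out close to 1, which is exactly the “50th percentile to roughly 84th percentile” shift described above.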

If we again extend the transfer/map metaphor, Effect Size helps us find the really good places to visit, or transfer to. It’s like a map of the best vacation spots. After all, most of us would rather visit the beach than North Dakota in the winter, so it’s good to know which destinations are worth the trip (for certain purposes).

Stats can help teachers

Basically, what I tried to argue in my talk is not that quantitative methods and analysis are superior, but that they can do a lot of cool things, and that they are particularly important for building maps. Maps are, after all, inherently approximate and generalized, just as grades, test scores, and any other quantitative measure of learning are merely approximations. Maps are indeed limited in many ways: looking at a (Google street) map of Paris, for instance, obviously pales in comparison to staring up at the Palace of Versailles in person. But looking at a map beforehand can help you get around once you do visit. It’s the same with quantitative measures of learning. They can help us get the lay of the land, which then allows us to use our pedagogical experience and expertise to attend to the particular subtleties of each class and each student. Teachers are experimenters, after all, and each class, each semester, is a sample of students whose mean knowledge we hope is significantly and sizably improved by the end of the course.

Dernst’s language compass

I’m currently working on my dissertation prospectus, which considers the next steps Automated Essay Scoring (AES) and Natural Language Processing (NLP) technologies are taking to enable machines to read and interpret language beyond surface-level textual features. The big question is whether machines can assess the rhetorical dimension of writing, such as the use of metaphor, analogy, and persuasion, and how those and similar linguistic qualities might best be modeled, both statistically and conceptually. Here’s a quick mock-up of a compass-style model of two aspects of language I’m constantly assessing: the actual words used and the intent behind them:

[Image: a compass-style diagram with a metaphorical–literal axis for an utterance’s words and an ironic–sincere axis for its intent]

One axis contrasts the degree of metaphor in an utterance’s words with the degree of its literalness; the orthogonal axis contrasts the degree of irony in an utterance’s intent with the degree of its sincerity.

Edit: An example might prove instructive. I would map the phrase “tax relief” onto the yellow square, as “ironic and metaphorical.” The metaphorical quality is obvious: “relief” substitutes a term from a domain other than taxation, pre-figuring taxation as evil theft, a framing that is itself not literal but metaphorical. The irony is that those who utter the phrase “tax relief” don’t actually mean it.
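To make the mapping concrete, here is a hypothetical Python sketch of the compass. The axis ranges, the `CompassReading` structure, and the particular scores given to “tax relief” are my own illustrative assumptions, not part of the actual model:

```python
from dataclasses import dataclass

@dataclass
class CompassReading:
    """A hypothetical two-axis reading of an utterance.
    words:  -1.0 = fully literal  .. +1.0 = fully metaphorical
    intent: -1.0 = fully sincere  .. +1.0 = fully ironic"""
    utterance: str
    words: float
    intent: float

    def quadrant(self):
        w = "metaphorical" if self.words >= 0 else "literal"
        i = "ironic" if self.intent >= 0 else "sincere"
        return f"{i} and {w}"

# Illustrative scores only: "tax relief" lands in the ironic/metaphorical quadrant
tax_relief = CompassReading("tax relief", words=0.8, intent=0.7)
print(tax_relief.quadrant())  # ironic and metaphorical
```

A model like this only names the quadrant; the harder NLP problem the prospectus takes up is estimating those two scores from the text itself.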