Assessing Purdue’s first-year writing program

I’m currently heading up an assessment of Purdue’s first-year writing program. We are collecting and analyzing a variety of student writing and beginning to report the results. This might be of interest to you if you teach writing or are interested in theories and practices of writing assessment. Note that this and other assessments are exploratory pilots, which will help us decide which aspects to refine for the full study next year.

Here’s a brief update I prepared on one of the assignments we piloted, the rhetorical analysis: Assessing the rhetorical analysis.

A theory of educational relativity

The theory of classical relativity understands all motion as relative. No object moves absolutely; an object moves only relative to other objects. The same can be said about much of learning and education: our educational growth is frequently measured relative to that of others: our classmates, coworkers, friends, family, and so on.

Relative educational growth recalls a concept central to test theory that I’ve discussed before: norm-referenced assessment. When norm referencing academic achievement, individual students are compared to one another and to overall group averages, which are themselves other objects in motion. Norm-referenced assessment differs from criterion-referenced assessment, which measures individual growth against established objective criteria, stationary objects; that is, for criterion referencing, performance relative to peers doesn’t matter. Think of a driving test: you either pass or you don’t, and passing doesn’t depend on scoring better than your neighbor but on meeting the state-established criteria required for licensure.

In practice, I would argue that most educational assessment consists of broad norm referencing that often masquerades as criterion referencing. As far as I’m concerned, this is neither good nor bad. “Masquerading” has negative connotations, of course, but I believe the masquerade is less deliberate than inevitable. Any teacher will tell you it’s really, really hard not to compare their students to one another, even if only subconsciously. Try reading a stack of 20 papers and not keeping an unofficial mental rank of performance.

Although norm referencing students against their peers’ class performance is somewhat inevitable, I think that with careful attention (and a little statistics) our assessments can prioritize a norm-referenced comparison superior to that of students to their peers: the comparison between the student and themselves.

Comparing a student’s performance to themselves recalls the familiar growth vs. proficiency debate in education circles, which our current Secretary of Education is infamously ignorant about. Basically, the argument is that schools should assess growth rather than proficiency, since not all students are afforded the same resources and there is incredible variation in individual academic ability and talent. Because not all students start at the same place, they cannot all be expected to meet the same proficiency criteria. I agree. (Incidentally, this is why No Child Left Behind utterly failed, since it was predicated on the concept of all children meeting uniform proficiency criteria.)

One way to prioritize the assessment of growth over proficiency in a writing class is to use z-scores (a standardized unit of measurement) to measure how many standard deviations each student grows over the course of an assignment. Writing classes are particularly conducive to such measures since most writing assignments are naturally administered as “pre” and “post” tests, or, more commonly, rough and final drafts. This assignment design allows growth to be easily captured, since each student provides two reference points for a teacher to assess.

By calculating the whole class’s mean score difference from rough to final draft (μ), subtracting that mean from an individual student’s own rough-to-final score difference (x), and dividing by the standard deviation of the class’s score differences (σ), you obtain an individual z-score for each student: z = (x − μ) / σ. That z-score tells you how many standard deviations their improvement (or decline) from rough to final draft represents.
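
For readers who like to see the arithmetic spelled out, here’s a minimal sketch in Python of that calculation for a single assignment. The function name and the little five-student dataset are hypothetical, just for illustration; I’m assuming rough and final draft scores are stored as parallel lists, one entry per student.

    # Minimal sketch: z-scores of rough-to-final improvement on one assignment.
    # The scores below are hypothetical, purely for illustration.
    from statistics import mean, stdev

    def improvement_z_scores(rough_scores, final_scores):
        """Return one z-score per student for their rough-to-final improvement."""
        # Each student's raw score difference (x): final draft minus rough draft.
        diffs = [final - rough for rough, final in zip(rough_scores, final_scores)]
        mu = mean(diffs)      # the class's mean improvement (μ)
        sigma = stdev(diffs)  # standard deviation of the class's improvements (σ)
        # z = (x − μ) / σ: how many standard deviations a student's improvement
        # sits above or below the class's typical improvement.
        return [(x - mu) / sigma for x in diffs]

    # Hypothetical rough and final draft scores for a class of five students:
    rough = [70, 65, 80, 75, 60]
    final = [78, 70, 82, 88, 69]
    print(improvement_z_scores(rough, final))
    # A z-score near 0 means a student improved about as much as the class did;
    # positive means more than typical, negative means less.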

Why do all this? Why not simply look at each student’s improvement from rough to final draft? Because we should expect some nominal amount of growth given the rough-and-final-draft assignment design, not all apparent improvement in that context is actually improvement. Calculating a z-score controls for overall class growth, so an improvement in scores from rough to final draft can be interpreted in the context of an expected, quantified amount.
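
To make that concrete with hypothetical numbers: if a class’s mean improvement from rough to final draft is μ = 6 points with a standard deviation of σ = 3, then a student who improves by 6 points gets z = (6 − 6) / 3 = 0, exactly the growth we’d expect from the assignment design, while a student who improves by 12 points gets z = (12 − 6) / 3 = 2, two standard deviations more growth than the class norm.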

To assess an individual student’s growth relative to themselves, then, you can calculate these individual z-scores for each assignment and compare a single student’s z-scores across all assignments, regardless of differing assignment scales or point values. This provides a simple way to look at a somewhat controlled (relative to observed class growth) measure of an individual student’s growth over the course of the semester. In this way, we can see more clearly the often imperceptible educational “motion” of our students, relative both to themselves and to their peers.
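
And here’s a rough sketch of what that semester-long, cross-assignment comparison might look like, again in Python. The assignment names (other than the rhetorical analysis we actually piloted), the score data, and the dictionary layout are all assumptions for illustration.

    # Sketch: one student's standardized growth across several assignments.
    # All numbers and most assignment names are hypothetical.
    from statistics import mean, stdev

    def z_score(student_diff, class_diffs):
        """z = (x − μ) / σ for one student's improvement on one assignment."""
        return (student_diff - mean(class_diffs)) / stdev(class_diffs)

    # Rough-to-final improvements for the whole class, keyed by assignment.
    # The point scales differ across assignments, but that washes out once
    # each improvement is standardized against its own class distribution.
    class_improvements = {
        "literacy narrative":  [8, 5, 2, 13, 9],
        "rhetorical analysis": [3, 1, 4, 2, 6],
        "research essay":      [15, 10, 22, 18, 12],
    }

    student_index = 0  # position of the student we're tracking in each list
    trajectory = {
        assignment: z_score(diffs[student_index], diffs)
        for assignment, diffs in class_improvements.items()
    }
    print(trajectory)  # one comparable growth number per assignment for that student

Lining up those per-assignment z-scores, whether in a table or a quick plot, is what lets you watch a single student’s “motion” across the semester rather than just their standing in the class at any one moment.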