Pardon the alarmist tone of the title, but I just don't think folks are paying a whole lot of attention to the way that technology is being used these days in writing instruction. I alluded to this in my earlier post on informating vs. automating technologies. Perhaps I was being too circumspect.
Jeff, Mike & I said that the robots are coming.
The response has been...well...quiet (I get that a lot).
But let me tell you what I think is happening right now that bears more than just quiet contemplation. It has to do with automating technology, but more importantly, it has to do with what I see as a fundamentally flawed understanding of how people - especially young people - learn to write. The idea is a simple one: that you can learn to write alone, and that we can figure out a way to give "personalized" feedback to individual learners.
The thought is that, as with other subjects, students work alone and get some kind of automated feedback on progress that serves to customize their process. It is, to my eye, a return to the very things we in the computers & writing community fought so hard to escape in the days when the popular term was Computer-Assisted Instruction. Individualized drills, now with more adaptive smarts. But the premise is the same: students working alone can learn to write better.
Now, let me be very clear about this next point. All the research we know of says this isn't true. Don't believe me; believe Graham & Perin's 2007 meta-analysis or Kellogg & Whiteford's comprehensive synthesis of the available research. Read it, though, and then consider that lots of folks are heading down a path that either explicitly (bad) or implicitly (worse) treats learning to write as a solo activity.
As teachers and researchers, we see this trend as bad for students. Folks like recently retired MIT professor Les Perelman do too: teaching students how to do well on a written exam scored by a machine, or by hundreds of readers acting like a machine, is not the same as teaching them how to write well.
We must stop teaching this way and running our programs this way. That *includes* training a bunch of human raters to score writing based on adherence to holistic criteria. Yes, you heard me. It's time to stop all that.
I've been thinking a lot about this issue of "calibration" lately in connection with the rising popularity of individualized learning. I'll try to keep this short and clear: the way we are used to doing calibration reviews - norming readers to find specific traits in a text - is good at showing agreement (or the lack of it) about the features present in a single text. But these reviews cannot show agreement about where criteria are best exemplified across multiple acts of writing. What we really want to know is whether students can write well whenever they need to, and whether they can meet each new exigence with a repertoire that allows them to do that.
In other words: our current practice with holistic scoring is to settle for agreement about the features of a text rather than agreement about the criteria and their application more broadly. It is, I think most of my colleagues would agree upon reflection, an example of settling for a less powerful indicator of learning because the more desirable one was formerly out of reach. It was simply not practical to see, with precision and rigor, students engage learning outcomes across multiple instances of writing, reading, and review. So we settled for one high-stakes performance.
But not any more. Now we have Eli. (Stay with me; this is not a sales pitch.)
With Eli, we can run a calibration round "in reverse." That is, we can ask students to find the best fit between criteria and a group of essays. Using criteria matching and/or scaled items in Eli, "best fit" essays are nominated by the group. As the review progresses, the teacher can check whether they agree with the highly rated examples. This is an inductive approach to calibration. And it produces judgements not only about whether student raters recognize features that match criteria, but about whether and how they *produce* those features as a matter of routine (shown by a group trend), or whether one, a few, or the whole group needs more practice.
If a highly nominated text turns out to be one the teacher also rates highly, what you have is a much stronger and more reliable indicator of learning than the deductive approach gives us. You have a portrait of the way students have deployed the criteria as a principle in their own writing and in their judgement of others' writing. The deductive method can produce a valid result for a single text, but as Perelman notes, it is not a terribly reliable way to tell us who can write again, in another situation, with similar success.
Conversely, in an Eli review where agreement between the raters and the teacher is weak, the instructor has a very clear signal about where students are confused in applying the criteria. This allows for targeted instruction and a much more effective next intervention.
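To make that agreement check concrete, here is a toy sketch of the idea in plain Python. This is an illustration only, not Eli's actual algorithm: the sample ratings, the averaging of group scores, and the use of a rank correlation as the agreement measure are all my assumptions. It shows students rating four drafts against a criterion, the group's "best fit" nomination emerging from the ratings, and a simple check of how well the group's ranking aligns with the teacher's.

```python
# Toy sketch of "reverse calibration": each student rates drafts A-D
# on a criterion (1-5 scale); the group's averages yield a "best fit"
# nomination, and a rank correlation gauges rater-teacher agreement.
# Not Eli's actual algorithm -- data and measure are illustrative.

from statistics import mean

def rank(values):
    """Return ranks (1 = highest) for a list of scores, averaging ties."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average position for a tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation between two score lists."""
    ra, rb = rank(a), rank(b)
    ma, mb = mean(ra), mean(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

# Each row: one student's ratings of drafts A-D on the criterion.
student_ratings = [
    [4, 2, 5, 3],
    [5, 1, 4, 3],
    [4, 2, 5, 2],
]
teacher_ratings = [5, 2, 4, 3]

group_scores = [mean(col) for col in zip(*student_ratings)]
best_fit = max(range(len(group_scores)), key=lambda i: group_scores[i])
agreement = spearman(group_scores, teacher_ratings)

print("group nomination:", "ABCD"[best_fit])
print("rater-teacher agreement: %.2f" % agreement)
```

In this made-up class, the group nominates draft C while the teacher's rankings correlate strongly but not perfectly with the group's: a high correlation would suggest the class has internalized the criterion, while a weak one would flag exactly the kind of confusion that calls for targeted instruction.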
So here is a simple claim that is, nonetheless, a bold one: Eli is better at calibration.
Why? We offer a way - for the first time - to focus calibration on the criteria, not on a sample text. The informating technology built into Eli handles the extra work of comparing agreement across multiple texts, something we might well do by hand if it weren't so arduous. We therefore get a valid and reliable result that also guides instruction in a much more focused way. We also avoid the problem of implicitly asking students to reproduce the formal features of the sample text rather than create an original draft that exemplifies the criteria (an old problem with the progymnasmata known as imitatio).
This is a big shift, and a profound one: it returns us to the human-driven activity of learning to write. We have settled for too long on a method best left to robots. It was always a compromise. But we don't have to settle for it any longer.
I said this wasn't a sales pitch. I mean it. If you are inspired by this post to try Eli, get in touch. We'll work something out.