Sunday, June 5, 2016

A planning heuristic for DH projects

Our time in Easton, PA on the beautiful Lafayette College grounds wrapped up on Friday with a final, half-day session of the #lafdh workshop. We spent the time reflecting on what we learned together - this included Ryan and me - and then I introduced one last thinking tool that we left our participants with. It was a great way to end a week of fun, hard work, and great ideas.
Reflecting: Learning Together
 I asked folks to respond to five questions corresponding to our key learning goals for the workshop. What did we learn about DH? about coding and programming languages? about computers and what we can make them do? about the limitations of computational methods? and about one another? That last one tends to take folks by surprise, but I always include it in my list of learning outcomes when I am teaching, leading workshops, or doing any kind of similar activity.

Why else spend so much time in a room together if we don't see some value in learning about the others around us? In hands-on workshops like #lafdh, we often find that our colleagues are our richest resource for learning. The relationships we start and/or strengthen in such experiences can persist long after the memory of a few PowerPoint slides or a Python demonstration fades, too. We can keep learning from one another once we have come to know and trust our colleagues. Ryan & I probably had the most to learn about all the others in the workshop because we were the newcomers.

We had a great time coming to understand each person's areas of specialty, hearing about their work and their experience, and seeing how they were most comfortable reasoning their way through new challenges. We had a nice blend of approaches in that last regard. Some preferred a top-down approach, reasoning from principles to integrate details. Others preferred a more inductive way, diving into the details to make order from the patterns they found.  

One thing I learned...
I had a few really valuable takeaways from this workshop that will illuminate my own scholarly work over the next few months. These came from the multiple opportunities I've had, along with Ryan, to explain what we have been doing to new audiences over the last few weeks. At #lafdh, we encountered an interdisciplinary group with varying levels of experience with DH. Half were library folks, and the other half were Humanities scholars and teachers, most from various areas in English Studies.

In response to the "what can we get computers to do" question, I realized that the work Ryan & I have done on "topoi mapping" - something that you can see working in both the Hedge-o-Matic and the Faciloscope - combines location-based and semantic analytic approaches in roughly equal measure to reliably locate "rhetorical moves." Combining the two gives us the flexibility to correctly classify a language structure that we cannot reliably pin down to a certain number or type of words. Rhetorical moves are n-grams - chunks of text of undetermined length - that may exhibit some regularity of form in terms of their lexical components, but are highly variable nonetheless. You can do the same move with vastly different words, in other words (heh).
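To make that distillation a bit more concrete, here's a minimal sketch of what blending the two kinds of signal might look like. To be clear, this isn't the Hedge-o-Matic's or the Faciloscope's actual code - the tokenizer, feature names, and classifier suggestion below are illustrative assumptions only.

# Hypothetical sketch: combine location-based and semantic (lexical) features
# for each sentence so a classifier can learn to spot a "rhetorical move."
from nltk.tokenize import sent_tokenize, word_tokenize

def sentence_features(sentence, index, total):
    """One feature dict per sentence: which words appear, and where the
    sentence sits in the larger text."""
    tokens = [t.lower() for t in word_tokenize(sentence)]
    features = {"has({})".format(t): True for t in tokens}   # semantic signal
    features["relative_position"] = round(index / total, 1)  # location signal
    return features

def featurize(text):
    sentences = sent_tokenize(text)
    return [sentence_features(s, i, len(sentences))
            for i, s in enumerate(sentences)]

# Paired with labels ("hedge" / "not hedge," say), these feature dicts could
# train any off-the-shelf classifier, e.g.
# nltk.NaiveBayesClassifier.train(labeled_pairs).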

I'd never had a moment to distill our analytic approach into such a tight (if still a bit dense) form as that above. Nor had I tried to theorize from top to bottom - taking into account the specific transformation we perform on the texts we analyze - precisely what combination of steps we take to find something like a "hedge" in scientific writing. It came over the course of a day or so as one of those rare moments of clarity for me! I explained it out loud to the group and "tuned in" to what was coming out of my mouth with a sense of wonder. Ah ha! That's what we've been doing! So...look for more about that in an upcoming publication, I am sure.

DH Planning Grid w/ "Master Trope" Questions
Planning a DH Project
I really hope our workshop participants found the final activity we did useful. Ryan & I walked them through a planning process that we think represents a good way to plan DH projects. Here's a peek at it.

On the X axis, we list actions that also correspond to common DH team roles: research(er), develop(er), and then the folks who think about the user experience that the team aims to facilitate. These can coincide in more than one person, of course, but they represent what are often distinct areas of interest, expertise, and work in any given interval of time on a project.

On the Y axis, we have the DH lifecycle I wrote about before. We'd spent the day before going through that with the participants in a hands-on way in an attempt to understand how DH work proceeds. Finally, below the grid, there is a prompt to go through and fill in the boxes in three passes. The first is to generate questions, the second is to generate to-do list items, and the third is to plan desired outcomes.

In the grid above, I'm showing the guiding questions or "master tropes" for each of the DH activity roles. The researcher(s) ask "why" - why do the project? why do it this way? with these texts? etc. The developers ask "what" - what are we doing? with what? in what ways? And the user experience folks ask "who?" - who's looking at this? who needs to access it? who needs to understand it? All three share the all-important question: "how?" How shall we proceed? The researchers might ask how *should* we do it? The developers converge on how *will* we do it, while the user experience folks continually raise the question of how *can* we...?
Planning grid with Outcomes for HoM highlighted

I like to use this planning process with students as well as with project teams. Planning questions, activities, and outcomes is a good way to help all the team members feel some ownership of the project. Coming back to these decisions as the project progresses is also a good idea, because things change.

One thing folks are often pleasantly surprised by is the way each phase of the lifecycle produces valuable outcomes. The example grid here shows some for the conjecture/gather (i.e. theoretical) stage as well as the interpret results stage for the Hedge-o-Matic. That Ryan & I think of our work building applications as *primarily* theoretical in nature can come as a surprise for some. That it can also result in useful resources for folks who may or may not be interested in our theoretical work is a nice bonus!


Friday, June 3, 2016

A DH Project Lifecycle

On Thursday, we kicked it up a notch (on the abstraction scale) in the #lafdh Lafayette College Digital Humanities Workshop and worked through a DH project lifecycle. The goal Ryan & I had for the participants was to spend a day doing DH "cosplay" - a day in the life of a DH scholar - to get a feel for what it is like to do the work, make the decisions, fail forward, and yes, see your project actually yield some interesting results.

We started in the morning by proposing the following four steps for the DH project lifecycle:

1. Conjecture & Gather Resources
2. Prepare Data
3. Run Analysis and/or Synthesis
4. Interpret Results

This is a very simplified sequence, of course, and it serves more as a heuristic than anything else. But it is still pretty accurate if we also add the requisite "rinse, repeat" as a fifth step. Many, if not most, projects proceed this way, with a few notable (and perfectly reasonable) variations: in #1, resources (e.g. an archive of original texts or images) can precede conjecture, and in #3, synthesis and analysis can appear in either order.

For our participants, we added some detail so that the experience would fit into a single working day. Not unlike a cooking show on TV, we wanted to teach the "recipe," but we also had to condense some of the steps that take a long time so they would fit into our scheduled programming time. So we had a version already done and ready to come out of the oven golden brown & delicious, as it were.

Our version of the four steps, above, then, included a bit more detail like this:

1. Conjecture & Gather Resources: Ask a question or make a guess based on theory and/or your available evidence

2. Prepare Data: Learn text processing basics; build a working corpus

3. Run Analysis in Custom-Built Python Environment: Select analytic tools & configure the I/O pipeline using Python

4. Interpret Results: Do we see what we expect? Are we surprised by anything? Can we get to a useful, reliable result?

We didn't make *all* the decisions for the group, but we did decide that we'd be doing a sentiment analysis using a naive Bayes classifier trained on the Cornell movie review corpus. Two of our seminar participants provided our analytic target - a set of student reflective essays known as "literacy narratives." This is a genre used often in writing courses: we ask students to reflect on their history as readers and writers, to consider their experiences in formal and informal settings, and to set some goals for themselves as writers.

We decided, as a group, that it would be interesting to see if students generally had negative or positive feelings about their literacy experiences. So to find out, we trained a classifier to read these essays and, wherever students mentioned a particular word related to writing, to classify the sentence containing that word as having positive or negative valence.
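If you'd like a feel for the moving parts, here's a minimal sketch of that pipeline using NLTK's bundled copy of the movie review corpus. Our actual workshop notebook differed in its details; the keyword test and the bag-of-words features below are simplifying assumptions.

# Sketch of the day's pipeline: train naive Bayes on the Cornell movie review
# corpus (as packaged with NLTK), then classify any sentence in a student
# essay that mentions writing as positive or negative in sentiment.
import random
from nltk import NaiveBayesClassifier
from nltk.corpus import movie_reviews
from nltk.tokenize import sent_tokenize, word_tokenize

def bag_of_words(words):
    return {w.lower(): True for w in words}

# 1. Build (features, label) pairs from the labeled movie reviews.
labeled = [(bag_of_words(movie_reviews.words(fileid)), category)
           for category in movie_reviews.categories()
           for fileid in movie_reviews.fileids(category)]
random.shuffle(labeled)
classifier = NaiveBayesClassifier.train(labeled)

# 2. Score only the sentences that mention writing.
def score_writing_sentences(essay_text, keyword="writ"):
    """Return (sentence, 'pos' or 'neg') for sentences that mention writing."""
    results = []
    for sentence in sent_tokenize(essay_text):
        tokens = word_tokenize(sentence)
        if any(keyword in t.lower() for t in tokens):
            results.append((sentence, classifier.classify(bag_of_words(tokens))))
    return results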


We worked through all four steps in the list above. As with most projects of this type, quite a bit of text processing is required in order to make sure the classifier is picking up the "signal" that we are looking to home in on. After working through that, near the end of the afternoon, we had a result! We had built a simple machine that reads a student essay, returns sentences that include references to "writing," and then makes an assessment about whether the students were writing with generally negative or positive feelings about their experience. After a spot check, we saw that it not only worked pretty well, but it helped us to formulate some new thoughts - a new conjecture - to start our process all over again.

Of course, we wouldn't have to start at the beginning this time. We could press ahead and make more ambitious goals for ourselves on the final day of #lafdh.

Wednesday, June 1, 2016

Writing/Code

Today in the #lafdh workshop we had folks working together to write original functions in Python. Ryan introduced six basic functions and string methods built into Python that are very useful for text processing:
  • print - print a text string or the value of a variable
  • lower - return a string in all lower case
  • len - get the length of a string
  • split - turn a string into a list, broken (by default) on white spaces
  • join - turn a list into a string
  • type - get the data type of the selected object (usually the value of a variable)
and with these, plus what we learned about variables yesterday, folks wrote functions to do cool things like
  • find out how many sentences are in the novel Moby Dick (or any other text)
  • turn a plain text passage into a list of comma separated values (.csv) so it can be opened in Excel
  • randomly switch words in a passage around, making "word salad"
The goal was to build on our work yesterday, gain confidence and experience working with a programming language, and to use computational thinking to understand how we Humanists and computers, though we see texts differently, might do some cool things together.
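To give a flavor of the exercises, here's one possible version of the first and third tasks, using just those building blocks (plus open and the random module). Participants' own solutions varied, and splitting on periods is a deliberately naive stand-in for real sentence detection.

# Count the sentences in Moby Dick (or any plain-text file) the simple way.
def count_sentences(path):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    sentences = text.split(".")    # split: break the string into a list on "."
    print(type(sentences))         # type: confirm we now have a list
    return len(sentences)          # len: how many pieces we ended up with

# Make "word salad" by shuffling the words of a passage.
import random

def word_salad(passage):
    words = passage.lower().split()   # lower + split: a normalized list of words
    random.shuffle(words)
    return " ".join(words)            # join: turn the list back into a string

print(count_sentences("moby_dick.txt"))   # assumes a local plain-text copy
print(word_salad("Call me Ishmael. Some years ago - never mind how long..."))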

There were high-fives and cheers before lunch as code snippets executed without errors. And after lunch, there was a triumphant show-and-tell in which our merry band became real programmers by showing off their code and hearing from other programmers how they would have done it differently. :)

We also did some work in the afternoon building a topic model using gensim. Using libraries, moving into an IDE (from IPython notebooks), and working out an I/O workflow were the real objectives in that introduction, but we did get to see and discuss the results of an LDA topic model. Real DH stuff after just two days!
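For the curious, a gensim topic model at its smallest looks something like the sketch below. The toy documents, the preprocessing, and the number of topics are placeholders; our afternoon version worked on a real corpus with a fuller I/O pipeline.

# A minimal gensim LDA example: build a dictionary, convert documents to
# bag-of-words vectors, then fit and print a two-topic model.
from gensim import corpora, models

docs = [
    "whales and ships and the open sea",
    "students write essays about reading and writing",
    "the ship sails the sea hunting whales",
]
tokenized = [d.lower().split() for d in docs]
dictionary = corpora.Dictionary(tokenized)                # map tokens to ids
bow_corpus = [dictionary.doc2bow(toks) for toks in tokenized]

lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)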

A DH learning pattern

Our #lafdh #squadgoals
Yesterday, we met a fantastic group of folks taking our DH workshop at Lafayette College. As must be done with all such events, we made a hashtag: #lafdh

We spent yesterday doing a few activities focused on computational thinking. As workshop leaders, our goals for the day were to help folks build confidence to push beyond their current comfort levels, and to try out some new ways of working. We also offered folks a learning pattern - kind of like a pedagogical approach, but more learner-centric - for the DH-style mantra "more hack, less yack."

Think of it as working at a variety of scales, wherever you are trying to overcome anxiety, build confidence, build skill, acquire knowledge, and (re)consider the value of something new. The "thing" could be a piece of software or hardware, a new process, or a practice.

The Learn-by-Doing learning pattern goes like this:
  • Use the thing
  • "Read" the source code and/or (cultural) logic of the thing
  • Put what you have read into your own words
  • Make some changes to the thing and see what happens
  • Make something new based on the thing by modifying or building on it
  • Reflect on what you learned
As with many such patterns, all of it is best understood as a cycle or recursive process. In fact, the first three steps form a kind of internal loop too. When I'm learning to code something new and trying to figure things out, I usually repeat those top three in rapid iterations for some time before moving on to number four.

Today, we'll try to move this pattern into the realm of "habit" for our participants. On deck: processing text with Python!

Monday, May 30, 2016

Going Gentle...

All this week, I’ll be leading a workshop at Lafayette College with my colleague Ryan Omizo. We’ve been asked to provide a gentle introduction to the Digital Humanities and to help the new colleagues we are about to meet think about how they might incorporate computational methods into their work as scholars and teachers. Throughout the workshop, we’ll invite participants to reflect and share their experiences. I’ll be sharing mine here on my occasional blog about Teaching with Technology.

This workshop is a special instance of TwT, but like all such occasions, it’s every bit as much a chance for me to learn new things as it is for the others attending. In this first post, I’m hoping to offer a glimpse into my own life as a researcher working at the intersection of DH, rhetoric and writing studies, and user experience. I’ve personally found this kind of self-portrait very helpful when I talk to others in DH because, perhaps notoriously, each scholar comes to DH from different disciplinary, institutional, and pragmatic angles. Hearing how others “Do DH” is a pretty important way that the DH community sustains itself and invites new individuals into the fold. This is what “The Day of DH” is all about, for example.

A Busy May
 
Results from the Hedge-o-Matic
The beginning of the Lafayette workshop caps a busy month for Ryan and me as well as another of my frequent collaborators in the world of rhetoric & DH, Jim Ridolfo. I flew to Pennsylvania immediately after doing two sessions at the Rhetoric Society of America conference on work with both Ryan and Jim. Ryan & I presented on a project called The Hedge-O-Matic, an article and an app that we recently published in Enculturation. Also on the panel were Enculturation editor Jim Brown and Production editor Kevin Brock. After a great intro by Jim who framed the project in terms of a well-known, but perhaps underappreciated structure in classical rhetorical education – the progymnasmata – I discussed the theoretical and ethical commitments behind the HoM. Ryan presented on the way our theoretical questions led us to make specific methodological choices, including how we trained the HoM to process and analyze text. And finally Kevin Brock did a reading of the HoM’s source code from a critical code studies perspective with a focus on the kinds of arguments the code itself can be seen to make when understood as a rhetorical text.

I also chaired and presented in a roundtable session at RSA organized by my colleague Jim Ridolfo, with whom I co-edited Rhetoric & the Digital Humanities. The roundtable was our way of continuing the conversation we hoped to begin with RDH by inviting early career scholars working in Rhetoric and DH to give brief updates about their cutting-edge work.

Krista Kennedy gave us an exciting preview of her forthcoming book Textual Curation, and framed the session beautifully with an overarching framework for digital composing. Seth Long demonstrated computational techniques for investigating authorial agency – by humans and non-humans – with a detailed and eloquent analysis of U.S. Presidential State of the Union Addresses. Jen Sano-Franchini argued for a feminist approach to user-experience design and showed three sample applications created by her undergraduate students that enacted values derived from radical feminist thought and UX. Casey Boyle proposed a speculative project – Quintillianism – to take more expansive advantage of the affordances of digital spaces, extending education beyond the boundaries of the four-year degree. Finally, Laura Gonzales impressed everybody in the room with an elegant, updated model of the work of translingual writers doing translation work based on extensive fieldwork with two different community organizations and over 6000 coded translation “sessions!”

It was an inspiring session for Jim and me. We also used the opportunity to announce our next project called #rhetops and issue a call for participation. If you have work that fits, please get in touch.

Thinking About Computational Thinking
Before RSA, Ryan & I gave a full-day workshop on computational rhetoric at the Computers & Writing Conference in Rochester, NY. So this really has been something of our spring tour! But a common theme throughout has been computational thinking, and helping our colleagues wrap their brains around what its benefits and limitations may be for the kinds of work they do.

Looking forward to much more of that this week in Easton, PA! And I'll have more to say about it too. Stay tuned!

Thursday, September 25, 2014

Class Discussion Is Not All It's Cracked Up to Be

I find myself at an interesting place just apart from the debate some of my colleagues and some figures in the overlapping Higher Ed/BigTime Social Media Venn Diagram region have been having about laptops in classrooms. Alex Reid's post is one of the more recent ones in the thread (and he points to several others in his opening 'graf, helpfully).

I say I'm just apart from the debate because, from my own point of view, I actually see a lot of overlap between the positions of the two "sides" as they are sometimes constituted: allow laptops (and cell phones and iPads) or don't (or maybe require them not to be in use some or most of the time) during a f2f class meeting.

When set up this way, the two sides will differ about whether it is possible or desirable to have students' attention with or without the machines involved. Clay Shirky doesn't think it is either desirable or, really, all that possible. My colleague Steve says "meh" (he always says "meh"): it's possible if you engage students more and if you do it in a way that acknowledges the technology rather than tries to prohibit it.

Pushed to choose between these two, I'm more in alignment with Steve. And the reason is that I have a very low opinion of class discussion time as an effective intervention for most of the things that I want students to learn (writing, web authoring, interaction design, and even graduate seminars like research methods or...ta da...Teaching with Technology!). This low opinion is informed by very strong evidence that learning in these areas is achieved through practice, and that while talk is a very important part of that practice, it is a particular kind of scaffolded peer interaction that does the most good. Better than expert critique, even. The reason is that students learn from participating, not just from being in the room when others are talking. So what matters most is that students - all of them - participate as much as possible. What also matters is that they have some guidance to stay more or less on target when it comes to offering one another feedback. If not, then the talk is not so helpful.

In my experience, the full-class gabfest is really far less full-class and far less about sharing feedback than it would need to be to count as a very effective technique. It is more often about teachers than students. And it probably does more harm than good to the degree that it wastes valuable high-bandwidth opportunities for students to work together and learn from one another. As often happens, new technologies and the attempt to evaluate their effectiveness can reveal blackboxed issues with the underlying pedagogy that they have evolved to support. So one way we are seeing the frayed edges of "class discussion" these days is in the work being done to evaluate "clickers," also known as Classroom Response Systems. The consensus so far: clickers aren't good outside of an explicit strategy for using them to help students offer one another good feedback, as outlined in the link I included above.

So...that's why I stand apart, I guess. I think few teachers can pull off the live-action experience that is really required. I think they need some help to do it. And I think that technology can play a role if it's used well.

And as you no doubt know by now, I helped make some that does just that for writing and design-oriented classrooms. But I don't think this way because we made Eli Review...we made it because we think this way (and the evidence backs us up). 

One more thing: Alex Reid's thought experiment imagines a feedback rich space for learning. Read his entry!



Wednesday, November 6, 2013

How to teach writing like a human (and not a robot)

Pardon the alarmist tone of the title, but I just don't think folks are paying a whole lot of attention to the way that technology is being used these days in writing instruction. I alluded to this in my earlier post on informating vs. automating technologies. Perhaps I was being too circumspect.

Jeff, Mike & I said that the robots are coming.

The response has been...well...quiet (I get that a lot).

But let me tell you what I think is happening right now that bears more than just quiet contemplation. It has to do with automating technology, but more importantly, it has to do with what I see as a fundamentally flawed understanding of how people - especially young people - learn to write. The idea is a simple one: that you can learn to write alone, and that we can figure out a way to give "personalized" feedback to individual learners.

The thought is that, as with other subjects, students work alone and get some kind of automated feedback on progress that serves to customize their process. It is, to my eye, a return to the very things we in the computers & writing community fought so very hard to escape from in the days when the popular term was Computer-Assisted Instruction. Individualized drills. Now with more adaptive smarts. But the thought is the same: students working alone can learn to write better.

Now, let me be very very clear about this next point. All the research we know of says this isn't true. Please don't believe me. Believe Graham & Perin's 2007 meta-analysis or Kellogg & Whiteford's comprehensive synthesis of the available research. Read it, though, and then consider that lots of folks are heading down a path that either explicitly (bad) or implicitly (worse) sees learning to write as a solo activity.

As teachers and researchers, we see this trend as bad for students. Folks like recently retired MIT Professor Les Perelman do too...saying that teaching students how to do well on a written exam scored by a machine, or by hundreds of readers acting like a machine, is not the same as teaching them how to write well.

We must stop teaching this way and running our programs this way. That *includes* training a bunch of human raters to score writing based on adherence to holistic criteria. Yes, you heard me. It's time to stop all that.

I've been thinking a lot about this issue of "calibration" lately in connection with the rising popularity of individualized learning. I'll try to keep this short and clear: the way we are used to doing calibration reviews - norming readers to find specific traits in a text -  is good at showing agreement (or lack of) about the features present in a single text. But these reviews cannot show agreement about where criteria are best exemplified across multiple acts of writing. What we really want to know is whether students can write well whenever they need to, and whether they can meet each new exigence with a repertoire that allows them to do that.

In other words: our current practice with holistic scoring is to settle for agreement about the features of a text rather than agreement about the criteria and their application more broadly. It is, I think most of my colleagues would agree upon reflection, an example of settling for a less powerful indicator of learning because the more desirable one was formerly out of reach. It was simply not practical to see, with precision and rigor, students engage learning outcomes in writing across multiple instances of writing, reading, and review. So we had to settle for one high-stakes performance.

But not any more. Now we have Eli. (Stay with me, this is not a sales pitch.)

With Eli, we can run a calibration round "in reverse." That is, we can ask students to find the best fit between criteria and a group of essays. Using criteria matching and/or scaled items in Eli, "best fit" essays are nominated by the group. As the review progresses, a teacher can check to see if she/he agrees with the highly rated examples. This is an inductive approach to calibration. And it not only produces judgements about how/if student raters recognize features that match criteria, but also about whether and how they *produce* these features as a matter of routine (shown by a group trend), or whether one, a few, or the whole group needs more practice.

If a highly nominated text turns out to be something a teacher also rates highly, what you have is a much stronger and more reliable indicator of learning than the deductive approach gives us. You have a portrait of the way students have deployed the criteria as a principle in their own writing and in their judgement of others' writing. The deductive method can produce a valid result for a single text, but as Perelman notes it is not a terribly reliable way to tell us who can write, again, in another situation with similar success.
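As an illustration only - this is not Eli's internals - the arithmetic behind that reverse calibration is simple: aggregate the peer ratings per draft, surface the group's "best fit" nominee, and check it against the teacher's own rating.

# Hypothetical sketch of calibration "in reverse": peers rate drafts against a
# criterion, the group's top pick emerges, and we check teacher agreement.
from collections import defaultdict

# (draft_id, score 1-5) pairs from one review round, single criterion
peer_ratings = [("draft_a", 5), ("draft_a", 4), ("draft_b", 2),
                ("draft_b", 3), ("draft_c", 4), ("draft_c", 5)]
teacher_ratings = {"draft_a": 5, "draft_b": 2, "draft_c": 3}  # same criterion

scores = defaultdict(list)
for draft, score in peer_ratings:
    scores[draft].append(score)
averages = {draft: sum(s) / len(s) for draft, s in scores.items()}

best_fit = max(averages, key=averages.get)     # the group's nominated exemplar
agrees = abs(averages[best_fit] - teacher_ratings[best_fit]) <= 1
print(best_fit, round(averages[best_fit], 1), "teacher agrees:", agrees)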

Conversely, in an Eli review where agreement between the raters and the teacher is weak, the instructor has a very, very clear signal regarding where there is confusion about how students are able to apply the criteria. This allows for targeted instruction and a much more effective next intervention.

So here is a simple claim that is, nonetheless, a bold one: Eli is better at calibration. Why? We offer a way - for the first time - to focus on calibrating on the criteria, not on a sample text. The informating technology built into Eli handles the extra work of comparing agreement across multiple texts, something that we might well do by hand if it weren't so arduous. We therefore get a valid and reliable result that also guides instruction in a much more focused way. We also avoid the problem of implicitly asking students to reproduce the formal features of the sample text rather than creating an original draft that exemplifies the criteria (an old problem with the progymnasmata exercise known as imitatio).

This is a big shift. But it is profound because it shifts us back to the human-driven activity of learning to write. We have settled for too long on a method that is best left to robots. It was always a compromise. But we don't have to settle for it any longer.

I said this wasn't a sales pitch. I mean it. If you are inspired by this post to try Eli, get in touch. We'll work something out.

Friday, August 2, 2013

Informated Teaching & Learning

It's been nearly 25 years since Shoshana Zuboff published In the Age of the Smart Machine: The Future of Work and Power and introduced the term "informate" as a contrast to "automate" to describe how technology transforms work and working conditions. To automate is to consciously and systematically transfer both the skill and the responsibility for routine work practice from a human to a non-human agent. To informate is to enroll a non-human agent in work practice such that it provides feedback to the human agent.

I've written a lot about informating technologies in my career as they apply to writing, knowledge work, and to the knowledge work of teaching writing. That last topic is something I care deeply about as a teacher of writing myself. With Tim Peeples, I wrote a chapter called "Techniques, Technology, and the Deskilling of Rhetoric & Composition: Managing the Knowledge-Intensive Work of Writing Instruction." In that chapter, Tim & I warned that the costs of writing instruction were so high that we would soon see technologies arise to automate as much of the work associated with it as possible. Automated grading, outsourcing, and other things were part of this gloomy forecast. Worst of all, we argued, student learning would suffer.

But we saw another way too, suggested by Zuboff's alternative path, to create informating technologies that not only improved working conditions for teachers but also helped to demonstrate, via the information generated, that our pedagogies were working. To demonstrate, in other words, that students were learning.

Tim & I first presented the work that informed that chapter when we were still graduate students. And it was a bit strange looking out into a crowd of experienced faculty and issuing what surely seemed, at the time, to be dire warnings and equally Quixotic exhortations that we must immediately get to work building new kinds of software, new writing systems oriented toward learning. I've done that a lot in my career too.

But here lately, with help from my colleagues at WIDE, I've started to act on that call. We even have a little company now. We make software that informates teaching and learning. We do this as a deliberate, emphatic alternative to those making software that automates teaching and learning. We do this because we think it serves learners better. And we know it serves teachers better too.

I wanted to say this - as plainly and clearly as I could - because I am guessing some of you would like to know what we are up to and why when we talk to you about Eli. So that's it. What and why. We think learning to teach writing well is tough but it is worth it because it is the best way to help students understand themselves as learners as well as writers. We think helping students to understand themselves as learners - not merely as writers trying to get a paper done - is hard work, but well worth it too.

There are many others out there making software that does exactly the sort of thing Tim & I predicted all those years ago. If you read that post I linked to, you won't recognize the names of folks working on that software as rhetoric and writing studies scholars. You WILL recognize the names of the companies funding those projects as the same folks that buy you hors d'oeuvres at the 4C's. I am not happy that this prediction has come true. But I am doing something about it. You can too. 




Wednesday, July 31, 2013

Inviting Eli to Your Class

I am thrilled that this Fall semester, many fellow teachers will be using Eli Review in their classrooms for the first time. Thanks for giving it a try! We are excited to help out in any way that we can. Our colleagues often ask Jeff, Mike, & me, "how do we get the best results using Eli right from the start?" In this post, I’ll answer that by suggesting a couple of ways of thinking about Eli as a new resource. And I’ll also suggest a few specific things to do to help you see the value of Eli right away.

1. Think of Eli as a window on students’ writing and reviewing process
It’s true that Eli is a service that streamlines the process of doing peer review. Students can easily exchange work and offer feedback to one another, guided by teachers’ prompts. But Eli is not merely meant to streamline workflow during review.

Eli allows teachers to set up write, review, and revise cycles and track students’ progress through them. This means that Eli is a service that supports all of writing instruction, not just that bit in the middle where students review each other’s work. Two things follow from this, for me, as a teacher. One is when in the process I give writers feedback. With Eli, my guidance comes after each writer has

1) shared a draft in Eli (write)
2) received feedback from peers (review) and
3) compiled their feedback and a reflection in a revision plan (revise)

I still read each student’s draft, but now I also have a much clearer idea of what they are planning to do next because I can see their revision plan and feedback. As a result, my feedback is more focused: I can adjust their revision priorities if need be, or simply encourage them to go ahead with a solid plan if they have one. To illustrate, let’s look at a snapshot from an individual student’s review report. Eli shows me how the student did and how he compares with the rest of the group. This student, whom we will call “Jeremy,” is doing rather well:


When I look at these results and read his draft, I know what to say to Jeremy. What would you say?

The second thing is that now, for a typical assignment, I might do 4 or 5 Write-Review-Revise cycles. I start with something short, like a prompt for the student to do a one-paragraph "pitch" that glosses their key argument and supporting evidence for an argumentative essay. Next I might ask students to submit a précis that summarizes a key secondary source. Third, we might look at a "line of argument" or detailed outline. Fourth is a full draft in rough form.

The prompts for reviewers in each cycle would ask  students to focus on matters appropriate to that stage of the writing process. So we would leave matters of attributing source materials accurately in MLA style until cycle four, but we would address accurate paraphrasing in cycle two when students are preparing a precis.

I usually try to have one or two cycles per week in a typical course, which is enough to keep students focused on drafting, reviewing, and revising consistently throughout a project (as opposed to doing everything at the end).

2. Think of Eli as a means to adjust your teaching priorities on the fly
So what do multiple cycles, each with a detailed review report, buy you as a teacher? Something nearly magical happens when, during a review, I make sure that I explicitly align my learning goals for a particular project with the review response prompts I give to students.

Here’s what the magic looks like. The data below come from reviewers’ combined responses to two scaled items – like survey questions – that I included as part of a review of rough drafts of an analytic essay.


Note that the prompts are things I hope to see my students doing by this point. These are things we've been talking about and practicing in class. Seeing these results, I know what to spend more time on. Not everybody is using secondary source material well just yet. But I also see which students are doing well. I'll have them lead our discussion.

But how do we get from a writing assignment to a review that gives me this kind of real-time feedback to guide instructional priorities? Let's go through that process using a writing assignment - a real one from a course my colleague was teaching called "Writing and the Environment." This is a prompt for one of the short weekly response papers students were asked to write:

Write a response to the chapters from Walden by Emerson. Your response should consider the work itself as well as the historical context of the message. What did it mean for Emerson to write this book? to write it when he did? These should not be simple summaries of the essays/chapter. Instead they should be a comment on what YOU THINK and/or FEEL in response to the week’s texts. What main ideas stuck with you and why? What questions did they raise for you? What made you wonder? Did you agree, disagree? Were you inspired, angered, encouraged, surprised?

Thinking ahead to the review report we want to see as the instructor – the one that aggregates information for the whole class – we are interested to know which response papers have the traits mentioned in the assignment description above. These traits match up with my colleague's learning goals for this particular point in the course. She wanted students to engage Emerson's text not only as a work of creative non-fiction, but as part of an evolving, historical dialogue in the U.S. that has shaped our understanding of the environment and society's relationship(s) to it.

So one of the key learning outcomes for the Emerson weekly response is related to seeing Emerson's writing as a product of its time and as an influence on what came after. Another is less specific to that week and that particular reading because it applies to all of the readings: it is the ability to not merely sum up what was said, but to explain how the views of the writer changed over the course of the narrative (and to attribute those changes to the writer's experiences).

So, with these goals in mind, I’ll set up the review prompt like this:

Read the essay and check the box to indicate if the writer has done the following things:
  • include facts that accurately place the work in its historical context
  • explain the change(s) in perspective the author underwent
  • explain the experiences Emerson had that inspired his thinking
  • include thoughts of the writer (not only a summary of Emerson's thoughts)
What this will give us in the review report is a snapshot of how often reviewers notice these features in the drafts they read. We can see, too, which individual writers did well in comparison to the larger group and this will give us a source of samples to discuss in class. But most importantly as the instructor, I can use this – along with my own reading of the drafts – to adjust my plans about what to discuss in class. If only 30% of the drafts discussed Emerson’s changes in perspective, for instance, that can become the most important thing to address in the next class meeting.

You might be wondering, at this point, "where's the revision part of this cycle?" Well, for these short response papers, which help students process readings, we might not ask for revised versions to be turned in. But if I'm the instructor, I'd still use the revision plan task in Eli anyway, like this:

Revision Plan Prompt
You will have a chance to write on this topic again for your synthesis essay. Gather the feedback that was most helpful for you from the peer review round and reflect on the ways you might re-read Emerson, do additional research, or revise your responses. Write a note to your future self about what you can do to score well on the exam with a similar writing task.


Once students have submitted their revision plans, what I see, as the teacher, is a set of materials that includes their original draft, all the feedback they received from peers, all the feedback they gave to others, and their revision plan with reflections on what they can improve upon in their next opportunity to write about Emerson. This is what I comment on, offering advice that adjusts or reinforces priorities to help them focus on the areas of greatest need.

A final thought: Write, Review, Revise…Repeat!
With Eli, my approach to teaching writing has not really changed very much. But my execution - my ability to see, understand, and offer feedback on students' writing process - has improved dramatically. I make more decisions and give more feedback based on evidence than I did in the past.

Students’ work during peer review, on the other hand, changes tremendously with Eli. It is, quite simply, far better. So are their revision plans and revised drafts. With better (and more) feedback, I see better writing. It is that simple. 

Thursday, May 16, 2013

Eli and The Evidence: How I Use Review Results to Guide Deliberate Practice

Recently, I traveled to Washington State University to talk with faculty there about writing, learning to write, and what the evidence says is the best way to encourage student learning. You can find lots of the material for both the full-day workshop and my University address on the Multimodal blog published by the Writing Program.

In my talk, I reviewed some of the powerful evidence that shows what works in writing instruction: deliberate practice, guided by an expert (teacher), and scaffolded by peer feedback (Kellogg & Whiteford, 2009). But I also urged teachers to be guided by the kind of evidence that emerges from their own classrooms when students share their work. This is not simply doing what peer-reviewed studies say works, but involves systematically evaluating what the students in your classroom need and responding to that as well.

This can be a challenge to do. The evidence we need to make good pedagogical choices for the whole group, or to tailor feedback for a particular student, does not collect itself.

This is why we built Eli! With Eli, instructors can gather information shared by students during a peer review session to learn how the group - and how each individual student - is doing relative to the learning goals for a particular project or for the course as a whole.

A Review Report in Eli
Our marketing partner, Bedford/St. Martin's, has put together some great resources designed to help instructors get the most out of Eli. And one especially helpful discussion that I'd like to point out here shows what evidence a review report makes available and how it might be used to guide deliberate practice in writing. Here's what a teacher's review report looks like in Eli:

A review report in Eli Review

Because Eli aggregates the results of review feedback into one report, a teacher can see trends that are helpful for providing various kinds of feedback to students. Here, I'll talk about just one kind of feedback - trait matching - and how it can be useful in the context of deliberate practice. I'll also offer some examples from my own pedagogy to show why having evidence is so powerful.

Help Students Identify Revision Priorities by Noting Missing Traits or Qualities
One type of feedback that students can give one another is called "trait matching" in Eli. Reviewers check boxes to indicate whether a specific feature is present in the draft. When I'm teaching, I use trait matching items to help students zero in on what is most important at a particular stage in their drafting process. Early on, for instance, it might be a main idea or thesis. Later on, it might be supporting details to develop an argument more fully.

In the review report, then, I get a result like this:
This tells me what percentage of reviewed drafts contained the four traits I have specified should be in each one. Notice that in this summary, most writers have included a summary of their own media use patterns. But fewer writers have made references to two secondary sources in these drafts.

This helps me to identify what I need to emphasize to the whole group, and it also allows me to evaluate each individual student's revision priorities as reflected in their revision plans (also in Eli). I know that at least 44% of those plans should list references to secondary sources!
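The arithmetic behind that kind of summary is nothing fancy. A toy version - not Eli's code, and with made-up numbers - looks like this:

# Toy version of a trait-matching summary: what percentage of reviewed drafts
# had each trait checked as present by their reviewers?
trait_checks = {
    "summarizes own media use patterns":   [True, True, True, False],
    "references two secondary sources":    [True, False, False, False],
}

for trait, checks in trait_checks.items():
    pct = 100 * sum(checks) / len(checks)
    print("{:.0f}% of drafts: {}".format(pct, trait))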

Here's how I might convey these trends to students. First, I will reassure them that the patterns they've noticed agree with my own reading of their work. This lets them know that they can trust the feedback they are receiving from their peers:

Overall Comments
Overall, your review responses to trait matching items (the check boxes) and scaled items (star ratings) are very good and accurate. I did not find many instances where I disagreed with reviewers about what they had seen in a draft or how well the writer had met one of the criteria represented in the star ratings.

What this means: you can trust the feedback coming from your peers about whether or not your drafts contain all the specified features. Use these as you begin drafting to turn your attention to those things that were not present or not as detailed as they could have been.

Next, I'll try to interpret a bit and even pull out some good examples that they identified during their reviews (and which I think everyone can learn from):

Trait Matching - Summarizing Patterns, Including Detail, and Analyzing

I'm happy to say that on the whole, folks did a good job summarizing their own data. Almost everyone (88%) included a summary of media use patterns and (85%) supported these with detailed evidence. Of the few who were rated by peers as deficient in this area, they often merely listed statistics and did not frame these in terms of a pattern. The following example is modified, but shows what I mean:

List rather than summarize
"I sent 27 text messages on Wednesday and 41 on Thursday. I received 18 texts on Wednesday and 51 on Thursday."

Summarize, rather than list:
"My weekday texting activity seemed to depend on how much time I spent at work on a given day. On my day off, Wednesday, I sent just a little under twice the number of texts than I did on Tuesday, a day when I worked. I also received more texts on my off day, likely because I was engaged in longer conversations than I would be if I were at work."
See how example one is just the facts, while example two identifies a use pattern that the facts support? That's a key difference.

But by far, the biggest shift most of you will make as you draft your essays is from summarizing to analyzing. This makes sense, but will be a challenge as you also need to incorporate secondary source data. Note that only 68% of you "described the relationship between two or more patterns" and just 65% "provided evidence to explain the relationship between two or more patterns." As I read the drafts, I'd put those numbers just a little bit lower - you were generous with one another in your reviews this time! :)

Of course, these are just my comments to the whole group. For individual students, I can drill down further and say something like this: 


Thanks for providing a wonderful example of using evidence to support a claim. You are using primary source data well in this draft! Now, work on putting your primary source data in comparison with the secondary source - the trends reported in the Pew Internet report. Are your media use patterns similar or different than others' in your age group as reported by Pew?

Having this evidence makes all the difference when it comes to getting students to see a writing assignment as a way to identify what kinds of writing moves they need to improve (vs. those that they are doing well) and to practice those things in a deliberate way.

References
Kellogg, R. T., & Whiteford, A. P. (2009). Training advanced writing skills: The case for deliberate practice. Educational Psychologist, 44(4), 250–266.