Wednesday, July 6, 2011

Assessing Digital Writing

Assessing digital composition has been perplexing for me. I have a firm grasp of traditional composition assessment, but digital assessment has presented some challenges. I began to wonder whether the usual measures, such as quantity, format, and other standard criteria from traditional composition assessment, could carry over to digital work. Thankfully, Chapter Four of Because Digital Writing Matters helped me relinquish some of my digital assessment fears.

Chapter Four, “Standards of Assessment for Digital Writing,” begins by referencing the constant evolution of technology and the diligence needed to integrate it into “discourse” (90). Technology’s ever-changing presence makes me wonder about the sustainability of digital assessment. As the chapter notes, digital assessments require careful planning, so a successful one could take months to develop. Yet once an assessment is developed and implemented, new technological components will emerge, and this could create additional problems: constant revision of assessment standards and difficulty keeping the assessment sustainable over time.

Chapter Four of Because Digital Writing Matters also provides a useful list of “Twenty-First Century” digital standards, with extensive “Traits and Actions” across diverse realms. This list is certainly beneficial because it categorizes and clarifies the components of successful digital writing, covering inventiveness, participation, citizenship, and more (100-102).

Computer- and human-generated assessments are also referenced in Because Digital Writing Matters. In particular, the authors cite CCCC's 2004 statement on the need for continued human assessment of digital writing, given the insight human readers provide, and its denunciation of machine-generated assessment for its lack of sensitivity and care (112-113). I agree with the CCCC statement; there are numerous benefits to assessment by a human being. Still, I wonder about this stance as technology is integrated into the collegiate curriculum. Digital writing is becoming more popular at higher education institutions, and faculty are encouraged to embrace digitization in some form. Yet that encouragement has not become universal: some aspects of technology are embraced, while others, like computer-generated assessment, are not. So can computer-generated assessment be reshaped into something effective? Can it eventually be considered for curriculum implementation? Can it receive the same concrete focus that writing and technology already do?

The Digital Writing Workshop by Troy Hicks provides another effective assessment approach. His rubric, geared toward helping students succeed as online writers and as students overall, offers a clear framework for assessing blogs, wikis, visual essays, podcasts, and more. Hicks writes, “In asking students to become digital writers, you will also be asking them to become proficient with file management, software interfaces, web searching, participating in online communities, and creating multi-media work…” (Hicks 107). This is a very important point, and one I had never considered. When asking my students to complete digital writing tasks, I always focused on the design but failed to account for the software and file-management side. Additionally, Hicks recommends assessing information quality and digital design rather than formatting logistics like fonts and colors.

Because Digital Writing Matters by the National Writing Project (with Danielle DeVoss, Elyse Eidman-Aadahl, and Troy Hicks) and The Digital Writing Workshop by Troy Hicks have helped guide my digital writing assessment plans.

Photo courtesy of FotoBart (Flickr)
