Thursday, June 9, 2011

Finding Balance - Meeting Competing Needs Through Multiple Testing Modules

As promised, here is my analysis of the two types of exams we use as a part of bcpLearning: traditional multiple choice exams and situational exams. In my post from Tuesday, I mentioned that, because many of our customers are companies seeking to train their employees, we've discovered a bit of a competition between the employer's need for quantifiable results and the employee's (student's) need for confidence in their new knowledge and their ability to apply it on the job. In an effort to meet the needs of both parties, we've begun using two different testing tools.

I. Multiple Choice Exams
These are your typical, run-of-the-mill, graded multiple choice exams. The questions are designed to cover the range of topics discussed throughout a particular course in order to provide a quick assessment of the student’s retained knowledge of those topics.

Affordances
  • Assess-ability. The employee (student) either passes or fails. If the student gets 80% or more of the questions correct, they get a passing grade and their manager is sent a certificate to keep on file. If the student fails, they have to retake the course.

  • Cheating block-ability. We’ve put a number of tools in place to assure employers that their employees are actually taking the courses and passing the exams fairly. For example, a student’s access to the course is cut off while an exam window is open, keeping students from clicking through the course for an answer. Also, the exams can automatically draw shuffled questions from a larger database, creating a different exam every time. Employers can also print out reports to track each employee’s usage. If an employee spends five minutes just clicking through every page and calls it good, the employer will know. Employers can also set the number of exam attempts allowed. If someone really wants to cheat, they can find a way…but at least we make it pretty difficult to do so.

  • Easy Assemble-ability and Maintain-ability. From the developer’s end, these exams are incredibly easy to build and maintain. Just create a new exam, enter the number of questions, type in each of the questions/answer choices, select the correct answer, click submit, and you’re done.
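To make the "different exam every time" idea concrete, here is a minimal sketch of how randomized exam generation and the 80% pass threshold could work. This is not our actual code — the `QUESTION_BANK` structure, function names, and placeholder questions are all hypothetical:

```python
import random

# Hypothetical question bank: (question text, answer choices, index of correct choice).
QUESTION_BANK = [
    ("Q1", ["A1", "B1", "C1", "D1"], 2),
    ("Q2", ["A2", "B2", "C2", "D2"], 0),
    ("Q3", ["A3", "B3", "C3", "D3"], 1),
    ("Q4", ["A4", "B4", "C4", "D4"], 3),
    ("Q5", ["A5", "B5", "C5", "D5"], 0),
]

def build_exam(bank, num_questions, rng=random):
    """Draw a random subset of questions and shuffle each one's answer choices."""
    exam = []
    for question, choices, correct in rng.sample(bank, num_questions):
        shuffled = list(choices)
        rng.shuffle(shuffled)
        # Track where the correct choice landed after shuffling.
        exam.append((question, shuffled, shuffled.index(choices[correct])))
    return exam

def grade(exam, responses, passing=0.8):
    """Score a completed exam; 80% or better earns a passing grade."""
    correct = sum(1 for (_, _, answer), response in zip(exam, responses)
                  if response == answer)
    return correct / len(exam) >= passing
```

Because each exam is a fresh random sample with freshly shuffled choices, two employees sitting down at the same time see different tests — which is most of what makes the cheating block work.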

Disaffordances
  • Student Automaton-ability. These exams ask students to spit back a series of facts mentioned in the course, without thinking critically about the information as a whole. Students can easily learn what they need to get a passing score and then forget everything immediately afterward.

  • Lack of Feedback-ability. During the exam, there's no opportunity for immediate feedback and growth. After the students finish the exams, they get to see which questions they answered correctly and sometimes an explanation, but that’s about as far as it goes.

  • No Practice-ability. Students aren't prompted to think about the information in the course as it applies to their work. They may not be able to take what they learned in the course and convert it from a series of facts to actual job-related skills.

II. Scenario Exams

These exams constitute our attempt to combat the cons presented by Multiple Choice Exams. Here we present a series of situations where the information from the course can be used to solve legitimate, real-world problems.

The following description/images will give you a good idea of how a question works in a Scenario Exam. First, the student is provided a scenario description that will provide the context. Then, the student is presented with a series of questions that might pop up as a result of the posed scenario (See Image 1). If the student selects a wrong answer, he is sent to a screen that tells him why he was wrong, provides a hint for solving the problem, or both (See Image 2). Because this is a tricky question, the student is given a hint telling him where in the government databases he might look to find the answer. After receiving the hint, the student is sent back to the question to try again. If the student selects the correct answer, he is congratulated and given a detailed explanation of why the answer was correct and the easiest way to find it (See Image 3).
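The branching logic described above — wrong answer leads to feedback and a retry, correct answer leads to an explanation — can be sketched as a short walkthrough function. This is only an illustration of the flow; the function name, the hint text, and the tariff-classification example are my own hypothetical stand-ins:

```python
# Hypothetical model of one scenario-exam question: each wrong answer returns
# targeted feedback (or a hint) and sends the student back to try again; the
# correct answer ends the loop with a detailed explanation.

def run_question(correct_answer, hints, explanation, attempts):
    """Walk through a student's attempts, returning the feedback shown at each step."""
    feedback = []
    for answer in attempts:
        if answer == correct_answer:
            feedback.append("Correct! " + explanation)
            return feedback
        # Wrong answer: show why it was wrong (or a hint), then retry.
        feedback.append(hints.get(answer, "Not quite -- try again."))
    return feedback

# Example: a tricky classification question where answer "b" is right.
log = run_question(
    correct_answer="b",
    hints={"a": "Check the chapter notes in the government database first."},
    explanation="The chapter notes point you straight to the right heading.",
    attempts=["a", "b"],
)
```

The key design point is that the loop never dead-ends: every wrong turn adds information, so the student always arrives at the solution having seen the reasoning behind it.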

Affordances
  • Genuine Applicability. Students are asked to think about what they've learned and use the information as a new skill set to solve a series of problems that are taken from real-life situations. (In this case, using the new skills to classify an imported article for the student's company.)

  • Immediate Feedback-ability. If a student gets a question wrong, he is immediately told his answer was incorrect and given either an explanation or a hint. Then he's sent back to the main page to tackle the problem with the help of the additional information he just received.

  • Confidence-ability. By learning how to apply the information to their work instead of just learning it in order to spit it back at the end, students gain the confidence they need to turn the information into a new skill set. Guiding students to actually apply their new training not only helps the students improve their job performance, it meets the employer's overall goal of actually training their employees and achieving better work performance.

Disaffordances
  • No Grade-ability. Because everyone eventually gets to the solution and there's currently no way of tracking a student's progression through the exam, we have no way of measuring a student's skill level or retention through the situational exam.

  • Avoid-ability. Because these situational exams are not graded, they are not part of the course certification process. Students can choose not to apply themselves by just guessing randomly until they finally get through the exam.

  • Difficult Assemble-ability and Maintain-ability. Currently, these exams are incredibly time-consuming to build and even worse to maintain. Each exam can require hundreds of individual pages that have to be carefully linked together. Building a new exam can take over a week. Even making a small change to one question can require hours of work. There are also a lot of limitations on flexibility within the exams.

In Conclusion
As Bill described it the other day, the goal of combining "summative" and "formative" evaluation to meet the needs of both employer and employee (student) is an important one, and this "double testing" approach is a positive step toward achieving it. While the situational exams present extensive logistical difficulties and it would be much easier to just focus on a results-driven training module, I feel 1.) the situational exams represent an important step in improving our testing procedures, and 2.) the value added for the employees and, as a result, their employers far outweighs the difficulties and should not stop us from moving forward with these projects.

1 comment:

  1. Cait: This is an interesting post and makes me think about testing methods. I think assessment is probably one of the less desirable aspects of teaching, and it's really important to keep in mind the affordances/disaffordances of certain methods and media because, as undesirable as assessment is for those administering those instruments, it's even worse for those taking them.

    As a huge proponent of multi-modal writing in composition classrooms, I often think about the special assessment-related issues that sort of work creates. I hope this is something we can talk about while we're in session.

    This last semester I took TE944 - Seminar in English Education. We skyped with Geoffrey Phelps, who is working to create an exam that tests teachers' Pedagogical Content Knowledge. His discussion evidenced just how difficult it is to create a fair testing instrument and how much work has to go on behind the scenes.
