Sunday, June 12, 2011

Building Better Tools…

Here are some initial thoughts on what I would do if I could build my own app/tool/system.

At Work

Writing about the two testing tools used in bcpLearning has made me think that if I could build any tool I wanted at work, I would build a testing tool that takes the necessary aspects of the multiple-choice tests and the more formative assessment practices afforded by the situational exams and combines them into one super testing tool. The necessary and/or positive elements of the multiple-choice exams that would carry over include the ability to determine whether the student learned enough from the course to pass (assess-ability), the ability to keep students from cheating as much as possible (cheating block-ability), and the ability to build and maintain the exams with little overhead (assemble/maintain-ability).

In order to combine the summative/formative assessment types represented by the two testing tools, the new testing tool would need to have:

  1. The ability to track a student's progression through the exam. The new testing tool would still situate the questions within real-world problems and provide immediate feedback, allowing students to apply new guidance to figure out how to solve the problem if they didn't choose the correct answer the first time. However, for each question, the new tool would have to report which answers the student selected, in what order, and how many wrong answers she chose before landing on the correct one (a rough data-model sketch follows this list). This would give the exam program, as well as the student's manager, a clear, differentiated view of which questions the student answered without help and which she answered using help, and how much of that help she needed.

  2. The ability to use the tracked information to determine whether the student passed or failed. The program would require an algorithm that could score each question individually and combine those results to determine whether the student retained enough of the information taught in the course to pass, or whether she needed more study time and practice. For example, if a student answered a question on the first try without any hints, she would get 100% for that question; if she answered correctly after selecting only one wrong answer choice, she might get 80%; and so on.

    To get a more exact picture of where the student's abilities lie in terms of actual vs. proximal development, it might be helpful to give each answer choice a specific weight. If the student first picked an answer choice that was close to the right answer, or at least plausible, and then, with some additional guidance, figured out the correct answer, the assessment would be more valid if that progression counted for more than the student picking a random answer choice that wasn't even close (e.g., "your next step in completing the import process is to grab a cup of coffee") before using the hint to answer correctly. (The scoring sketch after this list shows one way such weights might work.)

  3. Appropriate feedback. In addition to the immediate guidance and explanations provided throughout the exam, the new testing tool should give students appropriate feedback for moving forward (see the feedback sketch after this list). If a student scores well enough to pass but needed help in certain areas, the final results could congratulate her and suggest that she dedicate a little more time to building the skills where she needed guidance. If she does not show enough ability in the subject matter to warrant passing, the tool should walk her through her areas of strength and the areas where she struggled most. This would help her go back to the course and focus her studies on the specific areas of difficulty, and it might also inspire her to seek out a more experienced coworker who could help her understand the subject matter better. With that additional study, she could build up her previous areas of weakness and retake the assessment, confident in her new abilities.
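
Here is a very rough sketch of what the tracking piece (item 1) might look like. I'm imagining it in Python purely for illustration; the class name, question ID, and choice labels are all invented, and a real tool would obviously need much more (persistence, reporting for managers, and so on).

```python
from dataclasses import dataclass, field

@dataclass
class QuestionAttempt:
    """Everything the grader and the manager report would need for one question."""
    question_id: str
    correct_choice: str
    selections: list[str] = field(default_factory=list)  # choices, in the order picked

    def record(self, choice: str) -> bool:
        """Log a selection and report whether it was correct."""
        self.selections.append(choice)
        return choice == self.correct_choice

    @property
    def wrong_tries(self) -> int:
        """How many wrong answers were chosen before (or without) the right one."""
        return sum(1 for c in self.selections if c != self.correct_choice)

# A student misses once, reads the guidance, then gets it right.
attempt = QuestionAttempt(question_id="q3", correct_choice="b")
attempt.record("d")  # wrong -> tool shows guidance
attempt.record("b")  # correct on the second try
print(attempt.selections, attempt.wrong_tries)  # ['d', 'b'] 1
```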
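
And a matching sketch of the grading side (item 2). The 20-point deduction per wrong try, the plausibility weights, and the 60% passing bar are all numbers I made up to show the mechanics: a completely implausible miss costs the full 20 points (matching the 80%-after-one-wrong-answer example above), while a near-miss costs less.

```python
# How "close" each wrong choice is to the right answer, on a 0-1 scale.
# "d" is the grab-a-cup-of-coffee answer, so it earns no credit.
PLAUSIBILITY = {"a": 0.7, "c": 0.4, "d": 0.0}

def question_score(selections: list[str], correct: str) -> float:
    """100% on the first try; each wrong try costs up to 20 points,
    discounted by how plausible the wrong pick was."""
    score = 1.0
    for choice in selections:
        if choice == correct:
            return max(score, 0.0)
        score -= 0.2 * (1.0 - PLAUSIBILITY.get(choice, 0.0))
    return 0.0  # the student never reached the correct answer

def passed(question_scores: list[float], cutoff: float = 0.6) -> bool:
    """Combine the per-question scores into a pass/fail decision."""
    return sum(question_scores) / len(question_scores) >= cutoff

print(question_score(["b"], "b"))       # 1.0  -> correct on the first try
print(question_score(["d", "b"], "b"))  # 0.8  -> one random miss first
print(question_score(["a", "b"], "b"))  # 0.94 -> one plausible miss first
```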
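
Finally, the feedback step (item 3) might look something like this, turning per-topic scores into the kind of forward-looking message described above. The topic names and the 80% "needs more practice" threshold are invented for the example.

```python
def final_feedback(topic_scores: dict[str, float], cutoff: float = 0.6) -> str:
    """Congratulate a passing student (flagging weaker topics), or point a
    failing student back to the specific areas to restudy."""
    overall = sum(topic_scores.values()) / len(topic_scores)
    weak = [topic for topic, score in topic_scores.items() if score < 0.8]
    if overall >= cutoff:
        message = "Congratulations, you passed!"
        if weak:
            message += " Consider spending a little more time on: " + ", ".join(weak) + "."
        return message
    return ("You haven't quite passed yet. Revisit these areas of the course, or ask an "
            "experienced coworker for help, before retaking the exam: " + ", ".join(weak) + ".")

print(final_feedback({"classification": 0.95, "entry process": 0.55, "valuation": 0.7}))
```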

To keep the testing tool viable, the new system would need to be as easy to build and maintain as the current multiple-choice exams. Setting up the original situational exams in their current format was an excruciatingly time-consuming process that involved creating a huge web of linked pages to account for potential student choices, and now that they're up and running, they're still incredibly time-consuming to maintain. Even correcting a small error in one question can take hours of work, which keeps us away from other important tasks and costs the company money. The new program would have to reduce the number of work hours needed for construction and maintenance, both to make the new tests a useful option for the company and to let us keep the exams up to date for our students.
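
One way out of the linked-pages problem, at least in principle, would be to author each question as a single data record and let the tool generate the pages and branching from it. This is purely a sketch of the idea; the field names and sample content are made up.

```python
# Each question lives in one record, so fixing a small error means editing
# one entry instead of re-wiring a web of hand-linked pages. (Hypothetical format.)
EXAM = [
    {
        "id": "q3",
        "prompt": "What is your next step in completing the import process?",
        "choices": {
            "a": "A plausible near-miss answer",
            "b": "The correct answer",
            "c": "A less plausible answer",
            "d": "Grab a cup of coffee",
        },
        "correct": "b",
        "guidance": "The hint shown after a wrong selection.",
    },
    # ...one record per question; the engine renders the pages from these.
]
```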

Overall, this new testing tool would let students apply the information from the course to realistic situations and give them immediate, and even more thorough, feedback on their ability to apply that information to their daily work. In addition, I believe these exams would yield a more valid assessment of each student's skills, giving employers a thorough understanding of their employees' overall comprehension and ability to apply the information from the course(s).

Potential Implementation Difficulties

1. Development. Truthfully, I don't have a good sense of how difficult this tool would be to build in terms of writing the code, developing the grading algorithms, and so on, other than to guess that it would take a substantial amount of start-up time and money.

2. Company Resistance. The company may not see the value added by the new program as outweighing the costs of implementation.

3. Customer Resistance. Some employers might resist the new testing format, and it would take extensive work to convince them, and potentially the NCBFAA (the agency that certifies our courses for CES and CSS credits), that the new testing tool provides a valid assessment of students' abilities and of the knowledge gained through taking the course.

Thoughts? Suggestions?
