Review of: A Self-Correcting Approach to Multiple-Choice Exams Improves Students’ Learning

Gruhn, D., & Cheng, Y. (2014). A self-correcting approach to multiple-choice exams improves students' learning. Teaching of Psychology, 41(4), 335–339.

The validity and learning potential of multiple-choice exams have long been debated in the field of teaching psychology. Questions often arise as to the ability of these assessments to accurately reveal true student learning. In addition, the simple knowledge that a multiple-choice exam will be given often shapes the learning activities students engage in outside of class: because these exams usually emphasize factual recall, they motivate students to memorize rather than conceptualize. Despite these challenges, multiple-choice exams offer a number of benefits that are difficult for a realistic instructor to ignore. They are generally cheaper, easier to administer, and easier to grade, advantages that are all too enticing for a time- and money-starved professor. With all this in mind, Gruhn and Cheng, the authors of "A self-correcting approach to multiple-choice exams improves students' learning," set out to determine whether there is, in fact, a way to enhance multiple-choice assessments so as to overcome these limitations. Citing previous research, the authors discussed a previously articulated means by which multiple-choice exams could be improved: student self-correction. Prior research has found that students who are given the opportunity to self-correct a multiple-choice exam become more familiar with the material and develop a deeper understanding of it than those who do not self-correct. However, Gruhn and Cheng wanted to answer two questions that prior studies had not yet addressed. The purpose of their study was, first, to compare pre- and post-test scores of students who self-corrected and, second, to determine whether this assessment method could be applied to courses with large enrollments (150+ students).

The methodology in this study was quite simple and straightforward. Two classes were recruited to participate. Each class took three exams over the semester, one of which was the final exam. The classes differed in that one class self-corrected the first two exams while the other did not. Scores on the final exam were then used to compare the effectiveness of the two assessment strategies. In addition, the authors explored the relationships between exam type, the number of corrections made, and improvement over the semester.

The results of the study heavily favored the self-correcting strategy as a better means of promoting learning and improving performance on later exams. The authors found that students in the self-correcting group improved more than the control group from one exam to the next; with each successive exam, the self-correcting group outperformed the control group by a wider margin, with the largest advantage appearing on the final exam. Gruhn and Cheng also found that students in the self-correcting group tended to learn more and make greater improvements the more corrections they made. This relationship suggests that the self-correcting procedure itself may be the means by which learning is achieved: the additional time spent with the material and the process of changing incorrect answers to correct ones appear to have a significant effect on deeper learning and later performance.

Exploring new ways to enhance learning and assessment techniques should be a focus of teachers in all fields of academia. Some assessment strategies, I believe, have emerged simply out of time and money constraints. Ask most teachers which exam type they think is most beneficial to a student and they will likely suggest anything except multiple-choice exams. However, as mentioned previously, the advantages of this format are simply difficult to pass up. The research by Gruhn and Cheng provides, perhaps, a happy medium of sorts. The results suggest that although some teaching or assessment strategies may lack effectiveness, we should not ignore the opportunity to improve them into something that is indeed effective. The self-correcting approach discussed in this article suggests that multiple-choice exams may yet have hope in the realm of academia. Though I imagine many instructors would still maintain that other assessment types are more productive, this effort to enhance multiple-choice exams shows promise and should be carefully considered.

 
