On July 9, 2015 we had the opportunity to showcase our approach to mitigating guessing in multiple-choice questions (MCQs) at the XIV ECP Conference in Milan, Italy.
Under the overarching theme 'Linking Technology and Psychology: feeding the mind, energy for life', we discussed the scoring of MCQs, which are widely used in assessments, with a focus on guessing.
We discussed how Classical Test Theory (CTT) traditionally scores a test by counting the number of questions answered correctly, and how scores can be adjusted for possible guessing with a correction formula that assumes guessing is random.
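The standard CTT correction for guessing subtracts a penalty for each wrong answer, on the assumption that wrong answers arise from random guesses among the available options. A minimal sketch (the function name and the worked numbers are illustrative, not from the talk):

```python
def corrected_score(num_correct, num_wrong, num_options):
    """CTT correction for guessing: S = R - W / (k - 1).

    Assumes every wrong answer was a purely random guess among k options,
    so on average each lucky correct guess is accompanied by (k - 1)
    wrong guesses. Omitted items are not counted as wrong.
    """
    return num_correct - num_wrong / (num_options - 1)

# A test taker with 60 correct and 20 wrong answers on 4-option MCQs:
print(corrected_score(60, 20, 4))  # 60 - 20/3, i.e. about 53.33
```

Note that the penalty depends only on the number of options, which is exactly the assumption of purely random guessing that the talk goes on to question.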
We compared this to modern test theories. In Rasch measurement, fit statistics can be calculated, and because item difficulty and test taker ability lie on a common scale, guessing can be suspected when a test taker responds correctly to a question whose difficulty is well above their ability.
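In the Rasch model, the probability of a correct response depends only on the difference between ability and difficulty, so an unlikely correct response can be flagged for inspection. A sketch, where the flagging threshold is my own illustrative assumption rather than part of the model:

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
    with ability theta and item difficulty b on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def flag_possible_guess(ability, difficulty, answered_correctly, threshold=0.25):
    """Hypothetical screening rule: flag a correct answer whose model
    probability falls below `threshold`. The cutoff of 0.25 is an
    assumption for illustration, not a Rasch fit statistic."""
    return answered_correctly and rasch_probability(ability, difficulty) < threshold

# An item 2 logits harder than the test taker's ability:
print(rasch_probability(0.0, 2.0))          # about 0.12
print(flag_possible_guess(0.0, 2.0, True))  # True
```

In practice this suspicion is assessed with formal fit statistics over the whole response pattern, but the logic is the same: a correct answer the model deems very improbable invites scrutiny.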
We further discussed the three-parameter Item Response Theory (IRT) model, which includes a pseudo-chance (guessing) parameter intended to estimate the probability that a test taker answers an item correctly by guessing.
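The three-parameter model adds a lower asymptote c to the item response curve, so that even very low-ability test takers retain a nonzero chance of a correct response. A sketch with illustrative parameter values:

```python
import math

def three_pl(theta, a, b, c):
    """3PL IRT model: P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b))).

    theta: test taker ability
    a: item discrimination
    b: item difficulty
    c: pseudo-chance (guessing) parameter, the lower asymptote
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# With c = 0.25 (e.g. a 4-option item), the probability never drops
# below 0.25, no matter how low the ability:
print(three_pl(-10.0, 1.0, 0.0, 0.25))  # close to 0.25
print(three_pl(0.0, 1.0, 0.0, 0.25))    # 0.625
```

Note that c is an item parameter: the same pseudo-chance value applies to every test taker who attempts the item, which is the point of contention raised next.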
Ultimately, it can be argued that the CTT correction-for-guessing formula is hard to defend: test takers seldom guess completely at random, and a single guessing parameter value cannot realistically be applied uniformly to all test takers.
In Prof John Barnard's Option Probability Theory (OPT), we argue that a guessing parameter should be a person parameter. In this theory, a realism index is calculated to indicate the amount of uncertainty in a test taker's response to each question, and ultimately the overall realism of their knowledge in a specific content area.