
EPEC at the IACAT Conference 2015


EPEC's Founder and Executive Director, Professor John Barnard, delivered his presidential address at the IACAT Conference on 14 September 2015 in Cambridge, England.

In his address, entitled 'Improving Precision of CAT Measures', Prof Barnard questioned dichotomous scoring (scoring a response only as either correct or incorrect). He suggested that a test taker may not always have 100% confidence in an answer when they have selected the correct option, and likewise may have some knowledge of the concept being tested even when they have ultimately selected an incorrect answer option. As you will know, most MCQs require a single answer option to be selected.

He argued that the use of a proper scoring function can perhaps better capture a test taker's knowledge/ability than the sole use of dichotomous scoring. In a CAT, the test taker's responses are scored as 0 or 1, and these scores are used to select the next question to administer to the test taker. If a "true score" derived from a scoring function is used instead of a dichotomous score, the precision of selecting the next question can be improved, making the CAT even more efficient. This methodology is implemented in EPEC's measurement theory, Option Probability Theory (OPT).
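OPT itself is EPEC's proprietary theory and its scoring function is not spelled out in this post, so the sketch below illustrates the general idea with a generic Brier-type proper scoring rule instead: the test taker reports a probability for each option, and honest reporting maximises the expected score. The function names and the example probabilities are hypothetical, not taken from OPT.

```python
def dichotomous_score(probs, correct):
    """Conventional MCQ scoring: 1 if the highest-probability option
    is the keyed answer, 0 otherwise. Any partial doubt is discarded."""
    chosen = max(range(len(probs)), key=probs.__getitem__)
    return 1 if chosen == correct else 0

def quadratic_score(probs, correct):
    """A proper (Brier-type) scoring rule on the reported option
    probabilities: the score is highest when the reported probabilities
    match the test taker's true belief."""
    return 1.0 - sum((p - (1.0 if i == correct else 0.0)) ** 2
                     for i, p in enumerate(probs))

# A test taker who is fairly, but not fully, sure of option B (index 1):
probs = [0.1, 0.7, 0.1, 0.1]
print(dichotomous_score(probs, correct=1))  # 1 -- the doubt is invisible
print(quadratic_score(probs, correct=1))    # 0.88 -- the doubt lowers the score
```

Under dichotomous scoring the two test takers who picked B with 70% and 100% confidence are indistinguishable; the proper scoring rule separates them, which is the extra information Prof Barnard argues can sharpen item selection.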

Let's quickly recap: CAT (computerized adaptive testing) refers to a testing methodology where questions are adapted to test taker ability levels. This means that when a test taker is presented with a question and answers it correctly, they are subsequently presented with a more challenging question; if they answer it incorrectly, they are then presented with an easier question. This can be seen in the image below, where the circles show the test taker's ability estimate, a correct answer is shown with a black line, an incorrect answer with a red line, and the 'cut score' as the horizontal line. The length of the vertical lines represents measurement error, and as you can see, this also reduces over time.
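The adapt-up/adapt-down loop described above can be sketched in a few lines. This is a minimal illustration only, assuming a Rasch (1PL) response model, a hypothetical item bank of difficulty values, and a simple step-size rule for the ability estimate; a production CAT would use maximum-likelihood or Bayesian estimation rather than fixed steps.

```python
import math
import random

def rasch_prob(ability, difficulty):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def run_cat(true_ability, item_bank, n_items=10, seed=0):
    """Minimal CAT loop: pick the unused item whose difficulty is closest
    to the current ability estimate (the most informative Rasch item),
    score the response dichotomously, and nudge the estimate up or down."""
    rng = random.Random(seed)
    unused = list(item_bank)
    estimate, step = 0.0, 1.0
    for _ in range(n_items):
        item = min(unused, key=lambda d: abs(d - estimate))
        unused.remove(item)
        correct = rng.random() < rasch_prob(true_ability, item)
        # Correct answer -> harder items next; incorrect -> easier items.
        estimate += step if correct else -step
        step *= 0.7  # shrinking steps mirror the shrinking measurement error
    return estimate

bank = [d / 4 for d in range(-12, 13)]  # difficulties from -3 to +3
print(run_cat(true_ability=1.5, item_bank=bank))
```

The shrinking step size plays the role of the shortening vertical error bars in the image: early answers move the estimate a lot, later answers fine-tune it.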

CAT allows each test taker to receive their own unique test tailored to their ability level (increasing test accuracy). CAT can also increase test security, since no candidate receives the same questions in the same order as another. Ultimately, however, if a candidate selects the correct answer to a question, is it reasonable to conclude that the candidate had 100% certainty in that answer (as CAT currently assumes), or is there a little more room for error which needs to be accounted for? Prof Barnard questioned the prevailing measurement paradigms and demonstrated how OPT can improve measurement precision through measuring "true ability". If you would like to find out more on OPT, please click here to see Prof Barnard's publication, 'Option Probability Theory: A Quest for Better Measures', published in June 2015.
