
Scoring multiple choice questions
Author(s) - J.J. Barnard
Publication year - 2013
Publication title - Suid-Afrikaanse Tydskrif vir Natuurwetenskap en Tegnologie
Language(s) - English
Resource type - Journals
eISSN - 2222-4173
pISSN - 0254-3486
DOI - 10.4102/satnt.v32i1.402
Subject(s) - Rasch model , certainty , item response theory , classical test theory , multiple choice , psychology , statistics , psychometrics
This article briefly reviews how different measurement theories can be used to score responses to multiple choice questions (MCQs). How missing data are treated may have a profound effect on a person’s score and is handled most elegantly in modern theories. The issue of guessing a correct answer has been a topic of discussion for many years. It is asserted that test takers almost never have no knowledge whatsoever of the content of an appropriate test, and therefore tend to make educated guesses rather than random guesses. Problems related to the classical correction for guessing are highlighted, and the Rasch approach of using fit statistics to identify possible guessing is briefly discussed. The three-parameter logistic item response theory (IRT) model includes a ‘guessing’ item parameter to indicate the chance that a test taker guessed the correct answer to an item. However, it is pointed out that it is a person who guesses, not an item, and therefore a guessing parameter should be a person parameter. Option probability theory (OPT) purports to overcome this problem by requiring an indication of the degree of certainty the test taker has that a particular option is the correct one. Realistic allocations of these probabilities indicate the degree of guessing and hence yield more precise measures of ability.
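Two of the scoring approaches named in the abstract can be sketched in a few lines. This is a minimal illustration, not taken from the article itself: the classical correction for guessing is shown in its textbook form, score = right − wrong/(k − 1) for k options per item, and the three-parameter logistic model in its standard form, where the item parameter c is the lower asymptote often labelled the ‘guessing’ parameter the abstract criticises. All function names here are illustrative.

```python
import math

def corrected_score(right: int, wrong: int, options: int) -> float:
    """Classical correction for guessing: subtract a penalty for wrong
    answers, assuming purely random guessing among `options` choices.
    Omitted items count neither as right nor as wrong."""
    return right - wrong / (options - 1)

def three_pl_probability(theta: float, a: float, b: float, c: float) -> float:
    """Three-parameter logistic IRT model: probability of a correct
    response at ability theta, with discrimination a, difficulty b, and
    lower asymptote c (the so-called 'guessing' item parameter)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# 40-item test, 5 options each: 30 right, 10 wrong.
print(corrected_score(30, 10, 5))            # 27.5
# At theta == b the 3PL probability is halfway between c and 1.
print(three_pl_probability(0.0, 1.0, 0.0, 0.2))  # 0.6
```

Note how c is attached to the item, not the person: every test taker of very low ability gets the same asymptotic chance c, which is exactly the modelling choice the article questions when it argues that guessing is a property of the person.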