Quality of MCQ‐based exams: why functioning distractors matter (533.4)
Author(s) - Ali, Syed Haris
Publication year - 2014
Publication title - The FASEB Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.709
H-Index - 277
eISSN - 1530-6860
pISSN - 0892-6638
DOI - 10.1096/fasebj.28.1_supplement.533.4
Subject(s) - multiple choice, psychology, significant difference, differential item functioning, quality (philosophy), correlation, clinical psychology, medicine, psychometrics, item response theory, mathematics, philosophy, geometry, epistemology
BACKGROUND: This experimental study compared item difficulties on free-response (FR) and MCQ versions of an exam across four study cohorts. The purpose was to investigate the factors underlying differential performance on the two versions and the resulting threat to the validity of obtained scores.

METHODS: FR and MCQ versions of a 23-item Neurohistology practice exam were randomly distributed among Year 1 medical students in four study cohorts. An index of expected MCQ difficulty was calculated from the inflation in each item's ease expected from the provision of answer options, and was then compared with the observed MCQ difficulty. It was hypothesized that there would be no significant difference between the expected and observed MCQ difficulty indices.

RESULTS: In all cohorts, a significant difference was found between the FR and observed MCQ difficulty indices (p < 0.05), but not between the FR and expected MCQ difficulty indices. Moreover, the mean number of functioning distractors per item was consistently low (0.95, 0.95, 0.86, and 0.87 across the four cohorts), and the number of functioning distractors correlated significantly (p < 0.01) with MCQ difficulty.

CONCLUSION: A low number of functioning distractors threatens the validity of scores obtained on MCQ-based exams. To assess true knowledge, careful evaluation of the plausibility of every MCQ option, or a mix of MCQs and alternative assessment tools, is recommended.
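The expected-difficulty adjustment and the distractor analysis described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's own code: the guessing-based inflation formula, the 5% response threshold for counting a distractor as "functioning" (a common convention in item analysis), and all function names and data below are assumptions made for the example.

# Sketch of the item-analysis quantities from the abstract (assumptions noted above).
from statistics import correlation  # Python 3.10+

def expected_mcq_difficulty(p_fr: float, n_options: int) -> float:
    """Inflate a free-response difficulty index (proportion correct)
    by the gain expected from random guessing among n_options choices."""
    return p_fr + (1.0 - p_fr) / n_options

def functioning_distractors(option_counts: dict[str, int],
                            key: str,
                            threshold: float = 0.05) -> int:
    """Count distractors chosen by at least `threshold` of examinees."""
    total = sum(option_counts.values())
    return sum(1 for opt, n in option_counts.items()
               if opt != key and n / total >= threshold)

# Hypothetical item: 60% answered correctly on the FR version;
# the MCQ version offers 4 options (1 key + 3 distractors).
print(expected_mcq_difficulty(0.60, 4))          # 0.70

# Hypothetical MCQ response tallies for one item (key = "A").
counts = {"A": 70, "B": 12, "C": 3, "D": 15}
print(functioning_distractors(counts, key="A"))  # 2 (B and D; C is below 5%)

# Correlating per-item functioning-distractor counts with observed MCQ
# difficulty indices (Pearson r; the abstract reports p < 0.01).
fd_counts = [0, 1, 2, 1, 3]                      # hypothetical counts
mcq_diff = [0.92, 0.81, 0.65, 0.78, 0.55]        # proportion correct per item
print(correlation(fd_counts, mcq_diff))

Note that under this convention a difficulty index is a proportion correct (higher means easier), so items with more functioning distractors should show lower indices, i.e. a negative correlation, consistent with the pattern the abstract reports.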