
Empirical option weights for multiple-choice items
Author(s) - Gregor Sočan
Publication year - 2015
Publication title - Metodološki zvezki
Language(s) - English
Resource type - Journals
eISSN - 1854-0031
pISSN - 1854-0023
DOI - 10.51936/bfrh1091
Subject(s) - weighting , correlation , internal consistency , multiple choice , psychometrics , statistics , psychology
When a multiple-choice test is scored as the number of correct answers, not all of the information contained in the item responses is used. Scoring such tests by applying empirically determined weights to the chosen options should yield more information about examinees' knowledge and consequently produce more valid test scores. However, the existing empirical evidence does not clearly support option weighting. To overcome the limitations of previous studies, we performed a simulation study in which we manipulated the instructions given to examinees, the discrimination structure of the distractors, test length, and sample size. We compared the validity and internal consistency of number-correct scores, corrected-for-guessing scores, two variants of correlation-weighted scores, and homogeneity analysis scores. The results suggest that under certain conditions the correlation-weighted scores are notably more valid than the number-correct scores. On the other hand, homogeneity analysis cannot be recommended as a scoring method. The relative performance of the scoring methods depends strongly on the instructions and on the distractors' properties, and only to a lesser extent on sample size and test length.
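To make the idea of correlation-weighted option scoring concrete, the sketch below illustrates one common variant: each option of each item receives a weight equal to the correlation between the indicator "examinee chose this option" and the rest score (total minus the current item), and an examinee's weighted score is the sum of the weights of the options actually chosen. This is a minimal, self-contained illustration on simulated data, not a reproduction of the paper's exact procedure; all names and the data-generating setup are assumptions.

```python
import numpy as np

# Hypothetical illustration of correlation-weighted option scoring.
rng = np.random.default_rng(0)

n_persons, n_items, n_options = 200, 10, 4

# Simulate option choices: higher latent ability -> more likely to pick
# option 0 (the keyed answer); otherwise a distractor is chosen uniformly.
theta = rng.normal(size=n_persons)
p_correct = 1.0 / (1.0 + np.exp(-theta))
choices = np.where(
    rng.random((n_persons, n_items)) < p_correct[:, None],
    0,
    rng.integers(1, n_options, size=(n_persons, n_items)),
)

number_correct = (choices == 0).sum(axis=1)

# Empirical option weight: correlation between the option-choice
# indicator and the rest score (total minus the current item).
weights = np.zeros((n_items, n_options))
for i in range(n_items):
    rest = number_correct - (choices[:, i] == 0)
    for k in range(n_options):
        ind = (choices[:, i] == k).astype(float)
        if ind.std() > 0 and rest.std() > 0:
            weights[i, k] = np.corrcoef(ind, rest)[0, 1]

# Weighted score: sum of the weights of the chosen options.
weighted_score = weights[np.arange(n_items)[None, :], choices].sum(axis=1)
```

In this setup the keyed option tends to receive a positive weight and the distractors negative weights, so the weighted score rank-orders examinees similarly to the number-correct score while differentiating among examinees who chose different distractors.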