
AN ALTERNATIVE METHOD FOR SCORING ADAPTIVE TESTS
Author(s) -
Stocking, Martha L.
Publication year - 1994
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.1994.tb01621.x
Subject(s) - computerized adaptive testing, equating, item response theory, psychometrics, standardized test, Rasch model, computer science, machine learning, artificial intelligence, cognitive psychology, econometrics, psychology, statistics, mathematics
Modern applications of computerized adaptive testing (CAT) are typically grounded in Item Response Theory (IRT; Lord, 1980). While the IRT foundations of adaptive testing provide a number of approaches to adaptive test scoring that may seem natural and efficient to psychometricians, these approaches can be harder for test-takers, test score users, interested regulatory institutions, and others to comprehend. An alternative method, based on the more familiar equated number-correct score and identical to that used to score and equate many conventional tests, is explored and compared with one that relies more directly on IRT. The conclusion is reached that scoring adaptive tests using the familiar number-correct score, accompanied by the equating needed to adjust for the intentional differences in adaptive test difficulty, is a statistically viable, although slightly less efficient, method of adaptive test scoring. To enhance the prospects for enlightened public debate about adaptive testing, it may be preferable to use this more familiar approach. Public attention would then likely be focused on issues more central to adaptive testing, namely the adaptive nature of the test.
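
The report itself contains no code; the sketch below is purely illustrative of the contrast the abstract draws. It assumes a three-parameter logistic (3PL) IRT model with made-up item parameters, and compares (a) mapping a number-correct score to the proficiency scale by inverting the test characteristic curve and carrying it to a reference form's number-correct scale (IRT true-score equating), with (b) a direct IRT maximum-likelihood proficiency estimate. The function names, parameter values, and use of scipy routines are assumptions for illustration, not the report's own procedure, and adaptive item selection itself is not modeled.

# Illustrative sketch only (hypothetical parameters and helper names):
# scoring an already-administered set of adaptive-test items two ways.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

D = 1.7  # conventional logistic scaling constant

def p_correct(theta, a, b, c):
    # 3PL probability of a correct response to each item at proficiency theta.
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def tcc(theta, a, b, c):
    # Test characteristic curve: expected number-correct score at theta.
    return float(np.sum(p_correct(theta, a, b, c)))

def theta_from_number_correct(x, a, b, c, lo=-4.0, hi=4.0):
    # Invert the TCC: find the theta whose expected number-correct equals x.
    x = min(max(x, tcc(lo, a, b, c) + 1e-6), tcc(hi, a, b, c) - 1e-6)
    return brentq(lambda t: tcc(t, a, b, c) - x, lo, hi)

def equated_number_correct(x, adaptive, reference):
    # IRT true-score equating: carry the adaptive-form number-correct to the
    # number-correct scale of a fixed reference form via a common theta.
    theta_hat = theta_from_number_correct(x, *adaptive)
    return tcc(theta_hat, *reference)

def theta_mle(responses, a, b, c, lo=-4.0, hi=4.0):
    # Direct IRT scoring: maximum-likelihood estimate of theta from 0/1 responses.
    def neg_loglik(theta):
        p = p_correct(theta, a, b, c)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1.0 - p))
    return minimize_scalar(neg_loglik, bounds=(lo, hi), method="bounded").x

rng = np.random.default_rng(0)
# Hypothetical 3PL parameters (a, b, c) for the 20 items one examinee happened
# to receive, and for a 20-item conventional reference form.
adaptive = (rng.uniform(0.8, 2.0, 20), rng.normal(0.5, 0.8, 20), np.full(20, 0.2))
reference = (rng.uniform(0.8, 2.0, 20), rng.normal(0.0, 1.0, 20), np.full(20, 0.2))

true_theta = 0.5
responses = (rng.random(20) < p_correct(true_theta, *adaptive)).astype(int)
x = int(responses.sum())

print("observed number-correct on adaptive items:", x)
print("equated number-correct on reference form :", round(equated_number_correct(x, adaptive, reference), 2))
print("theta via TCC inversion                  :", round(theta_from_number_correct(x, *adaptive), 3))
print("theta via direct ML estimation           :", round(theta_mle(responses, *adaptive), 3))

In this framing, the "equating" step is the mapping of the adaptive form's number-correct score, through a common proficiency scale, onto a fixed reference form; that mapping is what adjusts for the intentional differences in difficulty of the items each examinee receives.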