Evaluation of a New Method for Providing Full Review Opportunities in Computerized Adaptive Testing—Computerized Adaptive Testing With Salt
Author(s) - Cui Zhongmin, Liu Chunyan, He Yong, Chen Hanwei
Publication year - 2018
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/jedm.12193
Subject(s) - computerized adaptive testing , computer science , test (biology) , robustness (evolution) , field (mathematics) , item response theory , machine learning , artificial intelligence , psychometrics , statistics , mathematics , paleontology , biochemistry , chemistry , gene , pure mathematics , biology
Allowing item review in computerized adaptive testing (CAT) is receiving more attention in the educational measurement field as more and more testing programs adopt CAT. The research literature has shown that allowing item review in an educational test can result in more accurate estimates of examinees' abilities. The practice of item review in CAT, however, is hindered by the potential danger of test-manipulation strategies. To provide review opportunities to examinees while minimizing the effect of test-manipulation strategies, researchers have proposed various algorithms that implement CAT with restricted revision options. In this article, we propose and evaluate a new method that implements CAT without any restriction on item review. In particular, we evaluate the new method in terms of the accuracy of ability estimates and its robustness against test-manipulation strategies. This study shows that the newly proposed method offers a promising win-win situation: examinees have full freedom to review and change answers, and the impact of test-manipulation strategies is diminished.
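For readers unfamiliar with the mechanics the abstract refers to, the sketch below is a minimal, generic CAT simulation, not the authors' proposed "CAT with salt" method or the restricted-revision algorithms they cite. The 2PL item parameters, the maximum-information item selection rule, and the EAP ability estimator are all assumptions chosen only to illustrate why each response immediately shapes the next item and the ability estimate, which is the property that both motivates and complicates item review in CAT.

```python
# Illustrative sketch only: a fixed-length CAT under an assumed 2PL IRT model.
# Item bank, selection rule, and estimator are hypothetical, for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item bank: discrimination (a) and difficulty (b) parameters.
N_ITEMS = 200
a = rng.uniform(0.8, 2.0, N_ITEMS)
b = rng.normal(0.0, 1.0, N_ITEMS)

def p_correct(theta, a_i, b_i):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

def item_information(theta, a_i, b_i):
    """Fisher information of an item at ability theta."""
    p = p_correct(theta, a_i, b_i)
    return a_i ** 2 * p * (1.0 - p)

def estimate_theta(administered, responses, grid=np.linspace(-4, 4, 161)):
    """EAP ability estimate over a standard-normal prior grid."""
    log_post = -0.5 * grid ** 2  # log prior, up to a constant
    for idx, u in zip(administered, responses):
        p = p_correct(grid, a[idx], b[idx])
        log_post += np.where(u == 1, np.log(p), np.log(1.0 - p))
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return float(np.sum(grid * post))

def run_cat(true_theta, test_length=30):
    """Administer items by maximum information; return the final estimate."""
    administered, responses = [], []
    theta_hat = 0.0
    for _ in range(test_length):
        info = item_information(theta_hat, a, b)
        if administered:
            info[administered] = -np.inf  # do not readminister items
        nxt = int(np.argmax(info))
        u = int(rng.random() < p_correct(true_theta, a[nxt], b[nxt]))
        administered.append(nxt)
        responses.append(u)
        theta_hat = estimate_theta(administered, responses)
    return theta_hat

print(run_cat(true_theta=1.0))
```

Because each item is selected from the running ability estimate, an examinee who later revises earlier answers changes the evidence behind items already chosen; a manipulation strategy exploits this by deliberately answering early items incorrectly to draw easier items, then correcting the answers on review. Any method that allows unrestricted review, such as the one proposed in this article, must keep ability estimates accurate while making that kind of strategy unprofitable.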