Influence of Item Parameter Estimation Errors in Test Development
Author(s) -
Ronald K. Hambleton,
Russell W. Jones,
H. Jane Rogers
Publication year - 1993
Publication title -
Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.1993.tb01071.x
Subject(s) - item response theory , equating , computerized adaptive testing , item analysis , estimation , statistics , econometrics , classical test theory , polytomous rasch model , rasch model , psychometrics , mathematics , computer science
Item response models are finding increasing use in achievement and aptitude test development. Item response theory (IRT) test development involves selecting test items based on a consideration of their item information functions. A problem arises, however, because item information functions are computed from item parameter estimates, which contain error. When the “best” items are selected on the basis of their statistical characteristics, there is a tendency to capitalize on chance due to errors in the item parameter estimates. The resulting test, therefore, falls short of the test that was desired or expected. The purposes of this article are (a) to highlight the problem of item parameter estimation errors in the test development process, (b) to demonstrate the seriousness of the problem with several simulated data sets, and (c) to offer a conservative solution for addressing the problem in IRT‐based test development.
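The capitalization-on-chance effect the abstract describes can be sketched with a small simulation. Everything here is illustrative, not the article's actual design: a 2PL model, information evaluated at a single ability point, and a simple Gaussian error model for the parameter estimates are all assumptions. Items that *look* most informative are disproportionately those whose parameters were overestimated, so the test information the developer expects exceeds what the assembled test actually delivers:

```python
# Hypothetical sketch (not the authors' simulation): selecting items by
# estimated information capitalizes on chance in the estimation errors.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_select = 300, 20
theta = 0.0  # ability point at which information is evaluated

# "True" 2PL item parameters: discrimination a, difficulty b.
a_true = rng.uniform(0.5, 2.0, n_items)
b_true = rng.normal(0.0, 1.0, n_items)

# Estimates = truth + Gaussian calibration error (assumed error model).
a_est = np.clip(a_true + rng.normal(0.0, 0.3, n_items), 0.1, None)
b_est = b_true + rng.normal(0.0, 0.3, n_items)

def info_2pl(a, b, theta):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

est_info = info_2pl(a_est, b_est, theta)
true_info = info_2pl(a_true, b_true, theta)

# Pick the n_select items that LOOK most informative given the estimates.
chosen = np.argsort(est_info)[-n_select:]

expected = est_info[chosen].sum()  # test information the developer thinks was built
actual = true_info[chosen].sum()   # test information the test actually has

print(f"expected info: {expected:.2f}, actual info: {actual:.2f}")
```

The gap between `expected` and `actual` is the shortfall the abstract refers to; a conservative remedy in this spirit would discount each item's estimated information before selection rather than taking it at face value.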
