Item response theory: applications of modern test theory in medical education
Author(s) - Downing, Steven M.
Publication year - 2003
Publication title - Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1046/j.1365-2923.2003.01587.x
Subject(s) - item response theory, equating, computerized adaptive testing, context (archaeology), classical test theory, sample (material), psychology, test (biology), educational measurement, item bank, confounding, test theory, psychometrics, computer science, statistics, econometrics, rasch model, clinical psychology, mathematics, developmental psychology, curriculum, pedagogy, paleontology, chemistry, chromatography, biology
Context - Item response theory (IRT) measurement models are discussed in the context of their potential usefulness in various medical education settings, such as assessment of achievement and evaluation of clinical performance.
Purpose - The purpose of this article is to compare and contrast IRT measurement with the more familiar classical measurement theory (CMT) and to explore the benefits of IRT applications in typical medical education settings.
Summary - CMT, the more common measurement model used in medical education, is straightforward and intuitive. Its limitation is that it is sample-dependent: all statistics are confounded with the particular sample of examinees who completed the assessment. Under IRT, examinee scores are independent of the particular sample of test questions or assessment stimuli, and item characteristics, such as item difficulty, are independent of the particular sample of examinees. This invariance property permits straightforward equating of examination scores, which places scores on a constant measurement scale and allows legitimate comparison of changes in student ability over time. Three common IRT models and their statistical assumptions are discussed. Applications of IRT in computer-adaptive testing, and as a method for adjusting for rater error in clinical performance assessments, are also outlined.
Conclusions - IRT measurement is a powerful tool that addresses a major problem of CMT: the confounding of examinee ability with item characteristics. IRT measurement speaks to important issues in medical education, such as eliminating rater error from performance assessments.
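The abstract does not name the three common IRT models. Assuming they are the one-, two- and three-parameter logistic models (1PL/Rasch, 2PL, 3PL) usually presented in this literature, their item response functions can be sketched as follows, where \theta denotes examinee ability, b_i item difficulty, a_i item discrimination and c_i the lower-asymptote (pseudo-guessing) parameter:

P_i(\theta) = \frac{1}{1 + e^{-(\theta - b_i)}}  (1PL / Rasch)
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}  (2PL)
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}  (3PL)

Under this formulation, the item parameters (a_i, b_i, c_i) and examinee ability \theta are defined on the same latent scale, which is the basis of the invariance property described in the summary: when the model fits, item parameter estimates do not depend on the particular sample of examinees, and ability estimates do not depend on the particular sample of items.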