IRT Approaches to Modeling Scores on Mixed‐Format Tests
Author(s) - Won-Chan Lee, Stella Y. Kim, Jiwon Choi, Yujin Kang
Publication year - 2019
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/jedm.12248
Subject(s) - item response theory, raw score, reliability, internal consistency, classification consistency, psychometrics, statistics, psychology, mathematics
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed‐format tests that consist of a mixture of multiple‐choice and free‐response items. Test scores on several mixed‐format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and classification consistency and accuracy under three item response theory (IRT) frameworks: unidimensional IRT (UIRT), simple structure multidimensional IRT (SS‐MIRT), and bifactor multidimensional IRT (BF‐MIRT) models. Illustrative examples are presented using data from three mixed‐format exams with various levels of format effects. In general, the two MIRT models produced similar results, while the UIRT model resulted in consistently lower estimates of reliability and classification consistency/accuracy indices compared to the MIRT models.
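The abstract refers to conditional standard errors of measurement (CSEMs) for raw scores under IRT. As a point of reference only, the sketch below shows one standard way such quantities are computed under a unidimensional model: the Lord-Wingersky recursion yields the conditional raw-score distribution at each ability level, and the CSEM follows as the conditional standard deviation. This is not the authors' code; the item parameters are hypothetical, and dichotomous 3PL items are used for brevity (the recursion generalizes to the polytomous items found on mixed-format tests).

```python
# Minimal sketch: conditional SEM of raw scores under a unidimensional 3PL model.
# All item parameters below are hypothetical, for illustration only.
import numpy as np

def p_correct(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def lord_wingersky(probs):
    """Lord-Wingersky recursion: conditional raw-score distribution at a fixed theta.

    probs: per-item correct-response probabilities.
    Returns f with f[x] = P(raw score = x | theta).
    """
    f = np.array([1.0])  # with zero items, score 0 has probability 1
    for p in probs:
        # score stays (incorrect) or shifts up by one (correct)
        f = np.concatenate([f * (1.0 - p), [0.0]]) + np.concatenate([[0.0], f * p])
    return f

def csem(theta, a, b, c):
    """Conditional SEM: SD of the raw-score distribution given theta."""
    f = lord_wingersky(p_correct(theta, a, b, c))
    scores = np.arange(len(f))
    mean = np.sum(scores * f)
    return np.sqrt(np.sum((scores - mean) ** 2 * f))

# Hypothetical parameters for a 10-item dichotomous test.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 10)   # discrimination
b = rng.normal(0.0, 1.0, 10)    # difficulty
c = np.full(10, 0.2)            # pseudo-guessing

for theta in (-2.0, 0.0, 2.0):
    print(f"theta = {theta:+.1f}: CSEM = {csem(theta, a, b, c):.3f}")
```

Averaging such conditional error variances over an ability distribution is one common route to the overall SEM and reliability estimates the abstract compares across the UIRT, SS-MIRT, and BF-MIRT frameworks; the multidimensional cases require model-specific extensions not sketched here.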
