An Extension of Four IRT Linking Methods for Mixed‐Format Tests
Author(s) - Kim Seonghoon, Lee WonChan
Publication year - 2006
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.2006.00004.x
Subject(s) - item response theory, statistics, extension (predicate logic), scale (ratio), sigma, mixed model, computer science, mathematics, psychometrics, physics, quantum mechanics, programming language
Under item response theory (IRT), linking the proficiency scales obtained from separate calibrations of multiple forms of a test onto a common scale is required in many applications. Four IRT linking methods, the mean/mean, mean/sigma, Haebara, and Stocking‐Lord methods, have been presented for use with single‐format tests. This study extends the four linking methods to mixtures of unidimensional IRT models for mixed‐format tests. Each extended method is intended to handle mixed‐format tests calibrated with any mixture of the following five IRT models: the three‐parameter logistic (3PL), graded response, generalized partial credit, nominal response (NR), and multiple‐choice (MC) models. A simulation study is conducted to investigate the performance of the four extended linking methods. Overall, the Haebara and Stocking‐Lord methods yield more accurate linking results than the mean/mean and mean/sigma methods. Limitations of the mean/mean, mean/sigma, and Stocking‐Lord methods are described for cases in which the NR model or the MC model is used to analyze data from mixed‐format tests.
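To make the linking criteria concrete, the sketch below illustrates the moment methods (mean/sigma, mean/mean) and a Stocking‐Lord-style characteristic-curve criterion for dichotomous 3PL common items only; the article's actual extension applies these criteria across mixed-format tests with polytomous models as well. All function and variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def p3pl(theta, a, b, c, D=1.7):
    """3PL probability of a correct response (logistic form with D scaling)."""
    return c + (1.0 - c) * expit(D * a * (theta - b))

def mean_sigma(b_x, b_y):
    """Mean/sigma: A and B from the means and SDs of common-item difficulties.
    Form X (new) parameters go onto the Form Y (base) scale via
    theta_Y = A*theta_X + B, a_Y = a_X / A, b_Y = A*b_X + B."""
    A = np.std(b_y, ddof=1) / np.std(b_x, ddof=1)
    B = np.mean(b_y) - A * np.mean(b_x)
    return A, B

def mean_mean(a_x, b_x, a_y, b_y):
    """Mean/mean: A from the ratio of mean discriminations, B from mean difficulties."""
    A = np.mean(a_x) / np.mean(a_y)
    B = np.mean(b_y) - A * np.mean(b_x)
    return A, B

def stocking_lord(a_x, b_x, c_x, a_y, b_y, c_y, theta=np.linspace(-4, 4, 41)):
    """Stocking-Lord-style criterion for 3PL common items: choose (A, B) to
    minimize the squared difference between the test characteristic curves,
    with Form X parameters transformed onto the Form Y scale."""
    tcc_y = p3pl(theta[:, None], a_y, b_y, c_y).sum(axis=1)

    def loss(coefs):
        A, B = coefs
        tcc_x = p3pl(theta[:, None], a_x / A, A * b_x + B, c_x).sum(axis=1)
        return np.sum((tcc_y - tcc_x) ** 2)

    result = minimize(loss, x0=[1.0, 0.0], method="Nelder-Mead")
    return result.x  # estimated (A, B)
```

The Haebara criterion differs from Stocking‐Lord in that it sums squared differences of item-level (rather than test-level) characteristic curves; extending either criterion to mixed-format tests amounts to accumulating these differences over the category response functions of whichever model (3PL, graded response, generalized partial credit, NR, or MC) calibrated each item.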
