A Comparison of the Common‐Item and Random‐Groups Equating Designs Using Empirical Data
Author(s) - Kim DongIn, Choi Seung W., Lee Guemin, Um Kooghyang R.
Publication year - 2008
Publication title - International Journal of Selection and Assessment
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.812
H-Index - 61
eISSN - 1468-2389
pISSN - 0965-075X
DOI - 10.1111/j.1468-2389.2008.00413.x
Subject(s) - equating, statistics, context (archaeology), item response theory, econometrics, sample size determination, mathematics, sample (material), differential item functioning, psychology, psychometrics, Rasch model, geography, chemistry, archaeology, chromatography
We designed this study to evaluate several data collection and equating designs in the context of item response theory (IRT) equating. The random-groups design and the common-item design have been widely used for collecting data for IRT equating. In this study, we investigated four equating methods based on these two data collection designs, using empirical data from a number of different testing programs. When the randomly equivalent groups assumption was reasonably met, the four equating methods tended to produce highly comparable results; when it was not, the methods based on the two designs produced dissimilar results. Sample size can also have differential effects on the equating results produced by the different methods. In practice, a common-item equivalent-groups design often produces unacceptably large group mean differences owing to anomalies such as context effects, poor quality of common items, or a very small number of common items. In such cases, a random-groups design would produce more stable equating results.
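To make the common-item design concrete, the sketch below illustrates one standard way two test forms can be placed on the same IRT scale through their shared items: the mean/sigma linking method. This is an illustration only, not the authors' procedure (the abstract does not name the four equating methods studied), and the item difficulty values are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): mean/sigma linking
# for a common-item IRT equating design. All item parameter values below are
# hypothetical and chosen only to show the computation.
import numpy as np

def mean_sigma_constants(b_common_old, b_common_new):
    """Slope A and intercept B that place the new form's scale onto the old
    form's scale, using difficulty estimates of the common items on each form."""
    b_old = np.asarray(b_common_old, dtype=float)
    b_new = np.asarray(b_common_new, dtype=float)
    A = b_old.std(ddof=1) / b_new.std(ddof=1)
    B = b_old.mean() - A * b_new.mean()
    return A, B

def rescale_new_form(a_new, b_new, A, B):
    """Apply the linking constants to the new form's 2PL item parameters."""
    a_star = np.asarray(a_new, dtype=float) / A
    b_star = A * np.asarray(b_new, dtype=float) + B
    return a_star, b_star

# Hypothetical difficulty estimates for five common items on each form
b_common_old = [-1.2, -0.4, 0.1, 0.8, 1.5]
b_common_new = [-1.0, -0.3, 0.2, 0.9, 1.7]
A, B = mean_sigma_constants(b_common_old, b_common_new)
print(f"A = {A:.3f}, B = {B:.3f}")
```

In a random-groups design, by contrast, forms are administered to randomly equivalent examinee groups, so no such linking through common items is needed; this is why anomalies in the common items (context effects, poor item quality, too few items) can destabilize common-item equating but leave random-groups equating unaffected.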
