Identification and Evaluation of Local Item Dependencies in the Medical College Admissions Test
Author(s) - Zenisky April L., Hambleton Ronald K., Sireci Stephen G.
Publication year - 2002
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.2002.tb01144.x
Subject(s) - item response theory, local item dependence, reliability, equating, differential item functioning, item analysis, psychometrics, Rasch model, statistics, educational measurement
Measurement specialists routinely assume that examinee responses to test items are independent of one another. However, previous research has shown that many contemporary tests contain item dependencies, and failing to account for these dependencies leads to misleading estimates of item, test, and ability parameters. The goals of the study were (a) to review methods for detecting local item dependence (LID), (b) to discuss the use of testlets to account for LID in context‐dependent item sets, (c) to apply LID detection methods and testlet‐based item calibrations to data from a large‐scale, high‐stakes admissions test, and (d) to evaluate the results with respect to test score reliability and examinee proficiency estimation. Item dependencies were found in the test, and these were attributable to test speededness or to context dependence (related to passage structure). The results also highlight that steps taken to correct for the presence of LID and obtain less biased reliability estimates may affect the estimation of examinee proficiency. The practical effects of LID on passage‐based tests are discussed, as are issues regarding how to calibrate context‐dependent item sets using item response theory.
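One widely used LID detection index of the kind the abstract refers to is Yen's Q3, the correlation between IRT residuals for pairs of items; large positive Q3 values for item pairs within the same passage suggest local dependence. The sketch below is a minimal, hedged illustration on simulated Rasch data, not the authors' analysis: the data, item parameters, and sample sizes are hypothetical, and for simplicity the expected scores use the true generating parameters rather than fitted estimates, as would be required in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: 1000 examinees answering 6 Rasch items.
n_persons, n_items = 1000, 6
theta = rng.normal(size=n_persons)           # examinee abilities
b = np.linspace(-1.5, 1.5, n_items)          # item difficulties (assumed)

# Rasch model success probabilities and simulated 0/1 responses.
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
u = (rng.uniform(size=p.shape) < p).astype(float)

# Yen's Q3: correlate residuals (observed minus expected score) over
# examinees for every pair of items. Under local independence the
# off-diagonal Q3 values should hover slightly below zero.
residuals = u - p
q3 = np.corrcoef(residuals, rowvar=False)    # n_items x n_items matrix

# Flag item pairs whose residual correlation exceeds a screening cutoff
# (0.2 is a common rule of thumb, not a value from this study).
flagged = [(i, j) for i in range(n_items) for j in range(i + 1, n_items)
           if abs(q3[i, j]) > 0.2]
```

In an operational analysis such as the one described here, flagged pairs would then be examined for shared passages or position near the end of the test before items are grouped into testlets for recalibration.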