Open Access
Distractor Analysis for Multiple‐Choice Tests: An Empirical Study With International Language Assessment Data
Author(s) - Shelby J. Haberman, Yang Liu, Yi‐Hsuan Lee
Publication year - 2019
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12275
Subject(s) - item response theory , reliability (semiconductor) , logistic regression , language proficiency , selection (genetic algorithm) , psychology , scale (ratio) , econometrics , multiple choice , statistics , item analysis , computer science , psychometrics , artificial intelligence , mathematics , mathematics education , physics , quantum mechanics , significant difference , power (physics)
Distractor analyses are routinely conducted in educational assessments with multiple‐choice items. In this research report, we focus on three item response models for distractors: (a) the traditional nominal response (NR) model, (b) a combination of a two‐parameter logistic (2PL) model for item scores and an NR model for the selection of distractors given an incorrect response, and (c) a model in which the item score satisfies a 2PL model and distractor selection and proficiency are conditionally independent, given that an incorrect response is selected. Model comparisons involve generalized residuals, information measures, scale scores, and reliability estimates. To illustrate the methodology, the models are compared using data from an international, high‐stakes assessment of the proficiency of nonnative speakers of a single target language.
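For orientation, a sketch of the forms these three models commonly take is given below. The parameterizations shown are the standard ones and are assumptions for illustration; the report's exact specifications may differ. Write \(\theta\) for proficiency, \(Y\) for the selected option of an item with options \(k = 0, \ldots, K\) (with \(k = 0\) the correct response), and \(X\) for the 0/1 item score.

(a) Nominal response model for all options:
\[
P(Y = k \mid \theta) = \frac{\exp(a_k \theta + c_k)}{\sum_{j=0}^{K} \exp(a_j \theta + c_j)}, \qquad k = 0, \ldots, K.
\]

(b) A 2PL model for the item score,
\[
P(X = 1 \mid \theta) = \frac{1}{1 + \exp\{-a(\theta - b)\}},
\]
combined with an NR model over the distractors given an incorrect response:
\[
P(Y = k \mid \theta, X = 0) = \frac{\exp(a_k \theta + c_k)}{\sum_{j=1}^{K} \exp(a_j \theta + c_j)}, \qquad k = 1, \ldots, K.
\]

(c) The same 2PL model for the item score, with distractor selection conditionally independent of proficiency given an incorrect response:
\[
P(Y = k \mid \theta, X = 0) = \pi_k, \qquad \sum_{k=1}^{K} \pi_k = 1.
\]

Under this notation, (c) is nested in (b), which is in turn closely related to (a), so the comparisons via generalized residuals and information measures mentioned above amount to asking how much distractor choice depends on proficiency beyond the correct/incorrect distinction.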
