Controlling Bias in Both Constructed Response and Multiple‐Choice Items When Analyzed With the Dichotomous Rasch Model
Author(s) -
Andrich, David,
Marais, Ida
Publication year - 2018
Publication title -
Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/jedm.12176
Subject(s) - rasch model , item response theory , polytomous rasch model , equating , multiple choice , item analysis , response bias , differential item functioning , psychometrics , statistics , econometrics , psychology
Even though guessing biases difficulty estimates as a function of item difficulty in the dichotomous Rasch model, assessment programs whose tests include multiple‐choice items often construct scales using this model. Research has shown that when all items are multiple‐choice, this bias can largely be eliminated. However, many assessments combine multiple‐choice and constructed response items. Using vertically scaled numeracy assessments from a large‐scale assessment program, this article shows that eliminating the bias in the estimates of the multiple‐choice items also changes the difficulty estimates of the constructed response items. This implies that the original estimates of the constructed response items were biased by guessing on the multiple‐choice items. This bias has implications both for defining difficulties in item banks used in adaptive testing composed of multiple‐choice and constructed response items, and for the construction of proficiency scales.
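As a rough illustration of the bias the abstract describes, the sketch below simulates multiple‐choice responses under a guessing mechanism (a 3PL‐style model with a fixed guessing probability; this form, the parameter values, and the simple estimation shortcut are illustrative assumptions, not the article's method) and then recovers item difficulties under the pure dichotomous Rasch model by matching each item's observed proportion correct against a known N(0,1) ability distribution.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n_persons, n_items = 5000, 10
theta = rng.normal(0.0, 1.0, n_persons)            # person abilities
delta_true = np.linspace(-2.0, 2.0, n_items)       # true item difficulties
c = 0.25  # hypothetical guessing probability (e.g., 4-option MC items)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generate responses with guessing: P(correct) = c + (1 - c) * logistic(theta - delta)
p = c + (1.0 - c) * logistic(theta[:, None] - delta_true[None, :])
x = rng.random(p.shape) < p

# Discretized N(0,1) ability distribution for marginalization
nodes = np.linspace(-5.0, 5.0, 201)
weights = norm.pdf(nodes)
weights /= weights.sum()

def model_pvalue(delta):
    # Marginal proportion correct implied by the pure Rasch model (no guessing)
    return np.sum(weights * logistic(nodes - delta))

# Estimate each item's Rasch difficulty by matching observed proportion correct
p_obs = x.mean(axis=0)
delta_hat = np.array(
    [brentq(lambda d, p=p: model_pvalue(d) - p, -8.0, 8.0) for p in p_obs]
)

for dt, dh in zip(delta_true, delta_hat):
    print(f"true delta = {dt:+.2f}   Rasch estimate = {dh:+.2f}   bias = {dh - dt:+.2f}")
```

Running this shows the pattern the abstract points to: guessing inflates success rates most on hard items, so their Rasch difficulty estimates are pulled down far more than those of easy items, i.e., the bias is a function of item difficulty rather than a uniform shift.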
