
Detecting Rater Biases in Sparse Rater-Mediated Assessment Networks
Author(s) - Stefanie A. Wind, Yuan Ge
Publication year - 2021
Publication title - Educational and Psychological Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.819
H-Index - 95
eISSN - 1552-3888
pISSN - 0013-1644
DOI - 10.1177/0013164420988108
Subject(s) - inter-rater reliability, sensitivity (control systems), psychology, rating scale, machine learning, computer science, statistics, mathematics, developmental psychology, electronic engineering, engineering
Practical constraints in rater-mediated assessments limit the availability of complete data. Instead, most scoring procedures include one or two ratings for each performance, with overlapping performances across raters or linking sets of multiple-choice items to facilitate model estimation. These incomplete scoring designs present challenges for detecting rater biases, or differential rater functioning (DRF). The purpose of this study is to illustrate and explore the sensitivity of DRF indices in realistic sparse rating designs documented in the literature, which include different types and levels of connectivity among raters and students. The results indicated that it is possible to detect DRF in sparse rating designs, but the sensitivity of DRF indices varies across designs. We consider the implications of our findings for practice related to monitoring raters in performance assessments.
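To illustrate the kind of data structure and rater-bias signal the abstract describes, the following is a minimal sketch, not the study's actual DRF indices or designs: it simulates a sparse, linked rating design (two ratings per performance, overlapping rater pairs) and flags raters whose scoring of one student subgroup departs from their scoring of another. The simulation parameters, the residual-gap rule, and the flagging threshold are all illustrative assumptions introduced here, not the authors' method.

```python
# Illustrative sketch only: simulate a sparse, connected rating design and
# apply a naive subgroup residual-gap check as a rough analogue of DRF.
# All parameters and the flagging rule are assumptions, not the study's indices.

import numpy as np

rng = np.random.default_rng(0)

n_students, n_raters = 200, 10
ratings_per_student = 2                      # sparse design: two ratings each

# Student subgroup (e.g., focal vs. reference) and latent proficiency
group = rng.integers(0, 2, size=n_students)
theta = rng.normal(0, 1, size=n_students)

# Rater severity; rater 0 is made biased against the focal group (group == 1)
severity = rng.normal(0, 0.3, size=n_raters)
bias = np.zeros(n_raters)
bias[0] = 1.0

records = []                                 # (student, rater, score) tuples
for s in range(n_students):
    # Overlapping rater pairs provide the connectivity needed for estimation
    raters_s = rng.choice(n_raters, size=ratings_per_student, replace=False)
    for r in raters_s:
        latent = theta[s] - severity[r] - bias[r] * group[s]
        score = int(np.clip(np.round(latent + rng.normal(0, 0.5)) + 3, 1, 5))
        records.append((s, r, score))

students, raters, scores = map(np.array, zip(*records))

# Naive DRF-style check: within each rater, compare mean residual scores
# (score minus the student's overall mean score) across subgroups.
student_means = np.array([scores[students == s].mean() for s in range(n_students)])
residuals = scores - student_means[students]

for r in range(n_raters):
    mask = raters == r
    g = group[students[mask]]
    if g.min() == g.max():                   # rater scored only one subgroup
        continue
    gap = residuals[mask][g == 1].mean() - residuals[mask][g == 0].mean()
    if abs(gap) > 0.35:                      # arbitrary illustrative threshold
        print(f"rater {r}: subgroup residual gap = {gap:.2f} (possible DRF)")
```

In this toy setup, connectivity comes from raters sharing students; with fewer shared performances or disjoint rater groups, the same check becomes less stable, which mirrors the abstract's point that sensitivity to DRF depends on the design's type and level of connectivity.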