
Understanding Mean Score Differences Between the e‐rater® Automated Scoring Engine and Humans for Demographically Based Groups in the GRE® General Test
Author(s) -
Ramineni, Chaitanya;
Williamson, David
Publication year - 2018
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12192
Subject(s) - test score, weighting, psychology, rating scale, scoring rule, standard score, percentile rank, statistics, artificial intelligence, machine learning, cognitive psychology, computer science, mathematics, developmental psychology, standardized test, percentile
Notable mean score differences between the e‐rater® automated scoring engine and human raters for essays from certain demographic groups were observed for the GRE® General Test in use before the major revision of 2012 that produced the GRE revised General Test (rGRE). The use of e‐rater as a check‐score model with discrepancy thresholds prevented adverse impact on examinee scores at the item or test level. Despite this control, there remains a need to understand the root causes of these demographically based score differences and to identify potential mechanisms for avoiding future discrepancies. In this study, we used a combination of statistical methods and human review to propose hypotheses about the root causes of the score differences and to determine whether such discrepancies reflect inadequacies of e‐rater, of human scoring, or of both. The human rating process was found to be strongly influenced by the structure of the scoring scale and did not fully correspond to the e‐rater scoring mechanism: human raters appeared to apply conditional, rule‐based logic, whereas e‐rater applies a linear weighting of all features. These analyses have implications for future research and for operational scoring policies for the rGRE.
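
To make the contrast described in the abstract concrete, the following minimal Python sketch illustrates a linear weighted-feature score (the e‐rater-style mechanism), a conditional rule-based score (the behavior attributed to human raters), and a check-score discrepancy threshold. All feature names, weights, rules, and threshold values below are hypothetical illustrations for exposition only, not the operational e‐rater model or GRE scoring rules.

# Hypothetical sketch: feature names, weights, rules, and thresholds are
# illustrative assumptions, not the operational e-rater model.
from typing import Dict

ESSAY_FEATURES: Dict[str, float] = {
    "organization": 3.8,
    "development": 3.5,
    "grammar": 4.1,
    "usage": 3.9,
    "mechanics": 4.3,
    "word_choice": 3.2,
}

# e-rater-style mechanism: a fixed linear weighting of every feature.
FEATURE_WEIGHTS: Dict[str, float] = {
    "organization": 0.30, "development": 0.25, "grammar": 0.15,
    "usage": 0.10, "mechanics": 0.10, "word_choice": 0.10,
}

def linear_machine_score(features: Dict[str, float]) -> float:
    """Weighted sum over all features (illustrative weights)."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

# Human-like mechanism: conditional, rule-based logic keyed to the rubric scale.
def rule_based_human_score(features: Dict[str, float]) -> int:
    """Assign a holistic score using threshold rules rather than weights."""
    if features["organization"] >= 5 and features["development"] >= 5:
        return 6
    if features["organization"] >= 4 and features["grammar"] >= 4:
        return 5
    if features["development"] >= 3:
        return 4
    return 3

# Check-score use: the machine score only flags human-machine discrepancies
# that exceed a threshold, sending the essay for additional human review.
DISCREPANCY_THRESHOLD = 1.0  # hypothetical value

def needs_adjudication(human_score: float, machine_score: float) -> bool:
    return abs(human_score - machine_score) > DISCREPANCY_THRESHOLD

if __name__ == "__main__":
    h = rule_based_human_score(ESSAY_FEATURES)
    m = linear_machine_score(ESSAY_FEATURES)
    print(f"human={h}, machine={m:.2f}, adjudicate={needs_adjudication(h, m)}")

In this sketch, a demographically patterned divergence between the two functions would surface as adjudication flags concentrated in particular groups, which is the kind of discrepancy the check-score model is designed to catch.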