Open Access
Evaluating the Advisory Flags and Machine Scoring Difficulty in the e-rater® Automated Scoring Engine
Author(s) - Zhang Mo, Chen Jing, Ruan Chunyi
Publication year - 2016
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12116
Subject(s) - computer science, machine learning, artificial intelligence, natural language processing, scoring system
Abstract - Successful detection of unusual responses is critical when machine scoring is used in an assessment context. This study evaluated approaches to detecting unusual responses in automated essay scoring, pursuing two research questions: how well various prescreening advisory flags perform, and how difficult responses are for the machine to score, including whether the size of the human–machine discrepancy can be predicted. The results suggested that some advisory flags detected responses the machine was likely to score differently from human raters more consistently across measures and tasks than other flags did. Relatively little scoring difficulty was found for three of the four tasks examined, and the relationship between machine and human scores was reasonably strong. Limitations and directions for future research are also discussed.
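
To make the idea of prescreening advisory flags concrete, below is a minimal illustrative sketch of such a pass in Python. The flag names (TOO_BRIEF, EXCESSIVE_REPETITION, PROMPT_RESTATEMENT) and all thresholds are hypothetical assumptions chosen for this example; they are not the actual e-rater flags or cutoffs, which the abstract does not describe. The point is only the general pattern: cheap heuristic checks run before automated scoring, and any raised flag marks a response the machine may score unreliably.

```python
"""Illustrative prescreening advisory flags for automated essay scoring.

All flag names and thresholds here are hypothetical; real engines such
as e-rater use their own (unpublished) flag definitions and cutoffs.
"""
from dataclasses import dataclass, field
from typing import List


@dataclass
class AdvisoryReport:
    """Collects the advisory flags raised for one response."""
    flags: List[str] = field(default_factory=list)


def prescreen(essay: str, prompt: str,
              min_words: int = 50,
              max_repeat_ratio: float = 0.30,
              max_prompt_overlap: float = 0.60) -> AdvisoryReport:
    """Flag responses the machine may score unreliably."""
    report = AdvisoryReport()
    words = essay.split()

    # Hypothetical flag 1: response too brief to score reliably.
    if len(words) < min_words:
        report.flags.append("TOO_BRIEF")

    # Hypothetical flag 2: one token dominates the response,
    # suggesting padding or gaming rather than genuine writing.
    if words:
        top_count = max(words.count(w) for w in set(words))
        if top_count / len(words) > max_repeat_ratio:
            report.flags.append("EXCESSIVE_REPETITION")

    # Hypothetical flag 3: the response largely restates the prompt.
    essay_vocab, prompt_vocab = set(words), set(prompt.split())
    if essay_vocab:
        overlap = len(essay_vocab & prompt_vocab) / len(essay_vocab)
        if overlap > max_prompt_overlap:
            report.flags.append("PROMPT_RESTATEMENT")

    return report


if __name__ == "__main__":
    prompt = "Discuss the advantages of renewable energy."
    essay = "Renewable energy energy energy energy is good good good."
    # Prints ['TOO_BRIEF', 'EXCESSIVE_REPETITION'] for this toy input.
    print(prescreen(essay, prompt).flags)
```

A study like the one summarized above can then ask, for each flag, how often flagged responses actually received discrepant human and machine scores, which is one way to compare how consistently different flags operate across tasks.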
