Open Access
Evaluations of Automated Scoring Systems in Practice
Author(s) - Ourania Rotou, André A. Rupp
Publication year - 2020
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12293
Subject(s) - computer science , quality (philosophy) , data collection , perspective (graphical) , test (biology) , scale (ratio) , data science , management science , data mining , machine learning , artificial intelligence , statistics , mathematics , paleontology , philosophy , physics , epistemology , quantum mechanics , economics , biology
This research report describes the process of evaluating the “deployability” of automated scoring (AS) systems from the perspective of large‐scale educational assessments in operational settings. It discusses a comprehensive psychometric evaluation whose analyses take into account the specific purpose of the AS, the test design, the quality of human scores, the data collection design needed to train and evaluate the AS model, and the statistics and evaluation criteria applied. Finally, it notes that an effective evaluation of an AS system requires professional judgment coupled with statistical and psychometric knowledge and an understanding of risk assessment and business metrics.
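To make the evaluation-criteria point concrete, the sketch below (in Python, assuming NumPy is available) computes agreement statistics that are commonly reported when comparing automated and human scores, such as quadratic weighted kappa, the Pearson correlation, and the standardized mean difference. The function names and the toy 0-4 score scale are illustrative assumptions, not the specific criteria or thresholds discussed in the report.

```python
"""Illustrative agreement statistics often used when evaluating automated
scoring (AS) models against human scores. A hedged sketch of commonly
reported metrics, not the report's specific evaluation criteria."""
import numpy as np


def quadratic_weighted_kappa(human, machine, min_score, max_score):
    """Quadratic weighted kappa between two integer score vectors."""
    human = np.asarray(human)
    machine = np.asarray(machine)
    categories = np.arange(min_score, max_score + 1)
    k = len(categories)

    # Observed joint distribution of (human, machine) score pairs.
    observed = np.zeros((k, k))
    for h, m in zip(human, machine):
        observed[h - min_score, m - min_score] += 1
    observed /= observed.sum()

    # Expected joint distribution under independence of the two raters.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))

    # Quadratic disagreement weights: larger penalties for larger gaps.
    diff = categories[:, None] - categories[None, :]
    weights = (diff ** 2) / ((k - 1) ** 2)

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()


def standardized_mean_difference(human, machine):
    """Mean machine-minus-human score difference in pooled-SD units."""
    human = np.asarray(human, dtype=float)
    machine = np.asarray(machine, dtype=float)
    pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2.0)
    return (machine.mean() - human.mean()) / pooled_sd


if __name__ == "__main__":
    # Toy example: hypothetical human and machine scores on a 0-4 scale.
    human = [2, 3, 1, 4, 2, 3, 0, 2, 3, 4]
    machine = [2, 3, 2, 4, 2, 2, 1, 2, 3, 4]
    print("QWK:", round(quadratic_weighted_kappa(human, machine, 0, 4), 3))
    print("Pearson r:", round(np.corrcoef(human, machine)[0, 1], 3))
    print("SMD:", round(standardized_mean_difference(human, machine), 3))
```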
