Open Access
A Study of the Use of the e‐rater® Scoring Engine for the Analytical Writing Measure of the GRE® revised General Test
Author(s) -
Breyer F. Jay,
Attali Yigal,
Williamson David M.,
Ridolfi‐McCulla Laura,
Ramineni Chaitanya,
Duchnowski Matthew,
Harris April
Publication year - 2014
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12022
Subject(s) - writing assessment , natural language processing , artificial intelligence , psychology , computer science , mathematics education , data mining
In this research, we investigated the feasibility of implementing the e‐rater® scoring engine as a check score in place of all‐human scoring for the Graduate Record Examinations® (GRE®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e‐rater as a check score in operational practice. We proceeded with the investigation in four phases. In phase I, for both argument and issue prompts, we investigated the quality of human scoring consistency across individual prompts as well as across two groups of prompts organized into sets. The sets were composed of prompts with separate focused questions (i.e., variants) that must be addressed by the writer in responding to the topic of the prompt. Prompts with similar variants were also grouped for scoring purposes (i.e., variant groups). Results showed adequate human scoring quality for model building and evaluation. In phase II, we investigated eight different e‐rater model variations each for argument and issue essays, including prompt‐specific, variant‐specific, variant‐group–specific, and generic models, both with and without content features, at the rating level, the task score level, and the writing score level. Results showed the generic model was a valued alternative to the prompt‐specific, variant‐specific, and variant‐group–specific models, with and without the content features. In phase III, we evaluated the e‐rater models on a recently tested group from the spring of 2012 (March 18, 2012, to June 18, 2012), following the introduction of scoring benchmarks. Results confirmed the feasibility of using a generic model at the rating and task score levels and at the writing score level, demonstrating reliable cross‐task correlations as well as convergent and divergent validity. In phase IV of the study, we purposely introduced a bias to simulate the effects of training the model on a potentially less able group of test takers in the spring of 2012. Results showed that use of the check‐score model increased the need for adjudications by between 5% and 8%, yet the introduced bias actually increased the agreement of the scores at the analytical writing score level with all‐human scoring.
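The check‐score design evaluated in this report pairs a single human rating with an e‐rater score that serves as a quality check: when the two scores are close, the human rating stands, and when they diverge, the essay is routed for adjudication by another human rater. The sketch below illustrates that routing logic under stated assumptions; the one‐point discrepancy threshold, the score values, and the function names are illustrative choices for this example, not the operational GRE rules.

```python
# Minimal sketch of a check-score workflow, assuming an illustrative
# one-point discrepancy threshold. Not the operational GRE procedure.

def route_essay(human_score: float, erater_score: float,
                threshold: float = 1.0) -> dict:
    """Keep the human rating if it agrees with the e-rater check score
    within the threshold; otherwise flag the essay for adjudication."""
    if abs(human_score - erater_score) <= threshold:
        return {"final_score": human_score, "adjudicate": False}
    return {"final_score": None, "adjudicate": True}


def adjudication_rate(score_pairs, threshold: float = 1.0) -> float:
    """Fraction of (human, e-rater) score pairs that would require adjudication."""
    flagged = sum(1 for human, erater in score_pairs
                  if route_essay(human, erater, threshold)["adjudicate"])
    return flagged / len(score_pairs)


if __name__ == "__main__":
    # Hypothetical (human, e-rater) score pairs on a 0-6 scale.
    sample = [(4.0, 4.3), (3.0, 4.5), (5.0, 5.0), (2.0, 3.6)]
    print(f"Adjudication rate: {adjudication_rate(sample):.0%}")
```

The phase IV result can be read against this rule: a systematic bias in the e‐rater model widens human–machine discrepancies and therefore raises the adjudication rate, consistent with the 5% to 8% increase reported in the abstract.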
