Open Access
QUALIFYING READERS FOR THE ONLINE SCORING NETWORK: SCORING ARGUMENT ESSAYS
Author(s) - Powers D.E., Kubota M., Bentley J., Farnum M., Swartz R., Willard A.
Publication year - 1998
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.1998.tb01777.x
The ETS Online Scoring Network (OSN) is a system designed to meet the need for more‐or‐less continuous scoring of essays and other examinee‐constructed responses. To qualify as readers for the OSN, all applicants must pass a certification test to demonstrate that they have mastered both the OSN software and the scoring procedures used for the test(s) they will be scoring. Currently, personnel who evaluate examinee‐constructed responses for ETS‐administered testing programs must possess certain academic credentials. Recently, a study was conducted to investigate the circumstances under which current prerequisites for essay readers might be relaxed without sacrificing the accuracy of essay scoring. Of interest were potential readers who are not currently involved in postsecondary teaching and who therefore do not meet current standards for ETS essay readers. A secondary objective of the study was to provide information to set a passing score on an OSN reader certification test. To accomplish these objectives, experienced and inexperienced readers evaluated sets of essays both before and after they received standard training for scoring essays (of the kind requiring examinees to “discuss an issue”). The results showed that training did improve the accuracy with which readers scored essays. Moreover, after training, a significant proportion of inexperienced readers exhibited a level of accuracy that was commensurate with that shown by their more experienced counterparts. The major objective of the study reported here was to extend, on a smaller scale, the results of the previous study to a second kind of writing prompt — “analysis of an argument” — which is used in the GMAT writing assessment and is being considered for the GRE writing test. As in the earlier study, the results suggest that inexperienced readers without current required credentials can be trained to score “argument” essays with a high degree of accuracy. The results also support the recommendation from the earlier study concerning the appropriate level at which to set a passing score on an OSN reader certification test.

The ETS Online Scoring Network (OSN) is a system designed to meet the need for more‐or‐less continuous scoring of essays and other examinee‐constructed responses. This need is the direct result of moving to computer‐based testing, which offers testing on essentially a daily basis and, for traditional multiple‐choice measures, immediate score reporting as well. Currently, OSN essay readers convene at regional scoring centers where they score examinee essays, which are displayed on personal computer screens. To qualify as readers for the OSN, all applicants must learn to use the OSN software. They must also receive training in holistic scoring methods and in the specific procedures needed for scoring the particular kinds of constructed responses being evaluated. After training, all readers must pass a certification test to demonstrate that they have mastered both the OSN software and the scoring procedures used for the test(s) they will be scoring. Currently, personnel who evaluate examinee‐constructed responses for ETS‐administered testing programs must possess certain academic credentials. Recently, a study was conducted to investigate the circumstances under which current prerequisites for essay readers might be relaxed, provided readers are given sufficient training, without sacrificing the accuracy with which essay scores are awarded (Powers & Kubota, 1998).
The specific interest was in a particular class of potential readers, namely, those who were not currently involved in postsecondary teaching and who therefore did not meet current standards for ETS essay readers. A secondary objective of the study was to provide information that would help determine a passing score on an OSN reader certification test. To accomplish these objectives, experienced and inexperienced readers (30 of each kind) were recruited for the study. All study participants evaluated sets of essays both before and after they received standard training for scoring essays. The results showed that training did improve the accuracy with which readers (especially those who were previously inexperienced) scored essays. Moreover, after training, a significant proportion of inexperienced readers exhibited a level of accuracy that was commensurate with that shown by their more experienced counterparts.

These findings are consistent with a variety of other studies of the effects of reader training on essay evaluators. For example, readers can be trained to focus on (or to disregard) particular features of essays (Freedman, 1981; Powers, Fowles, Farnum, & Ramsey, 1994), and they can be encouraged to use agreed‐upon criteria instead of their own (Charney, 1984; Freedman & Calfee, 1983). Qualitative studies have demonstrated that training can alter the expectations of essay readers (Weigle, 1994) and that experienced readers behave differently than do inexperienced ones (Cumming, 1990).

A comparison of the performances of experienced and inexperienced readers suggested that a defensible passing score of about 90% accuracy could be set on the reader certification test when the standard of accuracy is agreement within one point of previously established essay scores.

The essays used in this study were of one particular kind that required writers to “discuss an issue.” This is one of the two kinds of prompts currently used in the Graduate Management Admission Test (GMAT) Writing Assessment and contemplated for use in the writing test being developed for the Graduate Record Examinations (GRE) program. The major objective of the current study was to extend, on a smaller scale, the results of the previous study to a second kind of writing prompt — “analysis of an argument” — which is used in the GMAT writing assessment and is being considered for the GRE writing test. A secondary objective was to explore the relationship between readers' accuracy in evaluating “argument” essays and their own ability to evaluate and analyze arguments.
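The accuracy standard referred to above, agreement within one point of a previously established essay score with a passing level of about 90%, is simple to compute for any individual reader. The short Python sketch below is an illustrative reconstruction of that calculation, not code from the study; the essay scores and the 1-6 holistic scale are invented for the example, and only the within-one-point criterion and the roughly 90% passing level follow the description in the text.

# Illustrative sketch (not from the report): computing exact and
# within-one-point agreement between one reader's scores and previously
# established reference scores. All score values below are hypothetical.

def agreement_rates(reader_scores, reference_scores, tolerance=1):
    """Return (exact agreement rate, within-`tolerance` agreement rate)."""
    if len(reader_scores) != len(reference_scores):
        raise ValueError("score lists must be the same length")
    pairs = list(zip(reader_scores, reference_scores))
    exact = sum(r == ref for r, ref in pairs) / len(pairs)
    within = sum(abs(r - ref) <= tolerance for r, ref in pairs) / len(pairs)
    return exact, within

# Hypothetical certification-test data: 20 essays on an assumed 1-6 holistic scale.
reference = [4, 3, 5, 2, 6, 4, 3, 5, 4, 2, 1, 6, 5, 3, 4, 4, 2, 5, 3, 6]
reader    = [4, 3, 4, 2, 6, 5, 3, 5, 4, 3, 1, 6, 5, 2, 4, 4, 2, 5, 3, 5]

exact, within_one = agreement_rates(reader, reference)
passes = within_one >= 0.90   # assumed passing standard: 90% within-one-point agreement
print(f"exact: {exact:.0%}, within one point: {within_one:.0%}, pass: {passes}")

Run on the invented scores above, this reader agrees exactly on 75% of the essays but falls within one point on all of them, so the reader would clear an assumed 90% within-one-point passing standard.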
