Open Access
EVALUATING A PROTOTYPE ESSAY SCORING PROCEDURE USING OFF‐THE‐SHELF SOFTWARE
Author(s) - Randy M. Kaplan, Jill Burstein, Harriet Trenholm, Chi Lu, Donald Rock, Bruce Kaplan, Susanne Wolff
Publication year - 1995
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.1995.tb01656.x
Subject(s) - computer science, grammar, natural language processing, artificial intelligence, software, machine learning, linguistics, programming language, philosophy
Constructed-response items, whose responses consist of words, phrases, sentences, paragraphs, or essays, are among the most difficult and costly to score. The increased use of constructed-response items such as essays creates a need for tools that partially or fully automate the scoring of these responses. This study explores one approach to analyzing essay-length natural language constructed responses: we develop and evaluate a decision model for scoring essays that relies on off-the-shelf grammar and style checking software for English. The first part of the study evaluates several commercial grammar checking programs; from this evaluation, we select the best-performing program(s) to construct a decision model for scoring essays. The second part of the study uses the output of the selected grammar checking program(s) to assign a score to each essay. Using statistical and linguistic methods, we analyze the performance of the decision model to assess its usefulness and practicality in a production scoring setting.
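The abstract does not specify the exact form of the decision model, only that it maps grammar-checker output to an essay score. The sketch below is a minimal illustration of that general idea, assuming a hypothetical checker report with grammar-error and style-flag counts; the feature names, normalization, and score thresholds are illustrative assumptions, not values from the study.

# Minimal sketch: map hypothetical grammar-checker output to a holistic score.
# All feature names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CheckerReport:
    """Counts a commercial grammar/style checker might emit for one essay."""
    grammar_errors: int   # e.g., agreement, tense, sentence fragments
    style_flags: int      # e.g., passive voice, wordiness
    word_count: int

def score_essay(report: CheckerReport) -> int:
    """Assign a 1-6 holistic score from checker output (illustrative rule only)."""
    if report.word_count == 0:
        return 1
    # Normalize error counts per 100 words so longer essays are not over-penalized.
    errors_per_100 = 100.0 * (report.grammar_errors + 0.5 * report.style_flags) / report.word_count
    # Simple banded decision rule: fewer normalized errors -> higher score.
    if errors_per_100 < 1:
        return 6
    elif errors_per_100 < 2:
        return 5
    elif errors_per_100 < 4:
        return 4
    elif errors_per_100 < 7:
        return 3
    elif errors_per_100 < 10:
        return 2
    return 1

if __name__ == "__main__":
    sample = CheckerReport(grammar_errors=6, style_flags=4, word_count=320)
    print(score_essay(sample))  # 2.5 normalized errors -> score of 4 under these thresholds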
