BEYOND ESSAY LENGTH: EVALUATING E‐RATER®'S PERFORMANCE ON TOEFL® ESSAYS
Author(s) -
Martin Chodorow,
Jill Burstein
Publication year - 2004
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.2004.tb01931.x
Subject(s) - vocabulary , test of english as a foreign language , diversity (vocabulary) , linguistics , computer science , statistics , mathematics education , english language , psychology
ABSTRACT This study examines the relation between essay length and holistic scores assigned to Test of English as a Foreign Language™ (TOEFL®) essays by e‐rater®, the automated essay scoring system developed by ETS. Results show that an early version of the system, e‐rater99, accounted for little variance in human reader scores beyond that which could be predicted by essay length. A later version of the system, e‐rater01, performs significantly better than its predecessor and is less dependent on length due to its greater reliance on measures of topical content and of complexity and diversity of vocabulary. Essay length was also examined as a possible explanation for differences in scores among examinees with native languages of Spanish, Arabic, and Japanese. Human readers and e‐rater01 show the same pattern of differences for these groups, even when effects of length are controlled.
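The abstract's core measurement is incremental variance: how much of the human score e-rater explains beyond what essay length alone predicts. As a minimal sketch of that idea (not the paper's actual analysis, features, or data), a hierarchical least-squares regression on synthetic scores can show the change in R² when a hypothetical vocabulary-diversity feature is added on top of length:

```python
# Illustrative only: synthetic data standing in for essays; the feature names
# and coefficients are assumptions, not values from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 500

length = rng.normal(300, 80, n)   # hypothetical essay word counts
vocab = rng.normal(0, 1, n)       # hypothetical vocabulary-diversity feature
# Synthetic "human score" influenced by both predictors plus noise.
score = 0.01 * length + 0.8 * vocab + rng.normal(0, 1, n)

def r_squared(predictors, y):
    """Proportion of variance in y explained by an OLS fit on the predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_length = r_squared([length], score)        # length alone
r2_full = r_squared([length, vocab], score)   # length + vocabulary feature
delta_r2 = r2_full - r2_length                # variance beyond essay length
```

On this construction, `delta_r2` is the quantity the study uses to compare the two e-rater versions: a system whose features add little beyond `r2_length` is effectively scoring length, while one with a substantial `delta_r2` is drawing on content and vocabulary measures.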
