Open Access
Automated Trait Scores for TOEFL® Writing Tasks
Author(s) - Yigal Attali, Sandip Sinharay
Publication year - 2015
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12061
Subject(s) - trait , interpretability , psychology , raw score , fluency , reliability (semiconductor) , task (project management) , psychometrics , cognitive psychology , statistics , natural language processing , developmental psychology , computer science , artificial intelligence , mathematics , mathematics education , power (physics) , physics , raw data , management , quantum mechanics , economics , programming language
The e-rater® automated essay scoring system is used operationally in the scoring of TOEFL iBT® independent and integrated tasks. In this study, we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score. The four trait scores are word choice, grammatical conventions, fluency and organization, and content. Trait scores were computed on the basis of several criteria for determining feature weights: regression parameters of the trait features on human scores, reliability of the trait features, and coefficients of features from a principal component analysis. In addition, augmented trait scores, which incorporate information from the other trait scores, were also analyzed. The psychometric added value of trait scores beyond total e-rater scores was evaluated by comparing the ability to predict a particular trait score on one task from the same trait score on the other task versus from the e-rater score on the other task. Results supported the use of trait scores and are discussed in terms of their contribution to the construct validity of e-rater as an alternative essay scoring method.
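To make the scoring logic in the abstract concrete, the sketch below illustrates the general idea in Python: a trait score formed as a weighted sum of trait features, with weights taken either from a regression of human scores on the features or from the first principal component, followed by the cross-task comparison the study uses to gauge added value. All feature names, data, and weight values here are synthetic assumptions for illustration; this is not the operational e-rater implementation or the study's actual feature set.

```python
# Illustrative sketch only: trait scores as weighted sums of (synthetic) features,
# with regression- and PCA-based weights, and a cross-task correlation comparison.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic features for one hypothetical trait (e.g., "fluency and organization")
# on two tasks, plus synthetic human trait scores on task 1 for weight estimation.
n = 500
feats_task1 = rng.normal(size=(n, 4))                                  # trait features, task 1
feats_task2 = 0.6 * feats_task1 + rng.normal(scale=0.8, size=(n, 4))   # trait features, task 2
human_trait1 = feats_task1 @ np.array([0.5, 0.3, 0.1, 0.1]) + rng.normal(scale=0.5, size=n)

# (a) Regression-based weights: regress human trait scores on the trait features.
w_reg = LinearRegression().fit(feats_task1, human_trait1).coef_

# (b) PCA-based weights: loadings of the first principal component of the features.
w_pca = PCA(n_components=1).fit(feats_task1).components_[0]

def trait_score(features, weights):
    """Trait score as a weighted sum of standardized features."""
    z = (features - features.mean(axis=0)) / features.std(axis=0)
    return z @ weights

trait1 = trait_score(feats_task1, w_reg)
trait2 = trait_score(feats_task2, w_reg)
total2 = feats_task2.sum(axis=1)   # crude stand-in for a total score on task 2

# Added value: does the same trait score on the other task predict the trait
# better than the total score on the other task does?
r_trait = np.corrcoef(trait1, trait2)[0, 1]
r_total = np.corrcoef(trait1, total2)[0, 1]
print(f"cross-task trait-trait correlation: {r_trait:.3f}")
print(f"cross-task trait-total correlation: {r_total:.3f}")
```

In this framing, a larger trait-trait correlation than trait-total correlation across tasks would indicate that the trait score carries reliable information beyond the total score, which is the kind of evidence the study reports in support of the trait scores.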