Training and Scoring Issues Involved in Large‐Scale Writing Assessments
Author(s) - Moon, Tonya R.; Hughes, Kevin R.
Publication year - 2002
Publication title - Educational Measurement: Issues and Practice
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.158
H-Index - 52
eISSN - 1745-3992
pISSN - 0731-1745
DOI - 10.1111/j.1745-3992.2002.tb00088.x
Subject(s) - writing assessment , scale (ratio) , psychology , applied psychology , mathematics education
Many states are implementing direct writing assessments to assess student achievement. While much literature has investigated minimizing raters' effects on writing scores, little attention has been given to the type of model used to prepare raters to score direct writing assessments. This study reports on an investigation conducted in a state‐mandated writing program after a scoring anomaly became apparent once the assessment was put into operation. The study indicates that using a spiral model for training raters and scoring papers results in higher mean ratings than does using a sequential model for training and scoring. Findings suggest that basing cut‐score decisions on pilot data has important implications for program implementation.