Open Access
AUTOMATED SCORING OF MATHEMATICS TASKS IN THE COMMON CORE ERA: ENHANCEMENTS TO M‐RATER IN SUPPORT OF CBAL ™ MATHEMATICS AND THE COMMON CORE ASSESSMENTS
Author(s) - James H. Fife
Publication year - 2013
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.2013.tb02333.x
The m‐rater scoring engine has been used successfully for the past several years to score CBAL ™ mathematics tasks, for the most part without the need for human scoring. During this time, various improvements to m‐rater and its scoring keys have been implemented in response to specific CBAL needs. In 2012, with the general move toward creating innovative tasks for the Common Core assessment initiatives, for traditional testing programs, and for potential outside clients, and to further support CBAL, m‐rater was enhanced in ways that advance ETS's automated scoring capabilities and provide needed functionality for CBAL: (a) the numeric equivalence scoring engine was augmented with an open‐source computer algebra system; (b) a design flaw in the graph editor, affecting the way the editor graphs smooth functions, was corrected; (c) the graph editor was modified to give assessment specialists the option of requiring examinees to set the viewing window; and (d) m‐rater advisories were implemented for situations in which m‐rater either cannot score a response or may produce the wrong score. In addition, two m‐rater scoring models were built that presented some new challenges.
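The abstract notes that the numeric equivalence scoring engine was augmented with an open-source computer algebra system. As a minimal sketch of the kind of check such a CAS-backed engine might perform, the following uses SymPy (an assumption; the report does not name the system used) to test whether an examinee's expression is algebraically equivalent to a scoring key:

```python
# Hedged sketch: symbolic-equivalence scoring with a CAS.
# SymPy and the function name `equivalent` are illustrative
# assumptions, not details from the report.
from sympy import simplify
from sympy.parsing.sympy_parser import parse_expr

def equivalent(response: str, key: str) -> bool:
    """Return True if the response expression simplifies to the key,
    i.e. their difference reduces symbolically to zero."""
    diff = simplify(parse_expr(response) - parse_expr(key))
    return diff == 0

# Two different surface forms of the same polynomial match:
print(equivalent("(x + 1)**2", "x**2 + 2*x + 1"))  # True
# A genuinely different expression does not:
print(equivalent("2*x + 1", "x**2 + 2*x + 1"))     # False
```

A check of this shape scores equivalence of form rather than string identity, which is what allows responses like `(x + 1)**2` and `x**2 + 2*x + 1` to receive the same credit without human review.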
