Development of Automated Scoring Algorithms for Complex Performance Assessments: A Comparison of Two Approaches
Author(s) - Clauser Brian E., Margolis Melissa J., Clyman Stephen G., Ross Linette P.
Publication year - 1997
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.1997.tb00511.x
Subject(s) - computer science, machine learning, artificial intelligence, data mining, test, scale (ratio)
Performance assessments are typically scored by having experts rate individual performances. The cost associated with using expert raters may represent a serious limitation in many large‐scale testing programs, and the use of raters may also introduce an additional source of error into the assessment. These limitations have motivated the development of automated scoring systems for performance assessments. Preliminary research has shown these systems to have application across a variety of tasks, ranging from simple mathematics to architectural problem solving. This study extends research on automated scoring by comparing alternative automated systems for scoring a computer simulation test of physicians' patient management skills: one system uses regression‐derived weights for components of the performance; the other uses complex rules to map performances into score levels. The procedures are evaluated by comparing the resulting scores to expert ratings of the same performances.
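
The contrast between the two systems can be sketched in code. The Python fragment below is a minimal illustration only, assuming invented component names (beneficial, intrusive, risky), placeholder regression weights, and made-up rule thresholds; the study's actual scoring components, weights, and rules are not reported in this abstract.

```python
# Hypothetical sketch of the two automated scoring approaches described
# in the abstract. All names, weights, and thresholds are invented for
# illustration; they are not the study's actual values.

from typing import Dict

# A performance is summarized as counts of scoreable actions, e.g.
# beneficial vs. risky patient-management steps in the simulation.
performance = {"beneficial": 7, "intrusive": 1, "risky": 0}

# Approach 1: regression-derived weights. In the study's framing, weights
# would be estimated by regressing expert ratings on the performance
# components; these values are placeholders.
weights = {"beneficial": 0.9, "intrusive": -0.4, "risky": -1.5}
intercept = 1.0

def regression_score(p: Dict[str, int]) -> float:
    """Linear composite: intercept plus weighted component counts."""
    return intercept + sum(weights[k] * v for k, v in p.items())

# Approach 2: rule-based mapping. If-then rules assign each performance
# to an ordinal score level (again, invented thresholds).
def rule_based_score(p: Dict[str, int]) -> int:
    if p["risky"] > 0:
        return 1  # any risky action caps the score at the lowest level
    if p["beneficial"] >= 6 and p["intrusive"] <= 1:
        return 4  # thorough and efficient management
    if p["beneficial"] >= 4:
        return 3
    return 2

print(regression_score(performance))  # continuous score: 6.9
print(rule_based_score(performance))  # ordinal level: 4
```

Under the evaluation design the abstract describes, scores from either approach would then be compared against expert ratings of the same performances, for example by correlating each set of automated scores with the ratings.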
