
A REVIEW OF AUTOMATICALLY SCORABLE CONSTRUCTED‐RESPONSE ITEM TYPES FOR LARGE‐SCALE ASSESSMENT
Author(s) -
Martinez, Michael E.,
Bennett, Randy Elliot
Publication year - 1992
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.1992.tb01473.x
Subject(s) - computer science , item response theory , computerized adaptive testing , psychometrics , artificial intelligence , natural language processing , machine learning , data science , architecture , mathematics , statistics
Abstract -
The use of automated scanning of test sheets, beginning in the 1930s, led to widespread use of the multiple‐choice format in standardized testing. New forms of automated scoring now hold out the possibility of making a wide range of constructed‐response item formats feasible for large‐scale use. We describe new developments in five domains: mathematical reasoning, algebra problem solving, computer science, architecture, and natural language. For each one, we describe the task as presented to the examinee, the methods used to score the response, and the psychometric properties of the item responses. We then highlight general challenges and issues spanning these technologies. We conclude by offering our views on the ways in which such technologies are likely to shape the future of testing.