
EVALUATING AN UNDERDETERMINED RESPONSE TYPE FOR THE COMPUTERIZED SAT
Author(s) -
Bennett, Randy Elliot;
Morley, Mary;
Quardt, Dennis;
Rock, Donald A.;
Katz, Irvin R.
Publication year - 1999
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.1999.tb01820.x
Subject(s) - underdetermined system, psychology, cognition, cognitive psychology, mathematics education, mathematics, statistics, artificial intelligence, computer science
We evaluated a machine‐scorable, computer‐delivered response type for measuring quantitative reasoning skill. “Generating Examples” (GE) is built around items that present constraints and ask candidates to give one or more answers that meet those constraints. These items are attractive because, like many real‐world problems, GE items can have multiple correct answers. In addition, they appear to tap cognitive processes somewhat distinct from those measured by conventional quantitative questions. Nine GE forms were spiraled among a sample of academically precocious youth taking the Computerized SAT in association with a national talent search program. The forms differed in item manipulations designed to affect difficulty and in the expected time per item needed for solution. Results showed that across item lengths, the insertion of certain constraints increased difficulty. In addition, after correcting for attenuation, GE items similar in time requirements to SAT Mathematical items correlated in the mid‐.80s to mid‐.90s with SAT Mathematical scores, indicating that GE items might fit reasonably well with the SAT.
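To make the machine scorability of a constraint-based item concrete, the sketch below shows one way such an item could be scored automatically. The item text, the constraints, and the scoring rule are illustrative assumptions for this note, not the actual GE materials from the study; the point is only that an item with infinitely many correct answers can still be scored deterministically by verifying the constraints.

```python
from fractions import Fraction
from typing import Callable, List

# A constraint is any predicate over a candidate's numeric answer.
Constraint = Callable[[Fraction], bool]

def score_ge_response(answers: List[Fraction],
                      constraints: List[Constraint],
                      required: int) -> int:
    """Score 1 if at least `required` distinct answers satisfy every
    constraint, else 0. Any value meeting the constraints is acceptable,
    so the item has many correct answers but remains machine-scorable."""
    valid = {a for a in answers if all(c(a) for c in constraints)}
    return int(len(valid) >= required)

# Illustrative item: "Give two different numbers x with x > 2 and x^2 < 10."
constraints = [lambda x: x > 2, lambda x: x * x < 10]

print(score_ge_response([Fraction(5, 2), Fraction(3)], constraints, required=2))  # 1
print(score_ge_response([Fraction(4)], constraints, required=2))                  # 0
```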
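"Spiraled" here refers to the standard form-distribution design in which forms are handed out cyclically in testing order, so each form reaches a randomly equivalent subsample. A minimal sketch of that assignment rule, with illustrative names:

```python
def spiral_assign(position: int, n_forms: int = 9) -> int:
    """Form index (0..n_forms-1) for the examinee at this position
    in the testing sequence; forms repeat in a fixed cycle."""
    return position % n_forms

# Examinees 0..8 receive forms 0..8; examinee 9 cycles back to form 0.
assert [spiral_assign(i) for i in range(10)] == [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]
```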
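"Correcting for attenuation" presumably refers to Spearman's classical correction, which divides the observed correlation by the geometric mean of the two score reliabilities; a standard statement of that correction (assumed here, not quoted from the report) is

\[ \hat{\rho}_{XY} = \frac{r_{XY}}{\sqrt{r_{XX'}\, r_{YY'}}} \]

where \(r_{XY}\) is the observed correlation between GE and SAT Mathematical scores and \(r_{XX'}\), \(r_{YY'}\) are the reliabilities of the two measures. Because measurement error shrinks observed correlations, this correction scales them up toward the latent true-score correlation, which is how the reported values can reach the mid-.80s to mid-.90s.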