Psychometric and Cognitive Functioning of an Under‐Determined Computer‐Based Response Type for Quantitative Reasoning
Author(s) -
Bennett, Randy Elliot,
Morley, Mary,
Quardt, Dennis,
Rock, Donald A.,
Singley, Mark K.,
Katz, Irvin R.,
Nhouyvanisvong, Adisack
Publication year - 1999
Publication title -
Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/j.1745-3984.1999.tb00556.x
Subject(s) - psychology , affect (linguistics) , consistency (knowledge bases) , reliability (semiconductor) , item response theory , internal consistency , cognition , perception , test (biology) , cognitive skill , cognitive psychology , psychometrics , social psychology , developmental psychology , artificial intelligence , computer science , communication , paleontology , power (physics) , physics , quantum mechanics , neuroscience , biology
We evaluated a computer‐delivered response type for measuring quantitative skill. “Generating Examples” (GE) presents under‐determined problems that can have many right answers. We administered two GE tests that differed in the manipulation of specific item features hypothesized to affect difficulty. Analyses addressed internal consistency reliability, external relations, the features contributing to item difficulty, adverse impact, and examinee perceptions. Results showed that GE scores were reasonably reliable but only moderately related to the GRE quantitative section, suggesting that the two tests may tap somewhat different skills. Item features that increased difficulty included asking examinees to supply more than one correct answer and to identify whether an item was solvable. Gender differences were similar to those found on the GRE quantitative and analytical test sections. Finally, examinees were divided on whether GE items were a fairer indicator of ability than multiple‐choice items, but still overwhelmingly preferred to take the more conventional questions.
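The abstract refers to internal consistency reliability and to a moderate relation between GE scores and the GRE quantitative section. The sketch below illustrates, in general terms, how such statistics are commonly computed (Cronbach's alpha for internal consistency and a Pearson correlation for the external relation). It is a minimal illustration with made-up data; the variable names and values are hypothetical and do not reproduce the authors' analyses or results.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency estimate for an examinee-by-item score matrix.

    items: 2-D array, rows = examinees, columns = item scores.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: 5 examinees x 4 GE items scored 0/1, plus GRE-Q scores.
ge_items = np.array([
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
])
gre_q = np.array([640, 520, 760, 600, 690])

alpha = cronbach_alpha(ge_items)
ge_total = ge_items.sum(axis=1)
# Pearson correlation between GE total scores and GRE quantitative scores.
r = np.corrcoef(ge_total, gre_q)[0, 1]
print(f"alpha = {alpha:.2f}, r(GE, GRE-Q) = {r:.2f}")
```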