
A TASK TYPE FOR MEASURING THE REPRESENTATIONAL COMPONENT OF QUANTITATIVE PROFICIENCY
Author(s) -
Bennett, Randy Elliot,
Sebrechts, Marc M.,
Rock, Donald A.
Publication year - 1995
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.1995.tb01654.x
Two computer‐based categorization tasks were developed and pilot tested. For Study I, the task asked examinees to sort mathematical word‐problem stems according to prototypes. Results showed that those who sorted well tended to have higher GRE General Test scores and college grades than examinees who sorted less proficiently. Examinees generally preferred this task to multiple‐choice items like those found on the General Test's quantitative section and felt the task was a fairer measure of their ability to succeed in graduate school. For Study II, the task involved rating the similarity of item pairs. Both mathematics test developers and students participated, and the results were analyzed by individual‐differences multidimensional scaling. Experts produced more scalable ratings overall and attended primarily to two dimensions. Students used the same two dimensions with the addition of a third. Students whose ratings resembled the experts' in the dimensions used tended to have higher admissions test scores than those who applied other criteria. Finally, examinees preferred multiple‐choice questions to the rating task and felt the former was a fairer indicator of their scholastic abilities. The major implication of this work is the identification of a new task type for admissions tests, as well as for instructional assessment products that might help lower‐scoring examinees localize and remediate problem‐solving difficulties.