
TOWARD A COGNITIVE BASIS FOR QUANTITATIVE ABILITY MEASURES
Author(s) -
Sebrechts Marc M.,
Enright Mary,
Bennett Randy Elliot,
Martin Kathleen
Publication year - 1993
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.1993.tb01533.x
Subject(s) - algebra word problems , quantitative reasoning , construct validity , regression analysis , mathematics
The construct validity of algebra word problems for measuring quantitative reasoning was examined from two perspectives, one focusing on an analysis of problem attributes and the other on the analysis of constructed‐response solutions. Twenty problems that had appeared on the Graduate Record Examinations General Test were investigated. Constructed‐response solutions to these problems were collected from 51 undergraduates. Regression analyses of problem attributes indicated that models including factors such as the need to apply algebraic concepts, problem complexity, and problem content could account for 37% to 62% of the variance in problem difficulty. With respect to constructed‐response solutions, four classes of strategies were identified: equation formulation, ratio setup, simulation, and other (unsystematic) approaches. Higher‐achieving examinees used equation strategies more, and unsystematic approaches less, than lower‐achieving examinees. Examinees' errors were classified into eight principal categories. Problem conception errors were the best predictor of performance on the constructed‐response problems and on SAT‐M. In contrast, procedural errors contributed to the prediction of performance on the constructed‐response problems but not to standing on SAT‐M. Overall, these results provide support for the construct validity of GRE algebra word problems and of SAT‐M as measures of quantitative reasoning. A preliminary theoretical framework for describing performance on algebra word problems is proposed, and its usefulness for more systematic test design is discussed.
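The regression analysis described above can be illustrated with a minimal sketch: regress item difficulty on coded problem attributes and report the proportion of variance accounted for (R²). All data and attribute codings below are synthetic assumptions for illustration only; they are not the study's actual items, codings, or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 20  # the study examined 20 GRE algebra word problems

# Hypothetical coded attributes: whether algebraic concepts are needed
# (0/1), a complexity rating (1-3), and a content-category indicator (0/1).
algebraic = rng.integers(0, 2, n_items)
complexity = rng.integers(1, 4, n_items)
content = rng.integers(0, 2, n_items)

# Synthetic difficulty (e.g., proportion answering incorrectly), driven
# partly by the attributes plus noise.
difficulty = (0.2 + 0.15 * algebraic + 0.10 * complexity
              + 0.05 * content + rng.normal(0, 0.05, n_items))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n_items), algebraic, complexity, content])
beta, *_ = np.linalg.lstsq(X, difficulty, rcond=None)

# R^2: share of variance in difficulty accounted for by the attribute model.
resid = difficulty - X @ beta
r_squared = 1 - resid.var() / difficulty.var()
print(f"R^2 = {r_squared:.2f}")
```

In the study, models of this general form accounted for 37% to 62% of the variance in difficulty; here the R² value reflects only the synthetic data generated above.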