EQUATING THE SCORES OF THE PRUEBA DE APTITUD ACADÉMICA™ AND THE SCHOLASTIC APTITUDE TEST®
Author(s) - William H. Angoff, Linda L. Cook
Publication year - 1988
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2330-8516.1988.tb00259.x
Subject(s) - equating, aptitude, standardized test, scale (ratio), Rasch model, psychology, mathematics education
The present study is a replication, in certain important respects, of an earlier study conducted by Angoff and Modu (1973) to develop algorithms for converting scores expressed on the College Board Scholastic Aptitude Test (SAT) scale to scores expressed on the College Board Prueba de Aptitud Académica (PAA) scale, and vice versa. Because the purpose and the design of the studies, though not all of the psychometric procedures, were identical in the two studies, the language of this report often duplicates that of the earlier study. The differences in procedure, however, are worth noting, and it is hoped that this study will contribute in substance and method to the solution of this important problem. The study described in this report was undertaken in an effort to establish score equivalences between two College Board tests: the Scholastic Aptitude Test (SAT) and its Spanish-language equivalent, the Prueba de Aptitud Académica (PAA). The method involved two phases: (1) the selection of test items equally appropriate and useful for English- and Spanish-speaking students for use as an anchor test in equating the two tests; and (2) the equating analysis itself. The first phase called for choosing a set of items in each of the two languages, translating each item into the other language, “back-translating” independently into the original language, and comparing the twice-translated versions with their originals. This process led to the adjustment of the translations in several instances and, in other instances, to the elimination of some items considered too difficult to be translated adequately. At this point both sets of “equivalent” items, each in its original language mode, were administered as pretests, chiefly to determine whether the two response functions for each item were sufficiently similar for the items to be considered equivalent.
On the basis of these analyses, two sets of items, one verbal and the other mathematical, were selected for use as anchor items for equating. These were administered again (in the appropriate language) at regularly scheduled administrations of the SAT and the PAA. An item response theory (IRT) model was used to equate the PAA to the SAT, with the anchor items serving as the link in the equating process. The equating itself showed definite curvilinear relationships in both the verbal and mathematical tests, indicating in this instance that both sections of the PAA are easier than the corresponding SAT sections. The results also showed good agreement between the current conversions and the 1973 Angoff-Modu conversions for the mathematical tests, but less close agreement for the verbal tests. The difference is speculatively attributed to improved methodology in the present study, especially for the more difficult verbal equating, and to possible scale drift in one or both tests over the 12 to 15 years since the earlier study.
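The abstract does not give the report's actual equating computations, but the general idea of IRT true-score equating it refers to can be illustrated with a minimal sketch: each test's characteristic curve is the sum of its items' response functions, so a number-correct score on one test is mapped to the ability level that produces it, and that ability level is then mapped to the expected score on the other test. The 3PL model, the item parameters, and all function names below are illustrative assumptions, not the study's actual items or procedure.

```python
import math

def p3pl(theta, a, b, c):
    # 3PL item response function: probability of a correct answer
    # at ability theta, with discrimination a, difficulty b, guessing c.
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def true_score(theta, items):
    # Test characteristic curve: expected number-correct score at theta.
    return sum(p3pl(theta, a, b, c) for (a, b, c) in items)

def theta_for_score(score, items, lo=-6.0, hi=6.0):
    # Invert the (monotone increasing) test characteristic curve by bisection.
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if true_score(mid, items) < score:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def irt_true_score_equate(score_x, items_x, items_y):
    # Map a true score on test X to the equivalent true score on test Y
    # via the common ability scale.
    theta = theta_for_score(score_x, items_x)
    return true_score(theta, items_y)

# Hypothetical (a, b, c) parameters on a common scale; the "easier" test
# has lower difficulties, mimicking the PAA/SAT finding in the abstract.
paa_items = [(1.0, -0.8, 0.2), (0.9, -0.3, 0.2), (1.1, 0.1, 0.2)]
sat_items = [(1.0, 0.2, 0.2), (0.9, 0.7, 0.2), (1.1, 1.1, 0.2)]

print(round(irt_true_score_equate(2.0, paa_items, sat_items), 3))
```

Because the same ability passes through two different (nonlinear) test characteristic curves, the resulting score conversion is curvilinear rather than a straight line, consistent with the curvilinear PAA-to-SAT relationships the study reports.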
