
COMPARABILITY OF COMPUTER AND PAPER‐AND‐PENCIL SCORES FOR TWO CLEP® GENERAL EXAMINATIONS
Author(s) -
Mazzeo, John,
Druesne, Barry,
Raffeld, Paul C.,
Checketts, Keith T.,
Muhlstein, Alan
Publication year - 1992
Publication title -
ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/j.2333-8504.1992.tb01446.x
Subject(s) - comparability , equating , mode of administration , mathematics education , composition (language) , computer science , psychology , mathematics , statistics
This report describes two studies that investigated the comparability of scores from paper‐and‐pencil and computer‐administered versions of the College‐Level Examination Program (CLEP) General Examinations in Mathematics and English Composition. The first study used a prototype computer‐administered version of each examination. Based on the results of the first study and feedback from the study participants, several modifications were made to these prototype versions. A second study was then conducted using the modified computer versions. Both studies used a single‐group counterbalanced equating design. Data for the Mathematics Examination were collected at Southwest Texas State University, and data for the English Composition Examination were collected at Utah State University. The results of Study 1 suggest that, despite efforts to design computer versions of the CLEP Mathematics and English Composition General Examinations that were administratively similar to the paper‐and‐pencil examinations (i.e., allowed item review and answer changing and were comparably timed), mode‐of‐administration effects (i.e., changes in average scores as a function of the mode of test delivery) were found. The results of Study 2 suggest that the modifications made to the computer versions eliminated the mode‐of‐administration effects for the English Composition Examination but not for the Mathematics Examination. The results of both studies underscore the need to determine empirically, rather than simply assume, the equivalence of computer and paper versions of an examination.
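The abstract does not give the analytic details, but a single‐group counterbalanced equating design generally has every examinee take both versions, with administration order balanced across two groups so that practice and fatigue effects cancel in the pooled comparison. The following Python sketch is an illustration of that general idea only, not the authors' actual analysis; the data are simulated, and the paired t‐test and mean‐sigma linear equating step are assumed for the example.

```python
# Minimal sketch (illustrative only) of estimating a mode-of-administration effect
# from a single-group counterbalanced design: every examinee takes both the
# computer (C) and paper-and-pencil (P) versions, with order counterbalanced
# across two groups (C-then-P and P-then-C). All data here are simulated.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical scores for 100 examinees per order group.
paper_cp = rng.normal(500, 100, 100)               # group 1: computer first, then paper
computer_cp = paper_cp + rng.normal(-8, 30, 100)   # assumed small mode effect

paper_pc = rng.normal(500, 100, 100)               # group 2: paper first, then computer
computer_pc = paper_pc + rng.normal(-8, 30, 100)

# Pool both order groups: counterbalancing lets order effects cancel in the mean.
diff = np.concatenate([computer_cp - paper_cp, computer_pc - paper_pc])

# Paired comparison of within-examinee differences (computer minus paper).
t, p = stats.ttest_1samp(diff, 0.0)
print(f"mean mode effect = {diff.mean():.1f} points, t = {t:.2f}, p = {p:.3f}")

# One common adjustment when a mode effect is found (assumed here, not taken
# from the report): mean-sigma linear equating of computer scores onto the
# paper-and-pencil scale.
all_computer = np.concatenate([computer_cp, computer_pc])
all_paper = np.concatenate([paper_cp, paper_pc])
slope = all_paper.std(ddof=1) / all_computer.std(ddof=1)
intercept = all_paper.mean() - slope * all_computer.mean()
print(f"equating: paper-scale score = {slope:.3f} * computer score + {intercept:.1f}")
```

Under these assumptions, a nonzero pooled mean difference indicates a mode‐of‐administration effect of the kind the report describes; the equating line is just one way such an effect could be adjusted for if scores from the two modes must be reported on a common scale.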