COMPARISON OF THREE METHODS FOR ASSEMBLING APTITUDE TEST BATTERIES
Author(s) - Trattner, Marvin H.
Publication year - 1963
Publication title - Personnel Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.076
H-Index - 142
eISSN - 1744-6570
pISSN - 0031-5826
DOI - 10.1111/j.1744-6570.1963.tb01271.x
Subject(s) - weighting , aptitude , raw score , psychology , test , statistics , sample , selection , mathematics
Summary - A factored battery of thirteen aptitude tests was administered to samples of approximately 200 journeyman employees in each of twelve blue-collar job series. Performance ratings were obtained from the employees' first- and second-level supervisors. Three basic methods for selecting and weighting tests from the aptitude battery were compared. Tests were selected on the basis of results from one sample of employees in each job series and then applied to a second, independent sample to test the significance of the validity coefficients. The three test selection methods were:

(1) Wherry-Gaylord Integral Gross Score Weight Method, which selects and multiple-weights tests according to their intercorrelations and their correlations with a criterion. In addition, the tests selected and multiple-weighted by the Wherry-Gaylord method were also implicitly weighted by their standard deviations by simply adding raw scores.

(2) Civil Service Commission (CSC) Job Analysis Method, which equally weights tests selected for a job series according to the validity of the tests for measuring the rated important employee abilities.

(3) General Blue Collar Test Battery, which equally weights the five tests that yielded the highest average correlation with the criterion across all jobs on the test selection sample. This single battery was then used for all twelve job series.

The tests selected and the weights applied for the twelve jobs under each selection method were reported, and the validity coefficients obtained by applying the resulting patterns to the cross-validation samples were tabulated. The criterion ratings yielded an average Spearman-Brown estimated product-moment reliability of .675. All but four of the 36 cross-validation coefficients were significant at least at the .05 level. Most of the differences among the cross-validity coefficients were attributable to the jobs to which the three weighting methods were applied rather than to the weighting methods themselves. A comparison of the Wherry-Gaylord multiple-weight formulas with the same tests unit-weighted on the cross-validation sample also showed that unit weighting was as effective as multiple weighting. It appears that, for the number of subjects, job series, and test selection and weighting methods used in these studies, one test selection method is as effective as another. The implications of these results are discussed.
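The sketch below is a hypothetical illustration of the statistical machinery the abstract describes, run on simulated data rather than the article's data: the sample sizes, correlations, function names (simulate_sample, spearman_brown), and the exact unit-weighting convention (standardizing before summing) are all assumptions made for illustration. It shows the Spearman-Brown step-up used to estimate the reliability of a two-rater criterion, selection of the five most valid tests on one sample, and the comparison of regression (multiple) weights with unit weights on an independent cross-validation sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical data: 13 aptitude tests and a rating criterion for one job series ---
n_tests = 13
n = 200  # roughly the per-sample size reported in the study

def simulate_sample(n, n_tests, rng):
    """Simulate mildly intercorrelated test scores and a supervisor-rating criterion."""
    factor = rng.normal(size=(n, 1))                      # one common factor
    tests = 0.6 * factor + rng.normal(size=(n, n_tests))  # tests share the factor plus noise
    criterion = 0.5 * factor[:, 0] + rng.normal(size=n)   # criterion also loads on the factor
    return tests, criterion

selection_tests, selection_crit = simulate_sample(n, n_tests, rng)
holdout_tests, holdout_crit = simulate_sample(n, n_tests, rng)

# --- Spearman-Brown estimate of criterion reliability from two raters ---
def spearman_brown(r_single, k=2):
    """Step up a single-rater correlation to the reliability of a k-rater composite."""
    return k * r_single / (1 + (k - 1) * r_single)

# e.g. if first- and second-level supervisor ratings correlated about .51:
print("Spearman-Brown reliability:", round(spearman_brown(0.51), 3))

# --- Select the five tests most correlated with the criterion on the selection sample ---
validities = np.array([np.corrcoef(selection_tests[:, j], selection_crit)[0, 1]
                       for j in range(n_tests)])
selected = np.argsort(-validities)[:5]

# --- Multiple (regression) weights vs. unit weights, scored on the hold-out sample ---
X_sel = selection_tests[:, selected]
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X_sel]),
                           selection_crit, rcond=None)

X_hold = holdout_tests[:, selected]
multiple_score = np.column_stack([np.ones(n), X_hold]) @ beta

# Unit weighting: standardize each selected test, then simply add the scores.
z = (X_hold - X_hold.mean(axis=0)) / X_hold.std(axis=0)
unit_score = z.sum(axis=1)

print("cross-validity, multiple weights:",
      round(np.corrcoef(multiple_score, holdout_crit)[0, 1], 3))
print("cross-validity, unit weights:    ",
      round(np.corrcoef(unit_score, holdout_crit)[0, 1], 3))
```

On data of this kind the two composites typically cross-validate about equally well, which is the pattern the abstract reports when comparing the Wherry-Gaylord multiple weights with unit weighting of the same tests.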
