Meta-Analyses of Validity Studies Published Between 1964 and 1982 and the Investigation of Study Characteristics
Author(s) - Schmitt, Neal; Gooding, Richard Z.; Noe, Raymond A.; Kirsch, Michael
Publication year - 1984
Publication title - Personnel Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.076
H-Index - 142
eISSN - 1744-6570
pISSN - 0031-5826
DOI - 10.1111/j.1744-6570.1984.tb00519.x
Subject(s) - criterion validity, psychology, predictive validity, variance (accounting), incremental validity, concurrent validity, statistics, test validity, generalization, psychometrics, clinical psychology, construct validity, mathematics, accounting, internal consistency, business, mathematical analysis
Review and meta-analyses of published validation studies for the years 1964-1982 in the Journal of Applied Psychology and Personnel Psychology were undertaken to examine the effect of (1) research design; (2) criterion used; (3) type of selection instrument used; (4) occupational group studied; and (5) predictor-criterion combination on the level of observed validity coefficients. Results indicate that concurrent validation designs produce validity coefficients roughly equivalent to those obtained in predictive validation designs, and that both of these designs produce higher validity coefficients than does a predictive design which includes use of the selection instrument. Of the criteria examined, performance rating criteria generally produced lower validity coefficients than did other, more "objective" criteria. In comparing the validities of various types of predictors, it was found that cognitive ability tests were not superior to other predictors such as assessment centers, work samples, and supervisory/peer evaluations, as had been found in previous meta-analytic work. Personality measures were clearly less valid. Compared to previous validity generalization work, much unexplained variance in validity coefficients remained after corrections for differences in sample size. Finally, the studies reviewed were deficient for our purposes with respect to the data reported: selection ratios, standard deviations, reliabilities, and predictor and criterion intercorrelations were rarely and inconsistently reported. There are also many predictor-criterion relationships for which very few validation efforts have been undertaken.
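The "corrections for differences in sample size" mentioned above refer to the standard validity generalization step of subtracting the variance expected from sampling error alone from the observed variance of validity coefficients across studies. The sketch below is an illustration of that general procedure, not the authors' exact computation; the study values are hypothetical.

```python
# Illustrative sketch of a sampling-error correction in validity generalization
# (Schmidt-Hunter style). Not taken from the paper; inputs are hypothetical.

def residual_validity_variance(rs, ns):
    """Return (mean_r, observed_var, sampling_error_var, residual_var)."""
    total_n = sum(ns)
    # Sample-size-weighted mean validity coefficient.
    mean_r = sum(r * n for r, n in zip(rs, ns)) / total_n
    # Sample-size-weighted observed variance of the coefficients.
    observed_var = sum(n * (r - mean_r) ** 2 for r, n in zip(rs, ns)) / total_n
    # Variance expected from sampling error alone, given the average N.
    avg_n = total_n / len(ns)
    sampling_error_var = (1 - mean_r ** 2) ** 2 / (avg_n - 1)
    # Variance left unexplained after the sampling-error correction.
    residual_var = max(observed_var - sampling_error_var, 0.0)
    return mean_r, observed_var, sampling_error_var, residual_var

# Hypothetical example: five validation studies with differing sample sizes.
rs = [0.20, 0.35, 0.15, 0.28, 0.40]
ns = [60, 120, 45, 200, 90]
print(residual_validity_variance(rs, ns))
```

A large residual variance after this correction, as the abstract reports, suggests that sampling error alone does not account for the variability in observed validities.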