A critique and standardization of meta‐analytic validity coefficients in personnel selection
Author(s) - Eran Hermelin, Ivan T. Robertson
Publication year - 2001
Publication title - Journal of Occupational and Organizational Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.257
H-Index - 114
eISSN - 2044-8325
pISSN - 0963-1798
DOI - 10.1348/096317901167352
Subject(s) - comparability , psychology , personnel selection , statistics , criterion validity , meta analysis , standardization , variance , econometrics , applied psychology , psychometrics , construct validity
In personnel selection, variations in the interpretation and application of the Hunter and Schmidt meta‐analytic procedures often preclude a meaningful comparison between validity coefficients estimated by different meta‐analyses. In order to increase the comparability and accuracy of these coefficients, a standardized set of procedures and cumulated artifact distributions were used to correct 20 widely reported meta‐analytic validity coefficients estimated for six personnel selection methods (using job performance as the criterion). Structured interviews and cognitive ability tests demonstrated the highest operational validity. Seventeen of the 20 coefficients were found to be affected by moderators according to the Hunter and Schmidt 75% rule. On average, around 50% of variance in the meta‐analytic coefficients was explained by the correctable experimental artifacts of sampling error, direct range restriction in the predictor variable, and criterion unreliability. Limitations of this exercise and its implications for future research and practice are discussed.
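The artifact corrections named in the abstract (direct range restriction in the predictor, criterion unreliability) and the Hunter and Schmidt 75% rule can be sketched numerically. The following is a minimal illustration, not the authors' procedure: the input values (`r_obs`, `u`, `ryy`, and the numbers in the 75%-rule check) are hypothetical, and the order in which the two corrections are applied is one common convention.

```python
import math

def correct_range_restriction(r, u):
    """Correct a validity coefficient for direct range restriction
    in the predictor (Thorndike Case II).
    u = SD(unrestricted applicant pool) / SD(restricted sample), u >= 1."""
    return (u * r) / math.sqrt(1.0 + r * r * (u * u - 1.0))

def correct_criterion_unreliability(r, ryy):
    """Disattenuate a validity coefficient for unreliability in the
    criterion (here, job performance ratings with reliability ryy)."""
    return r / math.sqrt(ryy)

def sampling_error_variance_ratio(r_bar, var_r, n_bar):
    """Hunter-Schmidt 75% rule: the share of observed between-study
    variance attributable to sampling error alone. If the ratio is
    below 0.75, moderators are inferred to be present."""
    var_e = (1.0 - r_bar ** 2) ** 2 / (n_bar - 1.0)
    return var_e / var_r

# Illustrative (hypothetical) values -- not taken from the paper:
r_obs = 0.25   # mean observed validity
u = 1.5        # range restriction ratio
ryy = 0.52     # criterion reliability
rho = correct_range_restriction(
    correct_criterion_unreliability(r_obs, ryy), u)

# 75% rule on a hypothetical distribution of 20 coefficients:
ratio = sampling_error_variance_ratio(r_bar=0.25, var_r=0.02, n_bar=100)
moderators_present = ratio < 0.75
```

With these made-up inputs the corrected operational validity `rho` comes out around 0.48, and the variance ratio falls below 0.75, which under the rule would flag moderators, mirroring the pattern the abstract reports for 17 of the 20 coefficients.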
