Determinants, Detection and Amelioration of Adverse Impact in Personnel Selection Procedures: Issues, Evidence and Lessons Learned
Author(s) -
Hough, Leatta M.;
Oswald, Frederick L.;
Ployhart, Robert E.
Publication year - 2001
Publication title -
International Journal of Selection and Assessment
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.812
H-Index - 61
eISSN - 1468-2389
pISSN - 0965-075X
DOI - 10.1111/1468-2389.00171
Subject(s) - psychology , personnel selection , construct validity , conceptualization , measurement equivalence , cognition , personality , applied psychology , social psychology , psychometrics , statistics
Mean subgroup (gender, ethnic/cultural, and age) differences are summarized across studies for several predictor domains – cognitive ability, personality and physical ability – at both broadly and more narrowly defined construct levels, with some surprising results. Research clearly indicates that the setting, the sample, the construct and the level of construct specificity can all, either individually or in combination, moderate the magnitude of differences between groups. Employers using tests in employment settings need to assess accurately the requirements of work. When the exact nature of the work is specified, the appropriate predictors may or may not have adverse impact against some groups. The possible causes and remedies for adverse impact (measurement method, culture, test coaching, test‐taker perceptions, stereotype threat and criterion conceptualization) are also summarized. Each of these factors can contribute to subgroup differences, and some appear to contribute significantly to subgroup differences on cognitive ability tests, where Black–White mean differences are most pronounced. Statistical methods for detecting differential prediction, test fairness and construct equivalence are described and evaluated, as are statistical/mathematical strategies for reducing adverse impact (test‐score banding and predictor/criterion weighting strategies).
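The article only names test-score banding as an adverse-impact reduction strategy without detailing it here. As a rough illustration of the classical approach (fixed, top-referenced banding based on the standard error of the difference, in the style of Cascio and colleagues), the sketch below treats scores within one band of the top score as statistically indistinguishable. The function names, score values, standard deviation, and reliability are hypothetical, chosen only for demonstration.

```python
import math

def band_width(sd, reliability, z=1.96):
    """Width of a fixed band: z * SED, where SED = sqrt(2) * SEM
    and SEM = sd * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1.0 - reliability)  # standard error of measurement
    return z * math.sqrt(2.0) * sem          # standard error of the difference, scaled by z

def top_down_band(scores, sd, reliability, z=1.96):
    """Return applicants whose scores fall within one band of the
    highest observed score (fixed, top-referenced band)."""
    width = band_width(sd, reliability, z)
    top = max(scores.values())
    return {name for name, s in scores.items() if s >= top - width}

# Hypothetical applicant scores; sd=10 and reliability=.90 give a band
# roughly 8.8 points wide, so 95, 91, and 88 fall in one band but 79 does not.
scores = {"A": 95, "B": 91, "C": 88, "D": 79}
print(sorted(top_down_band(scores, sd=10, reliability=0.90)))  # → ['A', 'B', 'C']
```

Within a band, secondary criteria (e.g., diversity-relevant or job-relevant factors other than the test score) can then guide selection, which is what gives banding its potential to reduce adverse impact relative to strict top-down selection.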