The Variability of Criterion‐Related Validity Estimates Among Interviewers and Interview Panels
Author(s) -
Van Iddekinge Chad H.,
Sager Christopher E.,
Burnfield Jennifer L.,
Heffner Tonia S.
Publication year - 2006
Publication title -
International Journal of Selection and Assessment
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.812
H-Index - 61
eISSN - 1468-2389
pISSN - 0965-075X
DOI - 10.1111/j.1468-2389.2006.00352.x
Subject(s) - interview , psychology , criterion validity , variance (accounting) , concurrent validity , statistics , perspective (graphical) , social psychology , applied psychology , clinical psychology , psychometrics , construct validity , mathematics , computer science , artificial intelligence , business , accounting , internal consistency , political science , law
The authors examined differences in criterion‐related validity estimates among ratings from individual interviewers and interview panels within a structured interview. Senior non‐commissioned officers (NCOs) in the U.S. Army (N = 64) conducted panel interviews with 944 junior NCOs during a concurrent validation project. Analysis of the data revealed considerable variation in interviewer validity coefficients across multiple performance criteria. Results also indicated the importance of adopting a multivariate perspective when evaluating interviewer validity differences, in that the amount of variation in validity coefficients differed by both interview dimension and criterion. A similar pattern of findings emerged when analyses were performed on ratings averaged within interview panels. Nonetheless, when meta‐analysis was used to estimate the amount of true variance in interviewer‐ and panel‐level validity coefficients, most or all of the variance for some interview‐criterion combinations appeared to be attributable to statistical artifacts.
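To make the final step concrete, the sketch below shows one common way such a variance decomposition can be carried out: a bare‐bones, Hunter–Schmidt‐style psychometric meta‐analysis that subtracts the variance expected from sampling error alone from the observed variance of the coefficients. This is a minimal illustration of the general technique, not a reproduction of the authors' exact procedure, and the per‐interviewer validities and sample sizes in the example are invented for demonstration.

```python
import numpy as np


def bare_bones_meta(rs, ns):
    """Bare-bones psychometric meta-analysis of validity coefficients.

    rs : observed validity coefficients (one per interviewer or panel)
    ns : number of interviewees behind each coefficient
    Returns the weighted mean r, the residual ("true") variance after
    removing sampling-error variance, and the percentage of observed
    variance attributable to sampling error.
    """
    rs = np.asarray(rs, dtype=float)
    ns = np.asarray(ns, dtype=float)

    # Sample-size-weighted mean correlation
    r_bar = np.sum(ns * rs) / np.sum(ns)

    # Observed (weighted) variance of the coefficients
    var_obs = np.sum(ns * (rs - r_bar) ** 2) / np.sum(ns)

    # Variance expected from sampling error alone
    var_error = (1.0 - r_bar ** 2) ** 2 / (np.mean(ns) - 1.0)

    # Residual variance interpreted as "true" variability across raters
    var_true = max(var_obs - var_error, 0.0)
    pct_artifact = 100.0 * var_error / var_obs if var_obs > 0 else 100.0
    return r_bar, var_true, pct_artifact


# Hypothetical per-interviewer validities and sample sizes (illustrative only)
rs = [0.10, 0.25, 0.18, 0.05, 0.30, 0.22]
ns = [15, 14, 16, 15, 14, 15]
r_bar, var_true, pct = bare_bones_meta(rs, ns)
print(f"mean r = {r_bar:.3f}, true variance = {var_true:.4f}, "
      f"% variance due to sampling error = {pct:.1f}%")
```

When the percentage attributable to sampling error approaches 100%, the apparent differences among interviewers' validity coefficients may reflect statistical artifacts rather than genuine differences in interviewer quality, which is the interpretation offered for some interview–criterion combinations in the abstract.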