INTERRATER CORRELATIONS DO NOT ESTIMATE THE RELIABILITY OF JOB PERFORMANCE RATINGS
Author(s) - Murphy, Kevin R.; DeShon, Richard
Publication year - 2000
Publication title - Personnel Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.076
H-Index - 142
eISSN - 1744-6570
pISSN - 0031-5826
DOI - 10.1111/j.1744-6570.2000.tb02421.x
Subject(s) - interrater reliability , generalizability theory , reliability (statistics) , variance , common method variance , rating scale , psychology , social psychology , statistics
Abstract - Interrater correlations are widely interpreted as estimates of the reliability of supervisory performance ratings, and are frequently used to correct the correlations between ratings and other measures (e.g., test scores) for attenuation. These interrater correlations do provide some useful information, but they are not reliability coefficients. There is clear evidence of systematic rater effects in performance appraisal, and variance associated with raters is not a source of random measurement error. We use generalizability theory to show why rater variance is not properly interpreted as measurement error, and show how such systematic rater effects can influence both reliability estimates and validity coefficients. We show conditions under which interrater correlations can either overestimate or underestimate reliability coefficients, and discuss reasons other than random measurement error for low interrater correlations.
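To make the abstract's argument concrete, the following is a minimal simulation sketch, not taken from the paper, of the generalizability-theory decomposition it invokes; all variance-component values are hypothetical. Ratings are generated as ratee + rater + ratee-by-rater + residual effects. A Pearson correlation between two raters ignores rater main-effect variance entirely (a constant leniency shift per rater drops out of a correlation) and lumps the systematic ratee-by-rater interaction in with random error, which is why it need not equal a reliability coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variance components (assumptions, not values from the paper):
var_p  = 1.0   # ratee (true performance) variance
var_r  = 0.4   # rater main effect (leniency/severity) -- systematic
var_pr = 0.3   # ratee x rater interaction -- systematic rater-specific views
var_e  = 0.3   # residual random measurement error

n_p, n_r = 50_000, 2  # many ratees, two raters

p  = rng.normal(0, np.sqrt(var_p),  (n_p, 1))   # ratee effects
r  = rng.normal(0, np.sqrt(var_r),  (1, n_r))   # rater effects
pr = rng.normal(0, np.sqrt(var_pr), (n_p, n_r)) # interaction effects
e  = rng.normal(0, np.sqrt(var_e),  (n_p, n_r)) # random error
ratings = p + r + pr + e

# Observed interrater correlation between the two raters' ratings.
r_12 = np.corrcoef(ratings[:, 0], ratings[:, 1])[0, 1]

# What that correlation converges to: rater main effects (var_r) drop out,
# and the systematic interaction (var_pr) is counted as error.
expected_r12 = var_p / (var_p + var_pr + var_e)

# Generalizability-theory dependability coefficient for a single rater and
# absolute decisions, which does charge rater variance against reliability.
phi = var_p / (var_p + var_r + var_pr + var_e)

print(f"interrater correlation: {r_12:.3f} (theory: {expected_r12:.3f})")
print(f"single-rater dependability (phi): {phi:.3f}")
```

Under these assumed components the interrater correlation converges to 1.0 / 1.6 = .625 while the dependability coefficient is 1.0 / 2.0 = .500, an overestimate of reliability for absolute decisions; conversely, if the ratee-by-rater variance reflects stable, systematic perspectives rather than random error, the same correlation underestimates reliability. Either way, plugging it into the usual attenuation correction, r_corrected = r_observed / sqrt(r_xx), will over- or undercorrect accordingly.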