Construct Validity of Multi‐Source Performance Ratings: An Examination of the Relationship of Self‐, Supervisor‐, and Peer‐Ratings with Cognitive and Personality Measures
Author(s) -
van Hooft, Edwin A. J.;
van der Flier, Henk;
Minne, Marjolein R.
Publication year - 2006
Publication title -
International Journal of Selection and Assessment
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.812
H-Index - 61
eISSN - 1468-2389
pISSN - 0965-075X
DOI - 10.1111/j.1468-2389.2006.00334.x
Subject(s) - psychology , construct validity , inter-rater reliability , personality , social psychology , incremental validity , applied psychology , criterion validity , common method variance , psychometrics , rating scale , developmental psychology , clinical psychology
Although organizations increasingly prefer multi‐source performance ratings or 360° feedback over traditional performance appraisals, researchers have remained skeptical about the reliability and validity of such ratings. The present study examined the validity of self‐, supervisor‐, and peer‐ratings of 195 employees in a Dutch public organization, using scores on an In‐Basket exercise, an intelligence test, and a personality questionnaire as external criterion measures. Interrater agreement ranged from .28 to .38. Variance in the ratings was explained by both method and content factors. Support for external construct validity was weak, and supervisor‐ratings were not found to be superior to self‐ and peer‐ratings in predicting scores on the external measures.
