Increasing Validity with Forced‐Choice Criterion Measurement Formats
Author(s) - Dave Bartram
Publication year - 2007
Publication title - International Journal of Selection and Assessment
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.812
H-Index - 61
eISSN - 1468-2389
pISSN - 0965-075X
DOI - 10.1111/j.1468-2389.2007.00386.x
Subject(s) - criterion validity , psychology , likert scale , rating scale , two alternative forced choice , statistics , social psychology , applied psychology , psychometrics , construct validity
The relative validities of forced‐choice (ipsative) and Likert rating‐scale item formats as criterion measures are examined. While there has been much debate about the relative technical and psychometric merits and demerits of ipsative instruments, the present research focused on the crucial question of whether the use of this format has any practical benefit – in terms of improved validity. An analysis is reported from a meta‐analysis data set. This demonstrates that higher operational validity coefficients (prediction of line‐manager ratings of competencies) are associated with the use of forced‐choice (r = .38) rather than rating‐scale (r = .25) item formats for the criterion measurement instrument when performance is rated by the same line managers on both formats and where the predictor is held constant. Thus the apparent criterion‐related validity of a predictor can increase by 50% simply by changing the format of the criterion measurement instrument. The implications of this for practice are discussed.
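As a quick arithmetic check on the "increase by 50%" claim, the relative gain implied by the two reported validity coefficients can be computed directly (a minimal sketch; the variable names are illustrative, not from the paper):

```python
# Validity coefficients reported in the abstract
r_forced_choice = 0.38  # forced-choice (ipsative) criterion format
r_rating_scale = 0.25   # Likert rating-scale criterion format

# Relative increase in apparent criterion-related validity
relative_increase = (r_forced_choice - r_rating_scale) / r_rating_scale
print(f"Relative increase: {relative_increase:.0%}")  # → 52%
```

The exact figure is 52%, which the abstract rounds to "50%".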