Be Careful What You Ask For: Effects of Response Instructions on the Construct Validity and Reliability of Situational Judgment Tests
Author(s) -
Ployhart, Robert E.,
Ehrhart, Mark G.
Publication year - 2003
Publication title -
International Journal of Selection and Assessment
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.812
H-Index - 61
eISSN - 1468-2389
pISSN - 0965-075X
DOI - 10.1111/1468-2389.00222
Subject(s) - psychology , construct validity , reliability , situational judgment , applied psychology , social psychology , face validity , concurrent validity , psychometrics , internal consistency , clinical psychology
The aim of this study was to examine how six different types of situational judgment test (SJT) instructions, used frequently in practice, influence the psychometric characteristics of SJTs. The six SJT versions used the exact same items and differed only in their instructions; these versions were administered in two phases. Phase I was a between-subjects design (n = 486) in which participants completed one version of the SJTs. Phase II was a within-subjects design (n = 231), held several weeks later, in which participants completed all six versions of the SJTs. Further, 146 of these individuals completed both phases, allowing for an assessment of test-retest reliability. A variety of objective and subjective criteria were collected, including self and peer ratings. Results indicated that instructions had a large effect on SJT responses, reliability, and validity. In general, instructions asking what one 'would do' showed more favorable characteristics than those asking what one 'should do'. Correlations between these two types were relatively low despite the fact that the same items were used, and criterion-related validities differed substantially in favor of the 'would do' instructions. Overall, this study finds that researchers and practitioners must give careful consideration to the types of SJT instructions used; failing to do so could influence criterion-related validity and cloud inferences of construct validity.