Use of Situational Judgment Tests in Personnel Selection: Are the different methods for scoring the response options equivalent?
Author(s) - St-Sauveur Catherine, Girouard Sarah, Goyette Véronique
Publication year - 2014
Publication title - International Journal of Selection and Assessment
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.812
H-Index - 61
eISSN - 1468-2389
pISSN - 0965-075X
DOI - 10.1111/ijsa.12072
Subject(s) - personnel selection , situational judgment , psychology , applied psychology , social psychology , psychometrics , construct validity , incremental validity , statistics , variance
The different methods used to score the response options in situational judgment tests (SJTs) administered as part of the personnel selection process were compared by creating different scoring keys for a single SJT, and the potential benefits of an innovative method combining existing methods were examined. The results, based on a sample of 1,194 candidates, point to some notable differences between scoring methods. First, the innovative method produced the lowest mean score, near 60%. Second, the single-best-answer method produced the largest variance. The score distribution of the rank-ordering method was the closest to normal. Finally, evidence suggests that the best-and-worst-answer method and the innovative method provide the best results regarding construct validity. In sum, although no clear conclusion could be drawn about which method should be preferred for scoring SJTs, the results indicate that the new method merits further investigation.
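As a rough illustration of how the scoring keys compared in the study differ, below is a minimal Python sketch of the three established methods named in the abstract (single best answer, best and worst answer, rank ordering). The option labels, the keyed ranking, and the normalization to a 0-1 scale are hypothetical assumptions; the abstract does not specify the authors' exact formulas.

# Hypothetical illustration of three SJT scoring keys; the key values and
# normalization below are assumptions, not the authors' actual formulas.

def score_single_best(candidate_best: str, keyed_best: str) -> float:
    # 1 point if the candidate's chosen option matches the keyed best answer.
    return 1.0 if candidate_best == keyed_best else 0.0

def score_best_and_worst(candidate_best: str, candidate_worst: str,
                         keyed_best: str, keyed_worst: str) -> float:
    # Half a point each for identifying the keyed best and keyed worst options.
    return 0.5 * (candidate_best == keyed_best) + 0.5 * (candidate_worst == keyed_worst)

def score_rank_order(candidate_ranking: list, keyed_ranking: list) -> float:
    # Agreement between the candidate's ranking and the keyed ranking, scaled
    # to [0, 1] by the maximum possible total displacement (a full reversal).
    pos = {option: i for i, option in enumerate(keyed_ranking)}
    distance = sum(abs(i - pos[option]) for i, option in enumerate(candidate_ranking))
    max_distance = len(keyed_ranking) ** 2 // 2
    return 1.0 - distance / max_distance

# Example item with four response options, keyed from best (C) to worst (B).
keyed = ["C", "A", "D", "B"]
candidate = ["C", "D", "A", "B"]

print(score_single_best(candidate[0], keyed[0]))                               # 1.0
print(score_best_and_worst(candidate[0], candidate[-1], keyed[0], keyed[-1]))  # 1.0
print(score_rank_order(candidate, keyed))                                      # 0.75

Under keys like these, the same response pattern yields a different score depending on the method, which is what the study's comparison of means, variances, and distribution shapes across scoring keys reflects.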