Option weights should be determined empirically and not by experts when assessing knowledge with multiple‐choice items
Author(s) - Birk Diedenhofen, Jochen Musch
Publication year - 2019
Publication title - International Journal of Selection and Assessment
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.812
H-Index - 61
eISSN - 1468-2389
pISSN - 0965-075X
DOI - 10.1111/ijsa.12252
Subject(s) - option weighting , reliability , validity , multiple choice , empirical research , test scoring , personnel selection , psychology , statistics
Multiple‐choice tests are frequently used in personnel selection contexts to measure knowledge and abilities. Option weighting is an alternative multiple‐choice scoring procedure that awards partial credit for the incomplete knowledge reflected in applicants' distractor choices. We investigated whether option weights should be based on expert judgment or on empirical data when trying to outperform conventional number‐right scoring in terms of reliability and validity. To obtain generalizable results, we used repeated random sub‐sampling validation and found that empirical option weighting, but not expert option weighting, increased the reliability of a knowledge test. Neither option weighting procedure improved test validity. We recommend improving the reliability of existing ability and knowledge tests used for personnel selection by computing and publishing empirical option weights.
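The empirical option-weighting idea described in the abstract can be sketched in code. The scheme below is an illustrative assumption, not necessarily the authors' exact procedure: each option's weight is the mean rest score (number-right total excluding the current item) of the examinees who chose that option, rescaled within each item so the best option earns full credit.

```python
import numpy as np

def empirical_option_weights(responses, key):
    """Derive option weights from response data (hypothetical scheme).

    responses: (n_persons, n_items) int array of chosen option indices.
    key:       (n_items,) array of the correct option index per item.
    Returns a (n_items, n_options) array of weights in [0, 1].
    """
    n_persons, n_items = responses.shape
    correct = responses == key               # number-right indicator matrix
    total = correct.sum(axis=1)              # conventional number-right score
    n_options = int(responses.max()) + 1
    weights = np.zeros((n_items, n_options))
    for i in range(n_items):
        rest = total - correct[:, i]         # rest score excludes item i
        for o in range(n_options):
            chosen = responses[:, i] == o
            if chosen.any():
                # mean ability of those who picked this option
                weights[i, o] = rest[chosen].mean()
        rng = weights[i].max() - weights[i].min()
        if rng > 0:                          # rescale within item to [0, 1]
            weights[i] = (weights[i] - weights[i].min()) / rng
    return weights

def option_weighted_score(responses, weights):
    """Sum each person's option weights across items (partial credit)."""
    items = np.arange(responses.shape[1])
    return np.array([weights[items, row].sum() for row in responses])
```

In practice, weights would be estimated on one random subsample and evaluated on another, as in the repeated random sub-sampling validation the study used to guard against capitalizing on chance.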
