Open Access
Evaluation of Different Scoring Rules for a Noncognitive Test in Development
Author(s) - Hongwen Guo, Jiyun Zu, Patrick Kyllonen, Neal Schmitt
Publication year - 2016
Publication title - ETS Research Report Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.235
H-Index - 5
ISSN - 2330-8516
DOI - 10.1002/ets2.12089
Subject(s) - psychology , item response theory , test validity , psychometrics , econometrics , statistics , mathematics education , computer science , mathematics
In this report, systematic applications of statistical and psychometric methods are used to develop and evaluate scoring rules in terms of test reliability. Data collected from a situational judgment test are used to facilitate the comparison. For a well‐developed item with appropriate keys (i.e., the correct answers), agreement among various item‐scoring rules is expected in the item‐option characteristic curves. In addition, when models based on item‐response theory fit the data, test reliability is greatly improved, particularly if the nominal response model and its estimates are used in scoring.
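The abstract refers to scoring responses under item response theory, specifically the nominal response model, and to comparing scoring rules in terms of test reliability. As an illustration only (the code, parameter values, and simulated responses below are not from the report), a minimal Python sketch of Bock's nominal-response-model option probabilities for a single hypothetical item, together with a coefficient-alpha reliability estimate for a hypothetical matrix of scored responses, might look like this:

import numpy as np

def nominal_response_probs(theta, slopes, intercepts):
    """P(option k | theta) = exp(a_k * theta + c_k) / sum_m exp(a_m * theta + c_m)."""
    z = slopes * theta + intercepts   # one logit per response option
    z -= z.max()                      # stabilize the softmax numerically
    expz = np.exp(z)
    return expz / expz.sum()

def cronbach_alpha(scores):
    """Coefficient alpha for an examinees-by-items matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-option item: one slope and one intercept per option.
slopes = np.array([-1.0, -0.2, 0.4, 0.8])
intercepts = np.array([0.5, 0.1, -0.2, -0.4])
print(nominal_response_probs(theta=1.0, slopes=slopes, intercepts=intercepts))

# Hypothetical scored responses (20 examinees x 5 items), just to show the alpha computation.
rng = np.random.default_rng(0)
scores = rng.integers(0, 5, size=(20, 5)).astype(float)
print(cronbach_alpha(scores))

In the report itself, scoring would be based on estimated item parameters and the comparisons would use the study's situational judgment test data rather than the toy values shown here.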
