Measures of Agreement to Assess Attribute‐Level Classification Accuracy and Consistency for Cognitive Diagnostic Assessments
Author(s) - Matthew S. Johnson, Sandip Sinharay
Publication year - 2018
Publication title - Journal of Educational Measurement
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.917
H-Index - 47
eISSN - 1745-3984
pISSN - 0022-0655
DOI - 10.1111/jedm.12196
Subject(s) - reliability (statistics), consistency, cognition, classification accuracy, statistics, psychology, mathematics
One of the proposed uses of cognitive diagnostic assessments is to classify examinees as masters or nonmasters of each of a number of skills being assessed. As with any test, it is important to report the quality of these binary classifications with measures of their reliability. Cui et al. and Wang et al. have suggested reliability measures that can be calculated from the parameters of cognitive diagnosis models; these previously suggested indices measure the agreement either between the estimated and true mastery classifications or between the estimated classifications from two parallel assessments. This article discusses the limitations of these existing methods and suggests the use of other measures of agreement. A simulation study demonstrates that the proposed measures are related to factors that would be expected to be associated with reliability; for example, the proposed measures increase with variability in the population and with item discrimination, whereas the previously suggested measures do not show the same pattern. A real data example is also included.
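As a rough illustration of the kind of agreement measures the abstract refers to (not the article's exact indices, which are derived from cognitive diagnosis model parameters), the sketch below simulates true and estimated mastery statuses for a single attribute and computes raw agreement and Cohen's kappa. All names, the 0.6 mastery rate, and the 0.1 misclassification rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true and estimated mastery classifications for one
# attribute across N examinees (1 = master, 0 = nonmaster).
N = 10_000
true_mastery = rng.binomial(1, 0.6, size=N)

# Crude stand-in for an imperfect classifier: flip each true status with
# probability 0.1 (in the article, such probabilities come from the
# fitted cognitive diagnosis model rather than a fixed constant).
flip = rng.binomial(1, 0.1, size=N)
est_mastery = np.where(flip == 1, 1 - true_mastery, true_mastery)

# Raw agreement: proportion of examinees classified the same way.
p_obs = np.mean(true_mastery == est_mastery)

# Agreement expected by chance, from the marginal classification rates.
p1_true, p1_est = true_mastery.mean(), est_mastery.mean()
p_chance = p1_true * p1_est + (1 - p1_true) * (1 - p1_est)

# Cohen's kappa: observed agreement corrected for chance agreement.
kappa = (p_obs - p_chance) / (1 - p_chance)

print(f"raw agreement = {p_obs:.3f}, kappa = {kappa:.3f}")
```

Chance-corrected measures such as kappa behave in the way the abstract describes: with the error rate held fixed, kappa rises as the population becomes more variable (mastery rates nearer 0.5) and falls as the marginal rates become extreme, whereas raw agreement alone can look high purely by chance.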
