Category Distinguishability and Observer Agreement
Author(s) - Darroch J. N., McCloud P. I.
Publication year - 1986
Publication title - Australian Journal of Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.434
H-Index - 41
eISSN - 1467-842X
pISSN - 0004-9581
DOI - 10.1111/j.1467-842X.1986.tb00709.x
Subject(s) - kappa, Cohen's kappa, observer agreement, statistics, mathematics
Summary - It is common in the medical, biological, and social sciences for the categories into which an object is classified to lack a fully objective definition. Theoretically speaking, the categories are therefore not completely distinguishable. The practical extent of their distinguishability can be measured when two expert observers classify the same sample of objects. It is shown, under reasonable assumptions, that the matrix of joint classification probabilities is quasi-symmetric and that its symmetric component is non-negative definite. A degree of distinguishability between two categories is defined and used to construct a measure of overall category distinguishability. It is argued that the kappa measure of observer agreement is unsatisfactory as a measure of overall category distinguishability.
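As a concrete illustration of the quantities discussed in the summary, the sketch below builds a joint classification matrix for two observers, computes Cohen's kappa, inspects the eigenvalues of the symmetric component, and evaluates a pairwise distinguishability statistic. The matrix values are invented for illustration, the pairwise formula delta_ij = 1 - (pi_ij * pi_ji)/(pi_ii * pi_jj) follows the form commonly attributed to this paper (the original should be consulted for the exact definition), and the unweighted average shown is only a stand-in for the paper's overall measure.

```python
import numpy as np

# Hypothetical joint classification probabilities pi[i, j]: the probability
# that observer 1 assigns category i and observer 2 assigns category j.
# Values are invented for illustration only.
pi = np.array([
    [0.30, 0.04, 0.01],
    [0.05, 0.25, 0.03],
    [0.02, 0.05, 0.25],
])
assert np.isclose(pi.sum(), 1.0)

# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).
p_o = np.trace(pi)                       # observed agreement
row, col = pi.sum(axis=1), pi.sum(axis=0)
p_e = float(row @ col)                   # chance agreement from the margins
kappa = (p_o - p_e) / (1.0 - p_e)

# Symmetric component of the joint matrix; under the paper's assumptions
# this component is non-negative definite, so its eigenvalues should be >= 0.
sym = (pi + pi.T) / 2.0
eigenvalues = np.linalg.eigvalsh(sym)

# Pairwise degree of distinguishability, in the form commonly attributed to
# Darroch & McCloud (1986): delta_ij = 1 - (pi_ij * pi_ji) / (pi_ii * pi_jj).
# The diagonal is left at 0: a category is indistinguishable from itself.
k = pi.shape[0]
delta = np.zeros((k, k))
for i in range(k):
    for j in range(k):
        if i != j:
            delta[i, j] = 1.0 - (pi[i, j] * pi[j, i]) / (pi[i, i] * pi[j, j])

# A simple unweighted average of the off-diagonal deltas, as a stand-in for
# an overall distinguishability summary (the paper's own overall measure
# may weight category pairs differently).
overall = delta[~np.eye(k, dtype=bool)].mean()

print(f"kappa = {kappa:.3f}")
print("eigenvalues of symmetric component:", np.round(eigenvalues, 3))
print("pairwise delta:\n", np.round(delta, 3))
print(f"average off-diagonal delta = {overall:.3f}")
```

With these invented probabilities kappa is roughly 0.70 while every off-diagonal delta exceeds 0.9, which illustrates the abstract's point: kappa and category distinguishability can tell quite different stories about the same joint classification matrix.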
