A comparison of methods for calculating a stratified kappa
Author(s) - Barlow William, Lai MeiYing, Azen Stanley P.
Publication year - 1991
Publication title - Statistics in Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.996
H-Index - 183
eISSN - 1097-0258
pISSN - 0277-6715
DOI - 10.1002/sim.4780100913
Subject(s) - weighting , statistics , kappa , mathematics , cohen's kappa , confounding , sample size determination , sample (material) , econometrics , medicine , chemistry , geometry , chromatography , radiology
Abstract - Investigators use the kappa coefficient to measure chance‐corrected agreement among observers in the classification of subjects into nominal categories. The marginal probability of classification may depend, however, on one or more confounding variables. We consider assessment of interrater agreement with subjects grouped into strata on the basis of these confounders. We assume overall agreement across strata is constant and consider a stratified index of agreement, or ‘stratified kappa’, based on weighted summations of the individual kappas. We use three weighting schemes: (1) equal weighting; (2) weighting by the size of the table; and (3) weighting by the inverse of the variance. In a simulation study we compare these methods under differing probability structures and differing sample sizes for the tables. We find weighting by sample size moderately efficient under most conditions. We illustrate the techniques by assessing agreement between surgeons and graders of fundus photographs with respect to retinal characteristics, with stratification by initial severity of the disease.
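
The abstract does not state the estimators explicitly; the sketch below is only an illustration, assuming the stratified kappa is a weighted average of per-stratum Cohen's kappas, with weights proportional to 1 (equal weighting), the stratum sample size (table size), or the inverse of an approximate kappa variance. The function names and the simple variance approximation used here are assumptions for illustration and may differ from the exact formulas in the paper.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa and an approximate large-sample variance for one
    square agreement table (rows: rater 1, columns: rater 2)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p = table / n
    po = np.trace(p)                      # observed agreement
    pe = p.sum(axis=1) @ p.sum(axis=0)    # chance agreement from the margins
    kappa = (po - pe) / (1 - pe)
    # Simple approximation to var(kappa); the paper's exact variance may differ.
    var = po * (1 - po) / (n * (1 - pe) ** 2)
    return kappa, var, n

def stratified_kappa(tables, scheme="sample_size"):
    """Combine per-stratum kappas with one of the three weighting schemes
    named in the abstract: equal, sample size, or inverse variance."""
    kappas, variances, sizes = zip(*(cohens_kappa(t) for t in tables))
    kappas = np.array(kappas)
    if scheme == "equal":
        w = np.ones(len(tables))
    elif scheme == "sample_size":
        w = np.array(sizes, dtype=float)
    elif scheme == "inverse_variance":
        w = 1.0 / np.array(variances)
    else:
        raise ValueError(f"unknown weighting scheme: {scheme}")
    w = w / w.sum()                       # normalise weights to sum to 1
    return float(w @ kappas)

# Hypothetical example: two strata defined by initial disease severity.
strata = [
    [[20, 5], [4, 21]],   # stratum 1
    [[10, 3], [2, 15]],   # stratum 2
]
for scheme in ("equal", "sample_size", "inverse_variance"):
    print(scheme, round(stratified_kappa(strata, scheme), 3))
```

The example data are invented solely to show how the three weighting schemes can give different pooled estimates when the strata differ in size and agreement.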
