Calculating power for the comparison of dependent κ-coefficients
Author(s) - Lin Hung-Mo, Williamson John M., Lipsitz Stuart R.
Publication year - 2003
Publication title - Journal of the Royal Statistical Society: Series C (Applied Statistics)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.205
H-Index - 72
eISSN - 1467-9876
pISSN - 0035-9254
DOI - 10.1111/1467-9876.00412
Subject(s) - Wald test , statistic , statistics , sample size determination , mathematics , binary number , statistical power , computer science , statistical hypothesis testing , econometrics
Summary. In the psychosocial and medical sciences, some studies are designed to assess the agreement between different raters and/or different instruments. Often the same sample will be used to compare the agreement between two or more assessment methods, for simplicity and to take advantage of the positive correlation of the ratings. Although sample size calculations have become an important element in the design of research projects, such methods for agreement studies are scarce. We adapt the generalized estimating equations approach for modelling dependent κ-statistics to estimate the sample size that is required for dependent agreement studies. We calculate the power based on a Wald test for the equality of two dependent κ-statistics. The Wald test statistic has a non-central χ²-distribution whose non-centrality parameter can be estimated with minimal assumptions. The method proposed is useful for agreement studies with two raters and two instruments, and is easily extendable to multiple raters and multiple instruments. Furthermore, the method proposed allows for rater bias. Power calculations for binary ratings under various scenarios are presented. Analyses of two biomedical studies are used for illustration.
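The power calculation sketched in the abstract rests on a standard fact: a Wald statistic that is non-central χ² under the alternative yields power equal to the tail probability of that non-central distribution beyond the central-χ² critical value. The snippet below is a minimal illustration of that generic relationship only; the function names and the assumption that the per-subject non-centrality contribution scales linearly with sample size are ours, not the authors' GEE-based estimator for dependent κ-statistics.

```python
# Sketch of power and sample size from a non-central chi-squared Wald test.
# Assumes the non-centrality parameter grows linearly in n (lambda = n * lam1),
# which is typical for Wald tests but is only an illustrative simplification here.
from math import ceil
from scipy.stats import chi2, ncx2

def wald_power(lam, alpha=0.05, df=1):
    """Power of a df-degree-of-freedom Wald test with non-centrality lam."""
    crit = chi2.ppf(1 - alpha, df)          # central chi-squared critical value
    return 1 - ncx2.cdf(crit, df, lam)      # tail mass of the non-central law

def sample_size(lam1, target_power=0.80, alpha=0.05, df=1):
    """Smallest n such that power >= target, given per-subject contribution lam1."""
    n = 1
    while wald_power(n * lam1, alpha, df) < target_power:
        n += 1
    return n

power_at_7_85 = wald_power(7.849)           # classic lambda for ~80% power, df=1
n_needed = sample_size(lam1=0.08)           # hypothetical per-subject contribution
```

For df = 1 the familiar benchmark λ = (z₀.₀₂₅ + z₀.₂)² ≈ 7.849 gives roughly 80% power at α = 0.05, which makes a convenient sanity check for the helper above.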