Reliability analysis for continuous measurements: Equivalence test for agreement
Author(s) -
Yi Qilong,
Wang P. Peter,
He Yaohua
Publication year - 2007
Publication title -
Statistics in Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.996
H-Index - 183
eISSN - 1097-0258
pISSN - 0277-6715
DOI - 10.1002/sim.3110
Subject(s) - repeatability , equivalence (formal languages) , reliability (semiconductor) , inter rater reliability , computer science , reliability engineering , intra rater reliability , statistics , test (biology) , consistency (knowledge bases) , mathematics , rating scale , artificial intelligence , engineering , paleontology , power (physics) , physics , discrete mathematics , quantum mechanics , biology
In tandem with the rapid development of medical technology, methods for assessing intrarater and interrater reliability, or agreement across tools for continuous measurements, have become an increasingly important research topic. Thus far, a number of reliability assessment methods have been proposed. Among them, the limits of agreement and the repeatability coefficient have been found to be the most useful tools for assessing reliability when measurements are on a continuous scale. However, both are descriptive methods. Establishing consistency or conformity requires an equivalence test, without which the judgment would be subjective. In this paper we extend the repeatability coefficient approach and propose an equivalence test that can be used to confirm agreement between two or more measurement tools, or to assess interrater and intrarater reliability. Under this approach, a formula for calculating sample size is also suggested, and examples are provided to illustrate the method. Copyright © 2007 John Wiley &amp; Sons, Ltd.
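As background for the descriptive methods the abstract mentions, the two standard quantities can be sketched as follows. This is an illustrative computation of Bland-Altman limits of agreement and a repeatability coefficient from paired measurements, not the equivalence test developed in the paper itself; the function names and sample data are hypothetical.

```python
import statistics

def limits_of_agreement(x, y):
    """Bland-Altman 95% limits of agreement for paired measurements:
    mean difference +/- 1.96 * SD of the differences."""
    diffs = [a - b for a, b in zip(x, y)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

def repeatability_coefficient(x, y):
    """Repeatability coefficient from two repeat measurements per subject.
    Within-subject variance is estimated as sum(d_i^2) / (2n); the
    coefficient is 1.96 * sqrt(2) * s_w, the bound expected to cover
    95% of absolute differences between repeat measurements."""
    diffs = [a - b for a, b in zip(x, y)]
    sw2 = sum(d * d for d in diffs) / (2 * len(diffs))
    return 1.96 * (2 * sw2) ** 0.5
```

Both quantities describe observed disagreement but carry no formal decision rule, which is the gap the paper's equivalence test is designed to fill.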