BACK TO BASICS: PERCENTAGE AGREEMENT MEASURES ARE ADEQUATE, BUT THERE ARE EASIER WAYS
Author(s) - Birkimer, John C.; Brown, Joseph H.
Publication year - 1979
Publication title - Journal of Applied Behavior Analysis
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.1
H-Index - 76
eISSN - 1938-3703
pISSN - 0021-8855
DOI - 10.1901/jaba.1979.12-535
Subject(s) - agreement, inter-rater reliability, statistics, psychology, rating scale, mathematics
Percentage agreement measures of interobserver agreement, or "reliability," have traditionally been used to summarize observer agreement in studies using interval-recording, time-sampling, and trial-scoring data collection procedures. Recent articles disagree on whether these percentage agreement measures should continue to be used, which ones to use, and how chance agreements should be handled if their use continues. Much of the disagreement stems from the need to be reasonably certain that we do not accept, as evidence of true interobserver agreement, agreement levels that are substantially probable as a result of chance alone. The various percentage agreement measures are shown to be adequate to this task, but easier methods are discussed. Tables are provided for checking whether obtained disagreements are unlikely to be due to chance. Particularly important is a simple rule that, when met, makes the tables unnecessary: if reliability checks using 50 or more observation occasions produce 10% or fewer disagreements, for behavior rates from 10% through 90%, the agreement achieved is quite improbably the result of chance.
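The chance-agreement reasoning behind the rule can be sketched with a simple binomial model: if two observers each independently score an interval as "behavior present" with probability p (the behavior rate), they agree by chance on any interval with probability p² + (1 − p)², and the number of chance disagreements across n intervals is binomial. The sketch below is an illustration of this standard model, not a reproduction of the authors' actual tables or computations; the function names are my own.

```python
from math import comb

def chance_agreement_prob(p: float) -> float:
    """Per-interval probability that two independent observers agree by
    chance, when each scores the behavior as present with probability p."""
    return p * p + (1 - p) * (1 - p)

def p_at_most_k_disagreements(n: int, k: int, p: float) -> float:
    """Probability of k or fewer disagreements in n observation occasions
    if the observers were responding purely by chance (binomial CDF)."""
    q = 1 - chance_agreement_prob(p)  # per-interval chance of a disagreement
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k + 1))

# Example: with n = 50 occasions and at most 5 disagreements (10%),
# how probable is such an outcome under pure chance, across behavior rates?
for rate in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"rate={rate:.1f}  P(<=5 disagreements by chance) = "
          f"{p_at_most_k_disagreements(50, 5, rate):.4f}")
```

Under this model the chance probability is smallest near a 50% behavior rate (where chance agreement per interval is only 0.5) and largest near the 10% and 90% extremes, which is why the rule restricts itself to rates from 10% through 90%.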
