ARTIFACT, BIAS, AND COMPLEXITY OF ASSESSMENT: THE ABCs OF RELIABILITY
Author(s) -
Kazdin, Alan E.
Publication year - 1977
Publication title -
Journal of Applied Behavior Analysis
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.1
H-Index - 76
eISSN - 1938-3703
pISSN - 0021-8855
DOI - 10.1901/jaba.1977.10-141
Subject(s) - artifact (error) , reliability , psychology , inter-rater reliability , response bias , cognitive psychology , statistics , social psychology , developmental psychology , rating scale
Interobserver agreement (also referred to here as “reliability”) is influenced by diverse sources of artifact, bias, and complexity in the assessment procedures. The literature on reliability assessment has frequently focused on the different methods of computing reliability and the circumstances under which each method is appropriate. Yet the credence accorded estimates of interobserver agreement, computed by any method, presupposes eliminating sources of bias that can spuriously inflate or deflate agreement. The present paper reviews evidence pertaining to various sources of artifact and bias, as well as characteristics of assessment that influence the interpretation of interobserver agreement. These include reactivity of reliability assessment, observer drift, complexity of response codes and behavioral observations, observer expectancies and feedback, and others. Recommendations are provided for eliminating or minimizing the influence of these factors on interobserver agreement.
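The abstract refers to "different methods of computing reliability" without naming them; the two most common in this literature are simple percent agreement and chance-corrected agreement (Cohen's kappa). As a minimal sketch, not taken from the paper itself, both standard formulas can be computed from two observers' interval records as follows:

```python
# Hypothetical sketch: two common indices of interobserver agreement
# for interval recording. 1 = behavior scored in the interval, 0 = not.
# Neither formula is drawn from the paper; both are standard definitions.

def percent_agreement(obs_a, obs_b):
    """Proportion of intervals on which the two observers agree."""
    matches = sum(a == b for a, b in zip(obs_a, obs_b))
    return matches / len(obs_a)

def cohens_kappa(obs_a, obs_b):
    """Agreement corrected for chance (Cohen's kappa)."""
    n = len(obs_a)
    p_o = percent_agreement(obs_a, obs_b)
    # Expected chance agreement from each observer's marginal rates.
    categories = set(obs_a) | set(obs_b)
    p_e = sum((obs_a.count(c) / n) * (obs_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Example session of ten observation intervals.
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(percent_agreement(a, b))  # 0.8
print(cohens_kappa(a, b))       # ~0.583
```

Percent agreement can look high purely by chance when the behavior occurs at a high or low rate, which is one reason the paper stresses that no computational method by itself guarantees an unbiased estimate.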
