The process of rater training for observational instruments: Implications for interrater reliability
Author(s) -
Castorr Alexandria H.,
Thompson Kathleen O.,
Ryan Judith W.,
Phillips Carol Y.,
Prescott Patricia A.,
Soeken Karen L.
Publication year - 1990
Publication title -
Research in Nursing & Health
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.836
H-Index - 85
eISSN - 1098-240X
pISSN - 0160-6891
DOI - 10.1002/nur.4770130507
Subject(s) - inter-rater reliability , observational study , rating scale , psychology , applied psychology , developmental psychology , medicine
Although the process of rater training is important for establishing the interrater reliability of observational instruments, little information is available in the current literature to guide the researcher. In this article, principles and procedures that can be used when rater performance is a critical element of reliability assessment are described. Three phases of the rater training process are presented: (a) training raters to use the instrument; (b) evaluating rater performance at the end of training; and (c) determining the extent to which rater training is maintained during a reliability study. An example illustrates how these phases were incorporated in a study examining the reliability of a measure of patient intensity, the Patient Intensity for Nursing Index (PINI).
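The abstract does not specify which agreement statistic the authors used when evaluating rater performance, but a common choice for categorical observational ratings is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch, with hypothetical ratings from two trained raters (the category labels and data are illustrative, not from the study):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning categorical codes to the same set of observed items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement: assume the raters' marginal category
    # frequencies are independent and sum the products per category.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical intensity codes from two raters on 10 observed patients.
rater1 = ["high", "high", "low", "med", "low", "high", "med", "low", "high", "med"]
rater2 = ["high", "med", "low", "med", "low", "high", "med", "low", "high", "high"]
print(round(cohens_kappa(rater1, rater2), 3))  # 8/10 observed agreement -> 0.697
```

Because kappa discounts chance agreement, it is a stricter end-of-training criterion than simple percent agreement, which is one reason training evaluations often report both.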