Open Access
Using a new interrater reliability method to test the modified Oulu Patient Classification instrument in home health care
Author(s) -
Flo Jill,
Landmark Bjørg,
Hatlevik Ove Edward,
Fagerström Lisbeth
Publication year - 2018
Publication title -
Nursing Open
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.55
H-Index - 12
ISSN - 2054-1058
DOI - 10.1002/nop2.126
Subject(s) - inter-rater reliability , Cohen's kappa , Cronbach's alpha , reliability , internal consistency , psychometrics , rating scale , medicine , psychology , clinical psychology , statistics , mathematics
Aim: To test the interrater reliability of the modified Oulu Patient Classification (OPCq) instrument, using a multiple parallel classification method based on oral case presentations in home health care in Norway.
Design: Reliability study.
Methods: Data were collected at two municipal home healthcare units during 2013–2014. The reliability of the modified OPCq instrument was tested using a new multiple parallel classification method. The data material consisted of 2,010 parallel classifications, analysed using consensus in per cent and Cohen's kappa. Cronbach's alpha was used to measure internal consistency.
Results: For parallel classifications, consensus varied between 64.78% and 77.61%. Interrater reliability varied between 0.49 and 0.69 (Cohen's kappa) and internal consistency between 0.81 and 0.94 (Cronbach's alpha). Analysis of the raw scores showed that 27.2% of classifications had identical points, 39.1% differed by one point, 17.9% by two points and 16.5% by three or more points.
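For readers unfamiliar with the three statistics named in the abstract, the following is a minimal Python sketch (not the authors' code) of how percent agreement, Cohen's kappa and Cronbach's alpha are typically computed. The function names and the toy data are hypothetical; it assumes two raters' ordinal classification scores are available as equal-length integer sequences.

```python
import numpy as np

def percent_agreement(a, b):
    """Share of paired classifications with identical scores, in per cent."""
    a, b = np.asarray(a), np.asarray(b)
    return 100.0 * np.mean(a == b)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a, b = np.asarray(a), np.asarray(b)
    categories = np.union1d(a, b)
    # Observed agreement
    p_o = np.mean(a == b)
    # Expected chance agreement from each rater's marginal distribution
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_o - p_e) / (1.0 - p_e)

def cronbachs_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the total scores
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical toy data: two nurses classifying the same six patients
rater_1 = [1, 2, 3, 2, 4, 3]
rater_2 = [1, 2, 2, 2, 4, 4]
print(percent_agreement(rater_1, rater_2))     # 66.7 (4 of 6 identical)
print(cohens_kappa(rater_1, rater_2))

# Hypothetical item-score matrix: four patients rated on three sub-areas
item_scores = [[1, 2, 1], [2, 3, 2], [3, 4, 3], [2, 2, 2]]
print(cronbachs_alpha(item_scores))
```

Note that kappa is lower than raw percent agreement by construction, since it discounts the agreement two raters would reach by chance alone; this is why the abstract reports both measures side by side.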
