Reliability of Computerized Emergency Triage
Author(s) -
Dong Sandy L.,
Bullard Michael J.,
Meurer David P.,
Blitz Sandra,
Ohinmaa Arto,
Holroyd Brian R.,
Rowe Brian H.
Publication year - 2006
Publication title -
Academic Emergency Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.221
H-Index - 124
eISSN - 1553-2712
pISSN - 1069-6563
DOI - 10.1197/j.aem.2005.10.014
Subject(s) - triage, inter-rater reliability, medicine, confidence interval, emergency department, crowding, overcrowding, medical emergency, emergency medicine, statistics, nursing, rating scale
Objectives: Emergency department (ED) triage prioritizes patients based on urgency of care. This study compared agreement between two blinded, independent users of a Web-based triage tool (eTRIAGE) and examined the effects of ED crowding on triage reliability.

Methods: Consecutive patients presenting to a large, urban, tertiary care ED were assessed by the duty triage nurse and an independent study nurse, both using eTRIAGE. Triage score distribution and agreement are reported. The study nurse also collected data on ED activity, and agreement during different levels of ED crowding is reported. Two measures of interrater agreement were used: the linear-weighted κ and the quadratic-weighted κ.

Results: A total of 575 patients were assessed over nine weeks, and complete data were available for 569 patients (99.0%). Agreement between the two nurses was moderate using the linear-weighted κ (κ = 0.52; 95% confidence interval = 0.46 to 0.57) and good using the quadratic-weighted κ (κ = 0.66; 95% confidence interval = 0.60 to 0.71). ED overcrowding data were available for 353 patients (62.0%). Agreement did not differ significantly across periods of ambulance diversion, number of admitted inpatients occupying stretchers, number of patients in the waiting room, number of patients registered in two hours, or nurse perception of busyness.

Conclusions: Measured agreement depended on the method used to calculate interrater reliability. Using standard methods, the study found good agreement between two independent users of a computerized triage tool. The level of agreement was not affected by various measures of ED crowding.
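The two agreement statistics differ only in how they penalize disagreement: linear weights penalize in proportion to the distance between the two triage levels, while quadratic weights penalize large discrepancies more heavily, which tends to yield a higher κ when most disagreements fall in adjacent categories. As a minimal sketch (not the study's data or code), the following Python example computes both weighted κ values for paired 5-level triage scores using scikit-learn's cohen_kappa_score; the simulated ratings, random seed, and disagreement pattern are assumptions for illustration only.

```python
# Illustrative sketch only: simulated rater data, NOT the study's dataset.
# Computes linear- and quadratic-weighted Cohen's kappa for paired
# 5-level triage scores (e.g., CTAS levels 1-5) with scikit-learn.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical paired ratings: duty nurse vs. study nurse, 569 patients.
duty_nurse = rng.integers(1, 6, size=569)
# Assume the study nurse mostly agrees, occasionally off by one level.
offset = rng.choice([-1, 0, 0, 0, 1], size=569)
study_nurse = np.clip(duty_nurse + offset, 1, 5)

# Linear weights penalize disagreement proportionally to its size;
# quadratic weights penalize large disagreements more heavily.
kappa_linear = cohen_kappa_score(duty_nurse, study_nurse, weights="linear")
kappa_quadratic = cohen_kappa_score(duty_nurse, study_nurse, weights="quadratic")

print(f"linear-weighted kappa:    {kappa_linear:.2f}")
print(f"quadratic-weighted kappa: {kappa_quadratic:.2f}")
```

Because simulated disagreements here are confined to adjacent levels, the quadratic-weighted κ comes out higher than the linear-weighted κ, mirroring the pattern in the abstract's results (0.66 versus 0.52).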