Classification of Cardiopulmonary Resuscitation Chest Compression Patterns: Manual Versus Automated Approaches
Author(s) -
Wang Henry E.,
Schmicker Robert H.,
Herren Heather,
Brown Siobhan,
Donnelly John P.,
Gray Randal,
Ragsdale Sally,
Gleeson Andrew,
Byers Adam,
Jasti Jamie,
Aguirre Christina,
Owens Pam,
Condle Joe,
Leroux Brian
Publication year - 2015
Publication title -
Academic Emergency Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.221
H-Index - 124
eISSN - 1553-2712
pISSN - 1069-6563
DOI - 10.1111/acem.12577
Subject(s) - medicine, cardiopulmonary resuscitation, chest compression, compression (physics), confidence interval, kappa, resuscitation, radiology, surgery
Objectives: New chest compression detection technology allows for the recording and graphical depiction of clinical cardiopulmonary resuscitation (CPR) chest compressions. The authors sought to determine the inter-rater reliability of chest compression pattern classification by human raters, as well as the agreement between manual and automated (computer) classification.

Methods: This was an analysis of chest compression patterns from cardiac arrest patients enrolled in the ongoing Resuscitation Outcomes Consortium (ROC) Continuous Chest Compressions Trial. Thirty CPR process files from patients in the trial were selected. Using written guidelines, research coordinators from each of eight participating ROC sites classified each chest compression pattern as 30:2 chest compressions, continuous chest compressions (CCC), or indeterminate. A computer algorithm for automated chest compression classification was also applied to each case. Inter-rater agreement between manual classifications was tested using Fleiss's kappa. The criterion standard was defined as the classification assigned by the majority of manual raters. Agreement between the automated classification and the criterion standard manual classification was also tested.

Results: The majority of the eight raters classified 12 chest compression patterns as 30:2, 12 as CCC, and six as indeterminate. Inter-rater agreement between manual classifications was κ = 0.62 (95% confidence interval [CI] = 0.49 to 0.74). The automated computer algorithm classified the chest compression patterns as 30:2 (n = 15), CCC (n = 12), and indeterminate (n = 3). Agreement between automated and criterion standard manual classifications was κ = 0.84 (95% CI = 0.59 to 0.95).

Conclusions: In this study, good inter-rater agreement in the manual classification of CPR chest compression patterns was observed, and automated classification showed strong agreement with the manual criterion standard. These observations support the consistency of manual CPR pattern classification as well as the use of automated approaches to chest compression pattern analysis.
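The inter-rater statistic reported above is Fleiss's kappa, which compares observed per-subject agreement against the agreement expected by chance from the overall category proportions. A minimal sketch of that calculation is below; the function name and the toy data are illustrative and not taken from the study's analysis code.

```python
def fleiss_kappa(ratings):
    """Fleiss's kappa for a subjects-by-categories count matrix.

    ratings[i][j] = number of raters who assigned subject i to category j;
    every row must sum to the same number of raters n (here, n raters
    classifying each CPR pattern as 30:2, CCC, or indeterminate).
    """
    N = len(ratings)                  # number of subjects (cases)
    n = sum(ratings[0])               # raters per subject
    k = len(ratings[0])               # number of categories

    # Observed agreement P_i for each subject, then its mean P_bar.
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N

    # Chance agreement P_e from the overall category proportions p_j.
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)


# Hypothetical example: 4 cases, 8 raters, 3 categories (30:2, CCC, indeterminate).
counts = [
    [8, 0, 0],   # all raters say 30:2
    [0, 7, 1],   # near-unanimous CCC
    [1, 6, 1],
    [2, 2, 4],   # mostly indeterminate
]
print(round(fleiss_kappa(counts), 3))
```

Unanimous ratings on every case would give κ = 1; values near zero indicate agreement no better than chance from the marginal category frequencies.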
