Open Access
Reliability and Discriminant Validity of a Checklist for Surgical Scrubbing, Gowning and Gloving
Author(s) - Stephen P. Canton, C. Fritz Foley, Isabel Fulcher, Laura K. Newcomb, Noah B. Rindos, Nicole Donnellan
Publication year - 2021
Publication title - International Journal of Medical Students
Language(s) - English
Resource type - Journals
ISSN - 2076-6327
DOI - 10.5195/ijms.2021.1221
Subject(s) - checklist , reliability , discriminant validity , internal consistency , validity , psychometrics , statistics , medicine , surgery
Background: Surgical scrubbing, gowning, and gloving are challenging for medical trainees to learn in the operating room environment. Currently, there are few reliable or valid tools to evaluate a trainee’s ability to scrub, gown, and glove. The objective of this study is to test the reliability and validity of a checklist that evaluates the technique of surgical scrubbing, gowning, and gloving (SGG).

Methods: This Institutional Review Board-approved study recruited medical students, residents, and fellows from an academic, tertiary care institution. Trainees were stratified based upon prior surgical experience as novices, intermediates, or experts. Participants were instructed to scrub, gown, and glove in a staged operating room while being video-recorded. Two blinded raters scored the videos according to the SGG checklist. Reliability was assessed using the intraclass correlation coefficient for total scores and Cohen’s kappa for item completion. The internal consistency and discriminant validity of the SGG checklist were assessed using Cronbach’s alpha and the Wilcoxon rank sum test, respectively.

Results: 56 participants were recruited (18 novices, 19 intermediates, 19 experts). The intraclass correlation coefficient demonstrated excellent inter-rater reliability for the overall checklist (0.990), and Cohen’s kappa ranged from 0.598 to 1.00. The checklist also had excellent internal consistency (Cronbach’s alpha 0.950). A significant difference in scores was observed between all groups (p < 0.001).

Conclusion: This checklist demonstrates high inter-rater reliability, discriminant validity, and internal consistency. It has the potential to enhance medical education curricula.
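The two item-level statistics named in the methods, Cronbach's alpha and Cohen's kappa, can be sketched as follows. This is a minimal illustration of the standard formulas, not the authors' analysis code; the function names and synthetic data are assumptions for the example.

```python
from statistics import mean, variance  # variance uses ddof=1 (sample variance)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-subject rows of item scores."""
    k = len(items[0])                       # number of checklist items
    cols = list(zip(*items))                # scores grouped by item
    item_var_sum = sum(variance(c) for c in cols)
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' binary (0/1) codings of the same items."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    p1, p2 = mean(rater1), mean(rater2)
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)                 # chance agreement
    return (p_obs - p_chance) / (1 - p_chance)
```

With perfectly consistent items (every subject scores the same on each item) alpha is 1.0, and two raters who code every item identically get a kappa of 1.0; raters who agree only at chance level get a kappa near 0.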
