Evaluation of the McMahon Competence Assessment Instrument for Use with Midwifery Students During a Simulated Shoulder Dystocia
Author(s) - Erin McMahon, Cecilia Jevitt, Barbara Aronson
Publication year - 2018
Publication title - Journal of Midwifery & Women's Health
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.543
H-Index - 62
eISSN - 1542-2011
pISSN - 1526-9523
DOI - 10.1111/jmwh.12721
Subject(s) - intraclass correlation, shoulder dystocia, competence (human resources), inter-rater reliability, medicine, reliability (semiconductor), Cohen's kappa, medical education, psychology, medical physics, psychometrics, pregnancy, clinical psychology, computer science, social psychology, rating scale, developmental psychology, power (physics), physics, quantum mechanics, biology, genetics, machine learning
Intrapartum emergencies occur infrequently but require a prompt, competent response from the midwife to prevent morbidity and mortality in the woman, fetus, and newborn. Simulation gives student midwives the opportunity to develop competence in a safe environment. The purpose of this study was to determine the inter-rater reliability of the McMahon Competence Assessment Instrument (MCAI) for use with student midwives during a simulated shoulder dystocia scenario.
Methods
A pilot study with a nonprobability convenience sample was conducted to evaluate the MCAI. Content validity indices were calculated for the individual items and for the overall instrument using data from a panel of expert reviewers. Fourteen student midwives consented to be video recorded while participating in a simulated shoulder dystocia scenario. Three faculty raters used the MCAI to evaluate each student's performance, and these quantitative data were used to determine the inter-rater reliability of the instrument.
Results
The intraclass correlation coefficient (ICC) was used to assess the inter-rater reliability of total MCAI scores across 2 or more raters. The ICC was 0.86 (95% confidence interval, 0.60-0.96). Fleiss' kappa was calculated to determine the inter-rater reliability of individual items; kappa values for 23 of the 42 items indicated excellent strength of agreement.
Discussion
This study demonstrates a method for determining the inter-rater reliability of a competence assessment instrument used with student midwives. Data produced by this study were used to revise and improve the instrument. Additional research will further document the inter-rater reliability and can be used to determine changes in student competence. Valid and reliable methods of assessment will encourage the use of simulation to efficiently develop the competence of student midwives.
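The statistics named in the abstract (content validity index, ICC, Fleiss' kappa) are standard and straightforward to reproduce. The sketch below is not the study's code: it assumes hypothetical ratings (14 students, 3 raters, 42 dichotomous checklist items, 5 expert reviewers) and uses the pingouin and statsmodels libraries to compute the same three quantities.

```python
# Minimal sketch (hypothetical data, not the study's): inter-rater
# reliability and content validity computations of the kind reported.
import numpy as np
import pandas as pd
import pingouin as pg
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(42)

# Hypothetical total MCAI scores: 14 students rated by 3 faculty raters.
base = rng.integers(25, 40, size=14)                    # per-student "true" score
scores = base[:, None] + rng.integers(-2, 3, (14, 3))   # plus rater noise

# --- ICC: reshape to long format (one row per student-rater pair) ---
long = pd.DataFrame({
    "student": np.repeat(np.arange(14), 3),
    "rater":   np.tile(["A", "B", "C"], 14),
    "score":   scores.ravel(),
})
icc = pg.intraclass_corr(data=long, targets="student",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])    # point estimates with 95% CIs

# --- Fleiss' kappa for one dichotomous item (0 = not done, 1 = done) ---
item = rng.integers(0, 2, (14, 3))      # hypothetical per-item ratings
table, _ = aggregate_raters(item)       # category counts per student
print("Fleiss' kappa:", fleiss_kappa(table))

# --- Content validity index (Polit & Beck convention, an assumption) ---
# 5 experts rate each item's relevance on a 1-4 scale;
# I-CVI = proportion rating 3 or 4; S-CVI/Ave = mean of the I-CVIs.
expert = rng.integers(1, 5, (42, 5))    # hypothetical 42 items x 5 experts
i_cvi = (expert >= 3).mean(axis=1)
print("S-CVI/Ave:", i_cvi.mean())
```

With only 3 raters and 14 students, the wide confidence interval reported in the abstract (0.60-0.96) is expected; ICC estimates are imprecise at small sample sizes, which is consistent with the authors' call for additional research.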