Automated assessment of cortical mastoidectomy performance in virtual reality
Author(s) -
Wijewickrema Sudanthi,
Talks Benjamin James,
Lamtara Jesslyn,
Gerard Jean-Marc,
O’Leary Stephen
Publication year - 2021
Publication title -
Clinical Otolaryngology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.914
H-Index - 68
eISSN - 1749-4486
pISSN - 1749-4478
DOI - 10.1111/coa.13760
Subject(s) - mastoidectomy, medicine, metric (unit), workload, otorhinolaryngology, medical physics, cholesteatoma, artificial intelligence, computer science, surgery, operations management, engineering, operating system
Abstract -
Introduction: Cortical mastoidectomy is a core skill in which Otolaryngology trainees must gain competency. Automated competency assessment has the potential to reduce assessment subjectivity and bias, as well as the workload of surgical trainers.
Objectives: This study aimed to develop and validate an automated competency assessment system for cortical mastoidectomy.
Participants: Data from 60 participants (Group 1) were used to develop and validate the automated competency assessment system. Data from a further 14 participants (Group 2) were used to test the generalisability of the automated assessment.
Design: Participants drilled cortical mastoidectomies on a virtual reality temporal bone simulator. Procedures were graded by a blinded expert using the previously validated Melbourne Mastoidectomy Scale; a different expert assessed the procedures of Groups 1 and 2. Using data from Group 1, simulator metrics were developed that map directly to the individual items of this scale. Metric value thresholds were calculated by comparing automated simulator metric values with expert scores, and binary scores per item were allocated using these thresholds. Validation was performed using random sub-sampling. Generalisability was investigated by running the automated assessment on mastoidectomies performed by Group 2 and correlating the results with the scores of a second blinded expert.
Results: Compared with the expert score per item, the automated binary score had an accuracy, sensitivity and specificity of 0.9450, 0.9547 and 0.9343, respectively, for Group 1, and 0.8614, 0.8579 and 0.8654, respectively, for Group 2. There was a strong correlation between the total scores per participant assigned by the expert and those calculated by the automated assessment method for both Group 1 (r = .9144, P < .0001) and Group 2 (r = .7224, P < .0001).
Conclusion: This study outlines a virtual reality-based method for automated assessment of competency in cortical mastoidectomy that proved comparable to the assessment provided by human experts.
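The abstract's Design and Results sections describe a threshold-based scheme: each continuous simulator metric is binarised against a calculated threshold, and the resulting per-item binary scores are compared with expert scores via accuracy, sensitivity and specificity. A minimal sketch of that evaluation step is shown below; it is not the authors' implementation, and the metric values, threshold and expert scores used are hypothetical.

```python
# Illustrative sketch only: threshold-based binarisation of simulator
# metrics and comparison against expert binary scores. All values here
# are hypothetical, not data from the study.

def binarise(metric_value, threshold, higher_is_better=True):
    """Map a continuous simulator metric to a binary (1 = competent) item score."""
    if higher_is_better:
        return int(metric_value >= threshold)
    return int(metric_value <= threshold)

def confusion_rates(predicted, expert):
    """Accuracy, sensitivity and specificity of predicted vs expert binary scores."""
    tp = sum(p == 1 and e == 1 for p, e in zip(predicted, expert))
    tn = sum(p == 0 and e == 0 for p, e in zip(predicted, expert))
    fp = sum(p == 1 and e == 0 for p, e in zip(predicted, expert))
    fn = sum(p == 0 and e == 1 for p, e in zip(predicted, expert))
    accuracy = (tp + tn) / len(expert)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, sensitivity, specificity

# Hypothetical example: one metric across five procedures, threshold 0.5.
metric_values = [0.9, 0.2, 0.7, 0.4, 0.8]
expert_scores = [1, 0, 1, 1, 1]
predicted = [binarise(m, 0.5) for m in metric_values]
acc, sens, spec = confusion_rates(predicted, expert_scores)
```

In the study itself, thresholds were derived from Group 1 data by comparing metric values with expert scores, and the same thresholds were then applied unchanged to Group 2 to test generalisability.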