Comparative assessment of three standardized robotic surgery training methods
Author(s) - Hung Andrew J., Jayaratna Isuru S., Teruya Kara, Desai Mihir M., Gill Inderbir S., Goh Alvin C.
Publication year - 2013
Publication title - BJU International
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.773
H-Index - 148
eISSN - 1464-410X
pISSN - 1464-4096
DOI - 10.1111/bju.12045
Subject(s) - construct validity , robotic surgery , virtual reality , test (biology) , task (project management) , construct (python library) , medical physics , computer science , simulation , psychology , artificial intelligence , human–computer interaction , medicine , surgery , engineering , patient satisfaction , paleontology , biology , programming language , systems engineering
Objectives To evaluate three standardized robotic surgery training methods (inanimate, virtual reality and in vivo) for their construct validity, and to explore the concept of cross-method validity, in which the relative performance of each method is compared.

Materials and Methods Robotic surgical skills were prospectively assessed in 49 participating surgeons, classified as 'novice/trainee' (urology residents, previous experience <30 cases, n = 38) or 'expert' (faculty surgeons, previous experience ≥30 cases, n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model, with performance graded using the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal–Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across the inanimate, simulation and in vivo methods (cross-method validity).

Results Novice and expert surgeons had previously performed a median (range) of 0 (0–20) and 300 (30–2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks correlated significantly with virtual reality robotic performance (ρ = −0.7, P < 0.001) and with in vivo robotic performance based on GEARS (ρ = −0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also strongly correlated (ρ = 0.6, P < 0.001).

Conclusions We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool.
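The statistical comparisons described above (Kruskal–Wallis for construct validity, Spearman's ρ for cross-method validity) can be illustrated with a minimal sketch. The arrays below are hypothetical placeholders, not the study's data; group sizes simply mirror the reported n = 38 novices and n = 11 experts.

```python
# Minimal sketch of the two analyses named in the abstract, using
# hypothetical (simulated) scores rather than the study's data.
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(0)

# Hypothetical GEARS scores for novice and expert groups (higher = better).
novice_gears = rng.normal(loc=15, scale=3, size=38)
expert_gears = rng.normal(loc=24, scale=2, size=11)

# Construct validity: do experts outperform novices on the same method?
h_stat, p_construct = kruskal(novice_gears, expert_gears)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_construct:.4f}")

# Cross-method validity: does performance on one method track another?
# Hypothetical paired per-surgeon scores: inanimate task time (lower = better)
# versus virtual reality simulator score (higher = better), so a strong
# association appears as a negative Spearman rho, as in the reported results.
inanimate_time = rng.normal(loc=300, scale=60, size=49)
vr_score = 100 - 0.2 * inanimate_time + rng.normal(scale=5, size=49)
rho, p_cross = spearmanr(inanimate_time, vr_score)
print(f"Spearman rho = {rho:.2f}, p = {p_cross:.4f}")
```

The negative ρ in this sketch arises only because one hypothetical metric is time-based (lower is better) while the other is score-based (higher is better); the magnitude of ρ, not its sign, reflects the strength of the cross-method association.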