Open Access
Integrating Stereoscopic Video with Modular 3D Anatomic Models for Lateral Skull Base Training
Author(s) -
Samuel R. Barber,
Saurabh Jain,
Young-Jun Son,
Kaith K. Almefty,
Michael T. Lawton,
Shawn M. Stevens
Publication year - 2020
Publication title -
journal of neurological surgery. part b, skull base
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.488
H-Index - 42
eISSN - 2193-6331
pISSN - 2193-634X
DOI - 10.1055/s-0040-1701675
Subject(s) - stereoscopy , skull , modular design , computer science , anatomy , computer vision , medicine
Current virtual reality (VR) technology allows the creation of instructional video formats that incorporate three-dimensional (3D) stereoscopic footage. Combined with 3D anatomic models, any surgical procedure or pathology could be represented virtually to supplement learning or preoperative surgical planning. We propose a standalone VR app that allows trainees to interact with modular 3D anatomic models corresponding to stereoscopic surgical videos.

Methods - Stereoscopic video was recorded using an OPMI Pentero 900 microscope (Zeiss, Oberkochen, Germany). Digital Imaging and Communications in Medicine (DICOM) images from axial temporal bone computed tomography were segmented, and each anatomic structure was exported separately. 3D models included the semicircular canals, facial nerve, sigmoid sinus and jugular bulb, carotid artery, tegmen, canals within the temporal bone, cochlear and vestibular aqueducts, endolymphatic sac, and all branches of cranial nerves VII and VIII. Finished files were imported into the Unreal Engine, and the resultant application was viewed using an Oculus Go.

Results - The VR environment facilitated viewing of stereoscopic video and interactive model manipulation using the VR controller. Interactive models allowed users to toggle transparency, enable highlighted segmentation, and activate labels for each anatomic structure. Based on 20 variable components, 1.1 × 10^12 combinations of structures per DICOM series were possible for representing patient-specific anatomy in 3D.

Conclusion - This investigation provides proof of concept that a hybrid of stereoscopic video and VR simulation is possible, and that this tool may significantly aid lateral skull base trainees as they learn to navigate a complex 3D surgical environment. Future studies will validate the methodology.
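The combinations figure in the Results can be reproduced with a short calculation. A minimal sketch, assuming each of the 20 variable components has 4 independent display states (e.g., hidden, visible, highlighted, labeled — an assumption for illustration, since the abstract does not enumerate the states):

```python
# Sketch of the combinatorics behind the reported 1.1 × 10^12 figure.
# Assumption (not stated in the abstract): each of the 20 variable
# anatomic components has 4 independent display states.
components = 20
states_per_component = 4

combinations = states_per_component ** components
print(combinations)            # 1099511627776
print(f"{combinations:.1e}")   # 1.1e+12
```

Under this assumption, 4^20 ≈ 1.1 × 10^12 matches the value reported per DICOM series.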
