An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training
Author(s) - Loukas Constantinos, Lahanas Vasileios, Georgiou Evangelos
Publication year - 2013
Publication title - The International Journal of Medical Robotics and Computer Assisted Surgery
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.556
H-Index - 53
eISSN - 1478-596X
pISSN - 1478-5951
DOI - 10.1002/rcs.1485
Subject(s) - augmented reality, computer vision, computer science, artificial intelligence, pose, rendering (computer graphics), virtual reality, tracking
Background: Despite the popular use of virtual and physical reality simulators in laparoscopic training, the educational potential of augmented reality (AR) has received little attention. A major challenge is the robust tracking and three-dimensional (3D) pose estimation of the endoscopic instrument, which are essential both for interaction with the virtual world and for realistic rendering when the virtual scene is occluded by the instrument. In this paper we propose a method that addresses these issues based solely on visual information from the endoscopic camera.

Methods: Two tracking algorithms are combined to estimate the 3D pose of the surgical instrument with respect to the camera. The first tracker builds an adaptive model of a colour strip attached to the distal part of the tool, close to the tip. The second tracks the instrument shaft using a combined Hough–Kalman approach. The 3D pose is then estimated by perspective geometry from measurements extracted by the two trackers.

Results: The method was validated on several complex image sequences for tracking efficiency, pose estimation accuracy and applicability to AR-based training. Using a standard endoscopic camera, the absolute average error of the tip position was 2.5 mm for working distances commonly found in laparoscopic training. The average error of the instrument's angle with respect to the camera plane was approximately 2°. The results are supplemented by video segments of laparoscopic training tasks performed in a physical and an AR environment.

Conclusions: The experiments yielded promising results regarding the potential of AR technologies for laparoscopic skills training based on a computer vision framework. The issue of occlusion handling was adequately addressed. The estimated instrument trajectories may also be used for surgical gesture interpretation and assessment. Copyright © 2013 John Wiley & Sons, Ltd.
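As an illustration of the combined Hough–Kalman shaft tracking described in the Methods, the following is a minimal sketch using OpenCV: Canny edges feed a standard Hough line transform, and a constant-velocity Kalman filter over the line parameters (rho, theta) smooths detections and bridges brief dropouts. This is an assumed pipeline for exposition only, not the authors' implementation; all thresholds and the crude detection-to-prediction matching are illustrative choices.

```python
# Sketch of a Hough-Kalman shaft tracker (illustrative, not the paper's code):
# detect the instrument shaft as a dominant image line, then smooth the line
# parameters (rho, theta) with a constant-velocity Kalman filter.
import cv2
import numpy as np

# State: (rho, theta, d_rho, d_theta); measurement: (rho, theta).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_shaft(frame_gray):
    """Return a smoothed (rho, theta) estimate of the shaft line."""
    prediction = kf.predict()
    edges = cv2.Canny(frame_gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is not None:
        # Crude association: pick the detection closest to the prediction
        # (rho in pixels and theta in radians are mixed here for brevity).
        rho_p, theta_p = prediction[0, 0], prediction[1, 0]
        best = min(lines[:, 0, :],
                   key=lambda l: abs(l[0] - rho_p) + abs(l[1] - theta_p))
        kf.correct(np.array([[best[0]], [best[1]]], np.float32))
        return best[0], best[1]
    # No detection this frame: fall back to the Kalman prediction,
    # which is what lets the tracker ride out brief occlusions.
    return prediction[0, 0], prediction[1, 0]
```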
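For the perspective geometry step, one simple formulation (an assumption here; the abstract does not spell out the authors' equations) recovers depth from the apparent pixel width of the colour strip under a calibrated pinhole model, Z = f * W / w, and then back-projects the tracked strip centre to camera coordinates. The function name, parameters and the example values below are all hypothetical.

```python
# Sketch of depth recovery by perspective geometry (assumed formulation,
# not necessarily the paper's): a strip of known physical width W that
# images to w pixels lies at depth Z = f * W / w in a pinhole camera.
import numpy as np

def tip_position_3d(strip_center_px, strip_width_px,
                    fx, fy, cx, cy, strip_width_mm):
    """Back-project the tracked strip centre to camera coordinates.

    strip_center_px: (u, v) pixel coordinates of the colour strip centre.
    strip_width_px:  apparent strip width in pixels.
    fx, fy, cx, cy:  camera intrinsics from a standard calibration.
    strip_width_mm:  known physical width of the strip on the instrument.
    """
    z = fx * strip_width_mm / strip_width_px   # depth from apparent size
    u, v = strip_center_px
    x = (u - cx) * z / fx                      # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example (hypothetical numbers): a 10 mm strip imaged 52 px wide
# at pixel (400, 260) with an 800 px focal length.
print(tip_position_3d((400, 260), 52.0, fx=800, fy=800,
                      cx=320, cy=240, strip_width_mm=10.0))
```

Combining this depth estimate with the shaft line direction from the Hough–Kalman tracker is one plausible way to obtain the instrument's angle with respect to the camera plane reported in the Results.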