WE‐AB‐BRA‐12: Virtual Endoscope Tracking for Endoscopy‐CT Image Registration
Author(s) - Ingram W, Yang J, Beadle B, Rao A, Wendt R, Court L
Publication year - 2015
Publication title - Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1118/1.4925865
Subject(s) - imaging phantom , computer vision , endoscope , artificial intelligence , endoscopy , computer science , virtual image , image registration , nuclear medicine , medicine , radiology
Purpose: The use of endoscopy in radiotherapy will remain limited until endoscopic video can be registered to CT using standard clinical equipment. In this phantom study we tested a registration method that uses virtual endoscopy to measure CT-space positions from endoscopic video.

Methods: Our phantom is a contorted clay cylinder with 2-mm-diameter markers in the luminal surface; these markers are visible on both CT and endoscopic video. Virtual endoscope images were rendered from a polygonal mesh created by segmenting the phantom's luminal surface on CT. We tested registration accuracy by tracking the endoscope's 6-degree-of-freedom coordinates frame to frame in a video recorded as the endoscope moved through the phantom, then using these coordinates to measure the CT-space positions of markers visible in the final frame. To track the endoscope we used the Nelder-Mead method to search for the coordinates that render the virtual frame most similar to the next recorded frame. We measured the endoscope's initial-frame coordinates using a set of visible markers, and for image similarity we used a combination of mutual information and gradient alignment. CT-space marker positions were measured by projecting their final-frame pixel addresses through the virtual endoscope to intersect the mesh. Registration error was quantified as the distance between this intersection and the marker's manually selected CT-space position.

Results: Tracking succeeded for 6 of 8 videos, with a mean registration error of 4.8 ± 3.5 mm (24 measurements total). The mean error in the axial direction (3.1 ± 3.3 mm) was larger than in the sagittal (2.0 ± 2.3 mm) or coronal (1.7 ± 1.6 mm) directions. In the other 2 videos, the virtual endoscope became trapped in a false minimum.

Conclusion: Our method can successfully track the position and orientation of an endoscope, and it provides accurate spatial mapping from endoscopic video to CT.
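The frame-to-frame pose search described in the Methods can be sketched in Python. This is an illustrative sketch, not the authors' implementation: `render_virtual_frame` is a hypothetical stand-in for the CT-mesh renderer, the 6-vector pose parameterization and histogram bin count are assumptions, and the gradient-alignment term of the similarity metric is omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two grayscale images in [0, 1]."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0.0, 1.0], [0.0, 1.0]])
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def track_frame(pose0, recorded_frame, render_virtual_frame):
    """Nelder-Mead search for the 6-DoF pose whose virtual rendering is
    most similar to the recorded frame, starting from the previous pose."""
    objective = lambda pose: -mutual_information(
        render_virtual_frame(pose), recorded_frame)
    result = minimize(objective, pose0, method="Nelder-Mead",
                      options={"xatol": 1e-3, "fatol": 1e-4})
    return result.x
```

Run once per video frame, each solution seeds the next frame's search; as the abstract notes, a poor similarity landscape can trap this kind of local search in a false minimum.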
This method will serve as a foundation for an endoscopy‐CT registration framework that is clinically valuable and requires no specialized equipment.
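The marker-projection step, back-projecting a final-frame pixel through the virtual endoscope until it intersects the CT-derived mesh, can be sketched with a pinhole camera model and the standard Moller-Trumbore ray/triangle test. This is an illustrative sketch, not the paper's code: the pinhole parameters (`focal`, `center`) and the rotation-plus-position pose representation are assumptions.

```python
import numpy as np

def pixel_ray(pixel, R, t, focal, center):
    """Back-project an image pixel into a world-space (CT-space) ray for a
    pinhole camera with rotation matrix R (3x3) and position t (3,).
    The pinhole model here is an assumption, not the paper's camera model."""
    d_cam = np.array([(pixel[0] - center[0]) / focal,
                      (pixel[1] - center[1]) / focal,
                      1.0])
    d = R @ d_cam
    return t, d / np.linalg.norm(d)

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns the hit point in world space, or None if the ray misses."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:            # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv            # distance along the ray
    return origin + t * direction if t > eps else None
```

Looping `ray_triangle_intersect` over the mesh triangles and keeping the nearest hit gives the CT-space marker position; the reported registration error is then the Euclidean distance from that hit to the manually selected marker position.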