Lung tumor tracking in fluoroscopic video based on optical flow
Author(s) - Xu Qianyi, Hamilton Russell J., Schowengerdt Robert A., Alexander Brian, Jiang Steve B.
Publication year - 2008
Publication title - Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1118/1.3002323
Subject(s) - fiducial marker, optical flow, centroid, computer vision, computer science, tracking (education), artificial intelligence, fluoroscopy, position (finance), multileaf collimator, medical imaging, nuclear medicine, medicine, radiation therapy, radiology, radiation treatment planning, image (mathematics), psychology, pedagogy, finance, economics
Respiratory gating and tumor tracking for dynamic multileaf collimator delivery require accurate, real-time localization of the lung tumor position during treatment. Deriving tumor position from external surrogates such as abdominal surface motion may carry large uncertainties because of intra- and interfraction variations in the correlation between the external surrogates and internal tumor motion. Implanted fiducial markers can be used to track tumors fluoroscopically in real time with sufficient accuracy; however, bronchoscopic implantation of fiducials may not be a practical procedure. In this work, a method based on an optical flow algorithm is presented to track the lung tumor mass or relevant anatomic features projected in fluoroscopic images without implanted fiducial markers. The algorithm generates the centroid position of the tracked target and ignores shape changes of the tumor mass shadow. Tracking starts with a segmented tumor projection in an initial image frame. The optical flow between this frame and each incoming frame acquired during treatment delivery is then computed to provide an initial estimate of the tumor centroid displacement. The tumor contour in the initial frame is transferred to the incoming frames based on the average of the motion vectors, and its position in each incoming frame is determined by fine-tuning the contour position with a template matching algorithm over a small search range. The tracking results were validated by comparing them with clinician-determined contours on each frame. The position difference in 95% of the frames was found to be less than 1.4 pixels (∼0.7 mm) in the best case and 2.8 pixels (∼1.4 mm) in the worst case for the five patients studied.
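
The following is a minimal sketch of the markerless tracking idea summarized in the abstract, not the authors' implementation. It assumes grayscale fluoroscopic frames as NumPy arrays and a tumor contour segmented on a reference frame; OpenCV's Farneback dense optical flow and normalized cross-correlation template matching stand in for whatever specific optical flow and matching methods the paper used, and all function and parameter names are illustrative.

import numpy as np
import cv2


def track_tumor_centroid(ref_frame, contour, incoming_frames, search_range=5):
    """Estimate the tumor centroid in each incoming fluoroscopic frame.

    ref_frame       : 2-D uint8 reference image containing the segmented tumor.
    contour         : (N, 2) array of (x, y) contour points on ref_frame.
    incoming_frames : iterable of 2-D uint8 frames acquired during delivery.
    search_range    : half-width in pixels of the template-matching search.
    """
    pts = contour.astype(np.int32)

    # Binary mask of the tumor region and its bounding box on the reference frame.
    mask = np.zeros(ref_frame.shape, np.uint8)
    cv2.fillPoly(mask, [pts], 255)
    x, y, w, h = cv2.boundingRect(pts)
    template = ref_frame[y:y + h, x:x + w]
    centroid0 = contour.mean(axis=0)          # reference centroid (x, y)

    centroids = []
    for frame in incoming_frames:
        # 1) Dense optical flow between the reference frame and the incoming frame.
        flow = cv2.calcOpticalFlowFarneback(
            ref_frame, frame, None, 0.5, 3, 15, 3, 5, 1.2, 0)

        # 2) Average the motion vectors inside the tumor contour to get an
        #    initial estimate of the centroid displacement.
        dx = flow[..., 0][mask > 0].mean()
        dy = flow[..., 1][mask > 0].mean()

        # 3) Fine-tune by template matching over a small search window centered
        #    on the optical-flow estimate of the shifted bounding box.
        cx = int(round(x + dx))
        cy = int(round(y + dy))
        x0 = max(cx - search_range, 0)
        y0 = max(cy - search_range, 0)
        x1 = min(cx + w + search_range, frame.shape[1])
        y1 = min(cy + h + search_range, frame.shape[0])
        window = frame[y0:y1, x0:x1]
        score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(score)   # best = (x, y) of strongest match

        # Centroid = reference centroid shifted by the refined displacement.
        shift = np.array([x0 + best[0] - x, y0 + best[1] - y], dtype=float)
        centroids.append(centroid0 + shift)

    return np.array(centroids)

Because the flow-based estimate restricts template matching to a few pixels around the predicted position, the per-frame cost stays small enough to be compatible with the real-time requirement described above (assuming frame rates typical of fluoroscopy).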
