Premium
WE‐G‐207‐06: 3D Fluoroscopic Image Generation From Patient‐Specific 4DCBCT‐Based Motion Models Derived From Physical Phantom and Clinical Patient Images
Author(s) -
Dhou S,
Cai W,
Hurwitz M,
Williams C,
Rottmann J,
Mishra P,
Myronakis M,
Cifter F,
Berbeco R,
Ionascu D,
Lewis J
Publication year - 2015
Publication title -
Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1118/1.4926099
Subject(s) - imaging phantom , cone beam computed tomography , medical imaging , ground truth , computer science , fluoroscopy , image guided radiation therapy , artificial intelligence , image registration , computer vision , nuclear medicine , medicine , radiology , computed tomography
Purpose: Respiratory‐correlated cone‐beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra‐ and inter‐fractional changes. We develop a method to generate patient‐specific motion models from 4DCBCT images acquired with existing clinical equipment, and use these models to generate time‐varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery.

Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by iteratively optimizing the PCA coefficients through comparison of cone‐beam projections, which simulate kV treatment imaging, with digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error relative to manually defined ground truth positions.

Results: 4DCBCT‐based motion models were derived and used to generate 3D fluoroscopic images at treatment time. Across four patient datasets, the average tumor localization error and the 95th percentile error were 1.57 and 3.13, respectively. Across two physical phantom datasets, the average tumor localization error and the 95th percentile error were 1.14 and 2.78, respectively. 4DCBCT motion models performed well in the context of generating 3D fluoroscopic images because of their ability to reproduce anatomical changes at treatment time.

Conclusion: This study showed the feasibility of deriving 4DCBCT‐based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings.
4DCBCT‐based motion models were found to account for the 3D non‐rigid motion of the patient anatomy during treatment and have the potential to localize tumors and other anatomical structures at treatment time, even when inter‐fractional changes occur. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA. The project was also supported, in part, by Award Number R21CA156068 from the National Cancer Institute.
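The PCA motion-model step described in the Methods can be sketched in a few lines. This is a minimal illustration with synthetic data, not the authors' implementation: it assumes displacement vector fields (DVFs) from registering each 4DCBCT phase to a reference phase have been flattened into row vectors, and shows how PCA yields a low-dimensional model whose coefficients parameterize a reconstructed DVF (the quantities the paper then optimizes against kV projections and DRRs). All array sizes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_phases = 10    # respiratory phases in the 4DCBCT
n_voxels = 500   # tiny synthetic grid (real volumes have millions of voxels)

# Each row: one phase's DVF flattened to (3 * n_voxels,) for x/y/z displacements.
# Here the DVFs are random stand-ins for deformable-registration output.
dvfs = rng.normal(size=(n_phases, 3 * n_voxels))

# PCA via SVD of the mean-centered DVF matrix.
mean_dvf = dvfs.mean(axis=0)
centered = dvfs - mean_dvf
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

n_components = 3                  # a few modes typically capture breathing motion
eigenvectors = Vt[:n_components]  # principal motion modes, shape (3, 3*n_voxels)

def dvf_from_coefficients(w):
    """Reconstruct a DVF from PCA coefficients w."""
    return mean_dvf + w @ eigenvectors

# Projecting a training phase onto the modes and reconstructing recovers much
# of its displacement field; zero coefficients give back the mean DVF.
w0 = eigenvectors @ centered[0]
recon = dvf_from_coefficients(w0)
residual = np.linalg.norm(recon - dvfs[0]) / np.linalg.norm(dvfs[0])
```

In the paper's pipeline, `w` would be tuned iteratively so that DRRs computed from the deformed reference volume match the measured cone-beam projections; that forward-projection and optimization loop is beyond this sketch.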