TU‐G‐211‐03: Automatic Segmentation of Non‐Small Cell Lung Carcinoma Using 3D Texture Features in Co‐Registered FDG PET/CT Images
Author(s) - Markel D, Caldwell C, Alasti H, Sun A, Soliman H, Lee J, Ung Y, McGhee P, Webster D
Publication year - 2011
Publication title - Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1118/1.3613253
Subject(s) - segmentation, artificial intelligence, ground truth, pattern recognition (psychology), skewness, nuclear medicine, computer science, concordance, sørensen–dice coefficient, feature (linguistics), computer aided diagnosis, medicine, mathematics, image segmentation, statistics, linguistics, philosophy
Purpose: To evaluate the use of a combination of FDG-PET/CT features to improve automated segmentation of the gross tumor volume (GTV) in the thorax, in order to reduce target-definition uncertainty in radiotherapy.

Methods: Features of co-registered FDG-PET/CT images of patients with non-small cell lung carcinoma (NSCLC) were investigated using spatial gray-level dependence matrices, neighborhood gray tone difference matrices, Tamura textures, first-order statistics, and structural characteristics. A training data set of PET and CT scans from 21 patients diagnosed with NSCLC was used. Feature samples were taken from regions of interest that included the GTV, positive nodes, and healthy structures of the thorax. A decision tree incorporating KNN classifiers as its nodes (DT-KNN) was trained to segment GTVs, using an exhaustive search at each node for the combination of features with the highest area under the curve (AUC). A validation set of 10 patients deemed difficult to contour was used, and a probabilistic ground truth was derived from a combination of three observer contours using simultaneous truth and performance level estimation (STAPLE).

Results: The concordance index of the three observers averaged 0.370. CT skewness and PET coarseness were found to be the most useful discriminators when evaluated independently, with AUCs of 0.705 and 0.972, respectively. Evaluation of the segmentation results using Dice coefficients showed that the DT-KNN outperformed a variety of threshold-based methods, including signal-to-background ratio thresholding, as well as an implementation of the 3-FLAB algorithm. Dice coefficients for the DT-KNN averaged 0.65 and reached as high as 0.84.

Conclusions: Incorporating texture features from both modalities improves segmentation accuracy over approaches that use each modality independently. The largest sources of error were misregistration of the PET and CT volumes and blurring of the PET images due to internal motion.
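The two single-feature discriminators highlighted in the results, CT skewness (a first-order statistic) and PET coarseness (a neighborhood gray tone difference matrix, NGTDM, texture), can be illustrated with a minimal 2D sketch. The code below is not the authors' implementation; it assumes NumPy/SciPy, a pre-extracted image patch, and a simple fixed-bin quantization, and the neighborhood handling at patch borders is only approximate.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import skew


def roi_skewness(patch):
    """First-order skewness of the intensity distribution within an ROI patch."""
    return skew(np.asarray(patch, dtype=float).ravel())


def ngtdm_coarseness(patch, levels=32, window=3, eps=1e-12):
    """Approximate NGTDM coarseness (Amadasun & King) for a 2D patch.

    Intensities are quantized to `levels` gray levels; each voxel is compared
    with the mean of its neighborhood with the center voxel excluded.
    """
    p = np.asarray(patch, dtype=float)
    # Quantize intensities to discrete gray levels 0..levels-1
    q = np.floor((p - p.min()) / (np.ptp(p) + eps) * levels)
    q = q.clip(0, levels - 1).astype(int)

    # Neighborhood mean excluding the center voxel (exact in the interior,
    # approximate at the borders because of the reflected padding)
    n = window * window
    mean_incl = uniform_filter(q.astype(float), size=window, mode="reflect")
    mean_excl = (mean_incl * n - q) / (n - 1)

    diff = np.abs(q - mean_excl)
    total = q.size
    p_i = np.zeros(levels)  # gray-level probabilities
    s_i = np.zeros(levels)  # summed absolute differences per gray level
    for i in range(levels):
        mask = q == i
        p_i[i] = mask.sum() / total
        s_i[i] = diff[mask].sum()

    # Coarseness: reciprocal of the probability-weighted sum of differences
    return 1.0 / (eps + np.dot(p_i, s_i))
```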
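Each node of the decision tree is described as a KNN classifier whose input features are chosen by an exhaustive search scored by AUC. A hedged sketch of that selection step is given below; it uses scikit-learn and cross-validated probabilities as assumed conveniences, since the original implementation details are not specified in the abstract.

```python
import itertools
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict


def best_feature_combination(X, y, k=5, combo_size=2, cv=5):
    """Exhaustively search feature combinations, scoring each KNN node by AUC.

    X : (n_samples, n_features) matrix of PET/CT texture features
    y : binary labels (1 = tumor/GTV sample, 0 = healthy structure)
    Returns the best feature-index tuple and its cross-validated AUC.
    """
    best_idx, best_auc = None, -np.inf
    for idx in itertools.combinations(range(X.shape[1]), combo_size):
        knn = KNeighborsClassifier(n_neighbors=k)
        # Cross-validated class probabilities avoid an optimistic AUC estimate
        proba = cross_val_predict(knn, X[:, idx], y, cv=cv,
                                  method="predict_proba")[:, 1]
        auc = roc_auc_score(y, proba)
        if auc > best_auc:
            best_idx, best_auc = idx, auc
    return best_idx, best_auc
```

In a tree of such nodes, the winning combination would be fixed at each node and the training samples routed to child nodes according to the node's decision, repeating the search at each level.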
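Segmentation accuracy is reported with the Dice similarity coefficient against the STAPLE consensus. For reference, a minimal Dice computation on binary masks is shown below; thresholding the probabilistic STAPLE estimate at 0.5 is an assumed convention, not one stated in the abstract.

```python
import numpy as np


def dice_coefficient(pred_mask, truth_prob, threshold=0.5, eps=1e-12):
    """Dice similarity coefficient between a binary segmentation and a
    probabilistic ground truth thresholded to a binary mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    truth = np.asarray(truth_prob) >= threshold
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```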