Simultaneous cosegmentation of tumors in PET-CT images using deep fully convolutional networks
Author(s) - Zhong Zisha, Kim Yusung, Plichta Kristin, Allen Bryan G., Zhou Leixin, Buatti John, Wu Xiaodong
Publication year - 2019
Publication title - Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1002/mp.13331
Subject(s) - artificial intelligence, positron emission tomography, graph, nuclear medicine, segmentation, convolutional neural network, PET-CT, pattern recognition (psychology), computer science, deep learning, mathematics, medicine, theoretical computer science
Purpose - To investigate the use and efficiency of three-dimensional (3D) deep fully convolutional networks (DFCN) for simultaneous tumor cosegmentation on dual-modality positron emission tomography (PET)-computed tomography (CT) images of nonsmall cell lung cancer (NSCLC).

Methods - We used DFCN cosegmentation for NSCLC tumors in PET-CT images, considering both the CT and PET information. The proposed DFCN-based cosegmentation method consists of two coupled 3D U-Nets with an encoder-decoder architecture, which communicate with each other to share complementary information between PET and CT. The weighted average sensitivity and positive predictive values (denoted as Scores), dice similarity coefficients (DSCs), and average symmetric surface distances were used to assess the performance of the proposed approach on 60 pairs of PET/CTs. A Simultaneous Truth and Performance Level Estimation Algorithm (STAPLE) consensus of three expert physicians' delineations was used as the reference. The proposed DFCN framework was compared with three graph-based cosegmentation methods.

Results - Strong agreement with the STAPLE references was observed for the proposed DFCN cosegmentation on the PET-CT images. The average DSCs on CT and PET were 0.861 ± 0.037 and 0.828 ± 0.087, respectively, using DFCN, compared with 0.638 ± 0.165 and 0.643 ± 0.141, respectively, using the graph-based cosegmentation method. The proposed DFCN cosegmentation using both PET and CT also outperformed the deep learning method using either PET or CT alone.

Conclusions - The proposed DFCN cosegmentation outperforms existing graph-based segmentation methods and shows promise for further integration with quantitative multimodality imaging tools in clinical trials.
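The dice similarity coefficient (DSC) reported above measures voxel-wise overlap between a predicted tumor mask and the reference delineation, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch in NumPy (the function name and the empty-mask convention are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |pred AND ref| / (|pred| + |ref|), ranging from 0
    (no overlap) to 1 (identical masks).
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement (a common convention).
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

For example, a prediction covering two voxels that shares one voxel with a one-voxel reference yields DSC = 2·1 / (2 + 1) = 2/3. The same function applies unchanged to the 3D masks produced by a volumetric segmentation network.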
