Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation
Author(s) - Wang Bo, Lei Yang, Tian Sibo, Wang Tonghe, Liu Yingzi, Patel Pretesh, Jani Ashesh B., Mao Hui, Curran Walter J., Liu Tian, Yang Xiaofeng
Publication year - 2019
Publication title - Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1002/mp.13416
Subject(s) - artificial intelligence , segmentation , computer science , pattern recognition (psychology) , convolutional neural network , hausdorff distance , discriminative model , deep learning , convolution (computer science) , similarity (geometry) , image segmentation , computer vision , artificial neural network , image (mathematics)
Purpose
Reliable automated segmentation of the prostate is indispensable for image-guided prostate interventions. However, the segmentation task is challenging due to inhomogeneous intensity distributions and variation in prostate anatomy, among other problems. Manual segmentation is time-consuming and subject to inter- and intraobserver variation. We developed an automated deep learning-based method to address this technical challenge.

Methods
We propose a three-dimensional (3D) fully convolutional network (FCN) with deep supervision and group dilated convolution to segment the prostate on magnetic resonance imaging (MRI). A deeply supervised mechanism was introduced into the 3D FCN to alleviate the common exploding or vanishing gradient problems in training deep models, forcing the updates of the hidden-layer filters to favor highly discriminative features. A group dilated convolution, which aggregates multiscale contextual information for dense prediction, was proposed to enlarge the effective receptive field of the convolutional neural network and improve the prediction accuracy of the prostate boundary. In addition, we introduced a combined loss function including cosine and cross-entropy terms, which measures the similarity and dissimilarity between segmented and manual contours, to further improve segmentation accuracy. Prostate volumes manually segmented by experienced physicians were used as the gold standard against which our segmentation accuracy was measured.

Results
The proposed method was evaluated on an internal dataset comprising 40 T2-weighted prostate MR volumes. Our method achieved a Dice similarity coefficient (DSC) of 0.86 ± 0.04, a mean surface distance (MSD) of 1.79 ± 0.46 mm, a 95% Hausdorff distance (95% HD) of 7.98 ± 2.91 mm, and an absolute relative volume difference (aRVD) of 15.65 ± 10.82. A public dataset (PROMISE12) including 50 T2-weighted prostate MR volumes was also used to evaluate our approach, on which the method yielded a DSC of 0.88 ± 0.05, an MSD of 1.02 ± 0.35 mm, a 95% HD of 9.50 ± 5.11 mm, and an aRVD of 8.93 ± 7.56.

Conclusion
We developed a novel deeply supervised deep learning-based approach with group dilated convolution to automatically segment the prostate on MRI, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for image-guided interventions in prostate cancer.
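To make the two ideas named in the Methods concrete, the following is a minimal sketch, assuming PyTorch: a 3D group dilated convolution block that splits the output channels into parallel branches with different dilation rates, and a combined loss mixing voxel-wise cross entropy with a cosine-similarity term. The module and function names, the dilation rates, and the weighting factor alpha are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupDilatedConv3d(nn.Module):
    """Split output channels into groups, apply 3D convolutions with
    different dilation rates in parallel, and concatenate the results
    to enlarge the effective receptive field (dilation rates assumed)."""

    def __init__(self, in_channels, out_channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        assert out_channels % len(dilations) == 0
        branch_channels = out_channels // len(dilations)
        self.branches = nn.ModuleList([
            nn.Conv3d(in_channels, branch_channels, kernel_size=3,
                      padding=d, dilation=d)
            for d in dilations
        ])

    def forward(self, x):
        # Each branch sees a different receptive field; concatenation
        # aggregates multiscale context for dense prediction.
        return torch.cat([branch(x) for branch in self.branches], dim=1)


def combined_loss(logits, target, alpha=0.5):
    """Hedged sketch of a combined loss: voxel-wise cross entropy plus a
    cosine-similarity term between the predicted foreground probability
    map and the binary ground truth (weighting alpha is an assumption)."""
    ce = F.cross_entropy(logits, target)                       # dissimilarity term
    prob_fg = F.softmax(logits, dim=1)[:, 1].flatten(1)        # foreground probability
    gt = target.float().flatten(1)
    cosine = F.cosine_similarity(prob_fg, gt, dim=1).mean()    # similarity term
    return alpha * ce + (1.0 - alpha) * (1.0 - cosine)
```

As a usage note under the same assumptions, `logits` would be the network output of shape (batch, 2, D, H, W) for a binary prostate/background task and `target` the manual contour as a (batch, D, H, W) label volume; in a deeply supervised setup the same loss could also be applied to auxiliary outputs from intermediate decoder stages and summed with the final-layer loss.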