Open Access
Deep3DSCan: Deep residual network and morphological descriptor based framework for lung cancer classification and 3D segmentation
Author(s) -
Bansal Gaurang,
Chamola Vinay,
Narang Pratik,
Kumar Subham,
Raman Sundaresan
Publication year - 2020
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2019.1164
Subject(s) - artificial intelligence , segmentation , computer science , robustness (evolution) , pattern recognition (psychology) , residual , deep learning , image segmentation , computed tomography , radiology , medicine , biochemistry , chemistry , algorithm , gene
With the increasing incidence of lung cancer, early diagnosis could help reduce the mortality rate. However, accurate recognition of cancerous lesions is immensely challenging owing to factors such as low contrast variation, heterogeneity, and the visual similarity between benign and malignant nodules. Deep learning techniques have been very effective at natural image segmentation, with robustness to previously unseen situations, reasonable scale invariance, and the ability to detect even minute differences. However, they usually fail to learn domain‐specific features due to the limited amount of available data and the domain‐agnostic nature of these techniques. This work presents Deep3DSCan, an ensemble framework for lung cancer segmentation and classification. The deep 3D segmentation network generates the 3D volume of interest from computed tomography scans of patients. Deep features and handcrafted descriptors are extracted using a fine‐tuned residual network and morphological techniques, respectively. Finally, the fused features are used for cancer classification. Experiments were conducted on the publicly available LUNA16 dataset. For segmentation, the authors achieved an accuracy of 0.927, a significant improvement over the template matching technique, which had achieved an accuracy of 0.927. For detection, the previous state‐of‐the‐art accuracy was 0.866, while the proposed framework achieves 0.883.
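The fusion step described in the abstract, combining deep features from a fine-tuned residual network with handcrafted morphological descriptors, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the specific descriptors (area, boundary-pixel perimeter, compactness), the 512-dimensional stand-in for ResNet features, and the function names are all assumptions made here for illustration.

```python
import numpy as np

def morphological_descriptors(mask: np.ndarray) -> np.ndarray:
    """Toy handcrafted shape descriptors from a 2D binary nodule mask
    (the paper operates on 3D volumes; 2D is used here for brevity)."""
    area = float(mask.sum())
    # Interior pixels: foreground pixels whose four 4-neighbours are all foreground.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    # Perimeter approximated as the count of foreground pixels on the boundary.
    perimeter = float((mask & ~interior.astype(bool)).sum())
    # Compactness: close to 1 for disc-like regions, lower for irregular shapes.
    compactness = 4.0 * np.pi * area / perimeter ** 2 if perimeter > 0 else 0.0
    return np.array([area, perimeter, compactness])

def fuse_features(deep_features: np.ndarray, descriptors: np.ndarray) -> np.ndarray:
    """Concatenate deep and handcrafted features into one vector for the classifier."""
    return np.concatenate([deep_features, descriptors])

# Toy example: a 5x5 square "nodule" and a random stand-in for ResNet features.
mask = np.zeros((16, 16), dtype=np.uint8)
mask[5:10, 5:10] = 1
deep = np.random.default_rng(0).standard_normal(512)   # placeholder deep features
fused = fuse_features(deep, morphological_descriptors(mask))
```

The fused vector would then be passed to a downstream classifier (e.g. a fully connected layer or a conventional classifier); the abstract does not specify which, so none is assumed here.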
