
Segmentation of the pulmonary nodule and the attached vessels in the CT scan of the chest using morphological features and topological skeleton of the nodule
Author(s) -
Bank Tavakoli Mahsa,
Orooji Mahdi,
Teimouri Mehdi,
Shahabifar Ramita
Publication year - 2020
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2019.1054
Subject(s) - skeleton (computer programming), nodule (geology), topological skeleton, segmentation, computed tomography, radiology, anatomy, topology (electrical circuits), computer science, artificial intelligence, medicine, pattern recognition (psychology), biology, mathematics, combinatorics, paleontology, active shape model
The performance of Computer-Aided Diagnosis systems for the early diagnosis of malignant nodules in a baseline Computed Tomography (CT) scan of the chest crucially depends on the accuracy of the segmented nodule. In this study, the authors introduce a new morphological feature called solidity radius (SR). They then employ this feature in a new framework for the automatic segmentation of the nodule and its attached vessels around a seed point on the nodule, delineated by an expert. In the framework, they extract the SR and curvature features and use them to determine the candidate pixels of the nodule. They then use the convex-hull image of the candidate pixels to surround the nodule area. Afterwards, the attached vessels are labelled by region growing on a Hessian-based vesselness enhancement map. Finally, they apply the traditional solidity feature of the segmented nodule and the pattern of its skeleton to prune false-positive pieces. They validate the introduced approach on two datasets of 56 and 481 CT scans (containing 1205 nodules) and show the proficiency of their SR-based approach compared with state-of-the-art methods, with average Dice Similarity Coefficients of 77.98% and 77.47% for the two datasets, respectively.
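Two of the generic building blocks the abstract mentions, seeded region growing over a vesselness map and the traditional solidity measure (region area divided by convex-hull area), can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the function names, the 4-connectivity choice, the fixed threshold, and the unit-square treatment of pixels are all assumptions for the sketch.

```python
from collections import deque
import numpy as np

def region_grow(vesselness, seed, thresh):
    """Grow a 4-connected region from `seed` over pixels whose vesselness
    response exceeds `thresh` (a simple BFS flood fill; illustrative only)."""
    h, w = vesselness.shape
    mask = np.zeros((h, w), dtype=bool)
    if vesselness[seed] <= thresh:
        return mask
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and vesselness[nr, nc] > thresh:
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

def _cross(o, a, b):
    # z-component of (a - o) x (b - o); sign gives the turn direction
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D integer points."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and _cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def solidity(mask):
    """Traditional solidity: foreground area / convex-hull area, with each
    pixel treated as a unit square (all four corners fed to the hull)."""
    rs, cs = np.nonzero(mask)
    if rs.size == 0:
        return 0.0
    corners = np.concatenate([np.stack([rs + dr, cs + dc], axis=1)
                              for dr in (0, 1) for dc in (0, 1)])
    hull = _convex_hull(corners)
    n = len(hull)
    # shoelace formula for the hull polygon's area
    area = abs(sum(hull[i][0] * hull[(i + 1) % n][1]
                   - hull[(i + 1) % n][0] * hull[i][1]
                   for i in range(n))) / 2.0
    return mask.sum() / area
```

For a convex region (e.g. a filled square) `solidity` returns 1.0, while a plus-shaped region scores below 1, which is the property a solidity-based pruning step exploits to separate compact nodule candidates from elongated vessel pieces.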