Deep feature classification of angiomyolipoma without visible fat and renal cell carcinoma in abdominal contrast‐enhanced CT images with texture image patches and hand‐crafted feature concatenation
Author(s) -
Lee Hansang,
Hong Helen,
Kim Junmo,
Jung Dae Chul
Publication year - 2018
Publication title -
Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1002/mp.12828
Subject(s) - artificial intelligence, random forest, pattern recognition (psychology), computer science, feature (linguistics), deep learning, contrast (vision), feature extraction, receiver operating characteristic, classifier (UML), machine learning, philosophy, linguistics
Purpose - To develop an automatic deep feature classification (DFC) method for distinguishing benign angiomyolipoma without visible fat (AMLwvf) from malignant clear cell renal cell carcinoma (ccRCC) in abdominal contrast-enhanced computed tomography (CECT) images.
Methods - A dataset of 80 abdominal CT images from 39 AMLwvf and 41 ccRCC patients was used. We proposed a DFC method for differentiating small renal masses (SRM) into AMLwvf and ccRCC using a combination of hand-crafted features, deep features, and machine learning classifiers. First, 71-dimensional hand-crafted features (HCF) of texture and shape were extracted from the SRM contours. Second, 1000-4000-dimensional deep features (DF) were extracted by applying ImageNet-pretrained deep learning models to the SRM image patches. In DF extraction, we proposed texture image patches (TIP) to emphasize the texture information inside the mass in the DFs and to reduce mass-size variability. Finally, the two feature sets were concatenated, and a random forest (RF) classifier was trained on the concatenated features to classify the SRM type. The proposed method was tested on our dataset using leave-one-out cross-validation and evaluated using accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). In the experiments, combinations of four deep learning models (AlexNet, VGGNet, GoogLeNet, and ResNet) and four input image patch types (original, masked, mass-size, and texture image patches) were compared and analyzed.
Results - In the qualitative evaluation, we observed the change in feature distributions between the proposed and comparative methods using the t-SNE method.
In the quantitative evaluation, we compared the classification results and observed that (a) the proposed HCF+DF outperformed HCF-only and DF-only, (b) AlexNet generally showed the best performance among the CNN models, and (c) the proposed TIPs not only achieved competitive performance among the input patch types but also performed consistently regardless of the CNN model. As a result, the proposed method achieved an accuracy of 76.6 ± 1.4% for HCF+DF with AlexNet and TIPs, improving accuracy by 6.6%p and 8.3%p over HCF-only and DF-only, respectively.
Conclusions - The proposed shape features and TIPs improved the HCFs and DFs, respectively, and the feature concatenation further enhanced the quality of the features for differentiating AMLwvf from ccRCC in abdominal CECT images.
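The abstract does not include code; the feature-concatenation and leave-one-out evaluation steps it describes can be sketched as follows. This is a minimal illustration using scikit-learn, with random arrays standing in for the real 71-D hand-crafted features and a 1000-D deep feature vector per mass (in the paper, the deep features come from an ImageNet-pretrained CNN applied to texture image patches). The array shapes and classifier settings here are assumptions for demonstration, not the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_patients = 80  # 39 AMLwvf + 41 ccRCC, as in the paper's dataset

# Stand-ins for the extracted features (random data for illustration):
# 71-D hand-crafted texture/shape features (HCF) and a 1000-D deep
# feature vector (DF) per small renal mass.
hcf = rng.normal(size=(n_patients, 71))
df = rng.normal(size=(n_patients, 1000))
labels = np.array([0] * 39 + [1] * 41)  # 0 = AMLwvf, 1 = ccRCC

# Concatenate hand-crafted and deep features into one vector per mass.
features = np.concatenate([hcf, df], axis=1)

# Leave-one-out cross-validation with a random forest classifier:
# each patient is held out once while the RF trains on the rest.
loo = LeaveOneOut()
preds = np.empty(n_patients, dtype=int)
for train_idx, test_idx in loo.split(features):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[train_idx], labels[train_idx])
    preds[test_idx] = clf.predict(features[test_idx])

print(f"LOOCV accuracy: {accuracy_score(labels, preds):.3f}")
```

With random stand-in features the accuracy hovers near chance; substituting real HCFs and CNN-derived DFs is what produces the discriminative performance reported above.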