SU‐D‐207B‐06: Predicting Breast Cancer Malignancy On DCE‐MRI Data Using Pre‐Trained Convolutional Neural Networks
Author(s) -
Antropova N,
Huynh B,
Giger M
Publication year - 2016
Publication title -
Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1118/1.4955674
Subject(s) - convolutional neural network , artificial intelligence , computer science , pattern recognition (psychology) , support vector machine , transfer of learning , deep learning , breast mri , artificial neural network , receiver operating characteristic , segmentation , breast cancer , mammography , machine learning , medicine , cancer
Purpose: We investigate deep learning for the task of distinguishing between malignant and benign breast lesions on dynamic contrast-enhanced MR images (DCE-MRIs), eliminating the need for lesion segmentation and extraction of tumor features. We evaluate a convolutional neural network (CNN) after transfer learning with ImageNet, a database of thousands of non-medical images.

Methods: Under a HIPAA-compliant IRB protocol, a database of 551 breast MRI cases (357 malignant and 194 benign) was collected. ROIs around each lesion were extracted from the DCE-MRI slices at the second post-contrast time point. Depending on tumor size, ROI dimensions varied between 1 and 1.5 times the maximum diameter of the lesion; the ROIs were then upsampled or downsampled to 256×256 pixels. These ROIs were input directly to the convolutional neural network ConvNet, an AlexNet architecture pre-trained on the ImageNet database. An internal layer of ConvNet output 4,096 features, which were subsequently used as input to a support vector machine (SVM) to classify the lesions as malignant or benign. Area under the receiver operating characteristic curve (AUC) served as the figure of merit for the classification task. We performed 10-fold cross-validation with training and testing sets consisting of 90% and 10% of the database, respectively.

Results: The CNN with transfer learning and subsequent SVM yielded an AUC value of 0.85, demonstrating the predictive value of the CNN. Owing to the incorporation of transfer learning, our approach returns a prediction within minutes. In future work, we will compare the ConvNet results with those obtained using conventional tumor radiomics features.

Conclusion: A CNN pre-trained on non-medical images can be used to extract image characteristics from breast DCE-MR images relevant to diagnostic decision-making. Our work shows that transfer learning can aid in the prediction of breast cancer malignancy.

Disclosure: M. Giger is a stockholder in R2/Hologic, co-founder and equity holder in Quantitative Insights, and receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba.
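The classification stage described in the Methods (4,096 CNN features per lesion fed to an SVM, evaluated by AUC under 10-fold cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' code: the 4,096-dimensional feature vectors here are synthetic stand-ins for the AlexNet layer outputs, with a small artificial offset between the two classes, and the case counts match those reported (357 malignant, 194 benign).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_malignant, n_benign, n_features = 357, 194, 4096

# Stand-in for the 4,096 features output by an internal CNN layer:
# malignant cases get a slight mean shift so the classes are separable.
X_mal = rng.normal(0.02, 1.0, size=(n_malignant, n_features))
X_ben = rng.normal(0.00, 1.0, size=(n_benign, n_features))
X = np.vstack([X_mal, X_ben])
y = np.concatenate([np.ones(n_malignant), np.zeros(n_benign)])

# 10-fold cross-validation: each fold trains on ~90% and tests on ~10%,
# scoring by area under the ROC curve, as in the abstract.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(SVC(kernel="linear"), X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC over 10 folds: {aucs.mean():.2f}")
```

In a real pipeline, `X` would come from running each 256×256 ROI through the pre-trained network and reading off the chosen internal layer's activations; only that feature-extraction step differs from the sketch above.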