Multi‐scale attention‐based convolutional neural network for classification of breast masses in mammograms
Author(s) -
Niu Jing,
Li Hua,
Zhang Chen,
Li Dengao
Publication year - 2021
Publication title -
Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1002/mp.14942
Subject(s) - convolutional neural network , artificial intelligence , mammography , breast cancer , pattern recognition (psychology) , feature extraction , digital mammography , feature (linguistics) , computer science , artificial neural network , computer aided diagnosis , cancer , medicine , linguistics , philosophy
Purpose - Breast cancer is the cancer with the highest incidence among women, and early detection can effectively improve patients' survival rates. Mammography is an important method for physicians to screen for breast cancer, but the diagnosis of mammograms depends largely on the physician's clinical experience. Studies have shown that computer‐aided diagnosis techniques can help doctors diagnose breast cancer.

Methods - In this paper, a convolutional neural network is used to classify breast masses in mammograms as benign or malignant. First, multi‐scale residual networks and densely connected networks are used as backbone networks to extract features from global and local image patches. Second, an attention module, the convolutional block attention module (CBAM), is added to the two feature extraction networks to enhance their feature expression ability. Finally, the features of the multi‐scale image patches are fused to classify breast masses as benign or malignant.

Results - On the Digital Database for Screening Mammography (DDSM), the accuracy, sensitivity, and AUC of our method, with corresponding standard deviations, are 0.9626 ± 0.0110, 0.9719 ± 0.0126, and 0.9576 ± 0.0064, respectively. Compared with the commonly used ResNet (AUC = 0.8823 ± 0.0112) and DenseNet (AUC = 0.9141 ± 0.0085), the performance of our method is improved. The proposed method was also trained and validated on the INbreast database, where the accuracy, sensitivity, and AUC, with corresponding standard deviations, are 0.9554 ± 0.0296, 0.9605 ± 0.0228, and 0.9468 ± 0.0085, respectively.

Conclusions - Compared with previous work, the proposed method uses multi‐scale image features, achieves better performance on breast mass patch classification, and can effectively assist physicians in breast cancer diagnosis.
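The CBAM module named in the Methods can be sketched in PyTorch. This is a minimal illustrative implementation following the standard CBAM design (channel attention followed by spatial attention), not the authors' exact code; the reduction ratio of 16 and the 7×7 spatial kernel are assumed defaults, and the module would be inserted after convolutional blocks of the ResNet/DenseNet backbones.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over global average- and max-pooled features."""

    def __init__(self, channels, reduction=16):  # reduction ratio is an assumed default
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling -> MLP
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling -> same MLP
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale                     # reweight channels


class SpatialAttention(nn.Module):
    """Spatial attention: conv over channel-wise average and max maps."""

    def __init__(self, kernel_size=7):  # 7x7 kernel is an assumed default
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale                     # reweight spatial locations


class CBAM(nn.Module):
    """Convolutional block attention module: channel then spatial attention."""

    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))


# Usage: refine a feature map without changing its shape,
# so the module can be dropped between backbone stages.
feats = torch.randn(2, 64, 28, 28)
refined = CBAM(64)(feats)  # same shape as feats
```

Because CBAM preserves the feature-map shape, it can be placed after any convolutional block of either backbone, which is how such attention modules are typically used to strengthen feature expression before the multi-scale features are fused.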
