Breast tumor segmentation in 3D automatic breast ultrasound using Mask scoring R‐CNN
Author(s) - Yang Lei, Xiuxiu He, Jincao Yao, Tonghe Wang, Lijing Wang, Wei Li, Walter J. Curran, Tian Liu, Dong Xu, Xiaofeng Yang
Publication year - 2021
Publication title - Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1002/mp.14569
Subject(s) - jaccard index , hausdorff distance , artificial intelligence , segmentation , convolutional neural network , breast ultrasound , computer science , breast cancer , pattern recognition (psychology) , sørensen–dice coefficient , breast imaging , computer aided diagnosis , image segmentation , mammography , medicine , cancer
Purpose
Automatic breast ultrasound (ABUS) imaging has become an essential tool in breast cancer diagnosis, since it provides information complementary to other imaging modalities. Lesion segmentation on ABUS is a prerequisite step for breast cancer computer-aided diagnosis (CAD). This work aims to develop a deep learning-based method for automatic breast tumor segmentation in three-dimensional (3D) ABUS images.

Methods
For breast tumor segmentation in ABUS, we developed a Mask scoring region-based convolutional neural network (R-CNN) consisting of five subnetworks: a backbone, a region proposal network, an R-CNN head, a mask head, and a mask score head. A network block that builds a direct correlation between mask quality and region class was integrated into the Mask scoring R-CNN framework to segment new ABUS images with ambiguous regions of interest (ROIs). For segmentation accuracy evaluation, we retrospectively investigated 70 patients with breast tumors confirmed by needle biopsy and manually delineated on ABUS, of which 40 were used for fivefold cross-validation and 30 for a hold-out test. The agreement between the automatic breast tumor segmentations and the manual contours was quantified by (i) six metrics: Dice similarity coefficient (DSC), Jaccard index, 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD); and (ii) Pearson correlation analysis and Bland–Altman analysis.

Results
The mean (median) DSC was 85% ± 10.4% (89.4%) for cross-validation and 82.1% ± 14.5% (85.6%) for the hold-out test. The corresponding HD95, MSD, RMSD, and CMD of the two tests were 1.646 ± 1.191 and 1.665 ± 1.129 mm, 0.489 ± 0.406 and 0.475 ± 0.371 mm, 0.755 ± 0.755 and 0.751 ± 0.508 mm, and 0.672 ± 0.612 and 0.665 ± 0.729 mm, respectively. The mean volumetric difference (mean and ±1.96 standard deviation limits) was 0.47 cc ([−0.77, 1.71]) for cross-validation and 0.23 cc ([−0.23, 0.69]) for the hold-out test.

Conclusion
We developed a novel Mask scoring R-CNN approach for automated segmentation of breast tumors in ABUS images and demonstrated its segmentation accuracy. Our learning-based method can potentially assist clinical CAD of breast cancer using 3D ABUS imaging.
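
The "mask score head" described in the Methods follows the mask scoring idea of learning to predict the IoU between a predicted mask and its ground truth, and using that prediction to re-weight the classification confidence so the final score reflects mask quality rather than region class alone. The following is a minimal, hypothetical PyTorch sketch of such a mask-IoU head; the class name, channel sizes, feature-map resolutions, and layer counts are illustrative assumptions and do not reproduce the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskIoUHead(nn.Module):
    """Hypothetical mask-IoU (mask scoring) head.

    Takes the RoI-aligned feature map concatenated with the predicted mask
    and regresses the IoU between the predicted and ground-truth masks; at
    inference this regressed IoU multiplies the classification score.
    """

    def __init__(self, in_channels=256, num_classes=2):
        super().__init__()
        # +1 channel for the down-sampled predicted mask appended to the RoI feature.
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels + 1, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 7 * 7, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, num_classes),   # one IoU estimate per class
        )

    def forward(self, roi_feat, mask_pred):
        # roi_feat: (N, C, 14, 14) RoI-aligned features from the backbone
        # mask_pred: (N, 1, 28, 28) sigmoid mask probabilities from the mask head
        mask_small = F.max_pool2d(mask_pred, kernel_size=2, stride=2)  # -> (N, 1, 14, 14)
        x = torch.cat([roi_feat, mask_small], dim=1)
        return self.fc(self.conv(x))        # predicted mask IoU per class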
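
The overlap and surface-distance metrics reported in the Results (DSC, Jaccard index, HD95, MSD, RMSD, CMD) can be computed directly from binary masks with NumPy and SciPy. The snippet below is a small sketch assuming known voxel spacing in millimeters; the helper names and the convention of pooling both directed surface distances before taking the 95th percentile are choices of this sketch, not necessarily those of the paper.

import numpy as np
from scipy import ndimage

def dice_and_jaccard(pred, gt):
    """Dice similarity coefficient and Jaccard index of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    jac = inter / np.logical_or(pred, gt).sum()
    return dsc, jac

def surface_distances(pred, gt, spacing):
    """Symmetric surface-to-surface distances (mm) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: each mask minus its binary erosion.
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    # Euclidean distance transforms scaled by voxel spacing.
    dt_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    return np.concatenate([dt_gt[pred_surf], dt_pred[gt_surf]])

# Example usage with hypothetical 3D boolean arrays pred_mask, gt_mask:
# d = surface_distances(pred_mask, gt_mask, spacing=(1.0, 0.5, 0.5))
# hd95 = np.percentile(d, 95)            # 95% Hausdorff distance
# msd = d.mean()                         # mean surface distance
# rmsd = np.sqrt((d ** 2).mean())        # residual mean square distance
# com_p = np.array(ndimage.center_of_mass(pred_mask)) * (1.0, 0.5, 0.5)
# com_g = np.array(ndimage.center_of_mass(gt_mask)) * (1.0, 0.5, 0.5)
# cmd = np.linalg.norm(com_p - com_g)    # center of mass distance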
