Open Access
DBFU-Net: Double branch fusion U-Net with hard example weighting train strategy to segment retinal vessel
Author(s) -
Jianping Huang,
Zefang Lin,
Yingyin Chen,
Xiao Zhang,
Wei Zhao,
Jie Zhang,
Yong Li,
Xu He,
Meixiao Zhan,
Ligong Lu,
Xiaofei Jiang,
Yongjun Peng
Publication year - 2022
Publication title -
PeerJ Computer Science
Language(s) - English
Resource type - Journals
ISSN - 2376-5992
DOI - 10.7717/peerj-cs.871
Subject(s) - weighting, net (polyhedron), computer science, fusion, artificial intelligence, mathematics, physics, geometry, linguistics, philosophy, acoustics
Abstract -
Background: Many fundus imaging modalities measure ocular changes. Automatic retinal vessel segmentation (RVS) is an important fundus image-based method for diagnosing ophthalmologic diseases. However, precise vessel segmentation is challenging when detecting micro-changes in fundus images, e.g., tiny vessels, vessel edges, vessel lesions and optic disc edges.
Methods: In this paper, we introduce a novel double branch fusion U-Net model in which one branch is trained with a weighting scheme that emphasizes harder examples to improve overall segmentation performance. This weighting strategy requires a new mask, which we call the hard example mask, and it differs from other methods in how that mask is obtained: our method extracts the hard example mask with morphological operations, so no rough segmentation model is needed. To alleviate overfitting, we also propose a random channel attention (RCA) mechanism that outperforms dropout and L2 regularization in RVS.
Results: We evaluated the proposed approach on the DRIVE, STARE and CHASE datasets. Compared with existing approaches on these datasets, it achieves competitive performance (DRIVE: F1-score = 0.8289, G-mean = 0.8995, AUC = 0.9811; STARE: F1-score = 0.8501, G-mean = 0.9198, AUC = 0.9892; CHASE: F1-score = 0.8375, G-mean = 0.9138, AUC = 0.9879).
Discussion: The segmentation results show that DBFU-Net with RCA achieves competitive performance on three RVS datasets. In addition, the proposed morphology-based extraction of hard examples reduces computational cost. Finally, the random channel attention mechanism proves more effective than other regularization methods for RVS.
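The Methods paragraph above states that the hard example mask is obtained purely by morphology, without a rough segmentation model. The abstract does not give the exact operations, so the following is only a minimal sketch, assuming that "hard examples" are vessel edges and thin vessels derived from the ground-truth label map; the function name hard_example_mask and the parameters edge_width and thin_kernel are illustrative, not from the paper.

```python
# Sketch of a morphology-based hard-example mask (assumed mechanism, not the
# authors' exact recipe): hard pixels = vessel edges + thin vessels, both
# computed directly from the ground-truth vessel labels.
import numpy as np
from scipy.ndimage import binary_erosion, binary_opening

def hard_example_mask(vessel_gt: np.ndarray,
                      edge_width: int = 1,
                      thin_kernel: int = 3) -> np.ndarray:
    """Return a binary mask marking putative hard pixels of a vessel label map.

    vessel_gt  : 2-D binary ground-truth vessel mask (1 = vessel).
    edge_width : erosion radius used to expose vessel boundaries.
    thin_kernel: opening kernel size; vessels thinner than this count as hard.
    """
    vessel_gt = vessel_gt.astype(bool)

    # Vessel edges: pixels removed by a small erosion of the vessel mask.
    edge_se = np.ones((2 * edge_width + 1, 2 * edge_width + 1), dtype=bool)
    edges = vessel_gt & ~binary_erosion(vessel_gt, structure=edge_se)

    # Thin vessels: structures that disappear under a morphological opening.
    open_se = np.ones((thin_kernel, thin_kernel), dtype=bool)
    thin = vessel_gt & ~binary_opening(vessel_gt, structure=open_se)

    # Union of both; only the labels are needed, no rough segmentation model.
    return (edges | thin).astype(np.uint8)
```

Because the mask comes from the labels alone, it can be precomputed once per training image, which is consistent with the abstract's claim that the morphological extraction keeps computational cost low.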
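The abstract also names a random channel attention (RCA) mechanism as the regularizer but does not define it. Below is a hedged PyTorch sketch of one plausible reading: squeeze-and-excitation style channel attention whose per-channel weights are randomly switched to pass-through during training. The class name RandomChannelAttention and the parameters reduction and drop_prob are assumptions for illustration only.

```python
# Illustrative sketch of a "random channel attention" block (assumed design):
# channel attention computed as in squeeze-and-excitation, with a random
# subset of channels falling back to identity scaling while training.
import torch
import torch.nn as nn

class RandomChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4, drop_prob: float = 0.2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                              # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                         # per-channel attention weights
        )
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x)                                            # shape (N, C, 1, 1)
        if self.training and self.drop_prob > 0:
            # Randomly disable attention on some channels (weight -> 1, i.e.
            # pass-through) so the network cannot over-rely on any channel.
            keep = (torch.rand_like(w) > self.drop_prob).float()
            w = w * keep + (1.0 - keep)
        return x * w
```

At inference time the module behaves as plain channel attention, which is the sense in which it acts as a training-time regularizer comparable to dropout or L2, as the abstract claims.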
