
Medical image translation using an edge-guided generative adversarial network with global-to-local feature fusion
Author(s) -
Hamed Amini Amirkolaee,
Hamid Amini Amirkolaee
Publication year - 2022
Publication title -
The Journal of Biomedical Research
Language(s) - English
Resource type - Journals
eISSN - 2352-4685
pISSN - 1674-8301
DOI - 10.7555/jbr.36.20220037
Subject(s) - medical imaging , image-to-image translation , generative adversarial network , feature fusion , edge detection , artificial intelligence , computer vision , pattern recognition , computer science
In this paper, we propose a deep learning-based framework for medical image translation that supports both paired and unpaired training data. First, a deep neural network with an encoder-decoder structure is proposed for image-to-image translation using paired training data. A multi-scale context aggregation approach extracts features from different encoding levels, and these features are fused into the corresponding stages of the decoder. Second, we propose an edge-guided generative adversarial network for image-to-image translation using unpaired training data, in which an edge constraint loss function improves network performance at tissue boundaries. To evaluate the framework, we conducted five different medical image translation tasks. The assessment demonstrates that the proposed framework yields significant improvements over state-of-the-art methods.
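The abstract does not specify how the edge constraint loss is computed, but a common formulation penalizes the distance between edge maps of the generated and target images. The sketch below is a minimal illustration of that idea, assuming a Sobel-based edge extractor and an L1 penalty; the paper's actual edge operator, network inputs, and loss weighting may differ.

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel kernels (valid convolution)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)  # combine horizontal and vertical gradients

def edge_constraint_loss(generated, target):
    """Hypothetical edge constraint: mean L1 distance between edge maps,
    encouraging the generator to preserve tissue boundaries."""
    return float(np.mean(np.abs(sobel_edges(generated) - sobel_edges(target))))
```

In a GAN training loop, a term like this would typically be added to the adversarial loss with a tunable weight, so sharp boundaries in the source modality are penalized if they blur or shift in the translated image.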