
CFNet: Context fusion network for multi‐focus images
Author(s) -
Zhang Kang,
Wu Zhiliang,
Yuan Xia,
Zhao Chunxia
Publication year - 2022
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/ipr2.12363
Subject(s) - computer science , artificial intelligence , computer vision , image fusion , pyramid (geometry) , focus (optics) , context , salient , pixel , pattern recognition (psychology) , image (mathematics) , optics
Multi‐focus image fusion aims to generate a single all‐in‐focus image by fusing multiple source images. Existing deep learning‐based fusion methods often neglect context information, which leads to the loss of detail. To address this issue, a context fusion network for merging multi‐focus images, named CFNet, is proposed. Specifically, a context fusion module is designed to make full use of both low‐level pixel features and high‐level semantic features. In particular, a pyramid fusion mechanism and a cross‐scale transfer strategy are adopted to ensure the visual and semantic consistency of the fused image. Meanwhile, a spatial attention mechanism is introduced to extract and enhance salient features more effectively. Further, a pyramid loss is used to progressively refine the fused features at each scale. Experimental results show that the proposed method outperforms several existing methods in both qualitative and quantitative evaluation.
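The pyramid fusion idea mentioned in the abstract echoes classical multi‐scale fusion: decompose each source image into a pyramid, keep at every position and scale the coefficient from whichever source is more in focus there, then reconstruct. The following is a minimal NumPy sketch of that generic idea only, not of the paper's learned CFNet; the box‐filter blur, local‐variance focus measure, and nearest‐neighbour upsampling are all simplifying assumptions chosen for brevity:

```python
import numpy as np

def box_blur(img, k=5):
    # crude box-filter smoothing, standing in for a Gaussian blur
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def local_activity(img, k=5):
    # local variance as a simple focus (activity) measure
    mean = box_blur(img, k)
    return box_blur((img - mean) ** 2, k)

def pyramid_fuse(a, b, levels=3):
    # build detail (Laplacian-like) pyramids and fuse level by level,
    # keeping each coefficient from the more in-focus source
    fused_details = []
    for _ in range(levels):
        la, lb = a - box_blur(a), b - box_blur(b)
        mask = local_activity(a) >= local_activity(b)
        fused_details.append(np.where(mask, la, lb))
        a, b = box_blur(a)[::2, ::2], box_blur(b)[::2, ::2]
    # fuse the coarsest approximation, then reconstruct upward
    out = np.where(local_activity(a) >= local_activity(b), a, b)
    for detail in reversed(fused_details):
        up = np.kron(out, np.ones((2, 2)))  # nearest-neighbour upsample
        out = up[:detail.shape[0], :detail.shape[1]] + detail
    return out
```

CFNet replaces the hand-crafted activity measure and fixed blur above with learned features, adds spatial attention to weight salient regions, and supervises each scale with a pyramid loss; this sketch only illustrates the underlying coarse-to-fine fusion structure.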