Multistage multimodal medical image fusion model using feature‐adaptive pulse coupled neural network
Author(s) - Sneha Singh, Deep Gupta
Publication year - 2021
Publication title - International Journal of Imaging Systems and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.359
H-Index - 47
eISSN - 1098-1098
pISSN - 0899-9457
DOI - 10.1002/ima.22507
Subject(s) - image fusion , medical imaging , artificial intelligence , computer vision , pattern recognition , artificial neural network , wavelet , fusion rules , visualization , computer science
Medical image fusion aims to combine complementary diagnostic details from multiple modalities for better visualization of comprehensive information, improved interpretation of disease, and more informed treatment planning. In this paper, a multistage multimodal fusion model is presented based on the nonsubsampled shearlet transform (NSST), the stationary wavelet transform (SWT), and a feature‐adaptive pulse coupled neural network. First, NSST decomposes the source images into optimally sparse multi‐resolution components, which are further decomposed by SWT. Second, structural features extracted by a weighted sum‐modified Laplacian drive an adaptive model that maps feature weights for fusing the low‐band SWT components, while a texture‐feature‐based rule fuses the high‐band SWT components. The high‐frequency NSST components are fused using an absolute‐maximum and sum‐of‐absolute‐difference based rule to retain complex directional details. Experimental results show that the proposed method produces fused medical images with noticeably better visual quality and improved objective measures than competing methods.
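Two of the building blocks named in the abstract can be sketched concretely: the sum‐modified Laplacian used as a structural feature, and the absolute‐maximum rule applied to high‐frequency subbands. The sketch below is illustrative only and assumes uniform window weights and replicate border padding; the paper's exact weighting scheme and the NSST/SWT decompositions themselves are not reproduced here.

```python
import numpy as np

def modified_laplacian(img):
    # ML(x, y) = |2*I - I_left - I_right| + |2*I - I_up - I_down|,
    # computed with replicate padding at the image borders.
    p = np.pad(img, 1, mode="edge")
    c = p[1:-1, 1:-1]
    mlx = np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
    mly = np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1])
    return mlx + mly

def weighted_sml(img, radius=1):
    # Sum the modified-Laplacian response over a (2r+1)x(2r+1) window.
    # Uniform weights are an assumption for illustration; the paper's
    # "weighted" variant may use a different kernel.
    ml = modified_laplacian(img)
    k = 2 * radius + 1
    p = np.pad(ml, radius, mode="edge")
    out = np.zeros_like(ml)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

def fuse_abs_max(c1, c2):
    # Absolute-maximum rule for high-frequency subbands: at each
    # location, keep the coefficient with the larger magnitude.
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)
```

In a full pipeline, `weighted_sml` scores would feed the adaptive weighting of the low‐band SWT components, while `fuse_abs_max` (combined with a sum‐of‐absolute‐difference criterion) would merge the directional high‐frequency NSST subbands.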