IGM‐based perceptual multimodal medical image fusion using free energy motivated adaptive PCNN
Author(s) -
Tang Lu,
Tian Chuangeng,
Xu Kai
Publication year - 2018
Publication title -
International Journal of Imaging Systems and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.359
H-Index - 47
eISSN - 1098-1098
pISSN - 0899-9457
DOI - 10.1002/ima.22261
Subject(s) - computer science , artificial intelligence , fusion , image fusion , image (mathematics) , computer vision , energy (signal processing) , perception , pattern recognition (psychology) , artificial neural network , mathematics , philosophy , linguistics , statistics , neuroscience , biology
Abstract - Multimodal medical image fusion merges two medical images to produce a visually enhanced fused image that provides more accurate and comprehensive pathological information to doctors for better diagnosis and treatment. In this article, we present a perceptual multimodal medical image fusion method with a free energy (FE) motivated adaptive pulse coupled neural network (PCNN) by employing the Internal Generative Mechanism (IGM). First, source images are divided into predicted layers and detail layers with a Bayesian prediction model. Then, to retain features inspired by the human visual system, FE is used to motivate the PCNN for processing the detail layers, and coefficients with larger firing times are selected. The predicted layers are fused with an averaging strategy as the activity-level measurement. Finally, the fused image is reconstructed by merging the coefficients of both fused layers. Experimental results visually and quantitatively show that the proposed fusion strategy is superior to state‐of‐the‐art methods.
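The abstract's pipeline (two-layer decomposition, PCNN firing-time selection for detail, averaging for the predicted layer, recombination) can be sketched as follows. This is a minimal illustration, not the authors' implementation: a local-mean predictor stands in for the Bayesian prediction model, and a plain simplified PCNN (fixed linking strength) stands in for the FE-motivated adaptive PCNN; all parameter values are illustrative assumptions.

```python
import numpy as np

def decompose(img, k=3):
    """Split an image into a predicted (smooth) layer and a detail layer.
    A local-mean predictor is used here as a stand-in for the paper's
    Bayesian prediction model; pred + detail reconstructs img exactly."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    pred = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            # Predict the center pixel from its neighbors (center excluded).
            pred[i, j] = (win.sum() - padded[i + pad, j + pad]) / (k * k - 1)
    return pred, img - pred

def pcnn_firing_times(stimulus, iters=30, alpha=0.2, beta=0.5, v_theta=20.0):
    """Simplified PCNN: count how often each neuron fires over the iterations.
    Larger firing counts mark salient detail coefficients (a stand-in for
    the FE-motivated adaptive PCNN, whose linking strength is set by FE)."""
    s = np.abs(stimulus)
    theta = np.ones_like(s)            # dynamic threshold
    y = np.zeros_like(s)               # previous output (linking input)
    fires = np.zeros_like(s)
    for _ in range(iters):
        u = s * (1.0 + beta * y)       # internal activity: feeding * (1 + beta * linking)
        y = (u > theta).astype(float)  # neuron fires when activity exceeds threshold
        fires += y
        theta = np.exp(-alpha) * theta + v_theta * y  # decay, then raise where fired
    return fires

def fuse(img_a, img_b):
    """Fuse two registered single-channel images following the abstract's scheme."""
    pa, da = decompose(img_a)
    pb, db = decompose(img_b)
    fused_pred = 0.5 * (pa + pb)                  # averaging for predicted layers
    fa, fb = pcnn_firing_times(da), pcnn_firing_times(db)
    fused_detail = np.where(fa >= fb, da, db)     # keep coefficient with more firings
    return fused_pred + fused_detail              # reconstruct from both fused layers
```

As a sanity check, fusing an image with itself returns the image unchanged, since both layer-fusion rules then reduce to the identity.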
