
Adaptive enhanced affine transformation for non‐rigid registration of visible and infrared images
Author(s) - Min Chaobo, Gu Yan, Li Yingjie, Yang Feng
Publication year - 2021
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/ipr2.12093
Subject(s) - affine transformation , transformation (genetics) , computer vision , image registration , infrared , artificial intelligence , rigid transformation , computer science , adaptive optics , mathematics , optics , image (mathematics) , geometry , physics , chemistry , biochemistry , gene
Non-rigid registration, which performs well in all-weather and all-day/night conditions, directly determines the reliability of visible (VIS) and infrared (IR) image fusion. Because of non-planar scenes and differences between IR and VIS cameras, non-linear transformation models are more helpful for non-rigid image registration than the affine model. However, most non-linear models currently used for non-rigid registration are constructed from control points. To address the limited adaptiveness and generalization of control-point-based models, an adaptive enhanced affine transformation (AEAT) is proposed for image registration, generalizing the affine model from the linear to the non-linear case. First, a Gaussian weighted shape context, which measures the structural similarity between multimodal images, is designed to extract putative matches from the edge maps of IR and VIS images. Second, to implement global image registration, the optimal parameters of the AEAT model are estimated from the putative matches by a strategy of subsection optimization. Experimental results show that this approach is robust across different registration tasks and outperforms several competitive methods in registration precision and speed.
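For orientation only, the sketch below illustrates the classical linear affine model that the paper generalizes: applying a 2D affine transform x' = Ax + t to matched points and estimating (A, t) from putative correspondences by least squares. This is a minimal assumption-based illustration; the function names (`affine_transform`, `estimate_affine_lsq`) are hypothetical, and neither the AEAT model nor the paper's Gaussian weighted shape context or subsection optimization is reproduced here, since the abstract does not specify them.

```python
import numpy as np

def affine_transform(points, A, t):
    """Apply a standard 2D affine transform x' = A x + t.

    points: (N, 2) array of coordinates; A: (2, 2); t: (2,).
    This is only the linear affine baseline, not the AEAT model.
    """
    return points @ A.T + t

def estimate_affine_lsq(src, dst):
    """Least-squares estimate of (A, t) from putative point matches.

    src, dst: (N, 2) arrays of matched coordinates (e.g. extracted
    from edge maps). Illustrative helper; not the paper's subsection
    optimization strategy.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])             # homogeneous source points, (N, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # params is (3, 2): rows [A^T; t]
    A = params[:2].T
    t = params[2]
    return A, t

# Usage with synthetic matches
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(50, 2))
A_true = np.array([[1.05, 0.02], [-0.03, 0.98]])
t_true = np.array([4.0, -2.5])
dst = affine_transform(src, A_true, t_true)
A_est, t_est = estimate_affine_lsq(src, dst)
```

A non-linear generalization such as AEAT would replace the fixed (A, t) with a spatially adaptive mapping; the closed-form least-squares step above would then no longer apply directly, which is why the paper resorts to an optimization-based parameter estimation.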