Open Access
Multimodal image matching via dual‐codebook‐based self‐similarity hypercube feature descriptor and voting strategy
Author(s) -
Wang H.,
Han D.K.,
Ko H.
Publication year - 2014
Publication title -
Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/el.2014.1802
Subject(s) - codebook, artificial intelligence, pattern recognition (psychology), matching (statistics), feature (linguistics), computer science, cluster analysis, image (mathematics), similarity (geometry), benchmark (surveying), computer vision, mathematics, linguistics, statistics, philosophy, geodesy, geography
An effective feature descriptor is proposed for multimodal local-image patch matching. The conventional self-similarity hypercube (SSH) descriptor fails in multimodal image matching because the intensity characteristics of the modalities differ. To mitigate this problem, a dual-codebook clustering scheme is proposed for generating the descriptors: a codebook is extracted from the visible and thermal images, respectively, while the local features of the visible and thermal image patches share the same k-means clustering index. The experimental results show that the proposed approach effectively solves the multimodal image quantisation problem. Moreover, a voting strategy based on the proposed similarity family function makes the multimodal image matching more robust than conventional state-of-the-art methods.
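A minimal illustrative sketch of the dual-codebook idea follows; it is not the authors' implementation, only one plausible reading of the abstract. It assumes paired visible/thermal local features of equal dimension and uses scikit-learn's k-means: clustering the concatenated feature pairs once makes both per-modality codebooks share the same cluster index. The function names (build_dual_codebooks, quantise) and the parameter choices (k, seed) are hypothetical.

# Sketch of dual-codebook clustering under the assumptions above.
import numpy as np
from sklearn.cluster import KMeans

def build_dual_codebooks(feats_vis, feats_thm, k=64, seed=0):
    """feats_vis, feats_thm: (N, D) arrays of paired local features
    taken from corresponding visible/thermal patch locations."""
    # Concatenate each visible/thermal pair so a single k-means run
    # assigns the same cluster index to both modalities of a pair.
    joint = np.hstack([feats_vis, feats_thm])            # (N, 2D)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(joint)
    d = feats_vis.shape[1]
    # Split each joint centroid back into per-modality codewords,
    # yielding two codebooks indexed by the same cluster labels.
    codebook_vis = km.cluster_centers_[:, :d]            # (k, D)
    codebook_thm = km.cluster_centers_[:, d:]            # (k, D)
    return codebook_vis, codebook_thm

def quantise(features, codebook):
    """Assign each feature to its nearest codeword; because both
    codebooks share one index space, visible and thermal features
    quantise to comparable labels."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Usage with synthetic data, for illustration only:
rng = np.random.default_rng(0)
vis = rng.normal(size=(500, 128))
thm = rng.normal(size=(500, 128))
cb_v, cb_t = build_dual_codebooks(vis, thm, k=32)
labels_v = quantise(vis, cb_v)   # comparable with quantise(thm, cb_t)

The joint-clustering step is the design choice of interest: clustering each modality independently would produce incompatible index spaces, whereas a shared index lets descriptors built from quantised labels be compared across modalities despite their different intensity statistics.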