Open Access
Scale‐invariant feature matching based on pairs of feature points
Author(s) -
Wang Zhiheng,
Wang Zhifei,
Liu Hongmin,
Huo Zhanqiang
Publication year - 2015
Publication title -
IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2014.0369
Subject(s) - pattern recognition (psychology) , artificial intelligence , invariant (physics) , feature (linguistics) , scale invariance , mathematics , feature extraction , matching (statistics) , scale invariant feature transform , computer science , feature matching , scale (ratio) , gaussian , computer vision , statistics , linguistics , philosophy , physics , quantum mechanics , mathematical physics
A scale‐invariant feature matching method based on pairing of feature points is proposed in this study. The distance between two paired features is used to compute the size of the pair's support region, unlike methods that rely on information from the detector to determine the support region. Moreover, to achieve rotation invariance, a sub‐region division method based on intensity order is introduced. For comparison with the popular descriptors scale‐invariant feature transform (SIFT) and speeded‐up robust features (SURF), the authors also use the points detected by the difference‐of‐Gaussian and fast Hessian detectors as feature points to initialise their method. Additional experiments compare the method with similar previously proposed methods, such as Tell's and Fan's. The experimental results show that the proposed descriptor outperforms the popular descriptors under various image transformations, especially on images with scale and viewpoint changes.
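The two ideas in the abstract, a support-region size derived from the distance between two paired feature points, and a sub-region division that depends only on intensity order (and is therefore unchanged by rotation), can be sketched roughly as below. This is a minimal illustration, not the paper's actual algorithm: the scaling factor `alpha` and the equal-population binning scheme are assumptions made for the sketch.

```python
import numpy as np

def support_region_size(p1, p2, alpha=0.5):
    """Support-region radius for a feature pair, taken here as a
    fraction (alpha, a hypothetical parameter) of the Euclidean
    distance between the two paired feature points."""
    d = np.linalg.norm(np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float))
    return alpha * d

def intensity_order_bins(patch, n_bins=4):
    """Partition the pixels of a patch into n_bins groups of (nearly)
    equal size by ranking their intensities.  Because the grouping
    depends only on the intensity order, not on pixel coordinates,
    it is invariant to a rotation of the patch (for distinct values)."""
    vals = patch.ravel()
    order = np.argsort(vals, kind="stable")          # ascending intensity rank
    bins = np.empty(vals.size, dtype=int)
    bins[order] = (np.arange(vals.size) * n_bins) // vals.size
    return bins.reshape(patch.shape)
```

For a patch with distinct intensities, rotating the patch simply rotates the bin map, i.e. `intensity_order_bins(np.rot90(patch))` equals `np.rot90(intensity_order_bins(patch))`, which is the rotation-invariance property the sub-region division exploits.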
