Open Access
An effective content-based image retrieval technique for image visuals representation based on the bag-of-visual-words model
Author(s) -
Sadia Jabeen,
Zahid Mehmood,
Toqeer Mahmood,
Tanzila Saba,
Amjad Rehman,
Muhammad Tariq Mahmood
Publication year - 2018
Publication title -
PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0194526
Subject(s) - freak, artificial intelligence, computer science, pattern recognition (psychology), image retrieval, robustness (evolution), computer vision, content-based image retrieval, feature (linguistics), bag-of-words model in computer vision, visual word, feature extraction, image (mathematics), biochemistry, chemistry, linguistics, philosophy, computer security, gene
For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor, whereas FREAK is a dense descriptor. Moreover, SURF is a scale- and rotation-invariant descriptor that performs better in terms of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, and geometric and photometric deformations, and it also performs better under low illumination within an image as compared to the FREAK descriptor. In contrast, FREAK is a retina-inspired, fast descriptor that performs better on classification-based problems as compared to the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the strengths of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that the proposed technique based on visual words fusion significantly improves the performance of CBIR as compared to the feature fusion of both descriptors and state-of-the-art image retrieval techniques.
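
The abstract describes a bag-of-visual-words pipeline in which SURF and FREAK descriptors are quantized against separate visual vocabularies and the resulting visual-word representations are fused. The sketch below is one plausible reading of that idea, not the authors' implementation: it assumes OpenCV's contrib modules (cv2.xfeatures2d, which require a non-free build for SURF) and scikit-learn, pairs FREAK with a FAST keypoint detector since FREAK only describes keypoints, and fuses by concatenating the two normalised BoVW histograms. Vocabulary sizes and detector settings are illustrative assumptions.

```python
# Hedged sketch of visual-words fusion (SURF + FREAK) for CBIR.
# Assumes opencv-contrib-python with non-free modules and scikit-learn;
# parameters and the concatenation-based fusion are illustrative choices.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def extract_surf(img, hessian=400):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    _, desc = surf.detectAndCompute(img, None)
    return desc  # 64-D float descriptors, or None if no keypoints found

def extract_freak(img):
    detector = cv2.FastFeatureDetector_create()   # FREAK needs an external detector
    freak = cv2.xfeatures2d.FREAK_create()
    kps = detector.detect(img, None)
    _, desc = freak.compute(img, kps)
    return None if desc is None else desc.astype(np.float32)  # cast binary descriptors for k-means

def build_vocabulary(descriptor_list, k=200):
    # Pool descriptors from training images and cluster them into k visual words.
    pooled = np.vstack([d for d in descriptor_list if d is not None])
    vocab = MiniBatchKMeans(n_clusters=k, random_state=0)
    vocab.fit(pooled)
    return vocab

def bovw_histogram(desc, vocab):
    # Quantize descriptors to their nearest visual word and build an L1-normalised histogram.
    k = vocab.n_clusters
    if desc is None:
        return np.zeros(k)
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / (hist.sum() + 1e-9)

def fused_representation(img, surf_vocab, freak_vocab):
    # Visual-words fusion: concatenate the SURF-based and FREAK-based BoVW histograms.
    h_surf = bovw_histogram(extract_surf(img), surf_vocab)
    h_freak = bovw_histogram(extract_freak(img), freak_vocab)
    return np.concatenate([h_surf, h_freak])
```

With such a representation, retrieval would amount to comparing the fused histogram of a query image against those of the repository images (e.g., by cosine or chi-square distance) and returning the nearest matches; the distance measure here is again an assumption, not taken from the paper.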
