Open Access
Integration Colour and Texture Features for Content-based Image Retrieval
Author(s) - Hanan Al-Jubouri
Publication year - 2020
Publication title - International Journal of Modern Education and Computer Science
Language(s) - English
Resource type - Journals
eISSN - 2075-017X
pISSN - 2075-0161
DOI - 10.5815/ijmecs.2020.02.02
Subject(s) - computer science , artificial intelligence , image retrieval , image texture , pattern recognition (psychology) , content based image retrieval , visual word , automatic image annotation , computer vision , semantic gap , similarity (geometry) , image (mathematics) , image processing
Content-Based Image Retrieval (CBIR) offers an automatic way to extract visual image contents such as colour, texture, and shape, the so-called extracted features. Owing to the growing volume of digital images, CBIR has emerged as a means of storing and retrieving images from large-scale databases. However, CBIR faces the challenge of the "semantic gap" between machine-level features and human concepts. How to reduce this gap using the colour and/or texture features that represent an object in an image remains an open problem, one fundamentally related to the effectiveness of the image representation given by the extracted features and of the similarity measures between query-image features and database-image features. Hence, different visual features have been proposed, such as Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Discrete Wavelet Transform (DWT) texture features, which are extracted from grey-scale images. This paper presents an unsupervised algorithm that exploits data- and score-level fusion to address the semantic gap. The algorithm first extracts the above texture features from colour images in the HSV and YCbCr colour spaces, integrating texture and colour visual information as data-level fusion to increase the effectiveness of the image representation. The resulting similarity values are then fused in three versions of score-level fusion (summing values without weights, with fixed weights, and with adaptive weights learned by linear regression) to raise relevant images in the ranked retrieval list. The algorithm is evaluated on the standard WANG colour image database. Retrieval rates are enhanced at both fusion levels.
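The score-level fusion step described in the abstract can be sketched as follows. This is a minimal illustration under assumed details (min-max normalisation of distances, illustrative fixed weight values, and hypothetical function names); it is not the authors' implementation:

```python
# Sketch of score-level fusion of colour and texture retrieval scores.
# Inputs are per-feature distance scores between a query image and each
# database image (lower distance = more similar). All names and the fixed
# weight values are illustrative assumptions, not taken from the paper.

def minmax(scores):
    """Scale a list of distances into [0, 1] so features are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_unweighted(colour_scores, texture_scores):
    """Version 1: sum normalised colour and texture distances, no weights."""
    c, t = minmax(colour_scores), minmax(texture_scores)
    return [ci + ti for ci, ti in zip(c, t)]

def fuse_fixed(colour_scores, texture_scores, w_colour=0.6, w_texture=0.4):
    """Version 2: weighted sum with fixed weights (values assumed here).
    Version 3 in the paper would instead fit the weights by linear
    regression on training queries."""
    c, t = minmax(colour_scores), minmax(texture_scores)
    return [w_colour * ci + w_texture * ti for ci, ti in zip(c, t)]

# Example: rank three database images; the lowest fused distance ranks first.
colour = [0.2, 0.8, 0.5]
texture = [0.1, 0.9, 0.3]
fused = fuse_unweighted(colour, texture)
ranking = sorted(range(len(fused)), key=lambda i: fused[i])
print(ranking)  # image 0 most similar, then image 2, then image 1
```

The point of normalising before summing is that colour and texture distances live on different scales; without it, the feature with the larger raw range would dominate every fused ranking.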
