Open Access
A cluster-based feature selection method for image texture classification
Author(s) -
Abbas F. H. Alharan,
Hayder K. Fatlawi,
Nabeel Salih Ali
Publication year - 2019
Publication title -
indonesian journal of electrical engineering and computer science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.241
H-Index - 17
eISSN - 2502-4760
pISSN - 2502-4752
DOI - 10.11591/ijeecs.v14.i3.pp1433-1442
Subject(s) - pattern recognition (psychology) , artificial intelligence , support vector machine , computer science , feature extraction , gabor filter , local binary patterns , feature selection , discriminative model , cluster analysis , naive bayes classifier , k nearest neighbors algorithm , feature (linguistics) , contextual image classification , texture (cosmology) , image (mathematics) , histogram , linguistics , philosophy
Computer vision and pattern recognition applications are considered major research trends in engineering technology and scientific research. These applications include texture image analysis and texture feature extraction. Several studies have sought accurate image feature extraction and classification, but most of them have shortcomings. It is therefore essential to improve classification accuracy by reducing the dimensionality of the feature set. This paper presents a cluster-based feature selection approach that selects a more discriminative subset of texture features, evaluated on three different texture image datasets. The proposed approach is implemented in multiple steps. The first step is texture feature extraction via the Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Gabor filters. The second step is feature selection using the K-means clustering algorithm applied to five feature evaluation metrics: information gain, gain ratio, OneR, ReliefF, and symmetric uncertainty. Finally, K-Nearest Neighbor (KNN), Naive Bayes (NB), and Support Vector Machine (SVM) classifiers are used to evaluate the classification performance and accuracy of the proposed approach. The approach achieved its best classification accuracy with the KNN and NB classifiers, reaching 99.9554% on the Kylberg dataset, and 99.0625% with the SVM classifier on the Brodatz-1 and Brodatz-2 datasets. A comparison with other studies is conducted to give a unified view of the quality of the results and to identify future research directions.
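The sketch below illustrates the kind of pipeline the abstract describes: extract GLCM, LBP, and Gabor texture features, cluster feature-relevance scores with K-means to retain a discriminative subset, then evaluate KNN, NB, and SVM classifiers. It is a minimal illustration using scikit-image and scikit-learn, not the authors' implementation; in particular, mutual information stands in for the five evaluation metrics named in the paper, and all parameter values (neighborhood sizes, Gabor frequency, number of clusters) are assumptions.

```python
# Illustrative sketch only: GLCM/LBP/Gabor features -> K-means-based
# feature selection on relevance scores -> KNN/NB/SVM evaluation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from skimage.filters import gabor
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif  # proxy for infogain etc.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def texture_features(img):
    """Concatenate GLCM statistics, an LBP histogram, and Gabor responses
    for a 2D uint8 grayscale image."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).ravel()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    gabor_feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        response, _ = gabor(img, frequency=0.3, theta=theta)
        gabor_feats.extend([response.mean(), response.var()])
    return np.concatenate(glcm_feats + [lbp_hist, np.array(gabor_feats)])

def cluster_select(X, y, n_clusters=3, keep_clusters=1):
    """Score features, cluster the scores with K-means, and keep the
    features that fall in the highest-scoring cluster(s)."""
    scores = mutual_info_classif(X, y).reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(scores)
    best = np.argsort(km.cluster_centers_.ravel())[::-1][:keep_clusters]
    return X[:, np.isin(km.labels_, best)]

def evaluate(images, labels):
    """Cross-validated accuracy of KNN, NB, and SVM on the selected features."""
    X = np.array([texture_features(im) for im in images])
    y = np.array(labels)
    X_sel = cluster_select(X, y)
    for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=3)),
                      ("NB", GaussianNB()),
                      ("SVM", SVC(kernel="rbf"))]:
        acc = cross_val_score(clf, X_sel, y, cv=5).mean()
        print(f"{name}: {acc:.4f}")
```

The K-means step here clusters one-dimensional relevance scores so that the "most relevant" group of features can be kept without hand-picking a threshold; the paper's actual selection criterion may differ.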
