Open Access
Texture Classification using a Linear Configuration Model based Descriptor
Author(s) -
Yimo Guo,
Guoying Zhao,
Matti Pietikäinen
Publication year - 2011
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.25.119
Subject(s) - pattern recognition (psychology) , artificial intelligence , computer science , texture (cosmology) , image texture , computer vision , image segmentation , image (mathematics)
Texture classification can be defined as the problem of classifying images according to textural cues, that is, categorizing a texture image obtained under a certain illumination and viewpoint condition as belonging to one of the pre-learned texture classes. It therefore mainly involves two steps: image representation or description, and classification. In this paper, we focus on the feature extraction part, which aims to extract effective patterns for distinguishing different textures. Among various feature extraction methods, local features such as LBP [4], SIFT [2] and Histograms of Oriented Gradients (HOG) [1] have performed well in real-world applications. Representative methods also include grey level difference or co-occurrence statistics [10], and methods based on multi-channel filtering or wavelet decomposition [3, 5, 7]. To learn representative structural configurations from texture images, Varma et al. proposed texton methods based on the filter response space and the local image patch space [8, 9]. In this paper, we present the descriptor MiC, which encodes image microscopic configuration by a linear configuration model. The final local configuration pattern (LCP) feature integrates both microscopic features, represented by the optimal model parameters, and local features, represented by pattern occurrences. Specifically, the microscopic features capture the image configuration and pixel-wise interaction relationships through a linear model, whose optimal parameters are estimated by an efficient least squares estimator. To achieve rotation invariance, a desirable property for texture features, the Fourier transform is applied to the estimated parameter vectors. Finally, the transformed vectors are concatenated with local pattern occurrences to construct LCPs. As this framework is unsupervised, it avoids the generalization problem suffered by other statistical learning methods. To model the image configuration with respect to each pattern, we estimate the optimal weights, associated with the intensities of neighboring pixels, that linearly reconstruct the central pixel intensity. This can be expressed by:
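One plausible formulation of this linear reconstruction model, written here for illustration with notation introduced by us (x_c for the central pixel intensity, x_{c,i} for its P circular neighbours, \Omega_k for the set of pixels exhibiting pattern k), is

E(\mathbf{a}) = \sum_{c \in \Omega_k} \Big( x_c - \sum_{i=0}^{P-1} a_i\, x_{c,i} \Big)^2 ,

where the weight vector \mathbf{a} = (a_0, \dots, a_{P-1})^{\mathsf T} minimizing E is obtained in closed form by the least squares estimator mentioned above.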

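A minimal sketch of the pipeline described in the abstract, assuming a rotation-invariant uniform LBP as the local pattern index and NumPy/scikit-image as the toolchain (neither is prescribed by the paper, and the integer neighbour sampling below stands in for proper bilinear interpolation), might look like:

```python
import numpy as np
from skimage.feature import local_binary_pattern  # assumed dependency

def lcp_descriptor(image, P=8, R=1.0):
    """Sketch of an LCP-style feature: per-pattern least-squares weights,
    DFT magnitudes for rotation invariance, plus pattern occurrences."""
    image = np.asarray(image, dtype=np.float64)
    h, w = image.shape
    r = int(np.ceil(R))

    # Circular neighbour offsets, rounded to integer pixels for simplicity.
    angles = 2.0 * np.pi * np.arange(P) / P
    dy = np.rint(-R * np.sin(angles)).astype(int)
    dx = np.rint(R * np.cos(angles)).astype(int)

    centres = image[r:h - r, r:w - r].ravel()
    neighbours = np.stack(
        [image[r + dy[i]:h - r + dy[i], r + dx[i]:w - r + dx[i]].ravel()
         for i in range(P)], axis=1)                   # shape (N, P)

    # Local pattern index per pixel: uniform LBP gives P + 2 possible labels.
    labels = local_binary_pattern(image, P, R, method="uniform")
    labels = labels[r:h - r, r:w - r].ravel().astype(int)

    feat = []
    for k in range(P + 2):
        mask = labels == k
        occurrence = mask.mean()                       # pattern occurrence frequency
        if mask.sum() >= P:
            # Least squares: weights a with neighbours[mask] @ a ~= centres[mask].
            a, *_ = np.linalg.lstsq(neighbours[mask], centres[mask], rcond=None)
        else:
            a = np.zeros(P)
        # |DFT| of a is invariant to cyclic shifts of a, hence to image rotation
        # by multiples of 2*pi / P.
        feat.append(np.concatenate([np.abs(np.fft.fft(a)), [occurrence]]))
    return np.concatenate(feat)
```

Under these assumptions, calling lcp_descriptor on a grayscale image yields a (P + 2) x (P + 1)-dimensional vector combining the rotation-invariant model parameters with the pattern occurrence statistics, which can then be fed to a standard classifier.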