Open Access
Image classification using hybrid method
Author(s) -
Ielaf O. Abdul Majjed Dahl
Publication year - 2012
Publication title -
mağallaẗ al-tarbiyaẗ wa-al-ʻilm (Journal of Education and Science)
Language(s) - English
Resource type - Journals
eISSN - 2664-2530
pISSN - 1812-125X
DOI - 10.33899/edusj.2012.59160
Subject(s) - artificial intelligence , computer science , artificial neural network , pattern recognition (psychology) , matlab , jpeg , peak signal to noise ratio , software , image processing , image (mathematics) , computer vision , programming language , operating system
This paper presents a method that combines a classic technique and an artificial one to classify (recognize) images. The k-means method is used to extract the main characteristics of the images, and the images are then classified using Hamming and Maxnet Artificial Neural Networks (ANNs). The proposed hybrid method is compared with the purely artificial method using three parameters: the number of iterations, the Peak Signal to Noise Ratio (PSNR), and the correlation. The results reveal that the hybrid method outperforms the artificial method alone: for the same input JPEG image, the correlation of the proposed method is 0.5360, while it is 0.4610 for the artificial method. The software handles JPEG and BMP grayscale images and was implemented in MATLAB 7.6.0.

1 Introduction

Pattern recognition is a task performed by human beings in daily life (e.g., classifying objects based on their characteristics, or identifying a person based on his or her face). All these tasks are performed by the brain through a learning process. The idea behind pattern recognition is not just to determine whether a given pattern is exactly equal to another; it also implies that patterns are grouped into classes, and the task is then to decide which class a given pattern belongs to [1]. The patterns belonging to a specific class share some common features. In the alphabet, for example, a letter can be written in many different ways; every letter is a class embracing different patterns. No pattern recognition system can function by itself; there is always a need for some previous knowledge on which to base all decisions, e.g., sample images that tell what each class looks like [2].
2 Pattern Classification

Pattern classification is a growing field with applications in very different areas such as speech and handwriting recognition, computer vision, image analysis, marketing, data mining, medical science, and information retrieval, to name a few. Typically, classification rules are established from randomly selected training instances from each class and are applied to test samples to evaluate their classification accuracy [3].

2-1 Clustering

The goal of clustering is to identify the clusters, which can be considered as classes [4]. Clustering is an unsupervised learning problem that tries to group a set of points into clusters such that points in the same cluster are more similar to each other than points in different clusters, under a particular similarity metric [5]. Clustering can be used to produce an effective image index as follows: after clustering, each cluster is represented by a single representative data item (i.e., the image label for that cluster) and, instead of the original data items, the query point is compared to the cluster representatives. The best cluster or clusters, according to the similarity measure used, are then selected, and the data items belonging to those clusters are retrieved, also according to that similarity measure [6].

2-1-1 K-means Clustering Algorithm

The k-means algorithm is the most frequently used clustering algorithm due to its simplicity and efficiency. K-means is a partitional clustering algorithm: it performs iterative relocation to partition a dataset into k clusters [5]. It is based on the minimization of a performance index defined as the sum of the squared distances from all points in a cluster domain to the cluster center. The algorithm consists of the following steps [7]:

Step 1: Choose K initial cluster centers z_1(1), z_2(1), ..., z_K(1). These are arbitrary and are usually selected as the first K samples of the given sample set.
Step 2: At the k-th iterative step, distribute the samples {x} among the K cluster domains using the relation:

    x ∈ S_j(k)  if  ||x − z_j(k)|| < ||x − z_i(k)||    (1)

for all i = 1, 2, ..., K, i ≠ j, where S_j(k) denotes the set of samples whose cluster center is z_j(k).

Step 3: Compute the new cluster centers z_j(k+1), j = 1, 2, ..., K, such that the sum of the squared distances from all points in S_j(k) to the new cluster center is minimized. In other words, the new cluster center z_j(k+1) is computed so that the performance index

    J_j = Σ_{x ∈ S_j(k)} ||x − z_j(k+1)||²,   j = 1, 2, ..., K    (2)

is minimized. The z_j(k+1) which minimizes this performance index is simply the sample mean of S_j(k). Therefore, the new cluster center is given by:

    z_j(k+1) = (1/N_j) Σ_{x ∈ S_j(k)} x,   j = 1, 2, ..., K    (3)

where N_j is the number of samples in S_j(k). The name "K-means" is obviously derived from the manner in which the cluster centers are sequentially updated.

Step 4: If z_j(k+1) = z_j(k) for j = 1, 2, ..., K, the algorithm has converged and the procedure is terminated. Otherwise, return to Step 2.

The behavior of the k-means algorithm is influenced by the number of specified cluster centers, the choice of initial cluster centers, the order in which the samples are taken, and, of course, the geometrical properties of the data. Although no general proof of convergence exists for this algorithm, it can be expected to yield acceptable results when the data exhibit characteristic pockets which are relatively far from each other. In most practical cases the application of this algorithm will require experimenting with various values of K as well as different choices of starting configurations [7].

Figure 1: Hamming Neural Network
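The four steps above can be sketched in a few lines of Python (an illustrative translation, not the paper's MATLAB implementation; the toy 2-D data set is hypothetical):

```python
import numpy as np

def kmeans(X, k, max_iter=100):
    """Minimal k-means by iterative relocation (Steps 1-4 above)."""
    # Step 1: take the first k samples as the initial cluster centers.
    z = X[:k].astype(float).copy()
    for _ in range(max_iter):
        # Step 2: assign every sample to its nearest center (Eq. 1).
        dists = np.linalg.norm(X[:, None, :] - z[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: new center = sample mean of each cluster (Eq. 3).
        new_z = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                          else z[j] for j in range(k)])
        # Step 4: stop when no center moves; otherwise repeat Step 2.
        if np.allclose(new_z, z):
            break
        z = new_z
    return z, labels

# Two well-separated 2-D "pockets" (hypothetical data).
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
centers, labels = kmeans(X, 2)
```

With data in well-separated pockets, as the text notes, the relocation settles after a couple of iterations regardless of the arbitrary initial centers.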
2-2 Neural Network Technique for Classification

This technique has some unique advantages, such as its nonparametric nature, arbitrary decision boundary capabilities, and ability to generalize from training data. In addition, unlike traditional statistical methods such as the maximum likelihood classifier, ANNs permit the use of a range of data types, including categorical data. It has also been reported that artificial neural networks can classify small training datasets better than conventional statistical classification techniques [3].

2-2-1 Hamming Neural Network as Pattern Recognizer

The Hamming Neural Network (Hamming NN) is divided into two parts: one calculates the matching score, and the other is the MAXNET, as shown in Figure (1) [8]. When an input is applied to this NN, N minus the Hamming distance to each of the M exemplar patterns is first calculated using the former part, and then the node with the maximum output is found using the MAXNET. The advantage of the Hamming net is that it requires fewer connections than the Hopfield NN [8]. The two nets work cooperatively to identify the class to which a given input pattern belongs. The pattern is identified by means of a set of stored prototype patterns (one for each class); the input pattern is assigned to the class of the prototype that is closest in terms of Hamming distance. The basic structure is shown in Figure (2). If the classifier is required to identify examples from p different classes, both the Hamming net and the Maxnet should have p outputs. The Hamming net gives the largest output for the prototype that is closest (has the smallest Hamming distance) to the input vector. The role of the Maxnet is simply to suppress all other outputs, so that the right-hand vector in Figure (2) finishes with just one non-zero output, which corresponds to the pattern class identified by the Hamming net [9].
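The matching-score half of the Hamming net can be illustrated directly: for bipolar ±1 vectors, N minus the Hamming distance reduces to a dot product. A minimal sketch (the two exemplars and the input are hypothetical, one prototype per class):

```python
import numpy as np

def matching_scores(x, exemplars):
    """Lower Hamming net: score each exemplar by N - HammingDistance(x, exemplar)."""
    n = len(x)
    # For bipolar {-1, +1} vectors, dot(x, e) = agreements - disagreements,
    # so N - Hamming distance = (n + dot(x, e)) / 2.
    return (n + exemplars @ x) / 2

# Hypothetical bipolar exemplars (one stored prototype per class).
exemplars = np.array([[ 1,  1, -1, -1],
                      [-1, -1,  1,  1]])
x = np.array([1, 1, 1, -1])   # one bit away from the first exemplar
scores = matching_scores(x, exemplars)
winner = int(scores.argmax())
```

The prototype closest in Hamming distance gets the largest score, exactly as the text describes; the MAXNET then suppresses the rest.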
Figure 2: Hamming-Maxnet NN

2-2-2 Hamming-Maxnet Algorithm [10]

Step 1: Specify the exemplars s_ij (each input image is converted to a vector).

Step 2: Fix the weight matrix for the Hamming net
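The MAXNET stage that completes the classification after the Hamming-net weights are fixed is a standard mutual-inhibition iteration. A minimal sketch in Python, not the paper's listing; the inhibition weight 1/(2m) is an assumed typical choice satisfying 0 < ε < 1/m:

```python
import numpy as np

def maxnet(scores, eps=None, max_iter=100):
    """MAXNET: nodes inhibit each other until one non-zero output remains."""
    a = np.asarray(scores, dtype=float)
    m = len(a)
    if eps is None:
        eps = 1.0 / (2 * m)   # assumed typical choice, 0 < eps < 1/m
    for _ in range(max_iter):
        # Each node is inhibited by eps times the sum of all other activations.
        a = np.maximum(0.0, a - eps * (a.sum() - a))
        if np.count_nonzero(a) <= 1:
            break
    return a

out = maxnet([3.0, 1.0])   # matching scores from the Hamming net
```

Only the node fed by the largest matching score survives the inhibition, giving the single non-zero output described in section 2-2-1.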
