Open Access
Two-Stage Approach for Protein Superfamily Classification
Author(s) -
Swati Vipsita,
Santanu Kumar Rath
Publication year - 2013
Publication title -
Computational Biology Journal
Language(s) - English
Resource type - Journals
eISSN - 2314-4173
pISSN - 2314-4165
DOI - 10.1155/2013/898090
Subject(s) - classifier (UML), artificial intelligence, pattern recognition (psychology), computer science, weighting, eigenvalues and eigenvectors, chromosome, feature extraction, principal component analysis, feature vector, binary classification, mathematics, algorithm, data mining, biology, support vector machine, medicine, biochemistry, physics, quantum mechanics, gene, radiology
We deal with the problem of protein superfamily classification, in which the family membership of a newly discovered amino acid sequence is predicted. Correct prediction is of great concern to researchers and drug analysts, as it helps them in the discovery of new drugs. Since this problem falls broadly under the category of pattern classification, we optimize feature extraction in the first stage and classifier design in the second stage, with the overall objective of maximizing the classification accuracy. In the feature extraction phase, a Genetic Algorithm- (GA-) based wrapper approach is used to select a few eigenvectors from the principal component analysis (PCA) space; the candidate selections are encoded as binary strings in the chromosome. On the basis of the positions of 1's in the chromosome, eigenvectors are selected to build the transformation matrix, which then maps the original high-dimensional feature space to a lower-dimensional one. Using PCA-NSGA-II (nondominated sorting GA), the nondominated solutions on the Pareto front resolve the trade-off between the number of eigenvectors selected and the accuracy obtained by the classifier. In the second stage, the recursive orthogonal least squares algorithm (ROLSA) is used to train a radial basis function network (RBFN), selecting an optimal number of hidden centres and updating the output-layer weight matrix. This approach can be applied to large data sets with much lower computer memory requirements. Thus, very small architectures with few hidden centres are obtained while maintaining a high level of classification accuracy.
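The chromosome-masked eigenvector selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data matrix, random chromosome, and seed are hypothetical stand-ins, and the GA operators (selection, crossover, mutation) that would evolve the chromosome are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))        # toy data: 100 samples, 20 raw features

# PCA via eigendecomposition of the covariance matrix of the centred data.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvecs = eigvecs[:, order]                  # columns = principal components

# A binary chromosome: a 1 at position i selects the i-th eigenvector.
chromosome = rng.integers(0, 2, size=20).astype(bool)

# Transformation matrix built only from the selected eigenvectors;
# it maps the 20-dimensional space to a lower-dimensional one.
W = eigvecs[:, chromosome]
Z = Xc @ W                                   # reduced feature vectors

print(Z.shape)   # (100, number of selected eigenvectors)
```

Under NSGA-II, each chromosome would be scored on two objectives at once: the count of selected eigenvectors (to minimize) and the wrapped classifier's accuracy on `Z` (to maximize), with the Pareto front exposing the trade-off between the two.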
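For the second stage, a minimal RBFN sketch in the spirit of the abstract: Gaussian hidden centres feeding an output weight vector fitted by linear least squares. The paper's recursive orthogonal least squares (ROLSA) centre selection is replaced here by randomly chosen centres, and the data, labels, centre count, and width are hypothetical; this shows only the network form, not the paper's training algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                     # toy feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float)         # toy binary labels

# Pick 10 hidden centres at random (ROLSA would select these recursively
# by their orthogonal contribution to the output error).
centres = X[rng.choice(len(X), size=10, replace=False)]
width = 1.0

def hidden_activations(X, centres, width):
    # Gaussian basis: phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

Phi = hidden_activations(X, centres, width)       # (200, 10) hidden outputs
# Output-layer weights by ordinary linear least squares on the activations.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = (Phi @ w > 0.5).astype(float)
accuracy = (pred == y).mean()
```

Because the hidden layer is fixed once the centres are chosen, the output weights reduce to a linear problem, which is what makes orthogonal least squares variants attractive for selecting a small number of centres at low memory cost.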
