Sparsifying the Fisher linear discriminant by rotation
Author(s) -
Ning Hao,
Bin Dong,
Jianqing Fan
Publication year - 2015
Publication title -
Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/rssb.12092
Subject(s) - linear discriminant analysis , computer science , principal component analysis , classifier (uml) , artificial intelligence , pattern recognition (psychology) , rotation (mathematics) , covariance matrix , discriminant , covariance , clustering high dimensional data , linear classifier , machine learning , data mining , algorithm , mathematics , cluster analysis , statistics
Summary Many high dimensional classification techniques proposed in the literature are based on sparse linear discriminant analysis. To use them efficiently, sparsity of the linear classifier is a prerequisite. However, this sparsity may not be readily available in many applications, and rotations of the data are required to create it. We propose a family of rotations to create the required sparsity. The basic idea is to rotate the data first, using the principal components of the sample covariance matrix of the pooled samples (or its variants), and then to apply an existing high dimensional classifier. This rotate‐and‐solve procedure can be combined with any existing classifier and is robust to the level of sparsity of the true model. We show that these rotations do create the sparsity that is needed for high dimensional classification, and we provide a theoretical understanding of why such a rotation works empirically. The effectiveness of the proposed method is demonstrated by several simulated and real data examples, and the improvements of our method over some popular high dimensional classification rules are clearly shown.
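The rotate-and-solve idea described in the summary can be sketched in a few lines: estimate the pooled (within-class) sample covariance, take its eigenvectors as the rotation, and hand the rotated data to any sparse classifier. This is a minimal illustrative sketch, not the authors' exact procedure; the function name `pca_rotation` and its interface are assumptions for illustration.

```python
import numpy as np

def pca_rotation(X, y):
    """Rotation matrix built from the principal components of the
    pooled (within-class) sample covariance.  Illustrative sketch of
    the rotate-and-solve idea; not the paper's exact estimator."""
    Xc = X.astype(float).copy()
    classes = np.unique(y)
    for c in classes:
        # Center each class at its own mean (pooled covariance)
        Xc[y == c] -= Xc[y == c].mean(axis=0)
    S = Xc.T @ Xc / (len(y) - len(classes))  # pooled sample covariance
    _, V = np.linalg.eigh(S)                 # columns are eigenvectors
    return V

# Usage: rotate the data, then fit any sparse high dimensional
# classifier (e.g. an l1-penalized discriminant) on the rotated
# features Z instead of the raw X.
# V = pca_rotation(X, y)
# Z = X @ V
```

Because `V` is orthogonal, the rotation changes the basis but not the classification problem itself; the point is that sparse methods may perform much better in the rotated coordinates.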