Open Access
Robust Covariance Representations With Large Margin Dimensionality Reduction for Visual Classification
Author(s) -
Qiule Sun,
Jianxin Zhang,
Pengfei Zhu,
Qilong Wang,
Peihua Li
Publication year - 2018
Publication title -
IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2018.2797419
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Inspired by the breakthrough performance of deep convolutional neural networks (CNNs) and the effectiveness of covariance representations, combining covariances with the activations of deep CNNs has great potential for representing visual concepts. However, such a method faces two challenges: 1) robust estimation of covariance under high dimension and small sample size, and 2) the high computational and storage costs incurred by high-dimensional covariance representations. To tackle these challenges, this paper proposes a novel robust covariance representation with large-margin dimensionality reduction for visual classification. First, we introduce two regularized maximum likelihood estimators to robustly estimate covariance under high dimension and small sample size, which greatly improves the modeling ability of covariances. Then, we present a large-margin dimensionality reduction method for high-dimensional covariance representations. It not only significantly reduces the dimension of robust covariance representations while accounting for their Riemannian geometric structure, but also further enhances their discriminability. Experiments on three kinds of visual classification tasks show that the proposed method is superior to its counterparts and achieves state-of-the-art performance.
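The paper's exact regularized maximum likelihood estimators are not reproduced here, but the core difficulty they address (a rank-deficient sample covariance when the number of feature vectors is smaller than the feature dimension) can be illustrated with a minimal shrinkage-style sketch in NumPy. The scaled-identity target and the `alpha` mixing weight below are illustrative assumptions, not the authors' estimators:

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.2):
    """Regularized covariance estimate for high-dimension, small-sample data.

    Shrinks the sample covariance toward a scaled identity so that the
    estimate stays well-conditioned (strictly positive definite) even when
    the number of samples n is smaller than the feature dimension d.
    """
    n, d = X.shape
    Xc = X - X.mean(axis=0)                  # center the feature vectors
    S = Xc.T @ Xc / max(n - 1, 1)            # sample covariance (rank-deficient if n < d)
    target = (np.trace(S) / d) * np.eye(d)   # scaled-identity shrinkage target
    return (1.0 - alpha) * S + alpha * target

# Example: 32 "CNN activation" vectors of dimension 128 (n << d)
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 128))
Sigma = shrinkage_covariance(X)

# After shrinkage, every eigenvalue is strictly positive,
# whereas the plain sample covariance would have at least 96 zero eigenvalues.
print(np.linalg.eigvalsh(Sigma).min() > 0)  # True
```

Any well-conditioned estimate of this kind lives on the manifold of symmetric positive definite matrices, which is why the dimensionality reduction step in the paper must respect Riemannian rather than Euclidean geometry.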
