Open Access
Dual Low-Rank Decompositions for Robust Cross-View Learning
Author(s) -
Zhengming Ding,
Yun Fu
Publication year - 2018
Publication title -
IEEE Transactions on Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.778
H-Index - 288
eISSN - 1941-0042
pISSN - 1057-7149
DOI - 10.1109/tip.2018.2865885
Subject(s) - computer science , artificial intelligence , machine learning , pattern recognition , discriminative model , invariant , divergence , dual , manifold , mathematics
Cross-view data are now ubiquitous, as different viewpoints or sensors capture the same objects in multiple views. However, cross-view data exhibit a significant divergence: samples from the same category but different views are often less similar than samples from different categories within the same view. Observing that each cross-view sample lies on two intertwined manifold structures, i.e., a class manifold and a view manifold, in this paper we propose a robust cross-view learning framework that seeks a robust view-invariant low-dimensional space. Specifically, we develop a dual low-rank decomposition technique to disentangle these intertwined manifold structures from one another in the learned space. Moreover, we design two discriminative graphs that constrain the dual low-rank decompositions by fully exploiting prior knowledge. Our proposed algorithm is thus able to capture more within-class knowledge and mitigate the view divergence, yielding a more effective view-invariant feature extractor. Furthermore, the proposed method flexibly handles the challenging cross-view learning scenario in which view information is available only for the training data and is unknown for the evaluation data. Experiments on face and object benchmarks demonstrate that our model outperforms state-of-the-art algorithms.
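To make the core idea concrete, here is a minimal, hypothetical sketch of a dual low-rank decomposition: a data matrix `X` is split into two low-rank components (standing in for the class-manifold and view-manifold parts) plus a sparse error term, via alternating singular-value thresholding. This is a heavily simplified illustration under assumed symbols (`Z1`, `Z2`, `E`, thresholds `tau`, `lam`), not the paper's actual formulation, which additionally imposes the two discriminative graph constraints to differentiate the two components.

```python
import numpy as np

def svt(M, tau):
    # Singular-value thresholding: the proximal operator of the nuclear
    # norm, which shrinks singular values toward zero to promote low rank.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def dual_lowrank_decompose(X, tau=1.0, lam=0.1, n_iter=100):
    """Illustrative split X ~= Z1 + Z2 + E with Z1, Z2 low rank, E sparse.

    In the paper, graph regularizers (not modeled here) push Z1 toward
    class structure and Z2 toward view structure; without them the two
    low-rank parts are interchangeable.
    """
    Z1 = np.zeros_like(X)
    Z2 = np.zeros_like(X)
    E = np.zeros_like(X)
    for _ in range(n_iter):
        # Update each low-rank part by thresholding the current residual.
        Z1 = svt(X - Z2 - E, tau)
        Z2 = svt(X - Z1 - E, tau)
        # Soft-threshold the remaining residual to obtain the sparse error.
        R = X - Z1 - Z2
        E = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return Z1, Z2, E
```

In practice the dual decomposition only becomes meaningful once the two components are regularized differently, which is precisely the role the paper assigns to its two discriminative graphs.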
