Incremental Learning of Locally Orthogonal Subspaces for Set-based Object Recognition
Author(s) -
Tae-Kyun Kim,
Josef Kittler,
Roberto Cipolla
Publication year - 2006
Publication title -
CiteSeerX (The Pennsylvania State University)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.20.58
Subject(s) - linear subspace, orthogonality, orthogonal transformation, mathematics, pattern recognition (psychology), principal component analysis, artificial intelligence, facial recognition system, orthogonal basis, image (mathematics), computer science, algorithm, pure mathematics, geometry, physics, quantum mechanics
Orthogonal subspaces are effective models for representing object image sets (and, more generally, any sets of high-dimensional vectors). Canonical correlation analysis of the orthogonal subspaces provides a good solution for discriminating objects from sets of images. In such a recognition task involving image sets, efficient learning over a large volume of image sets, which may grow over time, is important. In this paper, an incremental learning method for orthogonal subspaces is proposed that updates the principal components of the class correlation and total correlation matrices separately, yielding the same solution as the batch computation at far lower computational cost. A novel concept of local orthogonality is further proposed to cope with non-linear manifolds of data vectors and to find a better solution of orthogonal subspaces for neighbouring object image sets. In experiments using 700 face image sets, the locally orthogonal subspaces outperformed the orthogonal subspaces, as well as relevant state-of-the-art methods, in accuracy. Note that the locally orthogonal subspaces are also amenable to incremental updating owing to their linear property.
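For illustration only, the following Python/NumPy sketch shows the general set-matching idea the abstract refers to: represent each image set by a linear subspace (its top principal components) and compare two sets by the canonical correlations (cosines of principal angles) between their subspaces. It is not the authors' implementation, does not include the paper's incremental update or local orthogonality, and the function names subspace_basis and canonical_correlations are hypothetical.

    # Minimal sketch: subspace representation of an image set and
    # canonical correlations between two such subspaces.
    import numpy as np

    def subspace_basis(X, dim=10):
        """Orthonormal basis of the top-`dim` principal directions of image set X
        (X is an (n_samples, n_features) array of vectorised images)."""
        Xc = X - X.mean(axis=0)                      # centre the set
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:dim].T                            # (n_features, dim), orthonormal columns

    def canonical_correlations(B1, B2):
        """Cosines of the principal angles between subspaces with orthonormal bases B1, B2."""
        s = np.linalg.svd(B1.T @ B2, compute_uv=False)
        return np.clip(s, 0.0, 1.0)

    # Usage example with synthetic data: similarity of two image sets taken as
    # the mean of the few largest canonical correlations.
    rng = np.random.default_rng(0)
    set_a = rng.normal(size=(60, 400))               # 60 vectorised images, 400 pixels each
    set_b = rng.normal(size=(80, 400))
    Ba, Bb = subspace_basis(set_a), subspace_basis(set_b)
    similarity = canonical_correlations(Ba, Bb)[:3].mean()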