Open Access
Sparse Passive-Aggressive Learning for Bounded Online Kernel Methods
Author(s) -
Jing Lu,
Doyen Sahoo,
Peilin Zhao,
Steven C. H. Hoi
Publication year - 2018
Publication title -
ACM Transactions on Intelligent Systems and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.914
H-Index - 63
eISSN - 2157-6912
pISSN - 2157-6904
DOI - 10.1145/3156684
Subject(s) - computer science , bounded function , scalability , kernel (algebra) , machine learning , kernel method , polynomial kernel , radial basis function kernel , support vector machine , online machine learning , artificial intelligence , online algorithm , kernel embedding of distributions , upper and lower bounds , algorithm , active learning (machine learning) , mathematics , discrete mathematics , mathematical analysis , database
One critical deficiency of traditional online kernel learning methods is their unbounded and growing number of support vectors in the online learning process, making them inefficient and non-scalable for large-scale applications. Recent studies on scalable online kernel learning have attempted to overcome this shortcoming, e.g., by imposing a constant budget on the number of support vectors. Although they attempt to bound the number of support vectors at each online learning iteration, most of them fail to bound the number of support vectors for the final output hypothesis, which is often obtained by averaging the series of hypotheses over all the iterations. In this article, we propose a novel framework for bounded online kernel methods, named “Sparse Passive-Aggressive (SPA)” learning, which is able to yield a final output kernel-based hypothesis with a bounded number of support vectors. Unlike the common budget maintenance strategy used by many existing budget online kernel learning approaches, the idea of our approach is to attain the bounded number of support vectors using an efficient stochastic sampling strategy that samples an incoming training example as a new support vector with a probability proportional to its loss suffered. We theoretically prove that SPA achieves an optimal mistake bound in expectation, and we empirically show that it outperforms various budget online kernel learning algorithms. Finally, in addition to general online kernel learning tasks, we also apply SPA to derive bounded online multiple-kernel learning algorithms, which can significantly improve the scalability of traditional Online Multiple-Kernel Classification (OMKC) algorithms while achieving satisfactory learning accuracy as compared with the existing unbounded OMKC algorithms.
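The core idea described above — admitting an incoming example as a new support vector only with probability proportional to its suffered loss — can be illustrated with a minimal sketch. This is not the authors' implementation: the RBF kernel choice, the PA-I-style step size `tau`, and the sampling-scale parameter `rho` are all assumptions made here for illustration; only the loss-proportional Bernoulli admission follows the mechanism the abstract describes.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel; the kernel choice is an assumption for this sketch.
    d = x - y
    return np.exp(-gamma * np.dot(d, d))

class SparsePA:
    """Sketch of SPA-style bounded online kernel learning.

    An incoming example becomes a support vector only with probability
    proportional to its hinge loss (capped at 1), so low-loss examples
    are usually skipped and the support set stays sparse.
    """

    def __init__(self, C=1.0, rho=1.0, gamma=1.0, seed=0):
        self.C = C          # aggressiveness cap on the update (PA-I style, assumed)
        self.rho = rho      # sampling scale: admit with prob min(1, rho * loss) (assumed)
        self.gamma = gamma
        self.sv = []        # stored support vectors
        self.alpha = []     # their coefficients
        self.rng = np.random.default_rng(seed)

    def decision(self, x):
        # Kernel expansion over the current (bounded) support set.
        return sum(a * rbf(v, x, self.gamma) for v, a in zip(self.sv, self.alpha))

    def fit_one(self, x, y):
        f = self.decision(x)
        loss = max(0.0, 1.0 - y * f)  # hinge loss on the current hypothesis
        if loss > 0.0 and self.rng.random() < min(1.0, self.rho * loss):
            # Passive-aggressive step size, capped at C (assumed update rule).
            tau = min(self.C, loss / rbf(x, x, self.gamma))
            self.sv.append(np.asarray(x, dtype=float))
            self.alpha.append(tau * y)
        return 1.0 if f >= 0 else -1.0
```

A short usage example on a toy two-cluster stream: after one pass, the number of stored support vectors is typically well below the number of examples seen, since confidently classified points suffer zero loss and are never sampled.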
