Multiple indefinite kernel learning with mixed norm regularization
Author(s) -
Matthieu Kowalski,
Marie Szafranski,
Liva Ralaivola
Publication year - 2009
Publication title -
hal (le centre pour la communication scientifique directe)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/1553374.1553445
Subject(s) - computer science , machine learning , multiple kernel learning , kernel method , regularization , mixed norm , gradient descent , proximal gradient methods for learning , convex optimization , support vector machine , mathematical optimization , algorithm , mathematics
We address the problem of learning classifiers using several kernel functions. Unlike many contributions in the field of learning from multiple sources of information with kernels, we do not assume that the kernels used are positive definite. The learning problem we consider involves a misclassification loss term and a regularization term expressed by means of a mixed norm. The mixed norm allows us to enforce a sparsity structure, a particular case of which is the Group Lasso. We solve the convex problem with proximal minimization algorithms, which can be viewed as refined versions of gradient descent procedures that naturally handle nondifferentiability. A numerical simulation on a UCI dataset shows the modularity of our approach.
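The proximal approach sketched in the abstract alternates a gradient step on the smooth loss with the proximal operator of the mixed norm; for the l1/l2 mixed norm (Group Lasso case) that operator is block soft-thresholding. The following is a minimal illustrative sketch of this idea on a plain squared loss over features, not the paper's actual multiple-indefinite-kernel formulation; the function names, the loss, and all parameter values are assumptions for illustration.

```python
import numpy as np

def prox_group_lasso(v, groups, threshold):
    """Proximal operator of the l1/l2 mixed norm (Group Lasso):
    each group of coefficients is shrunk by block soft-thresholding."""
    out = v.copy()
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        scale = max(0.0, 1.0 - threshold / norm_g) if norm_g > 0 else 0.0
        out[g] = scale * v[g]
    return out

def proximal_gradient(X, y, groups, lam=0.1, step=None, n_iter=200):
    """ISTA-style proximal gradient (illustrative, not the paper's algorithm):
        min_w  0.5/n * ||X w - y||^2  +  lam * sum_g ||w_g||_2
    """
    n, d = X.shape
    if step is None:
        # 1 / Lipschitz constant of the gradient of the smooth part
        step = n / (np.linalg.norm(X, 2) ** 2)
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n          # gradient step on the smooth loss
        w = prox_group_lasso(w - step * grad, groups, step * lam)  # prox step
    return w
```

On data where only the first group of coefficients is active, the block soft-thresholding drives the inactive group toward zero as a whole, which is the structured sparsity the mixed norm enforces.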
