Online learning with sparse labels
Author(s) - He Wenwu, Zou Fumin, Liang Quan
Publication year - 2018
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.4480
Subject(s) - regret, computer science, benchmark (surveying), artificial intelligence, Bernoulli's principle, machine learning, online learning, world wide web, geography, engineering, aerospace engineering, geodesy
Summary - In this paper, we consider an online learning scenario in which instances arrive sequentially with only partly revealed labels. We assume that labels are revealed at random according to some distribution, e.g., a Bernoulli distribution. Three algorithms based on different inspirations are developed. The first builds on the idea of an estimated gradient, for which a strict high-probability regret guarantee of order Õ(T/p) can be derived when the distribution parameter p is known. An empirical version is also developed for the case where the learner must estimate p because it is not revealed. Experiments on several benchmark data sets show the feasibility of the proposed method. To further improve performance, two kinds of aggressive algorithms are presented. The first is based on the idea of instance recalling, which tries to make full use of the labeled instances. The second is based on the idea of label learning, which tries to infer labels for the unlabeled instances. In particular, it includes a step of online co-learning, which aims to learn the labels, and a step of weighted voting, which makes the final decision. Empirical results confirm the positive effects of both aggressive algorithms.
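The estimated-gradient idea described in the summary can be sketched as follows: when each label is revealed independently with probability p, scaling the gradient observed on revealed rounds by 1/p yields an unbiased estimate of the full-information gradient. The sketch below is a minimal illustration of that principle using online subgradient descent on the hinge loss; the function names, the hinge-loss choice, and the step size are hypothetical assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def hinge_grad(w, x, y):
    # Subgradient of the hinge loss max(0, 1 - y * w.x)
    return -y * x if y * np.dot(w, x) < 1 else np.zeros_like(w)

def online_sparse_labels(X, y, p, eta=0.1):
    """Online subgradient descent where each label is revealed with
    probability p. Revealed gradients are scaled by 1/p so that their
    expectation matches the full-information gradient."""
    n, d = X.shape
    w = np.zeros(d)
    mistakes = 0
    for t in range(n):
        x, label = X[t], y[t]
        # np.sign(0) is 0.0, which is falsy, so default to +1
        pred = np.sign(np.dot(w, x)) or 1.0
        if pred != label:
            mistakes += 1
        if rng.random() < p:  # label revealed this round (Bernoulli(p))
            w -= (eta / p) * hinge_grad(w, x, label)
    return w, mistakes
```

On rounds where the label stays hidden, the learner simply makes no update; the 1/p scaling compensates for those skipped rounds in expectation.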
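For the empirical version, where p is not revealed, one natural estimator is the running fraction of rounds on which a label has been revealed so far. The helper below is a hypothetical sketch of such a running estimate (not the paper's procedure); the lower clamp is an assumption to keep the estimate strictly positive so it can safely be used as a 1/p̂ scaling factor.

```python
def empirical_reveal_rate(revealed_flags):
    """Running estimate of the reveal probability p from the observed
    Bernoulli reveal indicators, updated round by round as the learner
    would maintain it."""
    revealed = 0
    estimates = []
    for t, flag in enumerate(revealed_flags, start=1):
        revealed += int(flag)
        # Clamp at 1/t so the estimate never hits zero
        estimates.append(max(revealed / t, 1.0 / t))
    return estimates
```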
