Open Access
Matching Neural Network for Extreme Multi-Label Learning
Author(s) - Zhiyun Zhao, Fengzhi Li, Yuan Zuo, Junjie Wu
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1642/1/012013
Subject(s) - computer science , embedding , encode , pairwise comparison , outlier , cosine similarity , artificial intelligence , feature learning , pattern recognition (psychology) , similarity (geometry) , artificial neural network , matching (statistics) , benchmark (surveying) , feature (linguistics) , machine learning , feature vector , image (mathematics) , mathematics , biochemistry , chemistry , statistics , linguistics , philosophy , geodesy , gene , geography
Multi-label learning involving hundreds of thousands or even millions of labels is referred to as extreme multi-label learning. In this setting, label frequencies often follow a power-law distribution, with the majority of labels occurring in very few data points; these are known as tail labels. Embedding-based methods are a promising approach to multi-label learning, but most of them rest on a low-rank assumption that the prevalence of tail labels violates. Recent work has sought to build embedding-based models that tolerate tail labels. However, for real-life datasets containing many data points annotated only with tail labels, simply treating such points as label-matrix outliers incurs severe information loss, and accurately computing pairwise distances between label vectors becomes infeasible. In light of this, we present the Matching Neural Network (MNN), which learns two neural mapping functions that encode feature vectors and label vectors into distributed representations, respectively. We also propose a noise contrastive loss to guide the training of these functions, ensuring that matched features and labels have similar distributed representations as measured by cosine similarity. Extensive experiments on benchmark datasets against state-of-the-art baselines demonstrate that MNN yields more accurate predictions.
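The core idea described in the abstract can be sketched in a few lines: two encoders map feature vectors and label vectors into a shared space, and a noise contrastive loss pushes each matched (feature, label) pair together under cosine similarity while using the other labels in the batch as noise. The sketch below is illustrative only; the single-layer tanh encoders, the temperature value, and all dimensions are assumptions, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # One tanh layer stands in for the paper's neural mapping functions
    # (hypothetical architecture; the actual networks are unspecified here).
    return np.tanh(x @ W)

def cosine_matrix(A, B):
    # Pairwise cosine similarity between rows of A and rows of B.
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-8)
    B = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-8)
    return A @ B.T

def noise_contrastive_loss(feat_emb, lab_emb, temperature=0.1):
    # Each matched (feature, label) pair is the positive; the other
    # labels in the batch serve as noise. Minimize softmax cross-entropy
    # over temperature-scaled cosine similarities.
    logits = cosine_matrix(feat_emb, lab_emb) / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: 8 instances, 50-dim features, 30-dim sparse binary label
# vectors, both encoded into a shared 16-dim space (sizes illustrative).
X = rng.normal(size=(8, 50))
Y = (rng.random(size=(8, 30)) < 0.1).astype(float)
Wx = rng.normal(scale=0.1, size=(50, 16))
Wy = rng.normal(scale=0.1, size=(30, 16))
loss = noise_contrastive_loss(encode(X, Wx), encode(Y, Wy))
```

In practice the loss would be minimized with gradient descent over the encoder parameters, so that matched pairs rise above the in-batch noise labels under cosine similarity.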
