Multi‐label learning on principles of reverse k‐nearest neighbourhood
Author(s) -
Sadhukhan Payel,
Palit Sarbani
Publication year - 2021
Publication title -
Expert Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.365
H-Index - 38
eISSN - 1468-0394
pISSN - 0266-4720
DOI - 10.1111/exsy.12615
Subject(s) - neighbourhood (mathematics) , computer science , jaccard index , classifier (uml) , k nearest neighbors algorithm , artificial intelligence , euclidean distance , machine learning , data mining , pattern recognition (psychology) , mathematics , mathematical analysis
In this article, we present a novel neighbourhood-based multi-label classifier, Multi-label Learning on principles of Reverse k-Nearest Neighbourhood (ML-RkNN), in which we estimate the neighbourhood of each point on the basis of its reverse k-nearest neighbourhood (RkNN). Through RkNN, for the same value of k, different instances obtain different numbers of neighbours, and this happens adaptively according to the neighbourhood configuration of the points. The automatically adaptive neighbourhood enables better learning of the local configurations around the points. Our scheme also handles the local class imbalances prevailing in a dataset implicitly, by comparing the class distributions of the test points and their reverse nearest neighbours. This implicit and adaptive handling is particularly useful for multi-label datasets, whose labels are differentially imbalanced. An empirical study is performed on 10 real-world multi-label datasets against five neighbourhood-based multi-label learners, with Macro F1 as the evaluation metric. The proposed method gives statistically superior performance with respect to three of the comparing methods and statistically comparable performance with respect to the other two. Additionally, we explore the use of two distance metrics, Euclidean and Jaccard, in our scheme for nominal datasets.
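The reverse k-nearest neighbourhood idea at the core of ML-RkNN can be sketched as follows (a minimal illustration of the general RkNN construction, not the authors' implementation; the function name `reverse_knn` and the use of Euclidean distance here are assumptions): point j is a reverse k-nearest neighbour of point i when i appears among the k nearest neighbours of j. Because this relation is not symmetric, different points end up with different numbers of reverse neighbours even for a fixed k.

```python
import numpy as np

def reverse_knn(X, k):
    """For each point i in X, return the indices j such that
    i is among the k nearest neighbours of j (reverse k-NN).
    Neighbourhood sizes vary across points for the same k."""
    n = len(X)
    # Pairwise Euclidean distances; Jaccard could be substituted
    # for nominal data, as the article discusses.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # a point is not its own neighbour
    # k nearest neighbours of each point (row j -> knn[j])
    knn = np.argsort(dists, axis=1)[:, :k]
    # Invert the relation: rknn[i] collects every j with i in knn[j]
    rknn = [[] for _ in range(n)]
    for j in range(n):
        for i in knn[j]:
            rknn[int(i)].append(j)
    return rknn
```

For example, with four 1-D points `[0.0, 0.1, 0.3, 5.0]` and k = 1, the middle point 1 is the nearest neighbour of both 0 and 2 and so has two reverse neighbours, while the outlier at 5.0 has none, illustrating the adaptive neighbourhood sizes described above.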
