
Improving k-Nearest Neighbors Algorithm for Imbalanced Data Classification
Author(s) -
Zhan Shi
Publication year - 2020
Publication title -
IOP Conference Series: Materials Science and Engineering
Language(s) - English
Resource type - Journals
eISSN - 1757-899X
pISSN - 1757-8981
DOI - 10.1088/1757-899X/719/1/012072
Subject(s) - boosting (machine learning) , computer science , k nearest neighbors algorithm , artificial intelligence , machine learning , class (philosophy) , parametric statistics , data mining , test data , statistical classification , algorithm , pattern recognition (psychology) , mathematics , statistics , programming language
The k-Nearest Neighbors (k-NN) algorithm is a classic non-parametric method with wide applications in data classification and prediction. Like many other machine learning schemes, the performance of k-NN classifiers is significantly degraded by imbalanced class distributions: data instances in the majority class tend to dominate the prediction of the test instances. In this paper, we examine data pre-processing techniques that can rebalance the training data and enhance the performance of k-NN classifiers on imbalanced data sets. We conduct extensive experiments on 14 real-world data sets collected from different application domains, and perform statistical tests to verify the significance of different data pre-processing techniques in terms of boosting k-NN classification precision.
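The core idea can be sketched as follows. This is an illustrative example, not the paper's specific experimental setup: it uses random oversampling (one of the simplest rebalancing pre-processing techniques) together with a plain k-NN majority-vote classifier on a hypothetical toy data set, to show how rebalancing keeps the majority class from dominating a test prediction.

```python
import random
from collections import Counter
from math import dist

def random_oversample(X, y, seed=0):
    """Duplicate minority-class instances until every class matches the
    majority-class count -- a simple rebalancing pre-processing step."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    Xb, yb = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            i = rng.choice(idx)  # resample an existing minority instance
            Xb.append(X[i])
            yb.append(label)
    return Xb, yb

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-NN: majority vote among the k closest training points."""
    neighbors = sorted(zip(X_train, y_train), key=lambda p: dist(p[0], x))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Hypothetical imbalanced toy data: 6 majority vs 2 minority instances.
X = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5), (1, 0.5),
     (5, 5), (5.2, 5.1)]
y = ["maj"] * 6 + ["min"] * 2

Xb, yb = random_oversample(X, y)

# A test point sitting right next to the minority cluster:
print(knn_predict(X, y, (4.8, 4.9), k=5))    # imbalanced data -> "maj"
print(knn_predict(Xb, yb, (4.8, 4.9), k=5))  # rebalanced data -> "min"
```

With k=5 and the original data, only 2 of the 5 nearest neighbors are minority instances, so the majority class wins the vote even though the test point lies inside the minority cluster; after oversampling, the duplicated minority instances recapture the vote. The paper compares several such pre-processing techniques empirically rather than prescribing this one.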