Open Access
EASY ENSEMBLE WITH RANDOM FOREST TO HANDLE IMBALANCED DATA IN CLASSIFICATION
Author(s) -
Sarini Abdullah,
GV Prasetyo
Publication year - 2020
Publication title -
Journal of Fundamental Mathematics and Applications
Language(s) - English
Resource type - Journals
eISSN - 2621-6035
pISSN - 2621-6019
DOI - 10.14710/jfma.v3i1.7415
Subject(s) - random forest , resampling , computer science , recall , classifier , class , ensemble learning , artificial intelligence , machine learning , data set , oversampling , training set , data mining , pattern recognition
Imbalanced data can cause issues at the problem-definition level, the algorithm level, and the data level. Several methods have been developed to address this issue; one state-of-the-art method is Easy Ensemble. Easy Ensemble has been claimed to improve model performance on the minority class and to overcome the deficiencies of random under-sampling. In this paper we discuss the implementation of Easy Ensemble with Random Forest classifiers to handle the imbalance problem in a credit-scoring setting. The combined method is applied to two datasets taken from the data science competition websites finhacks.id and kaggle.com, with majority-to-minority class proportions of 70:30 and 94:6, respectively. The results show that resampling with Easy Ensemble can improve Random Forest classifier performance on the minority class. Recall on the minority class increased significantly after resampling: for the first dataset (finhacks.id) it rose from 0.49 to 0.82, and for the second dataset (kaggle.com) from just 0.14 to 0.73.
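The procedure the abstract describes can be sketched in plain scikit-learn: Easy Ensemble builds several balanced subsets (all minority samples plus an equal-size random draw from the majority), trains one classifier per subset, and averages their predictions. The sketch below is a minimal illustration under assumptions not stated in the paper (class label 1 for the minority, 10 subsets, synthetic data at roughly the 94:6 ratio of the second dataset); it is not the authors' exact implementation, and a library version exists in imbalanced-learn's `EasyEnsembleClassifier`.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

def easy_ensemble_rf(X, y, n_subsets=10, seed=0):
    """Easy Ensemble sketch: one Random Forest per balanced subset
    (all minority samples + an equal-size random under-sample of the
    majority). Assumes label 1 marks the minority class."""
    rng = np.random.default_rng(seed)
    min_idx = np.flatnonzero(y == 1)
    maj_idx = np.flatnonzero(y == 0)
    models = []
    for i in range(n_subsets):
        # random under-sampling of the majority, without replacement
        sampled_maj = rng.choice(maj_idx, size=len(min_idx), replace=False)
        idx = np.concatenate([min_idx, sampled_maj])
        clf = RandomForestClassifier(n_estimators=100, random_state=i)
        clf.fit(X[idx], y[idx])
        models.append(clf)
    return models

def predict_ensemble(models, X):
    # average the minority-class probabilities across subsets, then threshold
    proba = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return (proba >= 0.5).astype(int)

# Synthetic imbalanced data, ~94:6 like the paper's second dataset
X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.94, 0.06], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

models = easy_ensemble_rf(X_tr, y_tr)
preds = predict_ensemble(models, X_te)
minority_recall = recall_score(y_te, preds)
```

Because every subset is balanced, each forest sees the minority class as often as the majority, which is what drives the recall gains reported in the abstract; the averaging step then damps the variance introduced by discarding majority samples in any single draw.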
