Open Access
MEGA: Predicting the best classifier combination using meta-learning and a genetic algorithm
Author(s) -
Paria Golshanrad,
Hossein Rahmani,
Banafsheh Karimian,
Fatemeh Karimkhani,
Gerhard Weiß
Publication year - 2021
Publication title -
intelligent data analysis
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.231
H-Index - 47
eISSN - 1571-4128
pISSN - 1088-467X
DOI - 10.3233/ida-205494
Subject(s) - artificial intelligence, machine learning, computer science, classifier (uml), random subspace method, decision tree, mega, a priori and a posteriori, ensemble learning, genetic algorithm, meta learning (computer science), pattern recognition (psychology), algorithm, data mining, task (project management), engineering, philosophy, epistemology, physics, systems engineering, astronomy
Classifier combination through ensemble systems is one of the most effective approaches to improve the accuracy of classification systems. Ensemble systems are generally used to combine classifiers; however, selecting the best combination of individual classifiers is a challenging task. In this paper, we propose an efficient ensembling method that employs both meta-learning and a genetic algorithm to select the best classifiers. Our method, called MEGA (MEta-learning and a Genetic Algorithm for algorithm recommendation), has three main components: Training, Model Interpretation, and Testing. The Training component extracts meta-features from each training dataset and uses a genetic algorithm to discover the best classifier combination. The Model Interpretation component interprets the relationships between meta-features and classifiers using the Apriori and multi-label decision tree algorithms. Finally, the Testing component uses a weighted k-nearest-neighbors algorithm to predict the best combination of classifiers for unseen datasets. We present extensive experimental results that demonstrate the performance of MEGA. MEGA achieves superior results compared with three other methods and, most importantly, is able to find novel interpretable rules that can be used to select the best combination of classifiers for an unseen dataset.
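To make the Training component concrete, the following is a minimal, illustrative Python sketch of selecting a classifier combination with a genetic algorithm, in the spirit of what the abstract describes. It is not the authors' implementation: the scikit-learn classifier pool, the use of cross-validated majority-vote accuracy as the fitness function, and all GA settings (population size, truncation selection, one-point crossover, bit-flip mutation) are assumptions chosen for illustration.

# Sketch only: GA-based selection of a classifier combination (assumed setup).
import random
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Assumed candidate pool of base classifiers (not from the paper).
POOL = [
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
    ("knn", KNeighborsClassifier()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(random_state=0)),
]

X, y = load_iris(return_X_y=True)  # stand-in for one training dataset

def fitness(mask):
    # A chromosome is a bit mask over the pool; fitness is the cross-validated
    # accuracy of the resulting majority-vote ensemble (an assumed criterion).
    chosen = [POOL[i] for i, bit in enumerate(mask) if bit]
    if not chosen:
        return 0.0
    ensemble = VotingClassifier(chosen, voting="hard")
    return cross_val_score(ensemble, X, y, cv=3).mean()

def evolve(pop_size=10, generations=5, mutation_rate=0.2):
    rng = random.Random(0)
    pop = [[rng.randint(0, 1) for _ in POOL] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(POOL))      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]               # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best combination:", [name for (name, _), bit in zip(POOL, best) if bit])

In MEGA itself, the combination found this way would be recorded together with the dataset's meta-features, so that the Testing component can later recommend a combination for an unseen dataset via weighted k-nearest neighbors over those meta-features.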
