Two‐level pruning based ensemble with abstained learners for concept drift in data streams
Author(s) - Goel Kanu, Batra Shalini
Publication year - 2021
Publication title - Expert Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.365
H-Index - 38
eISSN - 1468-0394
pISSN - 0266-4720
DOI - 10.1111/exsy.12661
Subject(s) - pruning , computer science , concept drift , majority rule , machine learning , artificial intelligence , voting , ensemble learning , data mining , data stream mining
Mining data streams for predictive analysis is one of the most active topics in machine learning. As data distributions drift, it becomes important to build adaptive systems that are both dynamic and accurate. Although ensembles are powerful in improving the accuracy of incremental learning, it is crucial to maintain a set of the most suitable learners in the ensemble while considering the diversity between them. By adding diversity‐based pruning to traditional accuracy‐based pruning, this paper proposes a novel concept drift handling approach named Two‐Level Pruning based Ensemble with Abstained Learners (TLP‐EnAbLe). In this approach, deferred similarity‐based pruning delays the removal of underperforming similar learners until it is certain that they are no longer fit for prediction, so the ensemble retains diverse learners that are well suited to the current concept. Two‐level abstaining monitors the performance of learners and selects the most competent set to participate in decision making; this enhances the traditional majority voting scheme by dynamically choosing high‐performing learners and abstaining those that are unsuitable for prediction. Our experiments demonstrate that TLP‐EnAbLe handles concept drift more effectively than other state‐of‐the‐art algorithms on nineteen artificially drifting and ten real‐world datasets. Further, statistical tests conducted on various drift patterns, including gradual, abrupt, recurring drifts and their combinations, confirm the efficiency of the proposed approach.
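The abstain-then-prune idea described in the abstract can be sketched in a few lines: learners whose tracked accuracy falls below one threshold abstain from the vote, and those falling below a second, lower threshold are pruned from the ensemble. This is a minimal illustrative sketch, not the authors' TLP‐EnAbLe implementation; the class name, thresholds, and the way accuracy is supplied are all assumptions.

```python
from collections import Counter


class AbstainingEnsemble:
    """Illustrative two-threshold ensemble: abstain first, prune later.

    This is a hypothetical sketch of the general abstain/prune pattern,
    not the TLP-EnAbLe algorithm itself (which also uses deferred
    similarity-based pruning between similar learners).
    """

    def __init__(self, abstain_threshold=0.5, prune_threshold=0.3):
        # Level 1: below this accuracy a learner abstains from voting.
        self.abstain_threshold = abstain_threshold
        # Level 2: below this accuracy a learner is removed entirely.
        self.prune_threshold = prune_threshold
        # Each entry is [predict_fn, tracked_accuracy].
        self.learners = []

    def add_learner(self, predict_fn, accuracy=1.0):
        self.learners.append([predict_fn, accuracy])

    def prune(self):
        # Drop learners whose tracked accuracy fell below the prune threshold.
        self.learners = [l for l in self.learners
                         if l[1] >= self.prune_threshold]

    def predict(self, x):
        # Only competent learners (above the abstain threshold) vote;
        # the final label is decided by majority among the non-abstainers.
        votes = [fn(x) for fn, acc in self.learners
                 if acc >= self.abstain_threshold]
        if not votes:
            return None  # every learner abstained
        return Counter(votes).most_common(1)[0][0]


if __name__ == "__main__":
    ens = AbstainingEnsemble()
    ens.add_learner(lambda x: 1, accuracy=0.9)
    ens.add_learner(lambda x: 1, accuracy=0.7)
    ens.add_learner(lambda x: 0, accuracy=0.4)   # abstains, not yet pruned
    print(ens.predict(None))                      # majority of the voters
```

In a streaming setting the tracked accuracies would be updated incrementally after each labelled instance, so a learner that deteriorates under drift first loses its vote and, if it keeps failing, is eventually pruned rather than removed at the first dip.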