Local Models—The Key to Boosting Stable Learners Successfully
Author(s) - Kai Ming Ting, Lian Zhu, Jonathan R. Wells
Publication year - 2013
Publication title - Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/j.1467-8640.2012.00441.x
Subject(s) - boosting (machine learning), naive Bayes classifier, decision tree, machine learning, support vector machine, artificial intelligence, computer science, stability (learning theory), ensemble learning, gradient boosting, pattern recognition, random forest
Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not of stable learners like Support Vector Machines (SVM), k‐nearest neighbors, and Naive Bayes classifiers. In addition to the model stability problem, the high training time complexity of some stable learners, such as SVM, makes it prohibitively expensive to generate the multiple models needed to form an ensemble on large data sets. This paper introduces a simple method that not only enables Boosting to improve the predictive performance of stable learners, but also significantly reduces the computational time needed to generate an ensemble of stable learners such as SVM on large data sets that would otherwise be infeasible. The method builds local models instead of global models; to the best of our knowledge, it is the first method to solve both problems in Boosting stable learners at the same time. We implement the method by using a decision tree to define local regions and building a local model for each region. We show that this implementation enables successful Boosting of three types of stable learners: SVM, k‐nearest neighbors, and Naive Bayes classifiers.
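
The abstract's core idea (a decision tree defines local regions; a stable learner is trained on the examples in each region) can be sketched in a few lines. The following is a minimal, illustrative reconstruction in Python using scikit-learn, not the authors' exact algorithm: the class name LocalModelClassifier and all parameter choices (e.g., max_leaf_nodes=8, an RBF SVC as the local model) are assumptions made for the sketch.

```python
# Illustrative sketch only: a tree partitions the input space into leaves,
# and one local SVM is fit per leaf. Not the paper's exact procedure.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin, clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC


class LocalModelClassifier(BaseEstimator, ClassifierMixin):
    """Fit a shallow decision tree to define local regions (its leaves),
    then fit one local model (here an SVM) on the data in each leaf."""

    def __init__(self, max_leaf_nodes=8, base_model=None):
        self.max_leaf_nodes = max_leaf_nodes
        self.base_model = base_model

    def fit(self, X, y, sample_weight=None):
        self.classes_ = np.unique(y)
        # The decision tree defines the local regions.
        self.tree_ = DecisionTreeClassifier(max_leaf_nodes=self.max_leaf_nodes)
        self.tree_.fit(X, y, sample_weight=sample_weight)
        leaves = self.tree_.apply(X)
        base = self.base_model if self.base_model is not None else SVC()
        self.local_models_ = {}
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            if len(np.unique(y[mask])) == 1:
                # A pure leaf needs no model; store its constant label.
                self.local_models_[leaf] = y[mask][0]
            else:
                model = clone(base)
                sw = sample_weight[mask] if sample_weight is not None else None
                model.fit(X[mask], y[mask], sample_weight=sw)
                self.local_models_[leaf] = model
        return self

    def predict(self, X):
        # Route each test point through the tree to its leaf,
        # then delegate to that leaf's local model.
        leaves = self.tree_.apply(X)
        y_pred = np.empty(len(X), dtype=self.classes_.dtype)
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            model = self.local_models_[leaf]
            if hasattr(model, "predict"):
                y_pred[mask] = model.predict(X[mask])
            else:  # constant-label leaf
                y_pred[mask] = model
        return y_pred
```

Because fit accepts sample_weight, such a learner can be dropped into an existing Boosting implementation, e.g., sklearn.ensemble.AdaBoostClassifier(estimator=LocalModelClassifier(), algorithm="SAMME") in a recent scikit-learn (older releases name the parameter base_estimator). The sketch also hints at the paper's speed claim: each SVM is trained only on its leaf's subset, which is far cheaper than training a global SVM on the full data set.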
