Competence‐conscious associative classification
Author(s) -
Adriano Veloso,
Mohammed Zaki,
Wagner Meira,
Marcos Gonçalves
Publication year - 2009
Publication title -
Statistical Analysis and Data Mining: The ASA Data Science Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.381
H-Index - 33
eISSN - 1932-1872
pISSN - 1932-1864
DOI - 10.1002/sam.10058
Subject(s) - computer science , classifier (uml) , artificial intelligence , machine learning , associative property , competence (human resources) , statistic , dilemma , pattern recognition (psychology) , data mining , mathematics , statistics , psychology , social psychology , geometry , pure mathematics
The classification performance of an associative classifier depends strongly on the statistical measure, or metric, used to quantify the strength of the association between features and classes (e.g. confidence, correlation). Previous studies have shown that classifiers produced by different metrics may provide conflicting predictions, and that the best metric to use is data‐dependent and rarely known while designing the classifier. This uncertainty concerning the optimal match between metrics and problems is a dilemma that prevents associative classifiers from achieving their maximal performance. This dilemma is the focus of this paper. A possible solution is to learn the competence, expertise, or assertiveness of metrics. The basic idea is that each metric has a specific sub‐domain for which it is most competent (i.e. it consistently produces more accurate classifiers than those produced by other metrics). In particular, we investigate stacking‐based meta‐learning methods, which use the training data to find the domain of competence of each metric. The meta‐classifier describes the domains of competence (or areas of expertise) of each metric, enabling a more sensible use of these metrics so that competence‐conscious classifiers can be produced (i.e. a metric is used to produce classifiers only for test instances that belong to its domain of competence). We conducted a systematic and comprehensive evaluation, using different datasets and evaluation measures, of classifiers produced by different metrics. The results show that, while no metric is always superior to all others, selecting metrics according to their competence/expertise (i.e. competence‐conscious associative classifiers) is very effective, showing gains that range from 1.2% to 26.3% compared with the baselines (SVMs and an existing ensemble method). Copyright © 2009 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 2: 361‐377, 2009
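
The following is a minimal sketch of the stacking‐based idea described in the abstract, not the paper's implementation. Ordinary scikit‐learn models stand in for the associative classifiers that would be built with different interestingness metrics (the metric names "confidence", "correlation", and "lift", the stand‐in models, and the synthetic dataset are all illustrative assumptions). Out‐of‐fold predictions on the training data label each instance with a metric that classifies it correctly, a meta‐classifier learns these domains of competence, and each test instance is routed to the metric judged most competent for it.

```python
# Illustrative sketch of competence-conscious, stacking-based metric selection.
# The per-metric "associative classifiers" below are stand-ins (ordinary
# scikit-learn models); the paper's classifiers are association-rule based.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Stand-ins for classifiers produced by different metrics (hypothetical mapping).
base_learners = {
    "confidence": DecisionTreeClassifier(max_depth=3, random_state=0),
    "correlation": GaussianNB(),
    "lift": LogisticRegression(max_iter=1000),
}

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Estimate, per training instance, which metric's classifier is correct,
#    using out-of-fold predictions so competence is not measured on seen data.
oof = {name: cross_val_predict(clf, X_tr, y_tr, cv=5)
       for name, clf in base_learners.items()}
names = list(base_learners)
correct = np.stack([oof[n] == y_tr for n in names], axis=1)

# Label each instance with the first metric that got it right
# (ties broken arbitrarily; instances no metric solved default to metric 0).
competence_label = np.argmax(correct, axis=1)

# 2) Meta-classifier: learn the domain of competence of each metric.
meta = DecisionTreeClassifier(max_depth=5, random_state=0)
meta.fit(X_tr, competence_label)

# 3) Fit each base classifier on the full training set.
for clf in base_learners.values():
    clf.fit(X_tr, y_tr)

# 4) At test time, route each instance to the metric judged most competent.
chosen = meta.predict(X_te)
y_pred = np.array([base_learners[names[m]].predict(x.reshape(1, -1))[0]
                   for m, x in zip(chosen, X_te)])
print("competence-conscious accuracy:", (y_pred == y_te).mean())
```

The out‐of‐fold step is the design choice that makes this a stacking scheme: competence is estimated on predictions the base classifiers made for data they were not trained on, rather than on resubstitution accuracy.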
