Open Access
Analysis of a Fusion Method for Combining Marginal Classifiers
Author(s) - Mark D. Happel, P. Bock
Publication year - 2000
Publication title - Lecture Notes in Computer Science
Language(s) - English
Resource type - Book series
SCImago Journal Rank - 0.249
H-Index - 400
eISSN - 1611-3349
pISSN - 0302-9743
ISBN - 3-540-67704-6
DOI - 10.1007/3-540-45014-9_13
Subject(s) - classifier (uml) , computer science , joint probability distribution , marginal distribution , pattern recognition (psychology) , bayes classifier , bayesian probability , artificial intelligence , bayes error rate , sample space , probability of error , margin classifier , feature vector , bayes' theorem , probability density function , machine learning , statistics , algorithm , mathematics , random variable
The use of multiple features by a classifier often reduces the probability of error, but the design of an optimal Bayesian classifier for multiple features depends on the estimation of multidimensional joint probability density functions and therefore requires a design sample size that, in general, grows exponentially with the number of dimensions. The classification method described in this paper makes decisions by combining the decisions of multiple Bayesian classifiers, using an additional classifier that estimates the joint probability densities of the decision space rather than of the feature space. A proof is presented for the restricted case of two classes and two features, showing that the method's probability of error is always less than or equal to that of the marginal classifier with the lowest probability of error.
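
As a minimal, hypothetical sketch of the fusion scheme described above (not the authors' implementation), the snippet below assumes 1-D Gaussian class-conditional densities for each marginal Bayesian classifier and estimates the joint distribution over the discrete decision space (the pair of marginal decisions) from relative frequencies; the function names and the synthetic data are illustrative only.

import numpy as np

def fit_marginal(x, y):
    # Per-class mean, variance, and prior for a single feature.
    params = {}
    for c in np.unique(y):
        xc = x[y == c]
        params[c] = (xc.mean(), xc.var() + 1e-9, xc.size / y.size)
    return params

def marginal_decide(x, params):
    # Bayes decision on one feature: argmax_c p(x | c) P(c).
    classes = sorted(params)
    scores = []
    for c in classes:
        m, v, p = params[c]
        scores.append(p * np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v))
    return np.array(classes)[np.argmax(scores, axis=0)]

def fit_fusion(d1, d2, y, n_classes=2):
    # Joint probability of (class, decision-1, decision-2), estimated from counts.
    counts = np.full((n_classes, n_classes, n_classes), 1e-9)  # light smoothing
    for a, b, c in zip(d1, d2, y):
        counts[c, a, b] += 1
    return counts / counts.sum()

def fusion_decide(d1, d2, joint):
    # Final decision: the class with the largest joint probability for each decision pair.
    return joint[:, d1, d2].argmax(axis=0)

# Synthetic two-class, two-feature example (illustrative data only).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 2000)
X = np.column_stack([rng.normal(y, 1.0), rng.normal(2.0 * y, 1.5)])

p1, p2 = fit_marginal(X[:, 0], y), fit_marginal(X[:, 1], y)
d1, d2 = marginal_decide(X[:, 0], p1), marginal_decide(X[:, 1], p2)
fused = fusion_decide(d1, d2, fit_fusion(d1, d2, y))

print("marginal-1 error:", (d1 != y).mean())
print("marginal-2 error:", (d2 != y).mean())
print("fused error:     ", (fused != y).mean())

Because the fusion stage only needs the joint distribution over a small discrete decision space (2 x 2 for two classes and two marginal decisions), its sample-size requirement does not grow with the dimensionality of the feature space in the way that estimating the full joint density of the features would.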
