Learning classification trees
Author(s) - Wray Buntine
Publication year - 1992
Publication title - Statistics and Computing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.009
H-Index - 77
eISSN - 1573-1375
pISSN - 0960-3174
DOI - 10.1007/bf01889584
Subject(s) - pruning, smoothing, bayesian probability, artificial intelligence, machine learning, computer science, tree (set theory), algorithm, mathematics, data mining, pattern recognition (psychology), mathematical analysis, agronomy, computer vision, biology
Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. This paper outlines how a tree learning algorithm can be derived using Bayesian statistics. This introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule is similar to Quinlan's information gain, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, C4 (Quinlan et al., 1987) and CART (Breiman et al., 1984), show that the full Bayesian algorithm can produce more accurate predictions than versions of these other approaches, though it pays a computational price.
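The abstract compares its Bayesian splitting rule to Quinlan's information gain, which scores a candidate test by how much it reduces class-label entropy. As a rough illustration of that baseline criterion only (not the paper's Bayesian rule; the example data are made up), a minimal Python sketch:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, split_groups):
    """Entropy reduction from partitioning `labels` into `split_groups`.

    `split_groups` holds one label list per branch of a candidate test;
    a tree learner would choose the test that maximises this score.
    """
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in split_groups)
    return entropy(labels) - remainder

# Toy two-class example with a binary split.
labels = ["yes", "yes", "yes", "no", "no", "no"]
left, right = ["yes", "yes", "yes"], ["no", "no", "no"]
print(information_gain(labels, [left, right]))  # 1.0 bit: a perfect split
```

In the paper's framing, this greedy split score is kept roughly as-is, while the post-hoc pruning used by C4 and CART is replaced by Bayesian smoothing of leaf probabilities and averaging over trees.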
