LEARNING STRUCTURED BAYESIAN NETWORKS: COMBINING ABSTRACTION HIERARCHIES AND TREE‐STRUCTURED CONDITIONAL PROBABILITY TABLES
Author(s) - Marie DesJardins, Priyang Rathod, Lise Getoor
Publication year - 2008
Publication title - Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/j.1467-8640.2007.00320.x
Subject(s) - computer science, conditional independence, artificial intelligence, Bayesian network, tree structure, machine learning, data mining, tree (set theory), posterior probability, conditional probability, cluster analysis, theoretical computer science, mathematics, data structure, Bayesian probability, statistics, mathematical analysis, programming language
Context-specific independence representations, such as tree-structured conditional probability distributions, capture local independence relationships among the random variables in a Bayesian network (BN). These relationships can also be captured by using attribute-value hierarchies to find an appropriate level of abstraction for the values used to describe the conditional probability distributions. Capturing this local structure is important because it reduces the number of parameters required to represent the distribution, which can lead to more robust parameter estimation and structure selection, more efficient inference algorithms, and more interpretable models. In this paper, we introduce Tree-Abstraction-Based Search (TABS), an approach that learns a data distribution by inducing the graph structure and parameters of a BN from training data. TABS combines tree structure and attribute-value hierarchies to compactly represent conditional probability tables. To construct the attribute-value hierarchies, we investigate two data-driven techniques: a global clustering method, which uses all of the training data to build the hierarchies and can be performed as a preprocessing step, and a local clustering method, which uses only the local network structure to learn them. We present empirical results for three real-world domains, finding that (1) combining tree structure and attribute-value hierarchies improves generalization accuracy while significantly reducing the number of parameters in the learned networks, and (2) data-derived hierarchies perform as well as or better than expert-provided hierarchies.
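As a rough illustration of the representational idea (this is not the authors' TABS code; the class names, variables, value groupings, and probabilities below are invented for the example), a tree-structured CPT whose internal nodes split on abstract parent values drawn from an attribute-value hierarchy might be sketched in Python as follows:

    # Minimal sketch, assuming a discrete BN variable whose CPT is stored as a
    # decision tree. Internal nodes test one parent variable against *abstract*
    # values, i.e., groups of concrete values taken from an attribute-value
    # hierarchy. All identifiers here are hypothetical, not from the paper.

    from dataclasses import dataclass
    from typing import Dict, FrozenSet, Union

    @dataclass
    class Leaf:
        """Leaf: one conditional distribution P(X | path context)."""
        dist: Dict[str, float]  # value of X -> probability

    @dataclass
    class Split:
        """Internal node: branches on abstract values of one parent."""
        parent: str                             # parent variable name
        branches: Dict[FrozenSet[str], "Node"]  # abstract value -> subtree

    Node = Union[Leaf, Split]

    def lookup(node: Node, assignment: Dict[str, str]) -> Dict[str, float]:
        """Walk the tree using the parents' assignment; return P(X | parents)."""
        while isinstance(node, Split):
            value = assignment[node.parent]
            # Follow the branch whose abstract value group contains the
            # concrete value (raises StopIteration if the groups don't cover it).
            node = next(sub for group, sub in node.branches.items() if value in group)
        return node.dist

    # Invented example: P(Income | Education, Region). The hierarchy groups the
    # five concrete Education values into two abstract classes, and three of the
    # four Region values into one class, so the tree needs only 3 leaves.
    cpt = Split(
        parent="Education",
        branches={
            frozenset({"hs", "some_college"}): Leaf({"low": 0.7, "high": 0.3}),
            frozenset({"bachelors", "masters", "phd"}): Split(
                parent="Region",
                branches={
                    frozenset({"urban"}): Leaf({"low": 0.2, "high": 0.8}),
                    frozenset({"rural", "suburban", "remote"}): Leaf({"low": 0.4, "high": 0.6}),
                },
            ),
        },
    )

    print(lookup(cpt, {"Education": "masters", "Region": "urban"}))  # {'low': 0.2, 'high': 0.8}

In this toy example a full table for P(Income | Education, Region) would store 5 x 4 = 20 conditional distributions, while the tree stores 3; this is the kind of parameter reduction the abstract attributes to combining tree structure with attribute-value hierarchies.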
