LEARNING BAYESIAN BELIEF NETWORKS: AN APPROACH BASED ON THE MDL PRINCIPLE
Author(s) -
Lam, Wai; Bacchus, Fahiem
Publication year - 1994
Publication title -
Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/j.1467-8640.1994.tb00166.x
Subject(s) - minimum description length , principle of maximum entropy , computer science , artificial intelligence , bayesian probability , bayesian network , machine learning , prior probability , entropy (arrow of time) , bayesian information criterion , mathematics , algorithm , physics , quantum mechanics
A new approach for learning Bayesian belief networks from raw data is presented. The approach is based on Rissanen's minimal description length (MDL) principle, which is particularly well suited for this task. Our approach does not require any prior assumptions about the distribution being learned. In particular, our method can learn unrestricted multiply‐connected belief networks. Furthermore, unlike other approaches, our method allows us to trade off accuracy against complexity in the learned model. This is important since, if the learned model is very complex (highly connected), it can be conceptually and computationally intractable. In such a case it would be preferable to use a simpler model, even if it is less accurate. The MDL principle offers a reasoned method for making this trade‐off. We also show that our method generalizes previous approaches based on Kullback cross‐entropy. Experiments have been conducted to demonstrate the feasibility of the approach.
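
The accuracy/complexity trade-off described in the abstract can be made concrete with a rough sketch of an MDL-style structure score: a network encoding length (parent lists plus conditional probability table parameters at a fixed numeric precision) added to a data encoding length (N times the empirical conditional entropy of each variable given its parents). The function name mdl_score, the fixed bits_per_param precision, and the toy data below are illustrative assumptions; they only approximate the encoding scheme the paper itself defines.

```python
import math
from collections import Counter

def mdl_score(data, structure, arities, bits_per_param=8):
    """Total description length (in bits) of a candidate network structure.

    data      : list of tuples, one discrete value per variable
    structure : dict mapping each variable index to a tuple of parent indices
    arities   : list giving the number of states of each variable
    Lower scores are better: the first term penalizes complex networks,
    the second penalizes networks that fit the data poorly.
    """
    n_vars = len(arities)
    n_rows = len(data)

    # Network encoding length: each node stores its parent list plus the
    # numeric parameters of its conditional probability table.
    model_bits = 0.0
    for var, parents in structure.items():
        n_params = (arities[var] - 1) * math.prod(arities[p] for p in parents)
        model_bits += len(parents) * math.log2(n_vars) + bits_per_param * n_params

    # Data encoding length: N times the empirical conditional entropy of each
    # variable given its parents (the "accuracy" side of the trade-off).
    data_bits = 0.0
    for var, parents in structure.items():
        joint = Counter((row[var],) + tuple(row[p] for p in parents) for row in data)
        parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
        for (x, *pa), count in joint.items():
            p_cond = count / parent_counts[tuple(pa)]
            data_bits += -count * math.log2(p_cond)

    return model_bits + data_bits

# Example: score two candidate structures over three binary variables.
rows = [(0, 0, 0), (0, 0, 1), (1, 1, 1), (1, 1, 0), (1, 1, 1), (0, 0, 0)]
arities = [2, 2, 2]
empty = {0: (), 1: (), 2: ()}        # no edges
chain = {0: (), 1: (0,), 2: (1,)}    # 0 -> 1 -> 2
print(mdl_score(rows, empty, arities), mdl_score(rows, chain, arities))
```

In a structure search, candidate networks with lower total description length would be preferred: a densely connected network lowers the data term but pays for it in the model term, which is the trade-off the abstract highlights.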
