A neural network learning method for belief networks
Author(s) -
Peng Yun,
Zhou Zonglin
Publication year - 1996
Publication title -
International Journal of Intelligent Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.291
H-Index - 87
eISSN - 1098-111X
pISSN - 0884-8173
DOI - 10.1002/(sici)1098-111x(199611)11:11<893::aid-int3>3.0.co;2-u
Subject(s) - computer science , artificial intelligence , machine learning , artificial neural network , hebbian theory , learning rule , competitive learning
This article presents a learning method for a special class of belief networks known as noisy‐or networks. By extending the Hebbian rule of neural networks, two learning rules are developed to learn the probabilities of nodes and the internode causal strengths, respectively. The latter rule also learns structures of networks because a nonzero causal strength indicates the existence of a causal link. One distinct feature of this method is its ability to work in a sequential or incremental manner in which a network adjusts its parameters upon the arrival of every case description. As a result, this method is capable of not only constructing a causal knowledge base from a fixed set of case data but also dynamically adapting an existing knowledge base to a changing environment. To prove the convergence of learning, a Liapunov function is identified for the dynamic system defined by the learning rules. Computer experiment results show that this method is significantly faster than some existing learning methods for belief networks. © 1996 John Wiley & Sons, Inc.
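To make the noisy-or model concrete, the sketch below computes the conditional probability it defines: each active parent independently fails to cause the child with probability 1 minus its causal strength. The incremental update shown after it is a generic Hebbian-style step, included only as an illustration of sequential parameter adjustment; the article's actual learning rules, Lyapunov analysis, and structure-learning behavior are not reproduced here, and the function names and learning rate are assumptions.

```python
def noisy_or(parents, strengths, leak=0.0):
    """P(child = 1 | parent states) under the noisy-or model.

    parents:   list of 0/1 parent states
    strengths: causal strength c_i of each parent link, in [0, 1]
    leak:      probability the child is on with no active parent
    """
    # Probability that every active cause independently fails.
    inhibit = 1.0 - leak
    for on, c in zip(parents, strengths):
        if on:
            inhibit *= 1.0 - c
    return 1.0 - inhibit

def hebbian_step(strengths, parents, child, lr=0.05):
    """Hypothetical Hebbian-style increment (not the article's rule):
    strengthen c_i when parent i and the child co-occur in a case,
    decay c_i when the parent fires without the child."""
    return [min(1.0, max(0.0, c + lr * ((p and child) - c * p)))
            for c, p in zip(strengths, parents)]

# One "case arrival" in the incremental regime: observe a case,
# adjust the causal strengths, and a strength drifting to zero
# would indicate the absence of that causal link.
c = [0.5, 0.5]
p = noisy_or([1, 1], c)          # 1 - 0.5 * 0.5 = 0.75
c = hebbian_step(c, [1, 1], 1)   # both strengths move toward 1
```

Because the update touches only the parameters of the links active in the current case, each case description can be processed as it arrives, which is the sequential/incremental property the abstract emphasizes.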
