Optimal learning in multilayer neural networks
Author(s) -
Ole Winther,
B. Lautrup,
Jiawen Zhang
Publication year - 1997
Publication title -
Physical Review E, Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics
Language(s) - English
Resource type - Journals
eISSN - 1095-3787
pISSN - 1063-651X
DOI - 10.1103/physreve.55.836
Subject(s) - computer science , generalization , artificial intelligence , machine learning , artificial neural network , bayes' theorem , algorithm , generalization error , bayesian probability , mathematics
The generalization performance of two learning algorithms, the Bayes algorithm and the "optimal learning" algorithm, is studied theoretically on two classification tasks. In the first example the task is defined by a restricted two-layer network, a committee machine; in the second it is defined by the so-called prototype problem. In both cases the architecture of the learning machine is a committee machine. For both tasks the optimal learning algorithm, which is optimal only when the solution is restricted to a specific architecture, performs worse than the overall optimal Bayes algorithm. Both, however, outperform the conventional stochastic Gibbs algorithm, especially on the prototype problem, where the task and the learning machine are very different.
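The three algorithms compared in the abstract can be illustrated with a small toy sketch. A committee machine outputs the majority vote of K hidden perceptrons; Gibbs learning picks one random student network consistent with the training set, while the Bayes algorithm takes a majority vote over many such students. The sizes (K, N, P) and the rejection-sampling procedure below are illustrative assumptions, not the paper's setup, which works analytically in the thermodynamic limit.

```python
import numpy as np

rng = np.random.default_rng(0)

def committee(x, W):
    """Committee machine: sign of the sum of the K hidden perceptron outputs."""
    return np.sign(np.sign(W @ x).sum())

# Toy teacher task: a committee machine with K hidden units on N inputs
# (hypothetical small sizes chosen only so the example runs quickly).
K, N, P = 3, 8, 6
W_teacher = rng.standard_normal((K, N))
X = rng.standard_normal((P, N))                       # P training inputs
y = np.array([committee(x, W_teacher) for x in X])    # teacher labels

# Crude stand-in for Gibbs sampling of the version space: rejection-sample
# student committee machines that classify the whole training set correctly.
students = []
while len(students) < 21:                             # odd count -> no tied votes
    W = rng.standard_normal((K, N))
    if all(committee(x, W) == t for x, t in zip(X, y)):
        students.append(W)

x_new = rng.standard_normal(N)
gibbs_pred = committee(x_new, students[0])            # Gibbs: one random student
bayes_pred = np.sign(sum(committee(x_new, W) for W in students))  # Bayes: majority vote
```

Averaged over many test points and training sets, the Bayes vote generalizes better than any single Gibbs student, which is the qualitative effect the paper quantifies.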
