Open Access
Accelerate training of restricted Boltzmann machines via iterative conditional maximum likelihood estimation
Author(s) -
Mingqi Wu,
Ye Luo,
Faming Liang
Publication year - 2019
Publication title -
Statistics and Its Interface
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.388
H-Index - 18
eISSN - 1938-7997
pISSN - 1938-7989
DOI - 10.4310/18-sii552
Subject(s) - restricted Boltzmann machine , contrastive divergence , convergence , algorithm , artificial intelligence , artificial neural network , hidden Markov model , mathematical optimization , computer science , mathematics
Restricted Boltzmann machines (RBMs) have become a popular tool for feature coding and extraction in unsupervised learning in recent years. However, an efficient algorithm for training the RBM is still lacking, because its likelihood function contains an intractable normalizing constant. Existing algorithms, such as contrastive divergence and its variants, approximate the gradient of the likelihood function using Markov chain Monte Carlo. This approximation is time consuming, however, and the approximation error often impedes the convergence of the training algorithm. This paper proposes a fast algorithm for training RBMs that treats the hidden states as missing data and estimates the parameters of the RBM via an iterative conditional maximum likelihood estimation approach, which avoids the intractable normalizing constant altogether. Numerical results indicate that the proposed algorithm yields a drastic improvement over the contrastive divergence algorithm in RBM training. The paper also presents an extension of the proposed algorithm for coping with missing data in RBM training and illustrates its application with an example on drug-target interaction prediction.
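The idea described in the abstract can be sketched as follows: alternate between imputing the binary hidden states from p(h|v) and taking a gradient step on the tractable conditional log-likelihood log p(v|h) + log p(h|v), neither of which involves the RBM's normalizing constant. The sketch below is an illustration under these assumptions only, with hypothetical names (`train_rbm_icmle`, etc.); it is not the authors' implementation, and the paper's actual estimator may differ in the imputation and maximization details.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm_icmle(V, n_hidden, n_iters=50, lr=0.1, seed=0):
    """Illustrative iterative conditional MLE for a binary RBM.

    V : (n_samples, n_visible) binary data matrix.
    Alternates (1) imputing hidden states by maximizing p(h|v)
    and (2) a gradient step on log p(v|h) + log p(h|v), both of
    which are free of the intractable normalizing constant.
    """
    rng = np.random.default_rng(seed)
    n, d = V.shape
    W = 0.01 * rng.standard_normal((d, n_hidden))  # weights
    b = np.zeros(d)          # visible biases
    c = np.zeros(n_hidden)   # hidden biases
    for _ in range(n_iters):
        # Step 1: impute hidden states (conditional maximization of p(h|v))
        H = (sigmoid(V @ W + c) > 0.5).astype(float)
        # Step 2: tractable conditional log-likelihood gradients
        pv = sigmoid(H @ W.T + b)   # p(v_i = 1 | h) under current params
        ph = sigmoid(V @ W + c)     # p(h_j = 1 | v) under current params
        dW = ((V - pv).T @ H + V.T @ (H - ph)) / n
        db = (V - pv).mean(axis=0)
        dc = (H - ph).mean(axis=0)
        W += lr * dW
        b += lr * db
        c += lr * dc
    return W, b, c
```

Because both conditionals factorize over units in an RBM, each gradient term is a simple outer product; no Markov chain needs to be run, which is the source of the speedup over contrastive divergence claimed in the abstract.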
