On theoretical justification of the forward–backward algorithm for the variational learning of Bayesian hidden Markov models
Author(s) - Li Tao, Ma Jinwen
Publication year - 2022
Publication title - IET Signal Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.384
H-Index - 42
eISSN - 1751-9683
pISSN - 1751-9675
DOI - 10.1049/sil2.12129
Subject(s) - algorithm, hidden markov model, forward algorithm, probabilistic logic, computer science, bayesian probability, markov process, mathematics, hidden semi markov model, markov chain, artificial intelligence, markov model, machine learning, markov property, variable order markov model, statistics
In the variational learning process of a Bayesian hidden Markov model (HMM), the forward‐backward algorithm is applied heuristically, without theoretical justification. This is potentially problematic because the original derivation of the forward‐backward algorithm implicitly requires the parameters to be normalized, which does not hold in the variational learning of a Bayesian HMM. In this paper, we prove that such a requirement is not necessary for the forward‐backward algorithm to obtain the correct result. We prove this from two perspectives. The first proof verifies directly that implementing the forward‐backward algorithm with the unnormalized parameters is equivalent to implementing it with the normalized parameters. The second proof gives a new derivation of the forward‐backward algorithm that relies on neither the hidden Markov assumptions nor the probabilistic meaning of the parameters. As a result, applying the forward‐backward algorithm in the variational learning of Bayesian hidden Markov models is theoretically correct and well justified.
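To make the claim concrete, the sketch below is a minimal toy example in Python/NumPy (not code from the paper; all variable names are illustrative). It runs the standard scaled forward‐backward recursions on unnormalized non-negative weights pi, A and B, of the kind that arise in variational learning when the parameters are replaced by exponentials of their expected logarithms and therefore no longer sum to one, and checks the resulting state marginals against brute-force enumeration of the unnormalized joint. The recursions never use the fact that the parameters are normalized, which is the point the paper establishes rigorously.

```python
# Minimal sketch (assumed toy setup): forward-backward with unnormalized weights
# recovers the same posterior state marginals as brute-force summation.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
K, T = 3, 5                      # number of hidden states, sequence length
pi = rng.random(K)               # unnormalized initial weights
A = rng.random((K, K))           # unnormalized transition weights (rows do not sum to 1)
B = rng.random((K, T))           # unnormalized emission weights B[j, t] for the observation at time t

def forward_backward(pi, A, B):
    """Scaled forward-backward; nowhere assumes pi, A, B are normalized."""
    K, T = B.shape
    alpha = np.zeros((T, K)); beta = np.zeros((T, K)); c = np.zeros(T)
    alpha[0] = pi * B[:, 0]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, t]          # forward recursion
        c[t] = alpha[t].sum(); alpha[t] /= c[t]          # per-step rescaling
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, t + 1] * beta[t + 1]) / c[t + 1]   # backward recursion
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)      # posterior marginals q(z_t)

# Brute-force marginals of q(z) proportional to pi[z_1] B[z_1,1] * prod_t A[z_{t-1},z_t] B[z_t,t].
gamma_bf = np.zeros((T, K))
for z in product(range(K), repeat=T):
    w = pi[z[0]] * B[z[0], 0]
    for t in range(1, T):
        w *= A[z[t - 1], z[t]] * B[z[t], t]
    for t in range(T):
        gamma_bf[t, z[t]] += w
gamma_bf /= gamma_bf.sum(axis=1, keepdims=True)

print(np.allclose(forward_backward(pi, A, B), gamma_bf))  # True
```

The agreement holds because the forward‐backward recursions are just sum-product message passing on a chain; the normalization of the marginals is performed at the end, so any positive weights yield the correct normalized posteriors.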