
Language Model Adaptation Based on Topic Probability of Latent Dirichlet Allocation
Author(s) - Jeon HyungBae, Lee SooYoung
Publication year - 2016
Publication title - ETRI Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.295
H-Index - 46
eISSN - 2233-7326
pISSN - 1225-6463
DOI - 10.4218/etrij.16.0115.0499
Subject(s) - latent Dirichlet allocation, computer science, latent variable, artificial intelligence, cluster analysis, non-negative matrix factorization, language model, domain adaptation, benchmark (surveying), topic model, adaptation (eye), latent semantic analysis, machine learning, pattern recognition (psychology), speech recognition, matrix decomposition, eigenvalues and eigenvectors, physics, geodesy, quantum mechanics, classifier (uml), optics, geography
Two new methods are proposed for the unsupervised adaptation of a language model (LM) using only a single sentence, for automatic transcription tasks. In the training phase, the training documents are clustered by latent Dirichlet allocation (LDA), and a domain-specific LM is then trained for each cluster. In the test phase, the adapted LM is formed as a linear mixture of the trained domain-specific LMs. Unlike previous adaptation methods, the proposed methods fully utilize the trained LDA model to estimate the mixture weights assigned to the domain-specific LMs; the clustering and the weight estimation therefore share one consistent, trained LDA model, which makes them reliable. In continuous speech recognition benchmark tests, the proposed methods outperform other unsupervised LM adaptation methods based on latent semantic analysis, non-negative matrix factorization, and LDA with n-gram counting.
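As a rough illustration of the two-phase scheme described in the abstract (not the authors' implementation), the following Python sketch uses gensim's LdaModel to cluster training documents, trains one LM per cluster, and then mixes those LMs with weights taken from the LDA topic posterior of a single test sentence. The toy corpus, the topic count K, and the add-one-smoothed unigram LM (standing in for the paper's n-gram LMs) are all assumptions made for brevity.

```python
from collections import Counter, defaultdict

from gensim.corpora import Dictionary
from gensim.models import LdaModel

# --- Training phase ---------------------------------------------------
# Toy corpus standing in for the training documents (hypothetical data).
docs = [
    "the match ended in a late goal".split(),
    "the striker scored a goal in the match".split(),
    "the bank raised its interest rate".split(),
    "the central bank cut the rate again".split(),
]

dictionary = Dictionary(docs)
bows = [dictionary.doc2bow(d) for d in docs]

K = 2  # number of LDA topics / LM clusters (an assumed setting)
lda = LdaModel(bows, num_topics=K, id2word=dictionary, passes=50,
               random_state=0)

# Cluster each training document under its most probable topic, then
# train one domain-specific LM per cluster (add-one-smoothed unigram
# here, in place of the paper's n-gram LMs).
clusters = defaultdict(list)
for doc, bow in zip(docs, bows):
    topic, _ = max(lda.get_document_topics(bow), key=lambda t: t[1])
    clusters[topic].extend(doc)

V = len(dictionary)  # vocabulary size, used for add-one smoothing

def train_unigram_lm(tokens):
    counts, total = Counter(tokens), len(tokens)
    return lambda w: (counts[w] + 1) / (total + V)

domain_lms = {k: train_unigram_lm(toks) for k, toks in clusters.items()}

# --- Test phase --------------------------------------------------------
def adapted_prob(word, test_sentence):
    """P(word) under the adapted LM: a linear mixture of the trained
    domain-specific LMs, with mixture weights taken directly from the
    LDA topic posterior of the single test sentence."""
    bow = dictionary.doc2bow(test_sentence)
    weights = dict(lda.get_document_topics(bow, minimum_probability=0.0))
    return sum(weights.get(k, 0.0) * lm(word)
               for k, lm in domain_lms.items())

print(adapted_prob("goal", "the fans cheered the goal".split()))
```

Because the same trained LDA model drives both the clustering and the weight estimation, the test-time mixture weights are consistent with how the domain-specific LMs were formed, which is the property the abstract highlights.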