A Comparative Study of the Performance of HMM, DNN, and RNN based Speech Synthesis Systems Trained on Very Large Speaker-Dependent Corpora
Author(s) -
Xin Wang,
Shinji Takaki,
Junichi Yamagishi
Publication year - 2016
Publication title -
Edinburgh Research Explorer (University of Edinburgh)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.21437/ssw.2016-20
Subject(s) - hidden Markov model , speech recognition , computer science , recurrent neural network , parametric statistics , speech synthesis , trajectory , artificial neural network , artificial intelligence , training set , mathematics , statistics
This study investigates the impact of the amount of training data on the performance of parametric speech synthesis systems. A Japanese corpus with 100 hours of audio recordings of a male voice and another corpus with 50 hours of recordings of a female voice were used to train systems based on the hidden Markov model (HMM), feed-forward deep neural network (DNN), and recurrent neural network (RNN). The results show that the improvement in the accuracy of the predicted spectral features gradually diminishes as the amount of training data increases. In contrast to these "diminishing returns" in the spectral stream, however, the accuracy of the F0 trajectories predicted by the HMM and RNN systems tends to benefit consistently from increasing amounts of training data.
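The abstract compares systems by the accuracy of their predicted acoustic features. For F0 trajectories, a common way to quantify this in parametric speech synthesis evaluations is the root-mean-square error (RMSE) over frames that are voiced in both the reference and the prediction. The sketch below is illustrative only; the helper name and the contour values are hypothetical and not taken from the paper.

```python
import math

def f0_rmse_hz(ref_f0, pred_f0):
    """RMSE (in Hz) between reference and predicted F0 contours,
    computed over frames voiced in both (F0 > 0), a common
    evaluation setup in parametric speech synthesis.
    Hypothetical helper, not from the paper."""
    pairs = [(r, p) for r, p in zip(ref_f0, pred_f0) if r > 0 and p > 0]
    if not pairs:
        raise ValueError("no commonly voiced frames")
    return math.sqrt(sum((r - p) ** 2 for r, p in pairs) / len(pairs))

# Example: frame-level F0 contours in Hz (0.0 marks unvoiced frames)
ref = [0.0, 120.0, 125.0, 130.0, 0.0]
pred = [0.0, 118.0, 126.0, 127.0, 0.0]
print(round(f0_rmse_hz(ref, pred), 2))  # RMSE over the three voiced frames
```

A lower RMSE on held-out utterances would indicate a more accurate F0 trajectory, which is the kind of measurement behind the paper's claim that F0 prediction keeps improving with more training data.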