Modeling Labial Coarticulation with Bidirectional Gated Recurrent Networks and Transfer Learning
Author(s) -
Théo Biasutto-Lervat,
Sara Dahmani,
Slim Ouni
Publication year - 2019
Publication title -
Interspeech 2019
Language(s) - English
Resource type - Conference proceedings
DOI - 10.21437/interspeech.2019-2097
Subject(s) - coarticulation , transfer of learning , computer science , transfer (computing) , artificial intelligence , speech recognition , vowel , parallel computing
In this study, we investigate how to learn labial coarticulation in order to generate a sparse representation of the face from speech. To do so, we experiment with a sequential deep learning model, bidirectional gated recurrent networks, which have achieved good results on the articulatory inversion problem and should therefore be able to handle coarticulation effects. As acquiring audiovisual corpora is expensive and time-consuming, we designed our solution to counteract the lack of data. First, we used phonetic information (phoneme labels and their durations) as input to ensure speaker independence; second, we experimented with pretraining strategies to reach acceptable performance. We demonstrate how a careful initialization of the last layers of the network can greatly ease training and help handle coarticulation effects. This initialization relies on dimensionality reduction strategies, which inject knowledge of a useful latent representation of the visual data into the network. We focused on two data-driven tools (PCA and autoencoders) and one hand-crafted latent space from the animation community, blendshape decomposition. We trained and evaluated the model on a corpus of 4 hours of French speech and obtained an average RMSE close to 1.3 mm.
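The PCA-based initialization described in the abstract can be illustrated with a minimal sketch: fit a PCA on the visual frames, then use its mean and components as the (warm-started) weights of a linear output layer, so the recurrent stack only has to predict low-dimensional latent coefficients. All names, shapes, and the toy data below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def pca_init(visual_frames, n_components):
    """Fit a PCA on visual frames via SVD.

    visual_frames: (n_frames, n_coords) array of facial landmark coordinates.
    Returns the data mean and the top principal directions
    (n_components, n_coords), suitable as output-layer weights.
    """
    mean = visual_frames.mean(axis=0)
    centered = visual_frames - mean
    # Rows of vt are the orthonormal principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

# Toy data standing in for the visual corpus: 200 frames, 30 coordinates.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 30))
mean, components = pca_init(frames, n_components=8)

# Hypothetical output layer: map latent coefficients z back to landmark
# space as y = z @ components + mean. Initializing (or freezing) the last
# layer with these values injects the latent structure before training.
z = rng.normal(size=(1, 8))   # e.g. projection of the final hidden state
y = z @ components + mean     # reconstructed sparse face representation
print(y.shape)
```

An autoencoder variant would replace `components` and `mean` with the decoder's learned weights and bias; the blendshape variant would use the hand-crafted basis instead of a data-driven one.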