Open Access
The Relevance of the Source Language in Transfer Learning for ASR
Author(s) - Nils Hjortnæs, Niko Partanen, Michael Rießler, Francis M. Tyers
Publication year - 2021
Language(s) - English
DOI - 10.33011/computel.v1i.959
Subject(s) - computer science , transfer of learning , natural language processing , linguistics , artificial intelligence , documentation , russian language
This study presents new experiments on Zyrian Komi speech recognition. We use DeepSpeech to train ASR models from a language documentation corpus that contains both contemporary and archival recordings. Earlier studies have shown that transfer learning from English and using a domain-matching Komi language model both improve the character error rate (CER) and word error rate (WER). In this study we experiment with transfer learning from a more relevant source language, Russian, and with including Russian text in the language model construction. The motivation is that Russian and Komi are contemporary contact languages, and Russian is regularly present in the corpus. We found that despite the close contact between Russian and Komi, the larger English speech corpus yielded better performance when used as the source language, which suggests that the size of the source corpus matters more than its linguistic relevance. Additionally, we report that merely updating the DeepSpeech version improved the CER by 3.9% compared to the earlier studies, an important step in the development of Komi ASR.
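As a concrete illustration of the transfer-learning setup described above, the sketch below launches a DeepSpeech fine-tuning run from a pre-trained source-language checkpoint. This is a minimal sketch, assuming Mozilla DeepSpeech's standard training script and flags; all file paths, checkpoint directories, and hyperparameter values are hypothetical placeholders, not the paper's actual configuration.

```python
import subprocess

# Fine-tune DeepSpeech on Komi starting from a source-language checkpoint
# (English or Russian). Paths and hyperparameters are illustrative
# placeholders; the flags are standard Mozilla DeepSpeech training options.
cmd = [
    "python", "DeepSpeech.py",
    "--train_files", "komi/train.csv",              # Komi clips + transcripts
    "--dev_files", "komi/dev.csv",
    "--test_files", "komi/test.csv",
    "--alphabet_config_path", "komi/alphabet.txt",  # Komi (Cyrillic) alphabet
    "--load_checkpoint_dir", "checkpoints/english", # pre-trained source model
    "--save_checkpoint_dir", "checkpoints/komi-from-english",
    "--drop_source_layers", "1",  # re-initialize the output layer, since the
                                  # source and target alphabets differ
    "--epochs", "30",
    "--learning_rate", "0.0001",
    "--scorer_path", "komi/kenlm.scorer",  # external LM (see the next sketch)
]
subprocess.run(cmd, check=True)
```

Pointing --load_checkpoint_dir at a Russian checkpoint instead reproduces the study's comparison between the two source languages.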
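The language-model side of the experiment, including Russian text alongside Komi, corresponds to DeepSpeech's usual external-scorer pipeline: train an n-gram model with KenLM, then package it as a scorer. The sketch below assumes KenLM's lmplz and build_binary tools and DeepSpeech's generate_scorer_package are on PATH; the corpus file names, vocabulary file, and alpha/beta weights are placeholders.

```python
import subprocess

# Pool Komi and Russian text into one LM training corpus, mirroring the
# study's experiment of adding Russian to the language model. File names
# are illustrative placeholders.
with open("lm_corpus.txt", "w", encoding="utf-8") as out:
    for path in ("komi_text.txt", "russian_text.txt"):
        with open(path, encoding="utf-8") as f:
            out.write(f.read())

# Train a 5-gram KenLM model and convert it to the binary format.
subprocess.run("lmplz -o 5 < lm_corpus.txt > lm.arpa", shell=True, check=True)
subprocess.run(["build_binary", "lm.arpa", "lm.binary"], check=True)

# Package the model as a DeepSpeech scorer, usable via --scorer_path above.
subprocess.run([
    "generate_scorer_package",
    "--alphabet", "komi/alphabet.txt",
    "--lm", "lm.binary",
    "--vocab", "vocab.txt",       # word list extracted from lm_corpus.txt
    "--package", "komi/kenlm.scorer",
    "--default_alpha", "0.93",    # placeholder decoder weights
    "--default_beta", "1.18",
], check=True)
```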
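Finally, the CER and WER figures reported above are standard edit-distance metrics rather than anything specific to this study. The following is a minimal sketch from their usual definitions: Levenshtein distance over characters or words, divided by the reference length.

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (insertions, deletions, substitutions)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution
            ))
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance / reference length."""
    return levenshtein(list(reference), list(hypothesis)) / len(reference)

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / len(ref_words)
```

For example, cer("кывбур", "кывбыр") is 1/6, reflecting a single substituted character.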