Open Access
TEnet: target speaker extraction network with accumulated speaker embedding for automatic speech recognition
Author(s) -
Li Wenjie,
Zhang Pengyuan,
Yan Yonghong
Publication year - 2019
Publication title -
Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
ISSN - 1350-911X
DOI - 10.1049/el.2019.1228
Subject(s) - speaker recognition, speech recognition, computer science, speaker diarisation, embedding, feature extraction, pattern recognition (psychology), artificial intelligence
It is challenging to perform automatic speech recognition when multiple people talk simultaneously. To solve this problem, speaker-aware selective methods have been proposed to extract the speech of the target speaker, relying on auxiliary speaker characteristics provided by an anchor (a clean audio sample of the target speaker). However, the extraction performance depends on the duration and quality of the anchors, and is therefore unstable. To address this limitation, the authors propose a target speaker extraction network (TEnet) which applies a robust speaker embedding to extract the target speech from the speech mixture. To obtain more stable speaker characteristics during training, the speaker embeddings are accumulated over all the speech of each target speaker, rather than taken from a single anchor. At test time, very few anchors suffice for decent extraction performance. Results show that the TEnet trained with accumulated embeddings achieves better performance and robustness than the single-anchored TEnet. Moreover, to exploit the potential of the speaker embedding, the authors propose to feed the extracted target speech back as the anchor and train a feedback TEnet, which outperforms the short-anchored baseline by 22.5% on word error rate and 15.5% on signal-to-distortion ratio.
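The abstract does not give implementation details, but the accumulated-embedding idea can be sketched as follows: pool the embeddings of all available utterances of a target speaker into one conditioning vector, instead of relying on a single (possibly short or noisy) anchor. This is a minimal illustrative sketch; the GRU encoder, feature/embedding dimensions, and mean pooling are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn


class SpeakerEncoder(nn.Module):
    """Toy speaker encoder: maps one utterance (frames, feat_dim) to a fixed-size embedding.
    Stands in for whatever embedding network TEnet actually uses (not specified in the abstract)."""

    def __init__(self, feat_dim=40, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, emb_dim, batch_first=True)

    def forward(self, utterance):            # utterance: (1, frames, feat_dim)
        _, hidden = self.rnn(utterance)      # hidden: (1, 1, emb_dim)
        emb = hidden.squeeze(0).squeeze(0)   # (emb_dim,)
        return emb / emb.norm()              # length-normalise


def accumulated_embedding(encoder, utterances):
    """Average per-utterance embeddings of one target speaker.

    Illustrates the 'accumulated speaker embedding' idea: during training the
    speaker representation is pooled over all speech of that speaker, which is
    more stable than the embedding of a single anchor.
    """
    with torch.no_grad():
        embs = [encoder(u.unsqueeze(0)) for u in utterances]
    emb = torch.stack(embs).mean(dim=0)
    return emb / emb.norm()


# Usage: three utterances of the same (synthetic) speaker, 40-dim features, varying length.
encoder = SpeakerEncoder()
utts = [torch.randn(n, 40) for n in (120, 300, 80)]
spk_emb = accumulated_embedding(encoder, utts)        # stable conditioning vector, shape (128,)
single_anchor_emb = encoder(utts[0].unsqueeze(0))     # less stable single-anchor variant
```

The pooled vector would then condition the extraction network; the feedback TEnet described in the abstract would additionally re-encode the extracted target speech and use it as a new anchor.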
