Open Access
Speech De-identification with Deep Neural Networks
Author(s) -
Ádám Fodor,
László Kopácsi,
Zoltán Ádám Milacski,
András Lőrincz
Publication year - 2021
Publication title -
Acta Cybernetica
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.143
H-Index - 18
eISSN - 2676-993X
pISSN - 0324-721X
DOI - 10.14232/actacyb.288282
Subject(s) - computer science, utterance, speech recognition, artificial neural network, cloud computing, distortion (music), identification (biology), mel frequency cepstrum, feature (linguistics), speech synthesis, artificial intelligence, feature extraction, telecommunications, linguistics, amplifier, philosophy, botany, bandwidth (computing), biology, operating system
Cloud-based speech services are powerful practical tools, but exposing speech to the Internet raises important legal concerns about speaker privacy. We propose a deep neural network solution that removes personal characteristics from human speech by converting it to the voice of a Text-to-Speech (TTS) system before sending the utterance to the cloud. The network learns to transcode sequences of vocoder parameters, together with their delta and delta-delta features, from human speech to those of the TTS engine. We evaluated several TTS systems, vocoders, and audio alignment techniques. We measured the performance of our method by (i) comparing the results of speech recognition on the de-identified utterances with the original texts, (ii) computing the Mel-Cepstral Distortion between the aligned TTS and transcoded sequences, and (iii) questioning human participants in A-not-B, 2AFC, and 6AFC tasks. Our approach achieves the level of quality required by diverse applications.
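The Mel-Cepstral Distortion (MCD) mentioned in evaluation step (ii) is a standard dB-scale distance between two time-aligned mel-cepstral sequences. A minimal NumPy sketch of the conventional formulation is below; the exact variant used in the paper (coefficient range, alignment method) may differ, and the function name and the convention of dropping the 0th (energy) coefficient are assumptions for illustration.

```python
import numpy as np

def mel_cepstral_distortion(mcep_ref: np.ndarray, mcep_test: np.ndarray) -> float:
    """Mean Mel-Cepstral Distortion in dB between two time-aligned
    mel-cepstral sequences of shape (num_frames, num_coeffs).

    Assumes the sequences are already aligned frame-by-frame (e.g. via
    dynamic time warping). The 0th coefficient (overall energy) is
    excluded, as is conventional for MCD.
    """
    if mcep_ref.shape != mcep_test.shape:
        raise ValueError("sequences must be aligned to the same shape")
    diff = mcep_ref[:, 1:] - mcep_test[:, 1:]
    # Per-frame MCD in dB: (10 / ln 10) * sqrt(2 * sum of squared differences)
    mcd_per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(np.mean(mcd_per_frame))
```

Identical sequences yield an MCD of 0 dB; larger values indicate that the transcoded speech deviates more from the aligned TTS reference.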
