Open Access
Hidden variability subspace learning for adaptation of deep neural networks
Author(s) - Fernando S., Sethu V., Ambikairajah E.
Publication year - 2018
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/el.2017.4027
Subject(s) - subspace topology , robustness (evolution) , computer science , artificial neural network , artificial intelligence , adaptation (eye) , deep neural networks , pattern recognition (psychology) , speech recognition , noise (video) , set (abstract data type) , machine learning
This Letter proposes a deep neural network (DNN) adaptation method, herein referred to as the hidden variability subspace (HVS) method, to achieve improved robustness under diverse acoustic environments arising from differences in conditions, e.g. speaker, channel, duration and environmental noise. In the proposed approach, a set of condition‐dependent parameters is estimated to adapt the hidden layer weights of the DNN within the HVS, reducing the condition mismatch. These condition‐dependent parameters are then connected to various layers through a new set of adaptively trained weights. The authors evaluate the proposed hidden variability learning method on a language identification task and show that significant performance gains can be obtained by discriminatively estimating a set of adaptation parameters to compensate for the mismatch in the trained model.
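To make the subspace-adaptation idea in the abstract concrete, the sketch below shows one generic way condition-dependent parameters can shift a hidden layer's weights within a low-dimensional subspace. The Letter's exact formulation is not reproduced here; the basis matrices `B`, the coefficients `alpha`, and the layer sizes are all illustrative assumptions, not the authors' published equations.

```python
import numpy as np

# Illustrative sketch (not the Letter's exact method): the hidden layer's
# condition-independent weights W are offset by a linear combination of
# subspace basis matrices B_k, weighted by condition-dependent
# coefficients alpha_k estimated per acoustic condition.

rng = np.random.default_rng(0)

d_in, d_out, k = 8, 4, 2                   # layer sizes; k = subspace dimension
W = rng.standard_normal((d_in, d_out))     # condition-independent weights
B = rng.standard_normal((k, d_in, d_out))  # hidden variability subspace bases (assumed form)
alpha = rng.standard_normal(k)             # condition-dependent coefficients

def hidden_layer(x, W, B, alpha):
    """Forward pass with subspace-adapted weights: W + sum_k alpha_k * B_k."""
    W_adapted = W + np.tensordot(alpha, B, axes=1)  # contract over the k axis
    return np.tanh(x @ W_adapted)

x = rng.standard_normal((3, d_in))  # a batch of 3 input vectors
h = hidden_layer(x, W, B, alpha)
print(h.shape)  # (3, 4)
```

In this kind of scheme only `alpha` (k values per condition) needs re-estimation at adaptation time, which is what keeps the adapted parameter count small relative to the full weight matrix.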
