I-vector features and deep neural network modeling for language recognition
Author(s) -
Wei Wang,
Wenjie Song,
Chen Chen,
Zhaoxin Zhang,
Yi Xin
Publication year - 2019
Publication title -
Procedia Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.334
H-Index - 76
ISSN - 1877-0509
DOI - 10.1016/j.procs.2019.01.181
Subject(s) - overfitting, computer science, NIST, dropout (neural networks), artificial intelligence, task (project management), baseline (sea), artificial neural network, feature (linguistics), feature vector, language model, support vector machine, machine learning, pattern recognition (psychology), deep learning, deep neural networks, speech recognition, linguistics, oceanography, philosophy, management, economics, geology
We combine the Total Variability algorithm with deep learning to perform the language recognition task. The Total Variability algorithm compensates for channel and speaker variability across languages, while deep learning methods offer stronger nonlinear modeling ability than traditional statistical models. In this paper, i-vector features are extracted using the Total Variability algorithm, and a fully connected neural network is trained on them; the dropout strategy is also used to suppress overfitting. The experimental results show that the new system outperforms the baseline system on the NIST LRE 2007 corpus.
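The classifier described in the abstract (i-vector input, fully connected layers, dropout, softmax over target languages) can be sketched as a forward pass in NumPy. This is a minimal illustration, not the authors' implementation: the 400-dimensional i-vector, the 512-unit hidden layers, and the 14 output classes (the number of NIST LRE 2007 target languages) are assumed values for the sketch.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dropout(x, rate, rng, training):
    # Inverted dropout: zero units at train time and rescale the rest,
    # so inference uses the network unchanged (training=False is a no-op).
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def mlp_forward(x, params, rate=0.5, training=False, rng=None):
    # Fully connected network: ReLU + dropout on hidden layers,
    # softmax on the output layer.
    rng = rng or np.random.default_rng(0)
    h = x
    for W, b in params[:-1]:
        h = dropout(relu(h @ W + b), rate, rng, training)
    W, b = params[-1]
    return softmax(h @ W + b)

rng = np.random.default_rng(0)
# Assumed layer sizes: 400-dim i-vector -> two 512-unit hidden layers
# -> 14 language classes.
dims = [400, 512, 512, 14]
params = [(rng.standard_normal((d_in, d_out)) * 0.01, np.zeros(d_out))
          for d_in, d_out in zip(dims[:-1], dims[1:])]

# A batch of 8 random "i-vectors" stands in for real extracted features.
probs = mlp_forward(rng.standard_normal((8, 400)), params)
print(probs.shape)                           # (8, 14)
print(np.allclose(probs.sum(axis=1), 1.0))   # True: rows are distributions
```

In practice the i-vectors would come from a Total Variability extractor trained on sufficient statistics from a UBM, and the weights would be learned by backpropagation with dropout enabled (`training=True`); the sketch only shows the inference-time shape of the model.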