Open Access
Language Recognition via Sparse Coding
Author(s) -
Youngjune Gwon,
William M. Campbell,
Douglas Sturim,
H. T. Kung
Publication year - 2016
Publication title - Interspeech 2016
Language(s) - English
Resource type - Conference proceedings
DOI - 10.21437/interspeech.2016-881
Subject(s) - discriminative model , computer science , neural coding , k svd , artificial intelligence , sparse approximation , speech recognition , nist , pattern recognition (psychology) , utterance , maximum a posteriori estimation , language model , feature learning , coding (social sciences) , machine learning , maximum likelihood , mathematics , statistics
Abstract - Spoken language recognition requires a series of signal processing steps and learning algorithms to model distinguishing characteristics of different languages. In this paper, we present a sparse discriminative feature learning framework for language recognition. We use sparse coding, an unsupervised method, to compute efficient representations for spectral features from a speech utterance while learning basis vectors for language models. Differentiated from existing approaches, we introduce a maximum a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the effectiveness of our approach using the NIST LRE 2015 dataset.
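The core operation the abstract describes, computing a sparse representation of a spectral feature vector over a learned dictionary of basis vectors, can be sketched as an L1-regularized least-squares problem. Below is a minimal illustration using ISTA (iterative soft-thresholding), one standard solver for this objective; it is not the paper's pipeline (the authors use K-SVD-style dictionary learning and MAP adaptation), and the dictionary here is random rather than learned from speech.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA.

    x : (d,) feature vector; D : (d, k) dictionary, columns = basis vectors.
    Returns the sparse coefficient vector a of shape (k,).
    """
    L = np.linalg.norm(D, ord=2) ** 2   # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)        # gradient of the smooth term
        z = a - grad / L                # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy setup: 20-dim "spectral" features, 50-atom overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms, as is conventional
x = 2.0 * D[:, 3] + 0.01 * rng.standard_normal(20)  # signal close to atom 3
a = ista_sparse_code(x, D)
```

On this toy signal the recovered code is sparse and concentrates its weight on atom 3, which is the sense in which sparse coding yields an "efficient representation": a few active basis vectors summarize the input frame.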
