Open Access
Using Highway Connections to Enable Deep Small‐footprint LSTM‐RNNs for Speech Recognition
Author(s) -
CHENG Gaofeng,
LI Xin,
YAN Yonghong
Publication year - 2019
Publication title -
Chinese Journal of Electronics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.267
H-Index - 25
eISSN - 2075-5597
pISSN - 1022-4653
DOI - 10.1049/cje.2018.11.008
Subject(s) - recurrent neural network , footprint , computer science , memory footprint , speech recognition , long short term memory , sequence (biology) , artificial intelligence , artificial neural network , geology , paleontology , biology , genetics , operating system
Long short-term memory RNNs (LSTM-RNNs) have shown great success in the Automatic speech recognition (ASR) field and have become the state-of-the-art acoustic models for time-sequence modeling tasks. However, it is still difficult to train deep LSTM-RNNs while keeping the number of parameters small. We use highway connections between the memory cells of adjacent layers to train small-footprint highway LSTM-RNNs (HLSTM-RNNs), which are deeper and thinner than conventional LSTM-RNNs. Experiments on the Switchboard (SWBD) corpus indicate that we can train thinner and deeper HLSTM-RNNs that have fewer parameters than conventional 3-layer LSTM-RNNs while achieving a lower Word error rate (WER). Compared with small-footprint LSTM-RNNs of similar size, small-footprint HLSTM-RNNs show a greater reduction in WER.
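To make the architecture concrete, the sketch below shows one way to realize a highway connection between the memory cells of adjacent LSTM layers, as described in the abstract. This is a minimal PyTorch illustration, not the authors' code: the class names (HighwayLSTMCell, HighwayLSTMStack), the depth-gate parameterization, and the layer sizes are assumptions, and the gate here is a simplified variant (the paper's exact formulation may also condition the gate on the recurrent cell state).

```python
# Minimal sketch (assumed PyTorch formulation, not the paper's exact code):
# each layer's cell state receives a gated "highway" contribution from the
# cell state of the layer below, letting gradients bypass the nonlinearity
# and making deep, thin stacks trainable.
import torch
import torch.nn as nn


class HighwayLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        # Depth (highway) gate: decides how much of the lower layer's
        # cell state flows directly into this layer's cell state.
        self.depth_gate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, state, c_below=None):
        h, c = self.lstm(x, state)
        if c_below is not None:
            d = torch.sigmoid(self.depth_gate(torch.cat([x, c_below], dim=-1)))
            c = c + d * c_below  # highway connection between memory cells
        return h, c


class HighwayLSTMStack(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, num_layers: int):
        super().__init__()
        sizes = [input_size] + [hidden_size] * (num_layers - 1)
        self.layers = nn.ModuleList(
            HighwayLSTMCell(s, hidden_size) for s in sizes
        )

    def step(self, x, states):
        # One time step: pass the frame up the stack, threading each
        # layer's cell state into the depth gate of the layer above.
        c_below, new_states = None, []
        for layer, (h, c) in zip(self.layers, states):
            h, c = layer(x, (h, c), c_below)
            new_states.append((h, c))
            c_below, x = c, h
        return x, new_states


# Hypothetical usage: one step through a 6-layer, 256-unit "thin" stack.
stack = HighwayLSTMStack(input_size=40, hidden_size=256, num_layers=6)
states = [(torch.zeros(8, 256), torch.zeros(8, 256)) for _ in range(6)]
frame = torch.randn(8, 40)  # batch of 8 acoustic feature frames
out, states = stack.step(frame, states)
```

Because the highway term is additive on the cell states, error signals can propagate directly through the depth gates across many layers, which is what allows these networks to be made deeper and thinner without growing the parameter count.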
