Open Access
Simultaneous neural machine translation with a reinforced attention mechanism
Author(s) - Lee YoHan, Shin JongHun, Kim YoungKil
Publication year - 2021
Publication title - ETRI Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.295
H-Index - 46
eISSN - 2233-7326
pISSN - 1225-6463
DOI - 10.4218/etrij.2020-0358
Subject(s) - computer science , machine translation , artificial intelligence , latency , sentence , transfer based machine translation , example based machine translation , telecommunications
To translate in real time, a simultaneous translation system must decide when to stop reading source tokens and generate target tokens for the partial source sentence read up to that point. However, conventional attention-based neural machine translation (NMT) models cannot produce translations with adequate latency in online scenarios because they wait until a source sentence is complete before computing the alignment between source and target tokens. To address this issue, we propose a reinforcement learning (RL)-based attention mechanism, the reinforced attention mechanism, which allows a neural translation model to train the stopping criterion and a partial translation model jointly. The proposed attention mechanism comprises two modules, one to ensure translation quality and the other to control latency. Unlike previous RL-based simultaneous translation systems, which learn the stopping criterion from a fixed NMT model, the two modules can be trained jointly with a novel reward function. In our experiments, the proposed model achieves better translation quality than previous models at comparable latency.
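To make the quality/latency trade-off concrete, here is a minimal, self-contained sketch of the kind of decision process the abstract describes: an agent alternates READ (consume a source token) and WRITE (emit a target token) actions, and a scalar reward balances translation quality against latency. Every name below (the policy feature, the quality proxy, the latency weight, the perturbation-based update) is an illustrative assumption, not the paper's actual architecture or reward function.

```python
import math
import random

# Toy read/write agent for simultaneous translation. A real system would
# score WRITE actions with the NMT model's log-likelihood and update the
# policy with REINFORCE or an actor-critic method; here a simple proxy
# and an evolution-strategies-style update stand in for both.

LAMBDA = 0.3   # assumed weight of the latency term in the reward
STEP = 0.05    # learning rate for the toy policy update

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def write_prob(theta, n_read, n_written, src_len, tgt_len):
    """Probability of WRITE given how far generation lags behind reading."""
    lag = n_read / src_len - n_written / tgt_len
    return sigmoid(theta[0] * lag + theta[1])

def run_episode(theta, src_len=10, tgt_len=10):
    """Roll out one sentence; return the quality-minus-latency reward."""
    n_read, n_written, reward = 0, 0, 0.0
    while n_written < tgt_len:
        if n_read < src_len and random.random() >= write_prob(
                theta, n_read, n_written, src_len, tgt_len):
            n_read += 1  # READ one more source token
            continue
        # WRITE: the quality proxy rewards emitting with more source
        # context; the latency term charges each emitted token for the
        # amount of source consumed before it (average-lagging style).
        reward += math.log(max(n_read, 1) / src_len)
        reward -= LAMBDA * (n_read / src_len)
        n_written += 1
    return reward

def train(steps=300):
    """Crude antithetic-perturbation update of the two policy weights."""
    theta = [0.0, 0.0]
    for _ in range(steps):
        eps = [random.gauss(0.0, 1.0) for _ in theta]
        r_plus = run_episode([t + 0.1 * e for t, e in zip(theta, eps)])
        r_minus = run_episode([t - 0.1 * e for t, e in zip(theta, eps)])
        for i in range(len(theta)):
            theta[i] += STEP * (r_plus - r_minus) * eps[i]
    return theta

if __name__ == "__main__":
    random.seed(0)
    before = sum(run_episode([0.0, 0.0]) for _ in range(100)) / 100
    theta = train()
    after = sum(run_episode(theta) for _ in range(100)) / 100
    print(f"avg reward: before {before:.3f} -> after {after:.3f}")
```

In this sketch LAMBDA sets the trade-off the abstract's two modules negotiate: a larger latency weight pushes the learned policy to write earlier on less source context, while a smaller one lets it wait for more of the sentence before emitting tokens.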
