Open Access
Aligning Translation-Specific Understanding to General Understanding in Large Language Models
Author(s)
Yichong Huang,
Xiaocheng Feng,
Baohang Li,
Chengpeng Fu,
Wenshuai Huo,
Ting Liu,
Bing Qin
Publication year: 2024
Abstract
Although large language models (LLMs) have shown surprising language understanding and generation capabilities, they have yet to achieve a revolutionary advancement in the field of machine translation. One potential cause of this limited performance is the misalignment between the translation-specific understanding and the general understanding inside LLMs. To align the translation-specific understanding with the general one, we propose a novel translation process, xIoD (Cross-Lingual Interpretation of Difficult Words), which explicitly incorporates the general understanding of the content that incurs inconsistent understanding to guide the translation. Specifically, xIoD performs cross-lingual interpretation of the difficult-to-translate words and enhances the translation with the generated interpretations. Furthermore, we reframe external quality estimation (QE) tools to tackle the challenges of xIoD in detecting difficult words and generating helpful interpretations. We conduct experiments on our self-constructed benchmark ChallengeMT, which consists of cases on which multiple SOTA translation systems consistently underperform. Experimental results show the effectiveness of xIoD, which improves COMET by up to +3.85.
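The abstract describes a two-pass process: draft a translation, detect difficult-to-translate words, generate cross-lingual interpretations for them, and re-translate with those interpretations as guidance. The following is a minimal sketch of that control flow; all function names, the long-word heuristic standing in for the QE-based detector, and the note format are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of an xIoD-style pipeline. The QE-driven difficult-word
# detector and the LLM interpretation call are replaced by trivial stand-ins.

def detect_difficult_words(source: str) -> list[str]:
    """Placeholder for the QE-based detector in the paper; here we
    crudely flag long words as likely difficult to translate."""
    return [w.strip(".,") for w in source.split() if len(w) > 8]

def interpret(word: str, source: str, target_lang: str) -> str:
    """Stand-in for an LLM call that produces a cross-lingual
    interpretation (gloss) of `word` in the target language."""
    return f"gloss({word}->{target_lang})"

def xiod_translate(source: str, target_lang: str, translate) -> str:
    """Two-pass translation with a pluggable `translate` function
    (an MT system or LLM in practice)."""
    draft = translate(source, target_lang)
    hard = detect_difficult_words(source)
    if not hard:
        return draft  # nothing flagged as difficult; keep the draft
    notes = "; ".join(interpret(w, source, target_lang) for w in hard)
    # Second pass: translation guided by the generated interpretations.
    return translate(f"{source} [Notes: {notes}]", target_lang)
```

In practice the detector would compare QE scores rather than word lengths, and the interpretations would be injected via a prompt template rather than appended inline; the sketch only shows where each component sits in the loop.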
Language(s): English
