
Acceptability of machine-translated content: A multi-language evaluation by translators and end-users
Author(s) - Sheila Castilho, Sharon O'Brien
Publication year - 2018
Publication title - Linguistica Antverpiensia, New Series - Themes in Translation Studies
Language(s) - English
Resource type - Journals
ISSN - 2295-5739
DOI - 10.52034/lanstts.v16i0.430
Subject(s) - usability, terminology, spelling, computer science, machine translation, German, end user, quality (philosophy), natural language processing, artificial intelligence, multimedia, world wide web, human–computer interaction, linguistics, philosophy, epistemology
As machine translation (MT) is used increasingly in the translation industry, there is a corresponding increase in the need to understand MT quality and, in particular, its impact on end-users. To date, little work has been carried out to investigate how end-users receive MT output and, ultimately, how acceptable they find it. This article reports on research conducted to address that gap. End-users of instructional content machine-translated from English into German, Simplified Chinese and Japanese took part in a usability experiment. Part of this experiment involved giving feedback on the acceptability of raw machine-translated content and of lightly post-edited (PE) versions of the same content. In addition, a quality review was carried out in collaboration with an industry partner and experienced translation quality reviewers. The translation quality-assessment (TQA) results from the translators mirror the end-users' usability and satisfaction results insofar as light post-editing both increased the usability and acceptability of the instructions and raised reported satisfaction. Nonetheless, the raw MT content also received good scores, especially for terminology, country standards and spelling.