Open Access
Item-based assessment of translation competence: Chimera of objectivity versus prospect of reliable measurement
Author(s) - June Eyckmans, Philippe Anckaert
Publication year - 2018
Publication title - Linguistica Antverpiensia, New Series - Themes in Translation Studies
Language(s) - English
Resource type - Journals
ISSN - 2295-5739
DOI - 10.52034/lanstts.v16i0.436
Subject(s) - summative assessment , competence (human resources) , objectivity (philosophy) , psychology , quality assessment , computer science , formative assessment , evaluation methods , social psychology , epistemology , mathematics education , engineering , reliability engineering , philosophy
In the course of the past decade, scholars in Translation Studies have repeatedly expressed the need for more empirical research on translation assessment. Notwithstanding the many pleas for “objectivity” that have been voiced in the literature, the issue of reliability remains unaddressed. Although there is no consensus on the best method for measuring the quality of human or machine translations, it is clear that in both cases measurement error will need to be accounted for. This is especially the case in high-stakes situations, such as assessments that lead to the certification of translation competence. In this article we focus on the summative assessment of translation competence in an educational context. We explore the psychometric quality of two assessment methods: the CDI method (Eyckmans, Anckaert, & Segers, 2009) and the PIE method (Kockaert & Segers, 2014, 2017; Segers & Kockaert, 2016). In our study, the reliability of both methods is compared empirically by scoring the same set of translations (n > 100) according to each method.
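The abstract does not describe the statistical procedure used to compare the two methods. As a purely illustrative sketch, item-based dichotomous scores (one score per preselected item per translation, in the spirit of CDI- or PIE-style scoring) can be compared for reliability with a classical internal-consistency index such as Cronbach's alpha. Everything below is an assumption for illustration only: the number of items, the simulated score matrices, and the use of Cronbach's alpha are not taken from the article.

# Illustrative sketch only; not the authors' analysis code.
# Assumes a (translations x items) matrix of dichotomous scores per method
# and compares internal-consistency reliability via Cronbach's alpha.

import numpy as np


def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (translations x items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    n_translations = 120  # the study reports n > 100; 120 is a placeholder
    n_items = 20          # hypothetical number of scored items per translation

    # Simulated dichotomous item scores (1 = acceptable, 0 = error) for two
    # hypothetical scoring methods applied to the same set of translations.
    ability = rng.normal(size=(n_translations, 1))
    method_a = (ability + rng.normal(scale=1.0, size=(n_translations, n_items)) > 0).astype(int)
    method_b = (ability + rng.normal(scale=1.5, size=(n_translations, n_items)) > 0).astype(int)

    print(f"alpha, method A: {cronbach_alpha(method_a):.2f}")
    print(f"alpha, method B: {cronbach_alpha(method_b):.2f}")

In this toy setup the noisier method yields a lower alpha, which is the kind of contrast a reliability comparison between two scoring methods is designed to reveal; the actual study should be consulted for the indices the authors report.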