Book chapter, Year: 2014

Post-edited quality, post-editing behaviour and human evaluation: a case study

Ilse Depraetere
  • Role: Author
Nathalie de Sutter
  • Role: Author
Arda Tezcan
  • Role: Author

Abstract

In this chapter, we address the correlation between post-editing similarity and the human evaluation of machine translation. We investigated whether a high similarity score corresponded to a high quality score, and vice versa, in the sample compiled for the purposes of this case study. A group of translation trainees post-edited a sample of MT output, and a number of these informants also rated the output for quality on a five-point scale. We calculated Pearson's correlation coefficient as well as the relative standard deviation per informant for each activity, with a view to determining which of the two evaluation methods was the more reliable measurement given the project settings. Our sample also enabled us to test whether MT enhances the productivity of translation trainees, and whether the quality of post-edited sentences differs from that of sentences translated 'from scratch'.
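For illustration, a minimal sketch of the two statistics named in the abstract, Pearson's correlation coefficient and the relative standard deviation per informant. The sample scores below are hypothetical, not data from the study.

    import numpy as np

    # Hypothetical scores for the same MT segments: post-editing
    # similarity (0-1) and one informant's quality ratings (1-5).
    # These values are illustrative only, not taken from the study.
    similarity = np.array([0.82, 0.65, 0.91, 0.40, 0.73])
    quality = np.array([4, 3, 5, 2, 4], dtype=float)

    # Pearson's correlation coefficient between the two evaluations.
    r = np.corrcoef(similarity, quality)[0, 1]

    # Relative standard deviation (coefficient of variation, in %)
    # of the informant's ratings: sample std dev over the mean.
    rsd = np.std(quality, ddof=1) / np.mean(quality) * 100

    print(f"Pearson's r: {r:.3f}")
    print(f"RSD of quality ratings: {rsd:.1f}%")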

No file deposited

Dates and versions

halshs-01060447, version 1 (03-09-2014)

Identifiers

  • HAL Id: halshs-01060447, version 1

Cite

Ilse Depraetere, Nathalie de Sutter, Arda Tezcan. Post-edited quality, post-editing behaviour and human evaluation: a case study. In: Laura Winther-Balling, Lucia Specia, Michael Carl, Michel Simard, Sharon O'Brien (eds.), Post-editing of Machine Translation: Processes and Applications, Cambridge Scholars Publishing, pp. 78-108, 2014. ⟨halshs-01060447⟩