Issue #106 – Informative Manual Evaluation of Machine Translation Output

05 Nov 2020


Author: Méabh Sloane, MT Researcher @ Iconic

Introduction

In manual evaluation of machine translation (MT) output, there is a continuous search for balance between the time and effort that manual evaluation requires and the significant results it achieves. As MT technology continues to improve and evolve, the need for human evaluation increases, yet it is often disregarded because of its demanding nature. This need is heightened by the prevailing tendency of automatic metrics to underestimate the quality of Neural MT (NMT) output (Shterionov et al., 2018). NMT evaluation is not a newcomer to our blog series and has featured in a number of Neural MT Weekly issues, e.g. #104,
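For context, automatic metrics such as BLEU score an MT system by comparing its output against human reference translations, typically via n-gram overlap. The sketch below, a minimal illustration using the sacrebleu library (not referenced in the post itself) and invented example sentences, shows how such a score is computed, and hints at why it can undervalue good output: a hypothesis may be a perfectly acceptable translation yet share few surface n-grams with a single reference.

```python
# A minimal sketch of automatic MT evaluation with the sacrebleu library,
# shown for contrast with manual evaluation. The sentences are invented
# examples, not data from the post.
import sacrebleu

# MT system outputs (hypotheses) and the corresponding human references.
hypotheses = [
    "The cat sits on the mat.",
    "He did not go to school today.",
]
references = [
    "The cat is sitting on the mat.",
    "He didn't go to school today.",
]

# corpus_bleu takes the list of hypotheses and a list of reference streams
# (one stream per reference set), and returns a corpus-level BLEU object.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```

Both hypotheses above are adequate translations, yet neither matches its reference exactly, so the overlap-based score is penalised; a human judge would likely rate both as correct, which is the gap manual evaluation is meant to close.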


To finish reading, please visit the source site.