Articles About Natural Language Processing

Issue #92 – The Importance of References in Evaluating MT Output

30 Jul 20 – Author: Dr. Carla Parra Escartín, Global Program Manager @ Iconic

Over the years, BLEU has become the “de facto standard” for Machine Translation automatic evaluation. However, despite being the metric referenced in virtually all MT research papers, it is equally criticized for not providing a reliable evaluation of the MT output. In today’s blog post we look at the work done by Freitag et al. […]
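
To make the reference-dependence concrete, here is a minimal, self-contained sketch of sentence-level BLEU (modified n-gram precision plus brevity penalty, no smoothing). The function names are illustrative, not from any library, and real evaluations use corpus-level BLEU with standardized tokenization:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    # Multiset of all n-grams in the token list
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference (smoothing omitted)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        # Counter & Counter clips candidate counts by reference counts
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # any zero precision collapses the geometric mean
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_avg)
```

A perfectly adequate output can still score 0 if it merely paraphrases the reference — e.g. `bleu("the kids play in the park", "children are playing at the park")` shares almost no n-grams with the reference — which is exactly why the choice of references matters so much.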

Read more

Issue #91 – Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation

23 Jul 20 – Author: Dr. Chao-Hong Liu, Machine Translation Scientist @ Iconic

Unsupervised Machine Translation (MT) is the technology we use to train MT engines when parallel data is not used, at least not directly. We have discussed some interesting approaches to unsupervised MT in several previous posts (Issues #11 and #28), as well as some related topics (Issues #6, #25 and #66). Training MT engines requires the existence […]

Read more

Issue #90 – Tangled up in BLEU: Reevaluating how we evaluate automatic metrics in Machine Translation

16 Jul 20 – Author: Dr. Karin Sim, Machine Translation Scientist @ Iconic

Automatic metrics play a crucial role in Machine Translation (MT). They are used to tune MT systems during the development phase, to determine which model is best, and subsequently to assess the accuracy of the final translations. Currently, the performance of these automatic metrics is judged by seeing how well they […]
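
Meta-evaluation of metrics typically comes down to a correlation statistic between metric scores and human judgments over the same set of systems or segments. As a hedged sketch (the function below is a plain Pearson correlation, not the paper's exact protocol):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists,
    e.g. metric scores vs. human judgments for the same MT systems."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A correlation near 1 means the metric ranks systems much like humans do; the paper under discussion re-examines how robust this judgment procedure really is.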

Read more

Issue #89 – Norm-Based Curriculum Learning for Neural Machine Translation

09 Jul 20 – Author: Dr. Patrik Lambert, Senior Machine Translation Scientist @ Iconic

Neural machine translation (NMT) models benefit from large amounts of data. However, in high-resource conditions, training these models is computationally expensive. In this post we take a look at a paper by Liu et al. (2020) aiming to improve training efficiency by introducing a curriculum learning method based on the word embedding norm. The […]
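
The core idea — use the norm of word embeddings as a difficulty signal and feed the model easy examples first — can be sketched in a few lines. This is a simplified illustration, not the paper's exact formulation (Liu et al. also derive a model-competence schedule, which is omitted here), and all names are made up for the example:

```python
import math

def curriculum_order(sentences, embeddings):
    """Order training sentences easy-to-hard by mean word-embedding norm.
    `embeddings` maps each word to a vector (a plain list of floats here);
    words missing from the table are skipped."""
    def difficulty(sentence):
        norms = [math.sqrt(sum(x * x for x in embeddings[w]))
                 for w in sentence.split() if w in embeddings]
        return sum(norms) / len(norms) if norms else 0.0
    # Low-norm (frequent, "easy") sentences come first in the curriculum
    return sorted(sentences, key=difficulty)
```

Batches would then be drawn from a growing prefix of this ordering as training progresses.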

Read more

Issue #88 – Multilingual Denoising Pre-training for Neural Machine Translation

02 Jul 20 – Author: Dr. Chao-Hong Liu, Machine Translation Scientist @ Iconic

Pre-training has been used in many natural language processing (NLP) tasks with significant improvements in performance. In neural machine translation (NMT), pre-training is mostly applied to building blocks of the whole system, e.g. the encoder or decoder. In a previous post (#70), we compared several approaches using pre-training with masked language models. In this post, we take a closer […]

Read more

Issue #87 – YiSi – A Unified Semantic MT Quality Evaluation and Estimation Metric

25 Jun 20 – Author: Dr. Karin Sim, Machine Translation Scientist @ Iconic

Automatic evaluation is an issue that has long troubled machine translation (MT): how do we evaluate how good the MT output is? Traditionally, BLEU has been the “go-to” metric, as it is simple to use across language pairs. However, it is overly simplistic, evaluating string matches against a single reference translation. More sophisticated metrics […]

Read more

Issue #86 – Neural MT with Levenshtein Transformer

18 Jun 20 – Author: Dr. Patrik Lambert, Senior Machine Translation Scientist @ Iconic

The standard Transformer model is autoregressive, meaning that the prediction of each target word is based on the predictions for the previous words. The output is generated from left to right, with no chance to revise a past decision and without considering future predictions of the words to the right of the current word. In a recent post (#82), […]
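
The left-to-right, no-revision constraint described above is easy to see in a decoding loop. Here is a toy sketch of greedy autoregressive decoding (`step_fn` stands in for the model's next-token prediction; the Levenshtein Transformer instead edits a draft with insertion and deletion operations, which this loop cannot do):

```python
def greedy_decode(step_fn, bos, eos, max_len=50):
    """Greedy autoregressive decoding: each token is chosen given only the
    prefix generated so far, and once emitted it is never revised."""
    out = [bos]
    for _ in range(max_len):
        nxt = step_fn(out)  # most probable next token given the prefix
        out.append(nxt)
        if nxt == eos:
            break
    return out
```

Because `out` only ever grows at the right end, a wrong early choice propagates through the rest of the sentence — the limitation that edit-based models aim to remove.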

Read more

Issue #85 – Applying Terminology Constraints in Neural MT

11 Jun 20 – Author: Dr. Chao-Hong Liu, Machine Translation Scientist @ Iconic

Maintaining consistent terminology translation in Neural Machine Translation (NMT) is a more challenging task than in Statistical MT (SMT). In this post, we review a method proposed by Dinu et al. (2019) to train NMT systems to use custom terminology. Translation with terminology constraints: applying terminology constraints to translation may appear to be an easy task. It is a […]
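
One family of approaches, including the one reviewed here, injects the desired target term directly into the source sequence so the model learns to copy it through. The sketch below is a simplified illustration of that idea: the `<trg>`/`</trg>` markers and the one-word term dictionary are assumptions for the example (Dinu et al. actually mark the injected terms with source factors rather than literal tags):

```python
def annotate_source(src, term_dict):
    """Inline-annotate a source sentence with desired target terms:
    each matching source term is kept and its target translation is
    appended between markers the model is trained to copy verbatim."""
    tokens = []
    for tok in src.split():
        if tok in term_dict:
            tokens += [tok, "<trg>", term_dict[tok], "</trg>"]
        else:
            tokens.append(tok)
    return " ".join(tokens)
```

At training time the parallel data is annotated the same way, so the decoder learns that material between the markers should appear unchanged in the output.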

Read more

Issue #84 – Are Neural Machine Translation Systems Good Estimators of Quality?

04 Jun 20 – Author: Prof. Lucia Specia, Professor of Natural Language Processing, Imperial College London (also affiliated with ADAPT/Dublin City University and the University of Sheffield)

This week, we are delighted to have a guest post from Prof. Lucia Specia of Imperial College London, latterly of the University of Sheffield and our own alma mater, Dublin City University. Prof. Specia is one of the world’s preeminent experts on the topic of […]

Read more

Issue #83 – Selective Attention for Context-aware Neural Machine Translation

21 May 20 – Author: Dr. Karin Sim, Machine Translation Scientist @ Iconic

One of the next frontiers for Neural Machine Translation (NMT) is moving beyond the sentence-by-sentence translation that is currently the norm to context-aware, document-level translation. Including extra-sentential context means that discourse elements (such as expressions referring back to previously mentioned entities) can be integrated, resulting in better translation of references, for example. Currently the engine has no […]

Read more