Issue #105 – Improving Non-autoregressive Neural Machine Translation with Monolingual Data
30 Oct 2020
Author: Dr. Chao-Hong Liu, Machine Translation Scientist @ Iconic
Introduction
In the training of neural machine translation (NMT) systems, determining how to take advantage of monolingual data to improve the performance of the resulting trained models is a challenge. In this post, we review an approach proposed by Zhou and Keung (2020) within the framework of non-autoregressive (NAR) NMT. Their results show that the resulting NAR models achieve performance comparable to or better than that of state-of-the-art non-iterative NAR models.
NAR-MT with Monolingual Data
Zhou and Keung (2020) see the “NAR model as a function approximator of an existing AR (autoregressive) model”. The input of the approach is an AR model and source sentences. Firstly, the AR