Issue #73 – Mixed Multi-Head Self-Attention for Neural MT

12 Mar 2020 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic
Self-attention is a key component of the Transformer, a state-of-the-art neural machine translation architecture. In the Transformer, self-attention is divided into multiple heads to allow the system to independently attend to information from different representation subspaces. Recently it has been shown that some redundancy occurs in the multiple heads. In this post, we take a look at approaches which ensure […]
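As a rough illustration of the head-splitting idea (not the specific mixed-head approach reviewed in the post), the sketch below divides a hidden representation into several heads and runs scaled dot-product attention independently in each subspace; the function name, tensor sizes and the omission of projection weights are our simplifications.

```python
import torch
import torch.nn.functional as F

def multi_head_self_attention(x, n_heads):
    """Toy multi-head self-attention: each head attends in its own subspace.

    x: (seq_len, d_model) activations. Projection matrices are omitted for
    brevity, so the same tensor serves as queries, keys and values per head.
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    # (n_heads, seq_len, d_head): one representation subspace per head
    heads = x.view(seq_len, n_heads, d_head).transpose(0, 1)
    scores = heads @ heads.transpose(1, 2) / d_head ** 0.5
    context = F.softmax(scores, dim=-1) @ heads               # attend per head
    return context.transpose(0, 1).reshape(seq_len, d_model)  # concatenate heads

out = multi_head_self_attention(torch.randn(5, 16), n_heads=4)
```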

Read more

Issue #68 – Incorporating BERT in Neural MT

07 Feb 2020 | Author: Raj Patel, Machine Translation Scientist @ Iconic
BERT (Bidirectional Encoder Representations from Transformers) has shown impressive results in various Natural Language Processing (NLP) tasks. However, how to effectively apply BERT in Neural MT has not been fully explored. In general, BERT is fine-tuned for downstream NLP tasks. For Neural MT, a pre-trained BERT model is used to initialise the encoder in an encoder-decoder architecture. In this post we […]
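As a hedged sketch of the initialisation idea mentioned above (not the particular method the post reviews), one way to seed an NMT encoder from a pre-trained BERT-style checkpoint is to copy every parameter whose name and shape match; the helper name and the matching-by-name rule are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def init_encoder_from_bert(encoder: nn.Module, bert_state: dict) -> int:
    """Copy pre-trained BERT parameters into an NMT encoder where they fit.

    `bert_state` is assumed to be a state dict whose parameter names line up
    with the encoder's (plausible when both follow the same Transformer layout).
    Returns how many tensors were actually copied.
    """
    enc_state = encoder.state_dict()
    copied = 0
    for name, tensor in bert_state.items():
        if name in enc_state and enc_state[name].shape == tensor.shape:
            enc_state[name] = tensor.clone()
            copied += 1
    encoder.load_state_dict(enc_state)
    return copied
```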

Read more

Issue #66 – Neural Machine Translation Strategies for Low-Resource Languages

23 Jan 2020
This week we are pleased to welcome the newest member of our scientific team, Dr. Chao-Hong Liu. In this, his first post with us, he’ll give his views on two specific MT strategies, namely pivot MT and zero-shot MT. While we have covered these topics in previous ‘Neural MT Weekly’ blog posts (Issue #54, Issue #40), these are topics that Chao-Hong has recently worked on prior to joining […]
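For readers new to the first of these strategies, pivot MT simply chains two translation systems through a resource-rich intermediate language; the function names below are invented purely to make that concrete.

```python
def pivot_translate(sentence, src_to_pivot, pivot_to_tgt):
    """Pivot MT sketch: source -> pivot (often English) -> target,
    using two separately trained systems when no direct parallel data exists."""
    return pivot_to_tgt(src_to_pivot(sentence))
```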

Read more

Issue #64 – Neural Machine Translation with Byte-Level Subwords

13 Dec 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic
In order to limit vocabulary size, most neural machine translation engines are based on subwords. In some settings, character-based systems are even better (see Issue #60). However, rare characters in noisy data or character-based languages can unnecessarily take up vocabulary slots and limit the vocabulary’s compactness. In this post we take a look at an alternative, proposed by Wang et al. (2019), […]
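A quick, hedged illustration of why operating on bytes keeps the base vocabulary compact (the example string and counts are ours, not from the paper): any text decomposes into at most 256 distinct byte symbols, whereas raw characters in, say, Chinese draw on thousands of distinct symbols.

```python
text = "机器翻译"                          # "machine translation" in Chinese
char_symbols = list(text)                  # 4 symbols from a very large character inventory
byte_symbols = list(text.encode("utf-8"))  # 12 symbols, each in the range 0..255
print(len(char_symbols), len(byte_symbols))  # 4 12
# Byte-level subword learning then builds merges over this fixed 256-symbol base,
# so rare characters never claim dedicated vocabulary slots.
```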

Read more

Issue #62 – Domain Differential Adaptation for Neural MT

28 Nov 2019 | Author: Raj Patel, Machine Translation Scientist @ Iconic
Neural MT models are data hungry and domain sensitive, and it is nearly impossible to obtain a good amount (>1M segments) of training data for every domain we are interested in. One common strategy is to align the statistics of the source and target domain, but the drawback of this approach is that the statistics of the different domains are inherently […]

Read more

Issue #60 – Character-based Neural Machine Translation with Transformers

14 Nov 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic
We saw in Issue #12 of this blog how character-based recurrent neural networks (RNNs) could outperform (sub)word-based models if the network is deep enough. However, character sequences are much longer than subword ones, which is not easy to deal with in RNNs. In this post, we discuss how the Transformer architecture changes the situation for character-based models. We take a […]
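To make the length problem concrete (the segmentation below is an invented BPE-style split, used only for illustration): character sequences are several times longer than subword ones, and self-attention cost grows quadratically with sequence length.

```python
sentence = "internationalization matters"
subwords = ["intern", "ation", "al", "ization", "_", "matters"]  # illustrative split
chars = list(sentence)
print(len(subwords), len(chars))  # 6 vs 29 tokens
# Self-attention compares every position with every other one, so the character-level
# model here handles roughly (29 / 6) ** 2, i.e. about 23 times more attention pairs.
```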

Read more

Issue #52 – A Selection from ACL 2019

19 Sep 2019 | Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic
The Conference of the Association for Computational Linguistics (ACL) took place this summer, and over the past few months we have reviewed a number of preprints (see Issues 28, 41 and 43) which were published at ACL. In this post, we take a look at three more papers presented at the conference that we found particularly interesting, in the context of […]

Read more

Issue #48 – It’s all French Belgian Fries to me… or The Art of Multilingual e-Disclosure (Part II)

01 Aug 2019 | Author: Jérôme Torres Lozano, Director of Professional Services, Inventus
This is the second of a two-part guest post from Jérôme Torres Lozano, the Director of Professional Services at Inventus, who shares his perspective on The Art of Multilingual e-Disclosure. In Part I, we learned about the challenges of languages in e-disclosure. In this post he will discuss language identification and translation options available […]

Read more

Issue #47 – It’s all French Belgian Fries to me, or The Art of Multilingual e-Disclosure (Part I)

25 Jul 2019 | Author: Jérôme Torres Lozano, Director of Professional Services, Inventus
Over the next two weeks, we’re taking a slightly different approach on the blog. In today’s article, the first of two parts, we will hear from Jérôme Torres-Lozano of Inventus, a user of Iconic’s Neural MT solutions for e-discovery. He gives us an entertaining look at his experiences of the challenges of language, […]

Read more

Issue #46 – Augmenting Self-attention with Persistent Memory

18 Jul 2019 | Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic
In Issue #32 we introduced the Transformer model as the new state-of-the-art in Neural Machine Translation. Subsequently, in Issue #41 we looked at some approaches that were aiming to improve upon it. In this post, we take a look at a significant change in the Transformer model, proposed by Sukhbaatar et al. (2019), which further improves its performance. Each Transformer layer consists of two types […]
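A minimal sketch of the core idea, assuming single-head attention and omitting projections (names and shapes are ours, not the paper's code): learned "persistent" key and value vectors are concatenated to the token keys and values before the usual scaled dot-product attention.

```python
import torch
import torch.nn.functional as F

def attention_with_persistent_memory(q, k, v, mem_k, mem_v):
    """q, k, v: (seq_len, d) token projections; mem_k, mem_v: (n_mem, d) learned
    persistent vectors shared across positions, attended to like extra tokens."""
    k_all = torch.cat([k, mem_k], dim=0)        # token keys + persistent keys
    v_all = torch.cat([v, mem_v], dim=0)        # token values + persistent values
    scores = q @ k_all.t() / q.size(-1) ** 0.5  # scaled dot-product attention
    return F.softmax(scores, dim=-1) @ v_all
```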

Read more