Issue #36 – Average Attention Network for Neural MT
9 May 2019 · Author: Dr. Rohit Gupta, Sr. Machine Translation Scientist @ Iconic

In Issue #32, we covered the Transformer model for neural machine translation, the current state of the art in neural MT. In this post we explore a technique presented by Zhang et al. (2018) that modifies the Transformer model and speeds up the translation process by 4-7 times across a range of different engines. Where is the bottleneck? In the […]
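The full post explains the details, but the core idea of the Average Attention Network in Zhang et al. (2018) is to replace the decoder's self-attention with a simple cumulative average over previous positions. Below is a minimal sketch of that averaging step, not the authors' exact implementation; the function name, shapes, and use of NumPy are assumptions for illustration.

```python
import numpy as np

def average_attention(Y):
    """Cumulative-average layer at the heart of the Average Attention
    Network (Zhang et al., 2018). Position j attends to all positions
    up to j with equal weight 1/j, so no query-key dot products are
    needed. Assumed shape: Y is (seq_len, d_model)."""
    # Running sum over time steps; dividing by the step index gives
    # the average of embeddings y_1 .. y_j at each position j.
    cumsum = np.cumsum(Y, axis=0)
    steps = np.arange(1, Y.shape[0] + 1)[:, None]
    return cumsum / steps
```

The speedup at decoding time comes from the fact that the average can be updated incrementally, g_j = ((j - 1) * g_{j-1} + y_j) / j, an O(1) step per output token, versus attending over all previous decoder states at every step in the standard Transformer.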