Issue #58 – Quantisation of Neural Machine Translation models
31 Oct 2019 · Author: Dr. Patrik Lambert, Machine Translation Scientist @ Iconic

When large amounts of training data are available, the quality of Neural MT engines increases with the size of the model. However, larger models mean decoding with more parameters, which makes the engine slower at test time. Improving the trade-off between model compactness and translation quality is an active research topic. One of the ways to achieve more compact models […]
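To illustrate the core idea behind quantisation (a sketch of the general technique, not Iconic's or any specific toolkit's implementation): post-training quantisation maps 32-bit float weights to low-precision integers, typically 8-bit, shrinking storage roughly 4x at the cost of a small rounding error. A minimal pure-Python sketch:

```python
def quantize_int8(weights):
    """Symmetric linear quantisation of float weights to int8.

    Toy illustration: picks a scale so the largest-magnitude weight
    maps to +/-127, then rounds each weight to the nearest integer step.
    Returns (quantized values in [-127, 127], scale).
    """
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid zero scale
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in q_weights]

weights = [0.51, -1.27, 0.003, 0.9]
q, s = quantize_int8(weights)
approx = dequantize(q, s)
# every recovered weight lies within one quantisation step of the original
assert all(abs(a - w) <= s for a, w in zip(approx, weights))
```

At inference time, the integer weights (plus one scale per tensor) replace the float weights, so memory traffic and, on suitable hardware, arithmetic cost both drop, which is what recovers decoding speed in larger models.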