Machine Translation Weekly 44: Tangled up in BLEU (and not blue)

For quite a while, machine translation has been approached as a behaviorist simulation. You don't know what a good translation is? It does not matter: you can just simulate what humans do. You don't know how to measure whether something is a good translation? Again, it does not matter: you can simulate what humans do. Things seem easy. We learn how to translate from tons of training data that were translated by humans. When we want to measure how well the […]

Read more

Machine Translation Weekly 45: Deep Encoder, Shallow Decoder, and the Fall of Non-autoregressive models

Researchers concerned with machine translation speed have invented several methods that are supposed to significantly speed up translation while preserving as much of the translation quality of the state-of-the-art models as possible. The methods are usually based on generating as many words as possible in parallel. State-of-the-art models do not generate in parallel; they are autoregressive, which means that they generate words one by one and condition decisions about the next word on the previously generated words. On the […]
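To make the sequential dependency concrete, here is a minimal sketch of greedy autoregressive decoding. The `next_token_logits` interface is a hypothetical stand-in for a trained decoder (not any particular library's API), with a canned output so the example actually runs; a real NMT model would score a learned vocabulary instead.

```python
from typing import Dict, List

BOS, EOS = "<s>", "</s>"


def next_token_logits(prefix: List[str]) -> Dict[str, float]:
    # Hypothetical stand-in for a trained model: it scores candidate
    # next tokens given the previously generated words. Here it just
    # walks through a canned output to keep the sketch runnable.
    canned = ["hello", "world", EOS]
    step = len(prefix) - 1  # number of tokens generated so far
    return {canned[min(step, len(canned) - 1)]: 1.0}


def greedy_decode(max_len: int = 20) -> List[str]:
    prefix = [BOS]
    for _ in range(max_len):
        # Each word is chosen conditioned on ALL previously generated
        # words. This sequential dependency is what makes the model
        # autoregressive and prevents generating the words in parallel.
        logits = next_token_logits(prefix)
        best = max(logits, key=logits.get)
        prefix.append(best)
        if best == EOS:
            break
    return prefix[1:-1]  # strip BOS and EOS


print(greedy_decode())  # ['hello', 'world']
```

Non-autoregressive methods break exactly this loop: they try to predict all output positions at once instead of feeding each generated word back into the next decision.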

Read more

Machine Translation Weekly 43: Dynamic Programming Encoding

One of the narratives people (including me) love to associate with neural machine translation is that we got rid of all linguistic assumptions about text and let the neural network learn its own way, independent of what people think about language. It sounds cool; it almost gives a science-fiction feeling. What I think we really do is move our assumptions about language from the hard constraints of discrete representations into the soft constraints of inductive biases that we […]

Read more