Machine Translation Weekly 81: Unsupervised MT and Parallel Sentence Mining

This week, I am going to briefly comment on a paper that uses unsupervised machine translation to improve unsupervised scoring for parallel data mining. The title of the paper is Unsupervised Multilingual Sentence Embeddings for Parallel Corpus Mining; its authors are from Charles University and the University of the Basque Country, and it will appear at this year’s ACL Student Research Workshop. The idea of the paper is quite simple. They took XLM, a BERT-like model that was trained for 100 […]
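
To make the mining step concrete, here is a minimal sketch of scoring candidate sentence pairs by cosine similarity of pre-computed multilingual sentence embeddings. The function name, array layout, and threshold are my own illustration, not the paper’s exact scoring recipe.

```python
import numpy as np

def mine_parallel(src_emb, tgt_emb, threshold=0.8):
    """Score all source-target pairs by cosine similarity and keep,
    for each source sentence, the best target candidate above a threshold.
    src_emb, tgt_emb: 2D arrays, one sentence embedding per row."""
    # Normalize rows so a plain dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                       # full similarity matrix
    best = sim.argmax(axis=1)               # best target index per source
    scores = sim[np.arange(len(src)), best]
    return [(i, int(j), float(s))
            for i, (j, s) in enumerate(zip(best, scores)) if s >= threshold]
```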

Read more

Machine Translation Weekly 80: Deontological ethics and MT

At this year’s NAACL, there will be a paper that views NLP from the perspective of deontological ethics and promotes an unusual and very insightful take on NLP ethics. The title of the paper is Case Study: Deontological Ethics in NLP; it was written by authors from CMU and discusses several NLP applications from the perspective of deontological ethics. Usually, ethics in NLP is discussed from the consequentialist perspective. In this view, the morality of an action is […]

Read more

My most amazing Makefile for CL papers

Automating stuff that does not need to be automated at all is one of my favorite procrastination activities. As an experienced (and most of the time unsuccessful) submitter to conferences organized by ACL (ACL, NAACL, EACL, EMNLP), I have spent a lot of procrastination time improving the Makefiles that compile the papers. Here are a few commented snippets from the Makefiles. Hopefully, someone will find them useful. The normal LaTeX stuff: I compile the paper using latexmk. main.pdf: $(FILES) latexmk -pdflatex="$(LATEX) %O […]

Read more

Machine Translation Weekly 79: More context in MT

The lack of broader context is one of the main problems in machine translation and in NLP in general. People have tried various methods, with actually quite mixed results. A recent preprint from Unbabel introduces an unusual quantification of context-awareness and, based on that, some training improvements. The title of the paper is Measuring and Increasing Context Usage in Context-Aware Machine Translation, and it will be presented at ACL 2021. The paper measures how well informed the model is about the […]
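
Judging only from this teaser, the quantification boils down to asking how much the model’s predictions actually change when the context is taken away. The sketch below shows one generic proxy for that, the gap in per-token negative log-likelihood with and without context; the model.nll() scoring interface is a hypothetical placeholder of mine, not the paper’s actual code.

```python
def context_usage(model, batch):
    """Proxy for context usage: how much better (in NLL) the model
    predicts the target when it also sees the context sentences.
    `model.nll(src, tgt, ctx=None)` is a hypothetical scorer returning
    the average per-token negative log-likelihood."""
    nll_without = model.nll(batch.src, batch.tgt, ctx=None)
    nll_with = model.nll(batch.src, batch.tgt, ctx=batch.context)
    # Positive values mean the context genuinely informs the predictions.
    return nll_without - nll_with
```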

Read more

Machine Translation Weekly 78: Multilingual Hate Speech Detection

This week, I will comment on the preprint Cross-lingual hate speech detection based on multilingual domain-specific word embeddings by authors from the University of Chile. The preprint evaluates the possibility of cross-lingual transfer of models for hate speech detection, i.e., training a model in one language and testing it in a different language. Hate speech detection is a particularly tough task for model transfer because many of the words have a different meaning, or at least different connotations, when used […]
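
The transfer protocol itself is simple to state: fit a classifier on embeddings from one language and evaluate it on embeddings from another. A minimal sketch, assuming the multilingual embeddings are already computed and aligned across languages (the variable names and classifier choice are mine):

```python
from sklearn.linear_model import LogisticRegression

def cross_lingual_transfer(train_emb, train_labels, test_emb, test_labels):
    """Train a hate speech classifier on embeddings from one language
    and evaluate it on another. Works only insofar as the two
    embedding spaces are actually aligned."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_emb, train_labels)          # e.g., English training data
    return clf.score(test_emb, test_labels)   # e.g., Spanish test accuracy
```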

Read more

Machine Translation Weekly 77: Reference-free Evaluation

This week, I will comment on a paper by authors from the University of Maryland and Google Research on reference-free evaluation of machine translation, which seems quite disturbing to me and suggests there is a lot about current MT models that we still don’t quite understand. The title of the paper is “Assessing Reference-Free Peer Evaluation for Machine Translation” and it will be published at this year’s NAACL conference. The standard evaluation of machine translation uses reference translations: translations that […]
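
To illustrate the general idea of reference-free scoring (not the paper’s specific recipe, which this teaser does not spell out), one can use an independent model’s length-normalized log-probability of a candidate translation given only the source; the model.logprob() helper below is a hypothetical interface.

```python
def reference_free_score(model, source, candidate):
    """Score a translation without any reference: use an independent
    multilingual MT model's length-normalized log-probability of the
    candidate given the source. `model.logprob` is a hypothetical
    helper returning the sum of target token log-probabilities."""
    return model.logprob(source, candidate) / max(len(candidate.split()), 1)
```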

Read more

Machine Translation Weekly 76: Zero-shot MT with pre-trained encoder

Using a pre-trained multilingual representation as a universal encoder for machine translation might seem like an obvious thing to try: train a decoder into one target language using one or several source languages, and you get translation from 100 languages into the target language. This sounds great, but this is not how it works. (Or it works somehow, but not really well; I tried it myself.) Recently, I came across a preprint where the authors figured out how to do […]
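
For concreteness, this is roughly what the obvious setup looks like in PyTorch: a trainable Transformer decoder over a frozen pre-trained encoder. I assume a HuggingFace-style encoder whose forward pass returns .last_hidden_state; the sizes and layer counts are illustrative, not taken from the preprint.

```python
import torch
import torch.nn as nn

class ZeroShotMT(nn.Module):
    """Sketch: a decoder trained into one target language on top of a
    frozen pre-trained multilingual encoder (e.g., an XLM-R-style model)."""
    def __init__(self, encoder, vocab_size, dim=768):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # the multilingual encoder stays frozen
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=12,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        memory = self.encoder(src_ids).last_hidden_state  # frozen states
        tgt_len = tgt_ids.size(1)
        causal = torch.triu(  # forbid attending to future target positions
            torch.ones(tgt_len, tgt_len, dtype=torch.bool), diagonal=1)
        states = self.decoder(self.embed(tgt_ids), memory, tgt_mask=causal)
        return self.out(states)  # logits over the target vocabulary
```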

Read more

Machine Translation Weekly 75: Outbound Translation

This week, I will comment on a paper by my good old friends from Charles University, written in collaboration with the University of Edinburgh, the University of Sheffield, and the University of Tartu within the Bergamot project. The main goal of the project is to develop high-quality machine translation that runs locally in an internet browser and, unlike services such as Google Translate or Microsoft Translator, does not send any (potentially sensitive) data to any server. This is a very […]

Read more

Machine Translation Weekly 74: Architectures we will hear about in MT

This week, I would like to feature three recent papers with innovations in neural architectures that I think might become important in MT and multilingual NLP during the next year. But of course, I might be wrong: in MT Weekly 27, I self-assuredly claimed that the Reformer architecture would start an era of much larger models than we have now and would turn the attention of the community towards document-level problems, and it seems that this is not happening. CANINE: Tokenization-free […]

Read more

Machine Translation Weekly 73: Non-autoregressive MT with Latent Codes

Today, I will comment on a paper on non-autoregressive machine translation that shows a neat trick for increasing output fluency. The title of the paper is Non-Autoregressive Translation by Learning Target Categorical Codes; its authors are from several Chinese private and public institutions, and it will appear at this year’s NAACL conference. Unlike standard, so-called autoregressive encoder-decoder architectures that decode the output sequentially (and in theory in linear time), non-autoregressive models generate all outputs in parallel (and in theory in constant time, regardless […]
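
The contrast between the two decoding regimes is easy to show in code. The sketch below is a generic illustration of sequential versus parallel decoding, not the paper’s architecture; both model interfaces are hypothetical placeholders.

```python
import torch

def autoregressive_decode(model, src, bos_id, eos_id, max_len=100):
    """Sequential decoding: one target token per step, so the number of
    forward passes grows linearly with the output length."""
    ys = [bos_id]
    for _ in range(max_len):
        logits = model(src, torch.tensor([ys]))  # hypothetical interface
        next_id = int(logits[0, -1].argmax())
        ys.append(next_id)
        if next_id == eos_id:
            break
    return ys

def non_autoregressive_decode(model, src, tgt_len):
    """Parallel decoding: all target positions are predicted at once,
    so the number of forward passes is constant regardless of length."""
    logits = model(src, tgt_len)                 # hypothetical interface
    return logits.argmax(dim=-1).squeeze(0).tolist()
```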

Read more