Minimax in Python: Learn How to Lose the Game of Nim

You’ve gotten to know the steps of the minimax algorithm. In this section, you’ll implement minimax in Python. You’ll start by tailoring the algorithm directly to the game of Simple-Nim. Later, you’ll refactor your code to separate the core of the algorithm from the rules of the game, so that you can apply your minimax code to other games.

Implement a Nim-Specific Minimax Algorithm

Consider the same example as in the previous section: it’s Maximillian’s turn, and there are […]
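
To make the idea concrete before the full walkthrough, here is a minimal sketch of minimax for Simple-Nim, assuming the usual rules (take one, two, or three counters; whoever takes the last counter loses). The function name, scoring, and starting pile are illustrative assumptions, not the article's final code.

```python
# A minimal minimax sketch for Simple-Nim, assuming players may take
# 1-3 counters and the player who takes the last counter loses.
def minimax(state, max_turn):
    if state == 0:
        # The previous player took the last counter and lost, so the
        # player to move has won: +1 for Max, -1 for Min.
        return 1 if max_turn else -1

    # Score every legal move; Max picks the best score, Min the worst.
    scores = [
        minimax(state - take, max_turn=not max_turn)
        for take in (1, 2, 3)
        if take <= state
    ]
    return max(scores) if max_turn else min(scores)

print(minimax(6, max_turn=True))  # 1: Max can force a win from a pile of 6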

Read more

Highlights from Machine Translation and Multilinguality 02/2022

After 100 MT Weekly posts (which took me 130 weeks to write), I realized that weekly blogging is impossible alongside weekly teaching. So I decided to change the format and write monthly summaries of what I found most interesting in machine translation and multilinguality. This is the first issue, summarizing the interesting things that happened in February.

Exciting news about WMT

There will be some exciting changes in WMT competitions. WMT is an annual conference on machine translation […]

Read more

Highlights from Machine Translation and Multilinguality in March 2022

Here is a monthly summary of what I found most interesting on arXiv this month in machine translation and multilinguality. This month was the camera-ready deadline for ACL 2022, so many of the interesting papers were accepted to ACL.

Overlapping BPE

During training, BPE merges do not actually have to follow the simple objective of merging the most frequent token pair. In massively multilingual models, there is an imbalance between languages, and some of them got segmented almost down to […]
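
For context, here is a toy sketch of the standard greedy objective the paper departs from: count adjacent symbol pairs over a corpus and pick the most frequent one to merge. The corpus and symbols are made up for illustration.

```python
from collections import Counter

# One standard BPE training step on a toy character-level corpus:
# count adjacent symbol pairs, then merge the most frequent pair.
corpus = [list("lower"), list("lowest"), list("newer")]

pairs = Counter(
    (left, right)
    for word in corpus
    for left, right in zip(word, word[1:])
)
best = max(pairs, key=pairs.get)
print(best, pairs[best])  # ('w', 'e') 3 -- this pair would be merged first
```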

Read more

Highlights from Machine Translation and Multilinguality 04/2022

Another month is over, so here is my overview of what I found most interesting in machine translation and multilinguality.

Rotation ciphers as regularizers

A paper accepted to ACL 2022 from Simon Fraser University experiments with using rotation ciphers on the source side of MT as a data augmentation technique. They tested it in low-data scenarios, and it seems to work quite well, which actually strikes me as strange. It is just systematically replacing characters with different characters – […]
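
To illustrate what such an augmentation looks like, here is a hedged sketch of a rotation cipher applied to source text; the lowercase-ASCII alphabet handling and the offset are my assumptions, not the paper's exact setup.

```python
import string

def rotate(text: str, shift: int) -> str:
    """Systematically replace each letter with the one `shift` places later."""
    lower = string.ascii_lowercase
    table = str.maketrans(lower, lower[shift:] + lower[:shift])
    return text.lower().translate(table)

# An augmented source sentence keeps its structure but not its characters:
print(rotate("the cat sat", 13))  # "gur png fng"
```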

Read more

Highlights from Machine Translation and Multilinguality in May and June 2022

After a while, here is a dump of what I found most interesting on arXiv about machine translation and multilinguality, covering May and June of this year. Google Research published a pre-print of their NAACL paper: SCONES (Single-label Contrastive Objective for Non-Exclusive Sequences). The paper is about a simple trick: they replace the softmax with binary classifiers that have a sigmoid output and use the sum of binary cross-entropies as their loss function. It gets a slightly better BLEU and BLEURT score […]
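
Here is a rough sketch of that loss swap, not the exact SCONES implementation: treat each vocabulary item as an independent sigmoid binary classifier and sum the binary cross-entropies. The batch shape and vocabulary size are invented for illustration.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 32000)           # (batch, vocab) decoder scores
targets = torch.randint(0, 32000, (8,))  # gold next-token ids

# Standard single-label objective: softmax + cross-entropy.
ce_loss = F.cross_entropy(logits, targets)

# Sigmoid/BCE objective: one-hot targets, one binary classifier per
# vocabulary item, binary cross-entropies summed over the vocabulary.
one_hot = F.one_hot(targets, num_classes=32000).float()
bce_loss = F.binary_cross_entropy_with_logits(
    logits, one_hot, reduction="sum"
) / logits.size(0)

print(ce_loss.item(), bce_loss.item())
```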

Read more

Highlights from Machine Translation and Multilinguality in July 2022

Here is my summary of what I found worth reading on arXiv in the past month. A preprint from JHU studies zero-shot cross-lingual transfer using pretrained multilingual representations and comes to the conclusion that it is an under-specified optimization problem. In other words, with a multilingual representation model, there are potentially many solutions that are good for the source language, but only some of them are good for the target language. In practice, the solution is probably proper training […]

Read more

Highlights from Machine Translation and Multilinguality in September 2022

Here are my monthly highlights from papers on machine translation and multilinguality. A preprint from the Nara Institute of Science and Technology shows that target-language-specific fully connected layers in the Transformer decoder improve multilingual and zero-shot MT compared to the current practice of using a special token to indicate the target language. A very similar idea also appears in a preprint from Tianjin University, but in this case, they add language-specific parameters for the other part of the Transformer […]
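
Here is a minimal sketch of how I read that idea, not the paper's code: keep one fully connected block per target language and route each batch through the block for its target language instead of signaling the language with a special token. The class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class LangSpecificFFN(nn.Module):
    """One feed-forward block per target language in a Transformer decoder."""

    def __init__(self, d_model: int, d_ff: int, languages: list[str]):
        super().__init__()
        self.ffn = nn.ModuleDict({
            lang: nn.Sequential(
                nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
            )
            for lang in languages
        })

    def forward(self, x: torch.Tensor, target_lang: str) -> torch.Tensor:
        # Route the hidden states through the target language's block.
        return self.ffn[target_lang](x)

layer = LangSpecificFFN(d_model=512, d_ff=2048, languages=["de", "cs", "fr"])
hidden = torch.randn(8, 10, 512)  # (batch, length, d_model)
print(layer(hidden, target_lang="de").shape)  # torch.Size([8, 10, 512])
```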

Read more

FastText Algorithm

FastText is a lightweight, open-source framework for learning text representations and text classifiers, and it runs on common, generic hardware. It can be used for unsupervised or supervised learning of word vector representations. The library has attracted a lot of interest in the NLP community and can be a viable alternative to the gensim package, which includes word vectors and other features. FastText differs from word2vec-style word vectors, which take […]
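
For a quick impression, here is a small usage sketch with gensim's FastText implementation (gensim 4.x API assumed; the toy corpus is invented), showing the subword property that lets fastText embed words it never saw in training.

```python
from gensim.models import FastText

# Toy corpus: a list of pre-tokenized sentences.
sentences = [
    ["machine", "translation", "is", "fun"],
    ["fasttext", "learns", "subword", "vectors"],
]

# Train subword-aware word vectors on the toy corpus.
model = FastText(
    sentences=sentences, vector_size=50, window=3, min_count=1, epochs=10
)

# Because fastText composes vectors from character n-grams, it can
# produce a vector even for an out-of-vocabulary word:
print(model.wv["translations"].shape)  # (50,), despite the word being unseen
```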

Read more