Machine Translation Weekly 100: IGLUE as cool as igloo, multilingual and multimodal benchmark
This week I would like to feature a new multimodal-multilingual benchmark
called IGLUE, presented in a
pre-print that went out last Friday. The
authors are from many places around the world: University of Copenhagen, Mila –
Quebec Artificial Intelligence Institute, University of Cambridge, TU
Darmstadt, New York University, and McGill University.
Following the best practices from established multilingual benchmarks, the new
multimodal and multilingual benchmark evaluates zero-shot cross-lingual
transfer on multimodal tasks. Zero-shot cross-lingual transfer means that a
task-specific model (in this case, e.g., for visual question answering) is
trained on English data only. However, because the sentence representation is
(presumably) multilingual, the model should work in other languages as well.
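To make the setup concrete, here is a minimal text-only sketch of this kind of transfer: a classifier is fine-tuned on top of a multilingual encoder using English examples only and then applied directly to a sentence in another language. The model name, toy data, and training loop are my own illustrative assumptions; the benchmark itself evaluates pre-trained multimodal encoders, not this exact pipeline.

```python
# Sketch of zero-shot cross-lingual transfer: fine-tune a classification head
# on English data only, then evaluate directly on another language.
# (Text-only toy example with mBERT; IGLUE itself uses multimodal encoders.)
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"  # any multilingual encoder works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# English-only task data (hypothetical toy labels).
train_texts = ["The picture shows a dog.", "There is no animal in the image."]
train_labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few toy training steps
    batch = tokenizer(train_texts, padding=True, truncation=True, return_tensors="pt")
    loss = model(**batch, labels=train_labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot evaluation: a test sentence in a language never seen during task
# fine-tuning; the shared multilingual representation does the transfer.
model.eval()
test = tokenizer(["Na obrázku je pes."], return_tensors="pt")  # Czech input
with torch.no_grad():
    prediction = model(**test).logits.argmax(dim=-1)
print(prediction)
```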
The benchmark collects multilingual test sets for the following tasks:
- Visual natural language inference: Decide what relation a sentence has to an image: entailment, contradiction, or neutral.