Issue #23 – Unbiased Neural MT
01 Feb 2019
Author: Raj Patel, Machine Translation Scientist @ Iconic
A recent topic of conversation and interest in the area of Neural MT – and Artificial Intelligence in general – is gender bias. Neural models are trained on large text corpora which inherently contain social biases and stereotypes, and as a consequence, translation models inherit these biases. In this article, we'll try to understand how gender bias affects translation quality and discuss a few techniques to reduce or eliminate its impact in Neural MT.
Machine Bias
Recently, there has been growing concern in the AI research community regarding "machine bias", whereby trained statistical and data-driven models come to reflect the gender and racial biases present in their training data. A number of AI tools have recently been shown to exhibit such biases, for example against women and minorities, and there have been several high-profile faux pas.
Although a systematic study of such biases can be difficult, Prates et al. (2018) exploited machine translation from gender-neutral languages (languages that do not explicitly mark the gender of the subject) to analyze the phenomenon of gender bias in AI. They prepared sentences with a comprehensive list of occupations, translated them from gender-neutral languages into English, and examined which gendered pronoun the MT system produced in the output.
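To make this probing methodology concrete, here is a minimal sketch of the idea. Turkish uses the gender-neutral pronoun "o", so an English translation must commit to "he", "she", or a neutral form. The choice of the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-tr-en model is our assumption for illustration only; Prates et al. ran their study against Google Translate with a far larger occupation list.

```python
# A minimal sketch of the gender-bias probe described above.
# Assumption: Hugging Face's `transformers` library and the public
# Helsinki-NLP/opus-mt-tr-en model stand in for the MT system under test;
# the original study used Google Translate and many more occupations.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

# doctor, nurse, engineer, teacher
occupations = ["doktor", "hemşire", "mühendis", "öğretmen"]

for occ in occupations:
    src = f"o bir {occ}"  # gender-neutral Turkish: "they are a <occupation>"
    out = translator(src)[0]["translation_text"]
    # Tokenize the output so "he" is not matched inside words like "teacher".
    tokens = out.lower().replace(".", " ").split()
    if "she" in tokens:
        gender = "female"
    elif "he" in tokens:
        gender = "male"
    else:
        gender = "neutral/other"
    print(f"{src!r} -> {out!r} [{gender}]")
```

Aggregating the pronoun choices over many occupations, and comparing them against real-world workforce statistics, is what lets a study of this kind quantify the bias rather than just exhibit isolated examples.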