A Few Words on Natural Language Processing and User Autonomy
As natural language processing (NLP) makes its way out of university labs and
becomes a crucial element of many user-facing technologies (machine
translation, search, language-model-based assistants), people are becoming
increasingly concerned about the ethics of this technology. When people talk
about NLP ethics, the main topics are: biases that models pick up from
training data, replication of toxic behavior found on the Internet,
underrepresentation of already underprivileged groups, and the gap in
technology availability between the Global North and the Global South. Since
the release of ChatGPT, some science-fiction fears have joined the list.
There are many good articles on NLP and AI ethics, and I don't have much to
add, but I want to tackle the topic from a slightly different angle. Most work
in this area treats the technology creators as the active party.