Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design
A wide range of reinforcement learning (RL) problems – including robustness, transfer learning, unsupervised RL, and emergent complexity – require specifying a distribution of tasks or environments in which a policy will be trained. However, creating a useful distribution of environments is error-prone and takes a significant amount of developer time and effort.
We propose Unsupervised Environment Design (UED) as an alternative paradigm, where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments. Existing approaches to automatically generating environments suffer from common failure modes: