Learning in two-player games between transparent opponents
We consider a scenario in which two reinforcement learning agents repeatedly play a matrix game against each other and update their parameters after each round. Each agent’s decision-making is transparent to the other, which allows each agent to predict how its opponent will play against it.
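As a minimal sketch of this transparency assumption, the snippet below models each agent as a softmax policy over the actions of a 2x2 matrix game; because agent 1 can see agent 2's parameters, it can compute agent 2's mixed strategy exactly and best-respond to it. The payoff matrix and parameter values are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical 2x2 matrix game (matching-pennies-style payoffs for the
# row player); purely illustrative, not the game studied in the paper.
PAYOFF = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])

def softmax(theta):
    # Numerically stable softmax mapping parameters to a mixed strategy.
    z = np.exp(theta - theta.max())
    return z / z.sum()

def expected_payoff(theta1, theta2):
    # Transparency: agent 1 knows theta2, so it can evaluate its own
    # expected payoff against agent 2's exact mixed strategy.
    p1, p2 = softmax(theta1), softmax(theta2)
    return p1 @ PAYOFF @ p2

theta1 = np.zeros(2)             # agent 1's parameters (placeholder)
theta2 = np.array([0.5, -0.5])   # agent 2's parameters (placeholder)

# Agent 1 predicts agent 2's play and best-responds to it.
p2 = softmax(theta2)
best_action = int(np.argmax(PAYOFF @ p2))
```

In this toy game agent 2's strategy puts more weight on its first action, so agent 1's best response is its first action; the same evaluation of `expected_payoff` is what a gradient-based learner would differentiate when updating its parameters.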
To prevent an infinite regress of both agents recursively predicting each other, each agent is required to give an opponent-independent response with probability at least epsilon. Transparency also allows each agent to anticipate and shape the other agent’s gradient step, i.e. to move to regions of parameter space