On Gradient-Based Learning in Continuous Games
We introduce a general framework for competitive gradient-based learning that encompasses a broad class of multiagent learning algorithms, and we analyze the limiting behavior of such algorithms using dynamical systems theory. For both general-sum and potential games, we characterize a nonnegligible subset of the local Nash equilibria that will be avoided if each agent employs a gradient-based learning algorithm. We also shed light on the issue of convergence to non-Nash strategies in general-sum and zero-sum games: such strategies may have no relevance to the underlying game and arise solely from the choice of algorithm. The existence and frequency of such strategies may explain some of the difficulties encountered when using gradient descent in zero-sum games, e.g., in the training of generative adversarial networks. To reinforce the theoretical contributions, we provide empirical results showing how frequently linear quadratic dynamic games (a benchmark for multiagent reinforcement learning) admit global Nash equilibria that are almost surely avoided by policy gradient.
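The non-convergence phenomenon discussed above can be seen in the simplest zero-sum setting. The sketch below (an illustration, not taken from the paper) runs simultaneous gradient play on the bilinear game f(x, y) = x·y, whose unique Nash equilibrium is (0, 0): player 1 descends its gradient while player 2 ascends, and the joint iterates spiral away from the equilibrium rather than converging to it.

```python
import math

def simultaneous_gradient_play(x0, y0, lr=0.1, steps=200):
    """Run simultaneous gradient play on f(x, y) = x * y.

    Player 1 minimizes f over x (df/dx = y); player 2 maximizes f
    over y (df/dy = x). Both update at the same time with step lr.
    """
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        # Simultaneous (not alternating) updates: each player moves
        # using the other's current, not updated, strategy.
        x, y = x - lr * y, y + lr * x
        trajectory.append((x, y))
    return trajectory

traj = simultaneous_gradient_play(1.0, 1.0)
dist_start = math.hypot(*traj[0])
dist_end = math.hypot(*traj[-1])
# Each update multiplies the distance to the equilibrium (0, 0) by
# sqrt(1 + lr**2), so the iterates spiral outward and diverge.
print(dist_start, dist_end)
```

A short calculation confirms the outward spiral: the update is a rotation composed with a scaling by sqrt(1 + lr²) > 1, so gradient play here never reaches the Nash equilibrium for any positive step size.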
© 2020, Society for Industrial and Applied Mathematics. Received by the editors December 10, 2018; accepted for publication (in revised form) November 21, 2019; published electronically February 18, 2020. This work was supported by a National Science Foundation award, CNS:1656873, and by the Defense Advanced Research Projects Agency, FA8750-18-C-0101.