
Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games

Mazumdar, Eric and Ratliff, Lillian J. and Jordan, Michael I. and Sastry, S. Shankar (2019) Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20210903-213700306

PDF (Submitted Version, 1MB). See Usage Policy.


Abstract

We show by counterexample that policy-gradient algorithms have no guarantees of even local convergence to Nash equilibria in continuous action and state space multi-agent settings. To do so, we analyze gradient-play in N-player general-sum linear quadratic (LQ) games, a classic game setting that is recently emerging as a benchmark in the field of multi-agent learning. In such games the state and action spaces are continuous, and global Nash equilibria can be found by solving coupled Riccati equations. Further, gradient-play in LQ games is equivalent to multi-agent policy-gradient. We first show that, surprisingly, these games are not convex games. Despite this, we are still able to show that the only critical points of the gradient dynamics are global Nash equilibria. We then give sufficient conditions under which policy-gradient will avoid the Nash equilibria, and generate a large number of general-sum linear quadratic games that satisfy these conditions. In such games we empirically observe the players converging to limit cycles whose time average does not coincide with a Nash equilibrium. The existence of such games indicates that one of the most popular approaches to solving classic reinforcement learning problems has no local guarantee of convergence in multi-agent settings. Further, the ease with which we can generate these counterexamples suggests that such situations are not mere edge cases and are in fact quite common.
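The gradient-play dynamics described above, in which each player simultaneously descends the gradient of its own cost with respect to its own policy parameters, can be sketched concretely. Below is a minimal illustrative Python sketch of gradient-play in a two-player scalar LQ game; the dynamics, cost weights, horizon, and step size are assumptions chosen for readability, not the paper's experimental configuration.

```python
# Minimal sketch of gradient-play in a two-player scalar LQ game.
# All constants below are illustrative assumptions, not the paper's setup.
#
# Dynamics: x_{t+1} = A x_t + B1 u1_t + B2 u2_t
# Player i uses a linear state-feedback policy u_i = -K_i x and pays
#   J_i = sum_t (Q_i x_t^2 + R_i u_{i,t}^2) over a finite horizon T.
A, B1, B2 = 0.9, 0.8, 1.5
Q1, Q2, R1, R2 = 1.0, 1.0, 0.1, 0.2
T, x0 = 50, 1.0

def cost(i, K1, K2):
    """Finite-horizon cost of player i under the joint policy (K1, K2)."""
    x, J = x0, 0.0
    for _ in range(T):
        u1, u2 = -K1 * x, -K2 * x
        J += (Q1 * x**2 + R1 * u1**2) if i == 1 else (Q2 * x**2 + R2 * u2**2)
        x = A * x + B1 * u1 + B2 * u2
    return J

def grad(i, K1, K2, eps=1e-5):
    """Central-difference gradient of J_i w.r.t. player i's own gain."""
    if i == 1:
        return (cost(1, K1 + eps, K2) - cost(1, K1 - eps, K2)) / (2 * eps)
    return (cost(2, K1, K2 + eps) - cost(2, K1, K2 - eps)) / (2 * eps)

# Gradient-play: both players take simultaneous descent steps on their
# own costs, each treating the other's current policy as fixed.
K1, K2, lr = 0.1, 0.1, 1e-3
for step in range(2000):
    g1, g2 = grad(1, K1, K2), grad(2, K1, K2)
    K1, K2 = K1 - lr * g1, K2 - lr * g2
    if step % 500 == 0:
        print(f"step {step}: K1={K1:.4f}, K2={K2:.4f}")
```

Because the game is general-sum, nothing in this coupled descent forces the joint iterate toward a Nash equilibrium; the paper's counterexamples exhibit exactly the non-convergent limit-cycle behavior such dynamics can produce.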


Item Type: Report or Paper (Discussion Paper)
Related URLs:
    http://arxiv.org/abs/1907.03712 (arXiv) - Discussion Paper
ORCID:
    Mazumdar, Eric: 0000-0002-1815-269X
    Ratliff, Lillian J.: 0000-0001-8936-0229
    Jordan, Michael I.: 0000-0001-8935-817X
Alternate Title: Policy-Gradient Algorithms Have No Guarantees of Convergence in Continuous Action and State Multi-Agent Settings
Record Number: CaltechAUTHORS:20210903-213700306
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20210903-213700306
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 110721
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 07 Sep 2021 16:27
Last Modified: 07 Sep 2021 19:27
