A Caltech Library Service

Convergence Analysis of Gradient-Based Learning in Continuous Games

Chasnov, Benjamin and Ratliff, Lillian and Mazumdar, Eric and Burden, Samuel (2020) Convergence Analysis of Gradient-Based Learning in Continuous Games. Proceedings of Machine Learning Research, 115 . pp. 935-944. ISSN 2640-3498.

PDF (Published Version) - See Usage Policy.

PDF (Supplemental Material) - See Usage Policy.


Considering a class of gradient-based multi-agent learning algorithms in non-cooperative settings, we provide convergence guarantees to a neighborhood of a stable Nash equilibrium. In particular, we consider continuous games where agents learn in (1) deterministic settings with oracle access to their gradient and (2) stochastic settings with an unbiased estimator of their gradient. We also study the effects of non-uniform learning rates, which distort the vector field and can alter both which equilibrium the agents converge to and the path they take. We support the analysis with numerical examples that provide insight into how one might synthesize games to achieve desired equilibria.
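The deterministic (oracle-gradient) setting described in the abstract can be sketched with a toy two-player quadratic game. This is an illustrative example, not the authors' code: the costs, coupling coefficients, and learning rates below are hypothetical, chosen so the game has a unique stable Nash equilibrium at the origin. Each player descends the gradient of its own cost with its own (non-uniform) learning rate.

```python
# Minimal sketch of simultaneous gradient play in a two-player continuous game.
# Hypothetical costs (not from the paper):
#   f1(x, y) = 0.5*x**2 + a*x*y   (player 1 controls x)
#   f2(x, y) = 0.5*y**2 + b*x*y   (player 2 controls y)
# For a*b < 1 the unique Nash equilibrium is (0, 0).

a, b = 0.5, -0.8      # coupling between the players' costs
g1, g2 = 0.1, 0.05    # non-uniform learning rates, one per agent

def grad1(x, y):
    # Partial derivative of f1 with respect to player 1's variable x.
    return x + a * y

def grad2(x, y):
    # Partial derivative of f2 with respect to player 2's variable y.
    return y + b * x

x, y = 1.0, -1.0
for _ in range(2000):
    # Each agent has oracle access to its own gradient and updates
    # simultaneously; in the stochastic setting of the paper these
    # gradients would be replaced by unbiased estimates.
    x, y = x - g1 * grad1(x, y), y - g2 * grad2(x, y)

print(x, y)  # both iterates approach the stable Nash equilibrium (0, 0)
```

Scaling the two learning rates differently rescales the rows of the game Jacobian, which is the "distortion of the vector field" the abstract refers to: it can change the trajectory the agents follow and, in games with several equilibria, which equilibrium they reach.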

Item Type: Article
ORCID:
Chasnov, Benjamin: 0000-0003-3484-2997
Ratliff, Lillian: 0000-0001-8936-0229
Mazumdar, Eric: 0000-0002-1815-269X
Additional Information: © 2019 The authors.
Record Number: CaltechAUTHORS:20210907-195235166
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 110744
Deposited By: George Porter
Deposited On: 07 Sep 2021 20:50
Last Modified: 07 Sep 2021 20:50
