Chasnov, Benjamin and Ratliff, Lillian and Mazumdar, Eric and Burden, Samuel (2020) Convergence Analysis of Gradient-Based Learning in Continuous Games. Proceedings of Machine Learning Research, 115 . pp. 935-944. ISSN 2640-3498. https://resolver.caltech.edu/CaltechAUTHORS:20210907-195235166
- PDF (Published Version, 817 kB) — See Usage Policy.
- PDF (Supplemental Material, 218 kB) — See Usage Policy.
Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20210907-195235166
Abstract
Considering a class of gradient-based multi-agent learning algorithms in non-cooperative settings, we provide convergence guarantees to a neighborhood of a stable Nash equilibrium. In particular, we consider continuous games where agents learn in 1) deterministic settings with oracle access to their gradient and 2) stochastic settings with an unbiased estimator of their gradient. We also study the effects of non-uniform learning rates, which cause a distortion of the vector field that can alter which equilibrium the agents converge to and the path they take. We support the analysis with numerical examples that provide insight into how one might synthesize games to achieve desired equilibria.
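The deterministic setting described in the abstract can be sketched numerically. Below is a minimal illustration (not from the paper itself): a hypothetical two-player continuous game with quadratic costs, where each agent descends its own cost via gradient play. The specific costs, coupling constant `a`, and learning rates are assumptions chosen so that the origin is a stable equilibrium of the combined gradient dynamics; non-uniform learning rates rescale the vector field but, in this example, preserve convergence while changing the path taken.

```python
import numpy as np

# Hypothetical two-player game (illustrative only, not the paper's example):
#   f1(x, y) = 0.5*x**2 + a*x*y   (agent 1 controls x, minimizes f1)
#   f2(x, y) = 0.5*y**2 - a*x*y   (agent 2 controls y, minimizes f2)
# The game's vector field is (df1/dx, df2/dy); its Jacobian at the origin
# is [[1, a], [-a, 1]], with eigenvalues 1 +/- i*a. The positive real
# parts make the origin a stable equilibrium of gradient play.

a = 2.0

def game_gradient(z):
    x, y = z
    return np.array([x + a * y,    # df1/dx for agent 1
                     y - a * x])   # df2/dy for agent 2

def gradient_play(z0, rates, steps=2000):
    """Deterministic gradient play; each agent uses its own learning rate.

    Non-uniform rates amount to multiplying the vector field by
    Lambda = diag(rates), which distorts trajectories.
    """
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z - rates * game_gradient(z)
    return z

z_uniform = gradient_play([1.0, -1.0], rates=np.array([0.05, 0.05]))
z_nonuniform = gradient_play([1.0, -1.0], rates=np.array([0.05, 0.005]))
# Both runs converge to the equilibrium at the origin, along different paths.
print(z_uniform, z_nonuniform)
```

The stochastic setting in the abstract would replace `game_gradient` with an unbiased noisy estimate and a decaying step size; the paper's guarantees then hold up to a neighborhood of the equilibrium rather than the point itself.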
Item Type: Article
Additional Information: © 2019 The authors.
Record Number: CaltechAUTHORS:20210907-195235166
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20210907-195235166
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 110744
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 07 Sep 2021 20:50
Last Modified: 07 Sep 2021 20:50