
Convergence Rates for Localized Actor-Critic in Networked Markov Potential Games

Zhou, Zhaoyi and Chen, Zaiwei and Lin, Yiheng and Wierman, Adam (2023) Convergence Rates for Localized Actor-Critic in Networked Markov Potential Games. (Unpublished)

PDF (Submitted Version), Creative Commons Attribution.

We introduce a class of networked Markov potential games where agents are associated with nodes in a network. Each agent has its own local potential function, and the reward of each agent depends only on the states and actions of agents within a κ-hop neighborhood. In this context, we propose a localized actor-critic algorithm. The algorithm is scalable since each agent uses only local information and does not need access to the global state. Further, the algorithm overcomes the curse of dimensionality through the use of function approximation. Our main results provide finite-sample guarantees up to a localization error and a function approximation error. Specifically, we achieve an Õ(ϵ⁻⁴) sample complexity measured by the averaged Nash regret. This is the first finite-sample bound for multi-agent competitive games that does not depend on the number of agents.
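To make the κ-hop locality structure from the abstract concrete, here is a minimal illustrative sketch (not the authors' code): agents sit on the nodes of a small path network, and each agent's reward reads only the states and actions of agents within κ hops. The names `k_hop_neighborhood` and `local_reward`, and the toy quadratic reward, are hypothetical choices for illustration.

```python
from collections import deque

def k_hop_neighborhood(adj, agent, kappa):
    """Return the set of agents reachable from `agent` within kappa edges
    (including the agent itself), via breadth-first search."""
    seen, frontier = {agent}, deque([(agent, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == kappa:
            continue  # do not expand beyond kappa hops
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen

def local_reward(adj, agent, kappa, states, actions):
    """Toy reward that depends only on the agent's kappa-hop neighborhood,
    mirroring the locality assumption; the quadratic form is illustrative."""
    nbhd = k_hop_neighborhood(adj, agent, kappa)
    return -sum((states[j] - actions[j]) ** 2 for j in nbhd)

# 5-agent path network: 0 - 1 - 2 - 3 - 4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(k_hop_neighborhood(adj, 2, kappa=1)))  # -> [1, 2, 3]
```

Because agent 2's reward only touches agents {1, 2, 3} when κ = 1, a localized actor-critic can estimate its critic from this neighborhood alone, which is what makes the scheme scale independently of the total number of agents.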

Item Type: Report or Paper (Discussion Paper)
Lin, Yiheng (ORCID: 0000-0001-6524-2877)
Wierman, Adam (ORCID: 0000-0002-5923-0199)
Additional Information: Attribution 4.0 International (CC BY 4.0)
Record Number: CaltechAUTHORS:20230316-204018535
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 120098
Deposited By: George Porter
Deposited On: 16 Mar 2023 22:58
Last Modified: 16 Mar 2023 22:58
