CaltechAUTHORS: A Caltech Library Service

Global Convergence of Localized Policy Iteration in Networked Multi-Agent Reinforcement Learning

Zhang, Yizhou and Qu, Guannan and Xu, Pan and Lin, Yiheng and Chen, Zaiwei and Wierman, Adam (2022) Global Convergence of Localized Policy Iteration in Networked Multi-Agent Reinforcement Learning. (Unpublished)

PDF (Submitted Version), licensed under Creative Commons Attribution.




We study a multi-agent reinforcement learning (MARL) problem where the agents interact over a given network. The goal of the agents is to cooperatively maximize the average of their entropy-regularized long-term rewards. To overcome the curse of dimensionality and to reduce communication, we propose a Localized Policy Iteration (LPI) algorithm that provably learns a near-globally-optimal policy using only local information. In particular, we show that, despite restricting each agent's attention to only its κ-hop neighborhood, the agents are able to learn a policy with an optimality gap that decays polynomially in κ. In addition, we show the finite-sample convergence of LPI to the globally optimal policy, which explicitly captures the trade-off between optimality and computational complexity in choosing κ. Numerical simulations demonstrate the effectiveness of LPI.
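The paper's central structural idea is that each agent acts on the state of only its κ-hop neighborhood in the network, rather than the global state. The sketch below illustrates that restriction only; it is not the LPI algorithm itself, and all names (`k_hop_neighborhood`, `local_observation`, the line-graph example) are illustrative assumptions, not from the paper.

```python
from collections import deque

def k_hop_neighborhood(adj, agent, kappa):
    """Return the set of agents within kappa hops of `agent`,
    found by breadth-first search over the interaction graph."""
    seen = {agent}
    frontier = deque([(agent, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == kappa:
            continue  # do not expand beyond kappa hops
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

def local_observation(global_state, adj, agent, kappa):
    """Restrict the global state to the agent's kappa-hop neighborhood;
    a localized policy would take only this as input."""
    return {i: global_state[i] for i in k_hop_neighborhood(adj, agent, kappa)}

# Hypothetical example: 5 agents on a line graph 0-1-2-3-4.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
global_state = {i: f"s{i}" for i in adj}
obs = local_observation(global_state, adj, agent=2, kappa=1)
# agent 2 with kappa=1 sees only agents {1, 2, 3}
```

Larger κ gives each agent a view closer to the global state (smaller optimality gap, per the abstract) at the cost of more communication and computation.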

Item Type: Report or Paper (Discussion Paper)
Related URLs:
- Paper Item
- Journal Article
ORCID:
Qu, Guannan: 0000-0002-5466-3550
Xu, Pan: 0000-0002-2559-8622
Lin, Yiheng: 0000-0001-6524-2877
Chen, Zaiwei: 0000-0001-9915-5595
Wierman, Adam: 0000-0002-5923-0199
Additional Information: Attribution 4.0 International (CC BY 4.0)
Record Number: CaltechAUTHORS:20230316-204011712
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 120096
Deposited By: George Porter
Deposited On: 16 Mar 2023 22:55
Last Modified: 17 Mar 2023 21:31
