Global Convergence of Localized Policy Iteration in Networked Multi-Agent Reinforcement Learning
Abstract
We study a multi-agent reinforcement learning (MARL) problem where the agents interact over a given network. The goal of the agents is to cooperatively maximize the average of their entropy-regularized long-term rewards. To overcome the curse of dimensionality and to reduce communication, we propose a Localized Policy Iteration (LPI) algorithm that provably learns a near-globally-optimal policy using only local information. In particular, we show that, despite restricting each agent's attention to only its κ-hop neighborhood, the agents are able to learn a policy with an optimality gap that decays polynomially in κ. In addition, we show the finite-sample convergence of LPI to the globally optimal policy, which explicitly captures the trade-off between optimality and computational complexity in choosing κ. Numerical simulations demonstrate the effectiveness of LPI.
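The record contains only the abstract, so the following is a minimal, self-contained sketch (not the paper's algorithm) of the two ingredients the abstract highlights: restricting each agent to its κ-hop neighborhood and applying an entropy-regularized (softmax) policy-improvement step. Everything here is an illustrative assumption, including the ring topology, binary local states and actions, the temperature `tau`, and the helper names `kappa_hop_neighborhood`, `truncated_Q`, and `soft_policy_improvement`.

```python
# Illustrative sketch of kappa-hop localization with softmax policy improvement.
# Not the paper's pseudocode; all names and modeling choices are assumptions.
import numpy as np

n_agents = 6   # hypothetical number of agents on a ring network
kappa = 1      # localization radius (kappa-hop neighborhood)
tau = 0.1      # entropy-regularization temperature (assumed symbol)
rng = np.random.default_rng(0)

def kappa_hop_neighborhood(i, kappa, n):
    """Indices of the agents within kappa hops of agent i on a ring graph."""
    return [(i + d) % n for d in range(-kappa, kappa + 1)]

def truncated_Q(i):
    """Toy stand-in for a localized policy-evaluation step: a Q-table for agent i
    that depends only on the binary states of its kappa-hop neighborhood and on
    agent i's own (binary) action."""
    nbrs = kappa_hop_neighborhood(i, kappa, n_agents)
    n_local_states = 2 ** len(nbrs)
    return rng.normal(size=(n_local_states, 2)), nbrs

def soft_policy_improvement(Q_i, tau):
    """Entropy-regularized (softmax) policy improvement from a truncated Q-table."""
    logits = Q_i / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

# Each agent improves its local policy using only information gathered from its
# kappa-hop neighborhood, which is where the communication savings come from.
for i in range(n_agents):
    Q_i, nbrs = truncated_Q(i)
    pi_i = soft_policy_improvement(Q_i, tau)
    print(f"agent {i}: neighborhood {nbrs}, local policy shape {pi_i.shape}")
```

In the setting described by the abstract, the truncated Q-values would come from a localized policy-evaluation step rather than the random placeholder above, and the error introduced by ignoring agents outside the κ-hop neighborhood is exactly the optimality gap that the paper bounds as a function of κ.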
Additional Information
Attribution 4.0 International (CC BY 4.0)

Attached Files

Name | Size
---|---
Submitted - 2211.17116.pdf (md5:be414c51a8a94253b4fdb81fea6e800f) | 993.2 kB
Additional details
- Eprint ID: 120096
- Resolver ID: CaltechAUTHORS:20230316-204011712
- Created: 2023-03-16 (from EPrint's datestamp field)
- Updated: 2023-03-17 (from EPrint's last_modified field)