CaltechAUTHORS
  A Caltech Library Service

Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward

Qu, Guannan and Lin, Yiheng and Wierman, Adam and Li, Na (2020) Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20200707-095349805

PDF (Submitted Version) - 740 kB
See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20200707-095349805

Abstract

It has long been recognized that multi-agent reinforcement learning (MARL) faces significant scalability issues because the sizes of the state and action spaces grow exponentially with the number of agents. In this paper, we identify a rich class of networked MARL problems whose local dependence structure allows them to be solved in a scalable manner. Specifically, we propose a Scalable Actor-Critic (SAC) method that learns a near-optimal localized policy for optimizing the average reward, with complexity scaling in the state-action space size of local neighborhoods rather than of the entire network. Our result centers on identifying and exploiting an exponential decay property that ensures the effect of agents on each other decays exponentially fast in their graph distance.
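To make the locality idea concrete: because of the exponential decay property, each agent can maintain a value estimate indexed only by the states and actions of its kappa-hop neighborhood, so the per-agent table size depends on the neighborhood rather than on the whole network. The following is a minimal illustrative sketch of that truncation on a line graph, not the paper's implementation; all names (neighborhood, TruncatedQ, the update rule, and the chosen parameters) are hypothetical.

    # Hypothetical sketch of the truncation idea from the abstract: each agent
    # keeps a Q-estimate keyed only by its kappa-hop neighborhood's (state,
    # action) profile, so its table size is independent of the network size n.
    # This is an illustration, not the paper's Scalable Actor-Critic algorithm.
    from collections import defaultdict

    n = 6             # agents arranged on a line graph: 0 - 1 - ... - n-1
    kappa = 1         # truncation radius, in hops
    states = [0, 1]   # per-agent state space
    actions = [0, 1]  # per-agent action space

    def neighborhood(i, kappa, n):
        """Indices of agents within kappa hops of agent i on the line graph."""
        return list(range(max(0, i - kappa), min(n, i + kappa + 1)))

    class TruncatedQ:
        """Local value estimate for one agent, keyed by the (state, action)
        profile of its kappa-hop neighborhood only."""
        def __init__(self, i):
            self.nbrs = neighborhood(i, kappa, n)
            self.table = defaultdict(float)

        def key(self, global_state, global_action):
            # Project the global profile onto the kappa-hop neighborhood.
            return (tuple(global_state[j] for j in self.nbrs),
                    tuple(global_action[j] for j in self.nbrs))

        def update(self, s, a, reward, alpha=0.1):
            k = self.key(s, a)
            self.table[k] += alpha * (reward - self.table[k])  # running average

    # Per-agent table size is at most |S|^(2*kappa+1) * |A|^(2*kappa+1),
    # independent of n, versus |S|^n * |A|^n for a centralized Q-table.
    q2 = TruncatedQ(2)
    q2.update([0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0], reward=1.0)
    print(len(q2.table))  # 1 entry after one update

The exponential decay property is what justifies this truncation: the error introduced by ignoring agents outside the kappa-hop neighborhood shrinks exponentially in kappa.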


Item Type: Report or Paper (Discussion Paper)
Related URLs: http://arxiv.org/abs/2006.06626 (arXiv, Discussion Paper)
Record Number: CaltechAUTHORS:20200707-095349805
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20200707-095349805
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 104238
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 07 Jul 2020 17:12
Last Modified: 07 Jul 2020 17:12
