CaltechAUTHORS
  A Caltech Library Service

Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems

Qu, Guannan and Wierman, Adam and Li, Na (2019) Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20200214-105551932

PDF - Accepted Version (432kB). See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20200214-105551932

Abstract

We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact locally, where the objective is to find localized policies that maximize the (discounted) global reward. A fundamental challenge in this setting is that the state-action space size scales exponentially in the number of agents, rendering the problem intractable for large networks. In this paper, we propose a Scalable Actor-Critic (SAC) framework that exploits the network structure and finds a localized policy that is an O(ρ^(κ+1))-approximation of a stationary point of the objective for some ρ ∈ (0,1), with complexity that scales with the local state-action space size of the largest κ-hop neighborhood of the network.
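The complexity claim above hinges on each agent conditioning only on its κ-hop neighborhood rather than the full network. A minimal sketch of that locality structure, assuming a hypothetical line graph of five agents with binary local states (not an example from the paper), is:

```python
from collections import deque

def k_hop_neighborhood(adj, i, kappa):
    """Return the set of agents within kappa hops of agent i, via BFS."""
    seen = {i}
    frontier = deque([(i, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == kappa:
            continue  # do not expand beyond kappa hops
        for j in adj[node]:
            if j not in seen:
                seen.add(j)
                frontier.append((j, dist + 1))
    return seen

# Hypothetical line graph on 5 agents: 0 - 1 - 2 - 3 - 4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

# With binary local states, a localized policy for agent i conditions
# only on states in its kappa-hop neighborhood: 2^|N_i^kappa| entries,
# versus 2^n for a policy over the full global state.
kappa = 1
n = len(adj)
local_sizes = {i: 2 ** len(k_hop_neighborhood(adj, i, kappa)) for i in adj}
global_size = 2 ** n
```

Here the largest 1-hop neighborhood has 3 agents, so each localized policy table has at most 2^3 = 8 entries rather than 2^5 = 32; this gap is what grows exponentially in larger networks.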


Item Type: Report or Paper (Discussion Paper)
Related URLs: http://arxiv.org/abs/1912.02906 (arXiv, Discussion Paper)
Additional Information: © 2020 G. Qu, A. Wierman & N. Li. To appear in Proceedings of Machine Learning Research.
Subject Keywords: Multi-agent reinforcement learning, networked systems, actor-critic methods
Record Number: CaltechAUTHORS:20200214-105551932
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20200214-105551932
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 101299
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 14 Feb 2020 21:10
Last Modified: 14 Feb 2020 21:10
