CaltechAUTHORS: A Caltech Library Service

Safe Policy Synthesis in Multi-Agent POMDPs via Discrete-Time Barrier Functions

Ahmadi, Mohamadreza and Singletary, Andrew and Burdick, Joel W. and Ames, Aaron D. (2019) Safe Policy Synthesis in Multi-Agent POMDPs via Discrete-Time Barrier Functions. (Unpublished)

PDF (Submitted Version); see Usage Policy.




A multi-agent partially observable Markov decision process (MPOMDP) is a modeling paradigm used for high-level planning of heterogeneous autonomous agents subject to uncertainty and partial observation. Despite their modeling efficiency, MPOMDPs have not received significant attention in safety-critical settings. In this paper, we use barrier functions to design policies for MPOMDPs that ensure safety. Notably, our method does not rely on discretization of the belief space or on finite memory. To this end, we formulate sufficient and necessary conditions for the safety of a given set based on discrete-time barrier functions (DTBFs), and we demonstrate that our formulation also allows for Boolean compositions of DTBFs to represent more complicated safe sets. We show that the proposed method can be implemented online by a sequence of one-step greedy algorithms, either as a standalone safe controller or as a safety filter applied to a nominal planning policy. We illustrate the efficiency of the proposed methodology based on DTBFs using a high-fidelity simulation of heterogeneous robots.
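The one-step greedy safety filter described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the barrier function `h`, the dynamics `step`, the decay rate `gamma`, and the discrete action set are all hypothetical placeholders; the paper works over belief states of an MPOMDP, whereas this toy operates on a fully observed 1-D state. The core idea carried over is the discrete-time barrier condition h(x_{k+1}) >= (1 - gamma) h(x_k), which keeps the superlevel set {x : h(x) >= 0} forward invariant.

```python
def dtbf_safety_filter(state, nominal_action, actions, step, h, gamma=0.5):
    """One-step greedy DTBF safety filter (illustrative sketch).

    Among the candidate actions, return the one closest to the nominal
    action whose successor state satisfies the discrete-time barrier
    condition h(x_next) >= (1 - gamma) * h(x), with 0 < gamma <= 1.
    """
    threshold = (1.0 - gamma) * h(state)
    # Greedily try actions in order of distance from the nominal action,
    # so the filter deviates from the nominal policy as little as possible.
    for a in sorted(actions, key=lambda a: abs(a - nominal_action)):
        if h(step(state, a)) >= threshold:
            return a
    raise RuntimeError("no candidate action satisfies the barrier condition")


# Toy example: keep the state inside the safe set [-1, 1], encoded by
# the barrier function h(x) = 1 - x^2 (nonnegative exactly on the set).
h = lambda x: 1.0 - x * x
step = lambda x, a: x + 0.2 * a          # simple discrete-time dynamics
actions = [-1.0, -0.5, 0.0, 0.5, 1.0]

# Near the boundary (x = 0.9), the nominal action 1.0 would push the
# state out of the safe set, so the filter overrides it.
safe_a = dtbf_safety_filter(0.9, 1.0, actions, step, h)
```

In this run the filter rejects actions 1.0 and 0.5 (their successors violate the barrier condition) and falls back to 0.0, the closest action that keeps the barrier value from decaying too fast.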

Item Type: Report or Paper (Discussion Paper)
Related URLs: Paper
ORCID: Ames, Aaron D.: 0000-0003-0848-3177
Record Number: CaltechAUTHORS:20190410-120651366
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 94638
Deposited By: George Porter
Deposited On: 10 Apr 2019 20:00
Last Modified: 03 Oct 2019 21:05
