
Constrained Risk-Averse Markov Decision Processes

Ahmadi, Mohamadreza and Rosolia, Ugo and Ingham, Michel D. and Murray, Richard M. and Ames, Aaron D. (2021) Constrained Risk-Averse Markov Decision Processes.



Abstract: We consider the problem of designing policies for Markov decision processes (MDPs) with dynamic coherent risk objectives and constraints. We begin by formulating the problem in a Lagrangian framework. Under the assumption that the risk objectives and constraints can be represented by a Markov risk transition mapping, we propose an optimization-based method to synthesize Markovian policies that lower-bound the constrained risk-averse problem. We demonstrate that the formulated optimization problems are in the form of difference convex programs (DCPs) and can be solved by the disciplined convex-concave programming (DCCP) framework. We show that these results generalize linear programs for constrained MDPs with total discounted expected costs and constraints. Finally, we illustrate the effectiveness of the proposed method with numerical experiments on a rover navigation problem involving conditional-value-at-risk (CVaR) and entropic-value-at-risk (EVaR) coherent risk measures.
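The abstract's experiments involve the CVaR and EVaR coherent risk measures. As an illustration only (not the paper's synthesis method), the following NumPy sketch evaluates both measures on sampled costs, using the standard Rockafellar–Uryasev form for CVaR and a grid-search minimization of the standard EVaR variational formula; the function names and the synthetic cost distribution are my own assumptions.

```python
import numpy as np

def cvar(costs, alpha):
    """Sample CVaR_alpha: expected cost over the worst alpha-fraction of
    outcomes, via the Rockafellar-Uryasev form VaR + E[(X - VaR)_+] / alpha."""
    var = np.quantile(costs, 1.0 - alpha)  # value-at-risk threshold
    return var + np.mean(np.maximum(costs - var, 0.0)) / alpha

def evar(costs, alpha, z_grid=np.linspace(1e-2, 50.0, 2000)):
    """Sample EVaR_alpha: inf over z > 0 of (1/z) * log(E[exp(z X)] / alpha),
    approximated by a grid search; log-sum-exp keeps the MGF stable."""
    costs = np.asarray(costs)
    vals = []
    for z in z_grid:
        s = z * costs
        m = s.max()
        log_mgf = m + np.log(np.mean(np.exp(s - m)))  # log E[exp(z X)]
        vals.append((log_mgf - np.log(alpha)) / z)
    return min(vals)

# Illustrative check on synthetic stage costs: for a fixed alpha,
# E[X] <= CVaR_alpha(X) <= EVaR_alpha(X) <= max(X).
rng = np.random.default_rng(0)
costs = rng.normal(1.0, 0.5, 10_000)
print(cvar(costs, 0.1), evar(costs, 0.1))
```

Both measures are coherent, and EVaR upper-bounds CVaR at the same confidence level, which matches the ordering printed above.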

Item Type: Report or Paper (Discussion Paper)
Related URLs: Paper; Video and Slides
ORCID:
Ahmadi, Mohamadreza: 0000-0003-1447-3012
Rosolia, Ugo: 0000-0002-1682-0551
Murray, Richard M.: 0000-0002-5785-7481
Ames, Aaron D.: 0000-0003-0848-3177
Additional Information: © 2021, Association for the Advancement of Artificial Intelligence.
Record Number: CaltechAUTHORS:20210120-165231602
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 107610
Deposited By: George Porter
Deposited On: 21 Jan 2021 15:31
Last Modified: 18 Aug 2021 18:33
