
Risk-Averse Decision Making Under Uncertainty

Ahmadi, Mohamadreza and Rosolia, Ugo and Ingham, Michel D. and Murray, Richard M. and Ames, Aaron D. (2021) Risk-Averse Decision Making Under Uncertainty. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20220224-200812106

PDF (Submitted Version) - 742 kB. See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20220224-200812106

Abstract

A large class of decision-making problems under uncertainty can be described via Markov decision processes (MDPs) or partially observable MDPs (POMDPs), with applications to artificial intelligence and operations research, among others. Traditionally, policy synthesis techniques are proposed such that a total expected cost or reward is minimized or maximized. However, optimality in the total expected cost sense is only reasonable if the system's behavior over a large number of runs is of interest, which has limited the use of such policies in practical mission-critical scenarios, wherein large deviations from the expected behavior may lead to mission failure. In this paper, we consider the problem of designing policies for MDPs and POMDPs with objectives and constraints in terms of dynamic coherent risk measures, which we refer to as the constrained risk-averse problem. For MDPs, we reformulate the problem into an inf-sup problem via the Lagrangian framework and propose an optimization-based method to synthesize Markovian policies. We demonstrate that the formulated optimization problems are in the form of difference convex programs (DCPs) and can be solved by the disciplined convex-concave programming (DCCP) framework. We show that these results generalize linear programs for constrained MDPs with total discounted expected costs and constraints. For POMDPs, we show that, if the coherent risk measures can be defined as a Markov risk transition mapping, an infinite-dimensional optimization can be used to design Markovian belief-based policies. For stochastic finite-state controllers (FSCs), we show that the latter optimization simplifies to a (finite-dimensional) DCP and can be solved by the DCCP framework. We incorporate these DCPs in a policy iteration algorithm to design risk-averse FSCs for POMDPs.
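To make the abstract's DCP-via-DCCP step concrete, the sketch below solves a toy difference-of-convex problem with the open-source `dccp` extension to CVXPY, which implements disciplined convex-concave programming. This is a minimal illustration of the solver machinery only: the toy objective (maximizing a convex norm over a box) is an assumption chosen for brevity and is not the paper's risk-averse policy-synthesis program.

```python
# Minimal sketch: solving a difference-of-convex program with the DCCP
# framework (the `dccp` extension to CVXPY; pip install dccp).
# The toy objective below is illustrative only -- it is NOT the paper's
# constrained risk-averse MDP formulation.
import cvxpy as cp
import dccp

x = cp.Variable(2)
y = cp.Variable(2)

# Maximizing a convex function (a norm) over a box is nonconvex,
# but it is a valid difference-of-convex program.
prob = cp.Problem(cp.Maximize(cp.norm(x - y, 2)),
                  [0 <= x, x <= 1, 0 <= y, y <= 1])

assert not prob.is_dcp()    # rejected by plain disciplined convex rules
assert dccp.is_dccp(prob)   # but it satisfies the DCCP rule set

# The convex-concave procedure heuristic finds a stationary point.
prob.solve(method="dccp")
print("objective:", prob.value, "x:", x.value, "y:", y.value)
```

In the paper's setting, the convex and concave parts of the program would instead arise from the coherent-risk reformulation of the MDP objective and constraints, rather than from a norm objective as in this toy example.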


Item Type: Report or Paper (Discussion Paper)
Related URLs:
http://arxiv.org/abs/2109.04082 (arXiv, Discussion Paper)
ORCID:
Ahmadi, Mohamadreza: 0000-0003-1447-3012
Rosolia, Ugo: 0000-0002-1682-0551
Ingham, Michel D.: 0000-0001-5893-543X
Murray, Richard M.: 0000-0002-5785-7481
Ames, Aaron D.: 0000-0003-0848-3177
Additional Information: M. Ahmadi acknowledges stimulating discussions with Dr. Masahiro Ono at NASA Jet Propulsion Laboratory and Prof. Marco Pavone at Nvidia Research-Stanford University.
Subject Keywords: Markov Processes, Stochastic systems, Uncertain systems
Record Number: CaltechAUTHORS:20220224-200812106
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20220224-200812106
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 113580
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 28 Feb 2022 17:25
Last Modified: 28 Feb 2022 17:25
