Safe Policies for Factored Partially Observable Stochastic Games

Conference Paper (2021)
Author(s)

Steven Carr (The University of Texas at Austin)

Nils Jansen (Radboud Universiteit Nijmegen)

Suda Bharadwaj (The University of Texas at Austin)

M.T.J. Spaan (TU Delft - Algorithmics)

Ufuk Topcu (The University of Texas at Austin)

Research Group
Algorithmics
Copyright
© 2021 Steven Carr, Nils Jansen, Suda Bharadwaj, M.T.J. Spaan, Ufuk Topcu
DOI (related publication)
https://doi.org/10.15607/RSS.2021.XVII.079
Publication Year
2021
Language
English
ISBN (electronic)
978-0-9923747-7-8
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We study planning problems where a controllable agent operates under partial observability and interacts with an uncontrollable opponent, also referred to as the adversary. The agent has two distinct objectives: to maximize an expected value and to adhere to a safety specification. Multi-objective partially observable stochastic games (POSGs) formally model such problems. Yet, even for a single objective, computing suitable policies for POSGs is theoretically hard and computationally intractable in practice. Using a factored state-space representation, we define a decoupling scheme for the POSG state space that, under certain assumptions on the observability and the reward structure, separates the state components relevant for the reward from those relevant for safety. This decoupling enables the computation of provably safe and reward-optimal policies in a tractable two-stage approach. In particular, on the fully observable components related to safety, we exactly compute the set of policies that captures all possible safe choices against the opponent. We restrict the agent's behavior to these safe policies and project the POSG to a partially observable Markov decision process (POMDP). Any reward-maximal policy for the POMDP is then guaranteed to be safe and reward-maximal for the POSG. We showcase the feasibility of our approach using high-fidelity simulations of two case studies concerning UAV path planning and autonomous driving. Moreover, to demonstrate practical applicability, we design a physical experiment involving a robot decision-making problem under energy constraints, motivated by the helicopter paired with NASA's Perseverance Mars rover.
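To make the two-stage approach described above concrete, the following Python sketch illustrates stage one on the fully observable safety components: a greatest-fixed-point computation of the safe region of a safety game against the adversary, followed by extraction of all safe (permissive) action choices per state. The function names, the toy line-world demo, and the schematic stage-two call are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hedged sketch of the two-stage scheme from the abstract (assumed structure).

def safe_region(states, actions, adv_actions, step, is_safe):
    """Greatest fixed point of the safety game on the fully observable
    safety components: keep states from which the agent has some action
    whose successors stay in the current region for every adversary move."""
    region = {s for s in states if is_safe(s)}
    while True:
        keep = set()
        for s in region:
            if any(all(step(s, a, o) in region for o in adv_actions)
                   for a in actions):
                keep.add(s)
        if keep == region:
            return region
        region = keep


def permissive_safe_actions(region, actions, adv_actions, step):
    """All possible safe choices per state: actions that keep the play
    inside the safe region regardless of the adversary's response."""
    return {
        s: [a for a in actions
            if all(step(s, a, o) in region for o in adv_actions)]
        for s in region
    }


def restrict_and_solve(posg, safe_actions, pomdp_solver):
    """Stage two (schematic): restrict the POSG to the safe actions,
    project it to a POMDP, and maximize reward with any POMDP solver.
    Both `posg.project_to_pomdp` and `pomdp_solver` are hypothetical APIs."""
    pomdp = posg.project_to_pomdp(allowed_actions=safe_actions)
    return pomdp_solver(pomdp)


if __name__ == "__main__":
    # Toy line-world: cell 4 is unsafe; the adversary may push the agent
    # one step to the right after each of the agent's moves.
    states = range(5)
    actions = [-1, 0, 1]
    adv_actions = [0, 1]
    step = lambda s, a, o: min(max(s + a + o, 0), 4)
    region = safe_region(states, actions, adv_actions, step, lambda s: s != 4)
    print(sorted(region))                                        # [0, 1, 2, 3]
    print(permissive_safe_actions(region, actions, adv_actions, step))
```

In the toy example only the left-moving action remains safe in cell 3, while cells further from the unsafe boundary keep several safe choices; a reward-maximizing POMDP policy would then select among exactly these permitted actions.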

Files

P079_1_.pdf
(pdf | 17.6 MB)
License info not available