Online and offline learning of player objectives from partial observations in dynamic games

Journal Article (2023)
Author(s)

L. Peters (TU Delft - Learning & Autonomous Control, Universität Bonn)

Vicenç Rubies-Royo (University of California)

Claire Tomlin (University of California)

L. Ferranti (TU Delft - Learning & Autonomous Control)

J. Alonso-Mora (TU Delft - Learning & Autonomous Control)

Cyrill Stachniss (Universität Bonn)

David Fridovich-Keil (The University of Texas at Austin)

Research Group
Learning & Autonomous Control
Copyright
© 2023 L. Peters, Vicenç Rubies-Royo, Claire J. Tomlin, L. Ferranti, J. Alonso-Mora, Cyrill Stachniss, David Fridovich-Keil
DOI related publication
https://doi.org/10.1177/02783649231182453
Publication Year
2023
Language
English
Issue number
10
Volume number
42
Pages (from-to)
917-937
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Robots deployed to the real world must be able to interact with other agents in their environment. Dynamic game theory provides a powerful mathematical framework for modeling scenarios in which agents have individual objectives and interactions evolve over time. However, a key limitation of such techniques is that they require a priori knowledge of all players’ objectives. In this work, we address this issue by proposing a novel method for learning players’ objectives in continuous dynamic games from noise-corrupted, partial state observations. Our approach learns objectives by coupling the estimation of unknown cost parameters of each player with inference of unobserved states and inputs through Nash equilibrium constraints. By coupling past state estimates with future state predictions, our approach is amenable to simultaneous online learning and prediction in receding horizon fashion. We demonstrate our method in several simulated traffic scenarios in which we recover players’ preferences for, e.g., desired travel speed and collision-avoidance behavior. Results show that our method reliably estimates game-theoretic models from noise-corrupted data that closely match ground-truth objectives, consistently outperforming state-of-the-art approaches.
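As a rough illustration of the idea described in the abstract (a hedged sketch, not the paper's exact formulation), the inverse dynamic game can be viewed as a constrained estimation problem: fit each player's unknown cost parameters and the unobserved trajectory jointly, subject to the condition that the trajectory is a Nash equilibrium of the game induced by those parameters. Here, the observation model p, dynamics f, player Lagrangians L^i, and horizon T are generic placeholders.

\begin{aligned}
\max_{\theta^{1:N},\; x_{1:T},\; u_{1:T}^{1:N}} \quad & \sum_{t=1}^{T} \log p\!\left(y_t \mid x_t\right) && \text{(likelihood of partial, noisy observations } y_t\text{)}\\
\text{s.t.} \quad & x_{t+1} = f\!\left(x_t, u_t^{1},\dots,u_t^{N}\right), && t = 1,\dots,T-1,\\
& \nabla_{x_{1:T},\,u_{1:T}^{i}}\, \mathcal{L}^{i}\!\left(x_{1:T}, u_{1:T}^{1:N};\, \theta^{i}\right) = 0, && i = 1,\dots,N,
\end{aligned}

where the last set of constraints encodes first-order (KKT-type) Nash equilibrium conditions for each player i with cost parameters \theta^{i}. In a receding horizon setting, a problem of this form would be re-solved as new observations arrive, coupling smoothing of past states with prediction of future ones.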