Robust Conformal Prediction for Adaptive Motion Planning Among Interactive Agents

Master Thesis (2024)
Author(s)

M. Prashar (TU Delft - Mechanical Engineering)

Contributor(s)

Javier Alonso-Mora – Mentor (TU Delft - Learning & Autonomous Control)

Lars Lindemann – Mentor

Jenny Della Santina – Graduation committee member (TU Delft - Learning & Autonomous Control)

Luca Laurenti – Graduation committee member (TU Delft - Team Luca Laurenti)

Faculty
Mechanical Engineering
Publication Year
2024
Language
English
Graduation Date
29-08-2024
Awarding Institution
Delft University of Technology
Programme
Mechanical Engineering | Vehicle Engineering | Cognitive Robotics
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Autonomous motion planning requires the ability to safely reason about learned trajectory predictors, particularly in settings where an agent can influence other agents' behavior. These learned predictors are essential for anticipating the future states of uncontrollable agents whose decision-making is difficult to model analytically, so quantifying their uncertainty is crucial for safe planning and control. In this work, we introduce a framework for interactive motion planning in unknown dynamic environments with probabilistic safety assurances. We adapt a model predictive controller (MPC) to distribution shifts that arise in learned trajectory predictors when other agents react to the ego agent's plan. Our approach leverages tools from conformal prediction (CP) to detect when another agent's behavior deviates from the training distribution and employs robust CP to quantify the uncertainty in trajectory predictions during these interactions. We propose a method for estimating interaction-induced distribution shifts at runtime and adopt the Huber quantile for enhanced outlier detection. Using a KL-divergence ambiguity set that upper-bounds the distribution shift, our method constructs prediction regions that retain probabilistic assurances under distribution shifts caused by interactions with the ego agent. We evaluate our framework in interactive scenarios involving navigation around autonomous vehicles in the BITS simulator, demonstrating enhanced safety and reduced conservatism.
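
To make the recipe in the abstract concrete, the following is a minimal, hypothetical sketch (not code from the thesis) of two ingredients it names: inflating the conformal quantile level so that coverage survives a KL-bounded distribution shift, and a Huber-style quantile that damps the influence of outlying calibration scores. The Bernoulli-KL reduction is the standard f-divergence-robust conformal prediction argument; the shift bound eps, miscoverage alpha, Huber width delta, the direction of the KL bound, and all function names are illustrative assumptions, not the thesis's definitions.

```python
# Hypothetical sketch, not the thesis implementation. Assumes the test-time
# score distribution Q satisfies KL(Q || P_cal) <= eps, where P_cal is the
# calibration distribution (the direction of the bound is an assumption).
import numpy as np
from scipy.optimize import minimize_scalar


def bernoulli_kl(z, beta):
    """KL divergence between Bernoulli(z) and Bernoulli(beta)."""
    z, beta = np.clip(z, 1e-12, 1 - 1e-12), np.clip(beta, 1e-12, 1 - 1e-12)
    return z * np.log(z / beta) + (1 - z) * np.log((1 - z) / (1 - beta))


def robust_level(alpha, eps, tol=1e-10):
    """Smallest nominal level beta such that any event with calibration
    probability beta keeps probability >= 1 - alpha under every shift with
    KL <= eps. By the data-processing inequality this reduces to the scalar
    condition kl(1 - alpha, beta) >= eps with beta >= 1 - alpha."""
    target = 1.0 - alpha
    hi = 1.0 - 1e-9
    if bernoulli_kl(target, hi) < eps:
        return 1.0  # shift too large: region must cover all calibration scores
    lo = target
    while hi - lo > tol:  # bisection on the monotone condition
        mid = 0.5 * (lo + hi)
        if bernoulli_kl(target, mid) >= eps:
            hi = mid
        else:
            lo = mid
    return hi


def robust_conformal_radius(scores, alpha, eps):
    """Prediction-region radius: the inflated empirical quantile of the
    calibration scores, with the usual (n + 1) finite-sample correction."""
    n = len(scores)
    k = int(np.ceil((n + 1) * robust_level(alpha, eps)))
    return np.inf if k > n else np.sort(scores)[k - 1]


def huber_quantile(scores, tau, delta):
    """One reading of a 'Huber quantile': the minimizer of an asymmetrically
    weighted Huber loss, which recovers the pinball loss (and hence the
    tau-quantile) as delta -> 0 while reducing sensitivity to outliers."""
    def loss(q):
        u = scores - q
        h = np.where(np.abs(u) <= delta,
                     0.5 * u**2 / delta, np.abs(u) - 0.5 * delta)
        return np.mean(np.where(u >= 0, tau, 1 - tau) * h)
    return minimize_scalar(loss, bounds=(scores.min(), scores.max()),
                           method="bounded").x


# Toy usage: scores play the role of prediction errors ||x_true - x_pred||
# of the learned trajectory predictor on a held-out calibration set.
rng = np.random.default_rng(0)
scores = np.abs(rng.normal(0.0, 0.5, size=500))
print("plain 90% radius:", np.sort(scores)[int(np.ceil(501 * 0.9)) - 1])
print("KL-robust radius:", robust_conformal_radius(scores, alpha=0.1, eps=0.05))
print("Huber 90% quantile:", huber_quantile(scores, tau=0.9, delta=0.05))
```

Plugging the resulting radius into the MPC's collision-avoidance constraints is the planning-side step the abstract describes; the runtime estimation of the shift bound from interaction data is thesis-specific and is not sketched here.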

Files

Mayank_Prashar_Thesis1.pdf
(pdf | 0 Mb)
- Embargo expired on 25-02-2025
License info not available