Unravelling uncertainty in trajectory prediction using a non-parametric approach

Journal Article (2024)
Author(s)

G. Li (TU Delft - Transport and Planning)

Zirui Li (Beijing Institute of Technology)

Victor Knoop (TU Delft - Transport and Planning)

J.W.C. van Lint (TU Delft - Transport and Planning)

Research Group
Transport and Planning
DOI related publication
https://doi.org/10.1016/j.trc.2024.104659
Publication Year
2024
Language
English
Volume number
163
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Predicting the trajectories of road agents is fundamental for self-driving cars. Trajectory prediction contains many sources of uncertainty, in both the data and the modelling. A thorough understanding of this uncertainty is crucial in a safety-critical task like auto-piloting a vehicle. In practice, it is necessary to distinguish between the uncertainty caused by partial observability of all factors that may affect a driver's near-future decisions, the so-called aleatoric uncertainty, and the uncertainty of deploying a model in new scenarios that are possibly not present in the training set, the so-called epistemic uncertainty. Together, they reflect the trade-off between data collection and model improvement. In this paper, we propose a new framework to systematically quantify both sources of uncertainty. Specifically, to approximate the spatial distribution of an agent's future position, we propose a 2D histogram-based deep learning model combined with deep ensemble techniques, measuring aleatoric and epistemic uncertainty through entropy-based quantities. The proposed Uncertainty Quantification Network (UQnet) employs a causal component to enhance its generalizability, so that rare driving behaviours can be effectively identified. Experiments on the INTERACTION dataset show that UQnet gives more robust predictions in generalizability tests than correlation-based models. Further analysis shows that cases with high aleatoric uncertainty are mainly caused by heterogeneous driving behaviours and unknown intended directions. Based on this aleatoric uncertainty component, we estimate lower bounds of the mean squared error and final displacement error as indicators of the predictability of trajectories. Furthermore, the analysis of epistemic uncertainty illustrates that domain knowledge of speed-dependent driving behaviour is essential for adapting a model from low-speed to high-speed situations. Our paper contributes to motion forecasting with a new framework that recasts the problem of accuracy improvement as one of differentiating between unpredictable components and rare cases for which more and different data should be collected.
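
The abstract describes measuring aleatoric and epistemic uncertainty with entropy-based quantities over deep-ensemble predictions of 2D histograms. The snippet below is a minimal, illustrative sketch of such an entropy-based decomposition, not the authors' UQnet implementation: it assumes K ensemble members, each producing a normalized 2D histogram over an agent's future position, and splits total predictive entropy into a mean per-member entropy (aleatoric) and a mutual-information term (epistemic). All function and variable names are hypothetical.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a normalized probability map."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def decompose_uncertainty(member_histograms):
    """
    Entropy-based uncertainty decomposition for a deep ensemble.

    member_histograms: array of shape (K, H, W); each slice is one
    ensemble member's normalized 2D histogram over future positions.

    Returns (total, aleatoric, epistemic), where
      total     = entropy of the ensemble-averaged histogram,
      aleatoric = mean entropy of the individual members,
      epistemic = total - aleatoric (the mutual information).
    """
    member_histograms = np.asarray(member_histograms, dtype=float)
    mean_hist = member_histograms.mean(axis=0)

    total = entropy(mean_hist)
    aleatoric = np.mean([entropy(h) for h in member_histograms])
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Toy usage: K=5 random "members" over a 64x64 position grid
rng = np.random.default_rng(0)
hists = rng.random((5, 64, 64))
hists /= hists.sum(axis=(1, 2), keepdims=True)  # normalize each member
print(decompose_uncertainty(hists))
```

In this decomposition, the aleatoric term captures the spread that every ensemble member agrees on (inherent unpredictability of the driver's next move), while the epistemic term grows when members disagree, which is the signal used to flag scenarios unlike the training data.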