Learning from Demonstrations of Critical Driving Behaviours Using Driver's Risk Field

Journal Article (2023)
Author(s)

Yurui Du (Siemens PLM Software, Student TU Delft)

Flavia Sofia Acerbo (Siemens PLM Software)

J. Kober (TU Delft - Learning & Autonomous Control)

Tong Duy Son (Siemens PLM Software)

Research Group
Learning & Autonomous Control
DOI related publication
https://doi.org/10.1016/j.ifacol.2023.10.1376
Publication Year
2023
Language
English
Issue number
2
Volume number
56
Pages (from-to)
2774-2779
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In recent years, imitation learning (IL) has been widely used in industry as the core of autonomous vehicle (AV) planning modules. However, previous IL approaches show sample inefficiency and poor generalisation in safety-critical scenarios, on which they are rarely tested. As a result, IL planners can reach a performance plateau where adding more training data ceases to improve the learnt policy. First, our work presents an IL model that uses a spline coefficient parameterisation and offline expert queries to enhance safety and training efficiency. Then, we expose the weaknesses of the learnt IL policy by synthetically generating critical scenarios through optimisation of the parameters of the driver's risk field (DRF), a parametric human driving behaviour model implemented in a multi-agent traffic simulator based on the Lyft Prediction Dataset. To continuously improve the learnt policy, we retrain the IL model with the augmented data. Thanks to the expressivity and interpretability of the DRF, the desired driving behaviours can be encoded and aggregated into the original training data. Our work constitutes a full development cycle that can efficiently and continuously improve the learnt IL policies in closed loop. Finally, we show that our IL planner, developed with fewer training resources, still outperforms the previous state of the art.
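
As a rough illustration of the spline coefficient parameterisation mentioned in the abstract, the sketch below decodes a small set of B-spline coefficients into a smooth planned trajectory, the kind of low-dimensional output an IL planner could regress instead of raw waypoints. This is not the authors' code; the horizon, knot layout, coefficient count, and function names are assumptions made purely for illustration.

```python
# Minimal sketch (assumed dimensions and names, not the paper's implementation):
# map predicted B-spline coefficients to a smooth (x, y) trajectory.
import numpy as np
from scipy.interpolate import BSpline

HORIZON_S = 5.0   # planning horizon in seconds (assumed value)
N_POINTS = 50     # waypoints reconstructed from the coefficients
DEGREE = 3        # cubic B-spline
N_COEFFS = 8      # spline coefficients per axis (assumed value)

# Clamped uniform knot vector for a cubic B-spline with N_COEFFS control points.
knots = np.concatenate([
    np.zeros(DEGREE),
    np.linspace(0.0, HORIZON_S, N_COEFFS - DEGREE + 1),
    np.full(DEGREE, HORIZON_S),
])

def decode_trajectory(coeffs_xy: np.ndarray) -> np.ndarray:
    """Turn an (N_COEFFS, 2) array of spline coefficients into an
    (N_POINTS, 2) array of smooth (x, y) waypoints over the horizon."""
    t = np.linspace(0.0, HORIZON_S, N_POINTS)
    sx = BSpline(knots, coeffs_xy[:, 0], DEGREE)
    sy = BSpline(knots, coeffs_xy[:, 1], DEGREE)
    return np.stack([sx(t), sy(t)], axis=1)

# Example with made-up coefficients roughly describing a gentle left drift.
coeffs = np.stack([
    np.linspace(0.0, 40.0, N_COEFFS),                    # longitudinal progress (m)
    np.array([0.0, 0.0, 0.2, 0.8, 1.8, 3.0, 4.2, 5.0]),  # lateral offset (m)
], axis=1)
waypoints = decode_trajectory(coeffs)
print(waypoints.shape)  # (50, 2)
```

Regressing a handful of coefficients rather than dozens of raw waypoints keeps the policy's output space small and yields smooth trajectories by construction, which is consistent with the training-efficiency and safety motivation stated in the abstract.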