Visually-guided motion planning for autonomous driving from interactive demonstrations

Journal Article (2022)
Authors

Rodrigo Pérez-Dattari (TU Delft - Learning & Autonomous Control)

B.F. Ferreira de Brito (TU Delft - Learning & Autonomous Control)

O.M. de Groot (TU Delft - Learning & Autonomous Control)

J. Kober (TU Delft - Learning & Autonomous Control)

J. Alonso-Mora (TU Delft - Learning & Autonomous Control)

Research Group
Learning & Autonomous Control
Copyright
© 2022 Rodrigo Pérez-Dattari, B.F. Ferreira de Brito, O.M. de Groot, J. Kober, J. Alonso-Mora
To reference this document use:
https://doi.org/10.1016/j.engappai.2022.105277
Publication Year
2022
Language
English
Volume number
116
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The successful integration of autonomous robots in real-world environments strongly depends on their ability to reason from context and take socially acceptable actions. Current autonomous navigation systems mainly rely on geometric information and hard-coded rules to induce safe and socially compliant behaviors. Yet, in unstructured urban scenarios these approaches can become costly and suboptimal. In this paper, we introduce a motion planning framework consisting of two components: a data-driven policy that uses visual inputs and human feedback to generate socially compliant driving behaviors (encoded by high-level decision variables), and a local trajectory optimization method that executes these behaviors (ensuring safety). In particular, we employ Interactive Imitation Learning to jointly train the policy with the local planner, a Model Predictive Controller (MPC), which results in safe and human-like driving behaviors. Our approach is validated in realistic simulated urban scenarios. Qualitative results show the similarity of the learned behaviors with human driving. Furthermore, navigation performance is substantially improved in terms of safety, i.e., number of collisions, as compared to prior trajectory optimization frameworks, and in terms of data-efficiency as compared to prior learning-based frameworks, broadening the operational domain of MPC to more realistic autonomous driving scenarios.
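The two-component structure described above, a learned policy that outputs high-level decision variables and a local planner that executes them safely, can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the paper's implementation: the real policy is a neural network trained with interactive imitation learning on visual inputs, and the real local planner is a trajectory-optimizing MPC; the dictionary observation, the proportional tracking loop, and all gains below are hypothetical stand-ins.

```python
def policy(observation):
    # Hypothetical stand-in for the learned visual policy: maps an
    # observation to high-level decision variables (here, a reference
    # speed and a lateral offset for overtaking). In the paper this
    # mapping is learned from interactive human demonstrations.
    obstacle_ahead = observation["obstacle_ahead"]
    ref_speed = 2.0 if obstacle_ahead else 10.0
    ref_offset = 1.5 if obstacle_ahead else 0.0
    return ref_speed, ref_offset


def local_planner(state, reference, horizon=10, dt=0.1, gain=0.5):
    # Crude stand-in for the MPC local planner: rolls out a short
    # trajectory that tracks the reference decision variables with
    # proportional feedback (the real planner solves a constrained
    # trajectory optimization that also enforces collision avoidance).
    speed, offset = state
    ref_speed, ref_offset = reference
    trajectory = []
    for _ in range(horizon):
        speed += gain * (ref_speed - speed) * dt
        offset += gain * (ref_offset - offset) * dt
        trajectory.append((speed, offset))
    return trajectory


# One planning cycle: the policy chooses the behavior,
# the local planner turns it into a short trajectory.
decision = policy({"obstacle_ahead": True})
trajectory = local_planner(state=(10.0, 0.0), reference=decision)
```

The split mirrors the paper's design choice: the data-driven component only decides *what* to do (slow down, shift laterally), while the optimization-based component decides *how* to do it, which keeps safety enforcement inside the planner rather than the learned model.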