Generating 3D person trajectories from sparse image annotations in an intelligent vehicles setting

Conference Paper (2019)
Author(s)

S.A. Krebs (Daimler AG, TU Delft - Intelligent Vehicles)

Matthias Braun (Daimler AG, TU Delft - Intelligent Vehicles)

D.M. Gavrila (TU Delft - Intelligent Vehicles)

Research Group
Intelligent Vehicles
Copyright
© 2019 S.A. Krebs, M. Braun, D. Gavrila
DOI
https://doi.org/10.1109/ITSC.2019.8917160
Publication Year
2019
Language
English
Pages (from-to)
783-788
ISBN (print)
978-1-5386-7024-8
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This paper presents an approach to generate dense person 3D trajectories from sparse image annotations on board a moving platform. Our approach leverages the additional information that is typically available in an intelligent vehicle setting, such as LiDAR sensor measurements (to obtain 3D positions from detected 2D image bounding boxes) and inertial sensing (to perform ego-motion compensation). The sparse manual 2D person annotations that are available at regular time intervals (key-frames) are augmented with the output of a state-of-the-art 2D person detector, to obtain frame-wise data. A graph-based batch optimization approach is subsequently performed to find the best 3D trajectories, accounting for erroneous person detector output (false positives, false negatives, imprecise localization) and unknown temporal correspondences. Experiments on the EuroCity Persons dataset show promising results.
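Two of the steps mentioned in the abstract, lifting a detected 2D bounding box to a 3D position via LiDAR and compensating ego-motion with the vehicle pose, can be illustrated in a minimal sketch. This is not the authors' implementation; the pinhole intrinsics `K`, the box format `(x1, y1, x2, y2)`, the median-based position estimate, and the 4x4 ego-pose transform are all illustrative assumptions.

```python
import numpy as np

def lidar_points_in_bbox(points_cam, K, bbox):
    """Select LiDAR points (given in camera coordinates, Nx3) whose
    pinhole projection falls inside a 2D box (x1, y1, x2, y2).
    Assumed geometry, not the paper's exact formulation."""
    x1, y1, x2, y2 = bbox
    pts = points_cam[points_cam[:, 2] > 0]        # keep points in front of the camera
    uvw = (K @ pts.T).T                           # project with intrinsics K
    uv = uvw[:, :2] / uvw[:, 2:3]                 # perspective divide
    mask = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
            (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
    return pts[mask]

def bbox_to_3d_position(points_cam, K, bbox):
    """Coordinate-wise median of in-box LiDAR points as a simple,
    outlier-robust 3D person position estimate."""
    inliers = lidar_points_in_bbox(points_cam, K, bbox)
    if len(inliers) == 0:
        return None                               # no LiDAR support for this box
    return np.median(inliers, axis=0)

def ego_motion_compensate(p_cam, T_world_cam):
    """Map a camera-frame position into a fixed world frame using the
    ego pose (4x4 homogeneous transform) from inertial sensing."""
    p_h = np.append(p_cam, 1.0)                   # homogeneous coordinates
    return (T_world_cam @ p_h)[:3]
```

In this reading, each per-frame 2D detection yields one ego-motion-compensated 3D point, and the graph-based batch optimization would then link these points across frames into trajectories while rejecting false positives.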
