Because the performance of current motion planning methods depends heavily on the quality of the perception system, robust 3D multi-object detection and tracking are vital for autonomous driving applications. Despite the advances in 2D and 3D object detectors, robust tracking of pedestrians in dense scenarios remains challenging for small Automated Guided Vehicles (AGVs). Most research on object detection and tracking focuses on autonomous cars, neglecting the design challenges that come with small AGVs.
This thesis presents a real-time multi-modal multi-pedestrian detection and tracking pipeline for small mobile robots. The framework integrates five RGB-D cameras and a LiDAR sensor to achieve real-time pedestrian detection and tracking. The system builds on state-of-the-art 2D and 3D object detectors, a sensor fusion and filtering scheme, and a 3D object tracker. Moreover, to improve detection and tracking performance, we collected a pedestrian dataset tailored to small AGVs. We use this dataset to train the 3D pedestrian detector and to evaluate the performance of the pedestrian detectors and the tracker. Evaluation of the proposed framework shows that it robustly detects and tracks multiple pedestrians at distances of up to 10 meters. We open-sourced our framework at: https://github.com/bbrito/amr_navigation.