Autonomous embedded system enabled 3-D object detector

(With point cloud and camera)

Conference Paper (2019)
Author(s)

D. Katare (Purdue University)

Mohamed El-Sharkawy (Purdue University)

Affiliation
External organisation
DOI related publication
https://doi.org/10.1109/ICVES.2019.8906442
Publication Year
2019
Language
English
ISBN (electronic)
9781728134734

Abstract

An autonomous vehicle, or present-day smart vehicle, is equipped with several ADAS safety features such as Blind Spot Detection, Forward Collision Warning, Lane Departure Warning, Parking Assistance, Surround View System, and Vehicular Communication System. Recent research substitutes deep learning algorithms for these traditional methods, using optimal sensors. This paper discusses the perception tasks related to autonomous vehicles, specifically the computer-vision approach to 3-D object detection, and proposes a model compatible with an embedded system using the RTMaps framework. The proposed model is based on two sensors, a camera and a LiDAR, connected to an autonomous embedded system; these provide the sensed inputs to a deep learning classifier which, on the basis of these inputs, estimates the position of physical objects and predicts a 3-D bounding box around them. Frustum PointNets, a contemporary architecture for 3-D object detection, is used as the base model and is implemented with extended functionality. The architecture is trained and tested on the KITTI dataset, and its competitive validation precision and accuracy are discussed. The presented model is deployed on the Bluebox 2.0 platform with the RTMaps Embedded framework.
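The camera-plus-LiDAR pipeline described above first detects an object in the 2-D image, then back-projects that 2-D box into a 3-D viewing frustum in which the LiDAR points are searched. A minimal sketch of this back-projection step is shown below, assuming a standard pinhole camera model; the intrinsic parameters and box coordinates are illustrative values, not taken from the paper.

```python
# Sketch: back-projecting a 2-D detection box into a viewing frustum,
# as done in Frustum PointNets-style pipelines. Intrinsics (fx, fy, cx, cy)
# and the box coordinates below are illustrative assumptions.

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) to a unit-length ray in the camera frame."""
    x = (u - cx) / fx
    y = (v - cy) / fy
    z = 1.0
    norm = (x * x + y * y + z * z) ** 0.5
    return (x / norm, y / norm, z / norm)

def frustum_corner_rays(box, fx, fy, cx, cy):
    """Rays through the four corners of a 2-D box (u_min, v_min, u_max, v_max).
    LiDAR points lying inside this frustum become the 3-D detector's input."""
    u0, v0, u1, v1 = box
    return [pixel_to_ray(u, v, fx, fy, cx, cy)
            for u, v in ((u0, v0), (u1, v0), (u1, v1), (u0, v1))]

# Example: a 2-D car detection with KITTI-like intrinsics (illustrative values)
rays = frustum_corner_rays((500, 150, 700, 300),
                           fx=721.5, fy=721.5, cx=609.6, cy=172.9)
```

Filtering the point cloud to the points whose camera-frame coordinates fall inside these four corner rays yields the frustum point set that the PointNet stages then segment and regress a 3-D bounding box from.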

Metadata-only record. There are no files for this record.