End-to-End Motion Planning

A Data Driven Approach for Mobile Robot Navigation


Abstract

A great deal of research has been conducted on the autonomous navigation of mobile robots, with a focus on robot vision and robot motion planning. However, most classical navigation solutions require several steps of data pre-processing and hand-tuning of parameters, with separate modules for vision, localization, planning, and control. These modules work independently and make their own parameter assumptions to optimize their individual performance, without taking into account the effect these assumptions have on the performance of the other modules. Hence, even though each module tries to achieve optimal performance on its assigned task, the lack of interdependence in their decision making means that the overall performance of the system is sub-optimal in most cases. An alternative approach to addressing these issues is to train certain parts of the vision module to incorporate partial tasks from the planning module. Deep Learning architectures have achieved great success in pattern recognition and object detection, and as a result are increasingly deployed to design modules that jointly learn to carry out perception and path planning. This master's thesis, making use of Deep Learning, proposes an End-to-End Learning architecture that learns to directly map raw sensor readings to control commands for a ground-based mobile robot. The research makes use of a simulation of the Jackal UGV from Clearpath Robotics, and the proposed network is able to produce collision-free trajectories for the robot to navigate in its environment.
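To make the "raw sensor readings to control commands" mapping concrete, the sketch below shows a tiny feed-forward policy that takes a normalized laser scan and outputs a bounded velocity command. The layer sizes, the 180-beam scan, and the two-command output (linear velocity v, angular velocity w) are illustrative assumptions, not the architecture proposed in the thesis, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Small random weights; an end-to-end approach would learn these
    # from data instead of hand-tuning separate pipeline modules.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(180, 64)   # 180 laser beams -> hidden layer
W2, b2 = init_layer(64, 2)     # hidden -> (linear, angular) velocity

def policy(scan):
    """Map a normalized range scan directly to a velocity command (v, w)."""
    h = np.tanh(scan @ W1 + b1)    # hidden activation
    v, w = np.tanh(h @ W2 + b2)    # tanh keeps both commands in [-1, 1]
    return v, w

# Fake normalized range readings standing in for a real sensor.
scan = rng.uniform(0.0, 1.0, 180)
v, w = policy(scan)
```

The point of the sketch is the single differentiable path from sensor input to actuation output: there is no intermediate map, localization estimate, or explicit planner, so a training signal (e.g. imitation or reinforcement learning) can shape perception and control jointly.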