Deep Reinforcement Learning for Goal-directed Visual Navigation

Abstract

Safe navigation in cluttered environments is a key capability for the autonomous operation of Micro Aerial Vehicles (MAVs). This work explores a deep Reinforcement Learning (RL) based approach to monocular-vision-based obstacle avoidance and goal-directed navigation for MAVs in cluttered environments. We investigated this problem in the context of forest flight under the tree canopy.

Our focus was on training an effective and practical neural control module that is easy to integrate into conventional control hierarchies and can extend the capabilities of existing autopilot software stacks. This module has the potential to greatly improve the autonomous capabilities of MAVs and their applicability to many interesting real-world use cases. We demonstrated training this module in a visually highly realistic virtual forest environment created with a state-of-the-art computer game engine.