Improving DRL Of Vision-Based Navigation By Stereo Image Prediction


Abstract

Although deep reinforcement learning (DRL) is a highly promising approach to learning robotic vision-based control, it is plagued by long training times. This report introduces a DRL setup that relies on self-supervised learning to extract depth information valuable for navigation. Specifically, a literature study investigates learning to synthesize one view from the other in a stereo-vision setup without any prior knowledge of the camera extrinsics, and how this capability can be integrated downstream into an obstacle avoidance task. The literature study concludes that competitive, geometry-free monocular-to-stereo view synthesis is feasible thanks to recent developments in computer vision. The scientific paper further develops the concepts proposed in the literature study and evaluates the proposed architectures on the KITTI depth estimation benchmark. Competitive results are achieved for view synthesis, and although depth estimation performance falls short of state-of-the-art monocular depth estimation, the models clearly encode depth and detect shapes, which is sufficient for the DRL application. Additionally, the research examines the benefit of using the latent space of a view synthesis architecture, trained as an auxiliary task, as input to a PPO agent, compared to other feature extraction methods. This method achieves quicker convergence and better performance on an obstacle avoidance task in a simulated indoor environment than the autoencoding feature extractor and end-to-end DRL baselines, and is only outperformed by the monocular depth estimation feature extractor. Overall, this research provides valuable insights for developing more efficient and effective DRL methods for monocular camera-based drones. Finally, the accompanying code for this research can be found at https://github.com/ldenridder/drl-obstacle-avoidance-view-synthesis.
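To make the core idea concrete, the sketch below illustrates the pipeline the abstract describes: an encoder-decoder is trained, self-supervised, to synthesize the right stereo view from the left one, and the resulting encoder latent space is then reused as the observation features for a PPO agent. This is a minimal illustrative sketch in PyTorch; the network sizes, image resolution, and the simple actor-critic heads are assumptions for demonstration and are not the report's exact architectures or training code.

```python
# Minimal sketch (illustrative only): stereo view synthesis pre-training
# followed by reuse of the frozen encoder latent as PPO observation features.
import torch
import torch.nn as nn

class StereoViewSynthesisNet(nn.Module):
    """Encoder-decoder that predicts the right stereo view from the left view."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, left_image):
        z = self.encoder(left_image)
        right_pred = self.decoder(z)
        return right_pred, z

# Self-supervised pre-training step: reconstruct the right view from the left.
model = StereoViewSynthesisNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
left, right = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)  # dummy stereo pair
right_pred, _ = model(left)
loss = nn.functional.l1_loss(right_pred, right)
loss.backward()
optimizer.step()

# Downstream use: freeze the encoder and feed its latent code to a PPO agent
# as observation features (the actor-critic heads below are placeholders).
for p in model.encoder.parameters():
    p.requires_grad = False
policy_head = nn.Sequential(nn.Linear(256, 64), nn.Tanh(), nn.Linear(64, 4))  # action logits
value_head = nn.Sequential(nn.Linear(256, 64), nn.Tanh(), nn.Linear(64, 1))   # state value
with torch.no_grad():
    features = model.encoder(left)
action_logits, value = policy_head(features), value_head(features)
```

In the full setup described in the report, the view synthesis objective can also be kept during DRL training as an auxiliary loss rather than used purely as a frozen pre-trained feature extractor.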
