Aggressive Online Control of a Quadrotor via Deep Network Representations of Optimality Principles


Abstract

Optimal control holds great potential to improve a variety of robotic applications. However, its application on board computationally limited platforms has been severely hindered by the large computational requirements of current state-of-the-art implementations. In this work, we make use of a deep neural network to directly map the robot states to control actions. The network is trained offline to imitate the optimal control computed by a time-consuming direct nonlinear method. A mixture of time optimality and power optimality is considered, with a continuation parameter used to select the predominance of each objective. We apply our networks (termed GCNets) to aggressive quadrotor control, first in simulation and then in the real world. We give insight into the factors that influence the 'reality gap' between the quadrotor model used by the offline optimal control method and the real quadrotor. Furthermore, we explain how we set up the model and the control structure on board the real quadrotor to successfully close this gap and perform time-optimal maneuvers in the real world. Finally, the performance of GCNets is compared to state-of-the-art differential-flatness-based optimal control methods. We show in the experiments that GCNets lead to significantly faster trajectory execution due, in part, to the less restrictive nature of the allowed state-to-input mappings.
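The core idea described in the abstract is an imitation-learning pipeline: a slow direct nonlinear optimal-control solver generates state-control pairs offline, and a small network is regressed onto them so that, at flight time, a single cheap forward pass replaces the expensive online optimization. The sketch below illustrates this general scheme only; the network architecture, state and control dimensions, and training settings are assumptions for illustration, not the authors' exact GCNet setup.

```python
# Minimal sketch of offline imitation of an optimal controller (assumptions
# throughout: dimensions, layer sizes, activations, and data are hypothetical).
import torch
import torch.nn as nn

STATE_DIM = 13    # e.g. position, velocity, attitude, angular rates (assumed)
CONTROL_DIM = 4   # e.g. individual rotor commands (assumed)

class GCNet(nn.Module):
    """Maps the current robot state directly to a control action."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, CONTROL_DIM), nn.Sigmoid(),  # controls scaled to [0, 1]
        )

    def forward(self, state):
        return self.net(state)

def train(model, states, controls, epochs=100, lr=1e-3):
    """Behavioural cloning: regress network outputs onto the optimal controls
    computed offline by a (time-consuming) direct nonlinear method."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(states), controls)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    # Placeholder tensors standing in for (state, optimal control) pairs sampled
    # along offline trajectories optimised for a mixed time/power objective.
    states = torch.randn(1024, STATE_DIM)
    controls = torch.rand(1024, CONTROL_DIM)
    model = train(GCNet(), states, controls)
    print(model(states[:1]))  # one forward pass replaces the online solver
```

In this scheme, trading off time optimality against power optimality would be handled when generating the training trajectories (e.g. by sweeping a continuation parameter in the offline solver's cost), so the network itself only ever sees the resulting optimal state-to-control pairs.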
