End-to-end Reinforcement Learning for Time-Optimal Quadcopter Flight

Conference Paper (2024)
Authors

R. Ferede (TU Delft - Control & Simulation)

C. de Wagter (TU Delft - Control & Simulation)

Dario Izzo (European Space Agency (ESA))

Guido Cornelis Henricus Eugene de Croon (TU Delft - Control & Simulation)

Research Group
Control & Simulation
Publication Year
2024
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Pages (from-to)
6172-6177
ISBN (electronic)
9798350384574
DOI
https://doi.org/10.1109/ICRA57147.2024.10611665
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Aggressive time-optimal control of quadcopters poses a significant challenge in the field of robotics. The state-of-the-art approach leverages reinforcement learning (RL) to train optimal neural policies. However, a critical hurdle is the sim-to-real gap, often addressed by employing a robust inner loop controller, an abstraction that, in theory, constrains the optimality of the trained controller and necessitates margins to counter potential disturbances. In contrast, our novel approach introduces high-speed quadcopter control using end-to-end RL (E2E) that directly outputs motor commands. To bridge the reality gap, we incorporate a learned residual model and an adaptive method that can compensate for modeling errors in thrust and moments. We compare our E2E approach against a state-of-the-art network that commands thrust and body rates to an INDI inner loop controller, both in simulated and real-world flight. E2E shows a significant 1.39-second advantage in simulation and a 0.17-second edge in real-world testing, highlighting the potential of end-to-end reinforcement learning. The performance drop observed from simulation to reality leaves room for further improvement, for example by refining strategies to address the reality gap or by exploring offline reinforcement learning with real flight data.
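
As a rough illustration of the residual-plus-adaptive compensation idea mentioned in the abstract (combining a nominal thrust/moment model with a learned residual and an online-adapted bias), the Python sketch below is an assumption-based example, not the authors' implementation: all names, gains, and the simple integrator update law are hypothetical.

```python
import numpy as np

# Hypothetical sketch: nominal thrust/moment model + learned residual + adaptive bias.
# Parameters (k_f, k_m, arm, ADAPT_GAIN) and the integrator update are assumptions.

ADAPT_GAIN = 0.05  # assumed adaptation rate for the bias estimate


def nominal_thrust_moments(motor_cmds, k_f=1e-6, k_m=1e-8, arm=0.08):
    """Very simplified X-configuration quadrotor model: thrust ~ k_f * w^2 per motor."""
    w2 = np.square(np.asarray(motor_cmds, dtype=float))
    thrust = k_f * np.sum(w2)
    mx = k_f * arm * (w2[0] - w2[1] - w2[2] + w2[3])
    my = k_f * arm * (w2[0] + w2[1] - w2[2] - w2[3])
    mz = k_m * (-w2[0] + w2[1] - w2[2] + w2[3])
    return np.array([thrust, mx, my, mz])


def residual_model(state, motor_cmds):
    """Placeholder for a learned residual (e.g., a small network fit to flight data)."""
    return np.zeros(4)


class AdaptiveCompensator:
    """Tracks a slowly varying bias between predicted and measured thrust/moments."""

    def __init__(self):
        self.bias = np.zeros(4)

    def predict(self, state, motor_cmds):
        # Nominal model corrected by the learned residual and the adapted bias.
        return nominal_thrust_moments(motor_cmds) + residual_model(state, motor_cmds) + self.bias

    def update(self, predicted, measured, dt):
        # Simple integrator on the prediction error; a real system would likely filter this.
        self.bias += ADAPT_GAIN * (np.asarray(measured) - predicted) * dt
```

In this sketch the compensated prediction would feed the simulator (or policy observation) during training, while the bias update runs online to absorb thrust and moment errors that the learned residual does not capture.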

Files

End-to-end_Reinforcement_Learn... (pdf | 2.37 MB)
- Embargo expired in 10-02-2025
License info not available