Uncertainty-Aware Hybrid Reinforcement Learning for Fault-Tolerant Flight Control
Hybrid Reinforcement Learning for the Flight Control System of a Cessna 550 Citation II
P. Garcia de Vinuesa Garcia (TU Delft - Aerospace Engineering)
E. van Kampen – Mentor (TU Delft - Control & Simulation)
Spilios Theodoulis – Graduation committee member (TU Delft - Control & Simulation)
Xuerui Wang – Mentor
I.Z. El-Hajj – Graduation committee member (TU Delft - Control & Simulation)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
Fault-tolerant flight control remains a major challenge as aircraft systems become increasingly autonomous and must operate under uncertain conditions and potential actuator failures. Reinforcement learning has shown strong potential for learning control policies directly from interaction with the environment, but purely offline-trained agents often lack the ability to adapt once deployed.
This thesis proposes a hybrid reinforcement learning framework that combines offline deep reinforcement learning with online adaptive control. A novel RUN-DSAC-IDHP controller is developed that pairs an uncertainty-aware offline policy, learned with RUN-DSAC, with an online Incremental Dual Heuristic Programming (IDHP) adaptation layer. The offline component provides a high-performance baseline policy, while the IDHP actor continuously adapts the control policy online when the aircraft dynamics change.
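The division of labour described above can be illustrated with a minimal sketch: a frozen offline policy supplies the baseline action, and a small online correction term is adapted incrementally from the tracking error. The class name, the linear correction, and the gradient-style update rule are illustrative assumptions standing in for the RUN-DSAC actor and the IDHP adaptation layer, not the controller developed in the thesis.

```python
import numpy as np

class HybridController:
    """Sketch of a hybrid controller: frozen offline baseline + online correction.

    `offline_policy` stands in for the offline-trained (RUN-DSAC) actor;
    the linear correction term is a crude stand-in for the IDHP online
    actor. All names and the update rule are illustrative assumptions.
    """

    def __init__(self, offline_policy, n_states, lr=1e-1):
        self.offline_policy = offline_policy   # frozen baseline policy
        self.w = np.zeros(n_states)            # online correction weights
        self.lr = lr                           # adaptation rate

    def action(self, state):
        # Baseline action from the offline policy, plus the online correction
        return self.offline_policy(state) + float(self.w @ state)

    def adapt(self, state, tracking_error):
        # Incremental update: nudge the correction weights to reduce the
        # tracking error (stand-in for the IDHP actor update)
        self.w -= self.lr * tracking_error * state
```

With this structure, a persistent mismatch between the commanded and achieved response (e.g. after an actuator fault shifts the required control effort) is absorbed by the online term, while the offline policy continues to provide the nominal behaviour.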
The proposed approach demonstrates how combining offline deep reinforcement learning with online adaptive control enables controllers to maintain strong baseline performance while adapting in real time to faults and changing flight conditions.