Deep Reinforcement Learning for Flight Control
Fault-Tolerant Control for the PH-LAB
Killian Dally (TU Delft - Aerospace Engineering)
E.J. van Kampen – Mentor (TU Delft - Control & Simulation)
M.M. (René) van Paassen – Graduation committee member (TU Delft - Control & Simulation)
S.J. Hulshoff – Graduation committee member (TU Delft - Aerodynamics)
Bo Sun – Graduation committee member (TU Delft - Control & Simulation)
Abstract
Fault-tolerant flight control is challenging: developing a model-based controller for every unexpected failure is unrealistic, while online learning methods can only handle systems of limited complexity because of their low sample efficiency. This research proposes a model-free, coupled-dynamics flight controller for a jet aircraft that can withstand multiple failure types. An offline-trained cascaded Soft Actor-Critic (SAC) Deep Reinforcement Learning controller succeeds on highly coupled maneuvers, including high-bank coordinated climbing turns. The controller is robust to six unforeseen failure cases, among them a rudder jammed at -15°, a 70% reduction in aileron effectiveness, a structural failure, icing, and a backward c.g. shift: in each case the response remains stable and the climbing turn is completed successfully. Robustness to biased sensor noise, atmospheric disturbances, and to varying initial flight conditions and reference signal shapes is also demonstrated.
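To illustrate the cascaded SAC structure described above, the sketch below shows a tanh-squashed Gaussian actor (the standard SAC policy) wired into a two-loop cascade: an outer loop maps attitude error to a body-rate reference, and an inner loop maps rate error to control-surface deflections. This is a minimal illustration under assumed state layouts and network sizes; the observation contents, dimensions, and the CascadedController wiring are the author of this sketch's assumptions, not the thesis implementation.

import torch
import torch.nn as nn

class SquashedGaussianActor(nn.Module):
    """SAC actor: a tanh-squashed Gaussian policy over a bounded action."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs, deterministic=False):
        h = self.net(obs)
        mu = self.mu(h)
        if deterministic:  # evaluation: use the mean action
            return torch.tanh(mu)
        std = self.log_std(h).clamp(-20, 2).exp()
        z = torch.distributions.Normal(mu, std).rsample()  # reparameterized sample
        return torch.tanh(z)  # squash into [-1, 1]

class CascadedController:
    """Outer loop tracks attitude and outputs a body-rate reference;
    inner loop tracks that reference and outputs surface deflections.
    Observation layouts (error + rates) are illustrative assumptions."""
    def __init__(self):
        self.outer = SquashedGaussianActor(obs_dim=6, act_dim=3)
        self.inner = SquashedGaussianActor(obs_dim=6, act_dim=3)

    @torch.no_grad()
    def act(self, att_err, rates):
        rate_ref = self.outer(torch.cat([att_err, rates]), deterministic=True)
        rate_err = rate_ref - rates
        return self.inner(torch.cat([rate_err, rates]), deterministic=True)

ctrl = CascadedController()
deflections = ctrl.act(torch.zeros(3), torch.zeros(3))  # normalized surface commands

Because both actors are trained offline, only the deterministic forward passes above run in the control loop at evaluation time; no model of the (possibly failed) aircraft dynamics is queried, which is what makes the approach model-free.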