Evolutionary Reinforcement Learning

Hybrid Approach for Safety-Informed Fault-Tolerant Flight Control

Journal Article (2024)
Author(s)

Vlad Gavra (Student TU Delft)

Erik-Jan van Kampen (TU Delft - Control & Simulation)

Research Group
Control & Simulation
DOI related publication
https://doi.org/10.2514/1.G008112
Publication Year
2024
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Issue number
5
Volume number
47
Pages (from-to)
887-900
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Recent research in artificial intelligence offers potential solutions to the challenging problem of fault-tolerant and robust flight control. This paper proposes a novel Safety-Informed Evolutionary Reinforcement Learning (SERL) algorithm, which combines Deep Reinforcement Learning (DRL) and neuroevolution to optimize a population of nonlinear control policies. Using SERL, agents are trained to provide attitude tracking on a high-fidelity nonlinear model of a fixed-wing aircraft. Compared to a state-of-the-art DRL solution, SERL achieves better tracking performance in nine out of ten cases, remains robust against faults and changes in flight conditions, and produces smoother action signals.
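For readers unfamiliar with the hybrid approach, the sketch below illustrates a generic evolutionary reinforcement learning loop of the kind SERL builds on: a population of policies is evaluated on the task, the fittest are mutated into the next generation, and a gradient-trained DRL actor is evaluated alongside the population (and can be selected into it when competitive). This is only a minimal sketch, not the paper's implementation: the policy sizes, the toy surrogate dynamics, and the fitness shaping (tracking error plus an action-smoothness penalty standing in for the safety-informed terms) are all illustrative assumptions.

```python
# Minimal sketch of an evolutionary reinforcement learning (ERL) style loop.
# NOT the authors' SERL implementation: environment, fitness shaping and
# hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, HIDDEN = 6, 3, 32   # assumed sizes for a small attitude-tracking task


def init_policy():
    """Random two-layer tanh policy, weights flattened into one vector."""
    n = OBS_DIM * HIDDEN + HIDDEN * ACT_DIM
    return rng.normal(0.0, 0.1, size=n)


def act(theta, obs):
    w1 = theta[: OBS_DIM * HIDDEN].reshape(OBS_DIM, HIDDEN)
    w2 = theta[OBS_DIM * HIDDEN:].reshape(HIDDEN, ACT_DIM)
    return np.tanh(np.tanh(obs @ w1) @ w2)


def evaluate(theta, episode_len=200):
    """Fitness = -(tracking error) - (smoothness penalty).
    A real setup would roll out the policy on the aircraft simulation."""
    obs = rng.normal(size=OBS_DIM)
    prev_a = np.zeros(ACT_DIM)
    fitness = 0.0
    for _ in range(episode_len):
        a = act(theta, obs)
        err = np.linalg.norm(obs[:ACT_DIM] - a)          # toy tracking error
        fitness -= err + 0.1 * np.linalg.norm(a - prev_a)  # penalize jerky actions
        # toy surrogate dynamics in place of the fixed-wing model
        obs = 0.9 * obs + 0.1 * np.concatenate([a, rng.normal(size=OBS_DIM - ACT_DIM)])
        prev_a = a
    return fitness


POP_SIZE, ELITES, GENERATIONS = 10, 3, 20
population = [init_policy() for _ in range(POP_SIZE)]
rl_actor = init_policy()   # stand-in for the gradient-trained DRL actor

for gen in range(GENERATIONS):
    # 1. Evaluate the population together with the RL actor.
    scored = sorted(population + [rl_actor], key=evaluate, reverse=True)
    elites = scored[:ELITES]
    # 2. Next generation: keep elites, fill the rest with mutated elites.
    population = list(elites)
    while len(population) < POP_SIZE:
        parent = elites[rng.integers(ELITES)]
        population.append(parent + rng.normal(0.0, 0.02, size=parent.shape))
    # 3. In a full ERL/SERL setup the RL actor would also be updated here by
    #    gradient steps on a replay buffer filled with the population's rollouts.
```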

Files

Gavra-van-kampen-2024-evolutio... (pdf | 6.02 MB)
Embargo expired on 26-08-2024