Evolutionary Reinforcement Learning: A Hybrid Approach for Safety-informed Intelligent Fault-tolerant Flight Control

Conference Paper (2024)
Author(s)

V. Gavra (Student TU Delft)

Erik-Jan van Kampen (TU Delft - Control & Simulation)

Research Group
Control & Simulation
Copyright
© 2024 V. Gavra, E. van Kampen
DOI
https://doi.org/10.2514/6.2024-0954
Publication Year
2024
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
ISBN (electronic)
978-1-62410-711-5
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Recent research in artificial intelligence offers promising solutions to the challenging problem of fault-tolerant and robust flight control. This work proposes a novel Safety-informed Evolutionary Reinforcement Learning (SERL) algorithm, which combines Deep Reinforcement Learning (DRL) and neuro-evolution to optimize a population of non-linear control policies. Using SERL, agents are trained to provide attitude tracking on a high-fidelity non-linear fixed-wing aircraft model. Compared to a state-of-the-art DRL solution, SERL achieves better tracking performance in nine out of ten cases, remains robust against faults and changes in flight conditions, and produces smoother control actions.
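
The abstract describes combining gradient-based DRL with neuro-evolution over a population of policies. The sketch below illustrates the generic evolutionary RL loop that this family of methods builds on (evaluate a population, keep elites, mutate, and periodically inject a gradient-trained agent into the population); it is not the authors' SERL implementation. The toy fitness function, the analytic gradient step, and all parameter values are illustrative assumptions standing in for episode returns and DRL updates on the aircraft model.

```python
import random

random.seed(0)

TARGET = [0.5, -0.3]  # hypothetical optimal policy weights for the toy task

def fitness(weights):
    """Toy stand-in for an episode return: negative squared distance from
    TARGET. In SERL this would be a safety-informed tracking reward from
    flight simulation."""
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, sigma=0.1):
    """Gaussian parameter perturbation, the usual neuro-evolution operator."""
    return [w + random.gauss(0.0, sigma) for w in weights]

def rl_gradient_step(weights, lr=0.2):
    """Stand-in for the DRL worker: an analytic gradient step on the toy
    fitness (a real ERL-style method would run e.g. actor-critic updates)."""
    return [w + lr * 2.0 * (t - w) for w, t in zip(weights, TARGET)]

def evolutionary_rl(pop_size=10, generations=50, elite_frac=0.2):
    population = [[random.uniform(-1, 1) for _ in range(2)]
                  for _ in range(pop_size)]
    rl_agent = [0.0, 0.0]
    for _ in range(generations):
        # Gradient-based learner improves independently each generation.
        rl_agent = rl_gradient_step(rl_agent)
        # Inject the RL agent into the population before selection,
        # so gradient information can spread through evolution.
        candidates = population + [rl_agent]
        candidates.sort(key=fitness, reverse=True)
        n_elite = max(1, int(elite_frac * pop_size))
        elites = candidates[:n_elite]
        # Elites survive unchanged; the rest are mutated elite copies.
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - n_elite)]
    return max(population, key=fitness)

best = evolutionary_rl()
```

On this toy problem the injected gradient agent quickly dominates the elite set, so the best surviving policy ends up near the optimum; the same mechanism is what lets hybrid methods combine the sample efficiency of DRL with the exploration and robustness of population-based search.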

Files

Gavra_van_kampen_2024_evolutio... (pdf, 6.42 MB)
Embargo expired on 01-07-2024