Risk-sensitive Distributional Reinforcement Learning for Flight Control

Journal Article (2023)
Author(s)

Peter Seres (Student TU Delft)

Cheng Liu (TU Delft - Control & Simulation)

E. van Kampen (TU Delft - Control & Simulation)

Research Group
Control & Simulation
DOI related publication
https://doi.org/10.1016/j.ifacol.2023.10.1097
Publication Year
2023
Language
English
Issue number
2
Volume number
56
Pages (from-to)
2013-2018
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Recent aerospace systems increasingly demand model-free controller synthesis, and autonomous operations require adaptability to uncertainties in partially observable environments. This paper applies distributional reinforcement learning to synthesize risk-sensitive, robust, model-free policies for aerospace control. We investigate the use of distributional soft actor-critic (DSAC) agents for flight control and compare their learning characteristics and tracking performance with the soft actor-critic (SAC) algorithm. The results show that (1) the addition of distributional critics significantly improves learning consistency, and (2) risk-averse agents increase flight safety by avoiding uncertainties in the environment.
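The core idea behind risk sensitivity in distributional RL can be illustrated with a toy sketch: instead of ranking actions by expected return, a risk-averse agent ranks them by a risk measure such as the Conditional Value-at-Risk (CVaR) of the learned return distribution. The snippet below is a minimal illustration of this principle only, not the paper's DSAC implementation; the quantile values and action names are invented for the example.

```python
import numpy as np

def cvar(quantiles, alpha):
    """Conditional Value-at-Risk: mean of the worst alpha-fraction of return quantiles."""
    q = np.sort(np.asarray(quantiles, dtype=float))
    k = max(1, int(np.ceil(alpha * len(q))))  # number of worst-case quantiles kept
    return q[:k].mean()

# Toy "distributional critic": each action maps to sampled return quantiles.
# (Hypothetical values: "aggressive" has a higher mean but a heavy left tail.)
action_quantiles = {
    "aggressive": np.array([-5.0, 1.0, 3.0, 6.0, 10.0]),   # mean 3.0, risky
    "conservative": np.array([0.5, 1.0, 1.5, 2.0, 2.5]),   # mean 1.5, safe
}

def risk_sensitive_action(action_quantiles, alpha):
    # A risk-averse policy maximizes CVaR; alpha = 1.0 recovers the
    # risk-neutral (expected-return) choice.
    return max(action_quantiles, key=lambda a: cvar(action_quantiles[a], alpha))

print(risk_sensitive_action(action_quantiles, alpha=0.4))  # → conservative
print(risk_sensitive_action(action_quantiles, alpha=1.0))  # → aggressive
```

With a small alpha the agent weighs only the worst outcomes, so it prefers the low-variance action even though its mean return is lower, which mirrors the abstract's finding that risk-averse agents improve flight safety by avoiding environmental uncertainty.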