Constraint-Aware Reinforcement Learning for Aeroelastic Aircraft
A hybridization of Reinforcement Learning with Model Predictive Control
P. Kostelac (TU Delft - Aerospace Engineering)
Ana Jamshidnejad – Mentor (TU Delft - Control & Simulation)
Sherry Wang – Mentor (TU Delft - Control & Simulation)
Erik-Jan Kampen – Graduation committee member (TU Delft - Control & Simulation)
A. Dabiri – Graduation committee member (TU Delft - Team Azita Dabiri)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
This thesis presents a hybrid control framework that combines Reinforcement Learning (RL) and Model Predictive Control (MPC) to achieve constraint-satisfying flutter suppression and load alleviation in flexible aircraft subject to turbulent gusts. During training, MPC computes safe input bounds using high-fidelity Linear Parameter Varying (LPV) models and long prediction horizons, exploiting known disturbances to accurately capture aeroelastic behavior. A Q-learning agent is trained to select control actions within these bounds, adapting to nonlinear dynamics and actuator delay. At deployment, the learned policy operates from a lightweight Q-table, with certified interpolation ensuring constraint satisfaction even for unseen states. By integrating the anticipatory capabilities of MPC with the adaptability of RL, the framework enables effective control under turbulence and structural uncertainty. While demonstrated on an aeroelastic aircraft, the approach can be generalized to other systems with similar dynamics, where constraint handling and adaptive control under uncertainty are critical.
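The core safety mechanism described above, an RL agent that may only act within input bounds supplied by MPC, can be illustrated with a minimal sketch. The discretization, state/action sizes, and bound values below are hypothetical placeholders, not the thesis's actual models: the Q-table stands in for the learned policy, and `lower`/`upper` stand in for the MPC-computed safe input interval for the current state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: candidate actuator commands (e.g., control
# surface deflections in degrees) and a placeholder Q-table of learned values.
actions = np.linspace(-3.0, 3.0, 7)            # 7 candidate commands
q_table = rng.normal(size=(5, actions.size))   # 5 discrete states (illustrative)

def safe_greedy_action(state, lower, upper):
    """Return the highest-value action whose command lies inside the
    MPC-supplied safe interval [lower, upper] for the given state."""
    mask = (actions >= lower) & (actions <= upper)
    if not mask.any():
        raise ValueError("MPC bounds admit no discretized action")
    # Exclude unsafe actions from the argmax by masking them to -inf.
    q_safe = np.where(mask, q_table[state], -np.inf)
    return float(actions[int(np.argmax(q_safe))])

# Usage: pick an action for state 2 under illustrative MPC bounds.
u = safe_greedy_action(state=2, lower=-1.0, upper=1.5)
```

By construction, the selected command can never leave the MPC interval, so constraint satisfaction is enforced regardless of how well the Q-values have been learned; the actual framework additionally certifies the interpolated policy between tabulated states.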