This thesis presents a hybrid control framework that combines Reinforcement Learning (RL) and Model Predictive Control (MPC) to achieve constraint-satisfying flutter suppression and load alleviation in flexible aircraft subject to turbulent gusts. During training, MPC computes safe input bounds using high-fidelity Linear Parameter Varying (LPV) models and long prediction horizons, exploiting known disturbances to accurately capture aeroelastic behavior. A Q-learning agent is trained to select control actions within these bounds, adapting to nonlinear dynamics and actuator delay. At deployment, the learned policy operates from a lightweight Q-table, with certified interpolation ensuring constraint satisfaction even for unseen states. By integrating the anticipatory capabilities of MPC with the adaptability of RL, the framework enables effective control under turbulence and structural uncertainty. While demonstrated on an aeroelastic aircraft, the approach can be generalized to other systems with similar dynamics, where constraint handling and adaptive control under uncertainty are critical.
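The training loop described above can be sketched in miniature: at each step a safety layer (standing in for the MPC computation) returns admissible input bounds, and a tabular Q-learning agent explores and updates only within those bounds. Everything here is illustrative, not the thesis implementation: the plant is a toy one-dimensional system, `mpc_safe_bounds` is a hypothetical placeholder for the LPV-based MPC, and the state/action discretization is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 10, 5
ACTIONS = np.linspace(-1.0, 1.0, N_ACTIONS)   # candidate control inputs
Q = np.zeros((N_STATES, N_ACTIONS))           # lightweight Q-table
alpha, gamma, eps = 0.1, 0.95, 0.1            # learning rate, discount, exploration

def mpc_safe_bounds(state):
    """Placeholder for the MPC safety layer: returns (u_min, u_max).
    In the real framework these bounds come from an LPV prediction model."""
    return -0.5, 0.5

def step(state, u):
    """Toy plant: the input nudges the state index; deviation from the
    middle bin is penalized (a crude stand-in for load alleviation)."""
    next_state = int(np.clip(state + np.sign(u), 0, N_STATES - 1))
    reward = -abs(next_state - N_STATES // 2)
    return next_state, reward

state = 0
for _ in range(500):
    u_min, u_max = mpc_safe_bounds(state)
    safe = np.where((ACTIONS >= u_min) & (ACTIONS <= u_max))[0]
    if rng.random() < eps:                     # epsilon-greedy over safe actions only
        a = rng.choice(safe)
    else:
        a = safe[np.argmax(Q[state, safe])]
    next_state, r = step(state, ACTIONS[a])
    # Q-learning update; the bootstrap max is also restricted to safe actions
    ns_min, ns_max = mpc_safe_bounds(next_state)
    ns_safe = np.where((ACTIONS >= ns_min) & (ACTIONS <= ns_max))[0]
    Q[state, a] += alpha * (r + gamma * Q[next_state, ns_safe].max() - Q[state, a])
    state = next_state
```

Because exploration and the bootstrap target are both restricted to the MPC-supplied safe set, the learned policy never selects (or credits) an out-of-bounds input, which mirrors the constraint-satisfaction guarantee claimed for the full framework.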