Using Explainable Artificial Intelligence to Improve Transparency of Reinforcement Learning for Online Adaptive Flight Control

Breaking Open the Black Box

Deep Reinforcement Learning (DRL) shows great potential for flight control due to its adaptability, its fault tolerance, and the fact that it does not require an accurate system model. However, like many machine learning techniques, DRL is considered a black box because its inner workings are hidden. This paper aims to break open the black box of RL for adaptive flight control by applying SHapley Additive exPlanations (SHAP). The generated explanations are aimed at control experts, but can be useful to anyone interested in RL for adaptive flight control. This research proposes a novel Constant Weight Segment Detection (CWSD) algorithm, which facilitates the application of eXplainable Artificial Intelligence (XAI) techniques to adaptive RL. The algorithm and its usefulness are tested on an Adaptive Critic Design controlling a high-fidelity model of a Cessna Citation aircraft. It is demonstrated that SHAP, in combination with CWSD, provides detailed and useful insight into the relation between the inputs and outputs of the RL algorithm. Using SHAP, linear relations between inputs and outputs are discovered, simplifying the understanding of the learned strategy.
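To illustrate the kind of attribution SHAP produces, the following is a minimal sketch that computes exact Shapley values for a toy linear policy by enumerating feature coalitions. The feature names, weights, and baseline are hypothetical illustrations, not the paper's controller; in practice the `shap` library approximates these values for large models.

```python
from itertools import combinations
from math import factorial

def policy(x, baseline, w, b=0.5, subset=None):
    # Toy linear "controller" output. Features outside `subset` are
    # replaced by their baseline value (the SHAP masking convention).
    if subset is None:
        subset = set(range(len(x)))
    return b + sum(w[i] * (x[i] if i in subset else baseline[i])
                   for i in range(len(x)))

def shapley_values(x, baseline, w):
    """Exact Shapley values via the classic coalition-weighted sum."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for s in combinations(others, size):
                gain = (policy(x, baseline, w, subset=set(s) | {i})
                        - policy(x, baseline, w, subset=set(s)))
                phi[i] += weight * gain
    return phi

# Hypothetical flight-control features: pitch rate, altitude error, elevator deflection
x = [0.2, -1.0, 0.7]
baseline = [0.0, 0.0, 0.0]  # reference (trim) state
w = [1.5, -0.3, 2.0]        # hypothetical learned weights

phi = shapley_values(x, baseline, w)
# Additivity (efficiency): attributions sum to f(x) - f(baseline).
total = policy(x, baseline, w) - policy(baseline, baseline, w)
```

For a linear policy each attribution reduces to w[i] * (x[i] - baseline[i]), which is exactly the kind of simple input-output relation the abstract reports SHAP uncovering in the learned strategy.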