Enhancing Aviation Maintenance with Explainable AI: A Bayesian Approach to Counterfactual Explanations for Remaining Useful Life Estimation

Master Thesis (2024)
Author(s)

J.Y. Andringa (TU Delft - Aerospace Engineering)

Contributor(s)

Marcia L. Baptista – Mentor (TU Delft - Air Transport & Operations)

B. F. Santos – Graduation committee member (TU Delft - Air Transport & Operations)

Faculty
Aerospace Engineering
Copyright
© 2024 Jilles Andringa
Publication Year
2024
Language
English
Graduation Date
25-03-2024
Awarding Institution
Delft University of Technology
Programme
Aerospace Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Machine learning models have improved Prognostics and Health Management (PHM) in aviation, notably in estimating the Remaining Useful Life (RUL) of aircraft engines. However, their 'black-box' nature limits transparency, which is critical in safety-sensitive aviation maintenance. Explainable AI (XAI), particularly Counterfactual (CF) explanations, offers a way to explain model decisions by suggesting alternative scenarios that would lead to different outcomes. Additionally, Bayesian models enhance predictions by quantifying uncertainty, yet the combination of CF explanations and Bayesian methods is largely unexplored. This study investigates counterfactual methods within a Bayesian framework to improve the explainability of RUL estimation and to enhance model performance. For this, a Bayesian Long Short-Term Memory (LSTM) model was applied to the C-MAPSS dataset. This research uniquely applies CF explanations in two ways: to offer insights into how varying operational conditions could affect the RUL, and to improve the model's performance by generating additional augmented training data with reduced uncertainty. Preliminary results show that CF explanations can provide insights and suggestions for RUL improvement. Moreover, adding the augmented data generated with the CF uncertainty-reduction method has been shown to improve the model's predictive performance, confirming the viability of this approach as a data augmentation method.
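
The sketch below illustrates, under stated assumptions, one way a setup like the one described in the abstract could look: a Monte Carlo dropout LSTM standing in for the Bayesian RUL estimator, and a gradient-based counterfactual search that pushes a sensor window toward a target RUL while penalising distance from the original input and predictive uncertainty. The architecture, layer sizes, loss weights (w_prox, w_unc), and the choice of MC dropout are illustrative assumptions, not the thesis's exact models or CF generation method.

```python
# Illustrative sketch only; hyperparameters and the CF objective are assumptions.
import torch
import torch.nn as nn


class MCDropoutLSTM(nn.Module):
    """LSTM RUL regressor; dropout is kept active at inference for MC sampling."""

    def __init__(self, n_features: int, hidden: int = 64, p_drop: float = 0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)             # (batch, time, hidden)
        h = self.drop(out[:, -1, :])      # last time step, dropout applied
        return self.head(h).squeeze(-1)   # predicted RUL, shape (batch,)


def mc_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Mean and standard deviation of RUL over MC-dropout forward passes."""
    model.train()  # keep dropout active so repeated passes differ
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)


def counterfactual(model, x, target_rul, steps=200, lr=1e-2,
                   w_prox=1.0, w_unc=0.5):
    """Perturb a sensor window so the predicted RUL moves toward `target_rul`
    while staying close to the original input and keeping MC-dropout
    uncertainty low (the 'reduced uncertainty' augmentation idea)."""
    for p in model.parameters():          # only the input is optimised
        p.requires_grad_(False)
    x_cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    model.train()                         # dropout active for the uncertainty term
    for _ in range(steps):
        preds = torch.stack([model(x_cf) for _ in range(10)])
        loss = (((preds.mean(0) - target_rul) ** 2).mean()
                + w_prox * (x_cf - x).pow(2).mean()
                + w_unc * preds.std(0).mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_cf.detach()


# Example usage (shapes are assumptions: one window of 30 cycles, 14 sensors):
# model = MCDropoutLSTM(n_features=14)
# x = torch.randn(1, 30, 14)
# mean_rul, std_rul = mc_predict(model, x)
# x_cf = counterfactual(model, x, target_rul=torch.tensor([120.0]))
```

In such a setup, a counterfactual window whose MC-dropout variance is low could be paired with its target RUL and appended to the training set, which mirrors the data augmentation idea outlined in the abstract.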
