Perception modelling by invariant representation of deep learning for automated structural diagnostic in aircraft maintenance

A study case using DeepSHM



Predictive maintenance, as one of the core components of Industry 4.0, takes a proactive approach to keeping machines and systems in good order so that downtime is kept to a minimum, and the airline maintenance industry is no exception. To achieve this goal, practices in Structural Health Monitoring (SHM) that complement existing Non-Destructive Testing (NDT) have been established over the last decades. Recently, increasing computational capability, such as the utilization of graphical processing units (GPUs), in combination with advanced machine learning techniques such as deep learning, has been one of the main drivers in the advancement of predictive analytics for condition monitoring. In our previous work, we proposed a novel deep learning approach for guided-wave-based structural health monitoring called DeepSHM. As a study case, we treated ultrasonic signals from guided Lamb wave SHM with a convolutional neural network (CNN). In that work, we considered only a single central excitation frequency. This led to a single governing wavelength, which is normally suited to detecting a single damage size. In classical signal processing, applying a broader excitation frequency makes analysis and interpretation extremely difficult, because the response contains more complex information. This problem can be overcome with deep learning; however, doing so creates another problem: while deep learning typically yields more accurate predictions, each model is tailored to solving only certain types of tasks. Although many papers have already introduced deep learning for diagnostics, most of these works only propose novel predictive techniques; the mathematical formalization is lacking, and they do not explain why acoustic signals should be treated with deep learning at all. The basis for 'explainable AI' in SHM and NDT is therefore currently missing. For this reason, in this paper, we extend our previous work into a more generalized framework.
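The paper does not reproduce DeepSHM's architecture in this abstract. Purely as an illustrative numpy sketch of the core operation it refers to, the snippet below applies one 1-D convolutional feature channel (filter, ReLU, max-pooling) to a synthetic Hann-windowed tone burst of the kind used as a Lamb-wave excitation; the sampling rate, central frequency, and all layer sizes are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a guided-wave response: a Hann-windowed tone
# burst (a common Lamb-wave excitation shape) plus measurement noise.
fs = 1_000_000            # 1 MHz sampling rate (assumed)
f0 = 100_000              # 100 kHz central excitation frequency (assumed)
t = np.arange(1024) / fs
signal = np.hanning(t.size) * np.sin(2 * np.pi * f0 * t)
signal += 0.05 * rng.standard_normal(t.size)

def conv1d(x, w):
    """'Valid' 1-D cross-correlation, the core CNN operation."""
    k = w.size
    return np.array([x[i:i + k] @ w for i in range(x.size - k + 1)])

# One (untrained, random) convolutional filter followed by ReLU and
# max-pooling: a single CNN feature channel over the raw time series.
kernel = rng.standard_normal(16)
feature = np.maximum(conv1d(signal, kernel), 0.0)      # ReLU
n = (feature.size // 4) * 4                            # trim to pool size
pooled = feature[:n].reshape(-1, 4).max(axis=1)        # max-pool, width 4
```

Stacking several such channels and following them with dense layers and a softmax over damage classes gives the generic CNN pipeline the abstract alludes to; with a single central frequency, the filters effectively specialize to one governing wavelength.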
Rather than focusing on a novel technique, we propose a plausible theoretical perspective, inspired by neuroscience, for signal representation within a deep learning framework to model machine perception in structural health monitoring (SHM), especially because SHM typically involves multiple sensory inputs from different sensing locations. To do this, we created a set of artificial data from a finite element model (FEM) and represented DeepSHM in two different ways: (1) as a perpetual representation of the observation, and (2) as a hierarchical structure of entities that is decomposable into smaller sub-entities. Consequently, we assume two plausible models for DeepSHM: either (1) it behaves as a single deciding actor, since the observation is regarded as perpetual, or (2) it acts as multiple actors with independent outputs, since the multiple sensors can form different output probabilities. The artificial data were split into several different input representations, classified into several damage scenarios, and then trained with commonly used deep learning training parameters. We compare the performance metrics of each perception model to describe the training behavior of both representations.
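The contrast between the two assumed perception models can be sketched in a few lines of numpy. In this toy setup (sensor count, feature length, class count, and the random weights standing in for trained networks are all illustrative assumptions, not the paper's configuration), the single-actor model concatenates all sensor channels into one input and emits one output distribution, while the multi-actor model gives each sensor its own classifier and fuses the independent distributions by averaging.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy dimensions (assumed): 4 sensors, 128 features each, 3 damage classes.
n_sensors, n_features, n_classes = 4, 128, 3
signals = rng.standard_normal((n_sensors, n_features))

# (1) Single deciding actor: the observation is treated as one whole,
# so all sensor channels are concatenated into a single input vector
# mapped to one output distribution.
W_single = rng.standard_normal((n_sensors * n_features, n_classes))
p_single = softmax(signals.ravel() @ W_single)

# (2) Multiple actors: each sensor has its own classifier producing an
# independent output distribution; the decisions are fused here by
# averaging the per-sensor probabilities.
W_multi = rng.standard_normal((n_sensors, n_features, n_classes))
p_per_sensor = np.array([softmax(signals[i] @ W_multi[i])
                         for i in range(n_sensors)])
p_multi = p_per_sensor.mean(axis=0)
```

Averaging is only one of several fusion rules one could assume for the multi-actor case; the structural point is that model (1) produces a single probability vector from a joint observation, whereas model (2) produces one probability vector per sensor before any combination step.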