Beyond Spectral Graph Theory

An Explainability-Driven Approach to Analyzing the Stability of Graph Neural Networks to Topology Perturbations


Abstract

Graph Neural Networks (GNNs) have emerged as a powerful tool for learning from relational data. The real-world graphs such models are trained on, however, are susceptible to changes in their topology. A growing body of work on the stability of GNNs to topology perturbations seeks to characterise how models respond to such changes, yielding insights that have improved the robustness of GNNs to adversarial attacks. Past work in this area has relied exclusively on spectral graph theory, a mathematical framework that does not apply to all GNN architectures. In this paper, we extend this line of work by proposing an algorithm that analyses the stability of GNNs using model-agnostic GNN explainability tools in place of spectral graph theory. We demonstrate that the outputs of explainability tools can encode useful information about the stability of GNNs, and we present a case study that uses this information to analyse node removal, edge removal, and edge weight perturbations.
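To make the core idea concrete, below is a minimal Python sketch of the kind of before-and-after comparison the abstract describes: run an explainer on a graph, apply a topology perturbation (here, edge removal), run the explainer again, and measure how much the explanation changed. The `edge_saliency` function, the `TinyGNN` model, and the L2-based `stability_score` are all illustrative assumptions of this sketch, not the paper's actual algorithm or choice of explainability tool; a simple gradient-based saliency stands in for a model-agnostic explainer.

```python
import torch

def edge_saliency(model, x, adj):
    """Gradient of the model output w.r.t. the adjacency matrix,
    used here as a simple stand-in for an edge-importance explainer."""
    adj = adj.clone().requires_grad_(True)
    model(x, adj).sum().backward()
    return adj.grad.abs()

def stability_score(model, x, adj, adj_perturbed):
    """Compare explainer outputs before and after a topology perturbation.
    A small distance suggests the model's explanations, and by extension
    its behaviour, are stable under this perturbation."""
    s_before = edge_saliency(model, x, adj)
    s_after = edge_saliency(model, x, adj_perturbed)
    return torch.norm(s_before - s_after).item()

class TinyGNN(torch.nn.Module):
    """Toy model: one dense message-passing step plus a linear readout."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        return self.lin(adj @ x)

# Build a small random undirected graph and an edge-removal perturbation.
n, d = 5, 8
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.4).float()
adj = ((adj + adj.T) > 0).float()           # symmetrise
adj_drop = adj.clone()
adj_drop[0, 1] = adj_drop[1, 0] = 0.0       # remove one edge

model = TinyGNN(d, 3)
print(stability_score(model, x, adj, adj_drop))
```

Node removal and edge weight perturbations fit the same template: only the construction of `adj_perturbed` changes (zeroing a node's row and column, or rescaling individual entries), while the explainer comparison stays identical.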