Explainable GNNs in Biomedicine

Master Thesis (2025)
Author(s)

N.A. Perez Zambrano (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

M. Khosla – Mentor (TU Delft - Multimedia Computing)

Elvin Isufi – Graduation committee member (TU Delft - Multimedia Computing)

Jasmijn A. Baaijens – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2025
Language
English
Graduation Date
19-05-2025
Awarding Institution
Delft University of Technology
Programme
Computer Science
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Graph Neural Networks have become ubiquitous in machine learning research, and their use has raised expectations about what a model can do and how we can understand it. Explainability has become one of the key tools for addressing these questions, but to be useful it often needs to account for domain-specific requirements. To the best of our knowledge, no such requirements have been established for the biomedical domain. In this thesis, we seek to understand what standards the domain needs, define automatic metrics that capture them, and evaluate these metrics on problems common to biomedicine. By working on the gene-disease association and protein classification tasks, we are able to provide insights into which constraints call for which explainers. We find that no single explainer outperforms all others across all metrics. This work offers practical recommendations for selecting appropriate explainers for specific biomedical applications and identifies key directions for developing domain-specific explainability approaches that address the unique needs of biomedical research.
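
As an illustrative sketch of the kind of evaluation the abstract describes (not code from the thesis itself): the snippet below uses PyTorch Geometric's Explainer API to run GNNExplainer on a node classification task and scores the resulting explanation with the library's built-in fidelity metric, one common automatic explanation-quality measure. The synthetic random graph, the untrained GCN, and all hyperparameters are hypothetical placeholders standing in for a real gene-disease association graph and a trained model.

import torch
from torch_geometric.data import Data
from torch_geometric.explain import Explainer, GNNExplainer
from torch_geometric.explain.metric import fidelity
from torch_geometric.nn import GCN

# Hypothetical stand-in for a gene-disease association graph:
# 100 "gene" nodes with 16 random features, 400 random edges, binary labels.
num_nodes, num_feats = 100, 16
data = Data(
    x=torch.randn(num_nodes, num_feats),
    edge_index=torch.randint(0, num_nodes, (2, 400)),
    y=torch.randint(0, 2, (num_nodes,)),
)

# A two-layer GCN producing raw class logits.
# (In practice the model would be trained first; training is skipped here.)
model = GCN(in_channels=num_feats, hidden_channels=32,
            num_layers=2, out_channels=2)

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',          # explain the model's own prediction
    node_mask_type='attributes',       # learn a mask over node features
    edge_mask_type='object',           # learn a mask over edges
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='raw',
    ),
)

# Explain the prediction for a single node and score the explanation.
explanation = explainer(data.x, data.edge_index, index=0)
pos_fid, neg_fid = fidelity(explainer, explanation)
print(f'fidelity+: {pos_fid:.3f}, fidelity-: {neg_fid:.3f}')

Swapping GNNExplainer for another algorithm (e.g., PGExplainer or AttentionExplainer) while holding the metric fixed is the pattern behind the kind of cross-explainer comparison the thesis reports.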
