Graph Neural Networks have become ubiquitous in machine learning research, and their adoption has raised expectations about what a model can do and how we can understand it. Explainability has become one of the key tools for addressing these questions, but to be useful it often needs to account for domain-specific requirements. To the best of our knowledge, no such domain-specific requirements have been established for biomedicine. In this thesis, we seek to understand what standards the domain requires, define automatic metrics that capture them, and evaluate these metrics on problems common to biomedicine. By working on the gene-disease association and protein classification tasks, we provide insights into which constraints call for which explainers. We find that no single explainer outperforms all others across every metric. This work offers practical recommendations for selecting appropriate explainers for specific biomedical applications and identifies key directions for developing domain-specific explainability approaches that address the unique needs of biomedical research.