Relational Deep Learning with Graph Transformers: Exploring Local and Global Message Passing
I. Cuñado (TU Delft - Electrical Engineering, Mathematics and Computer Science)
H.Ç. Bilgi – Mentor (TU Delft - Data-Intensive Systems)
Kubilay Atasu – Mentor (TU Delft - Data-Intensive Systems)
T. Höllt – Graduation committee member (TU Delft - Computer Graphics and Visualisation)
Abstract
Graph Transformers have played a key role in recent advances in graph learning. However, their application and performance in Relational Deep Learning (RDL), a paradigm with the potential to eliminate inefficient data pre-processing pipelines, remain largely unexplored. We therefore present adaptations of two well-known Graph Transformer models: a relation-aware local message passing variant (FraudGT), which computes separate attention matrices for each edge and node type, and a simplified global-attention variant that ignores heterogeneity (Graphormer). Evaluated on RelBench, a comprehensive set of RDL benchmarks, our relation-aware local attention achieves state-of-the-art results on node classification and regression tasks. We further show that local message passing is computationally cheaper, faster, and more accurate than global attention. Our code is available at https://github.com/ignaciocunado/gt-rdl.
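To illustrate the idea of relation-aware local attention described above, the following is a minimal sketch (not the thesis implementation) of a single attention head that keeps separate query/key/value projections per edge type and restricts attention to existing edges. The class name `RelationAwareAttention` and all argument names are illustrative assumptions; the actual FraudGT-based model in the repository differs in detail.

```python
import torch
import torch.nn as nn


class RelationAwareAttention(nn.Module):
    """Toy sketch: one attention head with separate Q/K/V projections per edge type."""

    def __init__(self, dim: int, edge_types: list[str]):
        super().__init__()
        self.dim = dim
        # Hypothetical: independent projections for every relation (edge type).
        self.qkv = nn.ModuleDict({
            et: nn.ModuleDict({
                "q": nn.Linear(dim, dim),
                "k": nn.Linear(dim, dim),
                "v": nn.Linear(dim, dim),
            })
            for et in edge_types
        })

    def forward(self, x_src, x_dst, edge_index, edge_type):
        """Aggregate messages from source to destination nodes for one edge type.

        x_src, x_dst: node feature matrices [N_src, dim], [N_dst, dim]
        edge_index:   [2, E] tensor of (source, destination) node indices
        """
        proj = self.qkv[edge_type]
        q = proj["q"](x_dst)  # queries for destination nodes
        k = proj["k"](x_src)  # keys for source nodes
        v = proj["v"](x_src)  # values for source nodes

        src, dst = edge_index
        # Attention logits only for edges that actually exist (local message passing).
        logits = (q[dst] * k[src]).sum(-1) / self.dim ** 0.5
        # Softmax over each destination node's incoming edges.
        alpha = torch.zeros_like(logits)
        for d in dst.unique():
            mask = dst == d
            alpha[mask] = torch.softmax(logits[mask], dim=0)
        # Weighted sum of source values into destination nodes.
        out = torch.zeros_like(x_dst)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * v[src])
        return out
```

In contrast, a Graphormer-style global-attention variant as described in the abstract would compute attention over all node pairs with a single set of projections, ignoring edge and node types, which is why it is more expensive than the local relation-aware scheme.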