Detecting money-laundering activity in financial transactions is challenging due to the multigraph nature of the problem and the intricate fraud patterns involved. In this work we introduce two architectures, Cascade and Interleaved, which combine the expressive power of local message passing (MP) from Graph Neural Networks (GNNs) with that of global message passing from Transformers. Both models leverage the Principal Neighborhood Aggregation (PNA) GNN to capture rich local structure. We also incorporate the MEGA two-stage aggregation scheme to distinguish transactions that share the same source and destination accounts from other transactions. We further enhance our architectures with PEARL, a learnable positional encoding framework with lower overhead than comparable techniques. We evaluate our models on the IBM transactions for Anti-Money Laundering (AML) synthetic datasets. We achieve significant improvements over the PNA baseline and come close to matching SOTA results while requiring less feature engineering on the input graphs, and we show that learnable positional encodings are a promising direction for financial fraud detection tasks.
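To illustrate the difference between the two layer orderings, the following is a minimal sketch, not the paper's implementation: it replaces PNA's multi-aggregator scheme with a plain mean over neighbors and the Transformer block with single-head unparameterized self-attention, and the names `local_mp`, `global_mp`, `cascade`, and `interleaved` are hypothetical. Cascade stacks all local GNN layers before the global layers, while Interleaved alternates a local and a global step within each layer.

```python
import numpy as np

def local_mp(H, A):
    # One local message-passing step: mean-aggregate neighbor features.
    # (A simplified stand-in for PNA's multi-aggregator scheme.)
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return (A @ H) / deg

def global_mp(H):
    # One global step: full softmax self-attention over all nodes,
    # the Transformer-style counterpart to local aggregation.
    scores = H @ H.T / np.sqrt(H.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ H

def cascade(H, A, n_layers=2):
    # Cascade ordering: all local layers first, then all global layers.
    for _ in range(n_layers):
        H = local_mp(H, A)
    for _ in range(n_layers):
        H = global_mp(H)
    return H

def interleaved(H, A, n_layers=2):
    # Interleaved ordering: alternate one local and one global step per layer.
    for _ in range(n_layers):
        H = global_mp(local_mp(H, A))
    return H
```

In a trained model each step would carry learnable weights, nonlinearities, and edge features; the sketch only shows how the two architectures sequence local and global information flow over the same node features `H` and adjacency matrix `A`.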