Cache blocking of distributed-memory parallel matrix power kernels

Journal Article (2025)
Author(s)

Dane Lacey (Friedrich-Alexander-Universität Erlangen-Nürnberg)

Christie Alappat (Friedrich-Alexander-Universität Erlangen-Nürnberg)

Florian Lange (Friedrich-Alexander-Universität Erlangen-Nürnberg)

Georg Hager (Friedrich-Alexander-Universität Erlangen-Nürnberg)

Holger Fehske (Greifswald University, Friedrich-Alexander-Universität Erlangen-Nürnberg)

Gerhard Wellein (TU Delft - Numerical Analysis, Friedrich-Alexander-Universität Erlangen-Nürnberg)

Research Group
Numerical Analysis
DOI
https://doi.org/10.1177/10943420251319332
Publication Year
2025
Language
English
Issue number
3
Volume number
39
Pages (from-to)
385-404
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or any part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Sparse matrix-vector products (SpMVs) are a bottleneck in many scientific codes. Due to the heavy strain on the main memory interface from loading the sparse matrix, and due to the possibly irregular memory access pattern, SpMV typically exhibits low arithmetic intensity. Many algorithms require repeating these products multiple times with the same matrix. This so-called matrix power kernel (MPK) provides an opportunity for data reuse, since the same matrix data would otherwise be loaded from main memory multiple times; this opportunity has only recently been exploited successfully with the Recursive Algebraic Coloring Engine (RACE). RACE considers a graph-based formulation of SpMV and employs a level-based SpMV implementation to reuse the relevant matrix data from cache. However, the underlying data dependencies have so far restricted this concept to shared-memory parallelization and thus to single compute nodes. Enabling cache blocking for distributed-memory parallel MPK is challenging because data in neighboring levels must be communicated and synchronized explicitly. In this work, we propose and implement a flexible method that interleaves the cache-blocking capabilities of RACE with an MPI communication scheme that fulfills all data dependencies among processes. Compared to a “traditional” distributed-memory parallel MPK, our new distributed level-blocked MPK yields substantial speed-ups on modern Intel and AMD architectures across a wide range of sparse matrices from various scientific applications. Finally, we demonstrate the applicability of our method on a modern quantum physics problem, achieving a speed-up of up to 4× on 832 cores of an Intel Sapphire Rapids cluster.
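
For context, the “traditional” MPK baseline the abstract compares against computes y_k = A^k x as p back-to-back SpMVs over a sparse matrix (here in CSR format), paying the full matrix memory traffic once per power. The following minimal single-node C sketch illustrates only that baseline, not the paper's RACE-based or MPI-parallel implementation; the names csr_t, spmv, and mpk are hypothetical, chosen for illustration.

    #include <stdio.h>

    /* Minimal CSR sparse-matrix container (hypothetical layout, for
     * illustration only). */
    typedef struct {
        int n;           /* number of rows (matrix is n x n) */
        int *rowptr;     /* row start offsets, length n + 1  */
        int *col;        /* column indices of nonzeros       */
        double *val;     /* values of nonzeros               */
    } csr_t;

    /* One SpMV, y = A * x. Every call streams the full matrix data
     * (rowptr, col, val) from main memory, which is why SpMV has low
     * arithmetic intensity and is memory bound. */
    static void spmv(const csr_t *A, const double *x, double *y)
    {
        for (int i = 0; i < A->n; i++) {
            double sum = 0.0;
            for (int j = A->rowptr[i]; j < A->rowptr[i + 1]; j++)
                sum += A->val[j] * x[A->col[j]];
            y[i] = sum;
        }
    }

    /* Baseline matrix power kernel: y[k] = A^(k+1) * x for k = 0..p-1,
     * realized as p back-to-back SpMVs. The matrix traffic is paid p
     * times; RACE-style level blocking instead keeps blocks of matrix
     * data in cache across consecutive powers. */
    static void mpk(const csr_t *A, const double *x, double **y, int p)
    {
        spmv(A, x, y[0]);
        for (int k = 1; k < p; k++)
            spmv(A, y[k - 1], y[k]);
    }

    int main(void)
    {
        /* Tiny 1D Laplacian-like tridiagonal test matrix, n = 5. */
        enum { N = 5 };
        int rowptr[N + 1];
        int col[3 * N];
        double val[3 * N];
        int nnz = 0;

        for (int i = 0; i < N; i++) {
            rowptr[i] = nnz;
            if (i > 0)     { col[nnz] = i - 1; val[nnz++] = -1.0; }
            col[nnz] = i; val[nnz++] = 2.0;
            if (i < N - 1) { col[nnz] = i + 1; val[nnz++] = -1.0; }
        }
        rowptr[N] = nnz;

        csr_t A = { N, rowptr, col, val };

        double x[N] = { 1.0, 1.0, 1.0, 1.0, 1.0 };
        double y1[N], y2[N], y3[N];
        double *y[] = { y1, y2, y3 };

        mpk(&A, x, y, 3);            /* y[2] = A^3 * x */
        for (int i = 0; i < N; i++)
            printf("%g ", y[2][i]);
        printf("\n");
        return 0;
    }

In a distributed-memory setting, each SpMV in this baseline would additionally be preceded by a halo exchange of vector entries owned by neighboring processes; satisfying exactly those inter-process data dependencies while retaining RACE's cache blocking across levels is the communication scheme the paper contributes.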