Performance modeling and optimization of sparse matrix-vector multiplication on NVIDIA CUDA platform

Journal Article (2011)
Copyright
© The Author(s) 2011. This article is published with open access at Springerlink.com
DOI related publication
https://doi.org/10.1007/s11227-011-0626-0
Publication Year
2011
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In this article, we discuss the performance modeling and optimization of Sparse Matrix-Vector Multiplication (SpMV) on NVIDIA GPUs using CUDA. SpMV has a very low computation-to-data ratio, and its performance is mainly bound by memory bandwidth. We propose optimizations of ELLPACK-based SpMV from two aspects: (1) enhancing access to the dense vector by reducing cache misses, and (2) reducing the amount of matrix data accessed through index compression. Matrix bandwidth reduction techniques enable both the cache usage enhancement and the index compression. For GPUs with better cache support, we propose a differentiated memory access scheme to avoid contamination of the caches by matrix data. Performance evaluation shows that the combined speedups of the proposed optimizations are 16% (single precision) and 12.6% (double precision) on the GT-200 GPU, and 19% (single precision) and 15% (double precision) on the GF-100 GPU.

Files

Xu_performance_modeling.pdf
(pdf | 0.486 MB)
License info not available