Performance modeling and optimization of sparse matrix-vector multiplication on NVIDIA CUDA platform

Title: Performance modeling and optimization of sparse matrix-vector multiplication on NVIDIA CUDA platform
Author: Xu, S.; Xue, W.; Lin, H.X.
Faculty: Electrical Engineering, Mathematics and Computer Science
Department: Delft Institute of Applied Mathematics
Date: 2011-06-07
Abstract: In this article, we discuss the performance modeling and optimization of Sparse Matrix-Vector Multiplication (SpMV) on NVIDIA GPUs using CUDA. SpMV has a very low computation-to-data ratio, and its performance is mainly bound by memory bandwidth. We propose optimizations of SpMV based on the ELLPACK format from two aspects: (1) enhancing access to the dense vector by reducing cache misses, and (2) reducing the amount of accessed matrix data by index reduction. With matrix bandwidth reduction techniques, both cache usage enhancement and index compression can be enabled. For GPUs with better cache support, we propose a differentiated memory access scheme to avoid contamination of the caches by matrix data. Performance evaluation shows that the combined speedups of the proposed optimizations are 16% (single precision) and 12.6% (double precision) on the GT-200 GPU, and 19% (single precision) and 15% (double precision) on the GF-100 GPU.
Subject: sparse matrix-vector multiplication; GPU; CUDA; matrix permutation; cache optimization
To reference this document use: http://resolver.tudelft.nl/uuid:f45fb838-3453-4eb4-82ab-0394ecc21e3e
DOI: https://doi.org/10.1007/s11227-011-0626-0
Publisher: Springer
ISSN: 1573-0484
Source: http://link.springer.com/journal/11227
Source: The Journal of Supercomputing, 63 (3), 2013
Part of collection: Institutional Repository
Document type: journal article
Rights: © The Author(s) 2011. This article is published with open access at Springerlink.com
Files: Xu_performance_modeling.pdf (PDF, 498.05 KB)
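To make the ELLPACK format mentioned in the abstract concrete, the following is a minimal CPU sketch of SpMV over an ELLPACK-stored matrix. It is an illustration only, not the paper's implementation: names and the flat column-major layout (the arrangement that lets one-thread-per-row GPU kernels coalesce memory accesses) are assumptions for this example.

```python
def ellpack_spmv(values, col_idx, x, n_rows, K):
    """Compute y = A @ x for A stored in ELLPACK form.

    Every row is padded to the same width K (the maximum number of
    nonzeros in any row); padded slots hold value 0.0. The flat arrays
    `values` and `col_idx` have length n_rows * K, stored column-major:
    slot k of row i lives at index k * n_rows + i, so consecutive rows
    are adjacent in memory (the GPU-friendly ordering).
    """
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(K):
            idx = k * n_rows + i
            y[i] += values[idx] * x[col_idx[idx]]
    return y

# Example: 3x3 matrix [[4,1,0],[0,2,0],[3,0,5]]; max 2 nonzeros/row -> K = 2.
n_rows, K = 3, 2
values  = [4.0, 2.0, 3.0,  1.0, 0.0, 5.0]   # slot k=0 for all rows, then k=1
col_idx = [0,   1,   0,    1,   0,   2]      # padded slot reuses a valid index
x = [1.0, 2.0, 3.0]
print(ellpack_spmv(values, col_idx, x, n_rows, K))  # [6.0, 4.0, 18.0]
```

The padding is what gives ELLPACK its regular, branch-free inner loop, and it is also why the paper's index-reduction and bandwidth-reduction optimizations pay off: every padded index is still fetched from memory, so shrinking or compressing the index data directly reduces bandwidth pressure.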