Computer Science – Distributed, Parallel, and Cluster Computing
Scientific paper
2011-12-23
10 pages, 5 figures. Added reference to other recent sparse matrix formats
Sparse matrix-vector multiplication (spMVM) is the dominant operation in many sparse solvers. We investigate performance properties of spMVM with matrices of various sparsity patterns on the nVidia "Fermi" class of GPGPUs. A new "padded jagged diagonals storage" (pJDS) format is proposed which may substantially reduce the memory overhead intrinsic to the widespread ELLPACK-R scheme. In our test scenarios the pJDS format cuts the overall spMVM memory footprint on the GPGPU by up to 70%, and achieves 95% to 130% of the ELLPACK-R performance. Using a suitable performance model we identify performance bottlenecks on the node level that invalidate some types of matrix structures for efficient multi-GPGPU parallelization. For appropriate sparsity patterns we extend previous work on distributed-memory parallel spMVM to demonstrate a scalable hybrid MPI-GPGPU code, achieving efficient overlap of communication and computation.
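The memory saving claimed for pJDS over ELLPACK-R comes from padding rows only within small blocks of sorted rows instead of padding every row to the global maximum length. A minimal sketch of that accounting, assuming a block size of 32 (matching a GPU warp) and a simplified layout for illustration only (the paper defines the actual format):

```python
# Hedged sketch: compare padded-entry counts of ELLPACK-R and a
# pJDS-like scheme. Block size 32 and the layout details are
# illustrative assumptions, not the paper's exact definition.

def ellpack_padded_entries(row_lengths):
    """ELLPACK-R pads every row to the length of the longest row."""
    return len(row_lengths) * max(row_lengths)

def pjds_padded_entries(row_lengths, block=32):
    """pJDS-like scheme: sort rows by descending nonzero count, then
    pad each block of `block` consecutive rows only to that block's
    own maximum length."""
    sorted_lens = sorted(row_lengths, reverse=True)
    total = 0
    for i in range(0, len(sorted_lens), block):
        chunk = sorted_lens[i:i + block]
        total += len(chunk) * chunk[0]  # chunk[0] is the block maximum
    return total

# Example: 128 rows, one long row of 100 nonzeros, the rest with 8.
lengths = [100] + [8] * 127
print(ellpack_padded_entries(lengths))  # 128 * 100 = 12800
print(pjds_padded_entries(lengths))     # 32*100 + 96*8 = 3968
```

With one outlier row, ELLPACK-R pads all 128 rows to length 100, while the blocked scheme confines that cost to the first block, illustrating how skewed row-length distributions drive the reported savings.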
Achim Basermann
Alan R. Bishop
Holger Fehske
Georg Hager
Moritz Kreutzer