
Unified Communication Optimization Strategies for Sparse Triangular Solver on CPU and GPU Clusters

by Piyush K Sao, Yang Liu, Nan Ding, Xiaoye Li, Samuel Williams
Publication Type: Conference Paper
Book Title: SC '23: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis
Page Numbers: 1–15
Publisher Location: New York, NY, USA
Conference Name: SC23: International Conference for High Performance Computing, Networking, Storage and Analysis
Conference Location: Denver, CO, USA
Conference Sponsor: ACM

This paper presents a unified communication optimization framework for sparse triangular solve (SpTRSV) algorithms on CPU and GPU clusters. The framework builds on a 3D communication-avoiding (CA) layout of Px × Py × Pz processes that divides a sparse matrix into Pz submatrices, each handled by a Px × Py 2D grid with block-cyclic distribution. We propose three communication optimization strategies. First, a new 3D SpTRSV algorithm trades inter-grid communication and synchronization for replicated computation; this design requires only one inter-grid synchronization, and the inter-grid communication is implemented efficiently with sparse allreduce operations. Second, broadcast and reduction communication trees reduce the message latency of intra-grid 2D communication on CPU clusters. Third, we leverage GPU-initiated one-sided communication to implement the communication trees on GPU clusters. With these nested inter- and intra-grid communication optimizations, the proposed 3D SpTRSV algorithm attains speedups of up to 3.45x over the baseline 3D SpTRSV algorithm on up to 2048 Cori Haswell CPU cores. In addition, the proposed GPU 3D SpTRSV algorithm achieves speedups of up to 6.5x over the proposed CPU 3D SpTRSV algorithm with Pz up to 64. Notably, the proposed GPU 3D SpTRSV scales to 256 GPUs on the Perlmutter system, whereas the existing 2D SpTRSV algorithm scales only to 4 GPUs.
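To make the layout concrete, the sketch below sets up the communicator structure implied by the abstract's Px × Py × Pz process layout in MPI: an intra-grid communicator for each Px × Py 2D grid, and an inter-grid communicator along the Pz direction over which the single combining reduction runs. This is a minimal illustration under stated assumptions, not the authors' implementation; the grid shape, the communicator names (grid_comm, z_comm), and the dense MPI_Allreduce standing in for the paper's sparse allreduce are all assumptions made for the example.

/* Sketch of the Px x Py x Pz process layout described in the abstract.
 * Each of the Pz "layers" is a 2D Px x Py grid holding one submatrix;
 * ranks at the same (x, y) position across layers combine their partial
 * results with a single reduction. Illustrative only. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Assumed grid shape: Px * Py * Pz must equal the number of ranks. */
    int Px = 2, Py = 2, Pz = size / (Px * Py);
    if (Px * Py * Pz != size) MPI_Abort(MPI_COMM_WORLD, 1);

    int layer = rank / (Px * Py);   /* which 2D grid (z index)        */
    int pos   = rank % (Px * Py);   /* position within the Px x Py grid */

    /* Intra-grid communicator: the Px x Py 2D grid this rank belongs to.
     * The paper's broadcast/reduction trees operate inside this grid. */
    MPI_Comm grid_comm;
    MPI_Comm_split(MPI_COMM_WORLD, layer, pos, &grid_comm);

    /* Inter-grid communicator: the Pz ranks at the same (x, y) position
     * across layers; this is where inter-grid results are combined. */
    MPI_Comm z_comm;
    MPI_Comm_split(MPI_COMM_WORLD, pos, layer, &z_comm);

    /* Each layer computes partial contributions on its submatrix; a
     * dense allreduce stands in for the paper's sparse allreduce. */
    double partial[4] = { (double)rank, 0.0, 0.0, 0.0 };
    double combined[4];
    MPI_Allreduce(partial, combined, 4, MPI_DOUBLE, MPI_SUM, z_comm);

    MPI_Comm_free(&grid_comm);
    MPI_Comm_free(&z_comm);
    MPI_Finalize();
    return 0;
}

The design point the abstract emphasizes is that only the reduction over z_comm crosses grid boundaries, and it is needed just once per solve; all remaining traffic stays inside each Px × Py grid, where the communication trees (and, on GPU clusters, GPU-initiated one-sided communication) apply.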