Combinatorial Parallel Computing: The Case of Reducing Communication in Large-Scale Sparse Computations

Nabil Abubaker, Scalable Parallel Computing Laboratory (SPCL)
LANS Seminar

Effective parallel algorithm design is crucial for reducing energy consumption in high-performance computing, particularly in sparse computations. We explore efforts to minimize communication overhead in these computations, focusing on how combinatorial opportunities can be leveraged to enhance the parallel scalability of algorithms dominated by sparse components. We consider several key examples, including sparse matrix kernels such as SpMM (sparse-dense matrix multiplication) and SDDMM (sampled dense-dense matrix multiplication), low-rank sparse matrix approximations, and sparse tensor decompositions. The emphasis is on sparsity-aware communication and how it can be constructed, analyzed, and optimized. We examine the parallelization of these computations, demonstrating how careful algorithm analysis and redesign lead to substantial improvements in communication cost, memory usage, runtime, and energy efficiency.
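
For readers unfamiliar with the kernels named above, the following is a minimal sketch (not taken from the talk) of the sequential semantics of SpMM and SDDMM using SciPy. The matrix names, sizes, and density are arbitrary assumptions for illustration; in the large-scale setting the abstract addresses, these kernels would be distributed across processes, which is where the communication costs arise.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
m, n, k = 1000, 800, 16                       # illustrative problem sizes

A = sp.random(m, n, density=0.01, format="csr", random_state=0)  # sparse m x n matrix
X = rng.standard_normal((n, k))               # dense tall-skinny input
U = rng.standard_normal((m, k))               # dense factor matrices, as in
V = rng.standard_normal((n, k))               # low-rank approximation settings

# SpMM: sparse-times-dense matrix multiplication, C = A X.
C = A @ X                                     # dense m x k result

# SDDMM: sampled dense-dense matrix multiplication, S = A .* (U V^T),
# where U V^T is evaluated only at the nonzero positions of A.
Acoo = A.tocoo()
sampled = np.einsum("ij,ij->i", U[Acoo.row], V[Acoo.col])  # (U V^T) at A's nonzeros
S = sp.csr_matrix((Acoo.data * sampled, (Acoo.row, Acoo.col)), shape=A.shape)

print(C.shape, S.nnz)
```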

To add to calendar:

1. Go to https://wordpress.cels.anl.gov/cels-seminars/

2. Enter your credentials.

3. Search for your seminar.

4. Click “Add to calendar”.