Hybrid Parallel Programming for HPC Clusters with MPI and DPC++

Karl Qi and Loc Nguyen, Intel

In the HPC realm, heterogeneous computing increasingly calls for hybrid programming: approaches that combine code optimized for GPUs, FPGAs, and other accelerators. Tune in to learn how it can be done.

Modern high-performance computing (HPC) clusters often include nodes equipped with hardware accelerators such as GPUs and FPGAs. To take full advantage of inter-node, intra-node, and accelerator-device-level parallelism, hybrid programming is required.

This webinar discusses how to do exactly that by using Data Parallel C++ (DPC++) with the Message Passing Interface (MPI), which are supported in the Intel® oneAPI Base Toolkit and Intel® oneAPI HPC Toolkit, respectively.
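
As a taste of what that combination looks like in practice, here is a minimal sketch of the hybrid pattern: each MPI rank offloads a partial reduction to a locally available device through a SYCL queue, and MPI then combines the per-rank results across nodes. The file name and build line are illustrative assumptions, not taken from the webinar itself.

    // Illustrative build/run on Linux, assuming the Intel MPI compiler wrapper:
    //   mpiicpx -fsycl hybrid_sum.cpp -o hybrid_sum
    //   mpirun -n 4 ./hybrid_sum
    #include <mpi.h>
    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, nranks = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        // Intra-node / device-level parallelism: each rank sums its own
        // chunk on whatever device the default selector finds (GPU or CPU).
        constexpr size_t N = 1 << 20;
        std::vector<float> chunk(N, 1.0f);
        float local_sum = 0.0f;
        {
            sycl::queue q{sycl::default_selector_v};
            sycl::buffer<float, 1> in_buf{chunk.data(), sycl::range<1>{N}};
            sycl::buffer<float, 1> sum_buf{&local_sum, sycl::range<1>{1}};
            q.submit([&](sycl::handler& h) {
                sycl::accessor in{in_buf, h, sycl::read_only};
                auto sum = sycl::reduction(sum_buf, h, sycl::plus<float>());
                h.parallel_for(sycl::range<1>{N}, sum,
                               [=](sycl::id<1> i, auto& s) { s += in[i]; });
            });
        } // buffers destructed here, so local_sum is now valid on the host

        // Inter-node parallelism: MPI combines the per-rank partial sums.
        float global_sum = 0.0f;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_FLOAT, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            std::cout << "Sum across " << nranks << " ranks: " << global_sum
                      << std::endl;

        MPI_Finalize();
        return 0;
    }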

In this session, software specialists Karl Qi and Loc Nguyen will cover the landscape, clarifying how these two distinct standards, MPI and DPC++, can be used together effectively to (1) communicate between nodes and (2) accelerate computation on a single node using the available accelerators. Topics include:

  • A brief overview of MPI
  • Using the Intel® MPI Library with DPC++
  • Compiling and deploying applications on Linux and Windows
  • Targeting DPC++ kernels for CPUs and GPUs (see the device-selection sketch after this list)
  • Using MPI and DPC++ in the Intel® DevCloud
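
On targeting kernels for CPUs and GPUs, the sketch below shows the usual DPC++ approach: the same kernel source is dispatched to a GPU when one is visible to the process and to the CPU otherwise. The make_queue helper is a hypothetical name for illustration; device visibility can also be steered externally, for example with the DPC++ runtime's ONEAPI_DEVICE_SELECTOR environment variable.

    #include <sycl/sycl.hpp>
    #include <iostream>

    // Illustrative helper: prefer a GPU, fall back to the CPU.
    // Queue construction throws sycl::exception when no GPU is present.
    sycl::queue make_queue() {
        try {
            return sycl::queue{sycl::gpu_selector_v};
        } catch (const sycl::exception&) {
            return sycl::queue{sycl::cpu_selector_v};
        }
    }

    int main() {
        sycl::queue q = make_queue();
        std::cout << "Running on: "
                  << q.get_device().get_info<sycl::info::device::name>()
                  << "\n";
        // The same kernel body runs unchanged on either device type.
        q.parallel_for(sycl::range<1>{16}, [](sycl::id<1>) {
            /* per-work-item computation */
        }).wait();
        return 0;
    }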