In the HPC realm, heterogeneous computing increasingly requires hybrid programming: approaches that combine programming models to get the most out of GPUs, FPGAs, and other accelerators. Tune in to learn how it can be done.
Modern high-performance computing (HPC) clusters can include various nodes that contain hardware accelerators such as GPUs and FPGAs. To take full advantage of inter-node, intra-node, and accelerator-device-level parallelism, hybrid programming is required.
This webinar discusses how to do exactly that by using Data Parallel C++ (DPC++) with the Message Passing Interface (MPI), which are supported in the Intel® oneAPI Base Toolkit and Intel® oneAPI HPC Toolkit, respectively.
In this session, software specialists Karl Qi and Loc Nguyen will cover the landscape, clarifying how these two distinct standards—MPI and DPC++—can be effectively used together to (1) communicate between nodes and (2) accelerate computation on a single node using available accelerators.
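To make the pattern concrete ahead of the session, below is a minimal sketch (ours, not taken from the webinar) of how the two standards divide the work: each MPI rank offloads a local partial sum to whatever accelerator SYCL finds on its node, then MPI combines the per-rank results. The file name, compile line, and problem size are illustrative assumptions; Intel MPI's mpiicpx compiler wrapper from the HPC Toolkit is one common way to build such code.

```cpp
// Minimal MPI + DPC++ (SYCL) sketch: device-level parallelism per rank,
// inter-node parallelism via MPI. Build/run lines are assumptions, e.g.:
//   mpiicpx -fsycl hybrid_sum.cpp -o hybrid_sum
//   mpirun -n 4 ./hybrid_sum
#include <mpi.h>
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank owns one slice of the global problem (size is illustrative).
    constexpr size_t local_n = 1 << 20;
    std::vector<float> data(local_n, 1.0f);

    // Pick whatever accelerator is visible to this rank (GPU, CPU, ...).
    sycl::queue q{sycl::default_selector_v};

    float local_sum = 0.0f;
    {
        sycl::buffer<float> buf(data.data(), sycl::range<1>(local_n));
        sycl::buffer<float> sum_buf(&local_sum, sycl::range<1>(1));
        q.submit([&](sycl::handler& h) {
            sycl::accessor in{buf, h, sycl::read_only};
            auto sum = sycl::reduction(sum_buf, h, sycl::plus<float>());
            // Device-level parallelism: a SYCL 2020 reduction kernel.
            h.parallel_for(sycl::range<1>(local_n), sum,
                           [=](sycl::id<1> i, auto& s) { s += in[i]; });
        });
    } // buffers go out of scope here, synchronizing local_sum to the host

    // Inter-node parallelism: combine the per-rank results with MPI.
    float global_sum = 0.0f;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_FLOAT, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        std::cout << "global sum = " << global_sum
                  << " (expected " << float(local_n) * size << ")\n";

    MPI_Finalize();
    return 0;
}
```

Because each rank touches only its own slice and the SYCL queue abstracts the local device, the same binary can scale from one node to many without source changes.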