Ensuring the portability of deep learning software to explore fusion energy on Aurora

Best Practices for Coding Development

As part of a series aimed at sharing best practices in preparing applications for Aurora, we highlight researchers' efforts to optimize codes to run efficiently on graphics processing units.

As part of the Argonne Leadership Computing Facility’s (ALCF) Aurora Early Science Program, William Tang of the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory is leading a project, “Accelerated Deep Learning Discovery in Fusion Energy Science,” that uses artificial intelligence methods to improve predictive capabilities and mitigate large-scale disruptions in burning plasmas in tokamak systems such as ITER.

Best practice

  • Encapsulate application performance with figures of merit

The project’s primary application, the FusionDL FRNN (Fusion Recurrent Neural Net) suite, contains a growing collection of machine learning models and implementations in multiple frameworks, including TensorFlow and PyTorch. Running on top of TensorFlow is Keras, a Python-based deep learning application programming interface (API).

Efforts to port FusionDL to Aurora, the ALCF’s forthcoming GPU-powered exascale supercomputer from Intel-HPE, have been led by Kyle Felker, an ALCF computational scientist. The ALCF is a DOE Office of Science user facility at Argonne National Laboratory.

Lessons learned

  • Remain flexible with respect to the adoption of different deep learning frameworks

  • Look for commonalities among deep learning models and training/inference pipelines rather than over-optimizing one particular model

More powerful predictive models

Exascale systems such as Aurora stand to enable fusion researchers to train increasingly large-scale deep learning models able to predict with greater accuracy the onset of plasma instabilities in tokamak reactors. The increased processing and predictive powers of exascale will permit more exhaustive hyperparameter tuning campaigns that in turn can lead to better-optimized configurations for the AI models.
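The article describes these tuning campaigns only at a high level. As a hedged illustration, the sketch below shows a bare-bones random-search sweep over a few hyperparameters; the search space, the scoring stub, and all names are illustrative placeholders rather than details of the FRNN workflow.

```python
# Minimal sketch of a random-search hyperparameter campaign of the kind that
# exascale resources make more exhaustive. The search space and the scoring
# stub are illustrative placeholders, not part of FusionDL/FRNN.
import random

SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "hidden_size": [64, 128, 256, 512],
    "dropout": [0.0, 0.1, 0.3],
}

def train_and_score(config, rng):
    """Stub: train a disruption-prediction model with `config` and return a
    validation metric. Replaced here by a random number so the sketch runs."""
    return rng.random()

def random_search(n_trials=16, seed=0):
    rng = random.Random(seed)
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        config = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        score = train_and_score(config, rng)
        if score > best_score:
            best_score, best_config = score, config
    return best_score, best_config

if __name__ == "__main__":
    print(random_search())
```

On a larger machine, the same loop simply runs many more trials in parallel, one per node or GPU, before the best configurations are selected for full training.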

In addition, porting to exascale could enable the training of more specialized or flexible models that are shared in real time with experimental facilities to perform more complex prediction tasks than, for example, simply estimating when a plasma disruption will begin. Such tasks include providing a zoo of trained classifiers, each of which fulfills a separate role in a plasma control system.

Consequently, the ported application can support a broader, deeper set of capabilities, ranging from a standardized and highly accurate model that safely shuts down a reactor when it detects an imminent disruption, to more complex models that provide live feedback to reactor operators about potential “disruption precursors” and advise which actuators might steer the plasma into a more stable state.

Moreover, the developers aim to accelerate and improve communications between experimental sites and the supercomputing facilities with which they interact; the turnaround times for data transfers and for training new model architectures are expected to shorten significantly.

Porting to exascale

As with efforts to port the CANDLE suite, Data Parallel C++ (DPC++) has helped facilitate porting FusionDL to Aurora through Intel’s implementations and optimizations of the underlying deep learning frameworks, such as TensorFlow and PyTorch.
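The article does not include code. As a hedged sketch of what running on an Intel GPU through these framework-level optimizations can look like, the snippet below places a small PyTorch model on an "xpu" device using the Intel Extension for PyTorch. The intel_extension_for_pytorch package, the xpu device string, and the ipex.optimize call are assumptions about Intel's software stack, not code drawn from FusionDL.

```python
# Illustrative sketch only: moving a PyTorch model onto an Intel GPU ("xpu")
# via Intel Extension for PyTorch, which builds on oneDNN under the hood.
# Package name, device string, and ipex.optimize() usage are assumptions
# about the Intel stack, not code from FusionDL/FRNN.
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device

model = torch.nn.LSTM(input_size=14, hidden_size=128, batch_first=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Move the model to the Intel GPU and let the extension apply
# operator-level optimizations to model and optimizer.
model = model.to("xpu")
model, optimizer = ipex.optimize(model, optimizer=optimizer)

x = torch.randn(32, 200, 14, device="xpu")  # (batch, time steps, signals)
output, _ = model(x)
```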

In addition to MPI, FusionDL uses the oneCCL and oneDNN programming models, the latter implicitly via TensorFlow and PyTorch. These high-level Python frameworks rely on oneDNN for computationally intensive GPU operations, while oneCCL helps deliver good multi-GPU performance by providing optimized communication patterns that distribute parallel training across different nodes.
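To illustrate the oneCCL piece, a minimal, hedged sketch of data-parallel setup with PyTorch's DistributedDataParallel and the "ccl" communication backend follows. The oneccl_bindings_for_pytorch package name, the environment variables, and the toy model are assumptions made for illustration, not the project's actual launch configuration.

```python
# Hedged sketch of data-parallel training setup using PyTorch's
# DistributedDataParallel with the oneCCL ("ccl") communication backend.
# Package name and launch details are assumptions, not FRNN code.
import os
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  registers the "ccl" backend

def init_distributed():
    # Rank and world size are typically supplied by the MPI/job launcher.
    rank = int(os.environ.get("RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    dist.init_process_group(backend="ccl", rank=rank, world_size=world_size)
    return rank, world_size

def wrap_model(model):
    # Gradients are averaged across ranks via oneCCL allreduce collectives.
    return torch.nn.parallel.DistributedDataParallel(model)

if __name__ == "__main__":
    rank, world_size = init_distributed()
    model = wrap_model(torch.nn.Linear(14, 1))
    print(f"rank {rank} of {world_size} ready")
    dist.destroy_process_group()
```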

Collaborating with Intel engineers to diagnose the causes of model underperformance relative to NVIDIA capabilities has helped the development team more deeply understand their models. The team has evaluated and profiled their software on NVIDIA hardware, specifically ThetaGPU’s A100 GPUs, using the Nsight Systems profiler. The insights gleaned helped calibrate Felker’s expectations for the upcoming Polaris testbed and for Intel GPUs, and hence for Aurora.
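Profiles from Nsight Systems are often easier to interpret when training phases are marked with NVTX ranges. The snippet below is a minimal, hedged example of such annotations in PyTorch; the phase names and the nsys invocation mentioned in the comment are illustrative, not taken from the FRNN workflow.

```python
# Minimal sketch: NVTX ranges make training phases visible on the Nsight
# Systems timeline, e.g. when run as `nsys profile -o report python train.py`.
# The phase names below are illustrative.
import torch

def training_step(model, optimizer, loss_fn, batch, target):
    torch.cuda.nvtx.range_push("data_to_device")
    batch, target = batch.cuda(non_blocking=True), target.cuda(non_blocking=True)
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("forward")
    loss = loss_fn(model(batch), target)
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("backward_and_step")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    torch.cuda.nvtx.range_pop()
    return loss.item()
```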

Figures of merit

To assess the progress of their porting efforts, the developers encapsulate application performance in one or more figures of merit (FoM), which can be compared across hardware from different vendors, including NVIDIA, AMD, and Intel GPUs.

While training throughput, measured in examples per second, can provide a useful FoM, the developers have found it insufficient for certain performance analyses, particularly during the initial I/O phases of neural network training and during checkpointing between epochs.
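As a hedged illustration of the throughput FoM described above, the sketch below times a training loop in examples per second while separately timing the initial data-loading phase; the function and variable names are illustrative placeholders, not drawn from FusionDL.

```python
# Illustrative sketch: examples-per-second as a figure of merit, with the
# initial I/O phase timed separately, since it is not captured well by a
# single steady-state throughput number. Names are placeholders.
import time

def measure_throughput(load_data, train_one_epoch, epochs=3):
    t0 = time.perf_counter()
    dataset = load_data()                      # initial I/O phase
    io_seconds = time.perf_counter() - t0

    examples, train_seconds = 0, 0.0
    for _ in range(epochs):
        t1 = time.perf_counter()
        examples += train_one_epoch(dataset)   # returns examples processed
        train_seconds += time.perf_counter() - t1

    return {
        "initial_io_s": io_seconds,
        "examples_per_s": examples / train_seconds,
    }
```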

In those instances where a single FoM falls short, the developers take a more rigorous approach: they maintain regularly updated matrices of FoM indexed by vendor hardware, numerical precision setting (including float16, bfloat16, TensorFloat-32, and float32), and deep learning model.
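Such a matrix can be as simple as a table keyed by hardware, precision setting, and model. The following is one possible, purely illustrative bookkeeping scheme with placeholder values rather than measured results.

```python
# Hedged sketch of a FoM matrix keyed by (hardware, precision, model).
# The numeric values are placeholders, not measured results.
fom_matrix = {}

def record_fom(hardware, precision, model, examples_per_s):
    fom_matrix[(hardware, precision, model)] = examples_per_s

record_fom("NVIDIA A100", "bfloat16", "LSTM", 0.0)   # placeholder value
record_fom("Intel GPU", "bfloat16", "LSTM", 0.0)     # placeholder value

# Compare a given model/precision pair across hardware from different vendors.
for (hw, prec, model), fom in sorted(fom_matrix.items()):
    print(f"{hw:12s} {prec:10s} {model:6s} {fom:8.1f} examples/s")
```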

Stay flexible and don’t get too attached

After implementing several models, including long short-term memory (LSTM) networks and temporal convolutional networks (TCNs), in both TensorFlow and PyTorch, the developers have come to embrace a degree of flexibility with regard to which deep learning framework they adopt in a given situation.

Furthermore, they have learned not to become too attached to any particular deep learning model. In the fast-evolving field of scientific machine learning and AI, new deep learning architectures constantly supplant existing, widely deployed ones, which disincentivizes developers from expending excessive time and energy over-optimizing any single model. A more efficient path to performance portability is to search among deep learning models and training/inference pipelines for commonalities such as data loading and batching, convolution operations, transformations of layer activations, and techniques for leveraging mixed precision and quantization.
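As a hedged illustration of that idea, the sketch below shares one data path, mixed-precision setup, and training loop between interchangeable LSTM-style and TCN-style classifiers. The module definitions and names are simplified stand-ins, not the FRNN implementations.

```python
# Hedged sketch: interchangeable sequence models behind one shared pipeline
# (batching, mixed precision, training loop). These are simplified stand-ins,
# not the FRNN LSTM/TCN implementations.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_signals=14, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_signals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, signals)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])           # score from the last time step

class TCNClassifier(nn.Module):
    def __init__(self, n_signals=14, channels=64, kernel=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_signals, channels, kernel, padding=kernel - 1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=kernel - 1),
            nn.ReLU(),
        )
        self.head = nn.Linear(channels, 1)

    def forward(self, x):                      # x: (batch, time, signals)
        h = self.conv(x.transpose(1, 2))       # Conv1d expects (batch, channels, time)
        return self.head(h[:, :, -1])

def train(model, loader, device="cuda", epochs=1):
    """One loop shared by all model variants, with mixed-precision training."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    scaler = torch.cuda.amp.GradScaler()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            with torch.autocast(device_type="cuda", dtype=torch.float16):
                loss = loss_fn(model(x), y)
            scaler.scale(loss).backward()
            scaler.step(opt)
            scaler.update()

# Either model plugs into the same pipeline:
#   train(LSTMClassifier(), loader)  or  train(TCNClassifier(), loader)
```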

 
