The annual ALCF Hands-on HPC Workshop connects attendees with staff experts to optimize HPC and AI applications on ALCF supercomputers. (Image: Argonne National Laboratory)
Researchers gathered at Argonne National Laboratory for the annual workshop to improve application performance on ALCF systems.
The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science user facility at Argonne National Laboratory, regularly hosts hands-on training programs to help researchers enhance the performance of their codes and workflows on the lab’s high-performance computing (HPC) and artificial intelligence (AI) systems.
In 2025, the ALCF held its Hands-on HPC Workshop as a two-part event aimed at advancing participants’ HPC and AI skills. The initial virtual portion focused on the fundamentals of building, compiling, and running applications on ALCF systems, including introductions to Aurora and Polaris, and was particularly recommended for new users. The subsequent in-person portion, held a few weeks later in October at Argonne, offered deeper sessions, dedicated hands-on time with ALCF staff and industry experts, and tours of the laboratory’s facilities, including the Aurora supercomputer.
“Our 2025 Hands-on HPC Workshop provides participants with practical experience using key ALCF systems and tools,” said Yasaman Ghadar of ALCF, the lead organizer of the event. “It also gives attendees the chance to engage directly with experts and prepare for cutting-edge scientific research on Aurora and Polaris.”
Throughout the workshop, participants gained hands-on experience with ALCF’s HPC and AI systems, learning to optimize code, manage large datasets, and leverage performance tools and debuggers. By combining interactive exercises and expert guidance, the workshop equipped attendees to tackle complex scientific challenges and fully harness the power of Argonne’s advanced computing platforms such as Aurora and Polaris.
Kavin Kabilan, a PhD student in Mechanical Engineering at the Georgia Institute of Technology, learned about the ALCF workshop from Dr. Suhas S. Jain, an assistant professor at the George W. Woodruff School of Mechanical Engineering. Jain leads an ALCC project, and Kabilan attended the workshop in hopes of advancing the team’s work on Aurora.
His team’s ALCF project is focused on wall-bounded turbulence and its modeling, particularly fluid-structure interactions with applications in ice accretion on aerofoils. At the workshop, Kabilan sought to deepen his understanding of HPC techniques, which are essential for studying these physical phenomena computationally.
Kabilan came to the workshop to gain practical experience with Aurora and learn the tools needed for analyzing and profiling codes. “My motivation to attend the workshop was to get an idea of different techniques in high-performance computing and learn how to use the Aurora supercomputer, get acquainted with it and run some basic cases on it,” he said. He also hoped to explore software used for code analysis, profiling, and benchmarking.
Kris Rowe, a computational scientist at ALCF, gave an overview of the hardware and the process of building and running applications on Aurora and Polaris, helping attendees like Kabilan learn the ins and outs of the machines.
ALCF's Kris Rowe provides a hardware overview of Aurora and Polaris, covering system specs, how to log in, and tips for using the Aurora job scheduler.
During the workshop, Kabilan worked with HPC and AI profiling tools, while also gaining exposure to concepts in weather modeling. The highlight of the event for him was the AI and machine learning (ML) session, where he “learned a lot about profiling tools that can be used alongside AI/ML workflows which I plan to work on in the future.”
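Profiling an AI/ML workload typically starts with framework-level tools before moving on to vendor profilers. As a minimal sketch of the idea (assuming a PyTorch-based workflow, which is one of several frameworks supported on ALCF systems; the toy model and sizes below are illustrative, not from the workshop), PyTorch's built-in profiler can break down where time goes in a training step:

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

# Illustrative toy model and data; a real workflow would profile its own training step.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
)
x = torch.randn(64, 1024)
target = torch.randint(0, 10, (64,))
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Record CPU activity for a few training steps.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    for _ in range(5):
        with record_function("train_step"):
            optimizer.zero_grad()
            loss = loss_fn(model(x), target)
            loss.backward()
            optimizer.step()

# Summarize the most expensive operators to spot bottlenecks.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```

On GPU nodes, the same report can be extended to device kernels by adding the appropriate device activity, with vendor profiling tools picking up from there for deeper analysis.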
ALCF's Christine Simpson covers what a workflow tool is, why you should use one, and which workflows are available at the ALCF.
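Parsl is one of the workflow tools documented for use at the ALCF. As a rough illustration of what such a tool does (the task function, its arguments, and the local thread-based configuration below are placeholders, not workshop material), a workflow can be written as ordinary Python functions that the framework schedules concurrently:

```python
import parsl
from parsl import python_app
from parsl.config import Config
from parsl.executors import ThreadPoolExecutor

# Local thread-based configuration for illustration; on an ALCF machine the
# executor and provider would instead be configured for that system's scheduler.
parsl.load(Config(executors=[ThreadPoolExecutor(max_threads=4)]))

@python_app
def simulate(case_id, resolution):
    """Placeholder task standing in for one simulation or analysis job."""
    return {"case": case_id, "resolution": resolution, "status": "done"}

# Launch many independent tasks; Parsl returns futures and runs them concurrently.
futures = [simulate(i, resolution=64) for i in range(8)]
print([f.result() for f in futures])
```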
Kabilan noted that one of the most valuable takeaways was gaining hands-on experience with Aurora. “Most crucially I learned how to operate Aurora, manage my cases and run them, which is going to be important for me in the near future,” he said. Looking ahead, Kabilan plans to maintain connections with the lab and pursue potential collaborations or internships at Argonne. He also plans to attend the hackathon at Argonne in 2026.
Luiz Carlos Aldeia Machado, a fifth-year PhD candidate at Penn State University, attended the ALCF workshop to gain familiarity with Aurora and optimize workflows in his research. “Attending the ALCF workshop was an opportunity to become more familiar with ALCF resources, particularly the tools available on Aurora and how to use them,” Aldeia Machado said.
His research with ATHENA (Advanced Thermal-Hydraulics + Exascale Numerical Analysis) focuses on high-fidelity modeling of advanced reactors, combining tools like NekRS/Nek5000 and MOOSE with machine learning models for multiphysics simulations. “Some of our simulations can produce dozens of terabytes of data, and postprocessing them without the use of machines such as Aurora would be impossible,” he explained.
At the workshop, Aldeia Machado worked with collaborators to implement a graph neural network (GNN) in the NekRS code to reconstruct velocity fields from coarse simulations, reducing the time required for high-fidelity computational fluid dynamics runs. He noted that this work “greatly benefited from the use of machines such as Aurora, which allow us to conduct such studies for complex cases, something impossible on ordinary computers.”
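Reconstruction tasks of this kind are often framed as node-level regression on the simulation mesh. The sketch below is not the ATHENA team's NekRS implementation; it is a generic illustration (using PyTorch Geometric, with made-up feature sizes and a random graph) of a graph neural network that maps coarse-field features on mesh nodes to reconstructed velocity components:

```python
import torch
from torch_geometric.nn import GCNConv

class VelocityReconstructionGNN(torch.nn.Module):
    """Toy GNN mapping per-node coarse-field features to 3 velocity components."""

    def __init__(self, in_features=6, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 3)  # u, v, w at each mesh node

    def forward(self, x, edge_index):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        return self.out(x)

# Illustrative random "mesh": 1000 nodes with 6 coarse-field features each,
# connected by arbitrary edges; a real case would use the CFD mesh connectivity.
x = torch.randn(1000, 6)
edge_index = torch.randint(0, 1000, (2, 4000))

model = VelocityReconstructionGNN()
velocity_pred = model(x, edge_index)  # shape: [1000, 3]
print(velocity_pred.shape)
```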
A highlight for Aldeia Machado was the workshop’s hands-on format and interaction with the ALCF visualization team. “From a didactic point of view, this is a much better approach than just presenting dozens of talks one after the other,” he said.
Looking ahead, Aldeia Machado plans to continue applying these tools in his research. “Absolutely, the GNN model, for example, is something we are using right now in our research group, as well as the visualization tips. Profiling AI/ML and data pipelines are also useful tools I plan to use going forward,” he said.
Sarah Elliott, a new assistant scientist at Argonne, attended the ALCF workshop to expand her work to larger-scale chemical kinetic mechanisms and leverage AI and GPU-accelerated software. “Expanding the AutoMech software to leverage AI and to utilize GPU-accelerated software would enable me to explore much larger reaction spaces,” she said.
Her research focuses on gas-phase computational thermochemistry and kinetics, providing high-accuracy rate constants and thermochemical properties for mid- to low-temperature reactions used in low-temperature combustion, oxidation, and atmospheric models. To support this work, Elliott’s team has been developing AutoMech. “My team, originating at Argonne, but with an expanding list of international developers, has been developing AutoMech, an open-source software for large-scale chemical kinetic mechanism development,” she explained.
During the workshop, Elliott successfully ported and tested AutoMech on Aurora, verifying that the newly implemented workflow manager efficiently parallelizes tasks across nodes. She also validated AutoMech’s interface with the GPU-accelerated Master Equation System Solver (MESS) and identified opportunities for more effective GPU usage.
The highlight of the workshop for Elliott was the ability to use machine-learned interatomic potentials, which she called “such a leap forward towards large-scale rate constant production,” adding that “it was very exciting to successfully drive those calculations through AutoMech.” She also noted that the talks on large language models could offer future opportunities, even if she has no immediate plans to implement them.
“The workshops have always been a great opportunity for the team to focus together on implementing new features while we have experts available to advise us,” Elliott said.
Ashish Dhanalakota, a PhD student at the Georgia Institute of Technology, attended the ALCF workshop to develop a deeper understanding of HPC. “My primary motivation to attend was to gain a better intuition for high-performance computing, learn how others utilize this tool, and explore what opportunities there are in this field of research,” Dhanalakota said.
He works in a lab specializing in multiphase flow and turbulence simulations, focusing on bubble and droplet dynamics. One of his projects, on the coalescence of bubbles and droplets, will require “hundreds of simulations with multiscale modeling,” he explained. Dhanalakota noted that “ALCF’s Aurora computer would be perfect to run these long and computationally expensive simulations” and that guidance from ALCF experts would be invaluable in learning how to run these workflows.
During the workshop, Dhanalakota worked on writing GPU code, running programs on Aurora, and understanding how a supercomputer operates. He received support from his colleagues and Argonne researchers, while also connecting with students and scientists with similar research interests.
To view all recorded talks from the event, please visit the 2025 Hands-on HPC Workshop playlist on YouTube.