The Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility, has officially launched Cooley, a new visualization and analysis cluster with nearly eight times the memory capacity of the facility’s previous system, Tukey.
The new system’s significant boost in memory, along with its state-of-the-art hardware, will help facility users to better analyze and explore the massive datasets that result from their simulations on Mira, the ALCF’s 10-petaflops IBM Blue Gene/Q supercomputer.
“As datasets continue to get larger and larger, visualization is playing an increasingly important role in helping researchers to make sense of their simulation data,” said Mark Hereld, manager of visualization and data analysis at the ALCF. “With Cooley, we are improving our ability to delve into and interact with large-scale numerical datasets by transforming them into high-resolution images and animations.”
Cooley is equipped with 126 compute nodes, each featuring two 2.4 GHz 6-core Intel Xeon E5-2620 (Haswell) processors, 384 gigabytes of RAM, and an NVIDIA Tesla K80 graphics processing unit (GPU) with 24 gigabytes of memory. The system has a peak performance of 223 teraflops, an aggregate RAM of more than 48 terabytes, and an aggregate GPU memory of more than 3 terabytes. By contrast, Tukey had a peak performance of 99 teraflops and an aggregate RAM of 6 terabytes.
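The aggregate figures follow directly from the per-node counts; a quick back-of-the-envelope check in Python (using decimal terabytes, 1 TB = 1,000 GB):

    nodes = 126
    ram_per_node_gb = 384
    gpu_mem_per_node_gb = 24

    # 126 nodes, each with 384 GB of CPU RAM and 24 GB of GPU memory
    print(f"Aggregate RAM:        {nodes * ram_per_node_gb / 1000:.1f} TB")      # ~48.4 TB
    print(f"Aggregate GPU memory: {nodes * gpu_mem_per_node_gb / 1000:.1f} TB")  # ~3.0 TB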
Cooley and its predecessor were named after the Cooley-Tukey algorithm, the most widely used fast Fourier transform (FFT) algorithm for data analysis and signal processing. The shared namesake is not the only thing the two systems have in common: Cooley will operate with the same software environment as Tukey, and will also continue to share file systems with Mira.
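For readers curious about the systems' namesake, the core of the radix-2 Cooley-Tukey recursion can be sketched in a few lines of Python. This is purely illustrative; production analysis codes rely on heavily optimized FFT libraries rather than anything this simple.

    import cmath

    def fft(x):
        """Recursive radix-2 Cooley-Tukey FFT; the input length must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        # Transform the even- and odd-indexed halves separately ...
        even = fft(x[0::2])
        odd = fft(x[1::2])
        # ... then combine them using the complex "twiddle factors".
        twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
        return ([even[k] + twiddled[k] for k in range(n // 2)] +
                [even[k] - twiddled[k] for k in range(n // 2)])

    # Example: transform an 8-sample step signal.
    print(fft([1, 1, 1, 1, 0, 0, 0, 0]))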
While ALCF users will not experience a major change in how they interact with the system, they will notice the improved performance and capabilities enabled by Cooley.
Salman Habib, Argonne physicist and longtime ALCF user, is looking forward to using Cooley for his ongoing project to simulate the evolution of the universe. The new system will allow his team to better analyze the huge datasets that result from their research, which includes the largest high-resolution cosmological simulation performed to date, known as the Outer Rim simulation.
“A single raw particle output from the Outer Rim run on Mira is roughly 40 terabytes,” Habib said. “This far exceeded the RAM available in Tukey, but it will fit into Cooley, which will greatly accelerate our data analysis efforts.”
Habib is also excited about the possibility of tapping Cooley for in situ analysis, a technique that allows researchers to analyze data directly on a host computing resource rather than storing and transferring data to another resource.
“In situ analysis is becoming ever more important as the computational power of computers is increasing much faster than accessible storage and I/O network bandwidth,” Habib said.
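As a rough illustration of the idea (not Habib's actual pipeline), the sketch below pairs a hypothetical particle-update routine with an on-the-fly reduction step, so that only compact summary statistics, rather than multi-terabyte raw snapshots, ever need to be stored or moved to another system.

    import numpy as np

    def advance(particles, dt=0.01):
        """Hypothetical stand-in for one step of a real N-body or hydrodynamics solver."""
        return particles + dt * np.random.standard_normal(particles.shape)

    particles = np.random.standard_normal((100_000, 3))
    summaries = []
    for step in range(100):
        particles = advance(particles)
        if step % 10 == 0:
            # In situ reduction: the analysis runs on the same resource as the simulation,
            # so only these few numbers (not the full particle state) need to be written out.
            summaries.append({
                "step": step,
                "mean_position": particles.mean(axis=0),
                "rms_radius": float(np.sqrt((particles ** 2).sum(axis=1)).mean()),
            })
    print(summaries[-1])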
Cooley’s architecture opens the door to other new and improved capabilities as well.
As an example, Ivan Bermejo-Moreno, a researcher at Stanford University’s Center for Turbulence Research who chairs the ALCF’s User Advisory Council, points to the potential for unprecedented visualizations created with volume rendering, a computationally demanding technique that allows researchers to visualize 3D volumetric datasets.
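To give a sense of why volume rendering is computationally demanding, a toy emission/absorption compositing pass over a 3D scalar field might look like the following. This is a didactic sketch that marches rays along one axis on the CPU; real renderers add transfer functions, lighting, and GPU acceleration.

    import numpy as np

    def volume_render(volume, opacity_scale=0.05):
        """Front-to-back compositing along the z axis of a 3D scalar field."""
        image = np.zeros(volume.shape[:2])
        transmittance = np.ones(volume.shape[:2])
        for z in range(volume.shape[2]):
            slab = volume[:, :, z]
            alpha = np.clip(slab * opacity_scale, 0.0, 1.0)
            image += transmittance * alpha * slab    # accumulate "emitted" light
            transmittance *= 1.0 - alpha             # attenuate rays passing through the slab
        return image

    # Example: render a random 64x64x64 field into a 64x64 image.
    field = np.random.rand(64, 64, 64)
    print(volume_render(field).shape)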
Other possible uses that stand to benefit from Cooley include pre-processing applications such as meshing complex geometries and post-processing applications such as uncertainty quantification analysis.
“All of the improvements that Cooley brings will result in better physical insights and more accurate models in a wide range of applications from climatology to combustion,” Bermejo-Moreno said. “With an improved ability to see and explain the results from massive simulations, it will help us to reach out to a larger community and share the critical role that modeling and simulation plays in scientific discovery across all disciplines.”