The Large Hadron Collider (LHC) is one of the largest and most complex experimental facilities ever built, with six experiments exploring the behavior of matter, energy, space, and time at the shortest distance scales ever probed. The LHC accelerates counter‐circulating beams of protons to very high energies; the beams then collide in the center of building‐sized particle detectors such as ATLAS. These detectors – each operated by a worldwide collaboration of roughly 3,000 scientists (about 20% from the US) – measure the positions, momenta, and properties of the particles produced in these collisions, and the scientific teams use these measurements to infer what happened in each collision, e.g., the production of a Higgs boson. The computational requirements for such experiments are extremely large and growing, and we can imagine a time in the near future when the science is limited not by the number of events the LHC can produce but by the number of events our Grid‐based computers can simulate. Moreover, even today some events are too complex to simulate on the Grid at the scale needed by the experiments’ analysis teams.
This allocation supports an effort to address this challenge by running a small but significant fraction of ATLAS simulation events on DOE supercomputers, focusing on events that are difficult to produce any other way. The outcomes will support advances in high-energy physics and shed light on a possible path forward for analyzing future LHC data.