The ALCF provides users with access to supercomputing resources that are significantly more powerful than systems typically used for open scientific research.
The ALCF is accelerating scientific discoveries in many disciplines, ranging from chemistry and engineering to physics and materials science.
The ALCF is committed to providing training and outreach opportunities that prepare researchers to efficiently use its leadership computing systems, while also cultivating a diverse and skilled HPC workforce for the future.
An overview of evaluating large language models, including a discussion on potential pitfalls and limitations.
Bethany Lusch is a Computer Scientist in the data science group at the Argonne...
We present modern parallelism techniques and discuss how they can be used to train and distribute large models across many GPUs.
Sam Foreman is a Computational Scientist with a background in high...
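One of the parallelism techniques discussed is model (tensor) parallelism, where a single layer's weights are split across devices because they are too large for one GPU. The sketch below is a toy illustration of the idea only, with hypothetical data and no real GPUs: the weight matrix of a linear layer is split row-wise across two "devices," each computes a slice of the output, and a gather concatenates the slices.

```python
# Toy model (tensor) parallelism: split a linear layer's weight matrix
# row-wise across two "devices"; each computes a slice of the output,
# and a gather concatenates the slices. Illustrative only -- no real
# devices or communication library involved.
def matvec(W, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

W = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0],
     [7.0, 8.0]]   # full 4x2 weight matrix (hypothetical values)
x = [1.0, 1.0]

# row-wise split of the output dimension across 2 devices
W0, W1 = W[:2], W[2:]
y0 = matvec(W0, x)   # computed on "device 0"
y1 = matvec(W1, x)   # computed on "device 1"
y = y0 + y1          # gather the output slices
```

Because the split is along the output dimension, the gathered result is identical to applying the full weight matrix on one device.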
The video covers essential concepts of sequential data modeling and modeling approaches such as transformers.
Archit Vasan is a postdoctoral appointee in the Argonne Leadership Computing Facility...
This video covers the essential concepts of sequential data modeling and modeling approaches such as transformers.
Carlo Graziani is a Computational Scientist at Argonne National Laboratory. He...
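The central operation in the transformer models covered here is scaled dot-product attention, softmax(QK^T / sqrt(d)) V. The pure-Python sketch below (illustrative only, not from the session) shows the mechanics: each query is scored against every key, the scores are normalized with a softmax, and the values are mixed with those weights.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
    with Q, K, V as lists of row vectors (a toy sketch)."""
    d = len(K[0])
    out = []
    for q in Q:
        # score this query against every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        # output is a weight-averaged mix of the value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

Since the softmax weights sum to one, each output row is a convex combination of the value vectors, weighted by query-key similarity.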
This video covers advanced topics in convolutional neural networks, such as deep, residual, variational, and adversarial networks.
Corey Adams is a Computational Scientist at the Argonne Leadership...
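The residual networks mentioned above rest on one small idea: a block computes y = x + F(x) rather than y = F(x), so the input is carried through on a skip connection. The toy sketch below (not from the video; F is a hypothetical elementwise transform) shows why this helps: if F contributes nothing, the block is exactly the identity, which makes very deep stacks easier to train.

```python
# Toy residual block: y = x + F(x). The skip connection means that
# when F(x) is zero the block passes its input through unchanged.
# F here is a hypothetical elementwise affine + ReLU, for illustration.
def relu(v):
    return [max(0.0, vi) for vi in v]

def residual_block(x, weight, bias):
    fx = relu([weight * xi + bias for xi in x])   # F(x)
    return [xi + fi for xi, fi in zip(x, fx)]     # skip connection
```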
This video goes over the basics of neural networks, opening up the black box of machine learning by building by-hand networks for linear regression to deepen understanding of the math that...
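A by-hand network for linear regression, in the spirit of the video, can be sketched in a few lines of plain Python: a forward pass computes predictions for y = w*x + b, the gradients of the mean-squared error are written out explicitly, and gradient descent updates the two parameters. The data and hyperparameters below are hypothetical, not from the video.

```python
# By-hand linear regression y = w*x + b trained with gradient descent
# on mean-squared error (a minimal sketch with made-up data).
def train(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # forward pass: predictions for every sample
        preds = [w * x + b for x in xs]
        # gradients of (1/n) * sum (pred - y)^2 w.r.t. w and b
        dw = (2 / n) * sum((p - y) * x for p, y, x in zip(preds, ys, xs))
        db = (2 / n) * sum(p - y for p, y in zip(preds, ys))
        # gradient descent step
        w -= lr * dw
        b -= lr * db
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # generated from y = 2x + 1
w, b = train(xs, ys)
```

On this noiseless data the parameters converge close to the generating values w = 2, b = 1.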
This video goes over the basics of supercomputers and high-performance computing. It introduces parallel programming and the fundamentals of training AI models on supercomputers.
Huihuo Zheng is a...
As the landscape of high-performance computing expands, support for machine learning workflows and interactivity becomes even more critical. JupyterLab, an evolution of the popular Jupyter Notebook...
This session gives an overview of TensorFlow, PyTorch, and JAX, the core deep learning frameworks on ALCF production resources. All three frameworks are accessible in Python (and a few other...
This session covers LLMs and how to get started running LLMs at the ALCF.
This video session covers using Python and Jupyter Notebooks on Polaris and containers at ALCF.
You will learn about managing conda environments, running Python and multi-rank jobs, creating a new notebook...
This video discusses distributed deep learning. It covers parallelization schemes, distributed training frameworks, and I/O and data management.
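The most common parallelization scheme covered here is data parallelism: each rank trains on its own shard of the data, computes a local gradient, and an allreduce averages the gradients so every rank applies the same update. The sketch below simulates that loop in plain Python; the function names and dataset are illustrative stand-ins, not a real MPI or framework API.

```python
# Toy data parallelism: each "rank" holds a data shard, computes a local
# gradient, and an averaging allreduce keeps all ranks in sync -- the
# scheme used by data-parallel training frameworks. Names are
# illustrative; no real communication library is involved.
def local_gradient(w, shard):
    # gradient of MSE for the model y = w*x on this rank's shard
    n = len(shard)
    return (2 / n) * sum((w * x - y) * x for x, y in shard)

def allreduce_mean(values):
    # stand-in for an allreduce (sum across ranks, divide by world size)
    return sum(values) / len(values)

# hypothetical dataset generated from y = 3x, split across 2 "ranks"
shards = [[(1.0, 3.0), (2.0, 6.0)],
          [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    grads = [local_gradient(w, s) for s in shards]  # in parallel on real HW
    w -= 0.01 * allreduce_mean(grads)               # identical update on all ranks
```

Because every rank applies the same averaged gradient, the replicas never drift apart, and the loop converges to the generating slope w = 3.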
This training covers coupling HPC simulations and AI/ML. You will learn how to couple the two, which software to use, and see a demo of NekRS.
This video provides a demo on the ALCF AI testbed and an introduction to the broader toolset of AI methods.