The AI Testbed aims to help evaluate the usability and performance of machine learning-based high-performance computing applications running on next-generation AI accelerators. The goal is to better understand how to integrate these accelerators with existing and upcoming supercomputers at the facility to accelerate science insights.
We are currently offering allocations on our Cerebras CS-2 and SambaNova DataScale systems.
Systems
Cerebras CS-2 (Available for Allocation Requests)
Cerebras CS-2 Wafer-Scale Deep Learning Accelerator
- 850,000 Processing Cores
- 2.6 Trillion Transistors, 7nm
- 40GB On-Chip SRAM; 220 Pb/s Interconnect Bandwidth
- The Cerebras Software Platform (CSoft), TensorFlow, PyTorch
- Accepting proposal submissions for usage
SambaNova DataScale (Available for Allocation Requests)
SambaNova Reconfigurable Dataflow Unit (RDU)
- > 40 Billion Transistors, 7nm
- RDU-Connect
- SambaFlow software stack, PyTorch
- Accepting proposal submissions for usage
Graphcore MK1
Graphcore Intelligent Processing Unit (IPU)
- 1216 IPU Tiles, 16nm
- > 23 Billion Transistors
- IPU-Links interconnect
- Poplar software stack, PyTorch, TensorFlow
Groq
Groq Tensor Streaming Processor
- > 26 Billion Transistors, 14nm
- Chip-to-Chip interconnect
- GroqWare software stack, ONNX
Habana Gaudi
Habana Gaudi Tensor Processing Cores
- 16nm
- Integrated 100 GbE-based interconnect
- SynapseAI software stack, PyTorch, TensorFlow
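Every system above lists PyTorch support, so standard PyTorch model code is a common starting point across the testbed; each vendor stack (CSoft, SambaFlow, Poplar, GroqWare, SynapseAI) then provides its own compiler and runtime for targeting the accelerator. The sketch below is a minimal, vendor-neutral example with a hypothetical toy model, not a port to any specific system:

```python
# Minimal vendor-neutral PyTorch sketch. TinyNet is a hypothetical toy model;
# the vendor stacks consume models like this through their own tooling.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(32, 64),  # 32 input features -> 64 hidden units
            nn.ReLU(),
            nn.Linear(64, 10),  # 10 output classes
        )

    def forward(self, x):
        return self.net(x)

model = TinyNet()
x = torch.randn(8, 32)   # batch of 8 feature vectors
out = model(x)
print(out.shape)         # torch.Size([8, 10])
```

Porting typically means swapping the execution backend (e.g. a vendor device or compiled graph) while keeping the model definition largely unchanged; consult each system's documentation for the exact workflow.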