The AI Testbed helps evaluate the usability and performance of machine learning-based high-performance computing applications on next-generation AI accelerators. The goal is to better understand how these accelerators can be integrated with existing and upcoming supercomputers at the facility to accelerate scientific insights.
We are currently offering allocations on our Graphcore Bow IPUs, Cerebras CS-2, and SambaNova DataScale systems.
Systems
Cerebras CS-2 (Available for Allocation Requests)
Cerebras Wafer-Scale Cluster of CS-2 systems, each built around a Wafer-Scale Engine 2 (WSE-2)
- Two nodes
- 850,000 Processing Cores
- MemoryX system and SwarmX fabric
- 2.6 trillion transistors, 7 nm
- 40 GB on-chip SRAM; 220 Pb/s interconnect bandwidth
- The Cerebras Software Platform (CSoft), TensorFlow, PyTorch
- Accepting proposal submissions for usage
SambaNova Dataflow (Available for Allocation Requests)
- Eight nodes
- 64 Cardinal SN30™ RDUs
- Terabytes of memory
- SambaFlow software stack, PyTorch
- Well suited to large models and large datasets
- Accepting proposal submissions for usage
Graphcore Bow Pod64 (Available for Allocation Requests)
Graphcore Intelligent Processing Unit (IPU)
- 1,472 tiles per IPU, 7 nm
- 59 billion transistors per IPU
- IPU-Links interconnect
- Poplar software stack, PyTorch, TensorFlow
Groq
Groq Tensor Streaming Processor
- More than 26 billion transistors, 14 nm
- Chip-to-Chip interconnect
- GroqWare software stack, ONNX
Habana Gaudi
Habana Gaudi Tensor Processing Cores
- 16 nm
- Integrated 100 GbE-based interconnect
- SynapseAI software stack, PyTorch, TensorFlow
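A common thread across the systems above is that each vendor software stack (CSoft, SambaFlow, Poplar, GroqWare, SynapseAI) accepts standard PyTorch model definitions and compiles them for its accelerator. As a rough sketch of what such portable model code looks like, here is a minimal PyTorch module; the architecture, names, and sizes are illustrative assumptions, not taken from the testbed documentation, and each vendor stack has its own wrapper APIs for actually targeting the hardware.

```python
import torch
import torch.nn as nn

# A small feed-forward classifier. Model code in this style is what the
# vendor toolchains listed above compile for their respective
# accelerators; the layer sizes here are arbitrary, for illustration only.
class TinyClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden=128, classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()
out = model(torch.randn(4, 784))  # batch of 4 flattened 28x28 inputs
print(out.shape)  # torch.Size([4, 10])
```

Running the same model on a given accelerator then typically amounts to handing it to that vendor's compiler or wrapper rather than rewriting the model itself.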