
Learn how the latest Intel optimizations extend stock PyTorch on Intel hardware, including the Intel® Xeon® CPU Max Series (formerly codenamed Sapphire Rapids) and Intel® Data Center GPU Max Series (formerly codenamed Ponte Vecchio).
PyTorch is a favorite among AI developers and researchers because it is easy to learn and debug, offers a Pythonic interface, can distribute work across multiple CPUs and GPUs, and supports a rich ecosystem of extensions.
Sweet, right?
This session introduces the Intel® Extension for PyTorch—part of the Intel® Optimization for PyTorch—which extends the stock framework with optimizations that deliver extra performance on Intel® architecture.
Topics covered:
- An overview of the Intel optimizations, including installation and performance boost metrics
- The newest features, including Intel® AVX-512 Vector Neural Network Instructions (VNNI), Intel® Advanced Matrix Extensions (Intel® AMX), an easy-to-use Python API, vectorization, parallelism, quantization, operator fusion, constant folding, and a runtime extension
- A live demo showcasing usage and performance boosts on both CPUs and GPUs
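As a taste of the ease-of-use Python API mentioned above, here is a minimal sketch of how the extension is typically applied to an existing model. It assumes `torch` and `intel_extension_for_pytorch` are installed (e.g. via `pip install intel-extension-for-pytorch`) and falls back gracefully if they are not; the tiny `Linear` model is purely illustrative.

```python
def demo():
    """Sketch: optimize a stock PyTorch model with Intel Extension for PyTorch.

    Returns the output shape on success, or None if the optional
    dependencies (torch, intel_extension_for_pytorch) are missing.
    """
    try:
        import torch
        import intel_extension_for_pytorch as ipex  # the "ipex" alias is conventional
    except ImportError:
        return None

    # A toy model standing in for a real network (illustrative only).
    model = torch.nn.Linear(64, 10).eval()

    # One line enables the extension's graph and kernel optimizations
    # (operator fusion, weight prepacking, bfloat16 conversion on
    # hardware with Intel AMX support).
    model = ipex.optimize(model, dtype=torch.bfloat16)

    with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
        out = model(torch.randn(1, 64))
    return tuple(out.shape)


result = demo()
print("output shape:", result)
```

The key design point is that `ipex.optimize` wraps an unmodified stock model, so existing training and inference scripts need only this one extra call rather than a rewrite.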
Sign up today.