Learn how to accelerate deep learning applications—natural language processing, recommender systems, computer vision, and more—on Intel CPUs and GPUs with the Intel Extension for PyTorch.
PyTorch has emerged as one of the two most widely used deep learning frameworks (the other being TensorFlow) thanks to its flexibility, computational power, gentle learning curve, and data-parallelism capabilities, which let developers distribute work across multiple CPUs and/or GPUs.
This session focuses on how the Intel Extension for PyTorch* extends stock PyTorch with optimizations for extra performance on Intel hardware platforms.
Takeaways: