Deep learning is hugely popular in scientific computing, and industries rely on DL algorithms to solve complex, computationally intensive problems in real time. Find out how two Intel oneAPI-optimized tools can boost the training and inference performance of large models.
For developers focused on deep learning use cases (predictive modeling, recommendation systems, natural language processing, object detection, and many more), it's paramount to extract maximum workload performance using newer technologies such as BF16, graph-level optimizations, and custom kernels.
This session focuses on the performance and ease-of-use benefits that Intel® Extension for PyTorch* and the Intel® oneAPI Deep Neural Network Library (oneDNN) bring to DL training and inference of large models such as DLRM (Deep Learning Recommendation Model).
Register to hear Senior Deep Learning Engineer Eikan Wang cover: