Summer Student Presentations III

Webinar

Seminar Title #1: Transfer Learning-based Autotuning Using Gaussian Copula
Speaker: Tom Randall, Visiting Student, MCS Division, Argonne National Laboratory

Description: As diverse high-performance computing (HPC) systems are built, many opportunities arise for applications to solve larger problems than ever before. The increased complexity of both HPC systems and applications has made empirical performance tuning, known as autotuning, a promising approach to maximizing application performance. Despite its effectiveness, autotuning is often computationally expensive. Transfer learning (TL)-based autotuning seeks to address this issue by leveraging data from prior tuning. Current TL methods for autotuning spend significant time modeling the relationship between parameter configurations and performance, which is ineffective for few-shot tuning (that is, tuning with few empirical evaluations) on new tasks. We introduce the first generative TL-based autotuning approach, which uses a Gaussian copula (GC) to model the high-performing regions of the search space from prior data and then generates high-performing configurations for new tasks. This enables a sampling-based approach that maximizes few-shot performance and provides the first probabilistic estimation of the few-shot budget needed for effective TL-based autotuning. We compare our generative TL approach with state-of-the-art autotuning techniques on several benchmarks. We find that the GC achieves 64.37% of peak few-shot performance in its first evaluation. Furthermore, the GC model can determine a few-shot transfer budget that yields up to a 33.39× speedup, a dramatic improvement over the 20.58× speedup obtained with prior techniques.
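A minimal, illustrative sketch of the Gaussian-copula idea described above: fit a GC to the top-performing configurations from prior tuning data, then sample new candidate configurations for few-shot evaluation on a new task. The function names and toy data below are hypothetical, and the method presented in the talk may differ in preprocessing, conditioning, and budget estimation.

```python
# Hedged sketch: Gaussian-copula generative model over prior tuning data.
import numpy as np
from scipy import stats

def fit_gaussian_copula(X):
    """Fit a Gaussian copula: rank-transform each column to uniforms,
    map to standard normals, and estimate the correlation matrix."""
    n, _ = X.shape
    U = stats.rankdata(X, axis=0) / (n + 1)   # empirical CDF values in (0, 1)
    Z = stats.norm.ppf(U)                     # latent Gaussian representation
    return np.corrcoef(Z, rowvar=False)       # copula correlation matrix

def sample_gaussian_copula(corr, X_prior, n_samples, rng):
    """Sample latent Gaussians with the fitted correlation, push them
    through the normal CDF, and invert with empirical quantiles of the
    prior high-performing data to get new candidate configurations."""
    d = corr.shape[0]
    Z = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    U = stats.norm.cdf(Z)
    return np.column_stack(
        [np.quantile(X_prior[:, j], U[:, j]) for j in range(d)]
    )

rng = np.random.default_rng(0)
# Hypothetical prior data: 200 evaluated configurations of 3 tuning
# parameters with measured runtimes; keep the best 20% as "high-performing".
X = rng.uniform(0, 1, size=(200, 3))
runtime = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)
top = X[runtime <= np.quantile(runtime, 0.2)]

corr = fit_gaussian_copula(top)
candidates = sample_gaussian_copula(corr, top, n_samples=5, rng=rng)
print(candidates)  # few-shot candidates to evaluate on the new task
```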

Bio – Tom Randall: Tom Randall is a Graduate Research Assistant in the School of Computing at Clemson University and a Visiting Student in the MCS Division at Argonne National Laboratory for Summer 2023.

Seminar Title #2: Autotuning Apache TVM Scientific Applications Using ytopt 
Speaker: Praveen Paramasivam, Visiting Student, MCS Division, Argonne National Laboratory

Description: Apache TVM is a cutting-edge machine learning compiler framework whose primary goal is to improve computation efficiency across diverse hardware platforms. Its versatility allows it to work seamlessly with different hardware architectures, such as CPUs, GPUs, and specialized accelerators. By leveraging ytopt, an autotuner based on Bayesian optimization, the search over TVM's tunable hyperparameters can be made more efficient, leading to better-tuned models and reduced manual intervention; Bayesian optimization is well suited to navigating the vast hyperparameter space.

To assess and compare the effectiveness of the tuners, we conduct a comprehensive study autotuning the tunable knobs within Apache TVM, using both the built-in autotvm tuners provided by Apache TVM and the ytopt tuner. The experiments cover computational kernels that are fundamental building blocks of many machine learning algorithms: gemm (general matrix multiplication), 3mm (three matrix multiplications), cholesky (Cholesky decomposition), and lu (lower-upper factorization). These kernels serve as representative benchmarks for gauging the performance improvements achieved by the different tuning approaches.

To present the findings in an accessible manner, the results are visualized in a dynamic dashboard that lets users interactively explore the performance data, select specific experiments, and generate plots for side-by-side comparisons. These visualizations help researchers and developers understand the nuances of each tuning strategy and make informed decisions about the most effective approach for their specific hardware and use cases. By providing a user-friendly interface and detailed performance insights, this study aims to contribute to ongoing efforts to optimize machine learning models and foster advancements in artificial intelligence.
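The tuning loop in the study can be pictured with a short, hedged sketch. Here scikit-optimize stands in for the Bayesian optimizer (the actual ytopt and autotvm APIs differ), and the tunable knobs are the tile sizes of a hypothetical NumPy blocked GEMM, one of the benchmark kernels named above.

```python
# Hedged sketch: Bayesian-optimization autotuning of GEMM tile sizes.
# scikit-optimize is a stand-in; the study itself uses ytopt and autotvm.
import time
import numpy as np
from skopt import gp_minimize
from skopt.space import Categorical

N = 512
A = np.random.rand(N, N)
B = np.random.rand(N, N)

def blocked_gemm_runtime(params):
    """Objective: wall-clock time of a blocked GEMM for given tile sizes."""
    tile_i, tile_j = params
    C = np.zeros((N, N))
    start = time.perf_counter()
    for i in range(0, N, tile_i):
        for j in range(0, N, tile_j):
            C[i:i + tile_i, j:j + tile_j] = (
                A[i:i + tile_i, :] @ B[:, j:j + tile_j]
            )
    return time.perf_counter() - start

# Hypothetical search space: power-of-two tile sizes, a typical knob layout.
space = [Categorical([16, 32, 64, 128], name="tile_i"),
         Categorical([16, 32, 64, 128], name="tile_j")]

# Gaussian-process-based Bayesian optimization over the knob space.
result = gp_minimize(blocked_gemm_runtime, space, n_calls=20, random_state=0)
print("best tiles:", result.x, "runtime (s):", result.fun)
```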

Bio – Praveen Paramasivam: Praveen Paramasivam is a master's student at the University of South Dakota and a Visiting Student in the MCS Division at Argonne National Laboratory for Summer 2023.


See upcoming and previous presentations at the CS Seminar Series.