Presentation

Benchmarking and In-Depth Performance Study of Large Language Models on Habana Gaudi Processors
Description
Transformer models suffer from high computational complexity. The Habana GAUDI architecture offers a promising solution to this issue. GAUDI features a Matrix Multiplication Engine (MME) and a cluster of fully programmable Tensor Processing Cores (TPCs). This paper explores the untapped potential of GAUDI processors for accelerating Transformer-based models, addressing key challenges along the way. First, we provide a performance comparison between the MME and TPC components, illuminating their relative strengths and weaknesses. Second, we explore strategies to optimize MME and TPC utilization, offering practical insights for improving computational efficiency. Third, we evaluate the performance of Transformers on GAUDI, particularly for long sequences, and uncover performance bottlenecks. Finally, we evaluate the end-to-end performance of two Transformer-based large language models (LLMs) on GAUDI. The contributions of this work encompass practical insights for practitioners and researchers alike. We delve into GAUDI's capabilities for Transformers through systematic profiling, analysis, and exploration of optimizations.
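For readers who want to try a comparison in the spirit of the MME-versus-TPC study described above, the sketch below times a matrix multiplication (typically MME-bound) against an elementwise GELU (typically TPC-bound) using Habana's PyTorch bridge. This is an illustrative sketch, not the authors' benchmark code; the tensor shapes, iteration count, and choice of ops are assumptions.

```python
# Minimal sketch (assumes the habana-frameworks PyTorch package is installed
# and a Gaudi device is present); not the paper's benchmarking harness.
import time
import torch
import habana_frameworks.torch.core as htcore

device = torch.device("hpu")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

def bench(fn, iters=100):
    # Warm up once; copying to CPU forces the lazy-mode graph to execute.
    fn().cpu()
    start = time.perf_counter()
    for _ in range(iters):
        out = fn()
        htcore.mark_step()  # flush accumulated ops to the device
    out.cpu()               # synchronize before reading the clock
    return (time.perf_counter() - start) / iters

print(f"matmul (MME-bound): {bench(lambda: a @ b) * 1e3:.3f} ms/iter")
print(f"gelu   (TPC-bound): {bench(lambda: torch.nn.functional.gelu(a)) * 1e3:.3f} ms/iter")
```

Comparing such MME-heavy and TPC-heavy kernels at different sizes is one simple way to probe the relative throughput of the two engine types.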
Event Type
Workshop
Time
Sunday, 12 November 2023, 4:14pm - 4:34pm MST
Location
704-706
Tags
Accelerators
Codesign
Heterogeneous Computing
Task Parallelism
Registration Categories
W