Presentation

The Future of Machine Learning is Sparse
Description
As machine learning models become prohibitively expensive to train and use, sparsity is increasingly essential for neural networks such as large language models. If performed correctly, sparsity does not come at the cost of performance, and pruned deep neural networks have shown benefits in both throughput and generalization. In this talk, we will review the many faces of sparsity in deep learning and how it can be leveraged, from input representation to specialized hardware. We will discuss the differences in sparsity structure from other HPC applications, learning schedules that enable near-lossless sparse training, and libraries that aid in introducing high-performance sparsity to existing machine learning models. Lastly, we will outline the current limitations of sparsity in deep learning and discuss future opportunities in pushing those boundaries.
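The abstract mentions pruning and gradual learning schedules for near-lossless sparse training. As a minimal illustrative sketch (not taken from the talk itself), the snippet below combines magnitude-based unstructured pruning via PyTorch's torch.nn.utils.prune module with a cubic sparsity ramp in the style of Zhu & Gupta (2017); the toy model, step counts, and target sparsity are all hypothetical placeholders.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical toy model; the models discussed in the talk
# (e.g. large language models) are far larger.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def sparsity_at(step, total_steps, final_sparsity=0.9):
    # Cubic schedule (Zhu & Gupta, 2017): sparsity ramps up smoothly
    # from 0 to final_sparsity, one common way to approach
    # "near-lossless" sparse training.
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

total_steps = 1000
for step in range(0, total_steps + 1, 100):
    target = sparsity_at(step, total_steps)
    for module in model.modules():
        if isinstance(module, nn.Linear):
            # Bake in any previous mask so `amount` below acts as an
            # absolute sparsity target rather than a relative one.
            if prune.is_pruned(module):
                prune.remove(module, "weight")
            # Zero out the smallest-magnitude weights (unstructured sparsity).
            prune.l1_unstructured(module, name="weight", amount=target)
    # ... ordinary training steps would run between pruning events ...
```

Unstructured masks like this mainly aid generalization and model size; realizing throughput gains typically requires the structured sparsity patterns and specialized hardware or libraries the talk covers.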
Event Type
Workshop
Time
Friday, 17 November 2023, 11am - 11:30am MST
Location
401-402
Tags
Graph Algorithms and Frameworks
Linear Algebra
Programming Frameworks and System Software
State of the Practice
Registration Categories
W