Presentation

Evaluating Primitives in Deep Neural Network Libraries: A Case Study with the Softmax Functions
Description
A deep neural network library (DNNL) is an optimized library of low-level computational primitives for deep neural networks. In this study, we choose the softmax function, a primitive commonly used in new computing models for DNNs, as a case study on evaluating the unique programming models adopted by the vendors’ DNNLs (cuDNN, MIOpen, and oneDNN) and the performance and portability of DNNLs on NVIDIA and AMD GPUs. We find that cuDNN selects different compute kernels to execute based on the problem size for the primitive, which may have a significant performance impact. oneDNN successfully enables functional portability of the primitive across vendors’ platforms, but performance portability will need to be improved. In addition, the performance of a primitive in the DNNLs may be suboptimal compared to a custom implementation.
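For reference, the softmax primitive studied in this work can be sketched in a few lines. This is a minimal, numerically stable NumPy version for illustration only; the DNNL primitives in cuDNN, MIOpen, and oneDNN are optimized vendor kernels, not this code:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max along the reduction axis for numerical
    # stability (prevents overflow in exp for large logits).
    shifted = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(shifted)
    # Normalize so the values along `axis` sum to 1.
    return e / np.sum(e, axis=axis, keepdims=True)

# Example: a single row of logits.
probs = softmax(np.array([1.0, 2.0, 3.0]))
```

Because the reduction (max and sum) spans the whole axis while the elementwise work is independent, optimized kernels differ mainly in how they map this reduction onto GPU threads, which is why kernel selection can vary with problem size.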
Event Type
Workshop
Time
Sunday, 12 November 2023, 11:35am - 11:40am MST
Location
505
Tags
Accelerators
Edge Computing
Heterogeneous Computing
Registration Categories
W