Accelerating Hyperparameter Optimization Algorithms with Mixed Precision
Description
Hyperparameter Optimization (HPO) of neural networks is a computationally expensive procedure that has the potential to benefit from novel accelerator capabilities. This paper investigates the speed-up and model accuracy achieved by three popular HPO algorithms (early stopping, Bayesian, and genetic optimization approaches) when combined with the mixed precision functionality of NVIDIA A100 GPUs with Tensor Cores. The benchmarks are run on 64 GPUs in parallel across three datasets: two from the vision domain and one from the CFD domain. The results show that, depending on the algorithm, larger speed-ups can be achieved with mixed precision than with full precision HPO if the checkpoint frequency is kept low. In addition to the reduced runtime, small gains in generalization performance on the test set are observed.
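For readers unfamiliar with the mixed precision functionality the abstract refers to, the sketch below shows the common automatic mixed precision (AMP) training pattern in PyTorch, which is how FP16 Tensor Core execution is typically enabled on an A100. This is an illustrative assumption only; the abstract does not specify the framework or training code used in the paper, and the model, data, and hyperparameters below are placeholders.

```python
# Minimal sketch of mixed precision training with PyTorch AMP.
# Assumption: the paper's framework and code are not given; everything
# here (model, batch, loop length) is a stand-in for illustration.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(128, 10).to(device)            # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
# GradScaler rescales the loss to avoid FP16 gradient underflow
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(100):                                 # placeholder loop
    x = torch.randn(32, 128, device=device)            # placeholder batch
    y = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad(set_to_none=True)
    # autocast runs eligible ops in FP16, which maps onto Tensor Cores
    with torch.autocast(device_type=device, dtype=torch.float16,
                        enabled=(device == "cuda")):
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

In an HPO setting, a loop like this would form the objective evaluated for each hyperparameter configuration; the abstract's observation about checkpoint frequency suggests that how often such a loop writes model checkpoints materially affects the mixed precision speed-up.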
Event Type
Workshop
Time
Sunday, 12 November 2023, 5:18pm - 5:28pm MST
Location
704-706
Tags
Accelerators
Codesign
Heterogeneous Computing
Task Parallelism
Registration Categories
W