Auto-Tuning for Green Computing on GPUs
Time: Tuesday, June 23rd, 3:10pm - 3:15pm
Description: Because GPUs are a popular platform for accelerating applications, improving the energy efficiency of GPU computing is a relevant research topic for many application and infrastructure owners.
One way to address this problem is through auto-tuning. However, existing GPU auto-tuning strategies do not consider power capping. In this research, we demonstrate how auto-tuning can be enhanced to improve the energy efficiency of GPUs. Specifically, we define three auto-tuning strategies: (1) tuning core frequency, memory frequency, and the power cap; (2) tuning core and memory frequencies; and (3) tuning only the power cap. We show that, by tuning core and memory frequencies and the power cap together with the application's tunable parameters, we can improve the energy efficiency of GPUs for multiple benchmark applications. We also show that the power-capping level is a new tunable parameter in the context of GPUs, and that its optimal value depends on several factors, such as the GPU architecture, the application, and the objective function.
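The three strategies above differ only in which parameters are left free during the search. A minimal sketch of such a tuner is shown below; the candidate frequency and power-cap values and the measurement model are hypothetical placeholders (a real tuner would apply each configuration to the GPU, e.g. via NVML, run the kernel, and read back the measured energy).

```python
import itertools

# Hypothetical candidate values; real ranges depend on the GPU model.
CORE_FREQS_MHZ = [1000, 1100, 1200]   # core clock candidates
MEM_FREQS_MHZ = [3000, 3500]          # memory clock candidates
POWER_CAPS_W = [120, 150, 180]        # power-cap candidates

def measure_energy_joules(core_mhz, mem_mhz, cap_w):
    """Stand-in for a real measurement: run the kernel under the given
    settings and read the consumed energy from the GPU's power sensor.
    The synthetic model below is purely for illustration."""
    runtime_s = 1e6 / (0.7 * core_mhz + 0.3 * mem_mhz)          # faster clocks -> shorter run
    avg_power_w = min(cap_w, 0.1 * core_mhz + 0.01 * mem_mhz)   # draw limited by the cap
    return runtime_s * avg_power_w

def tune(strategy):
    """Exhaustively search the parameter space of one strategy and
    return the configuration minimizing energy consumption."""
    if strategy == "all":       # strategy (1): clocks + power cap
        space = itertools.product(CORE_FREQS_MHZ, MEM_FREQS_MHZ, POWER_CAPS_W)
    elif strategy == "freqs":   # strategy (2): clocks only, cap left at its maximum
        space = ((c, m, max(POWER_CAPS_W))
                 for c, m in itertools.product(CORE_FREQS_MHZ, MEM_FREQS_MHZ))
    else:                       # strategy (3): power cap only, clocks at defaults
        space = ((max(CORE_FREQS_MHZ), max(MEM_FREQS_MHZ), p) for p in POWER_CAPS_W)
    return min(space, key=lambda cfg: measure_energy_joules(*cfg))

best = tune("all")
print("best (core MHz, mem MHz, cap W):", best)
```

In practice the search space also includes the application's own tunable parameters (e.g. thread-block dimensions), and a smarter search strategy than exhaustive enumeration is usually used once the space grows.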
Our empirical analysis shows substantial reductions in energy consumption on an NVIDIA GTX980 for three applications - vector add, stencil, and matrix multiplication - of 21%, 23%, and 17%, respectively. These figures correspond to increases in energy efficiency of 23%, 11%, and 9%, respectively. We therefore conclude that auto-tuning can be successfully adapted for GPU green computing if and only if it includes core/memory frequency and the power cap as tunable parameters.