Accelerating Deep Neural Network Guided MCTS Using Adaptive Parallelism
Description
Deep Neural Network guided Monte-Carlo Tree Search (DNN-MCTS) is a powerful class of AI algorithms. The DNN operations are highly parallelizable, but the tree search operations are sequential and often become the system bottleneck. Existing MCTS parallel schemes on CPU platforms either exploit data parallelism at the cost of higher memory access latency, or take advantage of the local cache for low-latency accesses but constrain the search to a single thread. This work analyzes the tradeoffs of these parallel schemes and proposes an adaptive scheme that chooses the best-performing parallel scheme for each MCTS component on the CPU. Additionally, we propose an efficient method for searching for the optimal communication batch size when the CPU interfaces with DNN operations on an accelerator (GPU). Using a DNN-MCTS algorithm on board game benchmarks, we show that our work adaptively generates the best-performing parallel implementation, yielding 1.5-3x speedups compared with the baseline methods.
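
The presentation itself does not include code; as an illustration only, below is a minimal, hypothetical Python sketch of the leaf-batching idea the abstract describes: selections on the CPU accumulate leaves (using a virtual loss so several selections can proceed before their evaluations return) up to a configurable batch size, and each full batch triggers one batched DNN call, stubbed here with random values. All names and parameters (Node, fake_dnn_evaluate, batch_size, the placeholder game state) are assumptions, not taken from the work.

```python
# Hypothetical sketch of leaf-batched DNN-MCTS; not the authors' implementation.
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state            # opaque placeholder for a game position
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value_sum = 0.0
        self.virtual_loss = 0         # lets multiple selections proceed before evaluation

    def ucb(self, c=1.4):
        n = self.visits + self.virtual_loss
        if n == 0:
            return float("inf")
        exploit = self.value_sum / n
        explore = c * math.sqrt(math.log(self.parent.visits + 1) / n)
        return exploit + explore

def select_leaf(root):
    # Walk down by UCB, applying a virtual loss along the path.
    node = root
    node.virtual_loss += 1
    while node.children:
        node = max(node.children, key=Node.ucb)
        node.virtual_loss += 1
    return node

def expand(node, branching=4):
    # Placeholder expansion with a fixed branching factor.
    node.children = [Node(state=(node.state, i), parent=node) for i in range(branching)]

def fake_dnn_evaluate(states):
    # Stand-in for one batched DNN call on an accelerator (e.g. a GPU).
    return [random.uniform(-1.0, 1.0) for _ in states]

def backup(leaf, value):
    # Propagate the value to the root and release the virtual losses.
    node = leaf
    while node is not None:
        node.visits += 1
        node.value_sum += value
        node.virtual_loss = max(0, node.virtual_loss - 1)
        node = node.parent

def run_mcts(root, iterations=256, batch_size=8):
    pending = []
    for _ in range(iterations):
        pending.append(select_leaf(root))
        if len(pending) >= batch_size:          # flush one batch to the "DNN"
            values = fake_dnn_evaluate([n.state for n in pending])
            for leaf, v in zip(pending, values):
                expand(leaf)
                backup(leaf, v)
            pending.clear()
    if pending:                                  # flush any remainder
        for leaf, v in zip(pending, fake_dnn_evaluate([n.state for n in pending])):
            expand(leaf)
            backup(leaf, v)

root = Node(state="initial-position")
run_mcts(root, iterations=256, batch_size=8)
print(root.visits)
```

Under these assumptions, a smaller batch size keeps the tree statistics fresher while a larger one amortizes the CPU-to-accelerator transfer, which is the tradeoff the proposed batch-size search is described as navigating.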
Event Type
Workshop
Time
Sunday, 12 November 2023, 5:20pm - 5:30pm MST
Location
702
Tags
Algorithms
Applications
Architecture and Networks
Registration Categories
W