Presentation

Addressing Stale Gradients in Scalable Federated Deep Reinforcement Learning
Description
Advancements in reinforcement learning (RL) via deep neural networks have enabled their application to a variety of real-world problems. However, these applications often suffer from long training times. While attempts to distribute training have been successful in controlled scenarios, they face challenges in heterogeneous-capacity, unstable, and privacy-critical environments. This work applies concepts from federated learning (FL) to distributed RL, specifically addressing the stale gradient problem. A deterministic framework for asynchronous federated RL is used to explore dynamic methods for handling stale gradient updates in the Arcade Learning Environment. Experimental results from applying these methods to two Atari 2600 games demonstrate a relative speedup of up to 95% compared to plain A3C in large and unstable federations.
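The specific dynamic staleness-handling methods are not detailed in this abstract. As a rough illustration of the general idea, the sketch below shows one common approach on a central aggregator: down-weighting a delayed worker gradient by a function of its staleness (here 1 / (1 + staleness)). The function names, the discount rule, and the learning rate are illustrative assumptions, not the authors' method.

    import numpy as np

    def staleness_weight(staleness: int) -> float:
        # Hypothetical discount: older gradients contribute less (1 / (1 + staleness)).
        return 1.0 / (1.0 + staleness)

    def apply_stale_update(params: np.ndarray,
                           gradient: np.ndarray,
                           worker_version: int,
                           server_version: int,
                           lr: float = 0.01) -> np.ndarray:
        # Staleness = how many model versions the server has advanced since the
        # worker pulled the parameters it computed this gradient against.
        staleness = server_version - worker_version
        return params - lr * staleness_weight(staleness) * gradient

    # Example: a gradient computed against model version 7 arrives while the
    # server is already at version 10, so it is applied with weight 1/4.
    params = np.zeros(4)
    grad = np.ones(4)
    params = apply_stale_update(params, grad, worker_version=7, server_version=10)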
Event Type
Workshop
Time
Monday, 13 November 2023, 3:30pm - 3:54pm MST
Location
704-706
Tags
Artificial Intelligence/Machine Learning
Graph Algorithms and Frameworks
Registration Categories
W