With Yaroslav Bulatov (Research Engineer, OpenAI).
Tue, Jun 18, 2019 @ 06:30 PM   FREE   Mission Hall UCSF, 550 16th St
 
   
 
 
EVENT DETAILS

In this session we will discuss
"An Empirical Model of Large-Batch Training"

The discussion will be led by Yaroslav Bulatov:
https://medium.com/@yaroslavvb

Everyone should take time to read the paper in detail several days in advance of the meetup, & to the greatest extent possible, read the key references from the paper.

Main paper:
An Empirical Model of Large-Batch Training
https://arxiv.org/abs/1812.06162

Blog posts:
https://openai.com/blog/science-of-ai/
https://towardsdatascience.com/scaling-transformer-xl-to-128-gpus-27afce2d23d6

Abstract
In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However, the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. In this paper, we demonstrate that a simple & easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains & applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari & Dota), & even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run & depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efficiency & time-efficiency, & provides a rough model of the benefits of adaptive batch-size training.
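
For anyone who wants to experiment before the meetup, here is a minimal Python sketch (our own illustration, not code from the paper or OpenAI) of the two-batch estimator for the simple gradient noise scale B_simple = tr(Sigma) / |G|^2, following Appendix A of the paper. The function & variable names (simple_noise_scale, g2_small, g2_big) are illustrative assumptions; the g2_* inputs are the measured squared L2 norms of minibatch gradients at two different batch sizes.

def simple_noise_scale(g2_small, g2_big, b_small, b_big):
    """Estimate B_simple = tr(Sigma) / |G|^2 from squared gradient norms
    measured at two batch sizes (Appendix A of arXiv:1812.06162)."""
    # E[|G_B|^2] = |G|^2 + tr(Sigma) / B, so a batch-size-weighted difference
    # of the two measurements cancels the noise term & recovers |G|^2 ...
    g2 = (b_big * g2_big - b_small * g2_small) / (b_big - b_small)
    # ... while a plain difference recovers the covariance trace tr(Sigma).
    trace_sigma = (g2_small - g2_big) / (1.0 / b_small - 1.0 / b_big)
    return trace_sigma / g2

# Example: squared gradient norms of 1.00 at batch 256 & 0.70 at batch 4096
print(simple_noise_scale(1.00, 0.70, 256, 4096))  # ~120

The paper uses this noise scale as a rough predictor of the critical batch size, beyond which increasing data parallelism yields diminishing returns.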

 
 
 
 