How should the learning rate change as the batch size changes?

Should I change the learning rate when I increase or decrease the size of the mini-batches used in SGD?

For reference, I discussed this with someone, and they said that when the batch size is increased, the learning rate should be decreased to some extent.

My understanding is that when I increase the batch size, the computed average gradient will be less noisy, so I should either keep the learning rate the same or increase it.

Also, if I use an adaptive optimizer such as Adam or RMSProp, then I suppose I can keep the learning rate unchanged.

Please correct me if I am wrong, and share any insight you have on this.


Theory suggests that when multiplying the batch size by k, one should multiply the learning rate by sqrt(k) to keep the variance of the gradient estimate constant. See page 5 of A. Krizhevsky, One weird trick for parallelizing convolutional neural networks: https://arxiv.org/abs/1404.5997

However, recent experiments with large mini-batches suggest a simpler linear scaling rule, i.e., multiply your learning rate by k when using a mini-batch size of kN. See P. Goyal et al.: Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour https://arxiv.org/abs/1706.02677
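A minimal sketch of the two scaling rules above (the function name and baseline values are illustrative, not from either paper):

```python
import math

def scale_learning_rate(base_lr, base_batch, new_batch, rule="linear"):
    """Scale a baseline learning rate when the mini-batch size changes.

    rule="linear": linear scaling rule (Goyal et al.): lr grows with batch size.
    rule="sqrt":   square-root rule (Krizhevsky): lr grows with sqrt(batch size).
    """
    k = new_batch / base_batch
    if rule == "linear":
        return base_lr * k
    if rule == "sqrt":
        return base_lr * math.sqrt(k)
    raise ValueError(f"unknown rule: {rule}")

# Baseline: lr = 0.1 at batch size 256; scale up to batch size 1024 (k = 4).
print(scale_learning_rate(0.1, 256, 1024, "linear"))  # 0.4
print(scale_learning_rate(0.1, 256, 1024, "sqrt"))    # 0.2
```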

I would say that with Adam, Adagrad, and other adaptive optimizers, the learning rate may remain the same if the batch size does not change substantially.

Apart from the papers mentioned in Dmytro's answer, you can refer to the article by Jastrzębski, S., Kenton, Z., Arpit, D., Ballas, N., Fischer, A., Bengio, Y., & Storkey, A. (2018, October): Width of Minima Reached by Stochastic Gradient Descent is Influenced by Learning Rate to Batch Size Ratio. The authors give a mathematical and empirical foundation for the idea that the ratio of learning rate to batch size influences the generalization capacity of a DNN. They show that this ratio plays a major role in the width of the minima found by SGD: the higher the ratio, the wider the minima and the better the generalization.

Learning Rate Scaling for Dummies

I've always found the heuristics, which seem to vary somewhere between scaling with the square root of the batch size and scaling linearly with the batch size, to be a bit hand-wavy and fluffy, as is often the case in Deep Learning. Hence I devised my own theoretical framework to answer this question.

EDIT: Since the posting of this answer, my paper on this topic has been published in the Journal of Machine Learning Research (https://www.jmlr.org/papers/volume23/20-1258/20-1258.pdf). I want to thank the stackoverflow community for believing in my ideas, engaging with me and probing me, at a time when the research community dismissed me out of hand.

Learning Rate is a function of the Largest Eigenvalue

Let me start with two small sub-questions, which together answer the main question:

  • Are there any cases where we can a priori know the optimal learning rate?

Yes: for a convex quadratic, the optimal learning rate is given as 2/(λ+μ), where λ and μ represent the largest and smallest eigenvalues of the Hessian (the second derivative of the loss, ∇∇L, which is a matrix), respectively.
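As a small illustration of this formula (the quadratic, its dimension, and all numerical values here are made up for demonstration), gradient descent on a convex quadratic with step size 2/(λ+μ) converges to the minimiser:

```python
import numpy as np

rng = np.random.default_rng(0)

# A convex quadratic loss L(w) = 0.5 * w^T H w with a random SPD Hessian.
A = rng.standard_normal((5, 5))
H = A @ A.T + 0.1 * np.eye(5)      # symmetric positive definite

eigvals = np.linalg.eigvalsh(H)    # eigenvalues in ascending order
mu, lam = eigvals[0], eigvals[-1]  # smallest and largest eigenvalues
lr_opt = 2.0 / (lam + mu)          # optimal fixed step size for GD

# Gradient descent: the gradient of L is H @ w, and the minimiser is w = 0.
w = rng.standard_normal(5)
for _ in range(2000):
    w = w - lr_opt * (H @ w)

print(np.linalg.norm(w))           # close to 0: converged to the minimiser
```

The worst-case contraction per step is (λ−μ)/(λ+μ), which is why the conditioning of the Hessian controls how fast this converges.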

  • How do we expect these eigenvalues (which represent how much the loss changes along an infinitesimal move in the direction of the eigenvectors) to change as a function of batch size?

This is actually a little trickier to answer (it is what I built the theory for in the first place), but it goes something like this.

Let us imagine that we have all the data, which would give us the full Hessian H. In practice, however, we only sub-sample, so we work with a batch Hessian B. We can simply rewrite B = H + (B − H) = H + E, where E is now an error or fluctuation matrix.

Under some technical assumptions on the nature of the elements of E, we can treat this fluctuation as a zero-mean random matrix, so the batch Hessian becomes a fixed matrix plus a random matrix.

For this model, the change in the eigenvalues (which determines how large the learning rate can be) is known. My paper contains another, fancier model, but the answer is more or less the same.

What actually happens? Experiments and Scaling Rules

I attach a plot of what happens when the largest eigenvalue of the full-data matrix lies far outside the spectrum of the noise matrix (usually the case). As we increase the mini-batch size, the noise matrix shrinks, so the largest eigenvalue also decreases in size, and hence larger learning rates can be used. This effect is initially proportional, and continues to be approximately proportional until a threshold, after which no appreciable decrease happens.
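The fixed-matrix-plus-random-matrix picture can be simulated directly. In this sketch the dimensions, the noise model, and the outlier eigenvalue of 10 are my own illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50

# "Full-data" Hessian H with one dominant (outlier) eigenvalue of 10.
H = np.diag([10.0] + [0.5] * (d - 1))

def batch_hessian(batch_size):
    """B = H + E, where E is the mean of zero-mean symmetric noise matrices,
    one per sample; averaging over a bigger batch shrinks the fluctuation."""
    noise = rng.standard_normal((batch_size, d, d))
    E = (noise + noise.transpose(0, 2, 1)) / 2  # symmetrise each sample's noise
    return H + E.mean(axis=0)

for b in [1, 4, 16, 64, 256]:
    lam_max = np.linalg.eigvalsh(batch_hessian(b))[-1]
    print(f"batch={b:4d}  largest eigenvalue ~ {lam_max:.2f}")
```

As the batch size grows, the largest eigenvalue of B shrinks toward the full-data value of 10, so a stable step size of order 2/λ_max can grow roughly in proportion, until the noise becomes negligible and the curve flattens.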

[Plot: change of the largest eigenvalue with mini-batch size]

How well does this hold in practice? The answer, as shown below in my plot for VGG-16 without batch norm (see the paper for batch normalisation and ResNets), is: very well.

[Plot: learning rate scaling for VGG-16 without batch norm]

I would hasten to add that for adaptive methods, if you use a small numerical stability constant (epsilon for Adam), the argument is a little different, because there is an interplay between the eigenvalues, the estimated eigenvalues, and your stability constant. So you actually end up with a square-root rule, up to a threshold. Quite why nobody is discussing or has published this result is honestly a little beyond me.

[Plot: learning rate scaling for adaptive optimizers]

But if you want my practical advice: stick with SGD, scale the learning rate proportionally with the batch size while the batch size is small, and stop increasing it beyond a certain point.
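That advice can be written as a one-line rule. The cap of 2048 and the baseline values below are placeholders you would tune for your own model, not numbers from the answer:

```python
def sgd_learning_rate(base_lr, base_batch, batch, max_batch=2048):
    """Linear scaling with a cap: the learning rate grows proportionally with
    the batch size, but only up to max_batch; past that point, larger batches
    no longer buy you a larger stable step."""
    effective = min(batch, max_batch)
    return base_lr * effective / base_batch

print(sgd_learning_rate(0.1, 256, 512))   # 0.2 (proportional regime)
print(sgd_learning_rate(0.1, 256, 8192))  # 0.8 (capped at max_batch=2048)
```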