Author: 科技猛兽

Reposted from: 极市平台

The ideas in this article come from this excellent post:

Juliuszh: 一个框架看懂优化算法之异同 SGD/AdaGrad/Adam

https://zhuanlan.zhihu.com/p/32230623

It organizes the various deep learning optimizers (from SGD to AdamW) under a single unified framework; compared with the linked post, this article approaches the optimizers from the source-code angle.

The code comes from the official PyTorch 1.7.0 documentation:

https://pytorch.org/docs/1.7.0/optim.html

First, let us review the various optimization algorithms.

Deep learning optimizers evolved along the path SGD -> SGDM -> NAG -> AdaGrad -> AdaDelta -> Adam -> Nadam -> AdamW. A quick Google search turns up plenty of tutorials that explain, step by step, how each algorithm grew out of the previous one. Here we take a different route: we use one framework to organize all of these algorithms and obtain a more bird's-eye comparison.

  • The unified framework:

First, define: the parameters to optimize, $w$; the objective function, $f(w)$; and the initial learning rate, $\alpha$.

Then iterative optimization begins. At each epoch $t$:
1 Compute the gradient of the objective function with respect to the current parameters:

$g_t = \nabla f(w_t) \qquad (1)$

2 Compute the first moment and the second moment from the historical gradients:

$m_t = \phi(g_1, g_2, \dots, g_t); \quad V_t = \psi(g_1, g_2, \dots, g_t) \qquad (2)$

3 Compute the descent step at the current time:

$\eta_t = \alpha \cdot m_t / \sqrt{V_t} \qquad (3)$

4 Update the parameters with the descent step:

$w_{t+1} = w_t - \eta_t \qquad (4)$

Once you have mastered this framework, you can design optimization algorithms of your own with ease.
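To make the framework concrete, here is a minimal sketch in Python (the names phi, psi and grad_fn are placeholders standing in for each algorithm's own rules, not part of any library):

import numpy as np

def optimize(w, grad_fn, phi, psi, lr=0.01, steps=100):
    """Skeleton of the unified framework: steps 1-4 per iteration."""
    grads = []                               # gradient history g_1 .. g_t
    for t in range(1, steps + 1):
        g_t = grad_fn(w)                     # step 1: gradient of the objective
        grads.append(g_t)
        m_t = phi(grads)                     # step 2: first moment
        V_t = psi(grads)                     # step 2: second moment
        eta_t = lr * m_t / np.sqrt(V_t)      # step 3: descent step
        w = w - eta_t                        # step 4: parameter update
    return w

For instance, plain SGD is recovered with phi=lambda gs: gs[-1] and psi=lambda gs: 1.0.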

Armed with this framework, let us shine a light on the true face of each of these seemingly mystical optimizers. Steps 3 and 4 are identical for every algorithm; the main differences lie in steps 1 and 2, that is, in the recipes used to compute the first moment $m_t$ and the second moment $V_t$. Once the two are computed, every algorithm applies the fixed learning rate $\alpha$ to them to obtain the current descent step $\eta_t$, and finally updates the parameters.

Some functions serve the same purpose across all of the optimizers' code. The shared methods are:

  • add_param_group(param_group): adds a parameter group to the optimizer. This is useful when fine-tuning a pre-trained network: frozen layers can be made trainable and added to the optimizer as training progresses.

  • load_state_dict(state_dict): loads the optimizer state.

  • state_dict(): returns the optimizer state as a dict.

  • step(closure=None): performs a single optimization step.

  • zero_grad(set_to_none=False): sets the gradients of all parameters to zero.

Usage:

for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)

Now let us begin in earnest.

SGD

Let us look at SGD first. SGD has no notion of momentum, that is:

$m_t = g_t; \quad V_t = I^2 \qquad (5)$

Substituting into step 3, the descent step is simply the most basic

$\eta_t = \alpha \cdot g_t \qquad (6)$

SGD's biggest drawbacks are its slow descent and its tendency to keep oscillating between the two walls of a ravine, stalling at a local optimum.
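As a sanity check, plugging $m_t = g_t$ and a unit second moment into the skeleton above reduces to the familiar update. A toy run on $f(w) = w^2$, whose gradient is $2w$ (this example is mine, not the original article's):

w, lr = 5.0, 0.1
for t in range(50):
    g = 2 * w          # step 1: gradient of f(w) = w^2
    w = w - lr * g     # steps 3-4: eta_t = alpha * g_t, then update
print(w)               # tends toward the minimum at w = 0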

SGD with Momentum

To damp SGD's oscillations, SGDM argues that gradient descent can carry inertia: going downhill, when the slope is steep, use the inertia to run a little faster. SGDM is short for SGD with momentum; it introduces a first moment on top of SGD:

$m_t = \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g_t \qquad (7)$

The first moment is an exponential moving average of the gradient directions over time, approximately the average of the gradient vectors over the most recent $1/(1-\beta_1)$ steps.

In other words, the descent direction at time $t$ is determined not only by the gradient at the current point but also by the descent directions accumulated before it. The usual value of $\beta_1$ is 0.9, meaning the descent direction is mainly the accumulated one, tilted slightly toward the current gradient. Picture a car turning on a highway: it veers gently while keeping its speed; a sharp turn at speed would cause an accident.
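In code, the recursion of Eq. (7) is just one extra state variable (same toy objective $f(w) = w^2$ as above; a sketch, not library code):

w, lr, beta1, m = 5.0, 0.1, 0.9, 0.0
for t in range(50):
    g = 2 * w
    m = beta1 * m + (1 - beta1) * g   # Eq. (7): EMA of the gradients
    w = w - lr * m                    # descend along the accumulated direction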

SGD with Nesterov Acceleration

Another problem with SGD is getting trapped, oscillating, in the ravine of a local optimum. Imagine walking into a basin surrounded by slightly higher hills: seeing no downhill direction, you conclude you must stay put. Yet if you climbed one of the hills, you would find the world beyond still vast. So instead of surveying the future from where we stand, we should step forward first, look one step ahead, and see a little farther.

NAG, short for Nesterov Accelerated Gradient, is a further improvement on SGD and SGD-M, and the improvement is in step 1. We know that the main descent direction at time $t$ is dictated by the accumulated momentum; the current gradient has little say. So rather than looking at the current gradient, we might as well first follow the accumulated momentum one step and ask how to move from there. Accordingly, in step 1 NAG computes not the gradient at the current position, but the descent direction at the point reached by following the accumulated momentum one step:

$g_t = \nabla f\!\left(w_t - \alpha \cdot m_{t-1} / \sqrt{V_{t-1}}\right) \qquad (8)$

The gradient direction at that next point is then combined with the historical accumulated momentum to compute the current accumulated momentum of step 2.
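A sketch of the lookahead of Eq. (8) on the same toy objective (one possible rendering; as shown later, PyTorch instead folds Nesterov into the momentum buffer):

w, lr, beta1, m = 5.0, 0.1, 0.9, 0.0
for t in range(50):
    w_ahead = w - lr * m              # follow the accumulated momentum one step
    g = 2 * w_ahead                   # Eq. (8): gradient at the lookahead point
    m = beta1 * m + (1 - beta1) * g   # then accumulate as in SGDM
    w = w - lr * m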

Defining the optimizer:

CLASS torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False)

Parameters:

  • params (iterable) – the model parameters the optimizer acts on.

  • lr (float) – learning rate, the $\alpha$ of the unified framework.

  • momentum (float, optional) – momentum factor (default: 0).

  • weight_decay (float, optional) – weight decay coefficient (L2 penalty) (default: 0).

  • dampening (float, optional) – dampening for momentum (default: 0).

  • nesterov (bool, optional) – enables Nesterov momentum (default: False).


Source code walkthrough:

import torch
from .optimizer import Optimizer, required


class SGD(Optimizer):
    r"""Implements stochastic gradient descent (optionally with momentum).

    Nesterov momentum is based on the formula from
    `On the importance of initialization and momentum in deep learning`__.

    Args:
        params (iterable): iterable of parameters to optimize or dicts defining
            parameter groups
        lr (float): learning rate
        momentum (float, optional): momentum factor (default: 0)
        weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
        dampening (float, optional): dampening for momentum (default: 0)
        nesterov (bool, optional): enables Nesterov momentum (default: False)

    Example:
        >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
        >>> optimizer.zero_grad()
        >>> loss_fn(model(input), target).backward()
        >>> optimizer.step()

    __ http://www.cs.toronto.edu/%7Ehinton/absps/momentum.pdf

    .. note::
        The implementation of SGD with Momentum/Nesterov subtly differs from
        Sutskever et. al. and implementations in some other frameworks.

        Considering the specific case of Momentum, the update can be written as

        .. math::
            \begin{aligned}
                v_{t+1} & = \mu * v_{t} + g_{t+1}, \\
                p_{t+1} & = p_{t} - \text{lr} * v_{t+1},
            \end{aligned}

        where :math:`p`, :math:`g`, :math:`v` and :math:`\mu` denote the
        parameters, gradient, velocity, and momentum respectively.

        This is in contrast to Sutskever et. al. and
        other frameworks which employ an update of the form

        .. math::
            \begin{aligned}
                v_{t+1} & = \mu * v_{t} + \text{lr} * g_{t+1}, \\
                p_{t+1} & = p_{t} - v_{t+1}.
            \end{aligned}

        The Nesterov version is analogously modified.
    """

    def __init__(self, params, lr=required, momentum=0, dampening=0,
                 weight_decay=0, nesterov=False):
        if lr is not required and lr < 0.0:
            raise ValueError("Invalid learning rate: {}".format(lr))
        if momentum < 0.0:
            raise ValueError("Invalid momentum value: {}".format(momentum))
        if weight_decay < 0.0:
            raise ValueError("Invalid weight_decay value: {}".format(weight_decay))

        defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
                        weight_decay=weight_decay, nesterov=nesterov)
        if nesterov and (momentum <= 0 or dampening != 0):
            raise ValueError("Nesterov momentum requires a momentum and zero dampening")
        super(SGD, self).__init__(params, defaults)

    def __setstate__(self, state):
        super(SGD, self).__setstate__(state)
        for group in self.param_groups:
            group.setdefault('nesterov', False)

    @torch.no_grad()
    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            weight_decay = group['weight_decay']
            momentum = group['momentum']
            dampening = group['dampening']
            nesterov = group['nesterov']

            for p in group['params']:
                if p.grad is None:
                    continue
                d_p = p.grad
                if weight_decay != 0:
                    d_p = d_p.add(p, alpha=weight_decay)
                if momentum != 0:
                    param_state = self.state[p]
                    if 'momentum_buffer' not in param_state:
                        buf = param_state['momentum_buffer'] = torch.clone(d_p).detach()
                    else:
                        buf = param_state['momentum_buffer']
                        buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
                    if nesterov:
                        d_p = d_p.add(buf, alpha=momentum)
                    else:
                        d_p = buf

                p.add_(d_p, alpha=-group['lr'])

        return loss

Here d_p = p.grad retrieves each parameter's gradient, i.e. the $g_t$ of Eq. (1).

If weight_decay is used, the objective effectively gains an extra term $\frac{\lambda}{2}\|w\|_2^2$, so the gradient must gain an extra $\lambda \cdot w$; hence d_p = d_p.add(p, alpha=weight_decay).

buf.mul_(momentum).add_(d_p, alpha=1 - dampening) computes the momentum. The momentum parameter $\beta_1$ is typically 0.9, so this is the previous momentum buf times $\beta_1$, plus the current gradient d_p times $(1 - \text{dampening})$.

If the parameters are not updated the Nesterov way, the $m_t$ entering Eq. (3) is simply the momentum buf computed above. If they are updated the Nesterov way, the quantity entering Eq. (3) becomes $g_t + \beta_1 \cdot m_t$ instead; compared with the non-Nesterov case, the difference is that the momentum term $\beta_1 \cdot m_t$ is added on top of the raw gradient rather than the buffer being used alone.

Finally, p.add_(d_p, alpha=-group['lr']) performs the parameter update, which corresponds to Eqs. (3) and (4) above.
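Putting it together, a typical (hypothetical) training snippet with Nesterov momentum might look like this; the model and data are stand-ins:

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4,
                            nesterov=True)   # needs momentum > 0 and dampening == 0

x, y = torch.randn(32, 10), torch.randn(32, 1)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()   # applies p <- p - lr * d_p as analyzed above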

AdaGrad

So far we have not used a second moment at all. The arrival of the second moment marks the era of "adaptive learning rate" optimizers. SGD and its variants update every parameter with the same learning rate, but deep neural networks contain huge numbers of parameters, and not all of them are used all the time (think of large-scale embeddings). For frequently updated parameters we have already accumulated plenty of knowledge about them; we do not want them swayed too much by any single sample, so we prefer a smaller learning rate. For rarely updated parameters we know too little, and we would like to learn more from each occasional sample, i.e. use a larger learning rate.

How do we measure historical update frequency? With the second moment: along each dimension, the sum of the squares of all gradients seen so far:

$V_t = \sum_{\tau=1}^{t} g_\tau^2 \qquad (9)$

Recall the descent step of step 3:

$\eta_t = \alpha \cdot m_t / \sqrt{V_t} \qquad (3)$

We can see that the effective learning rate has changed from $\alpha$ to $\alpha / \sqrt{V_t}$. To avoid a zero denominator, a small smoothing term is usually added, giving $\alpha / (\sqrt{V_t} + \varepsilon)$, which is strictly positive. The more frequently a parameter has been updated, the larger its second moment grows, and the smaller its learning rate becomes.

This method performs very well on sparse data. But it has problems too: since $V_t$ is monotonically increasing, the learning rate decays monotonically toward 0, which may end the training process prematurely; even if more data arrive later, the necessary knowledge can no longer be learned.
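The accumulation of Eq. (9) and the resulting shrinking step are easy to see in a toy sketch (again on $f(w) = w^2$; not from the original article):

import numpy as np

# AdaGrad: V_t accumulates squared gradients forever, so the effective
# step lr / (sqrt(V_t) + eps) can only shrink as training goes on.
w, lr, eps, V = 5.0, 1.0, 1e-10, 0.0
for t in range(50):
    g = 2 * w
    V = V + g * g                         # Eq. (9): monotone accumulation
    w = w - lr * g / (np.sqrt(V) + eps)   # ever-decreasing effective step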

Defining the optimizer:

CLASS torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-10)

Parameters:

  • params (iterable) – the model parameters the optimizer acts on.

  • lr (float) – learning rate, the $\alpha$ of the unified framework.

  • lr_decay (float, optional) – learning rate decay (default: 0).

  • weight_decay (float, optional) – weight decay coefficient (L2 penalty) (default: 0).

  • initial_accumulator_value (float, optional) – initial value of the squared-gradient accumulator (default: 0).

  • eps (float, optional) – a small term added to the denominator to prevent division by zero (default: 1e-10).

Source code walkthrough:

import torch
from . import functional as F
from .optimizer import Optimizer


class Adagrad(Optimizer):
    """Implements Adagrad algorithm.

    It has been proposed in `Adaptive Subgradient Methods for Online Learning
    and Stochastic Optimization`_.

    Arguments:
        params (iterable): iterable of parameters to optimize or dicts defining
            parameter groups
        lr (float, optional): learning rate (default: 1e-2)
        lr_decay (float, optional): learning rate decay (default: 0)
        weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
        eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-10)

    .. _Adaptive Subgradient Methods for Online Learning and Stochastic
        Optimization: http://jmlr.org/papers/v12/duchi11a.html
    """

    def __init__(self, params, lr=1e-2, lr_decay=0, weight_decay=0,
                 initial_accumulator_value=0, eps=1e-10):
        if not 0.0 <= lr:
            raise ValueError("Invalid learning rate: {}".format(lr))
        if not 0.0 <= lr_decay:
            raise ValueError("Invalid lr_decay value: {}".format(lr_decay))
        if not 0.0 <= weight_decay:
            raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
        if not 0.0 <= initial_accumulator_value:
            raise ValueError("Invalid initial_accumulator_value value: {}".format(initial_accumulator_value))
        if not 0.0 <= eps:
            raise ValueError("Invalid epsilon value: {}".format(eps))

        defaults = dict(lr=lr, lr_decay=lr_decay, eps=eps, weight_decay=weight_decay,
                        initial_accumulator_value=initial_accumulator_value)
        super(Adagrad, self).__init__(params, defaults)

        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'] = 0
                state['sum'] = torch.full_like(p, initial_accumulator_value,
                                               memory_format=torch.preserve_format)

    def share_memory(self):
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['sum'].share_memory_()

    @torch.no_grad()
    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            params_with_grad = []
            grads = []
            state_sums = []
            state_steps = []

            for p in group['params']:
                if p.grad is not None:
                    params_with_grad.append(p)
                    grads.append(p.grad)
                    state = self.state[p]
                    state_sums.append(state['sum'])
                    # update the steps for each param group update
                    state['step'] += 1
                    # record the step after step update
                    state_steps.append(state['step'])

            F.adagrad(params_with_grad,
                      grads,
                      state_sums,
                      state_steps,
                      group['lr'],
                      group['weight_decay'],
                      group['lr_decay'],
                      group['eps'])

        return loss
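The per-parameter arithmetic is delegated to F.adagrad, so there is nothing to walk through line by line here; a hypothetical usage snippet (model and data are stand-ins):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)

x, y = torch.randn(32, 10), torch.randn(32, 1)
optimizer.zero_grad()
nn.functional.mse_loss(model(x), y).backward()
optimizer.step()   # steps scaled per-parameter by 1 / (sqrt(state['sum']) + eps)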

AdaDelta / RMSProp

AdaGrad's monotonically shrinking learning rate is too aggressive, so we consider changing how the second moment is computed: instead of accumulating the entire gradient history, only look at the descent gradients within a recent time window. That is where the Delta in the name AdaDelta comes from.

The modification is simple. As noted earlier, an exponential moving average is roughly an average over a recent period, so we use it to compute the accumulated second moment:

$V_t = \beta_2 \cdot V_{t-1} + (1 - \beta_2) \cdot g_t^2 \qquad (10)$

Step 3 is then as before:

$\eta_t = \alpha \cdot m_t / \sqrt{V_t} \qquad (11)$

This avoids the problem of the second moment accumulating without bound and cutting training short.
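A toy sketch of the windowed second moment of Eq. (10), using $\beta_2 = 0.99$ to match RMSprop's default alpha (my example, same toy objective as before):

import numpy as np

# EMA second moment: unlike AdaGrad's running sum, V_t can shrink again
# when recent gradients are small, so the effective step can recover.
w, lr, beta2, eps, V = 5.0, 0.1, 0.99, 1e-8, 0.0
for t in range(50):
    g = 2 * w
    V = beta2 * V + (1 - beta2) * g * g   # Eq. (10): exponential moving average
    w = w - lr * g / (np.sqrt(V) + eps)   # Eq. (11): adaptive descent step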

RMSProp

Defining the optimizer:

CLASS torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)

Parameters:

  • params (iterable) – the model parameters the optimizer acts on.

  • lr (float) – learning rate, the $\alpha$ of the unified framework.

  • momentum (float, optional) – momentum factor (default: 0).

  • alpha (float, optional) – smoothing constant (default: 0.99).

  • centered (bool, optional) – if True, compute the centered RMSProp: the gradient is normalized by an estimate of its variance.

  • weight_decay (float, optional) – weight decay coefficient (L2 penalty) (default: 0).

  • eps (float, optional) – a small term added to the denominator to prevent division by zero (default: 1e-08).

Source code walkthrough:

import torch
from .optimizer import Optimizer


class RMSprop(Optimizer):
    r"""Implements RMSprop algorithm.

    Proposed by G. Hinton in his
    `course <https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf>`_.

    The centered version first appears in `Generating Sequences
    With Recurrent Neural Networks <https://arxiv.org/pdf/1308.0850v5.pdf>`_.

    The implementation here takes the square root of the gradient average before
    adding epsilon (note that TensorFlow interchanges these two operations). The effective
    learning rate is thus :math:`\alpha/(\sqrt{v} + \epsilon)` where :math:`\alpha`
    is the scheduled learning rate and :math:`v` is the weighted moving average
    of the squared gradient.

    Arguments:
        params (iterable): iterable of parameters to optimize or dicts defining
            parameter groups
        lr (float, optional): learning rate (default: 1e-2)
        momentum (float, optional): momentum factor (default: 0)
        alpha (float, optional): smoothing constant (default: 0.99)
        eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-8)
        centered (bool, optional) : if ``True``, compute the centered RMSProp,
            the gradient is normalized by an estimation of its variance
        weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
    """

    def __init__(self, params, lr=1e-2, alpha=0.99, eps=1e-8, weight_decay=0,
                 momentum=0, centered=False):
        if not 0.0 <= lr:
            raise ValueError("Invalid learning rate: {}".format(lr))
        if not 0.0 <= eps:
            raise ValueError("Invalid epsilon value: {}".format(eps))
        if not 0.0 <= momentum:
            raise ValueError("Invalid momentum value: {}".format(momentum))
        if not 0.0 <= weight_decay:
            raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
        if not 0.0 <= alpha:
            raise ValueError("Invalid alpha value: {}".format(alpha))

        defaults = dict(lr=lr, momentum=momentum, alpha=alpha, eps=eps,
                        centered=centered, weight_decay=weight_decay)
        super(RMSprop, self).__init__(params, defaults)

    def __setstate__(self, state):
        super(RMSprop, self).__setstate__(state)
        for group in self.param_groups:
            group.setdefault('momentum', 0)
            group.setdefault('centered', False)

    @torch.no_grad()
    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad
                if grad.is_sparse:
                    raise RuntimeError('RMSprop does not support sparse gradients')
                state = self.state[p]

                # State initialization
                if len(state) == 0:
                    state['step'] = 0
                    state['square_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
                    if group['momentum'] > 0:
                        state['momentum_buffer'] = torch.zeros_like(p, memory_format=torch.preserve_format)
                    if group['centered']:
                        state['grad_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)

                square_avg = state['square_avg']
                alpha = group['alpha']

                state['step'] += 1

                if group['weight_decay'] != 0:
                    grad = grad.add(p, alpha=group['weight_decay'])

                square_avg.mul_(alpha).addcmul_(grad, grad, value=1 - alpha)

                if group['centered']:
                    grad_avg = state['grad_avg']
                    grad_avg.mul_(alpha).add_(grad, alpha=1 - alpha)
                    avg = square_avg.addcmul(grad_avg, grad_avg, value=-1).sqrt_().add_(group['eps'])
                else:
                    avg = square_avg.sqrt().add_(group['eps'])

                if group['momentum'] > 0:
                    buf = state['momentum_buffer']
                    buf.mul_(group['momentum']).addcdiv_(grad, avg)
                    p.add_(buf, alpha=-group['lr'])
                else:
                    p.addcdiv_(grad, avg, value=-group['lr'])

        return loss

Here grad = p.grad retrieves each parameter's gradient, i.e. the $g_t$ of Eq. (1).

If weight_decay is used, the objective effectively gains $\frac{\lambda}{2}\|w\|_2^2$, so the gradient must gain $\lambda \cdot w$; hence grad = grad.add(p, alpha=group['weight_decay']).

square_avg.mul_(alpha).addcmul_(grad, grad, value=1 - alpha) corresponds to Eq. (10) and computes the current $V_t$.

If centered is False, square_avg.sqrt().add_(group['eps']) simply takes the square root of $V_t$. If centered is True, the gradient is instead normalized by an estimate of its variance.

Finally, p.addcdiv_(grad, avg, value=-group['lr']) performs the parameter update, which corresponds to Eqs. (3) and (4) above.

RMSprop can be seen as a development of Adagrad and a sibling variant of Adadelta; its behavior sits between the two.
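For the centered branch, the tensor operations can be isolated into a self-contained sketch (stand-in values; it mirrors the in-place calls of the source above):

import torch

# centered=True: normalize by an estimate of the gradient's variance,
# sqrt(E[g^2] - E[g]^2), instead of the raw second moment sqrt(E[g^2]).
alpha, eps = 0.99, 1e-8
grad = torch.randn(3)          # stand-in gradient
square_avg = torch.zeros(3)    # EMA of g^2
grad_avg = torch.zeros(3)      # EMA of g, kept only when centered
square_avg.mul_(alpha).addcmul_(grad, grad, value=1 - alpha)
grad_avg.mul_(alpha).add_(grad, alpha=1 - alpha)
avg = square_avg.addcmul(grad_avg, grad_avg, value=-1).sqrt_().add_(eps)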

AdaDelta

Defining the optimizer:

CLASS torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)

Parameters:

  • params (iterable) – the model parameters the optimizer acts on.

  • lr (float) – learning rate, the $\alpha$ of the unified framework.

  • rho (float, optional) – coefficient of the running average of squared gradients (default: 0.9).

  • weight_decay (float, optional) – weight decay coefficient (L2 penalty) (default: 0).

  • eps (float, optional) – a small term added to the denominator to prevent division by zero (default: 1e-06).

Source code walkthrough:

import torch
from .optimizer import Optimizer


class Adadelta(Optimizer):
    """Implements Adadelta algorithm.

    It has been proposed in `ADADELTA: An Adaptive Learning Rate Method`__.

    Arguments:
        params (iterable): iterable of parameters to optimize or dicts defining
            parameter groups
        rho (float, optional): coefficient used for computing a running average
            of squared gradients (default: 0.9)
        eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-6)
        lr (float, optional): coefficient that scale delta before it is applied
            to the parameters (default: 1.0)
        weight_decay (float, optional): weight decay (L2 penalty) (default: 0)

    __ https://arxiv.org/abs/1212.5701
    """

    def __init__(self, params, lr=1.0, rho=0.9, eps=1e-6, weight_decay=0):
        if not 0.0 <= lr:
            raise ValueError("Invalid learning rate: {}".format(lr))
        if not 0.0 <= rho <= 1.0:
            raise ValueError("Invalid rho value: {}".format(rho))
        if not 0.0 <= eps:
            raise ValueError("Invalid epsilon value: {}".format(eps))
        if not 0.0 <= weight_decay:
            raise ValueError("Invalid weight_decay value: {}".format(weight_decay))

        defaults = dict(lr=lr, rho=rho, eps=eps, weight_decay=weight_decay)
        super(Adadelta, self).__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad
                if grad.is_sparse:
                    raise RuntimeError('Adadelta does not support sparse gradients')
                state = self.state[p]

                # State initialization
                if len(state) == 0:
                    state['step'] = 0
                    state['square_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
                    state['acc_delta'] = torch.zeros_like(p, memory_format=torch.preserve_format)

                square_avg, acc_delta = state['square_avg'], state['acc_delta']
                rho, eps = group['rho'], group['eps']

                state['step'] += 1

                if group['weight_decay'] != 0:
                    grad = grad.add(p, alpha=group['weight_decay'])

                square_avg.mul_(rho).addcmul_(grad, grad, value=1 - rho)
                std = square_avg.add(eps).sqrt_()
                delta = acc_delta.add(eps).sqrt_().div_(std).mul_(grad)
                p.add_(delta, alpha=-group['lr'])
                acc_delta.mul_(rho).addcmul_(delta, delta, value=1 - rho)

        return loss

Here grad = p.grad retrieves each parameter's gradient, i.e. the $g_t$ of Eq. (1).

If weight_decay is used, the objective effectively gains $\frac{\lambda}{2}\|w\|_2^2$, so the gradient must gain $\lambda \cdot w$; hence grad = grad.add(p, alpha=group['weight_decay']).

square_avg.mul_(rho).addcmul_(grad, grad, value=1 - rho) corresponds to Eq. (10) and computes the current $V_t$. std = square_avg.add(eps).sqrt_() takes the square root of $V_t + \varepsilon$.

Finally, p.add_(delta, alpha=-group['lr']) performs the parameter update, which corresponds to Eqs. (3) and (4) above. The numerator of delta is $\sqrt{X_{t-1} + \varepsilon}$, where $X_{t-1}$ is the running average of the squared past updates, and its denominator is $\sqrt{V_t + \varepsilon}$; acc_delta maintains that running average of the squared deltas.
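The whole Adadelta recurrence in sketch form (toy objective; note the ratio of root-mean-square terms that replaces the external learning rate, which PyTorch still multiplies by lr):

import numpy as np

w, rho, eps = 5.0, 0.9, 1e-6
square_avg, acc_delta = 0.0, 0.0        # EMA of g^2 and of delta^2
for t in range(100):
    g = 2 * w
    square_avg = rho * square_avg + (1 - rho) * g * g
    delta = np.sqrt(acc_delta + eps) / np.sqrt(square_avg + eps) * g
    w = w - delta                       # here lr = 1.0
    acc_delta = rho * acc_delta + (1 - rho) * delta * delta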

Adam

At this point the appearance of Adam and Nadam is only natural: they are the grand synthesis of the methods above. We have seen SGD-M add a first moment to SGD, and AdaGrad and AdaDelta add a second moment. Use both the first and the second moment, and you get Adam: Adaptive + Momentum.

SGD's first moment:

$m_t = \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g_t \qquad (12)$

plus AdaDelta's second moment:

$V_t = \beta_2 \cdot V_{t-1} + (1 - \beta_2) \cdot g_t^2 \qquad (13)$

In the implementation, both moments are additionally bias-corrected, since they are initialized to zero and are therefore biased toward zero in the early steps:

$\hat{m}_t = m_t / (1 - \beta_1^t) \qquad (14)$

$\hat{V}_t = V_t / (1 - \beta_2^t) \qquad (15)$

The descent step then uses the corrected moments: $\eta_t = \alpha \cdot \hat{m}_t / (\sqrt{\hat{V}_t} + \varepsilon)$.

The two most common hyperparameters in optimization, $\beta_1$ and $\beta_2$, both appear here: the former controls the first moment, the latter the second.
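Putting Eqs. (12) through (15) together gives a compact Adam sketch (toy objective, my example):

import numpy as np

w, lr, beta1, beta2, eps = 5.0, 0.1, 0.9, 0.999, 1e-8
m, V = 0.0, 0.0
for t in range(1, 101):
    g = 2 * w
    m = beta1 * m + (1 - beta1) * g        # Eq. (12)
    V = beta2 * V + (1 - beta2) * g * g    # Eq. (13)
    m_hat = m / (1 - beta1 ** t)           # Eq. (14): bias correction
    V_hat = V / (1 - beta2 ** t)           # Eq. (15): bias correction
    w = w - lr * m_hat / (np.sqrt(V_hat) + eps)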

Nadam

Finally, Nadam. We called Adam the grand synthesis, yet somehow it left out Nesterov. That cannot stand, so we add it, following NAG's step 1:

$g_t = \nabla f\!\left(w_t - \alpha \cdot m_{t-1} / \sqrt{\hat{V}_{t-1}}\right)$

And that is Nesterov + Adam = Nadam.

Defining the optimizer:

CLASS torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

Parameters:

  • params (iterable) – the model parameters the optimizer acts on.

  • lr (float) – learning rate, the $\alpha$ of the unified framework.

  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999)).

  • weight_decay (float, optional) – weight decay coefficient (L2 penalty) (default: 0).

  • eps (float, optional) – a small term added to the denominator to prevent division by zero (default: 1e-08).

Source code walkthrough:

import math
import torch
from .optimizer import Optimizer


class Adam(Optimizer):
    r"""Implements Adam algorithm.

    It has been proposed in `Adam: A Method for Stochastic Optimization`_.

    Arguments:
        params (iterable): iterable of parameters to optimize or dicts defining
            parameter groups
        lr (float, optional): learning rate (default: 1e-3)
        betas (Tuple[float, float], optional): coefficients used for computing
            running averages of gradient and its square (default: (0.9, 0.999))
        eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-8)
        weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
        amsgrad (boolean, optional): whether to use the AMSGrad variant of this
            algorithm from the paper `On the Convergence of Adam and Beyond`_
            (default: False)

    .. _Adam\: A Method for Stochastic Optimization:
        https://arxiv.org/abs/1412.6980
    .. _On the Convergence of Adam and Beyond:
        https://openreview.net/forum?id=ryQu7f-RZ
    """

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                 weight_decay=0, amsgrad=False):
        if not 0.0 <= lr:
            raise ValueError("Invalid learning rate: {}".format(lr))
        if not 0.0 <= eps:
            raise ValueError("Invalid epsilon value: {}".format(eps))
        if not 0.0 <= betas[0] < 1.0:
            raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
        if not 0.0 <= betas[1] < 1.0:
            raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
        if not 0.0 <= weight_decay:
            raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
        defaults = dict(lr=lr, betas=betas, eps=eps,
                        weight_decay=weight_decay, amsgrad=amsgrad)
        super(Adam, self).__init__(params, defaults)

    def __setstate__(self, state):
        super(Adam, self).__setstate__(state)
        for group in self.param_groups:
            group.setdefault('amsgrad', False)

    @torch.no_grad()
    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad
                if grad.is_sparse:
                    raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
                amsgrad = group['amsgrad']

                state = self.state[p]

                # State initialization
                if len(state) == 0:
                    state['step'] = 0
                    # Exponential moving average of gradient values
                    state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
                    # Exponential moving average of squared gradient values
                    state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)
                    if amsgrad:
                        # Maintains max of all exp. moving avg. of sq. grad. values
                        state['max_exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)

                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                if amsgrad:
                    max_exp_avg_sq = state['max_exp_avg_sq']
                beta1, beta2 = group['betas']

                state['step'] += 1
                bias_correction1 = 1 - beta1 ** state['step']
                bias_correction2 = 1 - beta2 ** state['step']

                if group['weight_decay'] != 0:
                    grad = grad.add(p, alpha=group['weight_decay'])

                # Decay the first and second moment running average coefficient
                exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
                exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                if amsgrad:
                    # Maintains the maximum of all 2nd moment running avg. till now
                    torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
                    # Use the max. for normalizing running avg. of gradient
                    denom = (max_exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
                else:
                    denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])

                step_size = group['lr'] / bias_correction1

                p.addcdiv_(exp_avg, denom, value=-step_size)

        return loss

Here grad = p.grad retrieves each parameter's gradient, i.e. the $g_t$ of Eq. (1).

If weight_decay is used, the objective effectively gains $\frac{\lambda}{2}\|w\|_2^2$, so the gradient must gain $\lambda \cdot w$; hence grad = grad.add(p, alpha=group['weight_decay']).

exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1) computes Eq. (12).
exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2) computes Eq. (13).
Because of Eq. (15), the denominator is divided by math.sqrt(bias_correction2).
Because of Eq. (14), the step size in the numerator is divided by bias_correction1.
Finally, p.addcdiv_(exp_avg, denom, value=-step_size) performs the parameter update, which corresponds to Eqs. (3) and (4) above.
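To see why the corrections of Eqs. (14) and (15) matter, work through the very first step with the defaults $\beta_1 = 0.9$, $\beta_2 = 0.999$ (a worked check, not from the source):

beta1, beta2, g = 0.9, 0.999, 1.0
m = beta1 * 0.0 + (1 - beta1) * g       # 0.1, biased toward the zero init
V = beta2 * 0.0 + (1 - beta2) * g * g   # 0.001, likewise biased
m_hat = m / (1 - beta1 ** 1)            # 0.1 / 0.1     = 1.0 = g
V_hat = V / (1 - beta2 ** 1)            # 0.001 / 0.001 = 1.0 = g^2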

AdamW

Figure 1 below shows another improved version of Adam: AdamW.

In short, AdamW is the Adam optimizer plus L2 regularization, which keeps parameter values from growing too large; this much is introductory machine learning. Traditionally the L2 penalty is added directly to the loss: with the regularizer included, the loss becomes $f(w) + \frac{\lambda}{2}\|w\|_2^2$, so when computing the gradient $g_t$ the extra term $\lambda \cdot w$ must be added (the pink term in the figure).

AdamW, however, is slightly different: as the figure shows, it applies the regularization at the green position instead.

Figure 1: AdamW

Why do it this way? Let us quote the comment from the BERT codebase directly:

Just adding the square of the weights to the loss function is *not* the correct way of using L2 regularization/weight decay with Adam, since that will interact with the m and v parameters in strange ways. Instead we want to decay the weights in a manner that doesn't interact with the m/v parameters. This is equivalent to adding the square of the weights to the loss with plain (non-momentum) SGD. Add weight decay at the end (fixed version).

What this says is: if the L2 penalty is added straight to the loss, then because of Adam's subsequent operations, the penalty term interacts with $m_t$ and $V_t$ in strange ways. AdamW therefore chooses to apply the regularization term $\lambda \cdot w$ after Adam's $\hat{m}_t$, $\hat{V}_t$ and related quantities have been computed, alongside the multiplication by the learning rate $\alpha$, rather than mixing it into the gradient. This also shows that although weight_decay and L2 regularization share the same goal and the same formula, they are used differently; the two are clearly distinct. Take the AdamW code in PyTorch 1.7.0 as an example:

Defining the optimizer:

CLASS torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)

Parameters:

  • params (iterable) – the model parameters the optimizer acts on.

  • lr (float) – learning rate, the $\alpha$ of the unified framework.

  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999)).

  • weight_decay (float, optional) – weight decay coefficient (default: 0.01).

  • eps (float, optional) – a small term added to the denominator to prevent division by zero (default: 1e-08).

Source code walkthrough:

import math
import torch
from .optimizer import Optimizer


class AdamW(Optimizer):
    r"""Implements AdamW algorithm.

    The original Adam algorithm was proposed in `Adam: A Method for Stochastic Optimization`_.
    The AdamW variant was proposed in `Decoupled Weight Decay Regularization`_.

    Arguments:
        params (iterable): iterable of parameters to optimize or dicts defining
            parameter groups
        lr (float, optional): learning rate (default: 1e-3)
        betas (Tuple[float, float], optional): coefficients used for computing
            running averages of gradient and its square (default: (0.9, 0.999))
        eps (float, optional): term added to the denominator to improve
            numerical stability (default: 1e-8)
        weight_decay (float, optional): weight decay coefficient (default: 1e-2)
        amsgrad (boolean, optional): whether to use the AMSGrad variant of this
            algorithm from the paper `On the Convergence of Adam and Beyond`_
            (default: False)

    .. _Adam\: A Method for Stochastic Optimization:
        https://arxiv.org/abs/1412.6980
    .. _Decoupled Weight Decay Regularization:
        https://arxiv.org/abs/1711.05101
    .. _On the Convergence of Adam and Beyond:
        https://openreview.net/forum?id=ryQu7f-RZ
    """

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                 weight_decay=1e-2, amsgrad=False):
        if not 0.0 <= lr:
            raise ValueError("Invalid learning rate: {}".format(lr))
        if not 0.0 <= eps:
            raise ValueError("Invalid epsilon value: {}".format(eps))
        if not 0.0 <= betas[0] < 1.0:
            raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
        if not 0.0 <= betas[1] < 1.0:
            raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
        if not 0.0 <= weight_decay:
            raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
        defaults = dict(lr=lr, betas=betas, eps=eps,
                        weight_decay=weight_decay, amsgrad=amsgrad)
        super(AdamW, self).__init__(params, defaults)

    def __setstate__(self, state):
        super(AdamW, self).__setstate__(state)
        for group in self.param_groups:
            group.setdefault('amsgrad', False)

    @torch.no_grad()
    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue

                # Perform stepweight decay
                p.mul_(1 - group['lr'] * group['weight_decay'])

                # Perform optimization step
                grad = p.grad
                if grad.is_sparse:
                    raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
                amsgrad = group['amsgrad']

                state = self.state[p]

                # State initialization
                if len(state) == 0:
                    state['step'] = 0
                    # Exponential moving average of gradient values
                    state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
                    # Exponential moving average of squared gradient values
                    state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)
                    if amsgrad:
                        # Maintains max of all exp. moving avg. of sq. grad. values
                        state['max_exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)

                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                if amsgrad:
                    max_exp_avg_sq = state['max_exp_avg_sq']
                beta1, beta2 = group['betas']

                state['step'] += 1
                bias_correction1 = 1 - beta1 ** state['step']
                bias_correction2 = 1 - beta2 ** state['step']

                # Decay the first and second moment running average coefficient
                exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
                exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                if amsgrad:
                    # Maintains the maximum of all 2nd moment running avg. till now
                    torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
                    # Use the max. for normalizing running avg. of gradient
                    denom = (max_exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
                else:
                    denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])

                step_size = group['lr'] / bias_correction1

                p.addcdiv_(exp_avg, denom, value=-step_size)

        return loss

The difference from Adam: with weight_decay, Adam effectively adds $\frac{\lambda}{2}\|w\|_2^2$ to the objective, so the gradient gains an extra $\lambda \cdot w$, implemented as grad = grad.add(p, alpha=group['weight_decay']).

AdamW instead uses p.mul_(1 - group['lr'] * group['weight_decay']) to shrink the parameters directly:

$w_t \leftarrow w_t \cdot (1 - \alpha \cdot \lambda)$

before the Adam step; only this matches the green box in Figure 1.
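The contrast in sketch form (a hypothetical toy comparison; lam stands for weight_decay):

import numpy as np

lr, lam, beta1, beta2, eps = 1e-3, 1e-2, 0.9, 0.999, 1e-8

def adam_step(w, g, m, V, t):
    m = beta1 * m + (1 - beta1) * g
    V = beta2 * V + (1 - beta2) * g * g
    m_hat, V_hat = m / (1 - beta1 ** t), V / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(V_hat) + eps), m, V

w, m, V, g = 1.0, 0.0, 0.0, 0.5

# Adam + L2: the decay enters the gradient and is then rescaled by the
# adaptive denominator, entangling it with m and V.
w_l2, _, _ = adam_step(w, g + lam * w, m, V, t=1)

# AdamW: decay the weights directly, outside the adaptive machinery,
# then take a plain Adam step on the unmodified gradient.
w_adamw, _, _ = adam_step(w * (1 - lr * lam), g, m, V, t=1)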
