1. Why Adjust the Learning Rate

The learning rate controls how fast the weights are updated by the gradients. In training, the learning rate is typically larger at the start, giving large update steps, and smaller in later stages, giving small update steps so the model can settle into a minimum.

Gradient descent: $w_{i+1} = w_{i} - g(w_{i})$. Gradient descent with a learning rate: $w_{i+1} = w_{i} - LR \cdot g(w_{i})$. The learning rate controls the size of the update step.
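As a quick illustration (a minimal sketch, not taken from the training code below), the snippet minimizes y = (2x)^2 with two different learning rates; the smaller step size converges steadily while the larger one overshoots and diverges:

import torch

def run_gd(lr, steps=5):
    # minimize y = (2x)^2 = 4x^2 (gradient is 8x), starting from x = 2
    x = torch.tensor([2.0], requires_grad=True)
    for _ in range(steps):
        y = (2 * x) ** 2
        y.backward()
        with torch.no_grad():
            x -= lr * x.grad          # w_{i+1} = w_i - LR * g(w_i)
        x.grad.zero_()
    return x.item()

print(run_gd(lr=0.1))   # LR = 0.1: x shrinks toward the minimum at 0
print(run_gd(lr=0.3))   # LR = 0.3: each update overshoots and |x| keeps growing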

PyTorch provides a base class for adjusting the learning rate: _LRScheduler.

Main attributes

  • optimizer: the optimizer the scheduler is bound to;
  • last_epoch: records the epoch count;
  • base_lrs: records the initial learning rates;
class _LRScheduler(object):

    def __init__(self, optimizer, last_epoch=-1):
        ...

    def get_lr(self):
        raise NotImplementedError

Main methods

  • step(): updates the learning rate for the next epoch;
  • get_lr(): a virtual method that computes the learning rate for the next epoch;

The following code shows how a learning rate scheduler is used in practice:

import os
import random
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
import torch.optim as optim
from PIL import Image
from matplotlib import pyplot as plt
from model.lenet import LeNet
from toolss.my_dataset import RMBDataset
import torchvision


def transform_invert(img_, transform_train):
    """
    Undo the transforms applied to the data
    :param img_: tensor
    :param transform_train: torchvision.transforms
    :return: PIL image
    """
    if 'Normalize' in str(transform_train):
        norm_transform = list(filter(lambda x: isinstance(x, transforms.Normalize), transform_train.transforms))
        mean = torch.tensor(norm_transform[0].mean, dtype=img_.dtype, device=img_.device)
        std = torch.tensor(norm_transform[0].std, dtype=img_.dtype, device=img_.device)
        img_.mul_(std[:, None, None]).add_(mean[:, None, None])

    img_ = img_.transpose(0, 2).transpose(0, 1)  # C*H*W --> H*W*C
    if 'ToTensor' in str(transform_train):
        img_ = np.array(img_) * 255

    if img_.shape[2] == 3:
        img_ = Image.fromarray(img_.astype('uint8')).convert('RGB')
    elif img_.shape[2] == 1:
        img_ = Image.fromarray(img_.astype('uint8').squeeze())
    else:
        raise Exception("Invalid img shape, expected 1 or 3 in axis 2, but got {}!".format(img_.shape[2]))

    return img_


def set_seed(seed=1):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)


set_seed()  # set random seed
rmb_label = {"1": 0, "100": 1}

# parameter settings
MAX_EPOCH = 10
BATCH_SIZE = 16
LR = 0.01
log_interval = 10
val_interval = 1

# ============================ step 1/5 data ============================
split_dir = os.path.join("F:/Pytorch框架班/Pytorch-Camp-master/代码合集/rmb_split")
train_dir = os.path.join(split_dir, "train")
valid_dir = os.path.join(split_dir, "valid")

norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]

train_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.RandomCrop(32, padding=4),
    transforms.RandomGrayscale(p=0.8),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

valid_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.Normalize(norm_mean, norm_std),
])

# build MyDataset instances
train_data = RMBDataset(data_dir=train_dir, transform=train_transform)
valid_data = RMBDataset(data_dir=valid_dir, transform=valid_transform)

# build DataLoaders
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
valid_loader = DataLoader(dataset=valid_data, batch_size=BATCH_SIZE)

# ============================ step 2/5 model ============================
net = LeNet(classes=2)
net.initialize_weights()

# ============================ step 3/5 loss function ============================
criterion = nn.CrossEntropyLoss()                                                   # choose the loss function

# ============================ step 4/5 optimizer ============================
optimizer = optim.SGD(net.parameters(), lr=LR, momentum=0.9)                        # choose the optimizer
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)     # set the learning rate decay policy

# ============================ step 5/5 training ============================
train_curve = list()
valid_curve = list()

for epoch in range(MAX_EPOCH):

    loss_mean = 0.
    correct = 0.
    total = 0.

    net.train()
    for i, data in enumerate(train_loader):

        # forward
        inputs, labels = data
        outputs = net(inputs)

        # backward
        optimizer.zero_grad()
        loss = criterion(outputs, labels)
        loss.backward()

        # update weights
        optimizer.step()

        # count classification results
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).squeeze().sum().numpy()

        # print training information
        loss_mean += loss.item()
        train_curve.append(loss.item())
        if (i+1) % log_interval == 0:
            loss_mean = loss_mean / log_interval
            print("Training:Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                epoch, MAX_EPOCH, i+1, len(train_loader), loss_mean, correct / total))
            loss_mean = 0.

    scheduler.step()  # update the learning rate

    # validate the model
    if (epoch+1) % val_interval == 0:

        correct_val = 0.
        total_val = 0.
        loss_val = 0.
        net.eval()
        with torch.no_grad():
            for j, data in enumerate(valid_loader):
                inputs, labels = data
                outputs = net(inputs)
                loss = criterion(outputs, labels)

                _, predicted = torch.max(outputs.data, 1)
                total_val += labels.size(0)
                correct_val += (predicted == labels).squeeze().sum().numpy()

                loss_val += loss.item()

            valid_curve.append(loss_val)
            print("Valid:\t Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                epoch, MAX_EPOCH, j+1, len(valid_loader), loss_val, correct / total))

train_x = range(len(train_curve))
train_y = train_curve

train_iters = len(train_loader)
valid_x = np.arange(1, len(valid_curve)+1) * train_iters * val_interval  # valid records the epoch loss, so convert the record points to iterations
valid_y = valid_curve

plt.plot(train_x, train_y, label='Train')
plt.plot(valid_x, valid_y, label='Valid')

plt.legend(loc='upper right')
plt.ylabel('loss value')
plt.xlabel('Iteration')
plt.show()

# ============================ inference ============================

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
test_dir = os.path.join(BASE_DIR, "test_data")

test_data = RMBDataset(data_dir=test_dir, transform=valid_transform)
valid_loader = DataLoader(dataset=test_data, batch_size=1)

for i, data in enumerate(valid_loader):
    # forward
    inputs, labels = data
    outputs = net(inputs)
    _, predicted = torch.max(outputs.data, 1)

    rmb = 1 if predicted.numpy()[0] == 0 else 100

    img_tensor = inputs[0, ...]  # C H W
    img = transform_invert(img_tensor, train_transform)
    plt.imshow(img)
    plt.title("LeNet got {} Yuan".format(rmb))
    plt.show()
    plt.pause(0.5)
    plt.close()

The line in the script above that sets up learning rate adjustment is:

scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)     # 设置学习率下降策略

Setting a breakpoint and stepping into this call lets us look at the implementation:

class StepLR(_LRScheduler):

    def __init__(self, optimizer, step_size, gamma=0.1, last_epoch=-1):
        self.step_size = step_size
        self.gamma = gamma
        super(StepLR, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        return [base_lr * self.gamma ** (self.last_epoch // self.step_size)
                for base_lr in self.base_lrs]

We land in the StepLR class, which inherits from _LRScheduler, the scheduler base class introduced above. Its __init__() stores its own parameters and then delegates to the parent class. Let us look at that call:

super(StepLR, self).__init__(optimizer, last_epoch)

Stepping into it takes us to StepLR's parent class:

class _LRScheduler(object):

    def __init__(self, optimizer, last_epoch=-1):
        if not isinstance(optimizer, Optimizer):
            raise TypeError('{} is not an Optimizer'.format(
                type(optimizer).__name__))
        self.optimizer = optimizer
        if last_epoch == -1:
            for group in optimizer.param_groups:
                group.setdefault('initial_lr', group['lr'])
            last_epoch = 0
        else:
            for i, group in enumerate(optimizer.param_groups):
                if 'initial_lr' not in group:
                    raise KeyError("param 'initial_lr' is not specified "
                                   "in param_groups[{}] when resuming an optimizer".format(i))
        self.base_lrs = list(map(lambda group: group['initial_lr'], optimizer.param_groups))
        self.last_epoch = last_epoch

        # Following https://github.com/pytorch/pytorch/issues/20124
        # We would like to ensure that `lr_scheduler.step()` is called after
        # `optimizer.step()`
        def with_counter(func, opt):
            @wraps(func)
            def wrapper(*args, **kwargs):
                opt._step_count += 1
                return func(*args, **kwargs)
            wrapper._with_counter = True
            return wrapper

        self.optimizer.step = with_counter(self.optimizer.step, self.optimizer)
        self.optimizer._step_count = 0
        self._step_count = 0

        self.step(last_epoch)

    def state_dict(self):
        """Returns the state of the scheduler as a :class:`dict`.

        It contains an entry for every variable in self.__dict__ which
        is not the optimizer.
        """
        return {key: value for key, value in self.__dict__.items() if key != 'optimizer'}

    def load_state_dict(self, state_dict):
        """Loads the schedulers state.

        Arguments:
            state_dict (dict): scheduler state. Should be an object returned
                from a call to :meth:`state_dict`.
        """
        self.__dict__.update(state_dict)

    def get_lr(self):
        raise NotImplementedError

    def step(self, epoch=None):
        # Raise a warning if old pattern is detected
        # https://github.com/pytorch/pytorch/issues/20124
        if self._step_count == 1:
            if not hasattr(self.optimizer.step, "_with_counter"):
                warnings.warn("Seems like `optimizer.step()` has been overridden after learning rate scheduler "
                              "initialization. Please, make sure to call `optimizer.step()` before "
                              "`lr_scheduler.step()`. See more details at "
                              "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)

            # Just check if there were two first lr_scheduler.step() calls before optimizer.step()
            elif self.optimizer._step_count < 1:
                warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
                              "In PyTorch 1.1.0 and later, you should call them in the opposite order: "
                              "`optimizer.step()` before `lr_scheduler.step()`.  Failure to do this "
                              "will result in PyTorch skipping the first value of the learning rate schedule."
                              "See more details at "
                              "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
        self._step_count += 1

        if epoch is None:
            epoch = self.last_epoch + 1
        self.last_epoch = epoch
        for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
            param_group['lr'] = lr

The key lines to notice are:

self.optimizer = optimizer  # the optimizer the scheduler is bound to
self.base_lrs = list(map(lambda group: group['initial_lr'], optimizer.param_groups))
self.last_epoch = last_epoch

base_lrs is a list, because an optimizer may manage several learning rates. The code uses map: it applies the lambda to every element of optimizer.param_groups. optimizer.param_groups is a list of parameter groups, and every element of that list is a dict, so the lambda simply pulls the 'initial_lr' value out of each group's dict, i.e. that group's initial learning rate.

So self.base_lrs = list(map(lambda group: group['initial_lr'], optimizer.param_groups)) extracts the initial learning rate of every parameter group and stores them as a list in self.base_lrs.

self.last_epoch is used when updating the learning rate. With these attributes in place, a basic scheduler is fully constructed.
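To make this concrete, here is a small sketch (with made-up weights and learning rates) showing that the scheduler keeps one initial learning rate per parameter group:

import torch
import torch.optim as optim

w1 = torch.randn(2, 2, requires_grad=True)
w2 = torch.randn(2, 2, requires_grad=True)

# two parameter groups with different learning rates
optimizer = optim.SGD([{'params': [w1], 'lr': 0.1},
                       {'params': [w2], 'lr': 0.01}],
                      lr=0.1, momentum=0.9)

scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

# optimizer.param_groups is a list of dicts; the scheduler copies each group's
# 'lr' into 'initial_lr' and gathers those values into base_lrs
print(scheduler.base_lrs)   # [0.1, 0.01]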

Once the scheduler is set up, the RMB binary classification task updates the learning rate after every training epoch with the following call:

scheduler.step()  # update the learning rate

To see how step() works, set a breakpoint and step into it; the code is:

    def step(self, epoch=None):
        # Raise a warning if old pattern is detected
        # https://github.com/pytorch/pytorch/issues/20124
        if self._step_count == 1:
            if not hasattr(self.optimizer.step, "_with_counter"):
                warnings.warn("Seems like `optimizer.step()` has been overridden after learning rate scheduler "
                              "initialization. Please, make sure to call `optimizer.step()` before "
                              "`lr_scheduler.step()`. See more details at "
                              "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)

            # Just check if there were two first lr_scheduler.step() calls before optimizer.step()
            elif self.optimizer._step_count < 1:
                warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
                              "In PyTorch 1.1.0 and later, you should call them in the opposite order: "
                              "`optimizer.step()` before `lr_scheduler.step()`.  Failure to do this "
                              "will result in PyTorch skipping the first value of the learning rate schedule."
                              "See more details at "
                              "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
        self._step_count += 1

        if epoch is None:
            epoch = self.last_epoch + 1
        self.last_epoch = epoch
        for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
            param_group['lr'] = lr

The part of this method to focus on is:

if epoch is None:
    epoch = self.last_epoch + 1
self.last_epoch = epoch
for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
    param_group['lr'] = lr

The for loop at the end is what actually adjusts the learning rate. self.optimizer.param_groups is a list of parameter groups; each element is a dict, so param_group is a dict and param_group['lr'] is that group's learning rate. The assignment param_group['lr'] = lr writes in the new value, and the new values come from self.get_lr(), which computes the learning rate for the next epoch. Let us look at how get_lr() computes it:

    def get_lr(self):
        return [base_lr * self.gamma ** (self.last_epoch // self.step_size)
                for base_lr in self.base_lrs]

get_lr() is only called from inside step().
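Since get_lr() is the only method a subclass must supply, writing a custom policy is straightforward. The following sketch defines a hypothetical HalvingLR scheduler (not part of PyTorch) that halves every base learning rate each epoch, just to show the mechanics; the exact step() behavior may differ slightly across PyTorch versions:

import torch
import torch.optim as optim
from torch.optim.lr_scheduler import _LRScheduler


class HalvingLR(_LRScheduler):
    # hypothetical scheduler: multiply every base_lr by 0.5 each epoch

    def __init__(self, optimizer, last_epoch=-1):
        super(HalvingLR, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        # same pattern as StepLR.get_lr(): scale every base_lr by a factor
        return [base_lr * (0.5 ** self.last_epoch) for base_lr in self.base_lrs]


w = torch.randn(1, requires_grad=True)
optimizer = optim.SGD([w], lr=0.1)
scheduler = HalvingLR(optimizer)

for epoch in range(4):
    optimizer.step()     # optimizer.step() first ...
    scheduler.step()     # ... then scheduler.step()
    print(epoch, optimizer.param_groups[0]['lr'])   # 0.05, 0.025, 0.0125, 0.00625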

2. PyTorch's Six Learning Rate Adjustment Policies

2.1 StepLR

Function: adjust the learning rate at equal intervals.
Main parameters

  • step_size: number of epochs between adjustments;
  • gamma: multiplicative adjustment factor;
    Adjustment rule: lr = lr * gamma
torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1)

The following code shows StepLR in action:

import torch
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
torch.manual_seed(1)

LR = 0.1        # initial learning rate
iteration = 10
max_epoch = 200

# ------------------------------ fake data and optimizer ------------------------------

weights = torch.randn((1), requires_grad=True)   # parameter to be updated by gradients
target = torch.zeros((1))

optimizer = optim.SGD([weights], lr=LR, momentum=0.9)   # build a dummy optimizer

# ------------------------------ 1 Step LR ------------------------------
# flag = 0
flag = 1
if flag:

    scheduler_lr = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)  # set the LR decay policy

    lr_list, epoch_list = list(), list()
    for epoch in range(max_epoch):

        lr_list.append(scheduler_lr.get_lr())
        epoch_list.append(epoch)

        for i in range(iteration):

            loss = torch.pow((weights - target), 2)
            loss.backward()

            optimizer.step()        # gradient update
            optimizer.zero_grad()   # clear the gradients

        scheduler_lr.step()

    plt.plot(epoch_list, lr_list, label="Step LR Scheduler")
    plt.xlabel("Epoch")
    plt.ylabel("Learning rate")
    plt.legend()
    plt.show()

The resulting learning rate curve shows that every 50 epochs the learning rate drops by 90%, i.e. it is multiplied by gamma = 0.1.
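The curve can also be checked against the closed form used by StepLR.get_lr(); a quick sketch assuming the same LR = 0.1, step_size = 50 and gamma = 0.1:

# closed-form StepLR value at a given epoch: base_lr * gamma ** (epoch // step_size)
for epoch in [0, 49, 50, 99, 100, 150, 199]:
    lr = 0.1 * 0.1 ** (epoch // 50)
    print("epoch {:3d}: lr = {}".format(epoch, lr))
# epochs 0-49 -> 0.1, 50-99 -> 0.01, 100-149 -> 0.001, 150-199 -> 0.0001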

2.2 MultiStepLR

Function: adjust the learning rate at user-specified milestones.
Main parameters

  • milestones: list of epoch indices at which to adjust;
  • gamma: multiplicative adjustment factor;
    Adjustment rule: lr = lr * gamma
lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1)

Unlike StepLR, MultiStepLR lets you choose the adjustment points yourself via milestones. The following code shows it in use:

    milestones = [50, 125, 160]
    scheduler_lr = optim.lr_scheduler.MultiStepLR(optimizer, milestones=milestones, gamma=0.1)

    lr_list, epoch_list = list(), list()
    for epoch in range(max_epoch):

        lr_list.append(scheduler_lr.get_lr())
        epoch_list.append(epoch)

        for i in range(iteration):

            loss = torch.pow((weights - target), 2)
            loss.backward()

            optimizer.step()
            optimizer.zero_grad()

        scheduler_lr.step()

    plt.plot(epoch_list, lr_list, label="Multi Step LR Scheduler\nmilestones:{}".format(milestones))
    plt.xlabel("Epoch")
    plt.ylabel("Learning rate")
    plt.legend()
    plt.show()

The code produces a learning rate curve with drops at the specified milestones (epochs 50, 125 and 160).

2.3 ExponentialLR

Function: decay the learning rate exponentially.
Main parameters

  • gamma: the base of the exponential
    Adjustment rule: lr = base_lr * gamma ** epoch
lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1)

The following code demonstrates this scheduler:

    gamma = 0.95
    scheduler_lr = optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

    lr_list, epoch_list = list(), list()
    for epoch in range(max_epoch):

        lr_list.append(scheduler_lr.get_lr())
        epoch_list.append(epoch)

        for i in range(iteration):

            loss = torch.pow((weights - target), 2)
            loss.backward()

            optimizer.step()
            optimizer.zero_grad()

        scheduler_lr.step()

    plt.plot(epoch_list, lr_list, label="Exponential LR Scheduler\ngamma:{}".format(gamma))
    plt.xlabel("Epoch")
    plt.ylabel("Learning rate")
    plt.legend()
    plt.show()

The code produces a smooth, exponentially decaying learning rate curve.

2.4 CosineAnnealingLR

Function: adjust the learning rate along a cosine schedule.
Main parameters

  • T_max: number of epochs in one decreasing half-period;
  • eta_min: lower bound on the learning rate;
    Adjustment rule: $\eta_{t}=\eta_{\min }+\frac{1}{2}\left(\eta_{\max }-\eta_{\min }\right)\left(1+\cos \left(\frac{T_{cur}}{T_{\max }} \pi\right)\right)$
lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1)

The following code demonstrates this scheduler:

    t_max = 50
    scheduler_lr = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=t_max, eta_min=0.)

    lr_list, epoch_list = list(), list()
    for epoch in range(max_epoch):

        lr_list.append(scheduler_lr.get_lr())
        epoch_list.append(epoch)

        for i in range(iteration):

            loss = torch.pow((weights - target), 2)
            loss.backward()

            optimizer.step()
            optimizer.zero_grad()

        scheduler_lr.step()

    plt.plot(epoch_list, lr_list, label="CosineAnnealingLR Scheduler\nT_max:{}".format(t_max))
    plt.xlabel("Epoch")
    plt.ylabel("Learning rate")
    plt.legend()
    plt.show()

The code produces a cosine-shaped learning rate curve that falls to eta_min over T_max epochs and then rises again.

2.5 ReduceLROnPlateau

Function: monitor a metric and reduce the learning rate when it stops improving, e.g. reduce the LR when the loss stops decreasing, or when the classification accuracy stops rising.
Main parameters

  • mode: min/max. In min mode the monitored metric should decrease, and the LR is reduced when it stops decreasing; in max mode the metric should increase, and the LR is reduced when it stops increasing;
  • factor: multiplicative reduction factor, playing the role of gamma in the schedulers above;
  • patience: how many epochs without improvement to tolerate, e.g. how many consecutive epochs the loss may fail to improve before the LR is reduced;
  • cooldown: how long to pause monitoring after a reduction; the metric is ignored for this many epochs before monitoring resumes;
  • verbose: bool, whether to print a message on every adjustment;
  • min_lr: lower bound on the learning rate;
  • eps: minimal decay applied to the LR; if the difference between the new and old LR is smaller than eps, the update is ignored;
lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, verbose=False, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)

The following code demonstrates this scheduler:

    loss_value = 0.5
    accuracy = 0.9

    factor = 0.1
    mode = "min"
    patience = 10
    cooldown = 10
    min_lr = 1e-4
    verbose = True

    scheduler_lr = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=factor, mode=mode, patience=patience,
                                                        cooldown=cooldown, min_lr=min_lr, verbose=verbose)

    for epoch in range(max_epoch):
        for i in range(iteration):

            # train(...)

            optimizer.step()
            optimizer.zero_grad()

        if epoch == 5:
            loss_value = 0.4

        scheduler_lr.step(loss_value)

The corresponding output is shown below. The loss improves for the last time at epoch 5, so the scheduler waits out its patience window and triggers the first reduction at epoch 16; later reductions are spaced by the cooldown plus patience windows:

Epoch    16: reducing learning rate of group 0 to 1.0000e-02.
Epoch    37: reducing learning rate of group 0 to 1.0000e-03.
Epoch    58: reducing learning rate of group 0 to 1.0000e-04.

2.6 LambdaLR

Function: fully custom adjustment policy; different parameter groups can follow different learning rate rules.
Main parameters

  • lr_lambda: a function or a list of functions; if a list is given, every element must be a function (one per parameter group);
lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1)

The following code shows how it works:

    lr_init = 0.1

    weights_1 = torch.randn((6, 3, 5, 5))
    weights_2 = torch.ones((5, 5))

    optimizer = optim.SGD([
        {'params': [weights_1]},
        {'params': [weights_2]}], lr=lr_init)

    lambda1 = lambda epoch: 0.1 ** (epoch // 20)
    lambda2 = lambda epoch: 0.95 ** epoch

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])

    lr_list, epoch_list = list(), list()
    for epoch in range(max_epoch):
        for i in range(iteration):

            # train(...)

            optimizer.step()
            optimizer.zero_grad()

        scheduler.step()

        lr_list.append(scheduler.get_lr())
        epoch_list.append(epoch)

        print('epoch:{:5d}, lr:{}'.format(epoch, scheduler.get_lr()))

    plt.plot(epoch_list, [i[0] for i in lr_list], label="lambda 1")
    plt.plot(epoch_list, [i[1] for i in lr_list], label="lambda 2")
    plt.xlabel("Epoch")
    plt.ylabel("Learning Rate")
    plt.title("LambdaLR")
    plt.legend()
    plt.show()

The code prints both learning rates at every epoch and plots a stepwise curve for the first group (lambda1) and a smooth exponential curve for the second (lambda2).

3. Summary of Learning Rate Adjustment

  1. Ordered adjustment: Step, MultiStep, Exponential and CosineAnnealing;
  2. Adaptive adjustment: ReduceLROnPlateau, which adjusts the learning rate when the monitored metric stops decreasing (or stops increasing);
  3. Custom adjustment: Lambda, useful for fine-tuning and for models with multiple parameter groups that need different learning rate rules;

Learning rate initialization

  1. Start with a small value: 0.01 / 0.001 / 0.0001;
  2. Search for the maximum usable learning rate: see "Cyclical Learning Rates for Training Neural Networks"; a sketch of such a search is given below.
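In the spirit of the Cyclical Learning Rates paper, a common way to search for the maximum usable learning rate is an LR range test: grow the learning rate geometrically over a short run and watch where the loss starts to blow up. Below is a minimal sketch (toy model and data, not the paper's code); the LR growth is implemented here with ExponentialLR purely for convenience:

import torch
import torch.nn as nn
import torch.optim as optim

# toy regression problem
x = torch.randn(256, 10)
y = torch.randn(256, 1)
model = nn.Linear(10, 1)
criterion = nn.MSELoss()

min_lr, max_lr, steps = 1e-5, 1.0, 100
optimizer = optim.SGD(model.parameters(), lr=min_lr)
# multiply the LR by a constant factor each step so it climbs from min_lr to max_lr
gamma = (max_lr / min_lr) ** (1.0 / steps)
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for step in range(steps):
    loss = criterion(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
    lr = optimizer.param_groups[0]['lr']
    print("step {:3d}  lr {:.5f}  loss {:.4f}".format(step, lr, loss.item()))

# choose a maximum LR somewhat below the point where the loss starts to diverge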
