Reference link: class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False)
Companion code download link: 测试学习率调度器.zip
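Before the full experiment, it helps to see what LambdaLR actually does: it records the initial learning rate of every parameter group and, on each scheduler.step(), sets lr = initial_lr * lr_lambda(last_epoch). Below is a minimal sketch of that behavior; the toy model, the base lr of 0.001 and the 0.95-per-step decay are illustrative choices, not anything required by the API:

import torch

# toy model and optimizer, only needed so the scheduler has a parameter group to act on
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# after t calls to scheduler.step(): lr = 0.001 * 0.95 ** t
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: 0.95 ** epoch)

for epoch in range(5):
    loss = model(torch.randn(4, 1)).pow(2).mean()  # dummy loss so the optimizer has gradients
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()      # since PyTorch 1.1, step the optimizer before the scheduler
    scheduler.step()
    print(epoch, scheduler.get_last_lr())  # ≈ 0.001 * 0.95 ** (epoch + 1): 0.00095, 0.0009025, ...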

Experiment code:

# torch.optim.lr_scheduler.LambdaLR
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
from torch import nn
from torch.autograd import Function
import random
import os
seed = 20200910
os.environ['PYTHONHASHSEED'] = str(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)  # if you are using multi-GPU.
np.random.seed(seed)  # Numpy module.
random.seed(seed)  # Python random module.
torch.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True


class Dataset4cxq(Dataset):
    """Each item is a random Celsius value paired with its exact Fahrenheit equivalent."""

    def __init__(self, length):
        self.length = length

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        if type(index) != int and type(index) != slice:
            raise TypeError('索引类型错误,程序退出...')  # wrong index type, abort
        if type(index) == int:  # index is a single integer
            if index >= self.length or index < -1 * self.length:
                raise IndexError("索引越界,程序退出...")  # index out of range, abort
            elif index < 0:
                index = index + self.length
        Celsius = torch.randn(1, 1, dtype=torch.float).item()
        Fahrenheit = 32.0 + 1.8 * Celsius
        return Celsius, Fahrenheit


def collate_fn4cxq(batch):
    # pack the Python floats of one batch into two 1-D tensors
    list_c = []
    list_f = []
    for c, f in batch:
        list_c.append(c)
        list_f.append(f)
    list_c = torch.tensor(list_c)
    list_f = torch.tensor(list_f)
    return list_c, list_f


if __name__ == "__main__":
    my_dataset = Dataset4cxq(32)
    dataloader4cxq = torch.utils.data.DataLoader(
        dataset=my_dataset,
        batch_size=8,
        drop_last=True,
        shuffle=True,
        collate_fn=collate_fn4cxq,
    )

    print('开始创建模型'.center(80, '-'))  # "start building the model"
    model = torch.nn.Linear(in_features=1, out_features=1, bias=True)
    model.cuda()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    # the model learns the conversion Fahrenheit = 32 + 1.8 * Celsius
    model.train()
    cost_function = torch.nn.MSELoss()
    epochs = 10001  # 100001
    print('\n')
    print('开始训练模型'.center(80, '-'))  # "start training the model"
    list4delta = list()
    list4epoch = list()

    # multiply the base lr (0.001) by an extra factor of 0.99 every 1000 epochs
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda epoch: 0.99 ** (epoch // 1000))

    for epoch in range(epochs):
        total_loss = 0.0
        for cnt, data in enumerate(dataloader4cxq, 0):
            Celsius, Fahrenheit = data
            Celsius, Fahrenheit = Celsius.cuda().view(-1, 1), Fahrenheit.cuda().view(-1, 1)
            output = model(Celsius)
            loss = cost_function(output, Fahrenheit)
            total_loss += loss.item()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()  # one scheduler step per epoch

        if epoch % 100 == 0:
            list4delta.append(total_loss)
            list4epoch.append(epoch)
        if epoch % 500 == 0:
            info = '\nepoch:{0:>6}/{1:<6}\t'.format(epoch, epochs)
            for k, v in model.state_dict().items():
                info += str(k) + ':' + '{0:<.18f}'.format(v.item()) + '\t'
            print(info)

    # plot the accumulated loss that was sampled every 100 epochs
    fig, ax = plt.subplots()
    ax.plot(list4epoch, list4delta, 'r.-', markersize=8)
    ax.set_title("Visualization For My Model's Errors")
    plt.show()
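Because the lambda uses integer division (epoch // 1000), the learning rate stays at 0.001 for the first 1000 scheduler steps and then drops by a further 1% every 1000 steps. The schedule can be inspected on its own, without the GPU training loop above; the following standalone sketch uses an SGD optimizer and a dummy parameter purely as stand-ins to give the scheduler something to attach to:

import torch

optimizer = torch.optim.SGD([torch.zeros(1, requires_grad=True)], lr=0.001)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: 0.99 ** (epoch // 1000))

lrs = []
for epoch in range(10001):
    lrs.append(scheduler.get_last_lr()[0])  # lr in effect during this epoch
    optimizer.step()
    scheduler.step()

print(lrs[0], lrs[1000], lrs[10000])
# ≈ 0.001, 0.00099, 0.000904  (the last value is 0.001 * 0.99 ** 10)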

Console output:

Windows PowerShell
版权所有 (C) Microsoft Corporation。保留所有权利。

尝试新的跨平台 PowerShell https://aka.ms/pscore6

加载个人及系统配置文件用了 926 毫秒。
(base) PS C:\Users\chenxuqi\Desktop\News4cxq\测试学习率调度器>  & 'D:\Anaconda3\envs\pytorch_1.7.1_cu102\python.exe' 'c:\Users\chenxuqi\.vscode\extensions\ms-python.python-2021.1.502429796\pythonFiles\lib\python\debugpy\launcher' '49464' '--' 'c:\Users\chenxuqi\Desktop\News4cxq\测试学习率调度器\test06.py'
-------------------------------------开始创建模型--------------------------------------
-------------------------------------开始训练模型--------------------------------------

epoch:     0/10001      weight:0.962383031845092773     bias:0.980020046234130859
epoch:   500/10001      weight:1.129050374031066895     bias:2.955143213272094727
epoch:  1000/10001      weight:1.249524116516113281     bias:4.898723125457763672
epoch:  1500/10001      weight:1.320719122886657715     bias:6.810392856597900391
epoch:  2000/10001      weight:1.434252023696899414     bias:8.715221405029296875
epoch:  2500/10001      weight:1.468232393264770508     bias:10.594973564147949219
epoch:  3000/10001      weight:1.536670327186584473     bias:12.468175888061523438
epoch:  3500/10001      weight:1.680503368377685547     bias:14.315374374389648438
epoch:  4000/10001      weight:1.758755326271057129     bias:16.153095245361328125
epoch:  4500/10001      weight:1.769892215728759766     bias:17.961753845214843750
epoch:  5000/10001      weight:1.744580507278442383     bias:19.756875991821289062
epoch:  5500/10001      weight:1.757981419563293457     bias:21.517288208007812500
epoch:  6000/10001      weight:1.790049910545349121     bias:23.255580902099609375
epoch:  6500/10001      weight:1.826546669006347656     bias:24.947116851806640625
epoch:  7000/10001      weight:1.756798028945922852     bias:26.596363067626953125
epoch:  7500/10001      weight:1.809650421142578125     bias:28.166427612304687500
epoch:  8000/10001      weight:1.825483560562133789     bias:29.631296157836914062
epoch:  8500/10001      weight:1.800792336463928223     bias:30.888553619384765625
epoch:  9000/10001      weight:1.800277113914489746     bias:31.746377944946289062
epoch:  9500/10001      weight:1.799844503402709961     bias:31.993532180786132812
epoch: 10000/10001      weight:1.800002932548522949     bias:31.999877929687500000
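By the end of the run the model has essentially recovered the conversion formula encoded in the dataset: the weight converges to about 1.8 and the bias to about 32.0, matching Fahrenheit = 32 + 1.8 * Celsius. Assuming scheduler.step() is called once per epoch as in the loop above, the Adam learning rate has meanwhile decayed from 0.001 to roughly 0.001 * 0.99 ** 10 ≈ 0.000904 by epoch 10000.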

Run result screenshot (the matplotlib plot of accumulated loss vs. epoch, titled "Visualization For My Model's Errors"):
