Reference link: requires_grad_(requires_grad=True) → Tensor

Original documentation:

requires_grad_(requires_grad=True) → Tensor

Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place. Returns this tensor.

requires_grad_()'s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=False (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor.requires_grad_() makes it so that autograd will begin to record operations on tensor.

Parameters:
    requires_grad (bool) – If autograd should record operations on this tensor. Default: True.

Example:

>>> # Let's say we want to preprocess some saved weights and use
>>> # the result as new weights.
>>> saved_weights = [0.1, 0.2, 0.3, 0.25]
>>> loaded_weights = torch.tensor(saved_weights)
>>> weights = preprocess(loaded_weights)  # some function
>>> weights
tensor([-0.5503,  0.4926, -2.1158, -0.8303])
>>> # Now, start to record operations done to weights
>>> weights.requires_grad_()
>>> out = weights.pow(2).sum()
>>> out.backward()
>>> weights.grad
tensor([-1.1007,  0.9853, -4.2316, -1.6606])
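The docs example above leaves preprocess undefined ("some function"), so it does not run as-is. Below is a minimal runnable sketch of the same idea, where preprocess is a hypothetical mean-centering stand-in; the printed values will therefore differ from the docs output.

import torch

def preprocess(t):
    # Hypothetical stand-in for the docs' unspecified "some function".
    return t - t.mean()

saved_weights = [0.1, 0.2, 0.3, 0.25]
loaded_weights = torch.tensor(saved_weights)
weights = preprocess(loaded_weights)   # still requires_grad=False here
weights.requires_grad_()               # in-place: autograd now records ops on weights
out = weights.pow(2).sum()
out.backward()
print(weights.grad)                    # d(out)/d(weights) = 2 * weights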

Experiment code:

Microsoft Windows [Version 10.0.18363.1316]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\chenxuqi>conda activate ssd4pytorch1_2_0

(ssd4pytorch1_2_0) C:\Users\chenxuqi>python
Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.manual_seed(seed=20200910)
<torch._C.Generator object at 0x000002BF3761D330>
>>>
>>> a = torch.randn(3,5)
>>> a
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]])
>>> a.requires_grad
False
>>> a.requires_grad()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'bool' object is not callable
>>> a.requires_grad_()
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]], requires_grad=True)
>>> a
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]], requires_grad=True)
>>> a.requires_grad
True
>>> a.requires_grad_()
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]], requires_grad=True)
>>> a.requires_grad_(False)
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]])
>>> a.requires_grad_()
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]], requires_grad=True)
>>> a.requires_grad_(requires_grad=False)
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]])
>>> a
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]])
>>> a.requires_grad_(requires_grad=False)
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]])
>>> a
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]])
>>> a.requires_grad
False
>>>
>>>
>>>
>>> b = torch.randn(3,5,requires_grad=True)
>>> b
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]], requires_grad=True)
>>> b.requires_grad
True
>>> b.requires_grad_()
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]], requires_grad=True)
>>> b
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]], requires_grad=True)
>>> b.requires_grad
True
>>> b.requires_grad_(False)
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]])
>>> b
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]])
>>> b.requires_grad_(False)
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]])
>>> b.requires_grad_(False)
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]])
>>> b.requires_grad_()
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]], requires_grad=True)
>>> b
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]], requires_grad=True)
>>> b.requires_grad_()
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]], requires_grad=True)
>>> b
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]], requires_grad=True)
>>>
>>>
>>> b
tensor([[-0.5556,  0.9571,  0.7435, -0.2974, -2.2825],
        [-0.6627, -1.1902, -0.1748,  1.2125,  0.6630],
        [-0.5813, -0.1549, -0.4551, -0.4570,  0.4547]], requires_grad=True)
>>>
>>> c = 2.0*b
>>> c
tensor([[-1.1113,  1.9141,  1.4871, -0.5948, -4.5650],
        [-1.3254, -2.3805, -0.3497,  2.4250,  1.3261],
        [-1.1625, -0.3099, -0.9102, -0.9141,  0.9093]], grad_fn=<MulBackward0>)
>>> c.requires_grad
True
>>> c.requires_grad_()
tensor([[-1.1113,  1.9141,  1.4871, -0.5948, -4.5650],
        [-1.3254, -2.3805, -0.3497,  2.4250,  1.3261],
        [-1.1625, -0.3099, -0.9102, -0.9141,  0.9093]], grad_fn=<MulBackward0>)
>>> c.requires_grad_(False)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: you can only change requires_grad flags of leaf variables. If you want to use a computed variable in a subgraph that doesn't require differentiation use var_no_grad = var.detach().
>>>
>>> c
tensor([[-1.1113,  1.9141,  1.4871, -0.5948, -4.5650],
        [-1.3254, -2.3805, -0.3497,  2.4250,  1.3261],
        [-1.1625, -0.3099, -0.9102, -0.9141,  0.9093]], grad_fn=<MulBackward0>)
>>> d = c.detach()
>>> d
tensor([[-1.1113,  1.9141,  1.4871, -0.5948, -4.5650],
        [-1.3254, -2.3805, -0.3497,  2.4250,  1.3261],
        [-1.1625, -0.3099, -0.9102, -0.9141,  0.9093]])
>>> c
tensor([[-1.1113,  1.9141,  1.4871, -0.5948, -4.5650],
        [-1.3254, -2.3805, -0.3497,  2.4250,  1.3261],
        [-1.1625, -0.3099, -0.9102, -0.9141,  0.9093]], grad_fn=<MulBackward0>)
>>> d.requires_grad
False
>>> d.is_leaf
True
>>> d.requires_grad_(False)
tensor([[-1.1113,  1.9141,  1.4871, -0.5948, -4.5650],
        [-1.3254, -2.3805, -0.3497,  2.4250,  1.3261],
        [-1.1625, -0.3099, -0.9102, -0.9141,  0.9093]])
>>> d.requires_grad_()
tensor([[-1.1113,  1.9141,  1.4871, -0.5948, -4.5650],
        [-1.3254, -2.3805, -0.3497,  2.4250,  1.3261],
        [-1.1625, -0.3099, -0.9102, -0.9141,  0.9093]], requires_grad=True)
>>>
>>>
>>>
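
In script form, the leaf-tensor restriction seen above looks like the following minimal sketch: requires_grad_() can only flip the flag on leaf tensors, and a tensor produced by an operation (such as c = 2.0*b) must be detach()ed first, exactly as the RuntimeError message suggests. Fresh random values are used here, so the numbers differ from the session above.

import torch

b = torch.randn(3, 5, requires_grad=True)
c = 2.0 * b                        # non-leaf: produced by an op, has grad_fn=<MulBackward0>
try:
    c.requires_grad_(False)        # only leaf tensors may change this flag
except RuntimeError as e:
    print(e)

d = c.detach()                     # leaf view of c's data, requires_grad=False
print(d.is_leaf, d.requires_grad)  # True False
d.requires_grad_()                 # allowed now: d is a leaf
print(d.requires_grad)             # True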

Experiment code: retaining the gradient of an intermediate (non-leaf) tensor so that it is not freed.

Microsoft Windows [Version 10.0.18363.1316]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\chenxuqi>conda activate ssd4pytorch1_2_0

(ssd4pytorch1_2_0) C:\Users\chenxuqi>python
Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.manual_seed(seed=20200910)
<torch._C.Generator object at 0x000001CDB4A5D330>
>>> data_in = torch.randn(3,5,requires_grad=True)
>>> data_in
tensor([[ 0.2824, -0.3715,  0.9088, -1.7601, -0.1806],
        [ 2.0937,  1.0406, -1.7651,  1.1216,  0.8440],
        [ 0.1783,  0.6859, -1.5942, -0.2006, -0.4050]], requires_grad=True)
>>> data_mean = data_in.mean()
>>> data_mean
tensor(0.0585, grad_fn=<MeanBackward0>)
>>> data_in.requires_grad
True
>>> data_mean.requires_grad
True
>>> data_1 = data_mean * 20200910.0
>>> data_1
tensor(1182591., grad_fn=<MulBackward0>)
>>> data_2 = data_1 * 15.0
>>> data_2
tensor(17738864., grad_fn=<MulBackward0>)
>>> data_2.retain_grad()
>>> data_3 = 2 * (data_2 + 55.0)
>>> loss = data_3 / 2.0 +89.2
>>> loss
tensor(17739010., grad_fn=<AddBackward0>)
>>>
>>> data_in.grad
>>> data_mean.grad
>>> data_1.grad
>>> data_2.grad
>>> data_3.grad
>>> loss.grad
>>> print(data_in.grad, data_mean.grad, data_1.grad, data_2.grad, data_3.grad, loss.grad)
None None None None None None
>>>
>>> loss.backward()
>>> data_in.grad
tensor([[20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.]])
>>> data_mean.grad
>>> data_mean.grad
>>> data_1.grad
>>> data_2.grad
tensor(1.)
>>> data_3.grad
>>> loss.grad
>>>
>>>
>>> print(data_in.grad, data_mean.grad, data_1.grad, data_2.grad, data_3.grad, loss.grad)
tensor([[20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.],
        [20200910., 20200910., 20200910., 20200910., 20200910.]]) None None tensor(1.) None None
>>>
>>>
>>> print(data_in.is_leaf, data_mean.is_leaf, data_1.is_leaf, data_2.is_leaf, data_3.is_leaf, loss.is_leaf)
True False False False False False
>>>
>>>
>>>
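
A condensed script form of the experiment above: after backward(), only leaf tensors keep their .grad by default; the gradient of an intermediate (non-leaf) tensor is freed unless retain_grad() was called on it first. This is a minimal sketch with simplified constants, so the values differ from the session above.

import torch

x = torch.randn(3, 5, requires_grad=True)  # leaf: .grad is kept by default
m = x.mean()                               # intermediate (non-leaf) tensor
y = m * 15.0                               # another intermediate tensor
y.retain_grad()                            # keep y.grad after backward()
loss = 2.0 * (y + 55.0) / 2.0 + 89.2

loss.backward()
print(x.grad is not None)  # True: leaf gradient is kept
print(m.grad)              # None: intermediate grad freed (newer PyTorch also warns here)
print(y.grad)              # tensor(1.): retained, d(loss)/dy = 1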
