GPU: RTX 3080, CUDA 11.1, cuDNN 8.0.5, Python 3.6.4

Training the DenseFusion model with PyTorch 1.0 failed, so I switched to PyTorch 1.7, and then got the error below.

Traceback (most recent call last):
  File "./tools/train.py", line 240, in <module>
    main()
  File "./tools/train.py", line 143, in main
    loss, dis, new_points, new_target = criterion(pred_r, pred_t, pred_c, target, model_points, idx, points, opt.w, opt.refine_start)
  File "/home/xsy/anaconda3/envs/python36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/xsy/DenseFusion-Pytorch-1.0/lib/loss.py", line 83, in forward
    return loss_calculation(pred_r, pred_t, pred_c, target, model_points, idx, points, w, refine, self.num_pt_mesh, self.sym_list)
  File "/home/xsy/DenseFusion-Pytorch-1.0/lib/loss.py", line 44, in loss_calculation
    inds = knn(target.unsqueeze(0), pred.unsqueeze(0))
  File "/home/xsy/anaconda3/envs/python36/lib/python3.6/site-packages/torch/autograd/function.py", line 160, in __call__
    "Legacy autograd function with non-static forward method is deprecated. "
RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
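The error message points to the new-style torch.autograd.Function API: forward and backward must be static methods, and the function is invoked through .apply instead of being instantiated and called. A minimal sketch of that generic pattern (following the shape of the docs example linked in the error, not the DenseFusion KNN itself):

import torch
from torch.autograd import Function

class Exp(Function):
    @staticmethod
    def forward(ctx, i):
        # intermediate results are stashed on ctx instead of on self
        result = i.exp()
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        result, = ctx.saved_tensors
        return grad_output * result

# new-style functions are used via .apply, never by instantiating the class
output = Exp.apply(torch.randn(3, requires_grad=True))

Judging from the traceback, the KNearestNeighbor class imported from lib/knn in my loss function is still written in the old style, which is presumably why the call fails under PyTorch 1.7.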

Below is my loss function:

from torch.nn.modules.loss import _Loss
from torch.autograd import Variable
import torch
import time
import numpy as np
import torch.nn as nn
import random
import torch.backends.cudnn as cudnn
from lib.knn.__init__ import KNearestNeighbor


def loss_calculation(pred_r, pred_t, pred_c, target, model_points, idx, points, w, refine, num_point_mesh, sym_list):
    knn = KNearestNeighbor(1)
    bs, num_p, _ = pred_c.size()

    pred_r = pred_r / (torch.norm(pred_r, dim=2).view(bs, num_p, 1))

    base = torch.cat(((1.0 - 2.0*(pred_r[:, :, 2]**2 + pred_r[:, :, 3]**2)).view(bs, num_p, 1), \
                      (2.0*pred_r[:, :, 1]*pred_r[:, :, 2] - 2.0*pred_r[:, :, 0]*pred_r[:, :, 3]).view(bs, num_p, 1), \
                      (2.0*pred_r[:, :, 0]*pred_r[:, :, 2] + 2.0*pred_r[:, :, 1]*pred_r[:, :, 3]).view(bs, num_p, 1), \
                      (2.0*pred_r[:, :, 1]*pred_r[:, :, 2] + 2.0*pred_r[:, :, 3]*pred_r[:, :, 0]).view(bs, num_p, 1), \
                      (1.0 - 2.0*(pred_r[:, :, 1]**2 + pred_r[:, :, 3]**2)).view(bs, num_p, 1), \
                      (-2.0*pred_r[:, :, 0]*pred_r[:, :, 1] + 2.0*pred_r[:, :, 2]*pred_r[:, :, 3]).view(bs, num_p, 1), \
                      (-2.0*pred_r[:, :, 0]*pred_r[:, :, 2] + 2.0*pred_r[:, :, 1]*pred_r[:, :, 3]).view(bs, num_p, 1), \
                      (2.0*pred_r[:, :, 0]*pred_r[:, :, 1] + 2.0*pred_r[:, :, 2]*pred_r[:, :, 3]).view(bs, num_p, 1), \
                      (1.0 - 2.0*(pred_r[:, :, 1]**2 + pred_r[:, :, 2]**2)).view(bs, num_p, 1)), dim=2).contiguous().view(bs * num_p, 3, 3)

    ori_base = base
    base = base.contiguous().transpose(2, 1).contiguous()
    model_points = model_points.view(bs, 1, num_point_mesh, 3).repeat(1, num_p, 1, 1).view(bs * num_p, num_point_mesh, 3)
    target = target.view(bs, 1, num_point_mesh, 3).repeat(1, num_p, 1, 1).view(bs * num_p, num_point_mesh, 3)
    ori_target = target
    pred_t = pred_t.contiguous().view(bs * num_p, 1, 3)
    ori_t = pred_t
    points = points.contiguous().view(bs * num_p, 1, 3)
    pred_c = pred_c.contiguous().view(bs * num_p)

    pred = torch.add(torch.bmm(model_points, base), points + pred_t)

    if not refine:
        if idx[0].item() in sym_list:
            target = target[0].transpose(1, 0).contiguous().view(3, -1)
            pred = pred.permute(2, 0, 1).contiguous().view(3, -1)
            inds = knn(target.unsqueeze(0), pred.unsqueeze(0))
            target = torch.index_select(target, 1, inds.view(-1).detach() - 1)
            target = target.view(3, bs * num_p, num_point_mesh).permute(1, 2, 0).contiguous()
            pred = pred.view(3, bs * num_p, num_point_mesh).permute(1, 2, 0).contiguous()

    dis = torch.mean(torch.norm((pred - target), dim=2), dim=1)
    loss = torch.mean((dis * pred_c - w * torch.log(pred_c)), dim=0)

    pred_c = pred_c.view(bs, num_p)
    how_max, which_max = torch.max(pred_c, 1)
    dis = dis.view(bs, num_p)

    t = ori_t[which_max[0]] + points[which_max[0]]
    points = points.view(1, bs * num_p, 3)

    ori_base = ori_base[which_max[0]].view(1, 3, 3).contiguous()
    ori_t = t.repeat(bs * num_p, 1).contiguous().view(1, bs * num_p, 3)
    new_points = torch.bmm((points - ori_t), ori_base).contiguous()

    new_target = ori_target[0].view(1, num_point_mesh, 3).contiguous()
    ori_t = t.repeat(num_point_mesh, 1).contiguous().view(1, num_point_mesh, 3)
    new_target = torch.bmm((new_target - ori_t), ori_base).contiguous()

    # print('------------> ', dis[0][which_max[0]].item(), pred_c[0][which_max[0]].item(), idx[0].item())

    del knn
    return loss, dis[0][which_max[0]], new_points.detach(), new_target.detach()


class Loss(_Loss):

    def __init__(self, num_points_mesh, sym_list):
        super(Loss, self).__init__(True)
        self.num_pt_mesh = num_points_mesh
        self.sym_list = sym_list

    def forward(self, pred_r, pred_t, pred_c, target, model_points, idx, points, w, refine):
        return loss_calculation(pred_r, pred_t, pred_c, target, model_points, idx, points, w, refine, self.num_pt_mesh, self.sym_list)
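If I read the trace correctly, the only line that hits the legacy path is inds = knn(target.unsqueeze(0), pred.unsqueeze(0)). One workaround I am considering (my own sketch, not the DenseFusion authors' fix) is to replace the compiled KNearestNeighbor extension with a new-style Function built on plain torch ops; the class name KNearestNeighborApply and the torch.cdist-based body below are my assumptions, and they keep the 1-based index convention implied by inds.view(-1).detach() - 1 above:

import torch
from torch.autograd import Function

class KNearestNeighborApply(Function):
    @staticmethod
    def forward(ctx, ref, query):
        # ref, query: (1, 3, N) point sets, as passed in loss_calculation
        dist = torch.cdist(query.transpose(1, 2), ref.transpose(1, 2))  # (1, Nq, Nr)
        # index of the nearest ref point for every query point, 1-based to
        # match the existing "inds.view(-1).detach() - 1"
        return dist.argmin(dim=2) + 1

    @staticmethod
    def backward(ctx, grad_output):
        # picking indices is not differentiable, so no gradients flow back
        return None, None

# call site in loss_calculation: drop "knn = KNearestNeighbor(1)" and "del knn", then use
#     inds = KNearestNeighborApply.apply(target.unsqueeze(0), pred.unsqueeze(0))

For large query sets the full distance matrix can get big, so chunking the query dimension may be needed to keep memory down.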

Could anyone please advise how to solve this?

