Contents

  • Problem description:
  • Debugging log:
    • Debug attempt 1:
      • Output:
      • Analysis:
    • Debug attempt 2:
      • Output:
      • Analysis:
  • Final code 1:
    • Setting device = 'cpu':
    • Setting device = 'cuda':
  • Output:
    • Setting device = 'cpu':
    • Setting device = 'cuda':
  • Code backup:

Problem description:

The errors hit in this debugging session all stem from mismatched data types, specifically mismatches among cpu, cuda, torch.cuda.FloatTensor, and torch.FloatTensor.

The session relies on a third-party library called torchsnooper, which monitors the type, shape, and device of every variable in a deep-network program, which is exactly what is needed to track down such mismatches.

GitHub link: TorchSnooper
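
As a minimal usage sketch (my own toy example, assuming torchsnooper has been installed with pip install torchsnooper), the snoop() decorator wraps any function whose tensors you want traced:

import torch
import torchsnooper

# snoop() logs each executed line together with the shape, dtype, and
# device of every tensor variable the function touches.
@torchsnooper.snoop()
def double(x):
    return x * 2

double(torch.randn(3))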

The error messages encountered were:

RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
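
Both messages describe the same mismatch from opposite directions. A toy reproduction of the second one (my own example, not the project code; the exact message wording can vary by PyTorch version):

import torch
import torch.nn as nn

conv = nn.Conv3d(1, 8, kernel_size=3)   # weights start on the CPU
x = torch.randn(1, 1, 8, 8, 8)
if torch.cuda.is_available():
    try:
        conv(x.cuda())                  # input on GPU, weights on CPU
    except RuntimeError as e:
        print(e)                        # "Input type ... should be the same"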
P.S.

Because I initially misunderstood the paper's network architecture, the network I first built diverged considerably from the architecture in the paper.

The correct network architecture is as follows (the architecture figure from the original post is omitted here):

The corresponding final source code:

import torch.nn as nn
from torch.nn import init
import torch.nn.functional as F
import torch
import os
from torchsummary import summary
import torchsnooper

patch_size = 17
batch_size = 20
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(batch_size, 1, 103, patch_size, patch_size, device=device)

# ----------------------- Added: get layer dimensions while building the network -----------------------
# @torchsnooper.snoop()
class Net(nn.Module):
    @staticmethod
    def weight_init(m):
        if isinstance(m, nn.Linear) or isinstance(m, nn.Conv3d):
            init.xavier_uniform_(m.weight.data)
            init.constant_(m.bias.data, 0)

    def _get_final_flattened_size(self):
        with torch.no_grad():
            x = torch.zeros((1, 1, 103, patch_size, patch_size), device=device)
            x = self.pool1(self.conv1(x))
            x = self.pool2(self.conv2(x))
            x = self.conv3(x)
            _, t, c, w, h = x.size()
        return t * c * w * h

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1)).cuda()
        self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1)).cuda()
        self.conv3 = nn.Conv3d(2*32, 4*32, (32, 3, 3), padding=(1, 0, 0)).cuda()
        self.pool1 = nn.MaxPool3d((1, 2, 2), stride=(1, 2, 2)).cuda()
        self.pool2 = nn.MaxPool3d((1, 2, 2), stride=(1, 2, 2)).cuda()
        self.features_size = self._get_final_flattened_size()
        self.fc = nn.Linear(self.features_size, 10).cuda()
        self.apply(self.weight_init)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool1(x)
        x = F.relu(self.conv2(x))
        x = self.pool2(x)
        x = F.relu(self.conv3(x))
        x = x.view(-1, self.features_size)
        x = self.fc(x)
        return x

net = Net()
net.to(device)
print(net.to(device))
# os.system('pause')
summary(net.to(device), (1, 103, patch_size, patch_size), device=device)
# ----------------------- Added: get layer dimensions while building the network -----------------------

Here nn.MaxPool3d((1,2,2), stride=(1,2,2)) sets both the kernel size and the stride of the 3-D max pooling to 1 along the Depth axis (of Depth×Width×Height), so that only the W and H dimensions are halved while D stays unchanged.
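
A quick shape check (a standalone snippet of mine, not from the post) confirms this:

import torch
import torch.nn as nn

# Kernel and stride are 1 along D, 2 along H and W.
pool = nn.MaxPool3d((1, 2, 2), stride=(1, 2, 2))
x = torch.randn(1, 32, 74, 16, 16)   # (N, C, D, H, W)
print(pool(x).shape)                 # torch.Size([1, 32, 74, 8, 8])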

The corresponding output:

E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
Net(
  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1))
  (conv2): Conv3d(32, 64, kernel_size=(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1))
  (conv3): Conv3d(64, 128, kernel_size=(32, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0))
  (pool1): MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2), padding=0, dilation=1, ceil_mode=False)
  (pool2): MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2), padding=0, dilation=1, ceil_mode=False)
  (fc): Linear(in_features=2048, out_features=10, bias=True)
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv3d-1       [-1, 32, 74, 16, 16]          16,416
         MaxPool3d-2         [-1, 32, 74, 8, 8]               0
            Conv3d-3         [-1, 64, 45, 6, 6]       1,638,464
         MaxPool3d-4         [-1, 64, 45, 3, 3]               0
            Conv3d-5        [-1, 128, 16, 1, 1]       2,359,424
            Linear-6                   [-1, 10]          20,490
================================================================
Total params: 4,034,794
Trainable params: 4,034,794
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.11
Forward/backward pass size (MB): 6.79
Params size (MB): 15.39
Estimated Total Size (MB): 22.29
----------------------------------------------------------------

Process finished with exit code 0
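
Worth highlighting in the code above: _get_final_flattened_size pushes a dummy zero tensor through the conv/pool stack under torch.no_grad() to discover how many features reach the fully connected layer. A generic standalone sketch of the same trick (my own helper, not from the post):

import torch
import torch.nn as nn

# Run one zero tensor through the feature extractor to learn the
# flattened size, instead of computing it by hand.
def flattened_size(extractor: nn.Module, input_shape) -> int:
    with torch.no_grad():
        out = extractor(torch.zeros(1, *input_shape))
    return int(out.numel())

extractor = nn.Sequential(nn.Conv3d(1, 8, 3), nn.MaxPool3d(2))
print(flattened_size(extractor, (1, 103, 17, 17)))   # 8*50*7*7 = 19600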

Debugging log:

Debug attempt 1:
Output:
E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
torch.Size([10, 1, 103, 17, 17])
Source path:... C:/Users/73416/PycharmProjects/HSIproject/test.py
Starting var:.. self = REPR FAILED
Starting var:.. __class__ = <class '__main__.Net'>
22:19:30.891404 call        63     def __init__(self):
22:19:30.892400 line        64         super(Net, self).__init__()
Modified var:.. self = Net()
22:19:30.892400 line        65         self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1)))
22:19:30.893395 line        66         self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1)))
22:19:30.904366 line        67         self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...=(4, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0)))
22:19:30.906358 line        68         self.pool1 = nn.MaxPool3d(2, stride = 2)
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
22:19:30.907359 line        69         self.pool2 = nn.MaxPool3d(2, stride = 2)
22:19:30.907359 return      69         self.pool2 = nn.MaxPool3d(2, stride = 2)
Return value:.. None
Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
Starting var:.. x = tensor<(2, 1, 103, 17, 17), float32, cuda:0>
22:19:33.521693 call        71     def forward(self,x):
22:19:33.523689 line        72         x = F.relu(self.conv1(x))
conv1: torch.Size([2, 32, 74, 16, 16])
Modified var:.. x = tensor<(2, 32, 74, 16, 16), float32, cuda:0, grad>
22:19:34.297627 line        73         print('conv1:', x.size())
22:19:34.312655 line        74         x = self.pool1(x)
pool1: torch.Size([2, 32, 37, 8, 8])
Modified var:.. x = tensor<(2, 32, 37, 8, 8), float32, cuda:0, grad>
22:19:34.315645 line        75         print('pool1:', x.size())
22:19:34.317638 line        76         x = F.relu(self.conv2(x))
Modified var:.. x = tensor<(2, 64, 8, 6, 6), float32, cuda:0, grad>
22:19:34.318636 line        77         print('conv2:', x.size())
conv2: torch.Size([2, 64, 8, 6, 6])
pool2: torch.Size([2, 64, 4, 3, 3])
22:19:34.322625 line        78         x = self.pool2(x)
Modified var:.. x = tensor<(2, 64, 4, 3, 3), float32, cuda:0, grad>
22:19:34.323624 line        79         print('pool2:', x.size())
22:19:34.324619 line        80         x = F.relu(self.conv3(x))
Modified var:.. x = tensor<(2, 128, 3, 1, 1), float32, cuda:0, grad>
22:19:34.325617 line        81         print('conv3:', x.size())
conv3: torch.Size([2, 128, 3, 1, 1])
22:19:34.326614 line        82         features_size = self._get_final_flattened_size()
Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
22:19:34.327611 call        53     def _get_final_flattened_size(self):
22:19:34.327611 line        54         with torch.no_grad():
22:19:34.327611 line        55             x = torch.zeros((batch_size, 1, 103,
22:19:34.327611 line        56                              patch_size, patch_size))
New var:....... x = tensor<(10, 1, 103, 17, 17), float32, cpu>
22:19:34.328577 line        57             x = self.pool1(self.conv1(x))
22:19:34.332566 exception   57             x = self.pool1(self.conv1(x))
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'
Call ended by exception
22:19:34.337552 exception   82         features_size = self._get_final_flattened_size()
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'
Call ended by exception
Traceback (most recent call last):
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 92, in <module>
    summary(net,(1, 103, patch_size, patch_size),batch_size,device='cuda')
  File "E:\Anaconda\lib\site-packages\torchsummary\torchsummary.py", line 72, in summary
    model(*x)
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Anaconda\lib\site-packages\pysnooper\tracer.py", line 256, in simple_wrapper
    return function(*args, **kwargs)
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 82, in forward
    features_size = self._get_final_flattened_size()
  File "E:\Anaconda\lib\site-packages\pysnooper\tracer.py", line 256, in simple_wrapper
    return function(*args, **kwargs)
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 57, in _get_final_flattened_size
    x = self.pool1(self.conv1(x))
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\conv.py", line 448, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'

Process finished with exit code 1
Analysis:

The reported error is:

RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'

So the failure comes from a cpu/cuda device mismatch.

Monitoring the program's variables with @torchsnooper.snoop() reveals that exactly one variable lives on cpu, while every other variable lives on cuda. Searching the trace above shows the offending tensor is created at line 56 of the program:

22:19:34.327611 line        55             x = torch.zeros((batch_size, 1, 103,
22:19:34.327611 line        56                              patch_size, patch_size))
New var:....... x = tensor<(10, 1, 103, 17, 17), float32, cpu>

So the fix is simply to create the tensor on cuda at that point:

x = torch.zeros((batch_size, 1, 103, patch_size, patch_size), device = 'cuda')
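
A variant that avoids hard-coding 'cuda' (my own suggestion, not from the original post) is to read the device off the module's own parameters, so the helper works wherever the model lives:

# Hypothetical device-agnostic rewrite of Net._get_final_flattened_size:
# create the dummy tensor on whatever device the weights already occupy.
def _get_final_flattened_size(self):
    dev = next(self.parameters()).device
    with torch.no_grad():
        x = torch.zeros((1, 1, 103, patch_size, patch_size), device=dev)
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2(x))
        x = self.conv3(x)
        _, t, c, w, h = x.size()
    return t * c * w * h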
Debug attempt 2:
Output:
E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
Source path:... C:/Users/73416/PycharmProjects/HSIproject/test.py
torch.Size([10, 1, 103, 17, 17])
Starting var:.. self = REPR FAILED
Starting var:.. __class__ = <class '__main__.Net'>
22:28:30.503370 call        63     def __init__(self):
22:28:30.503370 line        64         super(Net, self).__init__()
Modified var:.. self = Net()
22:28:30.503370 line        65         self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1)))
22:28:30.504392 line        66         self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1)))
22:28:30.516367 line        67         self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...=(4, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0)))
22:28:30.518328 line        68         self.pool1 = nn.MaxPool3d(2, stride = 2)
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
22:28:30.518328 line        69         self.pool2 = nn.MaxPool3d(2, stride = 2)
22:28:30.518328 line        70         self.features_size = self._get_final_flattened_size()
Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
22:28:30.519359 call        53     def _get_final_flattened_size(self):
22:28:30.519359 line        54         with torch.no_grad():
22:28:30.519359 line        55             x = torch.zeros((batch_size, 1, 103,
22:28:30.519359 line        56                              patch_size, patch_size),device='cuda')
New var:....... x = tensor<(10, 1, 103, 17, 17), float32, cuda:0>
22:28:30.520324 line        57             x = self.pool1(self.conv1(x))
22:28:30.523339 exception   57             x = self.pool1(self.conv1(x))
RuntimeError: Input type (torch.cuda.FloatTensor...eight type (torch.FloatTensor) should be the same
Call ended by exception
22:28:30.526306 exception   70         self.features_size = self._get_final_flattened_size()
RuntimeError: Input type (torch.cuda.FloatTensor...eight type (torch.FloatTensor) should be the same
Call ended by exception
Traceback (most recent call last):
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 92, in <module>
    net = Net().to("cuda")
  File "E:\Anaconda\lib\site-packages\pysnooper\tracer.py", line 256, in simple_wrapper
    return function(*args, **kwargs)
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 70, in __init__
    self.features_size = self._get_final_flattened_size()
  File "E:\Anaconda\lib\site-packages\pysnooper\tracer.py", line 256, in simple_wrapper
    return function(*args, **kwargs)
  File "C:/Users/73416/PycharmProjects/HSIproject/test.py", line 57, in _get_final_flattened_size
    x = self.pool1(self.conv1(x))
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Anaconda\lib\site-packages\torch\nn\modules\conv.py", line 448, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

Process finished with exit code 1
Analysis:

The reported error is:

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

The root cause is that the input x and the model net sit on different devices: one on the CPU and the other on the GPU.

There are two ways to fix this, as sketched below:

  • Put both the input x and the model net on the CPU.
  • Put both the input x and the model net on the GPU.

This time I settled on the CPU, because after moving everything to the GPU the error persisted with an unchanged message.
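
A minimal sketch of the two options (toy code of mine, assuming the Net class above):

# Pick one device and keep the input and the model on it together.
device = 'cpu'                # option 1; use 'cuda' for option 2
net = Net().to(device)        # .to() moves every registered layer
x = x.to(device)              # move the input to the same device
out = net(x)                  # devices agree, so no RuntimeError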

The revised source code is given in the "Final code" section.

Final code 1:

Setting device = 'cpu':

Here the model and every tensor it uses have their device set to 'cpu'.

import torch.nn as nn
from torch.nn import init
import torch.nn.functional as F
import torch
import os
from torchsummary import summary
import torchsnooper

patch_size = 17
batch_size = 20
x = torch.randn(batch_size, 1, 103, patch_size, patch_size, device='cpu')

# ----------------------- Added: get layer dimensions while building the network -----------------------
@torchsnooper.snoop()
class Net(nn.Module):
    def _get_final_flattened_size(self):
        with torch.no_grad():
            x = torch.zeros((batch_size, 1, 103, patch_size, patch_size), device='cpu')
            x = self.pool1(self.conv1(x))
            x = self.pool2(self.conv2(x))
            x = self.conv3(x)
            _, t, c, w, h = x.size()
        return t * c * w * h

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
        self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
        self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
        self.pool1 = nn.MaxPool3d(2, stride=2)
        self.pool2 = nn.MaxPool3d(2, stride=2)
        self.features_size = self._get_final_flattened_size()
        self.fc = nn.Linear(self.features_size, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool1(x)
        x = F.relu(self.conv2(x))
        x = self.pool2(x)
        x = F.relu(self.conv3(x))
        x = x.view(-1, self.features_size)
        x = self.fc(x)
        return x

    def print(self):
        print(self.features_size)

net = Net()
summary(net, (1, 103, patch_size, patch_size), device='cpu')
net.print()
# ----------------------- Added: get layer dimensions while building the network -----------------------
# # torch.Size([10, 1, 103, 17, 17])
# # conv1: torch.Size([10, 32, 74, 16, 16])
# # pool1: torch.Size([10, 32, 37, 8, 8])
# # conv2: torch.Size([10, 64, 8, 6, 6])
# # pool2: torch.Size([10, 64, 4, 3, 3])
# # conv3: torch.Size([10, 128, 3, 1, 1])
# # features_size: 384
# # final_size: torch.Size([10, 10])
Setting device = 'cuda':

Here, inside the Net class's __init__, each layer gets a .cuda() suffix, which places that layer on the GPU.

The earlier error occurred because the network (more precisely, its layers) sat on the CPU rather than the GPU.

Admittedly, this was the first time I had seen this style (2019-09-20).

import torch.nn as nn
from torch.nn import init
import torch.nn.functional as F
import torch
import os
from torchsummary import summary
import torchsnooper

patch_size = 17
batch_size = 20
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(batch_size, 1, 103, patch_size, patch_size, device=device)

# ----------------------- Added: get layer dimensions while building the network -----------------------
# @torchsnooper.snoop()
class Net(nn.Module):
    @staticmethod
    def weight_init(m):
        if isinstance(m, nn.Linear) or isinstance(m, nn.Conv3d):
            init.xavier_uniform_(m.weight.data)
            init.constant_(m.bias.data, 0)

    def _get_final_flattened_size(self):
        with torch.no_grad():
            x = torch.zeros((1, 1, 103, patch_size, patch_size), device=device)
            x = self.pool1(self.conv1(x))
            x = self.pool2(self.conv2(x))
            x = self.conv3(x)
            _, t, c, w, h = x.size()
        return t * c * w * h

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1)).cuda()
        self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1)).cuda()
        self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0)).cuda()
        self.pool1 = nn.MaxPool3d(2, stride=2).cuda()
        self.pool2 = nn.MaxPool3d(2, stride=2).cuda()
        self.features_size = self._get_final_flattened_size()
        self.fc = nn.Linear(self.features_size, 10).cuda()
        self.apply(self.weight_init)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool1(x)
        x = F.relu(self.conv2(x))
        x = self.pool2(x)
        x = F.relu(self.conv3(x))
        x = x.view(-1, self.features_size)
        x = self.fc(x)
        return x

net = Net()
net.to(device)
print(net.to(device))
# os.system('pause')
summary(net.to(device), (1, 103, patch_size, patch_size), device=device)
# ----------------------- Added: get layer dimensions while building the network -----------------------
# # torch.Size([10, 1, 103, 17, 17])
# # conv1: torch.Size([10, 32, 74, 16, 16])
# # pool1: torch.Size([10, 32, 37, 8, 8])
# # conv2: torch.Size([10, 64, 8, 6, 6])
# # pool2: torch.Size([10, 64, 4, 3, 3])
# # conv3: torch.Size([10, 128, 3, 1, 1])
# # features_size: 384
# # final_size: torch.Size([10, 10])

One point worth stressing: this is the first time in all my experience (as of 2019-09-20) that I have seen .cuda() appended to each layer directly in the network class's initializer; a more conventional alternative is sketched below.
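
For contrast, the usual idiom defines every layer device-free and moves the whole module once with .to(device). A minimal sketch of mine (not the post's code); note the post could not use this directly, because _get_final_flattened_size runs inside __init__, before any external net.to(device) call can move the weights:

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Conventional alternative: no per-layer .cuda(); one .to(device) call
# moves every registered submodule and parameter at once.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))

    def forward(self, x):
        return torch.relu(self.conv(x))

net = TinyNet().to(device)
x = torch.randn(2, 1, 103, 17, 17, device=device)
print(net(x).shape)   # torch.Size([2, 32, 74, 16, 16])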

Output:

Setting device = 'cpu':
E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
Done!
Source path:... C:/Users/73416/PycharmProjects/HSIproject/test.py
Starting var:.. self = REPR FAILED
Starting var:.. __class__ = <class '__main__.Net'>
14:08:20.721270 call        62     def __init__(self):
14:08:20.721270 line        63         super(Net, self).__init__()
Modified var:.. self = Net()
14:08:20.721270 line        64         self.conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1)))
14:08:20.767246 line        65         self.conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1)))
14:08:20.797150 line        66         self.conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...=(4, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0)))
14:08:20.799145 line        67         self.pool1 = nn.MaxPool3d(2, stride = 2)
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
14:08:20.799145 line        68         self.pool2 = nn.MaxPool3d(2, stride = 2)
14:08:20.799145 line        69         self.features_size = self._get_final_flattened_size()
Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...tride=2, padding=0, dilation=1, ceil_mode=False))
14:08:20.799145 call        52     def _get_final_flattened_size(self):
14:08:20.799145 line        53         with torch.no_grad():
14:08:20.800142 line        54             x = torch.zeros((batch_size, 1, 103,
14:08:20.800142 line        55                              patch_size, patch_size),device='cpu')
New var:....... x = tensor<(20, 1, 103, 17, 17), float32, cpu>
14:08:20.811084 line        56             x = self.pool1(self.conv1(x))
Modified var:.. x = tensor<(20, 32, 37, 8, 8), float32, cpu>
14:08:21.515198 line        57             x = self.pool2(self.conv2(x))
Modified var:.. x = tensor<(20, 64, 4, 3, 3), float32, cpu>
14:08:21.967987 line        58             x = self.conv3(x)
Modified var:.. x = tensor<(20, 128, 3, 1, 1), float32, cpu>
14:08:21.985939 line        59             _, t, c, w, h = x.size()
New var:....... _ = 20
New var:....... t = 128
New var:....... c = 3
New var:....... w = 1
New var:....... h = 1
14:08:21.986969 line        60         return t * c * w * h
14:08:21.986969 return      60         return t * c * w * h
Return value:.. 384
14:08:21.986969 line        70         self.fc = nn.Linear(self.features_size, 10)
Modified var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...ear(in_features=384, out_features=10, bias=True))
14:08:21.987934 return      70         self.fc = nn.Linear(self.features_size, 10)
Return value:.. None
Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...ear(in_features=384, out_features=10, bias=True))
Starting var:.. x = tensor<(2, 1, 103, 17, 17), float32, cpu>
14:08:22.012867 call        73     def forward(self,x):
14:08:22.013864 line        74         x = F.relu(self.conv1(x))
Modified var:.. x = tensor<(2, 32, 74, 16, 16), float32, cpu, grad>
14:08:22.148504 line        75         x = self.pool1(x)
Modified var:.. x = tensor<(2, 32, 37, 8, 8), float32, cpu, grad>
14:08:22.165459 line        76         x = F.relu(self.conv2(x))
Modified var:.. x = tensor<(2, 64, 8, 6, 6), float32, cpu, grad>
14:08:22.208344 line        77         x = self.pool2(x)
Modified var:.. x = tensor<(2, 64, 4, 3, 3), float32, cpu, grad>
14:08:22.209340 line        78         x = F.relu(self.conv3(x))
Modified var:.. x = tensor<(2, 128, 3, 1, 1), float32, cpu, grad>
14:08:22.210337 line        79         x = x.view(-1, self.features_size)
Modified var:.. x = tensor<(2, 384), float32, cpu, grad>
14:08:22.221308 line        80         x = self.fc(x)
Modified var:.. x = tensor<(2, 10), float32, cpu, grad>
14:08:22.238264 line        81         return x
14:08:22.239262 return      81         return x
Return value:.. tensor<(2, 10), float32, cpu, grad>
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv3d-1       [-1, 32, 74, 16, 16]          16,416
         MaxPool3d-2         [-1, 32, 37, 8, 8]               0
            Conv3d-3          [-1, 64, 8, 6, 6]       1,638,464
         MaxPool3d-4          [-1, 64, 4, 3, 3]               0
            Conv3d-5         [-1, 128, 3, 1, 1]         295,040
            Linear-6                   [-1, 10]           3,850
================================================================
Total params: 1,953,770
Trainable params: 1,953,770
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.11
Forward/backward pass size (MB): 5.36
Params size (MB): 7.45
Estimated Total Size (MB): 12.93
----------------------------------------------------------------
384
Starting var:.. self = Net(  (conv1): Conv3d(1, 32, kernel_size=(32, 4,...ear(in_features=384, out_features=10, bias=True))
14:08:22.299100 call        83     def print(self):
14:08:22.300098 line        84         print(self.features_size)
14:08:22.300098 return      84         print(self.features_size)
Return value:.. None

Process finished with exit code 0

Setting device = 'cuda':

E:\Anaconda\python.exe C:/Users/73416/PycharmProjects/HSIproject/test.py
Net(
  (conv1): Conv3d(1, 32, kernel_size=(32, 4, 4), stride=(1, 1, 1), padding=(1, 1, 1))
  (conv2): Conv3d(32, 64, kernel_size=(32, 5, 5), stride=(1, 1, 1), padding=(1, 1, 1))
  (conv3): Conv3d(64, 128, kernel_size=(4, 3, 3), stride=(1, 1, 1), padding=(1, 0, 0))
  (pool1): MaxPool3d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (pool2): MaxPool3d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (fc): Linear(in_features=384, out_features=10, bias=True)
)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv3d-1       [-1, 32, 74, 16, 16]          16,416
         MaxPool3d-2         [-1, 32, 37, 8, 8]               0
            Conv3d-3          [-1, 64, 8, 6, 6]       1,638,464
         MaxPool3d-4          [-1, 64, 4, 3, 3]               0
            Conv3d-5         [-1, 128, 3, 1, 1]         295,040
            Linear-6                   [-1, 10]           3,850
================================================================
Total params: 1,953,770
Trainable params: 1,953,770
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.11
Forward/backward pass size (MB): 5.36
Params size (MB): 7.45
Estimated Total Size (MB): 12.93
----------------------------------------------------------------

Process finished with exit code 0

Code backup:

import torch.nn as nn
from torch.nn import init
import torch.nn.functional as F
import torch
import os
from torchsummary import summary
import torchsnooper

patch_size = 17
batch_size = 20
x = torch.randn(batch_size,1,103,patch_size,patch_size,device='cpu')
# ----------------------- Added: get dimensions without building a network class -------------------------
# conv1 = nn.Conv3d(1, 32, (32, 4, 4), padding=(1, 1, 1))
# conv2 = nn.Conv3d(32, 2*32, (32, 5, 5), padding=(1, 1, 1))
# conv3 = nn.Conv3d(2*32, 4*32, (4, 3, 3), padding=(1, 0, 0))
# pool1 = nn.MaxPool3d(2, stride = 2)
# pool2 = nn.MaxPool3d(2, stride = 2)
#
# def _get_final_flattened_size():
#         with torch.no_grad():
#             x = torch.zeros((batch_size, 1, 103,
#                              patch_size, patch_size))
#             x = pool1(conv1(x))
#             x = pool2(conv2(x))
#             x = conv3(x)
#             _, t, c, w, h = x.size()
#         return t * c * w * h
#
# x = F.relu(conv1(x))
# print('conv1:', x.size())
# x = pool1(x)
# print('pool1:', x.size())
# x = F.relu(conv2(x))
# print('conv2:', x.size())
# x = pool2(x)
# print('pool2:', x.size())
# x = F.relu(conv3(x))
# print('conv3:', x.size())
# features_size = _get_final_flattened_size()
# print('features_size:', features_size)
# fc = nn.Linear(features_size, 10)
# x = x.view(-1, features_size)
# x = fc(x)
# print('final_size:', x.size())
print('Done!')
# ----------------------- Added: get dimensions without building a network class -------------------------
# # torch.Size([10, 1, 103, 17, 17])
# # conv1: torch.Size([10, 32, 74, 16, 16])
# # pool1: torch.Size([10, 32, 37, 8, 8])
# # conv2: torch.Size([10, 64, 8, 6, 6])
# # pool2: torch.Size([10, 64, 4, 3, 3])
# # conv3: torch.Size([10, 128, 3, 1, 1])
# # features_size: 384
# # final_size: torch.Size([10, 10])
