Table of Contents

  • Data Loading
    • Dataset
    • DataLoader
  • Visualization with TensorBoard
  • Transforms
    • ToTensor
    • Normalize
    • Resize
    • RandomCrop
  • torch.nn: The Basic Building Blocks of Neural Networks
    • Module: the base class for all neural networks
    • Convolution Layers
    • Pooling Layers
    • Non-linear Activations
    • Normalization Layers
    • Recurrent Layers
    • Transformer Layers
    • Linear Layers
    • Dropout Layers
    • Sparse Layers
    • Distance Functions
    • Loss Functions
  • torch.nn.Sequential
  • Building a Network in Practice
  • Loss Functions
  • Backpropagation
  • Optimizers: torch.optim
  • Modifying and Using Existing Network Models
  • Saving and Loading Models
  • A Complete Model Training Workflow
  • Training with a GPU
  • A Complete Model Inference Workflow

Data Loading

  • Dataset: provides a way to access the data and its labels
    • how to get each sample and its label
    • how many samples there are in total
  • DataLoader: packages the data into batches in the form the network needs

Dataset

Overview: all datasets should subclass Dataset. Every subclass must override __getitem__ (which returns a sample and its label) and may override __len__ (which returns the number of samples).

Example 1:

Input: photos of ants; label: ants

(the ant photos are stored in a folder named ants)

from torch.utils.data import Dataset
from PIL import Image  # for reading images
import os

class MyData(Dataset):
    def __init__(self, root_dir, label_dir):
        self.root_dir = root_dir
        self.label_dir = label_dir
        self.path = os.path.join(self.root_dir, self.label_dir)  # join paths in an OS-appropriate way
        self.img_path = os.listdir(self.path)  # list of all file names under the path

    def __getitem__(self, idx):
        img_name = self.img_path[idx]  # file name of the idx-th image
        img_item_path = os.path.join(self.root_dir, self.label_dir, img_name)  # full path of this image
        img = Image.open(img_item_path)
        label = self.label_dir
        return img, label

    def __len__(self):
        return len(self.img_path)

root_dir = "dataset/train"
ants_label_dir = 'ants'
ants_dataset = MyData(root_dir, ants_label_dir)
img, label = ants_dataset[0]
img.show()

Example 2:

There are two folders, ants_image and ants_label: ants_image stores the ant images, and ants_label stores the labels (one txt file per image, named after the image, whose content is the label). This layout is commonly used when the labels are complex.
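
A minimal sketch of a Dataset for this second layout (the class name and the sorted listing are my own choices; the folder structure is as described above):

```python
import os
from torch.utils.data import Dataset
from PIL import Image

class MyDataWithLabelFiles(Dataset):
    # Hypothetical implementation for the ants_image / ants_label layout:
    # <image_dir>/xxx.jpg holds the image, <label_dir>/xxx.txt holds its label text.
    def __init__(self, root_dir, image_dir, label_dir):
        self.image_path = os.path.join(root_dir, image_dir)
        self.label_path = os.path.join(root_dir, label_dir)
        self.img_names = sorted(os.listdir(self.image_path))

    def __getitem__(self, idx):
        img_name = self.img_names[idx]
        img = Image.open(os.path.join(self.image_path, img_name))
        # the label file shares the image's base name, with a .txt extension
        label_file = os.path.splitext(img_name)[0] + ".txt"
        with open(os.path.join(self.label_path, label_file)) as f:
            label = f.read().strip()
        return img, label

    def __len__(self):
        return len(self.img_names)
```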

DataLoader

from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import torchvision

"""
dataset: the dataset to load from
batch_size: how many samples to draw each time
shuffle: True reshuffles the data every epoch
sampler
batch_sampler
num_workers: number of subprocesses used for loading
drop_last: whether to drop the last incomplete batch when the dataset size is not divisible by batch_size
"""

test_data = torchvision.datasets.CIFAR10("./dataset", train=False, transform=torchvision.transforms.ToTensor())  # built-in dataset
test_loader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=0, drop_last=True)

# the first image of the test set and its target
img, target = test_data[0]
print(img.shape)
print(target)

writer = SummaryWriter("logs")
for epoch in range(2):
    step = 0
    for data in test_loader:
        imgs, targets = data
        writer.add_images("Epoch:{}".format(epoch), imgs, step)
        step += 1
writer.close()

Visualization with TensorBoard

Example: plotting y = x

from torch.utils.tensorboard import SummaryWriter  # writes event files to log_dir that TensorBoard can parse

writer = SummaryWriter("logs")  # store the event files in the logs folder

# writer.add_image(tag, img_tensor [torch.Tensor/numpy.array/string], global_step) adds an image
# writer.add_scalar(tag, scalar_value (the y value), global_step (the x value)) adds a scalar

# y = x
for i in range(100):
    writer.add_scalar("y=x", i, i)
writer.close()

# ———— The event file has now been written; here is how to view it ————
# In the terminal, run: tensorboard --logdir=logs --port=6007, then open the printed URL in a browser
# --port specifies the port and can be omitted

Example: displaying an image

from torch.utils.tensorboard import SummaryWriter
import numpy as np
from PIL import Image

writer = SummaryWriter("logs")
image_path = "dataset/train/ants/0013035.jpg"
img_PIL = Image.open(image_path)
img_array = np.array(img_PIL)
writer.add_image("test", img_array, 1, dataformats='HWC')
writer.close()

Transforms

Commonly used classes:

ToTensor: converts a PIL Image or ndarray to a tensor

Compose: chains several transforms together and applies them in sequence

ToPILImage: converts a tensor or ndarray to a PIL Image

Normalize: normalizes a tensor image given per-channel means and standard deviations

Resize: resizes a PIL Image to the given size; if a single number is given, the smaller edge is matched to it and the aspect ratio is preserved

RandomCrop: crops at a random location

ToTensor

from torchvision import transforms  # ToTensor, Resize, etc., for transforming images
from torch.utils.tensorboard import SummaryWriter
from PIL import Image

image_path = "dataset/train/ants/0013035.jpg"
img = Image.open(image_path)

writer = SummaryWriter("logs")
tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
writer.add_image("Tensor_img", tensor_img)
writer.close()

Normalize

trans_norm = transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])  # three channels, mean and std both 0.5
# (input - 0.5) / 0.5 = 2 * input - 1
# input in [0, 1]
# result in [-1, 1]
img_norm = trans_norm(tensor_img)

Resize

trans_resize = transforms.Resize((512, 512))
img_resize = trans_resize(img)

# Compose + Resize
trans_resize_2 = transforms.Resize(512)
# Compose takes a list whose elements are transforms
trans_compose = transforms.Compose([trans_resize_2, tensor_trans])
img_resize_2 = trans_compose(img)

RandomCrop

trans_random = transforms.RandomCrop(512)
trans_compose_2 = transforms.Compose([trans_random, tensor_trans])
for i in range(10):
    img_crop = trans_compose_2(img)
    writer.add_image("RandomCrop", img_crop, i)

torch.nn: The Basic Building Blocks of Neural Networks

Containers: the skeleton that holds the layers

Convolution Layers

Pooling Layers

Padding Layers

Non-linear Activations

Normalization Layers

Module: the base class for all neural networks

Example 1:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Example 2:

from torch import nn
import torch

class test(nn.Module):
    def __init__(self):
        super(test, self).__init__()

    def forward(self, input):
        output = input + 1
        return output

test_model = test()
x = torch.tensor(1.0)
output = test_model(x)

Convolution Layers

torch.nn.functional.conv2d:

import torch
import torch.nn.functional as F

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]])
kernel = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]])

input = torch.reshape(input, (1, 1, 5, 5))  # (batch_size, channels, height, width)
kernel = torch.reshape(kernel, (1, 1, 3, 3))

output = F.conv2d(input, kernel, stride=1)  # stride: step size; padding: zero-pad the border of the input
print(output)

torch.nn.Conv2d:

import torch
import torchvision
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=64)

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.conv1 = Conv2d(3, 6, 3, stride=1, padding=0)  # 3 input channels, 6 output channels, 3x3 kernel

    def forward(self, x):
        x = self.conv1(x)
        return x

mymo = mymodule()

writer = SummaryWriter("../logs")
step = 0
for data in dataloader:
    imgs, targets = data
    output = mymo(imgs)
    # print(output.shape)
    writer.add_images("input", imgs, step)
    output = torch.reshape(output, (-1, 3, 30, 30))  # 6 channels cannot be displayed directly, so fold them into extra batch entries
    writer.add_images("output", output, step)
    step += 1

Pooling Layers

torch.nn.MaxPool2d:

(the default stride equals kernel_size)

(ceil_mode=True keeps a pooling window even when it extends past the input; False discards it)

import torch
from torch import nn
import torchvision
from torch.nn import MaxPool2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32)
input = torch.reshape(input, (-1, 1, 5, 5))

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=True)

    def forward(self, input):
        output = self.maxpool1(input)
        return output

mymo = mymodule()
output = mymo(input)

dataset = torchvision.datasets.CIFAR10("../data", train=False, download=True, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)
writer = SummaryWriter("../logs_maxpool")
step = 0
for imgs, targets in dataloader:
    writer.add_images("input", imgs, step)
    output = mymo(imgs)
    writer.add_images("output", output, step)
    step += 1
writer.close()

Non-linear Activations

torch.nn.ReLU:

import torch
from torch import nn
from torch.nn import ReLU

input = torch.tensor([[1, -0.5],
                      [-1, 3]])
input = torch.reshape(input, (-1, 1, 2, 2))

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.relu1 = ReLU()

    def forward(self, input):
        output = self.relu1(input)
        return output

mymo = mymodule()
output = mymo(input)

torch.nn.Sigmoid:

import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torch.nn import ReLU, Sigmoid
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("../data", train=False, download=True, transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.relu1 = ReLU()
        self.sigmoid1 = Sigmoid()

    def forward(self, input):
        output = self.sigmoid1(input)
        return output

mymo = mymodule()
writer = SummaryWriter("../logs_relu")
step = 0
for imgs, targets in dataloader:
    writer.add_images("input", imgs, global_step=step)
    output = mymo(imgs)
    writer.add_images("output", output, step)
    step += 1
writer.close()

Normalization Layers

torch.nn.BatchNorm2d:

m = nn.BatchNorm2d(100, affine=False)  # 100 is num_features, i.e. the number of channels
input = torch.randn(20, 100, 35, 45)
output = m(input)

Recurrent Layers

nn.RNNBase

nn.RNN

nn.LSTM

nn.GRU

nn.RNNCell

nn.LSTMCell

nn.GRUCell
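
These recurrent modules share a common calling convention. A minimal usage sketch of nn.RNN (all sizes here are illustrative):

```python
import torch
from torch import nn

rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2)
input = torch.randn(5, 3, 10)   # (sequence length, batch size, input_size)
h0 = torch.randn(2, 3, 20)      # (num_layers, batch size, hidden_size)
output, hn = rnn(input, h0)
print(output.shape)             # last layer's output at every time step: (5, 3, 20)
print(hn.shape)                 # every layer's hidden state at the last step: (2, 3, 20)
```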

Transformer Layers

nn.Transformer

nn.TransformerEncoder

nn.TransformerDecoder

nn.TransformerEncoderLayer

nn.TransformerDecoderLayer
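
As a rough usage sketch, an encoder stack can be built from encoder layers like this (the sizes are illustrative, not from the original notes):

```python
import torch
from torch import nn

# one self-attention + feed-forward block
encoder_layer = nn.TransformerEncoderLayer(d_model=16, nhead=4)
# stack two such blocks into an encoder
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
src = torch.randn(10, 32, 16)  # (sequence length, batch size, d_model)
out = encoder(src)
print(out.shape)               # same shape as the input
```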

Linear Layers

nn.Linear

import torch
import torchvision
from torch import nn
from torch.nn import Linear
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=64)

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.linear1 = Linear(196608, 10)  # number of input features, number of output features

    def forward(self, input):
        output = self.linear1(input)
        return output

mymo = mymodule()

for imgs, targets in dataloader:
    output = torch.reshape(imgs, (1, 1, 1, -1))
    # roughly equivalent:
    # output = torch.flatten(imgs)
    output = mymo(output)

Dropout Layers

What they do: during training, randomly zero elements of the input tensor with probability p; this helps prevent overfitting.
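
A small sketch of the behavior: during training, survivors are also scaled by 1/(1-p) so the expected sum is preserved, while in eval mode the layer is the identity.

```python
import torch
from torch import nn

torch.manual_seed(0)          # fix the random mask for reproducibility
m = nn.Dropout(p=0.5)
x = torch.ones(4, 4)

m.train()
print(m(x))                   # surviving entries are scaled to 2.0 (= 1 / (1 - 0.5)), the rest are zeroed

m.eval()
print(m(x))                   # identity in evaluation mode
```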

Sparse Layers

nn.Embedding
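
A minimal usage sketch of nn.Embedding, which is a lookup table mapping integer indices to dense vectors (the sizes here are arbitrary):

```python
import torch
from torch import nn

embedding = nn.Embedding(num_embeddings=10, embedding_dim=3)  # 10 entries, each a 3-d vector
indices = torch.tensor([[1, 2, 4, 5],
                        [4, 3, 2, 9]])
out = embedding(indices)
print(out.shape)  # torch.Size([2, 4, 3]) — one 3-d vector per index
```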

Distance Functions

What they do: compute the similarity/distance between two inputs.
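
An illustrative sketch of the two distance modules in torch.nn (the sample vectors are my own):

```python
import torch
from torch import nn

x1 = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
x2 = torch.tensor([[1.0, 0.0], [1.0, 0.0]])

# cosine similarity between corresponding rows
cos = nn.CosineSimilarity(dim=1)
print(cos(x1, x2))    # 1.0 for identical directions, 0.0 for orthogonal ones

# p-norm distance between corresponding rows (p=2: Euclidean)
pdist = nn.PairwiseDistance(p=2)
print(pdist(x1, x2))
```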

Loss Functions

torch.nn.Sequential

Example:

model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)

Building a Network in Practice

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Linear, Flatten, Sequential
from torch.utils.tensorboard import SummaryWriter

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        # Method 1: define each layer separately
        self.conv1 = Conv2d(3, 32, 5, padding=2)  # in_channels, out_channels, kernel_size
        self.maxpool1 = MaxPool2d(2)
        self.conv2 = Conv2d(32, 32, 5, padding=2)
        self.maxpool2 = MaxPool2d(2)
        self.conv3 = Conv2d(32, 64, 5, padding=2)
        self.maxpool3 = MaxPool2d(2)
        self.flatten = Flatten()
        self.linear1 = Linear(1024, 64)
        self.linear2 = Linear(64, 10)
        # Method 2: use Sequential
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        # Method 1 (equivalent, layer by layer):
        # x = self.conv1(x)
        # x = self.maxpool1(x)
        # x = self.conv2(x)
        # x = self.maxpool2(x)
        # x = self.conv3(x)
        # x = self.maxpool3(x)
        # x = self.flatten(x)
        # x = self.linear1(x)
        # x = self.linear2(x)
        # Method 2:
        x = self.model1(x)
        return x

mymo = mymodule()

# sanity-check the network with a dummy input
input = torch.ones((64, 3, 32, 32))
output = mymo(input)

writer = SummaryWriter("../logs_seq")
writer.add_graph(mymo, input)  # visualize the network structure
writer.close()

Loss Functions

nn.L1Loss()

import torch
from torch import nn
from torch.nn import L1Loss, MSELoss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)
inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))

loss = L1Loss()  # reduction='sum' is also available
result = loss(inputs, targets)

nn.MSELoss()

loss_mse = MSELoss()
result_mse = loss_mse(inputs,targets)

nn.CrossEntropyLoss()

  • expects input of shape (N, C): N = batch_size, C = number of classes
  • expects targets of shape (N)

x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
x = torch.reshape(x, (1, 3))
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)

Backpropagation

from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
import torchvision
from torch.utils.data import DataLoader

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=64)

mymo = mymodule()
loss = nn.CrossEntropyLoss()
for imgs, targets in dataloader:
    outputs = mymo(imgs)
    result_loss = loss(outputs, targets)
    result_loss.backward()  # computes the gradients

Optimizers: torch.optim

optim.SGD: stochastic gradient descent

optim.Adam

import torch
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
import torchvision
from torch.utils.data import DataLoader

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=64)

mymo = mymodule()
loss = nn.CrossEntropyLoss()
optim = torch.optim.SGD(mymo.parameters(), lr=0.01)
for epoch in range(20):
    running_loss = 0.0
    for imgs, targets in dataloader:
        outputs = mymo(imgs)
        result_loss = loss(outputs, targets)
        optim.zero_grad()
        result_loss.backward()  # computes the gradients
        optim.step()
        running_loss += result_loss.item()
    print(running_loss)

Modifying and Using Existing Network Models

import torchvision
from torch import nn

vgg16_false = torchvision.models.vgg16(pretrained=False)
vgg16_true = torchvision.models.vgg16(pretrained=True)
print(vgg16_true)

# add a layer
vgg16_true.classifier.add_module('add_linear', nn.Linear(1000, 10))
# modify a layer
vgg16_false.classifier[6] = nn.Linear(4096, 10)

Saving and Loading Models

import torch
import torchvision

vgg16 = torchvision.models.vgg16(pretrained=False)

# Method 1: save the model structure together with its parameters
torch.save(vgg16, "vgg16_method1.pth")
# corresponding way to load
model = torch.load("vgg16_method1.pth")
# Pitfall: loading fails if the model's class definition is not available in the current file; it must be imported.

# Method 2: save only the parameters (officially recommended)
torch.save(vgg16.state_dict(), "vgg16_method2.pth")
# corresponding way to load
vgg16 = torchvision.models.vgg16(pretrained=False)
vgg16.load_state_dict(torch.load("vgg16_method2.pth"))

A Complete Model Training Workflow

import torchvision
import torch
from torch.utils.data import DataLoader
from torch import nn
from torch.utils.tensorboard import SummaryWriter
import time

train_data = torchvision.datasets.CIFAR10(root="../data", train=True, transform=torchvision.transforms.ToTensor(), download=True)
test_data = torchvision.datasets.CIFAR10(root="../data", train=False, transform=torchvision.transforms.ToTensor(), download=True)

train_data_size = len(train_data)
test_data_size = len(test_data)
print("Length of the training set: {}".format(train_data_size))
print("Length of the test set: {}".format(test_data_size))

train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x

# create the network model
mymo = mymodule()
"""
⭐
if torch.cuda.is_available():
    mymo = mymo.cuda()
"""

# loss function
loss_fn = nn.CrossEntropyLoss()
"""
⭐
if torch.cuda.is_available():
    loss_fn = loss_fn.cuda()
"""

# optimizer
learning_rate = 0.01  # 1e-2
optimizer = torch.optim.SGD(mymo.parameters(), lr=learning_rate)

# training parameters
total_train_step = 0
total_test_step = 0
epoch = 10

# add tensorboard
writer = SummaryWriter("../logs_train")

start_time = time.time()
for i in range(epoch):
    print("----------- Epoch {} begins -----------".format(i + 1))
    mymo.train()  # like eval(), only has an effect when the model contains dropout or batchnorm layers
    for imgs, targets in train_dataloader:
        """
        ⭐
        if torch.cuda.is_available():
            imgs = imgs.cuda()
            targets = targets.cuda()
        """
        outputs = mymo(imgs)
        loss = loss_fn(outputs, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_train_step += 1
        if total_train_step % 100 == 0:
            end_time = time.time()
            print(end_time - start_time)
            print("Training step: {}, Loss: {}".format(total_train_step, loss.item()))  # item() converts a one-element tensor to a plain Python number
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # evaluation
    mymo.eval()
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for imgs, targets in test_dataloader:
            """
            ⭐
            if torch.cuda.is_available():
                imgs = imgs.cuda()
                targets = targets.cuda()
            """
            outputs = mymo(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("Loss on the whole test set: {}".format(total_test_loss))
    print("Accuracy on the whole test set: {}".format(total_accuracy / test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)
    total_test_step += 1

    torch.save(mymo, "mymo_{}.pth".format(i))
    print("Model saved")

writer.close()

How to count correct predictions:

import torch

outputs = torch.tensor([[0.1, 0.2], [0.3, 0.4]])
print(outputs.argmax(1))  # argmax(1) takes the max along each row, argmax(0) along each column
preds = outputs.argmax(1)
targets = torch.tensor([0, 1])
print((preds == targets).sum())

Training with a GPU

Method 1:

Call .cuda() on the network model, the data (inputs and targets), and the loss function (see the ⭐ parts of the training workflow above).

Method 2:

Call .to(device), where

device = torch.device("cpu") or torch.device("cuda") or torch.device("cuda:0")

Using the complete training workflow above as an example:

import torchvision
import torch
from torch.utils.data import DataLoader
from torch import nn
from torch.utils.tensorboard import SummaryWriter
import time

"""
⭐ define the training device
device = torch.device("cpu")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
"""

train_data = torchvision.datasets.CIFAR10(root="../data", train=True, transform=torchvision.transforms.ToTensor(), download=True)
test_data = torchvision.datasets.CIFAR10(root="../data", train=False, transform=torchvision.transforms.ToTensor(), download=True)

train_data_size = len(train_data)
test_data_size = len(test_data)
print("Length of the training set: {}".format(train_data_size))
print("Length of the test set: {}".format(test_data_size))

train_dataloader = DataLoader(train_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x

# create the network model
mymo = mymodule()
"""
⭐
mymo = mymo.to(device)
"""

# loss function
loss_fn = nn.CrossEntropyLoss()
"""
⭐
loss_fn = loss_fn.to(device)
"""

# optimizer
learning_rate = 0.01  # 1e-2
optimizer = torch.optim.SGD(mymo.parameters(), lr=learning_rate)

# training parameters
total_train_step = 0
total_test_step = 0
epoch = 10

# add tensorboard
writer = SummaryWriter("../logs_train")

start_time = time.time()
for i in range(epoch):
    print("----------- Epoch {} begins -----------".format(i + 1))
    mymo.train()  # like eval(), only has an effect when the model contains dropout or batchnorm layers
    for imgs, targets in train_dataloader:
        """
        ⭐
        imgs = imgs.to(device)
        targets = targets.to(device)
        """
        outputs = mymo(imgs)
        loss = loss_fn(outputs, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_train_step += 1
        if total_train_step % 100 == 0:
            end_time = time.time()
            print(end_time - start_time)
            print("Training step: {}, Loss: {}".format(total_train_step, loss.item()))  # item() converts a one-element tensor to a plain Python number
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # evaluation
    mymo.eval()
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for imgs, targets in test_dataloader:
            """
            ⭐
            imgs = imgs.to(device)
            targets = targets.to(device)
            """
            outputs = mymo(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("Loss on the whole test set: {}".format(total_test_loss))
    print("Accuracy on the whole test set: {}".format(total_accuracy / test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)
    total_test_step += 1

    torch.save(mymo, "mymo_{}.pth".format(i))
    print("Model saved")

writer.close()

A Complete Model Inference Workflow

  • feed inputs to an already-trained model

from PIL import Image
import torchvision
from torch import nn
import torch

image_path = "../imgs/dog.png"
image = Image.open(image_path)
print(image)

# PNG images have four channels: RGB plus an alpha (transparency) channel.
# If the image is already three-channel, this operation changes nothing.
image = image.convert("RGB")

transform = torchvision.transforms.Compose([torchvision.transforms.Resize((32, 32)),
                                            torchvision.transforms.ToTensor()])
image = transform(image)
print(image.shape)  # 3*32*32

"""
copy the mymodule class here
"""
class mymodule(nn.Module):
    def __init__(self):
        super(mymodule, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x

model = torch.load("mymo_0.pth")
# If the model was trained on a GPU and is now being used on a CPU, pass map_location:
# model = torch.load("mymo_29_gpu.pth", map_location=torch.device("cpu"))
print(model)

image = torch.reshape(image, (1, 3, 32, 32))
model.eval()
with torch.no_grad():
    output = model(image)
print(output.argmax(1))
