Image Classification with ResNet in PyTorch

  • 100th Anniversary of the Party's Founding
  • ResNet
  • Degradation in Deep Networks
  • Implementation
    • Residual Block
    • Hyperparameters
    • The ResNet-18 Network
    • Loading the Data
    • Training
    • Testing
  • Complete Code

100th Anniversary of the Party's Founding

A hundred years of wind and rain, of golden spears and armored horses.
Remembering the past: years of extraordinary struggle;
Beholding today: splendor and glory.

ResNet

The deep residual network ResNet, like AlexNet, is a milestone in deep learning.

A TensorFlow implementation of ResNet:

TensorFlow 2: Thousand-Layer Neural Networks, Starting Here

Degradation in Deep Networks

As network depth grows from 0 to around 20 layers, results improve with depth. Beyond about 20 layers, however, results get worse as more layers are added: the deeper the network, the less correlated the gradients become, and the harder the model is to optimize.

Residual networks (ResNet) counter this degradation by adding an identity mapping. Writing each block as H(x) = F(x) + x, the layers learn the residual F(x) instead of the full mapping, so stacking more blocks cannot make the network worse than the identity.
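The benefit of the shortcut can be sketched numerically (a minimal illustration, not from the original article): since H(x) = F(x) + x has derivative F'(x) + 1, even when the residual branch's gradient nearly vanishes, the block as a whole still passes a gradient close to 1.

```python
# Finite-difference check that the identity shortcut preserves gradients.
def branch(x):
    return 0.001 * x * x      # a residual branch F with a near-zero gradient

def block(x):
    return branch(x) + x      # residual block H(x) = F(x) + x

def grad(f, x, eps=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

print(round(grad(branch, 1.0), 3))  # ~0.002: the branch alone barely propagates gradient
print(round(grad(block, 1.0), 3))   # ~1.002: the shortcut keeps it near 1
```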

Implementation

Residual Block

class BasicBlock(torch.nn.Module):
    """Residual block."""

    def __init__(self, inplanes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=inplanes, out_channels=planes,
                                     kernel_size=(3, 3), stride=(stride, stride),
                                     padding=1)  # convolution 1
        self.bn1 = torch.nn.BatchNorm2d(planes)  # batch norm 1
        self.conv2 = torch.nn.Conv2d(in_channels=planes, out_channels=planes,
                                     kernel_size=(3, 3), padding=1)  # convolution 2
        self.bn2 = torch.nn.BatchNorm2d(planes)  # batch norm 2

        # If the stride is not 1, downsample the shortcut with a 1x1 convolution
        if stride != 1:
            self.downsample = torch.nn.Sequential(
                torch.nn.Conv2d(in_channels=inplanes, out_channels=planes,
                                kernel_size=(1, 1), stride=(stride, stride))
            )
        else:
            self.downsample = lambda x: x  # identity shortcut

    def forward(self, input):
        """Forward pass."""
        out = self.conv1(input)
        out = self.bn1(out)
        out = F.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        identity = self.downsample(input)
        output = torch.add(out, identity)
        output = F.relu(output)
        return output
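The shortcut branch only needs a projection when the block changes resolution or width. A standalone sketch of the 1×1 downsampling convolution used above (the tensor sizes here are assumptions for illustration):

```python
import torch

# A 1x1 convolution with stride 2, as used for the BasicBlock shortcut:
# it halves the spatial size and widens the channels so the shortcut
# branch can be added to the main branch.
down = torch.nn.Conv2d(in_channels=64, out_channels=128,
                       kernel_size=(1, 1), stride=(2, 2))
x = torch.randn(1, 64, 16, 16)   # (batch, channels, height, width)
print(down(x).shape)             # torch.Size([1, 128, 8, 8])
```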

Hyperparameters

# Hyperparameters
batch_size = 1024        # samples per training batch
learning_rate = 0.0001   # learning rate
iteration_num = 20       # number of epochs
network = ResNet_18
optimizer = torch.optim.Adam(network.parameters(), lr=learning_rate)  # optimizer

# GPU acceleration
use_cuda = torch.cuda.is_available()
if use_cuda:
    network.cuda()
print("Using GPU acceleration:", use_cuda)
print(summary(network, (3, 32, 32)))
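The `network.parameters()` iterator handed to Adam is the same view of the model that torchsummary counts. A small sketch of how the "Total params" figure arises, using a single toy layer rather than the full network (the layer chosen here happens to match the final Linear of the model above):

```python
import torch

# Count trainable parameters the way torchsummary's "Total params" does.
toy = torch.nn.Linear(512, 100)
n_params = sum(p.numel() for p in toy.parameters() if p.requires_grad)
print(n_params)  # 51300 = 512*100 weights + 100 biases
```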

The ResNet-18 Network

ResNet_18 = torch.nn.Sequential(
    # Stem
    torch.nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1)),  # convolution
    torch.nn.BatchNorm2d(64),
    torch.nn.ReLU(),
    torch.nn.MaxPool2d((2, 2)),  # pooling
    # 8 blocks (two conv layers each)
    BasicBlock(64, 64, stride=1),
    BasicBlock(64, 64, stride=1),
    BasicBlock(64, 128, stride=2),
    BasicBlock(128, 128, stride=1),
    BasicBlock(128, 256, stride=2),
    BasicBlock(256, 256, stride=1),
    BasicBlock(256, 512, stride=2),
    BasicBlock(512, 512, stride=1),
    torch.nn.AvgPool2d(2),     # pooling
    torch.nn.Flatten(),        # flatten
    # Fully connected layer
    torch.nn.Linear(512, 100)  # 100 classes
)
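To see why the final Linear layer expects exactly 512 features for a 32×32 CIFAR image, the spatial sizes can be traced by hand with the standard convolution/pooling size formula (a sketch; the layer arguments mirror the Sequential above):

```python
# out = floor((n + 2*pad - kernel) / stride) + 1
def conv_out(n, kernel, stride=1, pad=0):
    return (n + 2 * pad - kernel) // stride + 1

n = 32                               # CIFAR-100 images are 32x32
n = conv_out(n, 3)                   # initial 3x3 conv, no padding -> 30
n = conv_out(n, 2, stride=2)         # 2x2 max pool -> 15
for s in (1, 1, 2, 1, 2, 1, 2, 1):   # strides of the 8 BasicBlocks
    n = conv_out(n, 3, stride=s, pad=1)  # 15 -> 15 -> 8 -> 8 -> 4 -> 4 -> 2 -> 2
n = conv_out(n, 2, stride=2)         # 2x2 average pool -> 1
print(n)  # 1, so Flatten yields 512 * 1 * 1 = 512 features
```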

Loading the Data

def get_data():
    """Load the data."""
    # Training set
    train = torchvision.datasets.CIFAR100(
        root="./data", train=True, download=True,
        transform=torchvision.transforms.Compose([
            torchvision.transforms.ToTensor(),                      # convert to tensor
            torchvision.transforms.Normalize((0.1307,), (0.3081,))  # normalize
        ]))
    train_loader = DataLoader(train, batch_size=batch_size)  # batch the training set

    # Test set
    test = torchvision.datasets.CIFAR100(
        root="./data", train=False, download=True,
        transform=torchvision.transforms.Compose([
            torchvision.transforms.ToTensor(),                      # convert to tensor
            torchvision.transforms.Normalize((0.1307,), (0.3081,))  # normalize
        ]))
    test_loader = DataLoader(test, batch_size=batch_size)    # batch the test set

    # Return the batched training and test sets
    return train_loader, test_loader
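With batch_size = 1024, each epoch over CIFAR-100's 50,000 training images yields 49 batches, which is why the training log only reaches "Step 40" (the last multiple of 10 before step 48) per epoch:

```python
import math

# Number of batches per epoch for CIFAR-100's 50,000 training images
# at batch_size = 1024 (the final batch is partial).
train_size, batch_size = 50_000, 1024
n_batches = math.ceil(train_size / batch_size)
print(n_batches)  # 49, i.e. steps 0 through 48
```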

Training

def train(model, epoch, train_loader):
    """Train for one epoch."""
    model.train()  # training mode
    for step, (x, y) in enumerate(train_loader):
        # GPU acceleration
        if use_cuda:
            model = model.cuda()
            x, y = x.cuda(), y.cuda()
        optimizer.zero_grad()              # clear gradients
        output = model(x)
        loss = F.cross_entropy(output, y)  # compute the loss
        loss.backward()                    # backpropagate
        optimizer.step()                   # update the weights
        if step % 10 == 0:                 # print the loss
            print('Epoch: {}, Step {}, Loss: {}'.format(epoch, step, loss))
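The loop body is the standard zero_grad → forward → loss → backward → step cycle. The same cycle on a one-parameter toy problem (an illustration, not the article's model) makes each stage visible:

```python
import torch

# One optimizer step on a single trainable scalar.
w = torch.nn.Parameter(torch.tensor(2.0))
opt = torch.optim.Adam([w], lr=0.1)

opt.zero_grad()           # clear any previous gradients
loss = (w - 1.0) ** 2     # forward pass: squared distance from the target 1.0
loss.backward()           # backward pass fills w.grad with d(loss)/dw = 2.0
opt.step()                # Adam nudges w toward 1.0 (roughly by lr)
print(w.grad.item(), w.item())
```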

Testing

def test(model, test_loader):
    """Evaluate on the test set."""
    model.eval()   # evaluation mode
    correct = 0    # number of correct predictions
    with torch.no_grad():
        for x, y in test_loader:
            # GPU acceleration
            if use_cuda:
                model = model.cuda()
                x, y = x.cuda(), y.cuda()
            output = model(x)                                 # forward pass
            pred = output.argmax(dim=1, keepdim=True)         # predicted classes
            correct += pred.eq(y.view_as(pred)).sum().item()  # count matches
    accuracy = correct / len(test_loader.dataset) * 100
    print("Test Accuracy: {}%".format(accuracy))
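Stripped of tensors, the accuracy bookkeeping in test() is just a row-wise argmax compared against labels. A plain-Python sketch with made-up scores:

```python
# Row-wise argmax over class scores, then the fraction that match labels.
scores = [[0.1, 0.7, 0.2],    # argmax -> class 1
          [0.9, 0.05, 0.05],  # argmax -> class 0
          [0.2, 0.3, 0.5]]    # argmax -> class 2
labels = [1, 0, 1]

preds = [row.index(max(row)) for row in scores]       # like output.argmax(dim=1)
correct = sum(p == y for p, y in zip(preds, labels))  # like pred.eq(...).sum()
accuracy = correct / len(labels) * 100
print(accuracy)  # 2 of 3 correct -> 66.66...%
```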

Complete Code

The complete code:

import torch
import torchvision
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchsummary import summary


class BasicBlock(torch.nn.Module):
    """Residual block."""

    def __init__(self, inplanes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=inplanes, out_channels=planes,
                                     kernel_size=(3, 3), stride=(stride, stride),
                                     padding=1)  # convolution 1
        self.bn1 = torch.nn.BatchNorm2d(planes)  # batch norm 1
        self.conv2 = torch.nn.Conv2d(in_channels=planes, out_channels=planes,
                                     kernel_size=(3, 3), padding=1)  # convolution 2
        self.bn2 = torch.nn.BatchNorm2d(planes)  # batch norm 2

        # If the stride is not 1, downsample the shortcut with a 1x1 convolution
        if stride != 1:
            self.downsample = torch.nn.Sequential(
                torch.nn.Conv2d(in_channels=inplanes, out_channels=planes,
                                kernel_size=(1, 1), stride=(stride, stride))
            )
        else:
            self.downsample = lambda x: x  # identity shortcut

    def forward(self, input):
        """Forward pass."""
        out = self.conv1(input)
        out = self.bn1(out)
        out = F.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        identity = self.downsample(input)
        output = torch.add(out, identity)
        output = F.relu(output)
        return output


ResNet_18 = torch.nn.Sequential(
    # Stem
    torch.nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1)),  # convolution
    torch.nn.BatchNorm2d(64),
    torch.nn.ReLU(),
    torch.nn.MaxPool2d((2, 2)),  # pooling
    # 8 blocks (two conv layers each)
    BasicBlock(64, 64, stride=1),
    BasicBlock(64, 64, stride=1),
    BasicBlock(64, 128, stride=2),
    BasicBlock(128, 128, stride=1),
    BasicBlock(128, 256, stride=2),
    BasicBlock(256, 256, stride=1),
    BasicBlock(256, 512, stride=2),
    BasicBlock(512, 512, stride=1),
    torch.nn.AvgPool2d(2),     # pooling
    torch.nn.Flatten(),        # flatten
    # Fully connected layer
    torch.nn.Linear(512, 100)  # 100 classes
)

# Hyperparameters
batch_size = 1024        # samples per training batch
learning_rate = 0.0001   # learning rate
iteration_num = 20       # number of epochs
network = ResNet_18
optimizer = torch.optim.Adam(network.parameters(), lr=learning_rate)  # optimizer

# GPU acceleration
use_cuda = torch.cuda.is_available()
if use_cuda:
    network.cuda()
print("Using GPU acceleration:", use_cuda)
print(summary(network, (3, 32, 32)))


def get_data():
    """Load the data."""
    # Training set
    train = torchvision.datasets.CIFAR100(
        root="./data", train=True, download=True,
        transform=torchvision.transforms.Compose([
            torchvision.transforms.ToTensor(),                      # convert to tensor
            torchvision.transforms.Normalize((0.1307,), (0.3081,))  # normalize
        ]))
    train_loader = DataLoader(train, batch_size=batch_size)  # batch the training set

    # Test set
    test = torchvision.datasets.CIFAR100(
        root="./data", train=False, download=True,
        transform=torchvision.transforms.Compose([
            torchvision.transforms.ToTensor(),                      # convert to tensor
            torchvision.transforms.Normalize((0.1307,), (0.3081,))  # normalize
        ]))
    test_loader = DataLoader(test, batch_size=batch_size)    # batch the test set

    # Return the batched training and test sets
    return train_loader, test_loader


def train(model, epoch, train_loader):
    """Train for one epoch."""
    model.train()  # training mode
    for step, (x, y) in enumerate(train_loader):
        # GPU acceleration
        if use_cuda:
            model = model.cuda()
            x, y = x.cuda(), y.cuda()
        optimizer.zero_grad()              # clear gradients
        output = model(x)
        loss = F.cross_entropy(output, y)  # compute the loss
        loss.backward()                    # backpropagate
        optimizer.step()                   # update the weights
        if step % 10 == 0:                 # print the loss
            print('Epoch: {}, Step {}, Loss: {}'.format(epoch, step, loss))


def test(model, test_loader):
    """Evaluate on the test set."""
    model.eval()   # evaluation mode
    correct = 0    # number of correct predictions
    with torch.no_grad():
        for x, y in test_loader:
            # GPU acceleration
            if use_cuda:
                model = model.cuda()
                x, y = x.cuda(), y.cuda()
            output = model(x)                                 # forward pass
            pred = output.argmax(dim=1, keepdim=True)         # predicted classes
            correct += pred.eq(y.view_as(pred)).sum().item()  # count matches
    accuracy = correct / len(test_loader.dataset) * 100
    print("Test Accuracy: {}%".format(accuracy))


def main():
    # Load the data
    train_loader, test_loader = get_data()
    # Iterate over epochs
    for epoch in range(iteration_num):
        print("\n================ epoch: {} ================".format(epoch))
        train(network, epoch, train_loader)
        test(network, test_loader)


if __name__ == "__main__":
    main()

Output:

Using GPU acceleration: True
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 64, 30, 30]           1,792
       BatchNorm2d-2           [-1, 64, 30, 30]             128
              ReLU-3           [-1, 64, 30, 30]               0
         MaxPool2d-4           [-1, 64, 15, 15]               0
            Conv2d-5           [-1, 64, 15, 15]          36,928
       BatchNorm2d-6           [-1, 64, 15, 15]             128
            Conv2d-7           [-1, 64, 15, 15]          36,928
       BatchNorm2d-8           [-1, 64, 15, 15]             128
        BasicBlock-9           [-1, 64, 15, 15]               0
           Conv2d-10           [-1, 64, 15, 15]          36,928
      BatchNorm2d-11           [-1, 64, 15, 15]             128
           Conv2d-12           [-1, 64, 15, 15]          36,928
      BatchNorm2d-13           [-1, 64, 15, 15]             128
       BasicBlock-14           [-1, 64, 15, 15]               0
           Conv2d-15            [-1, 128, 8, 8]          73,856
      BatchNorm2d-16            [-1, 128, 8, 8]             256
           Conv2d-17            [-1, 128, 8, 8]         147,584
      BatchNorm2d-18            [-1, 128, 8, 8]             256
           Conv2d-19            [-1, 128, 8, 8]           8,320
       BasicBlock-20            [-1, 128, 8, 8]               0
           Conv2d-21            [-1, 128, 8, 8]         147,584
      BatchNorm2d-22            [-1, 128, 8, 8]             256
           Conv2d-23            [-1, 128, 8, 8]         147,584
      BatchNorm2d-24            [-1, 128, 8, 8]             256
       BasicBlock-25            [-1, 128, 8, 8]               0
           Conv2d-26            [-1, 256, 4, 4]         295,168
      BatchNorm2d-27            [-1, 256, 4, 4]             512
           Conv2d-28            [-1, 256, 4, 4]         590,080
      BatchNorm2d-29            [-1, 256, 4, 4]             512
           Conv2d-30            [-1, 256, 4, 4]          33,024
       BasicBlock-31            [-1, 256, 4, 4]               0
           Conv2d-32            [-1, 256, 4, 4]         590,080
      BatchNorm2d-33            [-1, 256, 4, 4]             512
           Conv2d-34            [-1, 256, 4, 4]         590,080
      BatchNorm2d-35            [-1, 256, 4, 4]             512
       BasicBlock-36            [-1, 256, 4, 4]               0
           Conv2d-37            [-1, 512, 2, 2]       1,180,160
      BatchNorm2d-38            [-1, 512, 2, 2]           1,024
           Conv2d-39            [-1, 512, 2, 2]       2,359,808
      BatchNorm2d-40            [-1, 512, 2, 2]           1,024
           Conv2d-41            [-1, 512, 2, 2]         131,584
       BasicBlock-42            [-1, 512, 2, 2]               0
           Conv2d-43            [-1, 512, 2, 2]       2,359,808
      BatchNorm2d-44            [-1, 512, 2, 2]           1,024
           Conv2d-45            [-1, 512, 2, 2]       2,359,808
      BatchNorm2d-46            [-1, 512, 2, 2]           1,024
       BasicBlock-47            [-1, 512, 2, 2]               0
        AvgPool2d-48            [-1, 512, 1, 1]               0
          Flatten-49                  [-1, 512]               0
           Linear-50                  [-1, 100]          51,300
================================================================
Total params: 11,223,140
Trainable params: 11,223,140
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 3.74
Params size (MB): 42.81
Estimated Total Size (MB): 46.56
----------------------------------------------------------------
None
Downloading https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz to ./data/cifar-100-python.tar.gz
169001984/? [00:07<00:00, 23425059.51it/s]
Extracting ./data/cifar-100-python.tar.gz to ./data
Files already downloaded and verified

================ epoch: 0 ================
Epoch: 0, Step 0, Loss: 4.73184871673584
Epoch: 0, Step 10, Loss: 4.262868881225586
Epoch: 0, Step 20, Loss: 3.946244239807129
Epoch: 0, Step 30, Loss: 3.7039854526519775
Epoch: 0, Step 40, Loss: 3.5138051509857178
Test Accuracy: 17.16%

================ epoch: 1 ================
Epoch: 1, Step 0, Loss: 3.3631155490875244
Epoch: 1, Step 10, Loss: 3.183103561401367
Epoch: 1, Step 20, Loss: 3.0515971183776855
Epoch: 1, Step 30, Loss: 2.913054943084717
Epoch: 1, Step 40, Loss: 2.8454060554504395
Test Accuracy: 26.76%

================ epoch: 2 ================
Epoch: 2, Step 0, Loss: 2.764857053756714
Epoch: 2, Step 10, Loss: 2.5304853916168213
Epoch: 2, Step 20, Loss: 2.3920257091522217
Epoch: 2, Step 30, Loss: 2.294809341430664
Epoch: 2, Step 40, Loss: 2.2125251293182373
Test Accuracy: 30.599999999999998%

================ epoch: 3 ================
Epoch: 3, Step 0, Loss: 2.15826678276062
Epoch: 3, Step 10, Loss: 1.9255717992782593
Epoch: 3, Step 20, Loss: 1.7490493059158325
Epoch: 3, Step 30, Loss: 1.6468313932418823
Epoch: 3, Step 40, Loss: 1.5404233932495117
Test Accuracy: 29.659999999999997%

================ epoch: 4 ================
Epoch: 4, Step 0, Loss: 1.4881120920181274
Epoch: 4, Step 10, Loss: 1.3130300045013428
Epoch: 4, Step 20, Loss: 1.119794249534607
Epoch: 4, Step 30, Loss: 1.07780921459198
Epoch: 4, Step 40, Loss: 0.9983140826225281
Test Accuracy: 27.04%

================ epoch: 5 ================
Epoch: 5, Step 0, Loss: 1.0429306030273438
Epoch: 5, Step 10, Loss: 0.9188315868377686
Epoch: 5, Step 20, Loss: 0.7664494514465332
Epoch: 5, Step 30, Loss: 0.8060574531555176
Epoch: 5, Step 40, Loss: 0.7700539231300354
Test Accuracy: 25.629999999999995%

================ epoch: 6 ================
Epoch: 6, Step 0, Loss: 0.8620188236236572
Epoch: 6, Step 10, Loss: 0.8017312288284302
Epoch: 6, Step 20, Loss: 0.6923062801361084
Epoch: 6, Step 30, Loss: 0.6696692109107971
Epoch: 6, Step 40, Loss: 0.6102812886238098
Test Accuracy: 25.45%

================ epoch: 7 ================
Epoch: 7, Step 0, Loss: 0.5835701823234558
Epoch: 7, Step 10, Loss: 0.5514459013938904
Epoch: 7, Step 20, Loss: 0.4809255301952362
Epoch: 7, Step 30, Loss: 0.3889707326889038
Epoch: 7, Step 40, Loss: 0.42040011286735535
Test Accuracy: 25.3%

================ epoch: 8 ================
Epoch: 8, Step 0, Loss: 0.4036518931388855
Epoch: 8, Step 10, Loss: 0.31424838304519653
Epoch: 8, Step 20, Loss: 0.2538606524467468
Epoch: 8, Step 30, Loss: 0.26636990904808044
Epoch: 8, Step 40, Loss: 0.23289920389652252
Test Accuracy: 28.22%

================ epoch: 9 ================
Epoch: 9, Step 0, Loss: 0.20370212197303772
Epoch: 9, Step 10, Loss: 0.21275906264781952
Epoch: 9, Step 20, Loss: 0.1724529266357422
Epoch: 9, Step 30, Loss: 0.16944238543510437
Epoch: 9, Step 40, Loss: 0.11199608445167542
Test Accuracy: 28.17%

================ epoch: 10 ================
Epoch: 10, Step 0, Loss: 0.14693205058574677
Epoch: 10, Step 10, Loss: 0.11063629388809204
Epoch: 10, Step 20, Loss: 0.08746964484453201
Epoch: 10, Step 30, Loss: 0.08660224825143814
Epoch: 10, Step 40, Loss: 0.09079966694116592
Test Accuracy: 29.12%

================ epoch: 11 ================
Epoch: 11, Step 0, Loss: 0.07582048326730728
Epoch: 11, Step 10, Loss: 0.07523166388273239
Epoch: 11, Step 20, Loss: 0.05015444755554199
Epoch: 11, Step 30, Loss: 0.06376209855079651
Epoch: 11, Step 40, Loss: 0.047050636261701584
Test Accuracy: 30.159999999999997%

================ epoch: 12 ================
Epoch: 12, Step 0, Loss: 0.03873936086893082
Epoch: 12, Step 10, Loss: 0.036511268466711044
Epoch: 12, Step 20, Loss: 0.03504694253206253
Epoch: 12, Step 30, Loss: 0.03236941248178482
Epoch: 12, Step 40, Loss: 0.04149263724684715
Test Accuracy: 30.69%

================ epoch: 13 ================
Epoch: 13, Step 0, Loss: 0.02524631842970848
Epoch: 13, Step 10, Loss: 0.02024298906326294
Epoch: 13, Step 20, Loss: 0.01565425843000412
Epoch: 13, Step 30, Loss: 0.03372647985816002
Epoch: 13, Step 40, Loss: 0.03173805773258209
Test Accuracy: 30.61%

================ epoch: 14 ================
Epoch: 14, Step 0, Loss: 0.013597095385193825
Epoch: 14, Step 10, Loss: 0.014107376337051392
Epoch: 14, Step 20, Loss: 0.010056688450276852
Epoch: 14, Step 30, Loss: 0.016869302839040756
Epoch: 14, Step 40, Loss: 0.016789773479104042
Test Accuracy: 30.79%

================ epoch: 15 ================
Epoch: 15, Step 0, Loss: 0.00870730821043253
Epoch: 15, Step 10, Loss: 0.0070304274559021
Epoch: 15, Step 20, Loss: 0.005506859626621008
Epoch: 15, Step 30, Loss: 0.02930188737809658
Epoch: 15, Step 40, Loss: 0.013658527284860611
Test Accuracy: 30.990000000000002%

================ epoch: 16 ================
Epoch: 16, Step 0, Loss: 0.006122640334069729
Epoch: 16, Step 10, Loss: 0.008687378838658333
Epoch: 16, Step 20, Loss: 0.008756318129599094
Epoch: 16, Step 30, Loss: 0.011087586171925068
Epoch: 16, Step 40, Loss: 0.011925156228244305
Test Accuracy: 31.25%

================ epoch: 17 ================
Epoch: 17, Step 0, Loss: 0.00833406113088131
Epoch: 17, Step 10, Loss: 0.004966908134520054
Epoch: 17, Step 20, Loss: 0.003708316246047616
Epoch: 17, Step 30, Loss: 0.020299237221479416
Epoch: 17, Step 40, Loss: 0.010047768242657185
Test Accuracy: 31.540000000000003%

================ epoch: 18 ================
Epoch: 18, Step 0, Loss: 0.0037587652914226055
Epoch: 18, Step 10, Loss: 0.0033208071254193783
Epoch: 18, Step 20, Loss: 0.004131313879042864
Epoch: 18, Step 30, Loss: 0.012251097708940506
Epoch: 18, Step 40, Loss: 0.00844736211001873
Test Accuracy: 31.8%

================ epoch: 19 ================
Epoch: 19, Step 0, Loss: 0.0030041378922760487
Epoch: 19, Step 10, Loss: 0.0028436880093067884
Epoch: 19, Step 20, Loss: 0.0026263371109962463
Epoch: 19, Step 30, Loss: 0.01706080697476864
Epoch: 19, Step 40, Loss: 0.007125745993107557
Test Accuracy: 31.72%
