Contents

I. ResNet Overview

II. The BN Layer (Batch Normalization)

1. How the BN layer works

2. Caveats

III. Implementing the ResNet Network

IV. ResNeXt Overview

V. Implementing the ResNeXt Network


I. ResNet Overview

ResNet was proposed in 2015 by Kaiming He et al. in the paper "Deep Residual Learning for Image Recognition": https://arxiv.org/pdf/1512.03385.pdf

ResNet, also known as the residual network, is built from residual blocks in which an identity mapping and a residual mapping run in parallel and their outputs are summed, as shown in the figure below.
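In formula form, a residual block computes H(x) = F(x) + x, where F(x) is the residual mapping learned by the stacked conv layers and x passes through unchanged on the shortcut. A minimal sketch of this idea (the class name ToyResidualBlock is made up for illustration; the full implementation appears in section III):

import torch.nn as nn

class ToyResidualBlock(nn.Module):
    # minimal residual block: output = ReLU(F(x) + x)
    def __init__(self, channels):
        super().__init__()
        self.residual = nn.Sequential(   # residual mapping F(x)
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.residual(x) + x)  # identity mapping is x itself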

There are two residual-block designs: the left one in the figure above is used for ResNet-18 and ResNet-34, while the right (bottleneck) one is used for ResNet-50, ResNet-101 and ResNet-152. The purpose of the bottleneck design is to reduce the number of parameters, as the rough count below shows.
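A back-of-the-envelope weight count for one 256-channel block (my own figures, biases omitted, not numbers quoted from the paper):

# two stacked 3x3 convs (BasicBlock style) vs. a 1x1-3x3-1x1 bottleneck (256-64-64-256)
basic = 2 * (3 * 3 * 256 * 256)                      # -> 1,179,648 weights
bottleneck = 256 * 64 + 3 * 3 * 64 * 64 + 64 * 256   # -> 69,632 weights
print(basic, bottleneck)                              # the bottleneck uses roughly 1/17 of the weights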

The overall ResNet architecture is shown in the figure below:

In each of conv2_x, conv3_x, conv4_x and conv5_x, the first residual block of the stage produces a residual-mapping output whose shape does not match the identity-mapping output, so a conv layer is added on the shortcut branch to transform the output, as shown in the figure below.
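As a concrete sketch of that shortcut conv (my own example values: the first block of conv3_x in resnet50, where the shortcut must go from 256 channels to 512 channels with stride 2):

import torch.nn as nn

# 1x1 conv + BN on the shortcut branch, matching the residual branch's output shape
downsample = nn.Sequential(
    nn.Conv2d(256, 512, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(512),
)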

 

Note: the repetition in conv2_x through conv5_x stacks the whole bracketed block several times in sequence; it is not each individual layer inside the block that is repeated.

II. The BN Layer (Batch Normalization)

The BN (Batch Normalization) layer was proposed by a Google team in 2015; it speeds up network convergence and improves accuracy.

1. How the BN layer works

Image preprocessing usually standardizes the input to speed up convergence, so the feature map entering conv1 follows a known distribution; but the feature maps entering later layers such as conv2 no longer follow that distribution. The BN layer re-standardizes each channel of the feature map to mean 0 and variance 1, using statistics computed over the current batch, and then applies a learnable per-channel scale (gamma) and shift (beta).
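A quick way to see this behaviour (a minimal sketch with made-up shapes; gamma and beta are initialized to 1 and 0, so the output is simply standardized):

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=16)     # one (gamma, beta) pair per channel
x = torch.randn(8, 16, 32, 32) * 5 + 3   # a batch far from mean 0 / variance 1
y = bn(x)                                # training mode: uses batch statistics
print(y.mean().item(), y.var().item())   # both now close to 0 and 1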

2. Caveats

(1) When a BN layer follows a conv layer, set the conv layer's bias to False: mathematically the output is identical with bias=True or bias=False (BN subtracts the batch mean right away), so the bias only adds parameters and computation.

(2) The BN layer must sit between the conv layer and the ReLU layer, as in the sketch below.
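Both notes together in code (a minimal sketch with arbitrary channel counts):

import torch.nn as nn

# bias=False on the conv because BN's per-channel shift (beta) makes a conv bias
# redundant; the BN layer sits between the conv and the ReLU
conv_bn_relu = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)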

III. Implementing the ResNet Network

The BasicBlock class serves resnet18 and resnet34; the Bottleneck class serves the deeper networks (resnet50, resnet101, resnet152).

import torch.nn as nn
import torch


class BasicBlock(nn.Module):  # only for resnet-18 and resnet-34
    expansion = 1

    def __init__(self, in_channel, out_channel, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, padding=1,
                               bias=False)  # why bias=False? because a BN layer follows
        self.bn1 = nn.BatchNorm2d(out_channel)  # BN layer
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)  # shortcut of the dashed-line block

        out = self.conv1(x)  # solid-line (main) branch
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        out += identity
        out = self.relu(out)  # activation comes after the two branches are summed
        return out


class Bottleneck(nn.Module):  # only for resnet-50, resnet-101 and resnet-152
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.conv3 = nn.Conv2d(in_channels=out_channel,
                               out_channels=out_channel * self.expansion,  # note the 4x channel expansion here
                               kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += identity
        out = self.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self,
                 block,              # BasicBlock (resnet18/34) or Bottleneck (resnet50/101/152)
                 blocks_num,         # number of residual blocks in each of conv2_x..conv5_x
                 num_classes=1000,
                 include_top=True):  # groundwork for building further on the ResNet structure
                 # groups=1,
                 # width_per_group=64):  # group-convolution settings, used by ResNeXt
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64

        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,  # conv1
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)      # pooling that begins conv2_x
        self.layer1 = self._make_layer(block, 64, blocks_num[0])             # conv2_x
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)  # conv3_x
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)  # conv4_x
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)  # conv5_x
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)  # 2048 = 512 * 4 for Bottleneck

        for m in self.modules():
            if isinstance(m, nn.Conv2d):  # initialization of the conv layers
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    def _make_layer(self, block, channel, block_num, stride=1):  # builds one stage by stacking blocks in a loop
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            # the shortcut needs a conv when the stride is not 1, or when the first
            # bottleneck of a stage must expand channels (e.g. 64 -> 256 in the
            # 64-64-256 blocks of resnet50/101/152)
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))

        layers = []
        layers.append(block(self.in_channel, channel, downsample=downsample, stride=stride))
        self.in_channel = channel * block.expansion

        for _ in range(1, block_num):  # remaining repeats of the stage, e.g. conv2_x of resnet50 repeats 3 times
            layers.append(block(self.in_channel, channel))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)

        return x


def resnet34(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet34-333f7ec4.pth
    return ResNet(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet50(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet50-19c8e357.pth
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet101(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet101-5d3b4d8f.pth
    return ResNet(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)
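A quick shape check of the code above (a usage sketch, assuming it is run at the bottom of the same file):

if __name__ == '__main__':
    net = resnet34(num_classes=5)
    x = torch.randn(1, 3, 224, 224)  # dummy ImageNet-sized input
    print(net(x).shape)              # torch.Size([1, 5])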

IV. ResNeXt Overview

The ResNeXt network can be seen as a combination of VGG, Inception and ResNet: it inherits VGG's pattern of stacking identical blocks, Inception's split-transform-merge strategy, and ResNet's residual structure. This sidesteps Inception's weakness that its hyperparameters are tailored to a specific setting, so applying it to another dataset means retuning many parameters and limits its extensibility.

The ResNeXt architecture was presented at CVPR 2017. Its goal is to raise accuracy without increasing parameter complexity; thanks to the grouped-convolution topology it also reduces the number of hyperparameters. The paper's side-by-side comparison with the ResNet architecture is shown below:
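The saving from grouped convolution is easy to verify (a minimal sketch with made-up channel counts): a grouped conv splits the channels into `groups` independent paths, dividing the 3x3 weight count by the number of groups.

import torch.nn as nn

dense = nn.Conv2d(128, 128, kernel_size=3, padding=1, bias=False)
grouped = nn.Conv2d(128, 128, kernel_size=3, padding=1, bias=False, groups=32)
print(sum(p.numel() for p in dense.parameters()))    # 147456
print(sum(p.numel() for p in grouped.parameters()))  # 4608, i.e. 1/32 of the weights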

V. Implementing the ResNeXt Network

import torch.nn as nn
import torch


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        out += identity
        out = self.relu(out)
        return out


class Bottleneck(nn.Module):
    """
    Note: in the original paper, on the main branch of the dashed-line residual block,
    the first 1x1 conv layer has stride 2 and the second (3x3) conv layer has stride 1.
    The official pytorch implementation instead gives the first 1x1 conv stride 1 and
    the 3x3 conv stride 2, which improves top-1 accuracy by roughly 0.5%.
    See ResNet v1.5: https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch
    """
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()

        width = int(out_channel * (width_per_group / 64.)) * groups

        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)  # squeeze channels
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel * self.expansion,
                               kernel_size=1, stride=1, bias=False)  # unsqueeze channels
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += identity
        out = self.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self,
                 block,
                 blocks_num,
                 num_classes=1000,
                 include_top=True,
                 groups=1,             # groups=1 gives the plain ResNet structure
                 width_per_group=64):
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64

        self.groups = groups
        self.width_per_group = width_per_group

        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))

        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion

        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)

        return x


def resnet34(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet34-333f7ec4.pth
    return ResNet(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet50(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet50-19c8e357.pth
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet101(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet101-5d3b4d8f.pth
    return ResNet(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)


def resnext50_32x4d(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth
    groups = 32
    width_per_group = 4
    return ResNet(Bottleneck, [3, 4, 6, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)


def resnext101_32x8d(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth
    groups = 32
    width_per_group = 8
    return ResNet(Bottleneck, [3, 4, 23, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)
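As before, a quick shape check (a usage sketch, assuming the code above is run in the same file):

if __name__ == '__main__':
    net = resnext50_32x4d(num_classes=5)
    x = torch.randn(1, 3, 224, 224)
    print(net(x).shape)  # torch.Size([1, 5])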

Reference video: 6.1 ResNet网络结构,BN以及迁移学习详解_哔哩哔哩_bilibili
