Related posts:

MobileNetV1: 《MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications》

《MobileNetV2: Inverted Residuals and Linear Bottlenecks》

《ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices》

Annotated source code of the MobileNetV2 classification model in PyTorch

Explanation of the groups parameter of nn.Conv in PyTorch

Open-source code: https://github.com/xxcheng0708/Pytorch_Image_Classifier_Template


Note: this post is mainly for my own reference; if anything here is misunderstood, I'll fix it when I have time!

Standard residual block (bottleneck):

1. A 1x1 convolution reduces the number of channels.

2. An NxN convolution extracts features.

3. A 1x1 convolution expands the channels back up, so the output channel count equals the input's.

4. The block's input is added to its output, forming a shortcut connection.

Note: the standard bottleneck is wide at both ends and narrow in the middle (a minimal sketch follows).
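To make the four steps concrete, here is a minimal, self-contained PyTorch sketch of such a block. The class name, channel sizes, and the plain post-add ReLU are illustrative choices, not taken from any particular library:

import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Standard bottleneck residual block: wide -> narrow -> wide."""
    def __init__(self, channels, mid_channels):
        super().__init__()
        self.conv = nn.Sequential(
            # 1. 1x1 conv reduces the channel count
            nn.Conv2d(channels, mid_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            # 2. NxN (here 3x3) conv extracts features
            nn.Conv2d(mid_channels, mid_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            # 3. 1x1 conv expands the channels back to the input count
            nn.Conv2d(mid_channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # 4. shortcut connection: add the input to the output
        return torch.relu(x + self.conv(x))

# Quick shape check with illustrative sizes:
block = Bottleneck(channels=256, mid_channels=64)
print(block(torch.randn(1, 256, 32, 32)).shape)  # torch.Size([1, 256, 32, 32])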

Inverted residual block:

1. A 1x1 convolution expands the number of channels.

2. A depthwise separable convolution extracts features.

3. A 1x1 convolution projects the channels back down; when the shortcut is used, the output channel count equals the input's.

4. When stride == 1 and the input and output channel counts match, the block's input is added to its output, forming a shortcut connection.

Note: the inverted residual block is narrow at both ends and wide in the middle; the wide middle uses a depthwise separable convolution to cut the parameter count and computation (see the comparison below).
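The savings are easy to verify: compare the parameter count of a standard 3x3 convolution against a depthwise 3x3 convolution at the same width. This is a toy check; the width 192 is just an illustrative value matching one of the expanded widths in the network below:

import torch.nn as nn

hidden_dim = 192  # illustrative expanded width
standard  = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3, padding=1, bias=False)
depthwise = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3, padding=1, groups=hidden_dim, bias=False)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard))   # 331776 = 192 * 192 * 3 * 3
print(count(depthwise))  # 1728   = 192 * 1 * 3 * 3  (one 3x3 filter per channel)

The depthwise version uses hidden_dim times fewer weights; the 1x1 pointwise convolution that follows then mixes information across channels, completing the depthwise separable pair.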

MobileNetV2 network structure:
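For reference, the overall architecture from Table 2 of the paper, where t is the expansion factor, c the output channel count, n the number of repeats, and s the stride of the first block in each sequence:

Input          Operator      t    c     n    s
224x224x3      conv2d        -    32    1    2
112x112x32     bottleneck    1    16    1    1
112x112x16     bottleneck    6    24    2    2
56x56x24       bottleneck    6    32    3    2
28x28x32       bottleneck    6    64    4    2
14x14x64       bottleneck    6    96    3    1
14x14x96       bottleneck    6    160   3    2
7x7x160        bottleneck    6    320   1    1
7x7x320        conv2d 1x1    -    1280  1    1
7x7x1280       avgpool 7x7   -    -     1    -
1x1x1280       conv2d 1x1    -    k     -    -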

The InvertedResidual block from PyTorch's (torchvision) MobileNetV2 source:

import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, inp, oup, stride, expand_ratio, norm_layer=None):
        super(InvertedResidual, self).__init__()
        self.stride = stride
        assert stride in [1, 2]

        if norm_layer is None:
            norm_layer = nn.BatchNorm2d

        hidden_dim = int(round(inp * expand_ratio))
        self.use_res_connect = self.stride == 1 and inp == oup

        layers = []
        if expand_ratio != 1:
            # pw: 1x1 conv expands the channels from inp to hidden_dim
            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1, norm_layer=norm_layer))
        layers.extend([
            # dw: first step of the depthwise separable conv, a grouped conv with groups != 1
            ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim, norm_layer=norm_layer),
            # pw-linear: second step, a 1x1 conv fuses the channels (no activation)
            nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
            norm_layer(oup),
        ])
        self.conv = nn.Sequential(*layers)

    def forward(self, x):
        if self.use_res_connect:
            # use the residual (shortcut) connection
            return x + self.conv(x)
        else:
            # no residual connection
            return self.conv(x)
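The ConvBNReLU helper used above is, in the torchvision source, essentially a Conv2d + BatchNorm + ReLU6 stack. Below is a minimal sketch of it (the signature is reconstructed from the calls above, so treat it as an approximation of the library code), followed by a quick shape check of the block:

import torch

class ConvBNReLU(nn.Sequential):
    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1, norm_layer=None):
        # "same" padding for odd kernel sizes
        padding = (kernel_size - 1) // 2
        if norm_layer is None:
            norm_layer = nn.BatchNorm2d
        super(ConvBNReLU, self).__init__(
            nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),
            norm_layer(out_planes),
            nn.ReLU6(inplace=True),
        )

# With stride=1 and inp == oup, the shortcut branch is taken:
block = InvertedResidual(inp=32, oup=32, stride=1, expand_ratio=6)
print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 32, 56, 56])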

The full MobileNetV2 structure as printed by PyTorch:
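The printout below comes from torchvision's model. Assuming a torchvision version that still accepts the pretrained flag (newer releases use weights= instead), it can be reproduced with:

import torchvision.models as models

model = models.mobilenet_v2(pretrained=False)  # set pretrained=True to also download weights
print(model)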

MobileNetV2(
  (features): Sequential(
    (0): ConvBNReLU(
      (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU6(inplace=True)
    )
    (1): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (2): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
          (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (3): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=144, bias=False)
          (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (4): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(144, 144, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=144, bias=False)
          (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (5): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (6): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (7): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=192, bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (8): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (9): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (10): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (11): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (12): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (13): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (14): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=576, bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (15): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (16): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (17): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (18): ConvBNReLU(
      (0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU6(inplace=True)
    )
  )
  (classifier): Sequential(
    (0): Dropout(p=0.2, inplace=False)
    (1): Linear(in_features=1280, out_features=1000, bias=True)
  )
)
