Neural Network Study Notes 50: Building a YOLOv3 Object Detection Platform in PyTorch with the EfficientNet Family of Models

  • Preface
  • What Is the EfficientNet Model
  • Source Code Download
  • Implementation Ideas of the EfficientNet Model
    • 1. Characteristics of the EfficientNet Model
    • 2. Structure of the EfficientNet Network
  • Building the EfficientNet Code
    • 1. Building the Model Code
    • 2. Application to YOLOv3

Preface

Let's also take a look at the PyTorch version of EfficientNet.

What Is the EfficientNet Model

In 2019 Google released EfficientNet. As its name suggests, this network is very efficient. How should we understand "efficient"? Let's look at it from the perspective of how convolutional neural networks have developed:
From the early VGG16 to today's Xception, people gradually realized that improving a network's performance is not just a matter of stacking more layers. What matters more is:

1. The network must be trainable and able to converge.
2. The parameter count should be fairly small, which makes training easier and faster.
3. The network structure should be innovative, so that it learns the more important features.

EfficientNet does exactly this: it uses fewer parameters (which benefits training and speed) to reach the best recognition accuracy (it learns the more important features).

Source Code Download

https://github.com/bubbliiiing/efficientnet-yolo3-pytorch

Implementation Ideas of the EfficientNet Model

1. Characteristics of the EfficientNet Model

The EfficientNet model has very distinctive characteristics, designed with reference to other excellent neural networks. Classic networks typically improve their performance in the following ways:
1. Use residual connections to increase the depth of the network, so that a deeper network performs the feature extraction.
2. Change the number of feature channels extracted at each layer, so that more features are extracted and the width increases.
3. Increase the resolution of the input image, so that the network can learn and express richer information, which helps improve accuracy.

EfficientNet combines these three ideas by scaling a baseline model (MobileNet, for example, scales its model through the width multiplier α: different values of α give models with different accuracy, and α = 1 is the baseline model; ResNet likewise has a baseline model and obtains its different variants by changing the network depth), adjusting depth, width, and input resolution together to arrive at an excellent network design.

EfficientNet's results can be summarized as follows:

In the EfficientNet model, a single set of fixed scaling coefficients is used to uniformly scale the network's depth, width, and resolution.
Suppose we want to spend 2^N times the computational resources. We can simply scale the network depth by α^N, the width by β^N, and the image resolution by γ^N, where α, β, and γ are constant coefficients determined by a small grid search on the original small model.
This is EfficientNet's design idea: expanding the network's capacity along all three dimensions at the same time, as the short numerical sketch below illustrates.
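To make the scaling relation concrete, here is a small illustrative sketch. It is not taken from the repository; the values of α, β and γ are the ones reported in the EfficientNet paper for the B0 baseline, found under the constraint α·β²·γ² ≈ 2 so that the FLOPs grow roughly by 2^N:

# Illustrative sketch: compound scaling with a single exponent N.
# alpha, beta, gamma are the paper's grid-search results for the B0 baseline.
alpha, beta, gamma = 1.2, 1.1, 1.15

for n in range(4):
    depth_mult = alpha ** n      # network depth multiplier
    width_mult = beta ** n       # layer width (channel) multiplier
    res_mult   = gamma ** n      # input resolution multiplier
    flops_mult = depth_mult * width_mult ** 2 * res_mult ** 2
    print(f"N={n}: depth x{depth_mult:.2f}, width x{width_mult:.2f}, "
          f"resolution x{res_mult:.2f}, FLOPs x{flops_mult:.2f}")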

2. Structure of the EfficientNet Network

EfficientNet consists of one Stem + 16 Blocks + Conv2D + GlobalAveragePooling2D + Dense. Its core is the 16 Blocks; the rest of the structure is not very different from a regular convolutional neural network.

EfficientNet-B0, the design baseline of the EfficientNet family, is structured as follows:

The first part is the Stem, which performs the initial feature extraction; it is simply a convolution + batch normalization + activation.
The second part is the 16 Blocks, EfficientNet's characteristic feature-extraction structure; the efficient feature extraction happens as these Blocks are stacked.
The third part is Conv2D + GlobalAveragePooling2D + Dense, EfficientNet's classification head, which is not used when building efficientnet-yolov3.
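Schematically (the function and argument names here are placeholders, not the attribute names used in the real implementation further down), the forward pass simply chains these three parts:

# Schematic only: the three top-level parts described above, with placeholder names.
def efficientnet_forward(x, stem, blocks, head):
    x = stem(x)              # Stem: convolution + batch normalization + activation
    for block in blocks:     # 16 MBConv Blocks: the core feature extractor
        x = block(x)
    return head(x)           # Conv2D + GlobalAveragePooling2D + Dense classification head
                             # (dropped when EfficientNet is used as the yolov3 backbone)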

The whole EfficientNet is made up of 7 groups of Blocks, corresponding to Block1 to Block7 above. The parameters of each group are listed below:

[
    BlockArgs(kernel_size=3, num_repeat=1, input_filters=32, output_filters=16, expand_ratio=1, id_skip=True, stride=[1], se_ratio=0.25),
    BlockArgs(kernel_size=3, num_repeat=2, input_filters=16, output_filters=24, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25),
    BlockArgs(kernel_size=5, num_repeat=2, input_filters=24, output_filters=40, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25),
    BlockArgs(kernel_size=3, num_repeat=3, input_filters=40, output_filters=80, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25),
    BlockArgs(kernel_size=5, num_repeat=3, input_filters=80, output_filters=112, expand_ratio=6, id_skip=True, stride=[1], se_ratio=0.25),
    BlockArgs(kernel_size=5, num_repeat=4, input_filters=112, output_filters=192, expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25),
    BlockArgs(kernel_size=3, num_repeat=1, input_filters=192, output_filters=320, expand_ratio=6, id_skip=True, stride=[1], se_ratio=0.25)
]

GlobalParams(batch_norm_momentum=0.99, batch_norm_epsilon=0.001, dropout_rate=0.2, num_classes=1000, width_coefficient=1.0, depth_coefficient=1.0, depth_divisor=8, min_depth=None, drop_connect_rate=0.2, image_size=224)
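The stride column above already tells us where the spatial resolution changes. The short sketch below (a hypothetical helper, not part of the repository) derives the feature-map size after each Block group for a 416x416 input; the three layers marked at the end are exactly the ones the YOLOv3 head uses later in this post:

# Hypothetical helper: derive per-group output sizes from the strides listed above.
strides      = [1, 2, 2, 2, 1, 2, 1]               # first-block stride of Block1..Block7
out_channels = [16, 24, 40, 80, 112, 192, 320]     # output_filters of Block1..Block7

size = 416 // 2                                    # the Stem convolution already halves the input
for i, (s, c) in enumerate(zip(strides, out_channels), start=1):
    size //= s
    print(f"Block{i}: {size}x{size}x{c}")

# Block3 -> 52x52x40, Block5 -> 26x26x112 and Block7 -> 13x13x320 are the three
# effective feature layers fed to the YOLOv3 neck and head.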

The general structure of a Block is as follows. Its overall design is an inverted residual structure that combines depthwise separable convolution with a channel attention mechanism. Each Block can be split into two parts:

  • The left side is the main branch: a 1x1 convolution first expands the channels, then a 3x3 or 5x5 depthwise convolution extracts features across spatial positions. After the feature extraction, a channel attention module is applied, and finally a 1x1 convolution reduces the channels again.
  • The right side is the residual branch, which is left untouched.


The implementation of the Block is as follows:

# Note: this block relies on helpers defined in the full listing below
# (get_same_padding_conv2d, drop_connect, MemoryEfficientSwish, Swish).
class MBConvBlock(nn.Module):
    '''
    MBConv block used by EfficientNet. The EfficientNet-B0 configuration is the
    BlockArgs / GlobalParams listing shown above.
    '''
    def __init__(self, block_args, global_params):
        super().__init__()
        self._block_args = block_args
        # Batch-normalization parameters
        self._bn_mom = 1 - global_params.batch_norm_momentum
        self._bn_eps = global_params.batch_norm_epsilon
        # Squeeze-and-excitation (attention) ratio
        self.has_se = (self._block_args.se_ratio is not None) and (0 < self._block_args.se_ratio <= 1)
        # Whether a skip connection is used
        self.id_skip = block_args.id_skip

        Conv2d = get_same_padding_conv2d(image_size=global_params.image_size)

        # 1x1 convolution for channel expansion
        inp = self._block_args.input_filters                                   # number of input channels
        oup = self._block_args.input_filters * self._block_args.expand_ratio   # number of output channels
        if self._block_args.expand_ratio != 1:
            self._expand_conv = Conv2d(in_channels=inp, out_channels=oup, kernel_size=1, bias=False)
            self._bn0 = nn.BatchNorm2d(num_features=oup, momentum=self._bn_mom, eps=self._bn_eps)

        # Depthwise convolution
        k = self._block_args.kernel_size
        s = self._block_args.stride
        self._depthwise_conv = Conv2d(
            in_channels=oup, out_channels=oup, groups=oup,
            kernel_size=k, stride=s, bias=False)
        self._bn1 = nn.BatchNorm2d(num_features=oup, momentum=self._bn_mom, eps=self._bn_eps)

        # Attention module: first squeeze the channels, then expand them again
        if self.has_se:
            num_squeezed_channels = max(1, int(self._block_args.input_filters * self._block_args.se_ratio))
            self._se_reduce = Conv2d(in_channels=oup, out_channels=num_squeezed_channels, kernel_size=1)
            self._se_expand = Conv2d(in_channels=num_squeezed_channels, out_channels=oup, kernel_size=1)

        # Output part: 1x1 projection back down to the block's output channels
        final_oup = self._block_args.output_filters
        self._project_conv = Conv2d(in_channels=oup, out_channels=final_oup, kernel_size=1, bias=False)
        self._bn2 = nn.BatchNorm2d(num_features=final_oup, momentum=self._bn_mom, eps=self._bn_eps)
        self._swish = MemoryEfficientSwish()

    def forward(self, inputs, drop_connect_rate=None):
        x = inputs
        if self._block_args.expand_ratio != 1:
            x = self._swish(self._bn0(self._expand_conv(inputs)))
        x = self._swish(self._bn1(self._depthwise_conv(x)))

        # Apply the attention mechanism
        if self.has_se:
            x_squeezed = F.adaptive_avg_pool2d(x, 1)
            x_squeezed = self._se_expand(self._swish(self._se_reduce(x_squeezed)))
            x = torch.sigmoid(x_squeezed) * x

        x = self._bn2(self._project_conv(x))

        # The skip connection is only added when the following conditions hold
        input_filters, output_filters = self._block_args.input_filters, self._block_args.output_filters
        if self.id_skip and self._block_args.stride == 1 and input_filters == output_filters:
            if drop_connect_rate:
                x = drop_connect(x, p=drop_connect_rate, training=self.training)
            x = x + inputs  # skip connection
        return x

    def set_swish(self, memory_efficient=True):
        """Sets swish function as memory efficient (for training) or standard (for export)"""
        self._swish = MemoryEfficientSwish() if memory_efficient else Swish()
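A minimal smoke test of a single block could look like this. It assumes the full listing in the next section is saved as nets/efficientnet.py (as in the repository), so that BlockArgs, GlobalParams and MBConvBlock can be imported from there; the expected output shape follows from stride=[2] halving the spatial size:

import torch
from nets.efficientnet import BlockArgs, GlobalParams, MBConvBlock

# The second Block group of EfficientNet-B0: expand 16 -> 96, depthwise stride 2, project to 24.
block_args = BlockArgs(kernel_size=3, num_repeat=2, input_filters=16, output_filters=24,
                       expand_ratio=6, id_skip=True, stride=[2], se_ratio=0.25)
global_params = GlobalParams(batch_norm_momentum=0.99, batch_norm_epsilon=1e-3,
                             dropout_rate=0.2, num_classes=1000,
                             width_coefficient=1.0, depth_coefficient=1.0,
                             depth_divisor=8, min_depth=None,
                             drop_connect_rate=0.2, image_size=224)

block = MBConvBlock(block_args, global_params)
x = torch.randn(1, 16, 112, 112)
print(block(x).shape)   # stride=[2] halves the spatial size: torch.Size([1, 24, 56, 56])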

Building the EfficientNet Code

1. Building the Model Code

The implementation code of EfficientNet is shown below. This is the version used for EfficientNet on YOLOv3 and can serve as a reference:

import collections
import math
import re
from functools import partial

import torch
from torch import nn
from torch.nn import functional as F
from torch.utils import model_zoo

########################################################################
############### HELPERS FUNCTIONS FOR MODEL ARCHITECTURE ###############
########################################################################

# Parameters for the entire model (stem, all blocks, and head)
GlobalParams = collections.namedtuple('GlobalParams', [
    'batch_norm_momentum', 'batch_norm_epsilon', 'dropout_rate',
    'num_classes', 'width_coefficient', 'depth_coefficient',
    'depth_divisor', 'min_depth', 'drop_connect_rate', 'image_size'])

# Parameters for an individual model block
BlockArgs = collections.namedtuple('BlockArgs', [
    'kernel_size', 'num_repeat', 'input_filters', 'output_filters',
    'expand_ratio', 'id_skip', 'stride', 'se_ratio'])

# Change namedtuple defaults
GlobalParams.__new__.__defaults__ = (None,) * len(GlobalParams._fields)
BlockArgs.__new__.__defaults__ = (None,) * len(BlockArgs._fields)


class SwishImplementation(torch.autograd.Function):
    @staticmethod
    def forward(ctx, i):
        result = i * torch.sigmoid(i)
        ctx.save_for_backward(i)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        i = ctx.saved_variables[0]
        sigmoid_i = torch.sigmoid(i)
        return grad_output * (sigmoid_i * (1 + i * (1 - sigmoid_i)))


class MemoryEfficientSwish(nn.Module):
    def forward(self, x):
        return SwishImplementation.apply(x)


class Swish(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)


def round_filters(filters, global_params):
    """ Calculate and round number of filters based on depth multiplier. """
    multiplier = global_params.width_coefficient
    if not multiplier:
        return filters
    divisor = global_params.depth_divisor
    min_depth = global_params.min_depth
    filters *= multiplier
    min_depth = min_depth or divisor
    new_filters = max(min_depth, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:  # prevent rounding by more than 10%
        new_filters += divisor
    return int(new_filters)


def round_repeats(repeats, global_params):
    """ Round number of repeats based on depth multiplier. """
    multiplier = global_params.depth_coefficient
    if not multiplier:
        return repeats
    return int(math.ceil(multiplier * repeats))


def drop_connect(inputs, p, training):
    """ Drop connect. """
    if not training:
        return inputs
    batch_size = inputs.shape[0]
    keep_prob = 1 - p
    random_tensor = keep_prob
    random_tensor += torch.rand([batch_size, 1, 1, 1], dtype=inputs.dtype, device=inputs.device)
    binary_tensor = torch.floor(random_tensor)
    output = inputs / keep_prob * binary_tensor
    return output


def get_same_padding_conv2d(image_size=None):
    """ Chooses static padding if you have specified an image size, and dynamic padding otherwise.
        Static padding is necessary for ONNX exporting of models. """
    if image_size is None:
        return Conv2dDynamicSamePadding
    else:
        return partial(Conv2dStaticSamePadding, image_size=image_size)


class Conv2dDynamicSamePadding(nn.Conv2d):
    """ 2D Convolutions like TensorFlow, for a dynamic image size """
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, dilation=1, groups=1, bias=True):
        super().__init__(in_channels, out_channels, kernel_size, stride, 0, dilation, groups, bias)
        self.stride = self.stride if len(self.stride) == 2 else [self.stride[0]] * 2

    def forward(self, x):
        ih, iw = x.size()[-2:]
        kh, kw = self.weight.size()[-2:]
        sh, sw = self.stride
        oh, ow = math.ceil(ih / sh), math.ceil(iw / sw)
        pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0)
        pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0)
        if pad_h > 0 or pad_w > 0:
            x = F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2])
        return F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)


class Conv2dStaticSamePadding(nn.Conv2d):
    """ 2D Convolutions like TensorFlow, for a fixed image size """
    def __init__(self, in_channels, out_channels, kernel_size, image_size=None, **kwargs):
        super().__init__(in_channels, out_channels, kernel_size, **kwargs)
        self.stride = self.stride if len(self.stride) == 2 else [self.stride[0]] * 2

        # Calculate padding based on image size and save it
        assert image_size is not None
        ih, iw = image_size if type(image_size) == list else [image_size, image_size]
        kh, kw = self.weight.size()[-2:]
        sh, sw = self.stride
        oh, ow = math.ceil(ih / sh), math.ceil(iw / sw)
        pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0)
        pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0)
        if pad_h > 0 or pad_w > 0:
            self.static_padding = nn.ZeroPad2d((pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2))
        else:
            self.static_padding = Identity()

    def forward(self, x):
        x = self.static_padding(x)
        x = F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
        return x


class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, input):
        return input


########################################################################
############## HELPERS FUNCTIONS FOR LOADING MODEL PARAMS ##############
########################################################################

def efficientnet_params(model_name):
    """ Map EfficientNet model name to parameter coefficients. """
    params_dict = {
        # Coefficients:   width,depth,res,dropout
        'efficientnet-b0': (1.0, 1.0, 224, 0.2),
        'efficientnet-b1': (1.0, 1.1, 240, 0.2),
        'efficientnet-b2': (1.1, 1.2, 260, 0.3),
        'efficientnet-b3': (1.2, 1.4, 300, 0.3),
        'efficientnet-b4': (1.4, 1.8, 380, 0.4),
        'efficientnet-b5': (1.6, 2.2, 456, 0.4),
        'efficientnet-b6': (1.8, 2.6, 528, 0.5),
        'efficientnet-b7': (2.0, 3.1, 600, 0.5),
        'efficientnet-b8': (2.2, 3.6, 672, 0.5),
        'efficientnet-l2': (4.3, 5.3, 800, 0.5),
    }
    return params_dict[model_name]


class BlockDecoder(object):
    """ Block Decoder for readability, straight from the official TensorFlow repository """

    @staticmethod
    def _decode_block_string(block_string):
        """ Gets a block through a string notation of arguments. """
        assert isinstance(block_string, str)
        ops = block_string.split('_')
        options = {}
        for op in ops:
            splits = re.split(r'(\d.*)', op)
            if len(splits) >= 2:
                key, value = splits[:2]
                options[key] = value

        # Check stride
        assert (('s' in options and len(options['s']) == 1) or
                (len(options['s']) == 2 and options['s'][0] == options['s'][1]))

        return BlockArgs(
            kernel_size=int(options['k']),
            num_repeat=int(options['r']),
            input_filters=int(options['i']),
            output_filters=int(options['o']),
            expand_ratio=int(options['e']),
            id_skip=('noskip' not in block_string),
            se_ratio=float(options['se']) if 'se' in options else None,
            stride=[int(options['s'][0])])

    @staticmethod
    def _encode_block_string(block):
        """Encodes a block to a string."""
        args = [
            'r%d' % block.num_repeat,
            'k%d' % block.kernel_size,
            's%d%d' % (block.strides[0], block.strides[1]),
            'e%s' % block.expand_ratio,
            'i%d' % block.input_filters,
            'o%d' % block.output_filters
        ]
        if 0 < block.se_ratio <= 1:
            args.append('se%s' % block.se_ratio)
        if block.id_skip is False:
            args.append('noskip')
        return '_'.join(args)

    @staticmethod
    def decode(string_list):
        """
        Decodes a list of string notations to specify blocks inside the network.

        :param string_list: a list of strings, each string is a notation of block
        :return: a list of BlockArgs namedtuples of block args
        """
        assert isinstance(string_list, list)
        blocks_args = []
        for block_string in string_list:
            blocks_args.append(BlockDecoder._decode_block_string(block_string))
        return blocks_args

    @staticmethod
    def encode(blocks_args):
        """
        Encodes a list of BlockArgs to a list of strings.

        :param blocks_args: a list of BlockArgs namedtuples of block args
        :return: a list of strings, each string is a notation of block
        """
        block_strings = []
        for block in blocks_args:
            block_strings.append(BlockDecoder._encode_block_string(block))
        return block_strings


def efficientnet(width_coefficient=None, depth_coefficient=None, dropout_rate=0.2,
                 drop_connect_rate=0.2, image_size=None, num_classes=1000):
    """ Creates an efficientnet model. """
    blocks_args = [
        'r1_k3_s11_e1_i32_o16_se0.25', 'r2_k3_s22_e6_i16_o24_se0.25',
        'r2_k5_s22_e6_i24_o40_se0.25', 'r3_k3_s22_e6_i40_o80_se0.25',
        'r3_k5_s11_e6_i80_o112_se0.25', 'r4_k5_s22_e6_i112_o192_se0.25',
        'r1_k3_s11_e6_i192_o320_se0.25',
    ]
    blocks_args = BlockDecoder.decode(blocks_args)

    global_params = GlobalParams(
        batch_norm_momentum=0.99,
        batch_norm_epsilon=1e-3,
        dropout_rate=dropout_rate,
        drop_connect_rate=drop_connect_rate,
        # data_format='channels_last',  # removed, this is always true in PyTorch
        num_classes=num_classes,
        width_coefficient=width_coefficient,
        depth_coefficient=depth_coefficient,
        depth_divisor=8,
        min_depth=None,
        image_size=image_size,
    )
    return blocks_args, global_params


def get_model_params(model_name, override_params):
    """ Get the block args and global params for a given model """
    if model_name.startswith('efficientnet'):
        w, d, s, p = efficientnet_params(model_name)
        # note: all models have drop connect rate = 0.2
        blocks_args, global_params = efficientnet(
            width_coefficient=w, depth_coefficient=d, dropout_rate=p, image_size=s)
    else:
        raise NotImplementedError('model name is not pre-defined: %s' % model_name)
    if override_params:
        # ValueError will be raised here if override_params has fields not included in global_params.
        global_params = global_params._replace(**override_params)
    return blocks_args, global_params


url_map = {
    'efficientnet-b0': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b0-355c32eb.pth',
    'efficientnet-b1': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b1-f1951068.pth',
    'efficientnet-b2': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b2-8bb594d6.pth',
    'efficientnet-b3': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b3-5fb5a3c3.pth',
    'efficientnet-b4': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b4-6ed6700e.pth',
    'efficientnet-b5': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b5-b6417697.pth',
    'efficientnet-b6': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b6-c76e70fd.pth',
    'efficientnet-b7': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b7-dcc49843.pth',
}

url_map_advprop = {
    'efficientnet-b0': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b0-b64d5a18.pth',
    'efficientnet-b1': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b1-0f3ce85a.pth',
    'efficientnet-b2': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b2-6e9d97e5.pth',
    'efficientnet-b3': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b3-cdd7c0f4.pth',
    'efficientnet-b4': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b4-44fb3a87.pth',
    'efficientnet-b5': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b5-86493f6b.pth',
    'efficientnet-b6': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b6-ac80338e.pth',
    'efficientnet-b7': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b7-4652b6dd.pth',
    'efficientnet-b8': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b8-22a8fe65.pth',
}


def load_pretrained_weights(model, model_name, load_fc=True, advprop=False):
    """ Loads pretrained weights, and downloads if loading for the first time. """
    # AutoAugment or Advprop (different preprocessing)
    url_map_ = url_map_advprop if advprop else url_map
    state_dict = model_zoo.load_url(url_map_[model_name], model_dir="model_data")
    if load_fc:
        model.load_state_dict(state_dict, strict=True)
    else:
        state_dict.pop('_fc.weight')
        state_dict.pop('_fc.bias')
        res = model.load_state_dict(state_dict, strict=False)
        assert set(res.missing_keys) == set(['_fc.weight', '_fc.bias']), 'issue loading pretrained weights'
    print('Loaded pretrained weights for {}'.format(model_name))


class MBConvBlock(nn.Module):
    '''
    MBConv block used by EfficientNet. The EfficientNet-B0 configuration is the
    BlockArgs / GlobalParams listing shown earlier in this post.
    '''
    def __init__(self, block_args, global_params):
        super().__init__()
        self._block_args = block_args
        # Batch-normalization parameters
        self._bn_mom = 1 - global_params.batch_norm_momentum
        self._bn_eps = global_params.batch_norm_epsilon
        # Squeeze-and-excitation (attention) ratio
        self.has_se = (self._block_args.se_ratio is not None) and (0 < self._block_args.se_ratio <= 1)
        # Whether a skip connection is used
        self.id_skip = block_args.id_skip

        Conv2d = get_same_padding_conv2d(image_size=global_params.image_size)

        # 1x1 convolution for channel expansion
        inp = self._block_args.input_filters                                   # number of input channels
        oup = self._block_args.input_filters * self._block_args.expand_ratio   # number of output channels
        if self._block_args.expand_ratio != 1:
            self._expand_conv = Conv2d(in_channels=inp, out_channels=oup, kernel_size=1, bias=False)
            self._bn0 = nn.BatchNorm2d(num_features=oup, momentum=self._bn_mom, eps=self._bn_eps)

        # Depthwise convolution
        k = self._block_args.kernel_size
        s = self._block_args.stride
        self._depthwise_conv = Conv2d(
            in_channels=oup, out_channels=oup, groups=oup,
            kernel_size=k, stride=s, bias=False)
        self._bn1 = nn.BatchNorm2d(num_features=oup, momentum=self._bn_mom, eps=self._bn_eps)

        # Attention module: first squeeze the channels, then expand them again
        if self.has_se:
            num_squeezed_channels = max(1, int(self._block_args.input_filters * self._block_args.se_ratio))
            self._se_reduce = Conv2d(in_channels=oup, out_channels=num_squeezed_channels, kernel_size=1)
            self._se_expand = Conv2d(in_channels=num_squeezed_channels, out_channels=oup, kernel_size=1)

        # Output part: 1x1 projection back down to the block's output channels
        final_oup = self._block_args.output_filters
        self._project_conv = Conv2d(in_channels=oup, out_channels=final_oup, kernel_size=1, bias=False)
        self._bn2 = nn.BatchNorm2d(num_features=final_oup, momentum=self._bn_mom, eps=self._bn_eps)
        self._swish = MemoryEfficientSwish()

    def forward(self, inputs, drop_connect_rate=None):
        x = inputs
        if self._block_args.expand_ratio != 1:
            x = self._swish(self._bn0(self._expand_conv(inputs)))
        x = self._swish(self._bn1(self._depthwise_conv(x)))

        # Apply the attention mechanism
        if self.has_se:
            x_squeezed = F.adaptive_avg_pool2d(x, 1)
            x_squeezed = self._se_expand(self._swish(self._se_reduce(x_squeezed)))
            x = torch.sigmoid(x_squeezed) * x

        x = self._bn2(self._project_conv(x))

        # The skip connection is only added when the following conditions hold
        input_filters, output_filters = self._block_args.input_filters, self._block_args.output_filters
        if self.id_skip and self._block_args.stride == 1 and input_filters == output_filters:
            if drop_connect_rate:
                x = drop_connect(x, p=drop_connect_rate, training=self.training)
            x = x + inputs  # skip connection
        return x

    def set_swish(self, memory_efficient=True):
        """Sets swish function as memory efficient (for training) or standard (for export)"""
        self._swish = MemoryEfficientSwish() if memory_efficient else Swish()


class EfficientNet(nn.Module):
    '''
    EfficientNet. See the BlockArgs / GlobalParams listing earlier in this post
    for the EfficientNet-B0 configuration.
    '''
    def __init__(self, blocks_args=None, global_params=None):
        super().__init__()
        assert isinstance(blocks_args, list), 'blocks_args should be a list'
        assert len(blocks_args) > 0, 'block args must be greater than 0'
        self._global_params = global_params
        self._blocks_args = blocks_args

        # Choose a same-padding convolution implementation
        Conv2d = get_same_padding_conv2d(image_size=global_params.image_size)

        # Batch-normalization parameters
        bn_mom = 1 - self._global_params.batch_norm_momentum
        bn_eps = self._global_params.batch_norm_epsilon

        # Start of the network trunk
        # The input is an RGB image with three channels
        in_channels = 3
        # round_filters keeps the channel count divisible by 8 when it is scaled up
        out_channels = round_filters(32, self._global_params)
        # Convolution + batch normalization
        self._conv_stem = Conv2d(in_channels, out_channels, kernel_size=3, stride=2, bias=False)
        self._bn0 = nn.BatchNorm2d(num_features=out_channels, momentum=bn_mom, eps=bn_eps)

        # Adjust the parameters of every block
        self._blocks = nn.ModuleList([])
        for i in range(len(self._blocks_args)):
            # Rescale this block's parameters according to the chosen EfficientNet version
            self._blocks_args[i] = self._blocks_args[i]._replace(
                input_filters=round_filters(self._blocks_args[i].input_filters, self._global_params),
                output_filters=round_filters(self._blocks_args[i].output_filters, self._global_params),
                num_repeat=round_repeats(self._blocks_args[i].num_repeat, self._global_params)
            )

            # Only the first block of each group has to handle the stride and the incoming channel count
            self._blocks.append(MBConvBlock(self._blocks_args[i], self._global_params))
            if self._blocks_args[i].num_repeat > 1:
                self._blocks_args[i] = self._blocks_args[i]._replace(
                    input_filters=self._blocks_args[i].output_filters, stride=1)
            for _ in range(self._blocks_args[i].num_repeat - 1):
                self._blocks.append(MBConvBlock(self._blocks_args[i], self._global_params))

        # Head part
        in_channels = self._blocks_args[len(self._blocks_args) - 1].output_filters
        out_channels = round_filters(1280, self._global_params)
        # Convolution + batch normalization
        self._conv_head = Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self._bn1 = nn.BatchNorm2d(num_features=out_channels, momentum=bn_mom, eps=bn_eps)

        # Final pooling, dropout and fully connected layer
        self._avg_pooling = nn.AdaptiveAvgPool2d(1)
        self._dropout = nn.Dropout(self._global_params.dropout_rate)
        self._fc = nn.Linear(out_channels, self._global_params.num_classes)
        # Swish activation
        self._swish = MemoryEfficientSwish()

    def set_swish(self, memory_efficient=True):
        """Sets swish function as memory efficient (for training) or standard (for export)"""
        self._swish = MemoryEfficientSwish() if memory_efficient else Swish()
        for block in self._blocks:
            block.set_swish(memory_efficient)

    def extract_features(self, inputs):
        """ Returns output of the final convolution layer """
        # Stem
        x = self._swish(self._bn0(self._conv_stem(inputs)))

        # Blocks
        for idx, block in enumerate(self._blocks):
            drop_connect_rate = self._global_params.drop_connect_rate
            if drop_connect_rate:
                drop_connect_rate *= float(idx) / len(self._blocks)
            x = block(x, drop_connect_rate=drop_connect_rate)

        # Head
        x = self._swish(self._bn1(self._conv_head(x)))
        return x

    def forward(self, inputs):
        """ Calls extract_features to extract features, applies final linear layer, and returns logits. """
        bs = inputs.size(0)
        # Convolution layers
        x = self.extract_features(inputs)
        # Pooling and final linear layer
        x = self._avg_pooling(x)
        x = x.view(bs, -1)
        x = self._dropout(x)
        x = self._fc(x)
        return x

    @classmethod
    def from_name(cls, model_name, override_params=None):
        cls._check_model_name_is_valid(model_name)
        blocks_args, global_params = get_model_params(model_name, override_params)
        return cls(blocks_args, global_params)

    @classmethod
    def from_pretrained(cls, model_name, load_weights=True, advprop=False, num_classes=1000, in_channels=3):
        model = cls.from_name(model_name, override_params={'num_classes': num_classes})
        if load_weights:
            load_pretrained_weights(model, model_name, load_fc=(num_classes == 1000), advprop=advprop)
        if in_channels != 3:
            Conv2d = get_same_padding_conv2d(image_size=model._global_params.image_size)
            out_channels = round_filters(32, model._global_params)
            model._conv_stem = Conv2d(in_channels, out_channels, kernel_size=3, stride=2, bias=False)
        return model

    @classmethod
    def get_image_size(cls, model_name):
        cls._check_model_name_is_valid(model_name)
        _, _, res, _ = efficientnet_params(model_name)
        return res

    @classmethod
    def _check_model_name_is_valid(cls, model_name):
        """ Validates model name. """
        valid_models = ['efficientnet-b' + str(i) for i in range(9)]
        if model_name not in valid_models:
            raise ValueError('model_name should be one of: ' + ', '.join(valid_models))
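As a quick sanity check (a sketch, assuming the listing above is saved as nets/efficientnet.py), the classification network can be instantiated without downloading pretrained weights via from_name and run on a dummy 224x224 input:

import torch
from nets.efficientnet import EfficientNet

model = EfficientNet.from_name('efficientnet-b0')   # no pretrained weight download
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)   # torch.Size([1, 1000])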

2. Application to YOLOv3


For YOLOv3, we need the three effective feature layers obtained from the backbone feature-extraction network in order to build the strengthened feature pyramid.

With the code above we can extract three effective feature layers, and we can use them to replace the effective feature layers of darknet53, YOLOv3's original backbone.

To further reduce the number of parameters, we also shrink the channel counts of the ordinary convolutions used in YOLOv3; the concrete numbers are shown below.
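To make this reduction concrete, the values below are taken directly from the out_filters dictionary in the YoloBody code that follows; darknet53's corresponding feature layers carry 256, 512 and 1024 channels, and the head convolutions are sized from these smaller values (for efficientnet-b0, the 13x13 branch alternates between 320 and 640 channels instead of 512 and 1024):

# Channel widths of the 52x52 / 26x26 / 13x13 effective feature layers per backbone.
out_filters = {
    0: [40, 112, 320], 1: [40, 112, 320], 2: [48, 120, 352], 3: [48, 136, 384],
    4: [56, 160, 448], 5: [64, 176, 512], 6: [72, 200, 576], 7: [80, 224, 640],
}
for phi, channels in out_filters.items():
    print(f"efficientnet-b{phi}: {channels}")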

The final code for building EfficientNet-YoloV3 is as follows:

from collections import OrderedDict

import torch
import torch.nn as nn

from nets.efficientnet import EfficientNet as EffNet


class EfficientNet(nn.Module):
    def __init__(self, phi, load_weights=False):
        super(EfficientNet, self).__init__()
        model = EffNet.from_pretrained(f'efficientnet-b{phi}', load_weights)
        # Remove the classification head; it is not needed for detection
        del model._conv_head
        del model._bn1
        del model._avg_pooling
        del model._dropout
        del model._fc
        self.model = model

    def forward(self, x):
        x = self.model._conv_stem(x)
        x = self.model._bn0(x)
        x = self.model._swish(x)
        feature_maps = []

        last_x = None
        for idx, block in enumerate(self.model._blocks):
            drop_connect_rate = self.model._global_params.drop_connect_rate
            if drop_connect_rate:
                drop_connect_rate *= float(idx) / len(self.model._blocks)
            x = block(x, drop_connect_rate=drop_connect_rate)
            # Each time the spatial size is halved, the previous block's output closes a feature level
            if block._depthwise_conv.stride == [2, 2]:
                feature_maps.append(last_x)
            elif idx == len(self.model._blocks) - 1:
                feature_maps.append(x)
            last_x = x
        del last_x
        out_feats = [feature_maps[2], feature_maps[3], feature_maps[4]]
        return out_feats


def conv2d(filter_in, filter_out, kernel_size):
    pad = (kernel_size - 1) // 2 if kernel_size else 0
    return nn.Sequential(OrderedDict([
        ("conv", nn.Conv2d(filter_in, filter_out, kernel_size=kernel_size, stride=1, padding=pad, bias=False)),
        ("bn", nn.BatchNorm2d(filter_out)),
        ("relu", nn.LeakyReLU(0.1)),
    ]))


#------------------------------------------------------------------------#
#   make_last_layers contains seven convolutions in total: the first five
#   extract features, the last two produce the yolo prediction output.
#------------------------------------------------------------------------#
def make_last_layers(filters_list, in_filters, out_filter):
    m = nn.Sequential(
        conv2d(in_filters, filters_list[0], 1),
        conv2d(filters_list[0], filters_list[1], 3),
        conv2d(filters_list[1], filters_list[0], 1),
        conv2d(filters_list[0], filters_list[1], 3),
        conv2d(filters_list[1], filters_list[0], 1),
        conv2d(filters_list[0], filters_list[1], 3),
        nn.Conv2d(filters_list[1], out_filter, kernel_size=1, stride=1, padding=0, bias=True)
    )
    return m


class YoloBody(nn.Module):
    def __init__(self, anchors_mask, num_classes, phi=0, load_weights=False):
        super(YoloBody, self).__init__()
        #---------------------------------------------------#
        #   Build the backbone that replaces darknet53.
        #   It returns three effective feature layers with spatial sizes
        #   52,52 / 26,26 / 13,13; their channel counts depend on phi
        #   (see out_filters below).
        #---------------------------------------------------#
        self.backbone = EfficientNet(phi, load_weights=load_weights)

        out_filters = {
            0: [40, 112, 320],
            1: [40, 112, 320],
            2: [48, 120, 352],
            3: [48, 136, 384],
            4: [56, 160, 448],
            5: [64, 176, 512],
            6: [72, 200, 576],
            7: [80, 224, 640],
        }[phi]

        #------------------------------------------------------------------------#
        #   Output channel count of each yolo head; for the VOC dataset
        #   final_out_filter0 = final_out_filter1 = final_out_filter2 = 75
        #------------------------------------------------------------------------#
        self.last_layer0          = make_last_layers([out_filters[-1], int(out_filters[-1] * 2)], out_filters[-1], len(anchors_mask[0]) * (num_classes + 5))

        self.last_layer1_conv     = conv2d(out_filters[-1], out_filters[-2], 1)
        self.last_layer1_upsample = nn.Upsample(scale_factor=2, mode='nearest')
        self.last_layer1          = make_last_layers([out_filters[-2], int(out_filters[-2] * 2)], out_filters[-2] + out_filters[-2], len(anchors_mask[1]) * (num_classes + 5))

        self.last_layer2_conv     = conv2d(out_filters[-2], out_filters[-3], 1)
        self.last_layer2_upsample = nn.Upsample(scale_factor=2, mode='nearest')
        self.last_layer2          = make_last_layers([out_filters[-3], int(out_filters[-3] * 2)], out_filters[-3] + out_filters[-3], len(anchors_mask[2]) * (num_classes + 5))

    def forward(self, x):
        #---------------------------------------------------#
        #   Obtain the three effective feature layers
        #   (52x52, 26x26 and 13x13)
        #---------------------------------------------------#
        x2, x1, x0 = self.backbone(x)

        #---------------------------------------------------#
        #   First feature layer
        #   out0 = (batch_size,255,13,13)
        #---------------------------------------------------#
        # 13,13,1024 -> 13,13,512 -> 13,13,1024 -> 13,13,512 -> 13,13,1024 -> 13,13,512
        out0_branch = self.last_layer0[:5](x0)
        out0        = self.last_layer0[5:](out0_branch)

        # 13,13,512 -> 13,13,256 -> 26,26,256
        x1_in = self.last_layer1_conv(out0_branch)
        x1_in = self.last_layer1_upsample(x1_in)

        # 26,26,256 + 26,26,512 -> 26,26,768
        x1_in = torch.cat([x1_in, x1], 1)
        #---------------------------------------------------#
        #   Second feature layer
        #   out1 = (batch_size,255,26,26)
        #---------------------------------------------------#
        # 26,26,768 -> 26,26,256 -> 26,26,512 -> 26,26,256 -> 26,26,512 -> 26,26,256
        out1_branch = self.last_layer1[:5](x1_in)
        out1        = self.last_layer1[5:](out1_branch)

        # 26,26,256 -> 26,26,128 -> 52,52,128
        x2_in = self.last_layer2_conv(out1_branch)
        x2_in = self.last_layer2_upsample(x2_in)

        # 52,52,128 + 52,52,256 -> 52,52,384
        x2_in = torch.cat([x2_in, x2], 1)
        #---------------------------------------------------#
        #   Third feature layer
        #   out2 = (batch_size,255,52,52)
        #---------------------------------------------------#
        # 52,52,384 -> 52,52,128 -> 52,52,256 -> 52,52,128 -> 52,52,256 -> 52,52,128
        out2 = self.last_layer2(x2_in)
        return out0, out1, out2
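A shape check of the complete detector might look like the following sketch. It assumes the code above is saved as nets/yolo.py; the anchors_mask is the usual nine-anchor split of YOLOv3, and num_classes=20 corresponds to VOC, so each head outputs 3 * (20 + 5) = 75 channels:

import torch
from nets.yolo import YoloBody

anchors_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
model = YoloBody(anchors_mask, num_classes=20, phi=0, load_weights=False)
model.eval()
with torch.no_grad():
    out0, out1, out2 = model(torch.randn(1, 3, 416, 416))
print(out0.shape, out1.shape, out2.shape)
# expected: torch.Size([1, 75, 13, 13]) torch.Size([1, 75, 26, 26]) torch.Size([1, 75, 52, 52])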
