[TensorFlow in Action] Implementing the Classic Convolutional Neural Network ResNet in TensorFlow
ResNet
ResNet (Residual Network) used Residual Units to successfully train a 152-layer deep neural network, winning first place in the ILSVRC 2015 competition with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. The ResNet structure greatly accelerates the training of ultra-deep neural networks and also improves model accuracy substantially. ResNet also transfers very well: its residual structure can be applied directly to Inception Net.
Before ResNet, Schmidhuber's group proposed the Highway Network, whose principle is very similar to ResNet's. Depth is generally considered crucial to a network's performance, but the deeper the network, the harder it is to train; the goal of the Highway Network is precisely to make extremely deep networks trainable. A Highway Network effectively modifies each layer's activation: whereas an ordinary layer applies only a nonlinear transform to its input, a Highway layer also lets a certain fraction of the original input x pass through, i.e. y = H(x, W_H) · T(x, W_T) + x · C(x, W_C), where T is the transform gate and C the carry gate; the paper sets C = 1 − T. A portion of the previous layer's information can thus reach the next layer without going through matrix multiplication and nonlinear transformation, like an information highway, hence the name Highway Network. The network learns through its gating units how to control the flow of information, i.e. what fraction of the original information to carry through. In principle this design allows networks of arbitrary depth to be trained, with an optimization difficulty essentially independent of depth, whereas traditional architectures are very sensitive to depth and become drastically harder to train as depth grows.
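To make the gating mechanism concrete, here is a minimal sketch of one fully connected Highway layer (a hypothetical helper written for illustration, not code from the paper); it assumes `size` equals the width of x so that the gated sum is well defined:

import tensorflow as tf

def highway_layer(x, size, scope='highway'):
    # Assumes size == x.shape[-1], so H*T + x*(1-T) is a valid element-wise sum.
    with tf.variable_scope(scope):
        # H(x): the ordinary nonlinear transform of the input.
        H = tf.layers.dense(x, size, activation=tf.nn.relu, name='transform')
        # T(x): the transform gate; a negative initial bias makes the layer
        # carry most of the input through unchanged early in training.
        T = tf.layers.dense(x, size, activation=tf.nn.sigmoid,
                            bias_initializer=tf.constant_initializer(-1.0),
                            name='gate')
        # y = H(x) * T(x) + x * C(x), with the carry gate C = 1 - T.
        return H * T + x * (1.0 - T)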
ResNet likewise allows input information to pass directly to later layers. The original inspiration for ResNet came from the degradation problem: as network depth keeps increasing, accuracy first rises, then saturates, and further depth then causes accuracy to drop. Suppose a relatively shallow network has reached its saturated accuracy; appending a few identity-mapping layers (y = x) should at least not increase the error, i.e. a deeper network should not have a higher training error. This idea of passing the previous layer's output directly to later layers through identity mappings is where ResNet came from. Suppose the input of some segment of the network is x and the desired underlying mapping is H(x). If we pass the input x straight to the output as an initial result, the target we actually need to learn becomes F(x) = H(x) − x. ResNet thus changes the learning target: instead of learning the complete output H(x), it learns only the difference between output and input, H(x) − x, i.e. the residual. The figure below shows a ResNet residual learning unit (Residual Unit):
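In code, the residual unit just described can be captured in a few lines. The sketch below is an illustrative simplification (not the full implementation given later): two 3x3 convolutions play the role of F(x), and the unchanged input is added back, so the unit outputs F(x) + x = H(x). It assumes `depth` equals the input's channel count so the addition is valid:

import tensorflow as tf
slim = tf.contrib.slim

def residual_unit(x, depth, scope='residual_unit'):
    # Assumes depth == number of input channels, so x + residual is valid.
    with tf.variable_scope(scope):
        # F(x): the residual branch, here two 3x3 convolutions.
        residual = slim.conv2d(x, depth, [3, 3], scope='conv1')
        residual = slim.conv2d(residual, depth, [3, 3],
                               activation_fn=None, scope='conv2')
        # H(x) = F(x) + x: the identity shortcut adds the input back unchanged.
        return tf.nn.relu(x + residual)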
The biggest difference between a plain, directly connected convolutional network and ResNet is that ResNet has many bypass branches feeding the input directly to later layers, so that those layers can learn the residual directly; these connections are called shortcuts or skip connections. The figure below compares VGGNet-19, a 34-layer plain convolutional network, and a 34-layer ResNet:
Traditional convolutional and fully connected layers inevitably lose or degrade some information as it is passed along. ResNet alleviates this problem to some extent: the input is routed around the weight layers straight to the output, preserving the integrity of the information, and the whole network then only needs to learn the part where output and input differ, which simplifies the learning target and lowers the difficulty.
The ResNet paper proposes both two-layer and three-layer residual learning units. The two-layer unit consists of two 3x3 convolutions with the same number of output channels (since the residual equals the target output minus the input, H(x) − x, the input and output dimensions must match). The three-layer unit uses the 1x1 convolutions of Network In Network and Inception Net, placing one 1x1 convolution before and one after the middle 3x3 convolution, so the depth is first reduced and then restored. In addition, when the input and output dimensions differ, we can apply a linear projection to x to change its dimensions before connecting it to the later layer, as shown in the figure below (a code sketch of this unit follows):
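A rough sketch of this three-layer bottleneck unit, including the 1x1 linear projection on the shortcut when the dimensions differ, might look as follows (illustrative only; the full pre-activation version is the `bottleneck` function in the implementation further below):

import tensorflow as tf
slim = tf.contrib.slim

def bottleneck_sketch(x, depth, depth_bottleneck, stride, scope='bottleneck'):
    with tf.variable_scope(scope):
        depth_in = x.get_shape()[-1].value
        if depth == depth_in:
            # Dimensions match: the shortcut is the (possibly subsampled) input.
            shortcut = x if stride == 1 else slim.max_pool2d(
                x, [1, 1], stride=stride, scope='shortcut')
        else:
            # Dimensions differ: a 1x1 linear projection changes the depth.
            shortcut = slim.conv2d(x, depth, [1, 1], stride=stride,
                                   activation_fn=None, scope='shortcut')
        # 1x1 reduces the depth, 3x3 works at the reduced depth, 1x1 restores it.
        residual = slim.conv2d(x, depth_bottleneck, [1, 1], scope='conv1')
        residual = slim.conv2d(residual, depth_bottleneck, [3, 3],
                               stride=stride, scope='conv2')
        residual = slim.conv2d(residual, depth, [1, 1],
                               activation_fn=None, scope='conv3')
        return tf.nn.relu(shortcut + residual)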
The figure below lists the ResNet configurations at various depths. Their basic structures are very similar: all of them are stacks of the two-layer and three-layer residual units described above.
With the ResNet structure in place, the phenomenon of training error increasing as layers are added disappears: a ResNet's training error keeps decreasing as the depth grows, and its test-set performance improves as well.
ResNet V2 differs from ResNet V1 in that, by analyzing the propagation formulas of the residual learning unit, the authors found that the forward and backward signals can propagate directly between units when the nonlinear activation (such as ReLU) on the skip-connection path is replaced with an identity mapping (y = x). ResNet V2 also applies Batch Normalization in every layer, as pre-activation before each convolution. With these changes, the new residual unit is easier to train and generalizes better than before.
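The change in ordering can be seen by contrasting schematic V1 and V2 units. The sketch below is a simplification written for illustration (not library code), using a plain two-convolution unit rather than the bottleneck variant:

import tensorflow as tf
slim = tf.contrib.slim

def unit_v1(x, depth):
    # V1 ordering: conv -> BN -> ReLU, conv -> BN, add the shortcut,
    # then a final ReLU after the addition.
    y = slim.conv2d(x, depth, [3, 3], normalizer_fn=slim.batch_norm)
    y = slim.conv2d(y, depth, [3, 3], normalizer_fn=slim.batch_norm,
                    activation_fn=None)
    return tf.nn.relu(x + y)

def unit_v2(x, depth):
    # V2 ordering: BN and ReLU move in front of each convolution
    # ("pre-activation"), and the addition is a pure identity mapping
    # with no ReLU after it.
    preact = slim.batch_norm(x, activation_fn=tf.nn.relu)
    y = slim.conv2d(preact, depth, [3, 3], normalizer_fn=slim.batch_norm)
    y = slim.conv2d(y, depth, [3, 3],
                    normalizer_fn=None, activation_fn=None)
    return x + y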
Below, we implement ResNet V2 in TensorFlow:
"""Typical use:from tensorflow.contrib.slim.nets import resnet_v2ResNet-101 for image classification into 1000 classes:# inputs has shape [batch, 224, 224, 3]with slim.arg_scope(resnet_v2.resnet_arg_scope(is_training)):net, end_points = resnet_v2.resnet_v2_101(inputs, 1000)ResNet-101 for semantic segmentation into 21 classes:# inputs has shape [batch, 513, 513, 3]with slim.arg_scope(resnet_v2.resnet_arg_scope(is_training)):net, end_points = resnet_v2.resnet_v2_101(inputs,21,global_pool=False,output_stride=16) """ import collections import tensorflow as tf slim = tf.contrib.slimclass Block(collections.namedtuple('Block', ['scope', 'unit_fn', 'args'])):"""A named tuple describing a ResNet block.Its parts are:scope: The scope of the `Block`.unit_fn: The ResNet unit function which takes as input a `Tensor` andreturns another `Tensor` with the output of the ResNet unit.args: A list of length equal to the number of units in the `Block`. The listcontains one (depth, depth_bottleneck, stride) tuple for each unit in theblock to serve as argument to unit_fn."""def subsample(inputs, factor, scope=None): # 降采样 # inputs 输入;factor 采样因子"""Subsamples the input along the spatial dimensions.Args:inputs: A `Tensor` of size [batch, height_in, width_in, channels].factor: The subsampling factor.scope: Optional variable_scope.Returns:output: A `Tensor` of size [batch, height_out, width_out, channels] with theinput, either intact (if factor == 1) or subsampled (if factor > 1)."""if factor == 1:return inputselse:return slim.max_pool2d(inputs, [1, 1], stride=factor, scope=scope)def conv2d_same(inputs, num_outputs, kernel_size, stride, scope=None): # 卷积层"""Strided 2-D convolution with 'SAME' padding.When stride > 1, then we do explicit zero-padding, followed by conv2d with'VALID' padding.Note thatnet = conv2d_same(inputs, num_outputs, 3, stride=stride)is equivalent tonet = slim.conv2d(inputs, num_outputs, 3, stride=1, padding='SAME')net = subsample(net, factor=stride)whereasnet = slim.conv2d(inputs, num_outputs, 3, stride=stride, padding='SAME')is different when the input's height or width is even, which is why we add thecurrent function. For more details, see ResnetUtilsTest.testConv2DSameEven().Args:inputs: A 4-D tensor of size [batch, height_in, width_in, channels].num_outputs: An integer, the number of output filters.kernel_size: An int with the kernel_size of the filters.stride: An integer, the output stride.rate: An integer, rate for atrous convolution.scope: Scope.Returns:output: A 4-D tensor of size [batch, height_out, width_out, channels] withthe convolution output."""if stride == 1:return slim.conv2d(inputs, num_outputs, kernel_size, stride=1,padding='SAME', scope=scope)else:#kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)pad_total = kernel_size - 1pad_beg = pad_total // 2pad_end = pad_total - pad_beginputs = tf.pad(inputs,[[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]])return slim.conv2d(inputs, num_outputs, kernel_size, stride=stride,padding='VALID', scope=scope)@slim.add_arg_scope def stack_blocks_dense(net, blocks,outputs_collections=None): # 堆叠Blocks的函数;使用两层循环,逐个Residual Unit堆叠"""Stacks ResNet `Blocks` and controls output feature density.First, this function creates scopes for the ResNet in the form of'block_name/unit_1', 'block_name/unit_2', etc.Args:net: A `Tensor` of size [batch, height, width, channels].blocks: A list of length equal to the number of ResNet `Blocks`. 
Eachelement is a ResNet `Block` object describing the units in the `Block`.outputs_collections: Collection to add the ResNet block outputs.Returns:net: Output tensor """for block in blocks:with tf.variable_scope(block.scope, 'block', [net]) as sc:for i, unit in enumerate(block.args):with tf.variable_scope('unit_%d' % (i + 1), values=[net]):unit_depth, unit_depth_bottleneck, unit_stride = unitnet = block.unit_fn(net,depth=unit_depth,depth_bottleneck=unit_depth_bottleneck,stride=unit_stride)net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)return netdef resnet_arg_scope(is_training=True,weight_decay=0.0001,batch_norm_decay=0.997,batch_norm_epsilon=1e-5,batch_norm_scale=True): # 用来定义某些函数的参数默认值"""Defines the default ResNet arg scope.TODO(gpapan): The batch-normalization related default values above areappropriate for use in conjunction with the reference ResNet modelsreleased at https://github.com/KaimingHe/deep-residual-networks. Whentraining ResNets from scratch, they might need to be tuned.Args:is_training: Whether or not we are training the parameters in the batchnormalization layers of the model.weight_decay: The weight decay to use for regularizing the model.batch_norm_decay: The moving average decay when estimating layer activationstatistics in batch normalization.batch_norm_epsilon: Small constant to prevent division by zero whennormalizing activations by their variance in batch normalization.batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale theactivations in the batch normalization layer.Returns:An `arg_scope` to use for the resnet models."""batch_norm_params = {'is_training': is_training,'decay': batch_norm_decay,'epsilon': batch_norm_epsilon,'scale': batch_norm_scale,'updates_collections': tf.GraphKeys.UPDATE_OPS,}with slim.arg_scope([slim.conv2d],weights_regularizer=slim.l2_regularizer(weight_decay),weights_initializer=slim.variance_scaling_initializer(),activation_fn=tf.nn.relu,normalizer_fn=slim.batch_norm,normalizer_params=batch_norm_params):with slim.arg_scope([slim.batch_norm], **batch_norm_params):# The following implies padding='SAME' for pool1, which makes feature# alignment easier for dense prediction tasks. This is also used in# https://github.com/facebook/fb.resnet.torch. However the accompanying# code of 'Deep Residual Learning for Image Recognition' uses# padding='VALID' for pool1. You can switch to that choice by setting# slim.arg_scope([slim.max_pool2d], padding='VALID').with slim.arg_scope([slim.max_pool2d], padding='SAME') as arg_sc:return arg_sc@slim.add_arg_scope def bottleneck(inputs, depth, depth_bottleneck, stride,outputs_collections=None, scope=None): # 核心的bottleneck残差学习单元"""Bottleneck residual unit variant with BN before convolutions.This is the full preactivation residual unit variant proposed in [2]. SeeFig. 1(b) of [2] for its definition. Note that we use here the bottleneckvariant which has an extra bottleneck layer.When putting together two consecutive ResNet blocks that use this unit, oneshould use stride = 2 in the last unit of the first block.Args:inputs: A tensor of size [batch, height, width, channels].depth: The depth of the ResNet unit output.depth_bottleneck: The depth of the bottleneck layers.stride: The ResNet unit's stride. 
Determines the amount of downsampling ofthe units output compared to its input.rate: An integer, rate for atrous convolution.outputs_collections: Collection to add the ResNet unit output.scope: Optional variable_scope.Returns:The ResNet unit's output."""with tf.variable_scope(scope, 'bottleneck_v2', [inputs]) as sc:depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)preact = slim.batch_norm(inputs, activation_fn=tf.nn.relu, scope='preact')if depth == depth_in:shortcut = subsample(inputs, stride, 'shortcut')else:shortcut = slim.conv2d(preact, depth, [1, 1], stride=stride,normalizer_fn=None, activation_fn=None,scope='shortcut')residual = slim.conv2d(preact, depth_bottleneck, [1, 1], stride=1,scope='conv1')residual = conv2d_same(residual, depth_bottleneck, 3, stride,scope='conv2')residual = slim.conv2d(residual, depth, [1, 1], stride=1,normalizer_fn=None, activation_fn=None,scope='conv3')output = shortcut + residualreturn slim.utils.collect_named_outputs(outputs_collections,sc.name,output)def resnet_v2(inputs,blocks,num_classes=None,global_pool=True,include_root_block=True,reuse=None,scope=None): # 生成ResNetV2的主函数"""Generator for v2 (preactivation) ResNet models.This function generates a family of ResNet v2 models. See the resnet_v2_*()methods for specific model instantiations, obtained by selecting differentblock instantiations that produce ResNets of various depths.Args:inputs: A tensor of size [batch, height_in, width_in, channels].blocks: A list of length equal to the number of ResNet blocks. Each elementis a resnet_utils.Block object describing the units in the block.num_classes: Number of predicted classes for classification tasks. If Nonewe return the features before the logit layer.include_root_block: If True, include the initial convolution followed bymax-pooling, if False excludes it. If excluded, `inputs` should be theresults of an activation-less convolution.reuse: whether or not the network and its variables should be reused. To beable to reuse 'scope' must be given.scope: Optional variable_scope.Returns:net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].If global_pool is False, then height_out and width_out are reduced by afactor of output_stride compared to the respective height_in and width_in,else both height_out and width_out equal one. If num_classes is None, thennet is the output of the last ResNet block, potentially after globalaverage pooling. If num_classes is not None, net contains the pre-softmaxactivations.end_points: A dictionary from components of the network to the correspondingactivation.Raises:ValueError: If the target output_stride is not valid."""with tf.variable_scope(scope, 'resnet_v2', [inputs], reuse=reuse) as sc:end_points_collection = sc.original_name_scope + '_end_points'with slim.arg_scope([slim.conv2d, bottleneck,stack_blocks_dense],outputs_collections=end_points_collection):net = inputsif include_root_block:# We do not include batch normalization or activation functions in conv1# because the first ResNet unit will perform these. Cf. Appendix of [2]. with slim.arg_scope([slim.conv2d],activation_fn=None, normalizer_fn=None):net = conv2d_same(net, 64, 7, stride=2, scope='conv1')net = slim.max_pool2d(net, [3, 3], stride=2, scope='pool1')net = stack_blocks_dense(net, blocks)# This is needed because the pre-activation variant does not have batch# normalization or activation functions in the residual unit output. 
See# Appendix of [2].net = slim.batch_norm(net, activation_fn=tf.nn.relu, scope='postnorm')if global_pool:# Global average pooling.net = tf.reduce_mean(net, [1, 2], name='pool5', keep_dims=True)if num_classes is not None:net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,normalizer_fn=None, scope='logits')# Convert end_points_collection into a dictionary of end_points.end_points = slim.utils.convert_collection_to_dict(end_points_collection)if num_classes is not None:end_points['predictions'] = slim.softmax(net, scope='predictions')return net, end_pointsdef resnet_v2_50(inputs,num_classes=None,global_pool=True,reuse=None,scope='resnet_v2_50'):"""ResNet-50 model of [1]. See resnet_v2() for arg and return description."""blocks = [Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),Block('block2', bottleneck, [(512, 128, 1)] * 3 + [(512, 128, 2)]),Block('block3', bottleneck, [(1024, 256, 1)] * 5 + [(1024, 256, 2)]),Block('block4', bottleneck, [(2048, 512, 1)] * 3)]return resnet_v2(inputs, blocks, num_classes, global_pool,include_root_block=True, reuse=reuse, scope=scope)def resnet_v2_101(inputs,num_classes=None,global_pool=True,reuse=None,scope='resnet_v2_101'):"""ResNet-101 model of [1]. See resnet_v2() for arg and return description."""blocks = [Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),Block('block2', bottleneck, [(512, 128, 1)] * 3 + [(512, 128, 2)]),Block('block3', bottleneck, [(1024, 256, 1)] * 22 + [(1024, 256, 2)]),Block('block4', bottleneck, [(2048, 512, 1)] * 3)]return resnet_v2(inputs, blocks, num_classes, global_pool,include_root_block=True, reuse=reuse, scope=scope)def resnet_v2_152(inputs,num_classes=None,global_pool=True,reuse=None,scope='resnet_v2_152'):"""ResNet-152 model of [1]. See resnet_v2() for arg and return description."""blocks = [Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),Block('block2', bottleneck, [(512, 128, 1)] * 7 + [(512, 128, 2)]),Block('block3', bottleneck, [(1024, 256, 1)] * 35 + [(1024, 256, 2)]),Block('block4', bottleneck, [(2048, 512, 1)] * 3)]return resnet_v2(inputs, blocks, num_classes, global_pool,include_root_block=True, reuse=reuse, scope=scope)def resnet_v2_200(inputs,num_classes=None,global_pool=True,reuse=None,scope='resnet_v2_200'):"""ResNet-200 model of [2]. 
See resnet_v2() for arg and return description."""blocks = [Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),Block('block2', bottleneck, [(512, 128, 1)] * 23 + [(512, 128, 2)]),Block('block3', bottleneck, [(1024, 256, 1)] * 35 + [(1024, 256, 2)]),Block('block4', bottleneck, [(2048, 512, 1)] * 3)]return resnet_v2(inputs, blocks, num_classes, global_pool,include_root_block=True, reuse=reuse, scope=scope)from datetime import datetime import math import time def time_tensorflow_run(session, target, info_string): # 评测函数num_steps_burn_in = 10total_duration = 0.0total_duration_squared = 0.0for i in range(num_batches + num_steps_burn_in):start_time = time.time()_ = session.run(target)duration = time.time() - start_timeif i >= num_steps_burn_in:if not i % 10:print ('%s: step %d, duration = %.3f' %(datetime.now(), i - num_steps_burn_in, duration))total_duration += durationtotal_duration_squared += duration * durationmn = total_duration / num_batchesvr = total_duration_squared / num_batches - mn * mnsd = math.sqrt(vr)print ('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %(datetime.now(), info_string, num_batches, mn, sd))batch_size = 32 height, width = 224, 224 inputs = tf.random_uniform((batch_size, height, width, 3)) with slim.arg_scope(resnet_arg_scope(is_training=False)):net, end_points = resnet_v2_152(inputs, 1000)init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) num_batches=100 time_tensorflow_run(sess, net, "Forward")
ResNet can be counted as a milestone breakthrough in deep learning: it made the training of extremely deep neural networks possible in a genuine sense.
Reposted from: https://www.cnblogs.com/Negan-ZW/p/9538414.html