Continuing from the previous post: once the input images have been preprocessed, they can be fed into the ENet model for training.

#Create the model inference
with slim.arg_scope(ENet_arg_scope(weight_decay=weight_decay)):
    logits, probabilities = ENet(images,
                                 num_classes,
                                 batch_size=batch_size,
                                 is_training=True,
                                 reuse=None,
                                 num_initial_blocks=num_initial_blocks,
                                 stage_two_repeat=stage_two_repeat,
                                 skip_connections=skip_connections)

Here, slim.arg_scope is used to apply a set of default arguments to functions, i.e. to override parameter values of functions that have already been defined. In this call, ENet_arg_scope receives the weight_decay value we defined, and the arg scope it returns is then applied to the model. The ENet_arg_scope function itself looks like this:

def ENet_arg_scope(weight_decay=2e-4,
                   batch_norm_decay=0.1,
                   batch_norm_epsilon=0.001):
    '''
    The arg scope for enet model. The weight decay is 2e-4 as seen in the paper.
    Batch_norm decay is 0.1 (momentum 0.1) according to official implementation.

    INPUTS:
    - weight_decay(float): the weight decay for weights variables in conv2d and separable conv2d
    - batch_norm_decay(float): decay for the moving average of batch_norm momentums.
    - batch_norm_epsilon(float): small float added to variance to avoid dividing by zero.

    OUTPUTS:
    - scope(arg_scope): a tf-slim arg_scope with the parameters needed for xception.
    '''
    # Set weight_decay for weights in conv2d and separable_conv2d layers.
    with slim.arg_scope([slim.conv2d],                                          # decorate slim.conv2d with these default arguments
                        weights_regularizer=slim.l2_regularizer(weight_decay),  # L2 regularization on the conv2d weights
                        biases_regularizer=slim.l2_regularizer(weight_decay)):  # L2 regularization on the conv2d biases

        # Set parameters for batch_norm.
        with slim.arg_scope([slim.batch_norm],                                  # likewise, set the slim.batch_norm defaults
                            decay=batch_norm_decay,
                            epsilon=batch_norm_epsilon) as scope:
            return scope

This function nests further slim.arg_scope calls: it sets slim.conv2d's weights and biases regularizers to L2 regularization with the given weight decay, and it sets the decay and epsilon of slim.batch_norm. To summarize: the outer slim.arg_scope call passes our weight_decay into ENet_arg_scope, and inside ENet_arg_scope further slim.arg_scope calls set the defaults of slim.conv2d and slim.batch_norm.
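To make the arg_scope mechanics concrete, here is a minimal, self-contained sketch I wrote (assuming TF 1.x with tf.contrib.slim available; my_arg_scope and demo_conv are made-up names, not from the repo):

# Minimal arg_scope demo: defaults set in the scope are picked up by later slim.conv2d calls.
import tensorflow as tf
slim = tf.contrib.slim

def my_arg_scope(weight_decay=2e-4):
    # Build and return an arg_scope in which slim.conv2d defaults to an L2 weights regularizer.
    with slim.arg_scope([slim.conv2d],
                        weights_regularizer=slim.l2_regularizer(weight_decay)) as sc:
        return sc

inputs = tf.placeholder(tf.float32, [1, 32, 32, 3])
with slim.arg_scope(my_arg_scope(weight_decay=1e-3)):
    # No regularizer is passed here, yet the conv picks it up from the surrounding scope.
    net = slim.conv2d(inputs, 8, [3, 3], scope='demo_conv')

print(tf.losses.get_regularization_losses())   # shows the L2 loss the scope attached to demo_conv's weights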

With ENet's default parameters set up, we can feed the images into ENet for training.

logits, probabilities = ENet(images,                       # the input training images
                             num_classes,
                             batch_size=batch_size,
                             is_training=True,             # whether PReLU and batch normalization run in training mode
                             reuse=None,
                             num_initial_blocks=num_initial_blocks,
                             stage_two_repeat=stage_two_repeat,
                             skip_connections=skip_connections)

ENet's arguments are: the input images, the number of classes, batch_size, whether PReLU and batch normalization run in training mode, reuse (which I haven't fully understood yet; it is passed to tf.variable_scope and controls whether already-created variables with the same names are reused), the number of initial blocks, the number of times the stage-two blocks are repeated, and whether to use skip connections.

Now let's look at ENet's network structure to get an overall picture first (screenshot from the original ENet paper):

Besides the initial module and the fullconv module, there are five stages in total, and stage three is identical to stage two except that it has no downsampling layer. In ENet, the initial module is defined as follows:

After the image comes in, one branch convolves it with 13 3x3 kernels at stride 2, while the other branch applies 2x2 max pooling with stride 2. Since the input is a 3-channel image, the pooled branch still has 3 channels; concatenating the two outputs gives 16 channels.

The bottleneck module is inspired by ResNet. Three kinds of convolution are used: regular, dilated, and asymmetric. The overall layout of the bottleneck module is:

Let's go through each of these modules together with the code.

The code for the initial module is as follows:

def initial_block(inputs, is_training=True, scope='initial_block'):
    '''
    The initial block for Enet has 2 branches: The convolution branch and Maxpool branch.
    The conv branch has 13 layers, while the maxpool branch gives 3 layers corresponding to the RGB channels.
    Both output layers are then concatenated to give an output of 16 layers.

    NOTE: Does not need to store pooling indices since it won't be used later for the final upsampling.

    INPUTS:
    - inputs(Tensor): A 4D tensor of shape [batch_size, height, width, channels]

    OUTPUTS:
    - net_concatenated(Tensor): a 4D Tensor that contains the concatenated output of the two branches.
    '''
    #Convolutional branch
    net_conv = slim.conv2d(inputs, 13, [3,3], stride=2, activation_fn=None, scope=scope+'_conv')   # 3x3 conv, 13 filters, stride 2
    net_conv = slim.batch_norm(net_conv, is_training=is_training, fused=True, scope=scope+'_batchnorm')
    net_conv = prelu(net_conv, scope=scope+'_prelu')

    #Max pool branch
    net_pool = slim.max_pool2d(inputs, [2,2], stride=2, scope=scope+'_max_pool')

    #Concatenated output - does it matter max pool comes first or conv comes first? probably not.
    net_concatenated = tf.concat([net_conv, net_pool], axis=3, name=scope+'_concat')

    return net_concatenated

In the convolution branch, the input is convolved with 13 3x3 kernels at stride 2, the result goes through a batch-normalization layer, and PReLU is used as the activation function.

The PReLU function is implemented as:

def prelu(x, scope, decoder=False):
    '''
    Performs the parametric relu operation. This implementation is based on:
    https://stackoverflow.com/questions/39975676/how-to-implement-prelu-activation-in-tensorflow

    For the decoder portion, prelu becomes just a normal relu.

    INPUTS:
    - x(Tensor): a 4D Tensor that undergoes prelu
    - scope(str): the string to name your prelu operation's alpha variable.
    - decoder(bool): if True, prelu becomes a normal relu.

    OUTPUTS:
    - pos + neg / x (Tensor): gives prelu output only during training; otherwise, just return x.
    '''
    #If decoder, then perform relu and just return the output
    if decoder:
        return tf.nn.relu(x, name=scope)

    # tf.get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None,
    #                 trainable=True, collections=None, caching_device=None, partitioner=None,
    #                 validate_shape=True, use_resource=None, custom_getter=None, constraint=None)
    # name: name of the new or existing variable; shape: its shape;
    # initializer: used to initialize the variable; dtype: its data type.
    alpha = tf.get_variable(scope + 'alpha', x.get_shape()[-1],       # get an existing variable or create a new one
                            initializer=tf.constant_initializer(0.0), # alpha starts at 0 but is trainable (see note below)
                            dtype=tf.float32)
    pos = tf.nn.relu(x)
    neg = alpha * (x - abs(x)) * 0.5
    return pos + neg

This is a faithful implementation of PReLU: for x > 0 it behaves like ReLU, and for x < 0 the term (x - |x|) * 0.5 equals x itself, so the negative part is simply alpha * x. As for my earlier doubt about tf.constant_initializer(0.0) turning this into plain ReLU: alpha is created with tf.get_variable, so it is a trainable variable. It starts at 0 (right after initialization the op does behave like ReLU), but it is updated during training, and that learnable negative slope is exactly what makes the activation "parametric".
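A quick numeric check of that formula (plain NumPy, my own toy numbers, not part of the repo):

# For x < 0, (x - |x|) * 0.5 equals x, so pos + neg is exactly max(x, 0) + alpha * min(x, 0).
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5])
alpha = 0.25                                  # pretend the learned slope is 0.25
pos = np.maximum(x, 0.0)                      # tf.nn.relu(x)
neg = alpha * (x - np.abs(x)) * 0.5           # same expression as in the code above
print(pos + neg)                              # [-0.5   -0.125  0.     1.5  ]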

Then comes the max-pooling branch, which uses a 2x2 kernel with stride 2; finally the outputs of the two branches are merged with concat and returned.

Next come the five different bottleneck variants in the code (the paper describes three kinds of convolution: regular, dilated, and asymmetric; the code additionally distinguishes downsampling and upsampling bottlenecks). The author defines a single bottleneck function and selects the variant with if/elif branches inside it.

def bottleneck(inputs,
               output_depth,
               filter_size,
               regularizer_prob,
               projection_ratio=4,
               seed=0,
               is_training=True,
               downsampling=False,
               upsampling=False,
               pooling_indices=None,
               output_shape=None,
               dilated=False,
               dilation_rate=None,
               asymmetric=False,
               decoder=False,
               scope='bottleneck'):
    ''''''
    #Calculate the depth reduction based on the projection ratio used in 1x1 convolution.
    # (ResNet-style bottleneck projection: the internal channel count is the input channel count divided by projection_ratio)
    reduced_depth = int(inputs.get_shape().as_list()[3] / projection_ratio)

    with slim.arg_scope([prelu], decoder=decoder):

        #=============DOWNSAMPLING BOTTLENECK====================
        if downsampling:
            ...

        #============DILATION CONVOLUTION BOTTLENECK====================   dilated-convolution variant
        #Everything is the same as a regular bottleneck except for the dilation rate argument
        elif dilated:
            ...

        #===========ASYMMETRIC CONVOLUTION BOTTLENECK==============   asymmetric-convolution variant
        #Everything is the same as a regular bottleneck except for a [5,5] kernel decomposed into two [5,1] then [1,5]
        elif asymmetric:
            ...

        #============UPSAMPLING BOTTLENECK================
        #Everything is the same as a regular one, except convolution becomes transposed.
        elif upsampling:
            ...

        #OTHERWISE, just perform a regular bottleneck!
        #==============REGULAR BOTTLENECK==================
        #Save the main branch for addition later
        net_main = inputs
        ...

Let's first look at the downsampling bottleneck; the code is already quite clear:

#=============DOWNSAMPLING BOTTLENECK====================
if downsampling:
    #=============MAIN BRANCH=============
    #Just perform a max pooling
    net_main, pooling_indices = tf.nn.max_pool_with_argmax(inputs,
                                                           ksize=[1,2,2,1],   # batch size and channel count unchanged; height and width halved
                                                           strides=[1,2,2,1],
                                                           padding='SAME',
                                                           name=scope+'_main_max_pool')

    #First get the difference in depth to pad, then pad with zeros only on the last dimension.
    inputs_shape = inputs.get_shape().as_list()
    depth_to_pad = abs(inputs_shape[3] - output_depth)
    paddings = tf.convert_to_tensor([[0,0], [0,0], [0,0], [0, depth_to_pad]])   # shape (4, 2)
    net_main = tf.pad(net_main, paddings=paddings, name=scope+'_main_padding')
    # tf.pad(tensor, paddings, mode="CONSTANT", name=None, constant_values=0)
    # tensor is the input; paddings says how much to pad on each side of each dimension,
    # and its rank must match the rank of tensor.
    # mode: "CONSTANT" pads with a constant (0 by default; override with constant_values).
    # name: node name.

    #=============SUB BRANCH==============
    #First projection that has a 2x2 kernel and stride 2
    net = slim.conv2d(inputs, reduced_depth, [2,2], stride=2, scope=scope+'_conv1')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm1')
    net = prelu(net, scope=scope+'_prelu1')

    #Second conv block
    net = slim.conv2d(net, reduced_depth, [filter_size, filter_size], scope=scope+'_conv2')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm2')
    net = prelu(net, scope=scope+'_prelu2')

    #Final projection with 1x1 kernel
    net = slim.conv2d(net, output_depth, [1,1], scope=scope+'_conv3')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm3')
    net = prelu(net, scope=scope+'_prelu3')

    #Regularizer: spatial_dropout randomly zeroes whole feature maps
    net = spatial_dropout(net, p=regularizer_prob, seed=seed, scope=scope+'_spatial_dropout')

    #Finally, combine the two branches together via an element-wise addition
    net = tf.add(net, net_main, name=scope+'_add')
    net = prelu(net, scope=scope+'_last_prelu')

    #also return inputs shape for convenience later
    return net, pooling_indices, inputs_shape

The main branch first performs a 2x2 max pooling (max_pool_with_argmax, so the pooling indices can be reused later for unpooling). Because the sub branch will output output_depth channels, the pooled tensor is then zero-padded along the channel dimension up to the same depth, and the result is stored in net_main.

In the sub branch, the input is convolved with a 2x2, stride-2 convolution whose filter count is the computed reduced_depth; this replaces the usual 1x1 projection and does the downsampling at the same time. It is followed by batch norm and PReLU; then a convolution with the given filter_size, again batch norm and PReLU; then a 1x1 convolution up to output_depth with batch norm and PReLU; and finally spatial_dropout as the regularizer (a sketch of what spatial dropout does follows below).
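The spatial_dropout helper itself is not shown in this post. A common implementation, and roughly what I assume the author's helper does, is to give tf.nn.dropout a channel-wise noise_shape so that whole feature maps are dropped rather than individual pixels; a minimal sketch (TF 1.x assumed; spatial_dropout_sketch is my own name):

import tensorflow as tf

def spatial_dropout_sketch(x, p=0.1, seed=0, scope='spatial_dropout'):
    # noise_shape [N, 1, 1, C] makes tf.nn.dropout zero out entire feature maps,
    # which is what "randomly zeroing regions" means in the comments above.
    input_shape = tf.shape(x)
    noise_shape = tf.stack([input_shape[0], 1, 1, input_shape[3]])
    return tf.nn.dropout(x, keep_prob=1.0 - p, noise_shape=noise_shape, seed=seed, name=scope)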

The main and sub branches are summed element-wise and passed through PReLU, which completes the bottleneck with downsampling.
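As a concrete example of the shapes involved (my own numbers: a 256x256x16 feature map entering bottleneck1_0 with output_depth=64 and projection_ratio=4):

# Shape walk-through for one downsampling bottleneck (example numbers, batch dimension omitted):
in_ch, out_ch, projection_ratio = 16, 64, 4
reduced_depth = in_ch // projection_ratio             # 4
# main branch: 2x2/2 max pool          -> (128, 128, 16), zero-pad channels by out_ch - in_ch = 48 -> (128, 128, 64)
# sub branch : 2x2/2 conv, 4 filters   -> (128, 128, 4)
#              3x3 conv,   4 filters   -> (128, 128, 4)
#              1x1 conv,  64 filters   -> (128, 128, 64)
# element-wise add of the two branches -> (128, 128, 64)
print(reduced_depth, out_ch - in_ch)                  # 4 48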

Next is the dilated-convolution (dilated) bottleneck; the code is as follows:

elif dilated:
    #Check if dilation rate is given
    if not dilation_rate:
        raise ValueError('Dilation rate is not given.')

    #Save the main branch for addition later
    net_main = inputs

    #First projection with 1x1 kernel (dimensionality reduction)
    net = slim.conv2d(inputs, reduced_depth, [1,1], scope=scope+'_conv1')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm1')
    net = prelu(net, scope=scope+'_prelu1')

    #Second conv block --- apply dilated convolution here
    net = slim.conv2d(net, reduced_depth, [filter_size, filter_size], rate=dilation_rate, scope=scope+'_dilated_conv2')   # dilated convolution
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm2')
    net = prelu(net, scope=scope+'_prelu2')

    #Final projection with 1x1 kernel (Expansion)
    net = slim.conv2d(net, output_depth, [1,1], scope=scope+'_conv3')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm3')
    net = prelu(net, scope=scope+'_prelu3')

    #Regularizer
    net = spatial_dropout(net, p=regularizer_prob, seed=seed, scope=scope+'_spatial_dropout')
    net = prelu(net, scope=scope+'_prelu4')

    #Add the main branch
    net = tf.add(net_main, net, name=scope+'_add_dilated')
    net = prelu(net, scope=scope+'_last_prelu')

    return net

The main branch is just the original input. In the sub branch, the second convolution is a dilated convolution whose dilation rate must be supplied by the caller; everything else is the same as in the regular bottleneck.
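A small sketch contrasting a regular and a dilated 3x3 convolution (my own toy example, assuming TF 1.x with tf.contrib.slim): with rate=2 the 3x3 kernel samples every other pixel, so its receptive field grows to 5x5 while the parameter count stays that of a 3x3 kernel.

import tensorflow as tf
slim = tf.contrib.slim

x = tf.placeholder(tf.float32, [1, 64, 64, 16])
regular = slim.conv2d(x, 16, [3, 3], scope='regular_3x3')              # receptive field 3x3
dilated = slim.conv2d(x, 16, [3, 3], rate=2, scope='dilated_3x3')      # receptive field 5x5, same number of weights
print(regular.get_shape(), dilated.get_shape())                        # both (1, 64, 64, 16): SAME padding keeps the spatial size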

Then comes the asymmetric-convolution bottleneck:

elif asymmetric:
    #Save the main branch for addition later
    net_main = inputs

    #First projection with 1x1 kernel (dimensionality reduction)
    net = slim.conv2d(inputs, reduced_depth, [1,1], scope=scope+'_conv1')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm1')
    net = prelu(net, scope=scope+'_prelu1')

    #Second conv block --- apply asymmetric conv here
    net = slim.conv2d(net, reduced_depth, [filter_size, 1], scope=scope+'_asymmetric_conv2a')
    net = slim.conv2d(net, reduced_depth, [1, filter_size], scope=scope+'_asymmetric_conv2b')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm2')
    net = prelu(net, scope=scope+'_prelu2')

    #Final projection with 1x1 kernel
    net = slim.conv2d(net, output_depth, [1,1], scope=scope+'_conv3')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm3')
    net = prelu(net, scope=scope+'_prelu3')

    #Regularizer
    net = spatial_dropout(net, p=regularizer_prob, seed=seed, scope=scope+'_spatial_dropout')
    net = prelu(net, scope=scope+'_prelu4')

    #Add the main branch
    net = tf.add(net_main, net, name=scope+'_add_asymmetric')
    net = prelu(net, scope=scope+'_last_prelu')

    return net

The main branch is again the original input. In the sub branch, after the 1x1 projection, the convolution is split into an f x 1 kernel followed by a 1 x f kernel. This is the standard factorization of a large kernel: a 5x1 convolution followed by a 1x5 convolution covers the same 5x5 receptive field as a full 5x5 kernel while using far fewer parameters (see the quick count below). The rest is unchanged.
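A quick parameter count (my own illustration, ignoring biases, using the stage-two reduced_depth of 128 / 4 = 32 channels) showing why the 5x5 kernel is factorized:

# Why decompose a [5,5] kernel into [5,1] then [1,5]: same 5x5 receptive field, far fewer weights.
C = 32                                    # reduced_depth in stage two: 128 / projection_ratio (4)
full_5x5   = 5 * 5 * C * C                # 25600 weights for a single 5x5 conv
factorized = (5 * 1 + 1 * 5) * C * C      # 10240 weights for the 5x1 + 1x5 pair
print(full_5x5, factorized)               # 25600 10240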

Next is the bottleneck with upsampling; the code is as follows:

#============UPSAMPLING BOTTLENECK================
#Everything is the same as a regular one, except convolution becomes transposed.
elif upsampling:
    #Check if pooling indices is given
    if pooling_indices == None:
        raise ValueError('Pooling indices are not given.')

    #Check output_shape given or not
    if output_shape == None:
        raise ValueError('Output depth is not given')

    #=======MAIN BRANCH=======
    #Main branch to upsample. output shape must match with the shape of the layer that was pooled initially, in order
    #for the pooling indices to work correctly. However, the initial pooled layer was padded, so need to reduce dimension
    #before unpooling. In the paper, padding is replaced with convolution for this purpose of reducing the depth!
    net_unpool = slim.conv2d(inputs, output_depth, [1,1], scope=scope+'_main_conv1')
    net_unpool = slim.batch_norm(net_unpool, is_training=is_training, scope=scope+'batch_norm1')
    net_unpool = unpool(net_unpool, pooling_indices, output_shape=output_shape, scope='unpool')

    #======SUB BRANCH=======
    #First 1x1 projection to reduce depth
    net = slim.conv2d(inputs, reduced_depth, [1,1], scope=scope+'_conv1')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm2')
    net = prelu(net, scope=scope+'_prelu1')

    #Second conv block -----------------------------> NOTE: using tf.nn.conv2d_transpose for variable input shape.
    net_unpool_shape = net_unpool.get_shape().as_list()
    output_shape = [net_unpool_shape[0], net_unpool_shape[1], net_unpool_shape[2], reduced_depth]
    output_shape = tf.convert_to_tensor(output_shape)
    filter_size = [filter_size, filter_size, reduced_depth, reduced_depth]
    filters = tf.get_variable(shape=filter_size, initializer=initializers.xavier_initializer(), dtype=tf.float32, name=scope+'_transposed_conv2_filters')

    # net = slim.conv2d_transpose(net, reduced_depth, [filter_size, filter_size], stride=2, scope=scope+'_transposed_conv2')
    net = tf.nn.conv2d_transpose(net, filter=filters, strides=[1,2,2,1], output_shape=output_shape, name=scope+'_transposed_conv2')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm3')
    net = prelu(net, scope=scope+'_prelu2')

    #Final projection with 1x1 kernel
    net = slim.conv2d(net, output_depth, [1,1], scope=scope+'_conv3')
    net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm4')
    net = prelu(net, scope=scope+'_prelu3')

    #Regularizer
    net = spatial_dropout(net, p=regularizer_prob, seed=seed, scope=scope+'_spatial_dropout')
    net = prelu(net, scope=scope+'_prelu4')

    #Finally, add the unpooling layer and the sub branch together
    net = tf.add(net, net_unpool, name=scope+'_add_upsample')
    net = prelu(net, scope=scope+'_last_prelu')

    return net

It uses an unpooling operation, for which the author wrote his own function. The idea is the reverse of max_pool_with_argmax: the mask holds, for every pooled value, the flattened index of the position it came from; unpool decodes those flat indices back into (batch, row, column, channel) coordinates and uses tf.scatter_nd to write each value back to its original position, leaving zeros everywhere else. A small worked example follows after the code.

def unpool(updates, mask, k_size=[1, 2, 2, 1], output_shape=None, scope=''):
    '''
    Unpooling function based on the implementation by Panaetius at https://github.com/tensorflow/tensorflow/issues/2169

    INPUTS:
    - inputs(Tensor): a 4D tensor of shape [batch_size, height, width, num_channels] that represents the input block to be upsampled
    - mask(Tensor): a 4D tensor that represents the argmax values/pooling indices of the previously max-pooled layer
    - k_size(list): a list of values representing the dimensions of the unpooling filter.
    - output_shape(list): a list of values to indicate what the final output shape should be after unpooling
    - scope(str): the string name to name your scope

    OUTPUTS:
    - ret(Tensor): the returned 4D tensor that has the shape of output_shape.
    '''
    with tf.variable_scope(scope):
        mask = tf.cast(mask, tf.int32)    # cast to int32; mask holds the flat argmax indices recorded during max pooling
        input_shape = tf.shape(updates, out_type=tf.int32)

        # calculate the new (unpooled) shape if it was not given
        if output_shape is None:
            output_shape = (input_shape[0], input_shape[1] * k_size[1], input_shape[2] * k_size[2], input_shape[3])

        # calculate indices for batch, height, width and feature maps
        one_like_mask = tf.ones_like(mask, dtype=tf.int32)                      # tensor of ones with the same shape as mask
        batch_shape = tf.concat([[input_shape[0]], [1], [1], [1]], 0)            # [batch_size, 1, 1, 1]
        batch_range = tf.reshape(tf.range(output_shape[0], dtype=tf.int32), shape=batch_shape)
        b = one_like_mask * batch_range                                          # batch index of every element
        y = mask // (output_shape[2] * output_shape[3])                          # row index recovered from the flat argmax index
        x = (mask // output_shape[3]) % output_shape[2]                          # column index; same as (mask % (output_shape[2] * output_shape[3])) // output_shape[3]
        feature_range = tf.range(output_shape[3], dtype=tf.int32)
        f = one_like_mask * feature_range                                        # channel index of every element

        # transpose indices & reshape update values to one dimension
        updates_size = tf.size(updates)
        indices = tf.transpose(tf.reshape(tf.stack([b, y, x, f]), [4, updates_size]))
        values = tf.reshape(updates, [updates_size])
        ret = tf.scatter_nd(indices, values, output_shape)
        return ret
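To see what unpool actually does, here is a tiny worked example I put together (assuming TF 1.x and that the unpool function above is defined in the same script; note that on some older TF 1.x builds max_pool_with_argmax only has a GPU kernel):

# Max-pool a 1x4x4x1 tensor, then scatter the maxima back to where they came from.
import numpy as np
import tensorflow as tf

img = np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1)
x = tf.constant(img)
pooled, argmax = tf.nn.max_pool_with_argmax(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
unpooled = unpool(pooled, argmax, output_shape=[1, 4, 4, 1], scope='unpool_demo')

with tf.Session() as sess:
    print(sess.run(unpooled)[0, :, :, 0])
    # expected:
    # [[ 0.  0.  0.  0.]
    #  [ 0.  5.  0.  7.]
    #  [ 0.  0.  0.  0.]
    #  [ 0. 13.  0. 15.]]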

Finally, the plain (regular) bottleneck module is defined:

#OTHERWISE, just perform a regular bottleneck!
#==============REGULAR BOTTLENECK==================
#Save the main branch for addition later
net_main = inputs

#First projection with 1x1 kernel
net = slim.conv2d(inputs, reduced_depth, [1,1], scope=scope+'_conv1')
net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm1')
net = prelu(net, scope=scope+'_prelu1')

#Second conv block
net = slim.conv2d(net, reduced_depth, [filter_size, filter_size], scope=scope+'_conv2')
net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm2')
net = prelu(net, scope=scope+'_prelu2')

#Final projection with 1x1 kernel
net = slim.conv2d(net, output_depth, [1,1], scope=scope+'_conv3')
net = slim.batch_norm(net, is_training=is_training, scope=scope+'_batch_norm3')
net = prelu(net, scope=scope+'_prelu3')

#Regularizer
net = spatial_dropout(net, p=regularizer_prob, seed=seed, scope=scope+'_spatial_dropout')
net = prelu(net, scope=scope+'_prelu4')

#Add the main branch
net = tf.add(net_main, net, name=scope+'_add_regular')
net = prelu(net, scope=scope+'_last_prelu')

return net

Comparing against the structure in the ENet paper, the author reproduces the network faithfully.

The code for ENet itself is as follows:

inputs_shape = inputs.get_shape().as_list()
inputs.set_shape(shape=(batch_size, inputs_shape[1], inputs_shape[2], inputs_shape[3]))

with tf.variable_scope(scope, reuse=reuse):   # variable scope; the reuse flag controls whether existing variables with these names are reused
    #Set the primary arg scopes. Fused batch_norm is faster than normal batch norm.
    with slim.arg_scope([initial_block, bottleneck], is_training=is_training),\
         slim.arg_scope([slim.batch_norm], fused=True), \
         slim.arg_scope([slim.conv2d, slim.conv2d_transpose], activation_fn=None):

        #=================INITIAL BLOCK=================
        net = initial_block(inputs, scope='initial_block_1')
        for i in range(2, max(num_initial_blocks, 1) + 1):   # with num_initial_blocks=1 this is range(2, 2), so the loop body never runs
            net = initial_block(net, scope='initial_block_' + str(i))

        #Save for skip connection later
        if skip_connections:
            net_one = net

        #===================STAGE ONE=======================
        net, pooling_indices_1, inputs_shape_1 = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, downsampling=True, scope='bottleneck1_0')
        net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, scope='bottleneck1_1')
        net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, scope='bottleneck1_2')
        net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, scope='bottleneck1_3')
        net = bottleneck(net, output_depth=64, filter_size=3, regularizer_prob=0.01, scope='bottleneck1_4')

        #Save for skip connection later
        if skip_connections:
            net_two = net

        #regularization prob is 0.1 from bottleneck 2.0 onwards
        with slim.arg_scope([bottleneck], regularizer_prob=0.1):
            net, pooling_indices_2, inputs_shape_2 = bottleneck(net, output_depth=128, filter_size=3, downsampling=True, scope='bottleneck2_0')

            #Repeat the stage two at least twice to get stage 2 and 3:
            for i in range(2, max(stage_two_repeat, 2) + 2):
                net = bottleneck(net, output_depth=128, filter_size=3, scope='bottleneck'+str(i)+'_1')
                net = bottleneck(net, output_depth=128, filter_size=3, dilated=True, dilation_rate=2, scope='bottleneck'+str(i)+'_2')
                net = bottleneck(net, output_depth=128, filter_size=5, asymmetric=True, scope='bottleneck'+str(i)+'_3')
                net = bottleneck(net, output_depth=128, filter_size=3, dilated=True, dilation_rate=4, scope='bottleneck'+str(i)+'_4')
                net = bottleneck(net, output_depth=128, filter_size=3, scope='bottleneck'+str(i)+'_5')
                net = bottleneck(net, output_depth=128, filter_size=3, dilated=True, dilation_rate=8, scope='bottleneck'+str(i)+'_6')
                net = bottleneck(net, output_depth=128, filter_size=5, asymmetric=True, scope='bottleneck'+str(i)+'_7')
                net = bottleneck(net, output_depth=128, filter_size=3, dilated=True, dilation_rate=16, scope='bottleneck'+str(i)+'_8')   # dilation rates 2/4/8/16 (would tuning them improve results?)

        with slim.arg_scope([bottleneck], regularizer_prob=0.1, decoder=True):
            #===================STAGE FOUR========================
            bottleneck_scope_name = "bottleneck" + str(i + 1)

            #The decoder section, so start to upsample.
            net = bottleneck(net, output_depth=64, filter_size=3, upsampling=True,
                             pooling_indices=pooling_indices_2, output_shape=inputs_shape_2, scope=bottleneck_scope_name+'_0')

            #Perform skip connections here
            if skip_connections:
                net = tf.add(net, net_two, name=bottleneck_scope_name+'_skip_connection')

            net = bottleneck(net, output_depth=64, filter_size=3, scope=bottleneck_scope_name+'_1')
            net = bottleneck(net, output_depth=64, filter_size=3, scope=bottleneck_scope_name+'_2')

            #===================STAGE FIVE========================
            bottleneck_scope_name = "bottleneck" + str(i + 2)

            net = bottleneck(net, output_depth=16, filter_size=3, upsampling=True,
                             pooling_indices=pooling_indices_1, output_shape=inputs_shape_1, scope=bottleneck_scope_name+'_0')

            #perform skip connections here
            if skip_connections:
                net = tf.add(net, net_one, name=bottleneck_scope_name+'_skip_connection')

            net = bottleneck(net, output_depth=16, filter_size=3, scope=bottleneck_scope_name+'_1')

        #=============FINAL CONVOLUTION=============
        logits = slim.conv2d_transpose(net, num_classes, [2,2], stride=2, scope='fullconv')   # transposed convolution (the fullconv layer)
        probabilities = tf.nn.softmax(logits, name='logits_to_softmax')

return logits, probabilities
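One detail worth checking is the index arithmetic that names the stages. A tiny check in plain Python (my own, for the typical stage_two_repeat = 2):

# How the loop index produces the stage names used above (assuming stage_two_repeat = 2):
stage_two_repeat = 2
for i in range(2, max(stage_two_repeat, 2) + 2):    # range(2, 4) -> i = 2, 3
    pass                                            # builds the bottleneck2_x and bottleneck3_x blocks
print("after the loop, i =", i)                     # 3
print("stage four :", "bottleneck" + str(i + 1))    # bottleneck4
print("stage five :", "bottleneck" + str(i + 2))    # bottleneck5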
