U-Net: Convolutional Networks for Biomedical Image Segmentation
https://arxiv.org/abs/1505.04597

Network architecture diagram

An encoder-decoder network architecture.

From the architecture diagram and the paper, the following conclusions can be drawn:

  1. The network has no fully connected layers; it consists only of convolutions together with down-sampling and up-sampling operations.
  2. It is an end-to-end network: the input is an image and the output is also an image (the segmentation map).
  3. For accurate localization, features from the contracting path (after copy and crop) are concatenated with the up-sampled output.
  4. It works well on small datasets, provided it is combined with data augmentation (chiefly elastic deformation of the images); a sketch of such a deformation is given below.
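
The elastic deformation used in the paper can be reproduced with scipy. The sketch below is my own helper, not the paper's code, and the alpha/sigma values are assumptions; it warps an image with a smoothed random displacement field. The same displacement must be applied to the image and its mask so the pair stays aligned.

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=34.0, sigma=4.0, seed=None):
    # image: 2-D array (H, W); alpha scales the displacement, sigma smooths it
    rng = np.random.default_rng(seed)
    h, w = image.shape
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    coords = np.stack([(y + dy).ravel(), (x + dx).ravel()])
    return map_coordinates(image, coords, order=1, mode='reflect').reshape(h, w)

# use the same seed for an image and its mask so both get the same deformation
# img_aug  = elastic_deform(img,  seed=42)
# mask_aug = elastic_deform(mask, seed=42)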

Building a U-Net with Keras

Contracting path (left side): captures context information.

from keras.layers import Input, Lambda, Conv2D, Dropout, MaxPooling2D, Conv2DTranspose, concatenate

# IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS describe the input images and are defined elsewhere
inputs = Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
S = Lambda(lambda x: x / 255)(inputs)  # scale pixel values to [0, 1]

c1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(S)
c1 = Dropout(0.1)(c1)
c1 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(c1)
p1 = MaxPooling2D((2, 2))(c1)

c2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(p1)
c2 = Dropout(0.1)(c2)
c2 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(c2)
p2 = MaxPooling2D((2, 2))(c2)

c3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(p2)
c3 = Dropout(0.2)(c3)
c3 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(c3)
p3 = MaxPooling2D((2, 2))(c3)

c4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(p3)
c4 = Dropout(0.2)(c4)
c4 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(c4)
p4 = MaxPooling2D((2, 2))(c4)

c5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(p4)
c5 = Dropout(0.3)(c5)
c5 = Conv2D(256, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(c5)

Expansive path (right side): enables precise localization.

u6 = Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c5)
u6 = concatenate([u6, c4])
c6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(u6)
c6 = Dropout(0.2)(c6)
c6 = Conv2D(128, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(c6)

u7 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(c6)
u7 = concatenate([u7, c3])
c7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(u7)
c7 = Dropout(0.2)(c7)
c7 = Conv2D(64, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(c7)

u8 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(c7)
u8 = concatenate([u8, c2])
c8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(u8)
c8 = Dropout(0.1)(c8)
c8 = Conv2D(32, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(c8)  # second conv takes c8, not u8

u9 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = concatenate([u9, c1], axis=3)
c9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(u9)
c9 = Dropout(0.1)(c9)
c9 = Conv2D(16, (3, 3), activation='elu', kernel_initializer='he_normal', padding='same')(c9)

outputs = Conv2D(1, (1, 1), activation='sigmoid')(c9)

The model is now built; a minimal compile-and-train sketch follows.
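
To actually train it, the layers are wrapped in a Model and compiled. This is my own minimal sketch, not from the original post: the Adam optimizer, binary cross-entropy loss, and the X_train/Y_train names are assumptions.

from keras.models import Model

model = Model(inputs=[inputs], outputs=[outputs])
# binary cross-entropy fits the single-channel sigmoid output (assumed loss choice)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

# X_train: (N, IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), Y_train: (N, IMG_HEIGHT, IMG_WIDTH, 1) masks
# model.fit(X_train, Y_train, validation_split=0.1, batch_size=8, epochs=30)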


In my tests, adding Batch Normalization to U-Net is a very effective improvement; it is inserted in the order:
conv --> BN --> ReLU
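
This conv -> BN -> ReLU triplet can be factored into a small helper to avoid repeating the three lines for every block. The conv_bn_relu below is a hypothetical helper of mine; the improved network further down writes the layers out explicitly instead.

from keras import layers

def conv_bn_relu(x, filters, strides=1):
    # 3x3 convolution followed by Batch Normalization and ReLU
    x = layers.Conv2D(filters, (3, 3), strides=strides, padding='same')(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)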

The improved network is as follows:

from keras import models, layers

GAUSSIAN_NOISE = 0.1
# UPSAMPLE_MODE = 'SIMPLE'
UPSAMPLE_MODE = 'DECONV'
# downsampling inside the network
NET_SCALING = None
# downsampling in preprocessing
IMG_SCALING = (1, 1)
# width of the border zeroed out around the prediction (assumed value)
EDGE_CROP = 16

# Build U-Net model
def upsample_conv(filters, kernel_size, strides, padding):
    return layers.Conv2DTranspose(filters, kernel_size, strides=strides, padding=padding)

def upsample_simple(filters, kernel_size, strides, padding):
    return layers.UpSampling2D(strides)

# pick the up-sampling implementation according to UPSAMPLE_MODE
upsample = upsample_conv if UPSAMPLE_MODE == 'DECONV' else upsample_simple

# input tensor; applying GaussianNoise here is an assumption suggested by the GAUSSIAN_NOISE constant
input_img = layers.Input((IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
pp_in_layer = layers.GaussianNoise(GAUSSIAN_NOISE)(input_img)
c1 = layers.Conv2D(16, (3, 3), padding='same')(pp_in_layer)
c1 = layers.BatchNormalization()(c1)
c1 = layers.ReLU()(c1)
c1 = layers.Conv2D(16, (3, 3), padding='same')(c1)
c1 = layers.BatchNormalization()(c1)
c1 = layers.ReLU()(c1)
# p1 = layers.MaxPooling2D((2, 2))(c1)
p1 = layers.Conv2D(16, (3, 3), strides=2, padding='same')(c1)
p1 = layers.BatchNormalization()(p1)
p1 = layers.ReLU()(p1)

c2 = layers.Conv2D(32, (3, 3), padding='same')(p1)
c2 = layers.BatchNormalization()(c2)
c2 = layers.ReLU()(c2)
c2 = layers.Conv2D(32, (3, 3), padding='same')(c2)
c2 = layers.BatchNormalization()(c2)
c2 = layers.ReLU()(c2)
# p2 = layers.MaxPooling2D((2, 2))(c2)
p2 = layers.Conv2D(32, (3, 3), strides=2, padding='same')(c2)
p2 = layers.BatchNormalization()(p2)
p2 = layers.ReLU()(p2)

c3 = layers.Conv2D(64, (3, 3), padding='same')(p2)
c3 = layers.BatchNormalization()(c3)
c3 = layers.ReLU()(c3)
c3 = layers.Conv2D(64, (3, 3), padding='same')(c3)
c3 = layers.BatchNormalization()(c3)
c3 = layers.ReLU()(c3)
# p3 = layers.MaxPooling2D((2, 2))(c3)
p3 = layers.Conv2D(64, (3, 3), strides=2, padding='same')(c3)
p3 = layers.BatchNormalization()(p3)
p3 = layers.ReLU()(p3)

c4 = layers.Conv2D(128, (3, 3), padding='same')(p3)
c4 = layers.BatchNormalization()(c4)
c4 = layers.ReLU()(c4)
c4 = layers.Conv2D(128, (3, 3), padding='same')(c4)
c4 = layers.BatchNormalization()(c4)
c4 = layers.ReLU()(c4)
# p4 = layers.MaxPooling2D(pool_size=(2, 2))(c4)
p4 = layers.Conv2D(128, (3, 3), strides=2, padding='same')(c4)
p4 = layers.BatchNormalization()(p4)
p4 = layers.ReLU()(p4)

c5 = layers.Conv2D(256, (3, 3), padding='same')(p4)
c5 = layers.BatchNormalization()(c5)
c5 = layers.ReLU()(c5)

u6 = upsample(128, (2, 2), strides=(2, 2), padding='same')(c5)
u6 = layers.concatenate([u6, c4])
c6 = layers.Conv2D(128, (3, 3), padding='same')(u6)
c6 = layers.BatchNormalization()(c6)
c6 = layers.ReLU()(c6)
c6 = layers.Conv2D(128, (3, 3), padding='same')(c6)
c6 = layers.BatchNormalization()(c6)
c6 = layers.ReLU()(c6)

u7 = upsample(64, (2, 2), strides=(2, 2), padding='same')(c6)
u7 = layers.concatenate([u7, c3])
c7 = layers.Conv2D(64, (3, 3), padding='same')(u7)
c7 = layers.BatchNormalization()(c7)
c7 = layers.ReLU()(c7)
c7 = layers.Conv2D(64, (3, 3), padding='same')(c7)
c7 = layers.BatchNormalization()(c7)
c7 = layers.ReLU()(c7)

u8 = upsample(32, (2, 2), strides=(2, 2), padding='same')(c7)
u8 = layers.concatenate([u8, c2])
c8 = layers.Conv2D(32, (3, 3), padding='same')(u8)
c8 = layers.BatchNormalization()(c8)
c8 = layers.ReLU()(c8)
c8 = layers.Conv2D(32, (3, 3), padding='same')(c8)
c8 = layers.BatchNormalization()(c8)
c8 = layers.ReLU()(c8)

u9 = upsample(16, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = layers.concatenate([u9, c1], axis=3)
c9 = layers.Conv2D(16, (3, 3), padding='same')(u9)
c9 = layers.BatchNormalization()(c9)
c9 = layers.ReLU()(c9)
c9 = layers.Conv2D(16, (3, 3), padding='same')(c9)
c9 = layers.BatchNormalization()(c9)
c9 = layers.ReLU()(c9)

d = layers.Conv2D(1, (1, 1), activation='sigmoid')(c9)
# crop then zero-pad by the same amount to zero out an EDGE_CROP-wide border of the prediction
d = layers.Cropping2D((EDGE_CROP, EDGE_CROP))(d)
d = layers.ZeroPadding2D((EDGE_CROP, EDGE_CROP))(d)
if NET_SCALING is not None:
    d = layers.UpSampling2D(NET_SCALING)(d)

seg_model = models.Model(inputs=[input_img], outputs=[d])
seg_model.summary()

The segmentation results improve noticeably; one way to quantify this is sketched below.
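
To put a number on the improvement, the Dice coefficient (F1 over pixels) can be tracked during training. This is a minimal sketch of my own; the smooth term and the choice of binary cross-entropy as the loss are assumptions, not part of the original setup.

from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    # 2*|A ∩ B| / (|A| + |B|); smooth avoids division by zero on empty masks
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

seg_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[dice_coef])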

Open questions:

  1. For down-sampling, is pooling or a stride-2 convolution the better choice?
  2. For up-sampling, Keras offers both UpSampling2D and Conv2DTranspose; when should each be used, and what are their pros and cons? (A small comparison is sketched below.)
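
On question 2, the practical difference is that UpSampling2D simply repeats pixels and has no trainable weights, so it is usually followed by a convolution, while Conv2DTranspose learns its own up-sampling kernel at the cost of extra parameters. A tiny sketch of my own that makes the parameter difference visible:

from keras import layers, models

x = layers.Input((64, 64, 128))
deconv = layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(x)  # learned up-sampling
simple = layers.UpSampling2D((2, 2))(x)                                         # fixed nearest-neighbour repeat

print(models.Model(x, deconv).count_params())  # 2*2*128*64 + 64 = 32 832 trainable parameters
print(models.Model(x, simple).count_params())  # 0 parameters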

References:

  1. http://blog.csdn.net/hduxiejun/article/details/71107285
  2. http://blog.csdn.net/qq_18293213/article/details/72423592
