Keras: Multi-Class Semantic Segmentation with a U-Net


1 Introduction

U-Net was originally designed for semantic segmentation of medical images and has since been applied in other domains as well. Most applications, however, are binary: the image is split into two gray levels or colors so that the region of interest can be separated from the background.

This article uses the U-Net architecture to perform multi-class semantic segmentation and shows some of the test results. I hope you find it useful.
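Before looking at the training script: the code below regresses the color-coded annotation images directly with an MSE loss. A common alternative for multi-class segmentation is to map each annotation color to a class index and one-hot encode it. Here is a minimal sketch of that conversion; the 3-class `PALETTE` and the `mask_to_onehot` helper are illustrative assumptions, not part of the original code or the actual VOC palette.

```python
import numpy as np

# Hypothetical 3-class palette: annotation color -> class index.
PALETTE = {
    (0, 0, 0): 0,      # background
    (128, 0, 0): 1,    # class 1
    (0, 128, 0): 2,    # class 2
}

def mask_to_onehot(mask, palette):
    """Convert an (H, W, 3) color annotation to an (H, W, n_classes) one-hot map."""
    h, w, _ = mask.shape
    onehot = np.zeros((h, w, len(palette)), dtype=np.float32)
    for color, idx in palette.items():
        # True wherever all three channels match this class color
        onehot[..., idx] = np.all(mask == np.array(color), axis=-1)
    return onehot

# Tiny demo mask: one pixel of class 1, the rest background.
demo = np.zeros((4, 4, 3), dtype=np.uint8)
demo[0, 0] = (128, 0, 0)
oh = mask_to_onehot(demo, PALETTE)
print(oh.shape)           # (4, 4, 3)
print(oh[0, 0].tolist())  # [0.0, 1.0, 0.0]
```

With labels in this form, the final layer would typically use a softmax activation and a categorical cross-entropy loss instead of MSE.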

2 Source Code

(1) Training the model

from __future__ import print_function
import os
import datetime
import numpy as np
from keras.models import Model
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Conv2DTranspose, AveragePooling2D, Dropout, BatchNormalization
from keras.optimizers import Adam
from keras.layers.convolutional import UpSampling2D
from keras.callbacks import ModelCheckpoint
from keras import backend as K
import cv2

PIXEL = 512        # set your image size
BATCH_SIZE = 5
lr = 0.001
EPOCH = 100
X_CHANNEL = 3      # training image channels
Y_CHANNEL = 3      # label image channels (color masks, matching the 3-channel network output)
X_NUM = 422        # number of training images

pathX = 'I:/Pascal VOC Dataset/train1/images/'              # change to your file path
pathY = 'I:/Pascal VOC Dataset/train1/SegmentationObject/'  # change to your file path

# data generator: yields random batches of (image, label) pairs
def generator(pathX, pathY, BATCH_SIZE):
    while 1:
        X_train_files = os.listdir(pathX)
        Y_train_files = os.listdir(pathY)
        a = np.arange(1, X_NUM)
        X = []
        Y = []
        for i in range(BATCH_SIZE):
            index = np.random.choice(a)
            img = cv2.imread(pathX + X_train_files[index], 1)
            img = np.array(img).reshape(PIXEL, PIXEL, X_CHANNEL)
            X.append(img)
            img1 = cv2.imread(pathY + Y_train_files[index], 1)
            img1 = np.array(img1).reshape(PIXEL, PIXEL, Y_CHANNEL)
            Y.append(img1)
        X = np.array(X)
        Y = np.array(Y)
        yield X, Y

# build the U-Net
inputs = Input((PIXEL, PIXEL, 3))
conv1 = Conv2D(8, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
pool1 = AveragePooling2D(pool_size=(2, 2))(conv1)  # 256

conv2 = BatchNormalization(momentum=0.99)(pool1)
conv2 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
conv2 = BatchNormalization(momentum=0.99)(conv2)
conv2 = Conv2D(64, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
conv2 = Dropout(0.02)(conv2)
pool2 = AveragePooling2D(pool_size=(2, 2))(conv2)  # 128

conv3 = BatchNormalization(momentum=0.99)(pool2)
conv3 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
conv3 = BatchNormalization(momentum=0.99)(conv3)
conv3 = Conv2D(128, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
conv3 = Dropout(0.02)(conv3)
pool3 = AveragePooling2D(pool_size=(2, 2))(conv3)  # 64

# NOTE: the conv4/conv5 blocks below are never connected to the output
# graph (pool4 is recomputed from pool3 further down), so Keras will not
# train them; they are kept as in the original listing.
conv4 = BatchNormalization(momentum=0.99)(pool3)
conv4 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
conv4 = BatchNormalization(momentum=0.99)(conv4)
conv4 = Conv2D(256, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
conv4 = Dropout(0.02)(conv4)
pool4 = AveragePooling2D(pool_size=(2, 2))(conv4)

conv5 = BatchNormalization(momentum=0.99)(pool4)
conv5 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
conv5 = BatchNormalization(momentum=0.99)(conv5)
conv5 = Conv2D(512, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
conv5 = Dropout(0.02)(conv5)

conv5 = Conv2D(35, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
drop4 = Dropout(0.02)(conv5)

pool4 = AveragePooling2D(pool_size=(2, 2))(pool3)  # 32
pool5 = AveragePooling2D(pool_size=(2, 2))(pool4)  # 16

conv6 = BatchNormalization(momentum=0.99)(pool5)
conv6 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)

conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
up7 = UpSampling2D(size=(2, 2))(conv7)  # 32
conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up7)
merge7 = concatenate([pool4, conv7], axis=3)

conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
up8 = UpSampling2D(size=(2, 2))(conv8)  # 64
conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up8)
merge8 = concatenate([pool3, conv8], axis=3)

conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
up9 = UpSampling2D(size=(2, 2))(conv9)  # 128
conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up9)
merge9 = concatenate([pool2, conv9], axis=3)

conv10 = Conv2D(32, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
up10 = UpSampling2D(size=(2, 2))(conv10)  # 256
conv10 = Conv2D(32, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up10)

conv11 = Conv2D(16, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv10)
up11 = UpSampling2D(size=(2, 2))(conv11)  # 512
conv11 = Conv2D(8, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up11)

conv12 = Conv2D(3, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv11)

model = Model(inputs=inputs, outputs=conv12)
print(model.summary())
model.compile(optimizer=Adam(lr=1e-3), loss='mse', metrics=['accuracy'])

history = model.fit_generator(generator(pathX, pathY, BATCH_SIZE),
                              steps_per_epoch=600, epochs=EPOCH)
end_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')

# save the trained model
model.save(r'V1_828.h5')

# save the loss history
mse = np.array(history.history['loss'])
np.save(r'V1_828.npy', mse)
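The loss history saved above can be reloaded and plotted to check convergence. A minimal sketch, using a small stand-in array in place of the real `history.history['loss']` values (which depend on your training run):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render to file without a display
import matplotlib.pyplot as plt

# Stand-in loss values; in practice this is the array saved to V1_828.npy above.
loss = np.array([1.0, 0.6, 0.4, 0.3])
np.save('V1_828.npy', loss)

# Reload and plot the curve.
mse = np.load('V1_828.npy')
plt.plot(mse)
plt.xlabel('epoch')
plt.ylabel('MSE loss')
plt.savefig('loss_curve.png')
print(mse.tolist())  # [1.0, 0.6, 0.4, 0.3]
```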

(2) Testing the model

from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2

model = load_model('V1_828.h5')
test_images_path = 'I:/Pascal VOC Dataset/test/test_images/'
test_gt_path = 'I:/Pascal VOC Dataset/test/SegmentationObject/'
pre_path = 'I:/Pascal VOC Dataset/test/pre/'

# load the test images
X = []
for info in os.listdir(test_images_path):
    A = cv2.imread(test_images_path + info)
    X.append(A)

X = np.array(X)
print(X.shape)
Y = model.predict(X)

# load the ground-truth annotations
groundtruth = []
for info in os.listdir(test_gt_path):
    A = cv2.imread(test_gt_path + info)
    groundtruth.append(A)
groundtruth = np.array(groundtruth)

# save every prediction
i = 0
for info in os.listdir(test_images_path):
    cv2.imwrite(pre_path + info, Y[i])
    i += 1

# pick one random sample to display
n = np.random.choice(range(10))
cv2.imwrite('prediction.png', Y[n])
cv2.imwrite('groundtruth.png', groundtruth[n])
# show input, prediction, and ground truth side by side
fig, axs = plt.subplots(1, 3)
axs[0].imshow(np.abs(X[n]))
axs[0].axis('off')
axs[1].imshow(np.abs(Y[n]))
axs[1].axis('off')
axs[2].imshow(np.abs(groundtruth[n]))
axs[2].axis('off')
fig.savefig("imagestest.png")
plt.close()
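Beyond visual inspection, segmentation quality is usually reported as per-pixel accuracy and per-class intersection-over-union (IoU). A minimal sketch, assuming predictions and ground truth have already been converted to integer class maps (the tiny 2x2 arrays below are illustrative stand-ins, not outputs of the model above):

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float(np.mean(pred == gt))

def class_iou(pred, gt, cls):
    """Intersection over union for a single class index."""
    inter = np.logical_and(pred == cls, gt == cls).sum()
    union = np.logical_or(pred == cls, gt == cls).sum()
    return inter / union if union else float('nan')

# Illustrative 2x2 class maps: one mislabeled pixel.
gt   = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(pixel_accuracy(pred, gt))  # 0.75
print(class_iou(pred, gt, 1))    # ≈ 0.667 (2 shared pixels / 3 in the union)
```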

3 Results

Note: from left to right are the predicted image, the original image, and the annotated label. As the results show, segmentation quality on part of the data still leaves room for improvement, mainly because the dataset is relatively complex and the model struggles to learn its underlying patterns.

That concludes this walkthrough of multi-class semantic segmentation with a U-Net in Keras; I hope it serves as a useful reference.

