1. Training the CNN Model

Model dimension analysis: every convolutional layer uses zero padding (SAME), so height and width are unchanged through the conv layers and only the depth grows. No pooling layer uses padding (VALID), so each pooling layer halves the height and width while the depth stays the same. Dataset: http://download.tensorflow.org/example_images/flower_photos.tgz

Tensor shape progression: 100×100×3 -> 100×100×32 -> 50×50×32 -> 50×50×64 -> 25×25×64 -> 25×25×128 -> 12×12×128 -> 12×12×128 -> 6×6×128
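
As a quick sanity check on these numbers, here is a minimal sketch of the output-size rules TensorFlow applies (a restatement of the documented padding formulas, not code from the original post): SAME padding gives ceil(in / stride), and VALID gives floor((in - k) / stride) + 1, which is why the odd 25×25 input pools down to 12×12.

import math
def same_out(size, stride):
    #SAME padding: output = ceil(input / stride)
    return math.ceil(size / stride)
def valid_out(size, k, stride):
    #VALID padding: output = floor((input - k) / stride) + 1
    return (size - k) // stride + 1
s = 100
for _ in range(4):          #four conv+pool stages
    s = same_out(s, 1)      #SAME conv, stride 1: size unchanged
    s = valid_out(s, 2, 2)  #VALID 2x2 max pool, stride 2
    print(s)                #prints 50, 25, 12, 6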

The CNN training code is as follows:

from skimage import io,transform
import glob
import os
import tensorflow as tf
import numpy as np
import time
#Dataset directory
path='E:/data/datasets/flower_photos/'
#Where to save the trained model
model_path='E:/data/model/flower/model.ckpt'
#Resize all images to 100*100 (3 channels)
w=100
h=100
c=3
#Read the images and labels (one sub-folder per class)
def read_img(path):
    cate=[path+x for x in os.listdir(path) if os.path.isdir(path+x)]
    imgs=[]
    labels=[]
    for idx,folder in enumerate(cate):
        for im in glob.glob(folder+'/*.jpg'):
            print('reading the images:%s'%(im))
            img=io.imread(im)
            img=transform.resize(img,(w,h))
            imgs.append(img)
            labels.append(idx)
    return np.asarray(imgs,np.float32),np.asarray(labels,np.int32)
data,label=read_img(path)
#Shuffle the data
num_example=data.shape[0]
arr=np.arange(num_example)
np.random.shuffle(arr)
data=data[arr]
label=label[arr]
#Split all data into a training set and a validation set (80/20)
ratio=0.8
s=int(num_example*ratio)  #np.int is deprecated; the built-in int works the same here
x_train=data[:s]
y_train=label[:s]
x_val=data[s:]
y_val=label[s:]
#----------------- Build the network ----------------------
#Placeholders
x=tf.placeholder(tf.float32,shape=[None,w,h,c],name='x')
y_=tf.placeholder(tf.int32,shape=[None,],name='y_')
def inference(input_tensor, train, regularizer):
    with tf.variable_scope('layer1-conv1'):
        conv1_weights = tf.get_variable("weight",[5,5,3,32],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0))
        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
    with tf.name_scope("layer2-pool1"):
        pool1 = tf.nn.max_pool(relu1, ksize = [1,2,2,1],strides=[1,2,2,1],padding="VALID")
    with tf.variable_scope("layer3-conv2"):
        conv2_weights = tf.get_variable("weight",[5,5,32,64],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0))
        conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))
    with tf.name_scope("layer4-pool2"):
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    with tf.variable_scope("layer5-conv3"):
        conv3_weights = tf.get_variable("weight",[3,3,64,128],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv3_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0))
        conv3 = tf.nn.conv2d(pool2, conv3_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu3 = tf.nn.relu(tf.nn.bias_add(conv3, conv3_biases))
    with tf.name_scope("layer6-pool3"):
        pool3 = tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    with tf.variable_scope("layer7-conv4"):
        conv4_weights = tf.get_variable("weight",[3,3,128,128],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv4_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0))
        conv4 = tf.nn.conv2d(pool3, conv4_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu4 = tf.nn.relu(tf.nn.bias_add(conv4, conv4_biases))
    with tf.name_scope("layer8-pool4"):
        pool4 = tf.nn.max_pool(relu4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
        nodes = 6*6*128
        reshaped = tf.reshape(pool4,[-1,nodes])
    with tf.variable_scope('layer9-fc1'):
        fc1_weights = tf.get_variable("weight", [nodes, 1024],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer is not None: tf.add_to_collection('losses', regularizer(fc1_weights))
        fc1_biases = tf.get_variable("bias", [1024], initializer=tf.constant_initializer(0.1))
        fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_weights) + fc1_biases)
        if train: fc1 = tf.nn.dropout(fc1, 0.5)
    with tf.variable_scope('layer10-fc2'):
        fc2_weights = tf.get_variable("weight", [1024, 512],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer is not None: tf.add_to_collection('losses', regularizer(fc2_weights))
        fc2_biases = tf.get_variable("bias", [512], initializer=tf.constant_initializer(0.1))
        fc2 = tf.nn.relu(tf.matmul(fc1, fc2_weights) + fc2_biases)
        if train: fc2 = tf.nn.dropout(fc2, 0.5)
    with tf.variable_scope('layer11-fc3'):
        fc3_weights = tf.get_variable("weight", [512, 5],
                                      initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer is not None: tf.add_to_collection('losses', regularizer(fc3_weights))
        fc3_biases = tf.get_variable("bias", [5], initializer=tf.constant_initializer(0.1))
        logit = tf.matmul(fc2, fc3_weights) + fc3_biases
    return logit
#--------------------------- End of network ---------------------------
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
logits = inference(x,False,regularizer)  #train=False, so dropout stays disabled during training as well
#(Small trick) multiply logits by 1 and name the result, so the output
#tensor can be fetched by name after the model is reloaded
b = tf.constant(value=1,dtype=tf.float32)
logits_eval = tf.multiply(logits,b,name='logits_eval')
#average the per-example cross-entropy and add the collected L2 regularization terms
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_)
loss = tf.reduce_mean(cross_entropy) + tf.add_n(tf.get_collection('losses'))
train_op=tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.int32), y_)
acc= tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#Helper that yields the data in minibatches
def minibatches(inputs=None, targets=None, batch_size=None, shuffle=False):
    assert len(inputs) == len(targets)
    if shuffle:
        indices = np.arange(len(inputs))
        np.random.shuffle(indices)
    for start_idx in range(0, len(inputs) - batch_size + 1, batch_size):
        if shuffle:
            excerpt = indices[start_idx:start_idx + batch_size]
        else:
            excerpt = slice(start_idx, start_idx + batch_size)
        yield inputs[excerpt], targets[excerpt]
#Train and evaluate; n_epoch can be set larger for better accuracy
n_epoch=10
batch_size=64
saver=tf.train.Saver()
sess=tf.Session() 
sess.run(tf.global_variables_initializer())
for epoch in range(n_epoch):
    start_time = time.time()
    #training
    train_loss, train_acc, n_batch = 0, 0, 0
    for x_train_a, y_train_a in minibatches(x_train, y_train, batch_size, shuffle=True):
        _,err,ac=sess.run([train_op,loss,acc], feed_dict={x: x_train_a, y_: y_train_a})
        train_loss += err; train_acc += ac; n_batch += 1
    print("   train loss: %f" % (np.sum(train_loss)/ n_batch))
    print("   train acc: %f" % (np.sum(train_acc)/ n_batch))
    #validation
    val_loss, val_acc, n_batch = 0, 0, 0
    for x_val_a, y_val_a in minibatches(x_val, y_val, batch_size, shuffle=False):
        err, ac = sess.run([loss,acc], feed_dict={x: x_val_a, y_: y_val_a})
        val_loss += err; val_acc += ac; n_batch += 1
    print("   validation loss: %f" % (np.sum(val_loss)/ n_batch))
    print("   validation acc: %f" % (np.sum(val_acc)/ n_batch))
saver.save(sess,model_path)
sess.close()
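
After training, saver.save writes several files alongside model.ckpt (a .meta file with the graph definition, .index and .data-* files with the weights, and a checkpoint index file); the prediction code below restores them via model.ckpt.meta and tf.train.latest_checkpoint.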

2. Using the Model for Prediction

The following code loads the saved model and predicts the flower category for five images:

from skimage import io,transform
import tensorflow as tf
import numpy as np
path1 = "E:/data/datasets/flower_photos/daisy/5547758_eea9edfd54_n.jpg"
path2 = "E:/data/datasets/flower_photos/dandelion/7355522_b66e5d3078_m.jpg"
path3 = "E:/data/datasets/flower_photos/roses/394990940_7af082cf8d_n.jpg"
path4 = "E:/data/datasets/flower_photos/sunflowers/6953297_8576bf4ea3.jpg"
path5 = "E:/data/datasets/flower_photos/tulips/10791227_7168491604.jpg"
flower_dict = {0:'daisy',1:'dandelion',2:'roses',3:'sunflowers',4:'tulips'}
w=100
h=100
c=3
def read_one_image(path):
    img = io.imread(path)
    img = transform.resize(img,(w,h))
    return np.asarray(img)
with tf.Session() as sess:
    data = []
    data1 = read_one_image(path1)
    data2 = read_one_image(path2)
    data3 = read_one_image(path3)
    data4 = read_one_image(path4)
    data5 = read_one_image(path5)
    data.append(data1)
    data.append(data2)
    data.append(data3)
    data.append(data4)
    data.append(data5)
    saver = tf.train.import_meta_graph('E:/data/model/flower/model.ckpt.meta')
    saver.restore(sess,tf.train.latest_checkpoint('E:/data/model/flower/'))
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("x:0")
    feed_dict = {x:data}
    logits = graph.get_tensor_by_name("logits_eval:0")
    classification_result = sess.run(logits,feed_dict)
    #print the raw prediction (logit) matrix
    print(classification_result)
    #index of the largest logit in each row
    output = tf.argmax(classification_result,1).eval()
    print(output)
    #map each index to a flower name via the dictionary
    for i in range(len(output)):
        print("Flower", i+1, "prediction: " + flower_dict[output[i]])

Output:

[[  5.76620245   3.18228579  -3.89464641  -2.81310582   1.40294015]
 [ -1.01490593   3.55570269  -2.76053429   2.93104005  -3.47138596]
 [ -8.05292606  -7.26499033  11.70479774   0.59627819   2.15948296]
 [ -5.12940931   2.18423128  -3.33257103   9.0591135    5.03963232]
 [ -4.25288343  -0.95963973  -2.33347392   1.54485476   5.76069307]]
[0 1 2 3 4]
Flower 1 prediction: daisy
Flower 2 prediction: dandelion
Flower 3 prediction: roses
Flower 4 prediction: sunflowers
Flower 5 prediction: tulips

Comparing the predictions against the five image paths at the top of the prediction code, all five are correct.
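
Note that the printed matrix contains raw logits, not probabilities. If per-class probabilities are wanted, a softmax can be applied to each row; a minimal NumPy sketch (assuming classification_result is the 5x5 logit matrix above):

import numpy as np
def softmax(logits):
    #subtract the row max for numerical stability before exponentiating
    shifted = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)
probs = softmax(np.asarray(classification_result))
print(probs)  #each row sums to 1; the argmax is unchanged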

The model in this post reaches roughly 70% accuracy on the flower dataset, whereas transfer learning with the Inception-v3 model reaches about 95% on the same data. The main reasons are that the CNN here is fairly simple, and the flower dataset is inherently harder to classify than the MNIST handwritten digits; the same model scores noticeably higher on MNIST than on the flowers.
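
For reference, a minimal sketch of the transfer-learning setup mentioned above, using the TensorFlow Hub Inception-v3 feature-vector module; the module URL, input size, and classification head are assumptions for illustration, not the exact configuration behind the 95% figure:

import tensorflow as tf
import tensorflow_hub as hub
#Inception-v3 as a fixed feature extractor; this module expects 299x299 RGB in [0,1]
module = hub.Module("https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1")
images = tf.placeholder(tf.float32, shape=[None, 299, 299, 3])
labels = tf.placeholder(tf.int32, shape=[None])
features = module(images)            #[batch, 2048] bottleneck features
#only this small classification head is trained; the module's weights stay frozen by default
head_logits = tf.layers.dense(features, 5)
head_loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits=head_logits, labels=labels))
head_train_op = tf.train.AdamOptimizer(1e-3).minimize(head_loss)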
