github:https://github.com/MichaelBeechan
CSDN:https://blog.csdn.net/u011344545

Code: https://github.com/MichaelBeechan/Learning_TensorFlow-Kaggle_MNIST — Forks and Stars welcome

Learning_TensorFlow-Kaggle_MNIST

A step-by-step introduction to TensorFlow and neural networks through a project: MNIST handwritten digit recognition.

**TF_Variable: Getting started with TensorFlow**

```python
# -*- coding:utf-8 -*-
"""
Name: Michael Beechan
School: Chongqing University of Technology
Time: 2018.10.4
Description: TensorFlow variable initialization
https://baike.baidu.com/item/TensorFlow/18828108?fr=aladdin
"""
import tensorflow as tf
import numpy as np

# Define variables
w = tf.Variable([[0.5, 1.0]])
x = tf.Variable([[2.0], [1.0]])
# Matrix multiplication
y = tf.matmul(w, x)
print(y)

# Random-value ops
norm = tf.random_normal([2, 3], mean=-1, stddev=4)
c = tf.constant([[1, 2], [3, 4], [5, 6]])
shuff = tf.random_shuffle(c)  # shuffle the rows

sess = tf.Session()
print(sess.run(norm))
print(sess.run(shuff))

# Convert NumPy data into a type TensorFlow can use
a = np.zeros((3, 3))
ta = tf.convert_to_tensor(a)
print(sess.run(ta))

# Create a variable and update it with a for loop
num = tf.Variable(0, name="count")
new_value = tf.add(num, 10)
op = tf.assign(num, new_value)
print(op)

# Initialize global variables
init_op = tf.global_variables_initializer()
# Run the session
with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(num))
    for i in range(5):
        sess.run(op)
        print(sess.run(num))

# Set placeholder values through feed:
# declare the variable without a value, then supply one at run time via feed_dict
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
value_new = tf.multiply(input1, input2)
with tf.Session() as sess:
    print(sess.run(value_new, feed_dict={input1: 23.0, input2: 11.0}))
```

**Kaggle_mnist**

A single-layer neural network using softmax as the activation function, cross-entropy as the loss function, and gradient descent as the optimizer.
Accuracy: around 88%
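Before the full script, here is a minimal NumPy sketch of the two building blocks named above, softmax and cross-entropy (standalone illustrative helpers, not part of the project code):

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability; rows are samples
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y_true):
    # Mean negative log-likelihood of the true class (one-hot y_true)
    return -np.mean(np.sum(y_true * np.log(p), axis=1))

logits = np.array([[2.0, 1.0, 0.1]])
p = softmax(logits)                    # probabilities, rows sum to 1
y_true = np.array([[1.0, 0.0, 0.0]])
print(p)
print(cross_entropy(p, y_true))
```

The loss reduces to `-log(p)` of the true class, so it falls toward 0 as the network assigns more probability to the correct digit.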

```python
# -*- coding:utf-8 -*-
"""
Name: Michael Beechan
School: Chongqing University of Technology
Time: 2018.10.4
Description: Kaggle MNIST handwritten digit recognition (Digit Recognizer)
http://wiki.jikexueyuan.com/project/tensorflow-zh/tutorials/mnist_beginners.html
"""
"""
1. Prepare the data
2. Design the model
3. Implement the code
Each image is a 28*28 = 784 two-dimensional array, so the training and test
data become arrays of shape [42000, 784] and [28000, 784] respectively.
Model:
1) the simplest possible single-layer neural network
2) softmax as the activation function
3) cross-entropy as the loss function
4) gradient descent as the optimizer
"""
# About 88.45% recognition accuracy
import pandas as pd
import numpy as np
import tensorflow as tf

# Load the data
train = pd.read_csv("train.csv")
images = train.iloc[:, 1:].values
# labels_flat = train[[0]].values.ravel()
labels_flat = train.iloc[:, 0].values.ravel()

# Preprocess the inputs: scale pixel values into [0, 1]
images = images.astype(np.float)
images = np.multiply(images, 1.0 / 255.0)
print("Input data shape: (%g, %g)" % images.shape)
images_size = images.shape[1]
images_width = images_height = np.ceil(np.sqrt(images_size)).astype(np.uint8)
print("Image width = {0}\nImage height = {1}".format(images_width, images_height))
x = tf.placeholder('float', shape=[None, images_size])

# Process the labels
labels_count = np.unique(labels_flat).shape[0]
print('Number of classes = {0}'.format(labels_count))
y = tf.placeholder('float', shape=[None, labels_count])

# One-hot encoding for the discrete labels
# (scikit-learn ships a ready-made OneHotEncoder() for this)
def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    # .flat is a 1-D iterator over the array
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels = dense_to_one_hot(labels_flat, labels_count)
labels = labels.astype(np.uint8)
print('Label array shape: ({0[0]}, {0[1]})'.format(labels.shape))

# Split into training and validation sets
VALIDATION_SIZE = 2000
validation_images = images[:VALIDATION_SIZE]
validation_labels = labels[:VALIDATION_SIZE]
train_images = images[VALIDATION_SIZE:]
train_labels = labels[VALIDATION_SIZE:]
batch_size = 100
n_batch = len(train_images) // batch_size

# Build the network
weight = tf.Variable(tf.zeros([784, 10]))
biases = tf.Variable(tf.zeros([10]))
result = tf.matmul(x, weight) + biases
prediction = tf.nn.softmax(result)
# Cross-entropy loss, computed from the raw logits
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=result))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
init = tf.global_variables_initializer()
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(50):
        for batch in range(n_batch):
            batch_x = train_images[batch * batch_size:(batch + 1) * batch_size]
            batch_y = train_labels[batch * batch_size:(batch + 1) * batch_size]
            sess.run(train_step, feed_dict={x: batch_x, y: batch_y})
        accuracy_n = sess.run(accuracy,
                              feed_dict={x: validation_images, y: validation_labels})
        print("Epoch " + str(epoch + 1) + ", accuracy: " + str(accuracy_n))
```
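The `.flat` indexing trick inside `dense_to_one_hot` is compact but easy to misread; the same logic on a toy label vector makes it concrete (standalone sketch, not part of the project code):

```python
import numpy as np

labels_dense = np.array([2, 0, 1])
num_classes = 3
num_labels = labels_dense.shape[0]
# Row r of the flattened matrix starts at flat index r * num_classes,
# so adding the label value gives the flat position of the 1 in each row
index_offset = np.arange(num_labels) * num_classes
one_hot = np.zeros((num_labels, num_classes))
one_hot.flat[index_offset + labels_dense.ravel()] = 1
print(one_hot)  # [[0. 0. 1.], [1. 0. 0.], [0. 1. 0.]]
```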

**CNN_mnist**

Convolutional neural network: conv layer 1 + pooling layer 1 + conv layer 2 + pooling layer 2 + fully connected layer + dropout layer + output layer.
Accuracy: about 0.984 after 20 training epochs.
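With `'SAME'` padding each convolution preserves the spatial size, and each 2×2 max pool with stride 2 halves it, taking the 28×28 input to 14×14 and then 7×7 (the flattened size `7 * 7 * 64` used below). A quick standalone sketch to verify that arithmetic:

```python
import math

def same_conv_pool_size(n, pool=2):
    # 'SAME'-padded stride-1 convolution keeps the spatial size n;
    # a pool x pool max pool with stride pool then halves it (rounding up)
    return math.ceil(n / pool)

size = 28
size = same_conv_pool_size(size)  # after conv1 + pool1: 14
size = same_conv_pool_size(size)  # after conv2 + pool2: 7
print(size * size * 64)           # flattened feature size: 3136
```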

```python
# -*- coding:utf-8 -*-
"""
Name: Michael Beechan
School: Chongqing University of Technology
Time: 2018.10.4
Description: MNIST Digit Recognizer with a CNN
https://www.zhihu.com/question/52668301
"""
# conv1 + pool1 + conv2 + pool2 + fully connected + dropout + output
import numpy as np
import tensorflow as tf
import pandas as pd

# Load the data
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Extract and preprocess the pixels; astype() converts the dtype
x_train = train.iloc[:, 1:].values
x_train = x_train.astype(np.float)
x_train = np.multiply(x_train, 1.0 / 255.0)

# Image width and height
image_size = x_train.shape[1]
images_width = images_height = np.ceil(np.sqrt(image_size)).astype(np.uint8)
print('Sample array shape: (%g, %g)' % x_train.shape)
print('Image dimension: {0}'.format(image_size))
print('Image width: {0}\nheight: {1}'.format(images_width, images_height))

# Labels
labels_flat = train.iloc[:, 0].values.ravel()
# np.unique drops duplicate elements and returns the sorted unique values
labels_count = np.unique(labels_flat).shape[0]

# One-hot encoding
def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels = dense_to_one_hot(labels_flat, labels_count)
labels = labels.astype(np.uint8)
print('Labels: ({0[0]}, {0[1]})'.format(labels.shape))
print('Label example: [{0}] --> {1}'.format(25, labels[25]))

# Divide the training data into training and validation sets
VALIDATION_SIZE = 2000
train_images = x_train[VALIDATION_SIZE:]
train_labels = labels[VALIDATION_SIZE:]
validation_images = x_train[:VALIDATION_SIZE]
validation_labels = labels[:VALIDATION_SIZE]

# Set the batch size and the total number of batches
batch_size = 100
n_batch = len(train_images) // batch_size

# Placeholders: x holds the 784-pixel images, y the 10-class labels
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# Helper functions
def weight_variable(shape):
    # Initialize weights from a truncated normal distribution: values more
    # than two standard deviations from the mean are discarded and redrawn
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # Initialize biases to a small nonzero constant
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# Wrap TensorFlow's 2-D convolution
def conv2D(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Wrap TensorFlow's pooling layer
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# Reshape the input into a 4-D tensor: dims 2 and 3 are width and height,
# dim 4 is the number of color channels
x_image = tf.reshape(x, [-1, 28, 28, 1])

# First conv layer: compute 32 features from 3*3 patches
w_conv1 = weight_variable([3, 3, 1, 32])
b_conv1 = bias_variable([32])
# 28*28 images, stride-1 convolution, 2*2 max pooling:
# after the first pool [28/2, 28/2] = [14, 14], after the second [14/2, 14/2] = [7, 7]
h_conv1 = tf.nn.relu(conv2D(x_image, w_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# Second conv layer: on top of the previous one, generate 64 features
w_conv2 = weight_variable([6, 6, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2D(h_pool1, w_conv2) + b_conv2)
# 2*2 max pool --> [7, 7]
h_pool2 = max_pool_2x2(h_conv2)
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])

# Fully connected layer with 1024 neurons
w_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)

# Dropout
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Output layer: 1024 --> 10
w_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, w_fc2) + b_fc2

# Loss function: cross entropy
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=y_conv))
# Optimizer
train_step_1 = tf.train.AdadeltaOptimizer(learning_rate=0.1).minimize(loss)

# Accuracy
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_conv, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Saver for checkpointing the model
global_step = tf.Variable(0, name='global_step', trainable=False)
saver = tf.train.Saver()

# Initialize variables
init = tf.global_variables_initializer()

# Train
with tf.Session() as sess:
    sess.run(init)
    # saver.restore(sess, "model.ckpt-12")
    # 20 training epochs
    for epoch in range(20):
        for batch in range(n_batch):
            # Fetch one batch of data to train on
            batch_x = train_images[batch * batch_size:(batch + 1) * batch_size]
            batch_y = train_labels[batch * batch_size:(batch + 1) * batch_size]
            # The key step: run one training update
            sess.run(train_step_1, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
        # Evaluate on the validation set after each epoch
        accuracy_n = sess.run(accuracy,
                              feed_dict={x: validation_images, y: validation_labels,
                                         keep_prob: 1.0})
        print("The " + str(epoch + 1) + "th, accuracy is " + str(accuracy_n))
        # Save the trained model
        # global_step.assign(epoch).eval()
        # saver.save(sess, "model.ckpt", global_step=global_step)
```

Next step: improve the model to push the accuracy higher, for example with self-normalizing neural networks.
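Self-normalizing networks replace ReLU with the SELU activation, whose fixed constants keep activations close to zero mean and unit variance across layers. A minimal NumPy sketch with the constants from Klambauer et al.'s "Self-Normalizing Neural Networks" paper (an illustration of the activation only, not the planned implementation):

```python
import numpy as np

# SELU constants from the self-normalizing neural networks paper
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

print(selu(np.array([-2.0, 0.0, 2.0])))
```

Unlike ReLU, the negative branch saturates at `-SCALE * ALPHA` rather than clamping to zero, which is what drives the self-normalizing behavior.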
