Notes taken while learning. Many thanks to Morvan (莫烦) for his tutorial videos — I have gained a lot from them. I had already finished Andrew Ng's videos, but there was still plenty about TensorFlow I didn't understand, so it's worth spending some time properly learning TensorFlow and Keras.


Link to Morvan (莫烦)'s video tutorials


TensorFlow basic framework

A three-layer (single hidden layer) neural network built with TensorFlow is shown in the figure below:

In the figure above, the circular and square vertices are called nodes, and the data (multi-dimensional arrays) flowing between nodes are called tensors.

Tensors:

Rank | Description
0th-order tensor = scalar (Scalar): a single value, e.g. 1
1st-order tensor = vector (Vector): e.g. the one-dimensional [1, 2, 3]
2nd-order tensor = matrix (Matrix): e.g. [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
...
nth-order tensor = n-dimensional array

The relationship between tensors and nodes:

If the input tensor has shape 5000×64, there are 5000 training samples with 64 features each, so the input layer must have 64 nodes to receive those features.
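As a quick check (a minimal sketch of my own, using the TF 1.x API from this post), a tensor's rank is simply the number of dimensions in its shape:

import tensorflow as tf

# tensors of rank 0, 1 and 2; the rank is the length of the shape
scalar = tf.constant(1)                       # shape ()
vector = tf.constant([1, 2, 3])               # shape (3,)
matrix = tf.constant([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)

with tf.Session() as sess:
    for t in (scalar, vector, matrix):
        print(t.shape, sess.run(t))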


Example

Create data and train: the goal is to recover the values 0.1 and 0.3 in the formula y = 0.1*x + 0.3, where 0.1 is the weight and 0.3 is the bias.

import tensorflow as tf
import numpy as np

# create data
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

### create tensorflow structure start ###
# parameter definition (shape [1]; the second argument is the lower bound of the value range, the third the upper bound)
Weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))

y = Weights * x_data + biases

# the difference between the predicted and the true values
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)  # learning_rate
train = optimizer.minimize(loss)

# having built the variables above, they still need to be initialized in the NN
init = tf.initialize_all_variables()
### create tensorflow structure end ###

sess = tf.Session()
sess.run(init)  # Very important

for step in range(201):
    sess.run(train)
    if step % 20 == 0:  # print every 20 steps
        print(step, sess.run(Weights), sess.run(biases))
WARNING:tensorflow:From /home/will/anaconda3/envs/py35/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py:118: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
0 [0.36647058] [0.20860067]
20 [0.16722272] [0.25959927]
40 [0.11913844] [0.28849784]
60 [0.10544876] [0.2967253]
80 [0.10155129] [0.2990677]
100 [0.10044166] [0.2997346]
120 [0.10012573] [0.29992446]
140 [0.10003577] [0.29997852]
160 [0.10001019] [0.2999939]
180 [0.10000289] [0.29999828]
200 [0.10000083] [0.2999995]

By the last step the printed Weight is very close to 0.1 and the bias very close to 0.3.
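Regarding the deprecation warning in the output above: from TF 0.12 on, the initializer is tf.global_variables_initializer(). A version-guarded sketch (the same pattern the Dropout and CNN sections below use) avoids the warning:

import tensorflow as tf

# tf.initialize_all_variables() is deprecated; prefer the global initializer on TF >= 0.12
if int(tf.__version__.split('.')[0]) < 1 and int(tf.__version__.split('.')[1]) < 12:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
# then, as before: sess.run(init)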


Session control

import tensorflow as tf

matrix1 = tf.constant([[3, 3]])    # a 1x2 matrix
matrix2 = tf.constant([[2], [2]])  # a 2x1 matrix
product = tf.matmul(matrix1, matrix2)  # matrix multiplication; in NumPy, np.dot(m1, m2) does much the same

# method 1
# sess = tf.Session()
# result = sess.run(product)  # TensorFlow's mindset: every run executes the graph once
# print(result)
# sess.close()  # makes little practical difference, but closing is the tidy thing to do

# method 2
with tf.Session() as sess:  # open tf.Session and name it sess
    result2 = sess.run(product)
    print(result2)

Variables

import tensorflow as tf

state = tf.Variable(0, name='counter')  # only what you define as a Variable is one, unlike in plain Python
# print(state.name)
one = tf.constant(1)

# variable + constant = variable
new_value = tf.add(state, one)
update = tf.assign(state, new_value)  # load new_value into state, so state's current value equals new_value
# state acts as the container that stores the new value

# if you define variables in TensorFlow, the next step is essential
init = tf.initialize_all_variables()  # must have if you define variables

# initializing all the variables still needs session.run to take effect
with tf.Session() as sess:
    sess.run(init)
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))
1
2
3

Whenever you define variables in TensorFlow, remember to initialize them.


Placeholders: feeding in values

A placeholder is TensorFlow's placeholder object: it temporarily holds a slot for data to be supplied later.

If you want to pass data into TensorFlow from the outside, you need tf.placeholder(), and you then pass the data in the form sess.run(***, feed_dict={input: **}) — in other words, you feed the data in.

The difference between a placeholder and a variable, put plainly: a placeholder is the interface through which you feed in your own data, while a variable belongs to the network itself — normally you don't modify it; the network updates it on its own.
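A tiny contrast of the two (a sketch of my own, not from the tutorial): the placeholder gets its value from feed_dict at run time, while the Variable is updated by the training op itself:

import tensorflow as tf

x = tf.placeholder(tf.float32)  # you feed this in from outside
w = tf.Variable(1.0)            # the network updates this by itself
loss = tf.square(w * x - 4.0)
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train, feed_dict={x: 2.0})  # x is fed; w changes on its own
    print(sess.run(w))  # close to 2.0, since 2.0 * 2.0 = 4.0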

import tensorflow as tf

input1 = tf.placeholder(tf.float32)  # in most cases TensorFlow can only handle float32
# you can also fix the shape, e.g. a 2x2 array: input1 = tf.placeholder(tf.float32, [2, 2])
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)

with tf.Session() as sess:
    # placeholder and feed_dict are bound together: use one and you must feed data through the other
    print(sess.run(output, feed_dict={input1: [7.], input2: [2.]}))
[14.]

Adding layers: def add_layer()

Define a function def add_layer() that adds a neural layer. It takes four parameters: the inputs, the input size, the output size, and the activation function; we make the default activation function None.

import tensorflow as tf

def add_layer(inputs, in_size, out_size, activation_function=None):  # None means a linear activation
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)  # in machine learning, biases are recommended not to be 0
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

Building a neural network

import tensorflow as tf
import numpy as np

def add_layer(inputs, in_size, out_size, activation_function=None):  # None means a linear activation
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)  # in machine learning, biases are recommended not to be 0
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]  # 300 points in [-1, 1], i.e. 300 rows, 300 examples
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

xs = tf.placeholder(tf.float32, [None, 1])  # the input has only 1 feature, hence the 1; same for the output
ys = tf.placeholder(tf.float32, [None, 1])  # None means any number of examples is fine

l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function=None)
# this builds a network with 1 input node, 10 hidden nodes and 1 output node

# average over the summed squared errors, i.e. the mean error
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # learning_rate

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # loss also depends on placeholders; whenever placeholders are involved, feed_dict is required
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
0.53408515
0.0043351008
0.003585369
0.0032615305
0.0030836272
0.0029586344
0.0028774897
0.0028177483
0.0027668795
0.0027297884
0.0026985474
0.0026706108
0.0026462309
0.0026261543
0.0026080583
0.002592326
0.0025791153
0.0025680475
0.0025586013
0.0025521303

Training result


Visualizing the output

During optimization, visualizing the results helps point us in the right direction.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt  # Python's go-to module for visualizing results
%matplotlib
# needed in jupyter notebook to see the animated plot

def add_layer(inputs, in_size, out_size, activation_function=None):  # None means a linear activation
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)  # in machine learning, biases are recommended not to be 0
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]  # 300 points in [-1, 1], i.e. 300 rows, 300 examples
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

xs = tf.placeholder(tf.float32, [None, 1])  # the input has only 1 feature, hence the 1; same for the output
ys = tf.placeholder(tf.float32, [None, 1])  # None means any number of examples is fine

l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function=None)
# this builds a network with 1 input node, 10 hidden nodes and 1 output node

# average over the summed squared errors, i.e. the mean error
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # learning_rate

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)
plt.ion()  # without this, the main program pauses after plt.show()
plt.show()

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
        try:
            # remove the first (and only) line segment from the plot, otherwise
            # the lines pile up and the figure becomes a dense tangle
            ax.lines.remove(lines[0])
        except Exception:
            pass
        prediction_value = sess.run(prediction, feed_dict={xs: x_data})
        # plot the prediction as a curve rather than points; lw is the line width
        lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
        plt.pause(0.1)  # pause 0.1s between plots

Note if you run this in a jupyter notebook: without %matplotlib you only get the scatter plot, not even the red curve; with it, a separate window opens for the display.

Optimizers

TensorFlow offers many kinds of optimizers. The most basic, and the most commonly used, is GradientDescentOptimizer.

In essence they are all different strategies for applying the learning_rate to the gradient updates.
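For instance (a sketch of my own on a toy loss, not from the tutorial), swapping optimizers is a one-line change and everything else in the training loop stays the same:

import tensorflow as tf

v = tf.Variable(5.0)
loss = tf.square(v - 2.0)  # toy loss with its minimum at v = 2

# the basic optimizer used throughout this post
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# common TF 1.x alternatives, each a one-line swap:
# train_step = tf.train.MomentumOptimizer(0.1, momentum=0.9).minimize(loss)
# train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
# train_step = tf.train.RMSPropOptimizer(0.001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train_step)
    print(sess.run(v))  # approaches 2.0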

TensorBoard: a great visualization helper

First, the picture: every block in the figure below can be expanded. Making these blocks requires adding some code to the original program to frame the relevant parts, and the blocks can also be named.

import tensorflow as tf

def add_layer(inputs, in_size, out_size, activation_function=None):  # None means a linear activation
    with tf.name_scope('layer'):
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')  # in machine learning, biases are recommended not to be 0
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.matmul(inputs, Weights) + biases
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        return outputs

# define placeholder for inputs to network
# the scope below is what makes x_input and y_input sit inside the "inputs" box in TensorBoard
with tf.name_scope('inputs'):  # mind the indentation
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')

# add hidden layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, activation_function=None)

# the error between prediction and real data
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]), name='loss')
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # learning_rate

init = tf.initialize_all_variables()
sess = tf.Session()
# the graph gathers all the information defined above and writes it into a folder, here logs/
writer = tf.summary.FileWriter("logs/", sess.graph)
# important step
sess.run(init)

You can then find the logs folder under /home/username. Open a terminal, enter tensorboard --logdir='logs/', and you will see:

will@will-450R5G-450R5U:~$ tensorboard --logdir='logs/'
TensorBoard 1.10.0 at http://will-450R5G-450R5U:6006 (Press CTRL+C to quit)

Hold Ctrl and click the URL to open the TensorBoard visualization helper and inspect the current network structure.


If you repeatedly run the same code in a jupyter notebook to generate the visualization, you may get an error. In that case, restart the jupyter notebook kernel to clear its cached state (simplest is to restart the whole notebook), empty the logs/ directory, then reopen the notebook and run the code again.
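One workaround that I believe also helps here (an assumption on my part, not from the tutorial): call tf.reset_default_graph() at the top of the cell, so re-running it doesn't keep piling duplicate nodes into the same default graph:

import tensorflow as tf

tf.reset_default_graph()  # start from an empty default graph on every re-run of the cell

with tf.name_scope('inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')

sess = tf.Session()
writer = tf.summary.FileWriter("logs/", sess.graph)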


Visualizing Weights, biases, loss and other data

After running this code I could only see the graph tab; the histogram and event tabs stayed empty. Strange — I don't know why; if anyone knows, please enlighten me.


from __future__ import print_function
import tensorflow as tf
import numpy as np

# one extra parameter here: n_layer, the name of the added layer
def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
    # add one more layer and return the output of this layer
    layer_name = 'layer%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
            # visualize how Weights changes; the first argument is the name, the second the input
            tf.summary.histogram(layer_name + '/weights', Weights)
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
            # visualize how biases changes
            tf.summary.histogram(layer_name + '/biases', biases)
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        # visualize how outputs changes
        tf.summary.histogram(layer_name + '/outputs', outputs)
        return outputs

# Make up some real data
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

# define placeholder for inputs to network
with tf.name_scope('inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')

# add hidden layer
l1 = add_layer(xs, 1, 10, n_layer=1, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, n_layer=2, activation_function=None)

# the error between prediction and real data
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
    # visualize how loss changes — this one matters; logged with scalar it shows up
    # under events, not histograms. You could also log it as a histogram,
    # using the same code pattern as for the variables above
    tf.summary.scalar('loss', loss)
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()
# merge all the training summaries; tf.summary.merge_all() bundles every summary together
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("logs/", sess.graph)
init = tf.global_variables_initializer()
sess.run(init)

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # merged must also be run, or nothing gets recorded
        result = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(result, i)

Advanced topics

Classification

This needs the MNIST dataset, which may require a proxy to download; at first I didn't know that and kept getting errors. The dataset is small, so it's worth finding a copy and running this — quite rewarding.
Dataset download page: http://yann.lecun.com/exdb/mnist/
Running the code actually downloads it automatically, so there is no need to download by hand; if you do download manually, put the archives in the /home/user/MNIST_data/ directory.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# number 1 to 10 data
# the line below downloads the dataset for you
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def add_layer(inputs, in_size, out_size, activation_function=None):
    # add one more layer and return the output of this layer
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs})
    # now compute the error between the true and the predicted values
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys})
    return result

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 784])  # 784 pixels, 28x28
ys = tf.placeholder(tf.float32, [None, 10])

# add output layer
prediction = add_layer(xs, 784, 10, activation_function=tf.nn.softmax)

# the error between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))  # loss
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
# important step
sess.run(tf.initialize_all_variables())

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)  # draw 100 samples from the downloaded database
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys})
    if i % 50 == 0:
        print(compute_accuracy(mnist.test.images, mnist.test.labels))

I didn't run this section; I had run something similar for Andrew Ng's assignments before, recognizing digit hand gestures.

Dropout for overfitting

"""
Please note, this code is only for python 3+. If you are using python 2+, please modify the code accordingly.
"""
import tensorflow as tf
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer# load data
digits = load_digits()
X = digits.data
y = digits.target
y = LabelBinarizer().fit_transform(y) #使y的标签成为one-hot vector,在对应位置上显示1表示对应的数字
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3) #拆分数据集def add_layer(inputs, in_size, out_size, layer_name, activation_function=None, ):# add one more layer and return the output of this layerWeights = tf.Variable(tf.random_normal([in_size, out_size]))biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, )Wx_plus_b = tf.matmul(inputs, Weights) + biases# here to dropout,dropout操作Wx_plus_b = tf.nn.dropout(Wx_plus_b, keep_prob)if activation_function is None:outputs = Wx_plus_belse:outputs = activation_function(Wx_plus_b, )tf.summary.histogram(layer_name + '/outputs', outputs)return outputs# define placeholder for inputs to network
keep_prob = tf.placeholder(tf.float32) #dropout操作
xs = tf.placeholder(tf.float32, [None, 64])  # 8x8
ys = tf.placeholder(tf.float32, [None, 10])# add output layer
l1 = add_layer(xs, 64, 50, 'l1', activation_function=tf.nn.tanh)
prediction = add_layer(l1, 50, 10, 'l2', activation_function=tf.nn.softmax)# the loss between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),reduction_indices=[1]))  # loss
tf.summary.scalar('loss', cross_entropy)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)sess = tf.Session()
merged = tf.summary.merge_all()
# summary writer goes in here
train_writer = tf.summary.FileWriter("logs/train", sess.graph)
test_writer = tf.summary.FileWriter("logs/test", sess.graph)# tf.initialize_all_variables() no long valid from
# 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:init = tf.initialize_all_variables()
else:init = tf.global_variables_initializer()
sess.run(init)
for i in range(500):# here to determine the keeping probability,0.5表示保留50%的节点,0.6就保留60%sess.run(train_step, feed_dict={xs: X_train, ys: y_train, keep_prob: 0.5})if i % 50 == 0:# record loss #这里的keep_prob要设置成1,因为这里是要记录数据,所以要记录全部的数据train_result = sess.run(merged, feed_dict={xs: X_train, ys: y_train, keep_prob: 1}) test_result = sess.run(merged, feed_dict={xs: X_test, ys: y_test, keep_prob: 1})train_writer.add_summary(train_result, i)
test_writer.add_summary(test_result, i)

The generated logs/ folder contains two subfolders, test and train. Enter tensorboard --logdir='logs/' in the terminal, open the URL, and you can inspect the effect of dropout. Oddly though, the test run doesn't seem to have worked; only the train curve shows up.


CNN

Key content: building the convolutional and pooling layers

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# number 1 to 10 data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1})
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1})
    return result

def weight_variable(shape):
    # shape gives the generated tensor's dimensions; mean is the mean, stddev the standard deviation
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)  # create a constant
    return tf.Variable(initial)

def conv2d(x, W):
    # stride = [1, x_movement, y_movement, 1]
    # must have strides[0] = strides[3] = 1, i.e. the first and fourth entries
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # stride = [1, x_movement, y_movement, 1]
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 784])  # 28x28
ys = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)

# -1 means we don't fix the number of input images in this dimension for now
# the trailing 1 is the number of channels: our images are grayscale, so 1 channel; an RGB image would have 3
x_image = tf.reshape(xs, [-1, 28, 28, 1])
# print(x_image.shape)  # [n_samples, 28, 28, 1]

## conv1 layer ##
# convolution patch/kernel 5x5
# in size 1: the image's depth, 1 channel since it is grayscale
# out size 32: the output is 32 feature maps, i.e. 32 filters
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)  # output size 28*28*32
h_pool1 = max_pool_2x2(h_conv1)                           # output size 14*14*32

## conv2 layer ##
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)  # output size 14*14*64
h_pool2 = max_pool_2x2(h_conv2)                           # output size 7*7*64

## func1 layer ##
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])  # [n_samples, 7, 7, 64] ->> [n_samples, 7*7*64]
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

## func2 layer ##
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# the error between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))  # loss
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)  # Adam optimizer here instead of Gradient Descent

sess = tf.Session()
# important step
# tf.initialize_all_variables() no longer valid from
# 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess.run(init)

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
    if i % 50 == 0:
        print(compute_accuracy(
            mnist.test.images[:1000], mnist.test.labels[:1000]))
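As a side note, the layer sizes in the comments above can be double-checked (a small sketch of my own arithmetic): with SAME padding the spatial output size is ceil(n / stride), so stride-1 convolutions preserve the size and the stride-2 pools halve it:

import math

def same_out(n, stride):
    # spatial output size under SAME padding
    return math.ceil(n / stride)

n = 28
n = same_out(n, 1)  # conv1, stride 1 -> 28
n = same_out(n, 2)  # pool1, stride 2 -> 14
n = same_out(n, 1)  # conv2, stride 1 -> 14
n = same_out(n, 2)  # pool2, stride 2 -> 7
print(n, 7 * 7 * 64)  # 7 3136: matches the flattened fc1 input above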

Run output:

Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
0.124
0.736
0.824
0.88
0.899
0.908
0.92
0.932
0.931
0.94
0.949
0.949
0.944
0.958
0.955
0.956
0.96
0.963
0.959
0.965

The accuracy on the test set is already quite high.

Saver: saving and restoring

How to save parameters (at present TF can only save the parameters, not the whole network structure):

import tensorflow as tf

## Save to file
# remember to define the same dtype and shape when restoring
W = tf.Variable([[1, 2, 3], [3, 4, 5]], dtype=tf.float32, name="weights")
b = tf.Variable([[1, 2, 3]], dtype=tf.float32, name="biases")  # defining the dtype matters

init = tf.global_variables_initializer()
saver = tf.train.Saver()  # saves the variables

with tf.Session() as sess:
    sess.run(init)
    save_path = saver.save(sess, "my_net/save_net.ckpt")  # the second argument is the save path
    print("Save to path:", save_path)

How to restore:

import tensorflow as tf
import numpy as np

# restore variables
# redefine the same shape and same dtype for your variables
# i.e. reloading parameters requires redefining variables of the same shape and type before loading
W = tf.Variable(np.arange(6).reshape((2, 3)), dtype=tf.float32, name="weights")
b = tf.Variable(np.arange(3).reshape((1, 3)), dtype=tf.float32, name="biases")

# no init step needed
saver = tf.train.Saver()
with tf.Session() as sess:
    # restore the variables
    saver.restore(sess, "my_net/save_net.ckpt")
    print("weights:", sess.run(W))
    print("biases:", sess.run(b))

If you save and restore in the same jupyter notebook window, you get an error; run them separately and it works.
Output:

INFO:tensorflow:Restoring parameters from my_net/save_net.ckpt
weights: [[1. 2. 3.]
 [3. 4. 5.]]
biases: [[1. 2. 3.]]
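A small extension beyond what the tutorial covers (so treat the details as my own sketch): Saver can also write numbered checkpoints during training via the global_step argument, keeping only the most recent few:

import tensorflow as tf

W = tf.Variable(tf.zeros([1]), name="weights")
saver = tf.train.Saver(max_to_keep=5)  # keep only the 5 most recent checkpoints

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        if step % 20 == 0:
            # writes my_net/save_net.ckpt-0, -20, -40, ...
            saver.save(sess, "my_net/save_net.ckpt", global_step=step)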

Transfer learning

"""
This is a simple example of transfer learning using VGG.
Fine tune a CNN from a classifier to regressor.
Generate some fake data for describing cat and tiger length.
Fake length setting:
Cat - Normal distribution (40, 8)
Tiger - Normal distribution (100, 30)
The VGG model and parameters are adopted from:
https://github.com/machrisaa/tensorflow-vgg
Learn more, visit my tutorial site: [莫烦Python](https://morvanzhou.github.io)
"""from urllib.request import urlretrieve
import os
import numpy as np
import tensorflow as tf
import skimage.io
import skimage.transform
import matplotlib.pyplot as pltdef download():     # download tiger and kittycat imagecategories = ['tiger', 'kittycat']for category in categories:os.makedirs('./for_transfer_learning/data/%s' % category, exist_ok=True)with open('./for_transfer_learning/imagenet_%s.txt' % category, 'r') as file:urls = file.readlines()n_urls = len(urls)for i, url in enumerate(urls):try:urlretrieve(url.strip(), './for_transfer_learning/data/%s/%s' % (category, url.strip().split('/')[-1]))print('%s %i/%i' % (category, i, n_urls))except:print('%s %i/%i' % (category, i, n_urls), 'no image')def load_img(path):img = skimage.io.imread(path)img = img / 255.0# print "Original Image Shape: ", img.shape# we crop image from centershort_edge = min(img.shape[:2])yy = int((img.shape[0] - short_edge) / 2)xx = int((img.shape[1] - short_edge) / 2)crop_img = img[yy: yy + short_edge, xx: xx + short_edge]# resize to 224, 224resized_img = skimage.transform.resize(crop_img, (224, 224))[None, :, :, :]   # shape [1, 224, 224, 3]return resized_imgdef load_data():imgs = {'tiger': [], 'kittycat': []}for k in imgs.keys():dir = './for_transfer_learning/data/' + kfor file in os.listdir(dir):if not file.lower().endswith('.jpg'):continuetry:resized_img = load_img(os.path.join(dir, file))except OSError:continueimgs[k].append(resized_img)    # [1, height, width, depth] * nif len(imgs[k]) == 400:        # only use 400 imgs to reduce my memory loadbreak# fake length data for tiger and cattigers_y = np.maximum(20, np.random.randn(len(imgs['tiger']), 1) * 30 + 100)cat_y = np.maximum(10, np.random.randn(len(imgs['kittycat']), 1) * 8 + 40)return imgs['tiger'], imgs['kittycat'], tigers_y, cat_yclass Vgg16:vgg_mean = [103.939, 116.779, 123.68]def __init__(self, vgg16_npy_path=None, restore_from=None):# pre-trained parameterstry:self.data_dict = np.load(vgg16_npy_path, encoding='latin1').item()except FileNotFoundError:print('Please download VGG16 parameters from here https://mega.nz/#!YU1FWJrA!O1ywiCS2IiOlUCtCpI6HTJOMrneN-Qdv3ywQP5poecM\nOr from my Baidu Cloud: https://pan.baidu.com/s/1Spps1Wy0bvrQHH2IMkRfpg')self.tfx = tf.placeholder(tf.float32, [None, 224, 224, 3])self.tfy = tf.placeholder(tf.float32, [None, 1])# Convert RGB to BGRred, green, blue = tf.split(axis=3, num_or_size_splits=3, value=self.tfx * 255.0)bgr = tf.concat(axis=3, values=[blue - self.vgg_mean[0],green - self.vgg_mean[1],red - self.vgg_mean[2],])# pre-trained VGG layers are fixed in fine-tuneconv1_1 = self.conv_layer(bgr, "conv1_1")conv1_2 = self.conv_layer(conv1_1, "conv1_2")pool1 = self.max_pool(conv1_2, 'pool1')conv2_1 = self.conv_layer(pool1, "conv2_1")conv2_2 = self.conv_layer(conv2_1, "conv2_2")pool2 = self.max_pool(conv2_2, 'pool2')conv3_1 = self.conv_layer(pool2, "conv3_1")conv3_2 = self.conv_layer(conv3_1, "conv3_2")conv3_3 = self.conv_layer(conv3_2, "conv3_3")pool3 = self.max_pool(conv3_3, 'pool3')conv4_1 = self.conv_layer(pool3, "conv4_1")conv4_2 = self.conv_layer(conv4_1, "conv4_2")conv4_3 = self.conv_layer(conv4_2, "conv4_3")pool4 = self.max_pool(conv4_3, 'pool4')conv5_1 = self.conv_layer(pool4, "conv5_1")conv5_2 = self.conv_layer(conv5_1, "conv5_2")conv5_3 = self.conv_layer(conv5_2, "conv5_3")pool5 = self.max_pool(conv5_3, 'pool5')# detach original VGG fc layers and# reconstruct your own fc layers serve for your own purposeself.flatten = tf.reshape(pool5, [-1, 7*7*512])self.fc6 = tf.layers.dense(self.flatten, 256, tf.nn.relu, name='fc6')self.out = tf.layers.dense(self.fc6, 1, name='out')self.sess = tf.Session()if restore_from:  saver = 
tf.train.Saver()#初始restore_from为None,训练完之后就可以使用相应的参数保存文件,这是路径saver.restore(self.sess, restore_from) else:   # training graphself.loss = tf.losses.mean_squared_error(labels=self.tfy, predictions=self.out)self.train_op = tf.train.RMSPropOptimizer(0.001).minimize(self.loss)self.sess.run(tf.global_variables_initializer())def max_pool(self, bottom, name):return tf.nn.max_pool(bottom, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name=name)def conv_layer(self, bottom, name):with tf.variable_scope(name):   # CNN's filter is constant, NOT Variable that can be trainedconv = tf.nn.conv2d(bottom, self.data_dict[name][0], [1, 1, 1, 1], padding='SAME')lout = tf.nn.relu(tf.nn.bias_add(conv, self.data_dict[name][1]))return loutdef train(self, x, y):loss, _ = self.sess.run([self.loss, self.train_op], {self.tfx: x, self.tfy: y})return lossdef predict(self, paths):fig, axs = plt.subplots(1, 2)for i, path in enumerate(paths):x = load_img(path)length = self.sess.run(self.out, {self.tfx: x})axs[i].imshow(x[0])axs[i].set_title('Len: %.1f cm' % length)axs[i].set_xticks(()); axs[i].set_yticks(())plt.show()def save(self, path='./for_transfer_learning/model/transfer_learn'):saver = tf.train.Saver()saver.save(self.sess, path, write_meta_graph=False)def train():tigers_x, cats_x, tigers_y, cats_y = load_data()# plot fake length distributionplt.hist(tigers_y, bins=20, label='Tigers')plt.hist(cats_y, bins=10, label='Cats')plt.legend()plt.xlabel('length')plt.show()xs = np.concatenate(tigers_x + cats_x, axis=0)ys = np.concatenate((tigers_y, cats_y), axis=0)vgg = Vgg16(vgg16_npy_path='./for_transfer_learning/vgg16.npy')print('Net built')for i in range(100):b_idx = np.random.randint(0, len(xs), 6)train_loss = vgg.train(xs[b_idx], ys[b_idx])print(i, 'train loss: ', train_loss)vgg.save('./for_transfer_learning/model/transfer_learn')      # save learned fc layersdef eval():vgg = Vgg16(vgg16_npy_path='./for_transfer_learning/vgg16.npy',restore_from='./for_transfer_learning/model/transfer_learn')vgg.predict(['./for_transfer_learning/data/kittycat/000129037.jpg', './for_transfer_learning/data/tiger/391412.jpg'])if __name__ == '__main__':# download()# train()eval()

First run download() to fetch all the images. Only the last two layers — the fully connected layer and the output layer — need training, so run train(); finally, run eval() to test the model on sample images.
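Put together, my reading of the workflow in the script above is to enable one stage at a time in its __main__ guard, re-running the script for each step:

# in the __main__ guard of the script above:
if __name__ == '__main__':
    download()   # step 1: fetch the tiger/kittycat images listed in the imagenet_*.txt files
    # train()    # step 2: fine-tune only the new fc layers on top of the frozen VGG16 convs
    # eval()     # step 3: restore the fine-tuned weights and predict on two sample images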
