I. Mind the Bazel version when building TensorFlow:

Check the required Bazel version in /tensorflow/tensorflow/configure.py.

https://github.com/tensorflow/tensorflow

https://github.com/bazelbuild/bazel/releases?after=0.26.1

https://tensorflow.google.cn/
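As a quick sanity check before building, the pinned Bazel versions can be read out of configure.py (a minimal sketch; recent checkouts declare constants such as _TF_MIN_BAZEL_VERSION, so adjust the path and pattern if your branch differs):

import re

# print every line of configure.py that mentions a Bazel version constraint
with open('tensorflow/configure.py') as f:
    for line in f:
        if re.search(r'BAZEL_VERSION', line):
            print(line.strip())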

II. Basic knowledge points

1. Printing information about trainable variables

"""
slim.model_analyzer.analyze_vars打印出与训练变量相关的信息
"""
import tensorflow as tf
import tensorflow.contrib.slim as slim
x1=tf.Variable(tf.constant(1,shape=[1],dtype=tf.float32,name='x1'))
x2=tf.Variable(tf.random_normal(shape=[2,1],dtype=tf.float32,name='x2'))
y=tf.trainable_variables()
for i in y:print(6666)print(i)
slim.model_analyzer.analyze_vars(y,print_info=True)
print(88888888)

2. Concatenation with tf.concat

"""
tf.concat拼接
"""
import tensorflow as tf
t1=tf.constant([[1,2,3],[4,5,6]])
t2=tf.constant([[7,8,9],[10,11,12]])
t3=tf.concat([t1,t2],0)
t4=tf.concat([t1,t2],1)
print('t1={}'.format(t1))
print('t2={}'.format(t2))
print('t3={}'.format(t3))
print('t4={}'.format(t4))

"""
tf.concat拼接
"""
import tensorflow as tf
t1=tf.constant([[[1,2,3],[4,5,6]]])
t2=tf.constant([[[7,8,9],[10,11,12]]])
t3=tf.concat([t1,t2],0)
t4=tf.concat([t1,t2],1)
t5=tf.concat([t1,t2],-1)
print('t1={}'.format(t1))
print('t2={}'.format(t2))
print('t3={}'.format(t3))
print('t4={}'.format(t4))
print('t5={}'.format(t5))

3. Invoking TensorBoard

Graphs tab: visualizes the computation graph, written out with

tf.summary.FileWriter(path, sess.graph)

import tensorflow as tf

input1 = tf.constant([1.0, 2.0, 3.0], name='input_1')
input2 = tf.constant([2.0, 5.0, 8.0], name='input_2')
output = tf.add(input1, input2, name='add')
with tf.Session() as sess:
    writer = tf.summary.FileWriter('./data/', sess.graph)
    sess.run(tf.global_variables_initializer())
    print(sess.run(output))
writer.close()

The GRAPHS tab shows the network structure.
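After the script runs, point TensorBoard at the same log directory and open the GRAPHS tab:

tensorboard --logdir=./data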

4. 'SAME' and 'VALID' in pooling and convolution

One-dimensional case:

  • "VALID" = without padding:

       inputs:         1  2  3  4  5  6  7  8  9  10 11 (12 13)|________________|                dropped|_________________|
  • valid采用丢弃

  • "SAME" = with zero padding:

               pad|                                      |padinputs:      0 |1  2  3  4  5  6  7  8  9  10 11 12 13|0  0|________________||_________________||________________|

SAME rounds the output size up (ceil).

For convolution:

  • For the SAME padding, the output height and width are computed as:

out_height = ceil(float(in_height) / float(strides[1]))

out_width = ceil(float(in_width) / float(strides[2]))

And

  • For the VALID padding, the output height and width are computed as:

out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))

out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
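A quick numeric check of the two formulas (a minimal sketch; the sizes are chosen arbitrarily): a 5×5 input with a 3×3 kernel at stride 2 gives ceil(5/2) = 3 for SAME and ceil((5-3+1)/2) = 2 for VALID.

import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(25, dtype=np.float32).reshape([1, 5, 5, 1]))
k = tf.ones([3, 3, 1, 1])
same = tf.nn.conv2d(x, k, strides=[1, 2, 2, 1], padding='SAME')
valid = tf.nn.conv2d(x, k, strides=[1, 2, 2, 1], padding='VALID')
print(same.shape)    # (1, 3, 3, 1)
print(valid.shape)   # (1, 2, 2, 1)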

5. tf.variable_scope and tf.name_scope

tf.variable_scope prefixes variable names with the scope name, for variables created both with tf.get_variable and with tf.Variable, so variables with the same name can coexist in different scopes.

tf.name_scope does the same, but only for tf.Variable; tf.get_variable ignores name_scope.

import tensorflow as tf

with tf.variable_scope('V1'):
    a1 = tf.get_variable(name='a1', shape=[1], initializer=tf.constant_initializer(1))
    a2 = tf.Variable(tf.random_normal(shape=[2, 3], mean=0, stddev=1), name='a2')
with tf.variable_scope('V2'):
    a3 = tf.get_variable(name='a1', shape=[1], initializer=tf.constant_initializer(1))
    a4 = tf.Variable(tf.random_normal(shape=[2, 3], mean=0, stddev=1), name='a2')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(a1.name)  # V1/a1:0
    print(a2.name)  # V1/a2:0
    print(a3.name)  # V2/a1:0
    print(a4.name)  # V2/a2:0

import tensorflow as tf

with tf.name_scope('V1'):
    a1 = tf.get_variable(name='a1', shape=[1], initializer=tf.constant_initializer(1))
    a2 = tf.Variable(tf.random_normal(shape=[2, 3], mean=0, stddev=1), name='a2')
with tf.name_scope('V2'):
    # ValueError: tf.get_variable ignores name_scope, so this tries to create
    # a second top-level variable named 'a1'
    a3 = tf.get_variable(name='a1', shape=[1], initializer=tf.constant_initializer(1))
    a4 = tf.Variable(tf.random_normal(shape=[2, 3], mean=0, stddev=1), name='a2')

import tensorflow as tf

with tf.name_scope('V1'):
    # a1 = tf.get_variable(name='a1', shape=[1], initializer=tf.constant_initializer(1))
    a2 = tf.Variable(tf.random_normal(shape=[2, 3], mean=0, stddev=1), name='a2')
with tf.name_scope('V2'):
    # a3 = tf.get_variable(name='a1', shape=[1], initializer=tf.constant_initializer(1))
    a4 = tf.Variable(tf.random_normal(shape=[2, 3], mean=0, stddev=1), name='a2')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(a2.name)  # V1/a2:0 -- name_scope does prefix tf.Variable
    print(a4.name)  # V2/a2:0
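The main reason tf.variable_scope exists is variable sharing: with reuse=True, tf.get_variable returns the variable that already exists instead of creating a new one. A minimal sketch:

import tensorflow as tf

with tf.variable_scope('shared'):
    v = tf.get_variable('w', shape=[1], initializer=tf.zeros_initializer())
with tf.variable_scope('shared', reuse=True):
    v2 = tf.get_variable('w')   # returns the existing 'shared/w' variable
print(v is v2)   # True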

6. tf.pad, the 2-D case

import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
# 2 rows above, 1 row below, 3 columns left, 2 columns right
paddings = tf.constant([[2, 1], [3, 2]])
# CONSTANT pads with zeros
t_pad_constant = tf.pad(t, paddings, mode='CONSTANT')
with tf.Session() as sess:
    print(sess.run(t_pad_constant))

import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
# 1 row above, 1 row below, 2 columns left, 2 columns right
paddings = tf.constant([[1, 1], [2, 2]])
# CONSTANT pads with zeros
t_pad_constant = tf.pad(t, paddings, mode='CONSTANT')
# REFLECT mirrors without repeating the edge row/column
t_pad_reflect = tf.pad(t, paddings, mode='REFLECT')
with tf.Session() as sess:
    print(sess.run(t_pad_constant))
    print(sess.run(t_pad_reflect))

import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
# 1 row above, 1 row below, 2 columns left, 2 columns right
paddings = tf.constant([[1, 1], [2, 2]])
# CONSTANT pads with zeros
t_pad_constant = tf.pad(t, paddings, mode='CONSTANT')
# REFLECT mirrors without repeating the edge row/column
t_pad_reflect = tf.pad(t, paddings, mode='REFLECT')
# SYMMETRIC mirrors including the edge row/column
t_pad_symmetric = tf.pad(t, paddings, mode='SYMMETRIC')
with tf.Session() as sess:
    print(sess.run(t_pad_constant))
    print(sess.run(t_pad_reflect))
    print(sess.run(t_pad_symmetric))

The 4-D (NHWC) case:

import tensorflow as tf

t = tf.constant([[[[1, 2, 3], [4, 5, 6]],
                  [[1, 2, 3], [4, 5, 6]],
                  [[1, 2, 3], [4, 5, 6]]]])          # shape (1, 3, 2, 3)
# pad only the height and width axes: 1 row above/below, 1 column left/right
paddings = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])
# CONSTANT pads with zeros
t_pad_constant = tf.pad(t, paddings, mode='CONSTANT')
# REFLECT mirrors without repeating the edge row/column
t_pad_reflect = tf.pad(t, paddings, mode='REFLECT')
# SYMMETRIC mirrors including the edge row/column
t_pad_symmetric = tf.pad(t, paddings, mode='SYMMETRIC')
with tf.Session() as sess:
    print(sess.run(t))
    print(sess.run(t).shape)                 # (1, 3, 2, 3)
    print(sess.run(t_pad_constant))
    print(sess.run(t_pad_constant).shape)    # (1, 5, 4, 3)
    print(sess.run(t_pad_reflect))
    print(sess.run(t_pad_reflect).shape)     # (1, 5, 4, 3)
    print(sess.run(t_pad_symmetric))
    print(sess.run(t_pad_symmetric).shape)   # (1, 5, 4, 3)

7. Learning-rate decay

The value returned by tf.train.exponential_decay is
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)

Here decayed_learning_rate is the learning rate used at each optimization step, learning_rate is the initial learning rate, decay_rate is the decay coefficient, and decay_steps is the decay interval.

In the code below, the returned value is 0.1 × 0.96^(global_step / 100). staircase=True truncates the exponent to an integer, giving the step-shaped blue curve; staircase=False keeps the fractional exponent, giving the smooth red curve.

import tensorflow as tf
import matplotlib.pyplot as plt

learning_rate = 0.1
decay_rate = 0.96
global_steps = 1000
decay_steps = 100

global_ = tf.Variable(tf.constant(0))
c = tf.train.exponential_decay(learning_rate, global_, decay_steps, decay_rate, staircase=True)
d = tf.train.exponential_decay(learning_rate, global_, decay_steps, decay_rate, staircase=False)

T_C = []
F_D = []
with tf.Session() as sess:
    for i in range(global_steps):
        T_c = sess.run(c, feed_dict={global_: i})
        T_C.append(T_c)
        F_d = sess.run(d, feed_dict={global_: i})
        F_D.append(F_d)

plt.figure()
plt.plot(range(global_steps), F_D, 'r-')
plt.plot(range(global_steps), T_C, 'b-')
plt.show()

Output: a plot with the staircase decay as the blue step curve and the continuous decay as the smooth red curve.
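A hand check of one point on each curve, at step 150:

# staircase=True : 0.1 * 0.96 ** (150 // 100) = 0.1 * 0.96       = 0.096
# staircase=False: 0.1 * 0.96 ** (150 / 100)  = 0.1 * 0.96**1.5  ≈ 0.0941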

8. tf.assign

# tf.assign writes a new value into a variable
import tensorflow as tf

a = tf.Variable(tf.constant(0.0), dtype=tf.float32)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print('a={}'.format(sess.run(a)))                 # a=0.0
    print('a={}'.format(sess.run(tf.assign(a, 1))))   # a=1.0
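The in-place increment variant is tf.assign_add, the op behind global_step updates; a minimal sketch:

import tensorflow as tf

a = tf.Variable(0.0)
inc = tf.assign_add(a, 1.0)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(inc))   # 1.0
    print(sess.run(inc))   # 2.0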

9. tf.nn.moments

# mean and variance with TensorFlow
import tensorflow as tf

W = tf.constant([[1., 2., 3.], [4., 5., 6.]])
mean, var = tf.nn.moments(W, axes=[0])   # per-column statistics
with tf.Session() as sess:
    print(sess.run(mean))   # [2.5 3.5 4.5]
    print(sess.run(var))    # [2.25 2.25 2.25]

# mean and variance along axis 1 (per row)
import tensorflow as tf

W = tf.constant([[1., 2., 3.], [4., 5., 6.]])
mean, var = tf.nn.moments(W, axes=[1])
with tf.Session() as sess:
    print(sess.run(mean))   # [2. 5.]
    print(sess.run(var))    # [0.6666667 0.6666667]

# mean and variance over the whole matrix
import tensorflow as tf

W = tf.constant([[1., 2., 3.], [4., 5., 6.]])
mean, var = tf.nn.moments(W, axes=[0, 1])
with tf.Session() as sess:
    print(sess.run(mean))   # 3.5
    print(sess.run(var))    # 2.9166667

# mean over axes [0, 1, 2] of a 4-D tensor: one value per channel
import tensorflow as tf

W = tf.constant([[[[1., 2., 3.], [4., 5., 6.]]]])   # shape (1, 1, 2, 3)
mean, var = tf.nn.moments(W, axes=[0, 1, 2])
y = W - mean
with tf.Session() as sess:
    print(sess.run(mean))   # [2.5 3.5 4.5]
    print(sess.run(y))

To get statistics per feature map, treat each feature map as one unit and reduce over axes [0, 1, 2], leaving one mean and variance per channel; for a 2-D matrix, per-neuron (per-column) statistics are obtained by reducing over axis 0.
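This is exactly the statistic batch normalization needs. A minimal sketch (not a full batch-norm layer, just the normalization step using the per-channel moments):

import tensorflow as tf

x = tf.random_normal([8, 4, 4, 3])            # NHWC batch
mean, var = tf.nn.moments(x, axes=[0, 1, 2])  # one mean/var per channel
x_norm = tf.nn.batch_normalization(x, mean, var, offset=None, scale=None,
                                   variance_epsilon=1e-5)
with tf.Session() as sess:
    # the normalized batch has per-channel means close to 0
    print(sess.run(tf.nn.moments(x_norm, axes=[0, 1, 2])[0]))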

10. TensorFlow queue operations

A queue is a first-in, first-out linear data structure: elements are enqueued at the tail and dequeued (and removed) from the head.

Creating a queue:

import tensorflow as tf

with tf.Session() as sess:
    # FIFO queue holding up to three elements
    q = tf.FIFOQueue(3, 'float')
    # build the op that will enqueue three elements (nothing happens yet)
    init = q.enqueue_many(([0.1, 0.2, 0.3],))
    # actually fill the queue
    sess.run(init)
    # queue length
    quelen = sess.run(q.size())
    for i in range(quelen):
        # pop elements off the queue
        print(sess.run(q.dequeue()))

Note: every enqueue and dequeue is an op and only takes effect when executed with sess.run.

import tensorflow as tf

with tf.Session() as sess:
    # FIFO queue holding up to three elements
    q = tf.FIFOQueue(3, 'float')
    # enqueue_many op: fills three elements once run
    init = q.enqueue_many(([0.1, 0.2, 0.3],))
    # dequeue op: pops one element once run
    init2 = q.dequeue()
    # enqueue op: pushes one element once run
    init3 = q.enqueue(1.)
    sess.run(init)
    sess.run(init2)
    sess.run(init3)
    quelen = sess.run(q.size())
    for i in range(quelen):
        print(sess.run(q.dequeue()))

The queue operations above run entirely inside the session: they rarely block and bugs are easy to find, but throughput is low. The queue manager tf.train.QueueRunner below handles asynchronous operation: it creates a set of threads under the main thread so that data loading and computation (model training) run concurrently, improving efficiency.

import tensorflow as tf

with tf.Session() as sess:
    q = tf.FIFOQueue(1000, 'float')
    counter = tf.Variable(0.0)
    # counter = counter + 1.0
    add_op = tf.assign_add(counter, tf.constant(1.0))
    # enqueue op: pushes the current counter value once run
    enqueueData_op = q.enqueue(counter)
    # two threads, each repeatedly running [add_op, enqueueData_op]
    qr = tf.train.QueueRunner(q, enqueue_ops=[add_op, enqueueData_op] * 2)
    sess.run(tf.global_variables_initializer())
    qr.create_threads(sess, start=True)
    for i in range(10):
        print(sess.run(q.dequeue()))

This first runs normally, then the QueueRunner raises an error at the end. Multiple threads conveniently share one session and run in parallel, but when one thread wants to close the session, the session is shut down while other threads are still mid-operation, and those unfinished threads are killed.

To synchronize and shut down threads cleanly, TensorFlow provides Coordinator together with QueueRunner for thread control and coordination.

import tensorflow as tf

with tf.Session() as sess:
    q = tf.FIFOQueue(1000, 'float')
    counter = tf.Variable(0.0)
    # counter = counter + 1.0
    add_op = tf.assign_add(counter, tf.constant(1.0))
    enqueueData_op = q.enqueue(counter)
    qr = tf.train.QueueRunner(q, enqueue_ops=[add_op, enqueueData_op] * 2)
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    # Start the enqueue threads. The coordinator supervises all of them:
    # when one thread finishes it notifies the rest, so shutdown is clean.
    enqueue_threads = qr.create_threads(sess, coord=coord, start=True)
    for i in range(10):
        print(sess.run(q.dequeue()))
    coord.request_stop()
    coord.join(enqueue_threads)

11. The global_step argument of minimize

import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=[None, 1], name='x')
y = tf.placeholder(tf.float32, shape=[None, 1], name='y')
w = tf.Variable(tf.constant(0.0))
# global_steps = tf.Variable(0, trainable=False)
global_steps = tf.train.get_or_create_global_step()
# learning_rate = tf.train.exponential_decay(0.1, global_steps, 10, 2, staircase=False)
loss = tf.pow(w * x - y, 2)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss, global_step=global_steps)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(5):
        sess.run(train_step, feed_dict={x: np.linspace(1, 2, 10).reshape([10, 1]),
                                        y: np.linspace(1, 2, 10).reshape([10, 1])})
        # print(sess.run(learning_rate))
        print(sess.run(global_steps))

As the output shows, global_steps is incremented by 1 automatically on every training step.

12. tf.add_to_collection

tf.add_to_collection('list_name', element): appends element to the collection named list_name

tf.get_collection('list_name'): returns the collection named list_name as a list

tf.add_n(list): sums the elements of the list and returns the result

import tensorflow as tf

tf.add_to_collection('losses', tf.constant(1.2))
tf.add_to_collection('losses', tf.constant(5.))
with tf.Session() as sess:
    print(sess.run(tf.get_collection('losses')))             # [1.2, 5.0]
    print(sess.run(tf.add_n(tf.get_collection('losses'))))   # 6.2
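The typical use is accumulating per-layer regularization terms and summing them into the total loss. A minimal sketch (the weight shape and the 1e-4 coefficient are arbitrary):

import tensorflow as tf

w = tf.Variable(tf.random_normal([3, 3]))
tf.add_to_collection('losses', tf.nn.l2_loss(w) * 1e-4)   # weight-decay term
data_loss = tf.constant(0.5)                              # stand-in for a data loss
tf.add_to_collection('losses', data_loss)
total_loss = tf.add_n(tf.get_collection('losses'))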

13. tf.nn.depthwise_conv2d: the number of output channels is the input channel count times the filter's channel multiplier

https://blog.csdn.net/mao_xiao_feng/article/details/78003476

import tensorflow as tf

img1 = tf.constant(value=[[[[1], [2], [3], [4]], [[1], [2], [3], [4]],
                           [[1], [2], [3], [4]], [[1], [2], [3], [4]]]], dtype=tf.float32)
img2 = tf.constant(value=[[[[1], [1], [1], [1]], [[1], [1], [1], [1]],
                           [[1], [1], [1], [1]], [[1], [1], [1], [1]]]], dtype=tf.float32)
img = tf.concat(values=[img1, img2], axis=3)
print(img1.shape)                               # (1, 4, 4, 1)
print('img.shape={}'.format(img.shape))         # (1, 4, 4, 2)
filter1 = tf.constant(value=0, shape=[3, 3, 1, 1], dtype=tf.float32)
filter2 = tf.constant(value=1, shape=[3, 3, 1, 1], dtype=tf.float32)
filter3 = tf.constant(value=2, shape=[3, 3, 1, 1], dtype=tf.float32)
filter4 = tf.constant(value=3, shape=[3, 3, 1, 1], dtype=tf.float32)
filter_out1 = tf.concat(values=[filter1, filter2], axis=2)
filter_out2 = tf.concat(values=[filter3, filter4], axis=2)
filter = tf.concat(values=[filter_out1, filter_out2, filter_out2], axis=3)
print(filter_out1.shape)                        # (3, 3, 2, 1)
print(filter_out2.shape)                        # (3, 3, 2, 1)
print('filter.shape={}'.format(filter.shape))   # (3, 3, 2, 3)
# ordinary convolution: output channels = 3
out_img = tf.nn.conv2d(input=img, filter=filter, strides=[1, 1, 1, 1], padding='VALID')
print('out_img.shape={}'.format(out_img.shape))   # (1, 2, 2, 3)
# depthwise convolution: output channels = in_channels * channel_multiplier = 2 * 3 = 6
t_img = tf.nn.depthwise_conv2d(input=img, filter=filter, strides=[1, 1, 1, 1],
                               rate=[1, 1], padding='VALID')
print('t_img.shape={}'.format(t_img.shape))       # (1, 2, 2, 6)
  • input: the image to be convolved; a 4-D tensor with shape [batch, height, width, in_channels], i.e. [number of images per batch, image height, image width, number of channels].

  • filter: the convolution kernel; a 4-D tensor with shape [filter_height, filter_width, in_channels, channel_multiplier], i.e. [kernel height, kernel width, input channels, output channel multiplier]. The third dimension, in_channels, matches the fourth dimension of input.

  • strides: the sliding stride of the convolution.

  • The result is a tensor of shape [batch, out_height, out_width, in_channels * channel_multiplier]; note that the output channel count becomes in_channels * channel_multiplier.
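A depthwise convolution followed by a 1×1 pointwise convolution is the depthwise-separable convolution used by MobileNet; tf.nn.separable_conv2d fuses the two steps. A minimal sketch with made-up sizes:

import tensorflow as tf

img = tf.random_normal([1, 4, 4, 2])
depthwise = tf.constant(1.0, shape=[3, 3, 2, 3])   # channel_multiplier = 3
pointwise = tf.constant(1.0, shape=[1, 1, 6, 8])   # 2 * 3 = 6 in, 8 out
out = tf.nn.separable_conv2d(img, depthwise, pointwise,
                             strides=[1, 1, 1, 1], padding='VALID')
print(out.shape)   # (1, 2, 2, 8)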

14. Adding dimensions with tf.expand_dims

import tensorflow as tf

a = tf.Variable(tf.zeros(shape=[2, 3, 4]))
b = tf.expand_dims(a, axis=-1)           # (2, 3, 4, 1)
c = tf.expand_dims(a, axis=0)            # (1, 2, 3, 4)
d = a[:, :, 0]                           # (2, 3)
e = tf.expand_dims(a[:, :, 0], axis=0)   # (1, 2, 3)
f = tf.expand_dims(a[:, :, 0], axis=-1)  # (2, 3, 1)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(a)
    print(b)
    print(c)
    print(d)
    print(e)
    print(f)

15. Computing gradients with tf.gradients

https://blog.csdn.net/taoyanqi8932/article/details/77602721

import tensorflow as tf

w1 = tf.Variable([[1, 2]])        # a1, a2
w2 = tf.Variable([[3, 4]])
res = tf.matmul(w1, [[2], [1]])   # 2*a1 + a2
grads = tf.gradients(res, [w1])   # gradient w.r.t. [a1, a2] is [2, 1]

a = tf.constant(0.)
b = 2 * a
# stop_gradients treats a and b as constants, cutting the path b = 2*a
g1 = tf.gradients(a + b, [a, b], stop_gradients=[a, b])   # [1., 1.]
g2 = tf.gradients(b, [a, b])                              # [2., 1.]
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(res))
    print(sess.run(grads))
    print(sess.run(g1))
    print(sess.run(g2))

16. tf.trainable_variables() vs. tf.global_variables()

tf.trainable_variables returns the list of variables that will be trained.

tf.global_variables returns the list of all variables.

import tensorflow as tf

v = tf.Variable(tf.constant(0.0, shape=[1], dtype=tf.float32), name='v')
v1 = tf.Variable(tf.constant(5, shape=[1], dtype=tf.float32), name='v1')
# trainable=False keeps global_step out of tf.trainable_variables()
global_step = tf.Variable(tf.constant(5, shape=[1], dtype=tf.float32),
                          name='global_step', trainable=False)
ema = tf.train.ExponentialMovingAverage(0.99, global_step)
for i in tf.trainable_variables():
    print(i)
print('===============')
for i in tf.global_variables():
    print(i)

17. tf.clip_by_global_norm

https://blog.csdn.net/u013713117/article/details/56281715
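The reference covers the details; in short, tf.clip_by_global_norm rescales a list of tensors so that their combined L2 norm is at most clip_norm, the standard defense against exploding gradients. A minimal sketch (the loss is contrived so the gradient norm is easy to verify by hand):

import tensorflow as tf

w = tf.Variable([3.0, 4.0])
loss = tf.reduce_sum(tf.square(w))   # gradient is 2*w = [6, 8], global norm 10
grads = tf.gradients(loss, [w])
clipped, global_norm = tf.clip_by_global_norm(grads, clip_norm=5.0)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(global_norm))   # 10.0
    print(sess.run(clipped))       # [[3., 4.]] -- scaled by 5/10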

18. In tf.reduce_sum, reduction_indices is the old name for axis; keepdims=True keeps the reduced dimension

import tensorflow as tf

x = tf.constant([[1, 1, 1], [1, 1, 1]])
a = tf.reduce_sum(x, keepdims=True)         # [[6]]
b = tf.reduce_sum(x, 0, keepdims=True)      # [[2, 2, 2]]
c = tf.reduce_sum(x, reduction_indices=0)   # [2, 2, 2]
d = tf.reduce_sum(x, 1, keepdims=True)      # [[3], [3]]
e = tf.reduce_sum(x, reduction_indices=1)   # [3, 3]
with tf.Session() as sess:
    print(sess.run(a))
    print(sess.run(b))
    print(sess.run(c))
    print(sess.run(d))
    print(sess.run(e))

19. tf.app.flags.FLAGS

Command-line argument parsing.

import tensorflow as tf

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_float('flag_float', 0.01, 'input a float')
tf.app.flags.DEFINE_integer('flag_int', 400, 'input a int')
tf.app.flags.DEFINE_boolean('flag_bool', True, 'input a bool')
tf.app.flags.DEFINE_string('flag_string', 'yes', 'input a string')

print(FLAGS.flag_float)
print(FLAGS.flag_int)
print(FLAGS.flag_bool)
print(FLAGS.flag_string)
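Assuming the snippet is saved as flags_demo.py (a hypothetical name), each flag can be overridden on the command line and FLAGS picks up the parsed value:

python flags_demo.py --flag_int 500 --flag_string no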

20. Reading an image with TensorFlow and writing it with OpenCV

import tensorflow as tf
import cv2

def tf_read_image():
    path = './img_size.jpg'
    with tf.gfile.FastGFile(path, 'rb') as f:
        image_data = f.read()
    with tf.Session() as sess:
        image_data = tf.image.decode_jpeg(image_data)
        image = sess.run(image_data)
    """Option 1: swap to BGR for OpenCV"""
    r, g, b = cv2.split(image)
    image = cv2.merge([b, g, r])
    cv2.imwrite('img_size_out.jpg', image)
    """Option 2: matplotlib shows the RGB image directly"""
    # plt.imshow(image)
    # plt.show()
    # print(image.shape)

OpenCV stores images as B,G,R while tf.image.decode_jpeg produces R,G,B. So the decoded image is split into r,g,b channels and re-merged as b,g,r before handing it to OpenCV; writing it directly would put blue-channel values where OpenCV expects red, tinting the whole image blue. Plotting with matplotlib works as-is. Conversely, when reading an image with OpenCV for TensorFlow, remember the following conversion:

cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

21. Working across computation graphs, which isolate tensors and operations

import tensorflow as tf

# define graph g1
g1 = tf.Graph()
with g1.as_default():
    v = tf.get_variable('v', shape=[1], initializer=tf.zeros_initializer)
# define graph g2
g2 = tf.Graph()
with g2.as_default():
    v = tf.get_variable('v', shape=[1], initializer=tf.ones_initializer)
# read variable v in graph g1
with tf.Session(graph=g1) as sess:
    sess.run(tf.global_variables_initializer())
    with tf.variable_scope('', reuse=True):
        print(sess.run(tf.get_variable('v')))   # [0.]
# read variable v in graph g2
with tf.Session(graph=g2) as sess:
    sess.run(tf.global_variables_initializer())
    with tf.variable_scope('', reuse=True):
        print(sess.run(tf.get_variable('v')))   # [1.]

22. tf.greater, tf.where

import tensorflow as tf

v1 = tf.constant([1.0, 2.0, 3.0, 4.0])
v2 = tf.constant([4.0, 3.0, 2.0, 1.0])
with tf.Session() as sess:
    print(sess.run(tf.greater(v1, v2)))   # [False False  True  True]

tf.where has two usages:

where(condition, x=None, y=None, name=None)

If x and y are None, it returns a tensor with the coordinates of the True elements of condition.

labels = tf.constant([[[1], [2], [3]], [[4], [5], [6]]], dtype=tf.float32)
ignore_label = -1
a = tf.squeeze(labels)            # shape (2, 3)
b = tf.not_equal(a, ignore_label)
c = tf.where(b)                   # coordinates of the True entries
with tf.Session() as sess:
    print(sess.run(a))
    print(sess.run(b))
    print(sess.run(c))

If x and y are given, the result has the same shape as x and y: where condition is True the output takes the corresponding value from x, otherwise from y.

v1 = tf.constant([1.0, 2.0, 3.0, 4.0])
v2 = tf.constant([4.0, 3.0, 2.0, 1.0])
with tf.Session() as sess:
    print(sess.run(tf.greater(v1, v2)))
    # True selects from v1, False selects from v2
    print(sess.run(tf.where(tf.greater(v1, v2), v1, v2)))   # [4. 3. 3. 4.]

23. shape, set_shape, reshape, get_shape; note whether x1 or x2 is fed

x1 = tf.placeholder(tf.float32, shape=[2, 2])
print(tf.shape(x1))     # a Tensor of shape (2,), dtype int32
print(x1.get_shape())   # (2, 2)

So tf.shape returns a tensor (evaluated at run time), while get_shape returns the static shape, a tuple-like TensorShape known at graph-construction time.

x1 = tf.placeholder(tf.int32)
x2 = tf.reshape(x1, [2, 2])
print(tf.shape(x1))
with tf.Session() as sess:
    print(sess.run(tf.shape(x2), feed_dict={x1: [0, 1, 2, 3]}))   # [2 2]

This shows that reshape builds a new tensor with the new shape for us to use, leaving the original untouched.

x1 = tf.placeholder(tf.int32)
x1 = tf.reshape(x1, [2, 2])  # use tf.reshape()
print(tf.shape(x1))
sess = tf.Session()
print(sess.run(tf.shape(x1), feed_dict={x1: [0, 1, 2, 3]}))   # error

Here reshape changed what x1 refers to: the name now points at the reshaped (2, 2) tensor, so feeding it a flat list of four elements raises an error.

x1 = tf.placeholder(tf.float32)
print(x1.get_shape())   # <unknown>
with tf.Session() as sess:
    print(sess.run(tf.shape(x1), feed_dict={x1: [[0, 1], [2, 3]]}))   # [2 2]

Without set_shape, the static shape information in the graph is never filled in, so get_shape prints <unknown>.

x1 = tf.placeholder(tf.int32)
x1.set_shape([2, 2])
print(x1.get_shape())   # (2, 2)
with tf.Session() as sess:
    print(sess.run(tf.shape(x1), feed_dict={x1: [[0, 1], [2, 3]]}))   # [2 2]
    # print(sess.run(tf.shape(x1), feed_dict={x1: [0, 1, 2, 3]}))     # error

set_shape updates the shape information in the graph: the x1 that started without a shape now carries (2, 2). But set_shape cannot actually reshape the tensor, so feeding data that disagrees with the declared shape raises an error.

x1 = tf.Variable([[0, 1], [2, 3]])
print(x1.get_shape())   # (2, 2)
x1 = tf.reshape(x1, [4, 1])
print(x1.get_shape())   # (4, 1)

x1 = tf.Variable([[0, 1], [2, 3]])
print(x1.get_shape())        # (2, 2)
x1 = x1.set_shape([4, 1])    # error: set_shape cannot change an already-known shape
print(x1.get_shape())

In short: use reshape to create a new tensor or to change a tensor's shape dynamically; use set_shape only to update or fill in the static shape information of a tensor in the graph.

tf.reshape(t, [-1]) flattens the tensor into one dimension:

labels = tf.constant([[[1], [2], [3]], [[4], [5], [6]]], dtype=tf.float32)
d = tf.reshape(labels, [-1])
with tf.Session() as sess:
    print(sess.run(d))   # [1. 2. 3. 4. 5. 6.]

24. Separate computation graphs

def different_graph():
    # define graph g1
    g1 = tf.Graph()
    with g1.as_default():
        v = tf.get_variable('v', shape=[1], initializer=tf.zeros_initializer)
    # define graph g2
    g2 = tf.Graph()
    with g2.as_default():
        v = tf.get_variable('v', shape=[1, 2], initializer=tf.ones_initializer)
    # read variable v in graph g1
    with tf.Session(graph=g1) as sess:
        sess.run(tf.global_variables_initializer())
        with tf.variable_scope('', reuse=True):
            print('g1_V', sess.run(tf.get_variable('v')))
    # read variable v in graph g2
    with tf.Session(graph=g2) as sess:
        sess.run(tf.global_variables_initializer())
        with tf.variable_scope('', reuse=True):
            print('g2_V', sess.run(tf.get_variable('v')))
            a = tf.get_variable('v')
            print('g2_V.shape', sess.run(tf.shape(a)))

25. tf.boolean_mask, for picking out the wanted elements

import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    # 1-D
    tensor = [0, 1, 2, 3]
    mask = np.array([True, False, True, False])
    print(sess.run(tf.boolean_mask(tensor, mask)))   # [0 2]
    # 2-D
    tensor = [[1, 2], [3, 4], [5, 6]]
    mask = np.array([True, False, True])
    print(sess.run(tf.boolean_mask(tensor, mask)))   # [[1 2] [5 6]]

26. Slicing with tf.slice and tf.gather

http://www.360doc.com/content/17/0115/14/10408243_622618137.shtml

tf.slice(input_, begin, size, name=None): extracts a contiguous sub-region given the starting indices and sizes; it can, for example, crop a block of pixels out of an image.

tf.gather(params, indices, validate_indices=None, name=None): extracts a subset along axis 0 by index; suitable for non-contiguous selections.

input = tf.constant([[[1, 1, 1], [2, 2, 2]],
                     [[3, 3, 3], [4, 4, 4]],
                     [[5, 5, 5], [6, 6, 6]]], dtype=tf.float32)
a = tf.slice(input, [1, 0, 0], [1, 1, 3])   # [[[3. 3. 3.]]]
b = tf.slice(input, [1, 0, 0], [1, 2, 3])   # [[[3. 3. 3.] [4. 4. 4.]]]
c = tf.slice(input, [1, 0, 0], [2, 1, 3])   # [[[3. 3. 3.]] [[5. 5. 5.]]]
with tf.Session() as sess:
    print(sess.run(a))
    print(sess.run(b))
    print(sess.run(c))

input = tf.constant([[[1, 1, 1], [2, 2, 2]],
                     [[3, 3, 3], [4, 4, 4]],
                     [[5, 5, 5], [6, 6, 6]]], dtype=tf.float32)
a = tf.gather(input, [0, 1])   # rows 0 and 1 along axis 0
with tf.Session() as sess:
    print(sess.run(a))

In semantic segmentation this pattern drops the unwanted ground-truth labels; in the 6-class example below, label value 6 (the ignore label) is removed:

import numpy as np
import tensorflow as tf

raw_gt = np.array([0, 1, 2, 3, 4, 5, 6])
# 6 classes, so valid labels are 0..5
less_equal = tf.less_equal(raw_gt, 6 - 1)
where_index = tf.where(less_equal)
indices = tf.squeeze(where_index, axis=1)
gt = tf.gather(raw_gt, indices)

The predictions are filtered the same way:

# raw_output is the network's logits map; self.conf.num_classes the class count
raw_prediction = tf.reshape(raw_output, [-1, self.conf.num_classes])
prediction = tf.gather(raw_prediction, indices)
# pixel-wise softmax cross-entropy loss
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction, labels=gt)

The cross-entropy is then computed on the filtered tensors.

Complete example:

raw_gt = tf.reshape(label_proc, [-1])
indices = tf.squeeze(tf.where(tf.less_equal(raw_gt, self.conf.num_classes - 1)), 1)
gt = tf.cast(tf.gather(raw_gt, indices), tf.int32)
raw_prediction = tf.reshape(raw_output, [-1, self.conf.num_classes])
prediction = tf.gather(raw_prediction, indices)
# pixel-wise softmax cross-entropy loss
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction, labels=gt)
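In training code the per-pixel loss vector is then typically reduced to a scalar before being handed to the optimizer, e.g.:

reduced_loss = tf.reduce_mean(loss)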

27. tf.group, tf.tuple

w = tf.Variable(1)
mul = tf.multiply(w, 2)
add = tf.add(w, 2)
group = tf.group(mul, add)
tuple = tf.tuple([mul, add])
print(group)
print(tuple)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(group))   # None
    print(sess.run(tuple))   # [2, 3]

# Both sess.run(group) and sess.run(tuple) evaluate Tensor(mul)
# and Tensor(add). The difference: tf.group() returns an op,
# while tf.tuple() returns a list of tensors.
# So sess.run(tuple) returns the values of Tensor(mul) and Tensor(add),
# whereas sess.run(group) does not (it returns None).

28. tf.metrics.true_positives

https://blog.csdn.net/jyzhang_cvml/article/details/82694631

a = tf.Variable([0, 1, 1, 0])   # predictions
b = tf.Variable([0, 1, 0, 1])   # labels
tp, tp_update = tf.metrics.true_positives(predictions=a, labels=b)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # tp_update's accumulator lives in tf.local_variables()
    sess.run(tf.local_variables_initializer())
    sess.run(tp_update)
    print(sess.run(tp))   # 1.0 -- only index 1 is positive in both

29. tf.one_hot

classes = 3
labels = tf.constant([0, 1, 2])   # element values range from 0 to 2
output = tf.one_hot(labels, classes)
with tf.Session() as sess:
    output = sess.run(output)
    print("output of one-hot is : ", output)

classes = 3
labels = tf.constant([[0, 1, 2], [1, 2, 0]])   # element values range from 0 to 2
output = tf.one_hot(labels, classes)
with tf.Session() as sess:
    output = sess.run(output)
    print("output of one-hot is : ", output)
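one_hot often feeds the dense cross-entropy; with integer labels, the sparse variant skips the explicit one-hot step and yields the same value. A minimal sketch:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
class_ids = tf.constant([0])
# sparse variant takes integer class ids directly
loss_sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=class_ids, logits=logits)
# dense variant takes the one-hot encoding
loss_dense = tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf.one_hot(class_ids, 3), logits=logits)
with tf.Session() as sess:
    print(sess.run(loss_sparse), sess.run(loss_dense))   # identical values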
