Version notes
tensorflow 1.8.0

python 3.6.2
conda 3.10.5
h5py 2.10.0
keras 2.1.6
numpy 1.19.3    (1.19.4 may raise errors!)

pandas 0.25.3

1. Importing the TensorFlow library

import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

%matplotlib inline
np.random.seed(1)

Computing the loss function:

y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                    # Define y. Set to 39.
loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss
init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                 # the loss variable will be initialized and ready to be computed
with tf.Session() as session:                    # Create a session and print the output
    session.run(init)                            # Initializes the variables
    print(session.run(loss))                     # Prints the loss

Output: 9

Writing and running a program in TensorFlow involves the following steps (roughly):

1. Create tensors (variables).
2. Define operations between the tensors you created.
3. Initialize the tensors.
4. Create a session.
5. Run the session.
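A minimal sketch covering all five steps with a variable (hypothetical names, TF 1.x style):

v = tf.Variable(3, name='v')               # 1. create a tensor (variable)
w = tf.multiply(v, v)                      # 2. define an operation between tensors
init = tf.global_variables_initializer()  # 3. initialize the tensors
with tf.Session() as sess:                 # 4. create a session
    sess.run(init)
    print(sess.run(w))                     # 5. run the session -> prints 9

The constants-only example below skips step 3, since constants do not need initialization: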

a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)

Output: Tensor("Mul:0", shape=(), dtype=int32)

As expected, printing c does not show 20: the graph has been built, but nothing has been evaluated yet. To actually compute the value you need a session:

sess = tf.Session()
print(sess.run(c))

Output: 20

Next, let's look at the concept of a placeholder. A placeholder is an object whose value you can only specify later. To feed a value into a placeholder, you pass it in with a "feed dictionary" (the feed_dict argument). Below, we create a placeholder for x and pass in a number when we run the session later.

# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()

Output: 6

1.1 Linear regression

Y = WX + b, where W and X are random matrices and b is a random vector.

# GRADED FUNCTION: linear_function

def linear_function():
    """
    Implements a linear function:
        Initializes W to be a random tensor of shape (4,3)
        Initializes X to be a random tensor of shape (3,1)
        Initializes b to be a random tensor of shape (4,1)

    Returns:
    result -- runs the session for Y = WX + b
    """

    np.random.seed(1)

    ### START CODE HERE ### (4 lines of code)
    X = np.random.randn(3,1)
    W = np.random.randn(4,3)
    b = np.random.randn(4,1)
    Y = tf.add(tf.matmul(W,X), b)
    ### END CODE HERE ###

    # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
    ### START CODE HERE ###
    sess = tf.Session()
    result = sess.run(Y)
    ### END CODE HERE ###

    # close the session
    sess.close()

    return result

print( "result = " + str(linear_function()))

Output: the (4, 1) result of Y = WX + b.

Note: initializing X, W, and b in a different order may produce different results.
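That is because NumPy draws random numbers sequentially from a single stream, so swapping the order of the randn calls changes which draws each tensor receives. A quick sketch demonstrating this:

np.random.seed(1)
X_first = np.random.randn(3, 1)    # consumes the first 3 draws
W_after = np.random.randn(4, 3)    # consumes the next 12 draws

np.random.seed(1)
W_first = np.random.randn(4, 3)    # now W consumes the first 12 draws

print(np.allclose(W_after, W_first))   # False: same seed, different call order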

1.2 Computing the sigmoid

# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    results -- the sigmoid of z
    """

    ### START CODE HERE ### (approx. 4 lines of code)

    # Create a placeholder for x. Name it 'x'.
    x = tf.placeholder(tf.float32, name = "x")

    # compute sigmoid(x)
    Y = tf.sigmoid(x)

    # Create a session, and run it. Please use the method 2 explained above.
    # You should use a feed_dict to pass z's value to x.
    # Run session and call the output "result"
    sess = tf.Session()

    # Run the variables initialization (if needed), run the operations
    result = sess.run(Y, feed_dict = {x: z})
    sess.close()  # Close the session

    ### END CODE HERE ###

    return result

print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))

Output:

sigmoid(0) = 0.5
sigmoid(12) = 0.999994

To summarize, you:

1. Create placeholders.
2. Specify the computation graph corresponding to the operations you want to compute.
3. Create a session.
4. Run the session, using a feed dictionary if necessary to specify the values of placeholder variables.

1.3 Computing the cost function

# GRADED FUNCTION: cost

def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0)

    Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
    in the TensorFlow documentation. So logits will feed into z, and labels into y.

    Returns:
    cost -- runs the session of the cost (formula (2))
    """

    ### START CODE HERE ###

    # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
    z = tf.placeholder(tf.float32, name = "z")
    y = tf.placeholder(tf.float32, name = "y")

    # Use the loss function (approx. 1 line)
    p = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y)

    # Create a session (approx. 1 line). See method 1 above.
    sess = tf.Session()

    # Run the session (approx. 1 line).
    cost = sess.run(p, feed_dict = {z: logits, y: labels})

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return cost

logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))

Output: the element-wise cross-entropy cost for the four examples.
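As a sanity check (a sketch, not part of the assignment): tf.nn.sigmoid_cross_entropy_with_logits computes -(y * log(sigmoid(z)) + (1 - y) * log(1 - sigmoid(z))) element-wise, so the same values can be reproduced in plain NumPy:

def sigmoid_cross_entropy_np(z, y):
    # Element-wise -(y*log(s) + (1-y)*log(1-s)) with s = sigmoid(z)
    s = 1 / (1 + np.exp(-z))
    return -(y * np.log(s) + (1 - y) * np.log(1 - s))

z = 1 / (1 + np.exp(-np.array([0.2, 0.4, 0.7, 0.9])))   # same sigmoid-squashed logits as above
y = np.array([0, 0, 1, 1])
print(sigmoid_cross_entropy_np(z, y))                    # should match the TF cost vector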

1.4 One-hot encoding

# GRADED FUNCTION: one_hot_matrix

def one_hot_matrix(labels, C):
    """
    Creates a matrix where the i-th row corresponds to the ith class number and the jth column
    corresponds to the jth training example. So if example j had a label i, then entry (i,j) will be 1.

    Arguments:
    labels -- vector containing the labels
    C -- number of classes, the depth of the one hot dimension

    Returns:
    one_hot -- one hot matrix
    """

    ### START CODE HERE ###

    # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
    C = tf.constant(C, name="C")

    # Use tf.one_hot, be careful with the axis (approx. 1 line)
    one_hot_matrix = tf.one_hot(indices=labels, depth=C, axis=0)

    # Create the session (approx. 1 line)
    sess = tf.Session()

    # Run the session (approx. 1 line)
    one_hot = sess.run(one_hot_matrix)

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return one_hot

labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))

Output:

one_hot = [[ 0.  0.  0.  1.  0.  0.]
 [ 1.  0.  0.  0.  0.  1.]
 [ 0.  1.  0.  0.  1.  0.]
 [ 0.  0.  1.  0.  0.  0.]]
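For comparison, the same matrix can be built in plain NumPy (a sketch): with axis=0, rows index classes and columns index examples, so np.eye(C)[labels].T reproduces tf.one_hot:

def one_hot_np(labels, C):
    # Rows = classes, columns = examples, matching tf.one_hot with axis=0
    return np.eye(C)[labels].T

print(one_hot_np(np.array([1, 2, 3, 0, 2, 1]), C=4))   # same matrix as above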

1.5 Initialization

# GRADED FUNCTION: ones

def ones(shape):
    """
    Creates an array of ones of dimension shape

    Arguments:
    shape -- shape of the array you want to create

    Returns:
    ones -- array containing only ones
    """

    ### START CODE HERE ###

    # Create "ones" tensor using tf.ones(...). (approx. 1 line)
    one = tf.ones(shape)

    # Create the session (approx. 1 line)
    sess = tf.Session()

    # Run the session to compute 'ones' (approx. 1 line)
    ones = sess.run(one)

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return ones

print ("ones = " + str(ones([3])))

Output:

ones = [ 1.  1.  1.]

2. Building your first neural network in tensorflow

2.0 Problem statement

Your job now is to build an algorithm that facilitates communication between speech-impaired people and people who do not understand sign language.

Training set: 1080 pictures (64 x 64 pixels) of hand signs representing numbers from 0 to 5 (180 pictures per number).
Test set: 120 pictures (64 x 64 pixels) of hand signs representing numbers from 0 to 5 (20 pictures per number).

Loading the dataset:

# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Example of a picture
index = 70
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))

Output: the picture at index 70 is displayed together with its label y.

Data preprocessing:

# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)

print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))

Output:

number of training examples = 1080
number of test examples = 120
X_train shape: (12288, 1080)
Y_train shape: (6, 1080)
X_test shape: (12288, 120)
Y_test shape: (6, 120)

Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you will build a tensorflow model that is almost identical to the one you previously built in numpy for cat recognition (but now using a softmax output). This is a great occasion to compare your numpy implementation to tensorflow.

The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to SOFTMAX; a SOFTMAX layer generalizes SIGMOID to the case of more than two classes.
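For reference, softmax turns the six output scores of each example into a probability distribution over classes; a small NumPy sketch (not the assignment code):

def softmax(z):
    # Column-wise softmax for z of shape (num_classes, num_examples)
    e = np.exp(z - np.max(z, axis=0, keepdims=True))   # shift for numerical stability
    return e / np.sum(e, axis=0, keepdims=True)

z = np.array([[1.0, 2.0], [0.5, 0.1], [2.0, 0.3]])     # 3 classes, 2 examples
print(softmax(z).sum(axis=0))                          # each column sums to 1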

2.1 Creating placeholders

# GRADED FUNCTION: create_placeholders

def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px * 3 = 64 * 64 * 3 = 12288)
    n_y -- scalar, number of classes (from 0 to 5, so -> 6)

    Returns:
    X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
    Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"

    Tips:
    - Use None because it lets us be flexible on the number of examples for the placeholders.
      In fact, the number of examples during test/train is different.
    """

    ### START CODE HERE ### (approx. 2 lines)
    X = tf.placeholder(tf.float32, [n_x, None], name = "X")
    Y = tf.placeholder(tf.float32, [n_y, None], name = "Y")
    ### END CODE HERE ###

    return X, Y

X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))

Output:

X = Tensor("X:0", shape=(12288, ?), dtype=float32)
Y = Tensor("Y:0", shape=(6, ?), dtype=float32)
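The None dimension is what lets the same graph accept batches of any size; a quick sketch (the batch sizes here are arbitrary examples):

tf.reset_default_graph()
X, Y = create_placeholders(12288, 6)
s = tf.reduce_sum(X)
with tf.Session() as sess:
    print(sess.run(s, feed_dict={X: np.zeros((12288, 5))}))    # batch of 5 examples
    print(sess.run(s, feed_dict={X: np.zeros((12288, 64))}))   # batch of 64 -- same placeholder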

2.2 Initializing the parameters

# GRADED FUNCTION: initialize_parameters

def initialize_parameters():
    """
    Initializes the parameters with the following shapes:
        W1 : [25, 12288]
        b1 : [25, 1]
        W2 : [12, 25]
        b2 : [12, 1]
        W3 : [6, 12]
        b3 : [6, 1]

    Returns:
    parameters -- a dictionary containing W and b
    """

    tf.set_random_seed(1)  # set the random seed

    W1 = tf.get_variable("W1", [25, 12288], initializer = tf.contrib.layers.xavier_initializer(seed=1))
    b1 = tf.get_variable("b1", [25, 1], initializer = tf.zeros_initializer())
    W2 = tf.get_variable("W2", [12, 25], initializer = tf.contrib.layers.xavier_initializer(seed=1))
    b2 = tf.get_variable("b2", [12, 1], initializer = tf.zeros_initializer())
    W3 = tf.get_variable("W3", [6, 12], initializer = tf.contrib.layers.xavier_initializer(seed=1))
    b3 = tf.get_variable("b3", [6, 1], initializer = tf.zeros_initializer())

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}

    return parameters

tf.reset_default_graph()
with tf.Session() as sess:
    parameters = initialize_parameters()
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))

Output:

W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>
b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref>
W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref>
b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>
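By default, tf.contrib.layers.xavier_initializer samples from a uniform distribution with limit sqrt(6 / (fan_in + fan_out)) (Glorot initialization), which keeps activation variance roughly constant across layers. A NumPy sketch of that scheme, assuming the default uniform variant:

def xavier_uniform_np(shape, seed=1):
    # Glorot/Xavier uniform: limit = sqrt(6 / (fan_in + fan_out))
    fan_out, fan_in = shape                        # weight shape is (units, inputs) here
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.RandomState(seed)
    return rng.uniform(-limit, limit, size=shape)

W1_np = xavier_uniform_np((25, 12288))
print(W1_np.min(), W1_np.max())                    # bounded by +/- sqrt(6/12313)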

2.3 Forward propagation

# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model:
    LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """

    # Retrieve the parameters from the dictionary "parameters"
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    ### START CODE HERE ### (approx. 5 lines)    # Numpy Equivalents:
    Z1 = tf.add(tf.matmul(W1, X), b1)
    A1 = tf.nn.relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)
    A2 = tf.nn.relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)
    ### END CODE HERE ###

    return Z3

tf.reset_default_graph()

with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    print("Z3 = " + str(Z3))

Output:

Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)
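The comment in the code points to the NumPy equivalents; for comparison, a sketch of the same forward pass in NumPy (ReLU is just np.maximum):

def forward_propagation_np(X, parameters):
    # NumPy mirror of the TF graph: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR
    W1, b1 = parameters['W1'], parameters['b1']
    W2, b2 = parameters['W2'], parameters['b2']
    W3, b3 = parameters['W3'], parameters['b3']
    Z1 = np.dot(W1, X) + b1
    A1 = np.maximum(0, Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = np.maximum(0, Z2)
    Z3 = np.dot(W3, A2) + b3       # no softmax here: it is folded into the cost
    return Z3

Note that Z3 stops at the last linear unit: tf.nn.softmax_cross_entropy_with_logits in the next section applies the softmax internally.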

2.4 Computing the cost

# GRADED FUNCTION: compute_cost

def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost - Tensor of the cost function
    """

    # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)

    ### START CODE HERE ### (1 line of code)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))
    ### END CODE HERE ###

    return cost

tf.reset_default_graph()

with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    print("cost = " + str(cost))

Output:

cost = Tensor("Mean:0", shape=(), dtype=float32)

Note: the warning that appears seems to be a version issue; it can be safely ignored.

2.5 Backward propagation and parameter updates

All of the backpropagation and parameter updates are handled in a single line of code, and it is very easy to incorporate this line into the model.

For example, for gradient descent the optimizer would be:

optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)

To perform the optimization, you would run:

_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
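Under the hood, minimize(cost) is shorthand for computing the gradients of cost with respect to every trainable variable and then applying the update; in the TF 1.x API the two steps can be spelled out explicitly (a sketch using the same variables as above):

optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate)
grads_and_vars = optimizer.compute_gradients(cost)      # list of (gradient, variable) pairs
train_step = optimizer.apply_gradients(grads_and_vars)  # the actual update op
_, c = sess.run([train_step, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})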

2.6 Building the model

def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.005,
          num_epochs = 1500, minibatch_size = 64, print_cost = True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 100 epochs

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """

    ops.reset_default_graph()    # to be able to rerun the model without overwriting tf variables
    tf.set_random_seed(1)        # to keep consistent results
    seed = 3                     # to keep consistent results
    (n_x, m) = X_train.shape     # (n_x: input size, m : number of examples in the train set)
    n_y = Y_train.shape[0]       # n_y : output size
    costs = []                   # To keep track of the cost

    # Create Placeholders of shape (n_x, n_y)
    ### START CODE HERE ### (1 line)
    X, Y = create_placeholders(n_x, n_y)
    ### END CODE HERE ###

    # Initialize parameters
    ### START CODE HERE ### (1 line)
    parameters = initialize_parameters()
    ### END CODE HERE ###

    # Forward propagation: Build the forward propagation in the tensorflow graph
    ### START CODE HERE ### (1 line)
    Z3 = forward_propagation(X, parameters)
    ### END CODE HERE ###

    # Cost function: Add cost function to tensorflow graph
    ### START CODE HERE ### (1 line)
    cost = compute_cost(Z3, Y)
    ### END CODE HERE ###

    # Backpropagation: Define the tensorflow optimizer (gradient descent is used here,
    # although the original assignment suggests an AdamOptimizer).
    ### START CODE HERE ### (1 line)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
    ### END CODE HERE ###

    # Initialize all the variables
    init = tf.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.Session() as sess:

        # Run the initialization
        sess.run(init)

        # Do the training loop
        for epoch in range(num_epochs):

            epoch_cost = 0.                            # Defines a cost related to an epoch
            num_minibatches = int(m / minibatch_size)  # number of minibatches of size minibatch_size in the train set
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:

                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch

                # IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the "optimizer" and the "cost";
                # the feed_dict should contain a minibatch for (X,Y).
                ### START CODE HERE ### (1 line)
                _, minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
                ### END CODE HERE ###

                epoch_cost += minibatch_cost / num_minibatches

            # Record the cost every 5 epochs and print it every 100 epochs
            if epoch % 5 == 0:
                costs.append(epoch_cost)
            if print_cost and epoch % 100 == 0:
                print("epoch = " + str(epoch) + "  epoch_cost = " + str(epoch_cost))

        # plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('iterations (per fives)')
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

        # lets save the parameters in a variable
        parameters = sess.run(parameters)
        print ("Parameters have been trained!")

        # Calculate the correct predictions
        correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))

        # Calculate accuracy on the test set
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

        print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

        return parameters

parameters = model(X_train, Y_train, X_test, Y_test)

Note: during my hyperparameter tuning I found that batch_size = 64 with lr = 0.001 gave the best results.

Cost curves were plotted for the following configurations:

batch_size = 64, lr = 0.005
batch_size = 64, lr = 0.0001
batch_size = 64, lr = 0.001
batch_size = 32, lr = 0.005

For image detection I tested on what I believe were screenshots of training-set images: the digits 1-5 were recognized correctly, but 0 still was not.
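For reference, tf_utils also exposes the predict helper imported at the top; a sketch for testing your own screenshot, assuming predict(X, parameters) accepts a flattened (12288, 1) image (the file name and the use of PIL here are my assumptions, not the original post's code):

from PIL import Image

img = Image.open("my_sign.png").resize((64, 64))    # "my_sign.png" is a hypothetical file
x = np.asarray(img, dtype=np.float64)[:, :, :3]     # keep RGB, drop any alpha channel
x = x.reshape(64 * 64 * 3, 1) / 255.                # flatten + normalize like X_train
print("prediction: " + str(np.squeeze(predict(x, parameters))))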
