This article is excerpted from a programming assignment in Andrew Ng's Deep Learning Specialization; thanks to the course team.

Course link: https://www.deeplearning.ai/deep-learning-specialization/

Contents

1 - Neural Network model

2 - Zero initialization

3 - Random initialization (master this)

4 - He initialization (understand this)


To get started, run the following cell to load the packages and the planar dataset you will try to classify.

import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
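
If you want to confirm what was loaded before moving on, a quick optional check of the array shapes looks like this (it assumes load_dataset returns features as (n_x, m) arrays and labels as (1, m) arrays, which is what the model code below expects):

print("train_X shape: " + str(train_X.shape))   # expected: (2, number of training examples)
print("train_Y shape: " + str(train_Y.shape))   # expected: (1, number of training examples)
print("test_X shape:  " + str(test_X.shape))
print("test_Y shape:  " + str(test_Y.shape))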


1 - Neural Network model

You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:

  • Zeros initialization -- setting initialization = "zeros" in the input argument.
  • Random initialization -- setting initialization = "random" in the input argument. This initializes the weights to large random values.
  • He initialization -- setting initialization = "he" in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.

Instructions: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this model() calls.

def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
    learning_rate -- learning rate for gradient descent
    num_iterations -- number of iterations to run gradient descent
    print_cost -- if True, print the cost every 1000 iterations
    initialization -- flag to choose which initialization to use ("zeros","random" or "he")

    Returns:
    parameters -- parameters learnt by the model
    """

    grads = {}
    costs = []                            # to keep track of the loss
    m = X.shape[1]                        # number of examples
    layers_dims = [X.shape[0], 10, 5, 1]

    # Initialize parameters dictionary.
    if initialization == "zeros":
        parameters = initialize_parameters_zeros(layers_dims)
    elif initialization == "random":
        parameters = initialize_parameters_random(layers_dims)
    elif initialization == "he":
        parameters = initialize_parameters_he(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        a3, cache = forward_propagation(X, parameters)

        # Loss
        cost = compute_loss(a3, Y)

        # Backward propagation.
        grads = backward_propagation(X, Y, cache)

        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print the loss every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
            costs.append(cost)

    # plot the loss
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters

2 - Zero initialization

There are two types of parameters to initialize in a neural network:

  • the weight matrices W^[1], W^[2], ..., W^[L]
  • the bias vectors b^[1], b^[2], ..., b^[L]

Exercise: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.

def initialize_parameters_zeros(layers_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """

    parameters = {}
    L = len(layers_dims)            # number of layers in the network

    for l in range(1, L):
        parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))

    return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))W1 = [[0. 0. 0.][0. 0. 0.]]
b1 = [[0.][0.]]
W2 = [[0. 0.]]
b2 = [[0.]]parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, np.squeeze(train_Y))

The model is predicting 0 for every example.

In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, so you might as well be training a neural network with n^[l] = 1 for every layer; the network is no more powerful than a linear classifier such as logistic regression.
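
To make the symmetry problem concrete, here is a small illustrative sketch (not part of the original assignment; the toy sizes and variable names are made up for this demonstration). With all-zero weights and ReLU hidden units, the hidden activations are all zero, so the gradients dW1 and dW2 come out exactly zero and only the output bias ever moves, which is why the model ends up predicting the same value for every example:

import numpy as np

np.random.seed(0)
X = np.random.randn(2, 5)                  # 5 toy examples with 2 features
Y = (np.random.rand(1, 5) > 0.5) * 1.0     # toy binary labels

# All-zero parameters for a 2 -> 3 -> 1 network
W1 = np.zeros((3, 2)); b1 = np.zeros((3, 1))
W2 = np.zeros((1, 3)); b2 = np.zeros((1, 1))

# Forward pass: LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = W1 @ X + b1
A1 = np.maximum(0, Z1)                     # all zeros
Z2 = W2 @ A1 + b2
A2 = 1 / (1 + np.exp(-Z2))                 # 0.5 for every example

# Backward pass for the cross-entropy loss
m = X.shape[1]
dZ2 = A2 - Y
dW2 = (dZ2 @ A1.T) / m                     # exactly zero
dZ1 = (W2.T @ dZ2) * (Z1 > 0)              # exactly zero
dW1 = (dZ1 @ X.T) / m                      # exactly zero
db2 = np.sum(dZ2, axis=1, keepdims=True) / m

print("dW1:", dW1.ravel(), "dW2:", dW2.ravel(), "db2:", db2.ravel())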

**What you should remember**:

  • The weights W^[l] should be initialized randomly to break symmetry.
  • It is, however, okay to initialize the biases b^[l] to zeros. Symmetry is still broken so long as W^[l] is initialized randomly.


3 - Random initialization (master this)

To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.

Exercise: Implement the following function to initialize your weights to large random values (scaled by *10) and your biases to zeros. Use np.random.randn(..,..) * 10 for weights and np.zeros((.., ..)) for biases. We are using a fixed np.random.seed(..) to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.

def initialize_parameters_random(layers_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """

    np.random.seed(3)               # This seed makes sure your "random" numbers will be the same as ours
    parameters = {}
    L = len(layers_dims)            # integer representing the number of layers

    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))

    return parameters
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)

**In summary**:

  • Initializing weights to very large random values does not work well.
  • Hopefully, initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!
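
As an illustrative aside (not part of the original notebook; the layer sizes below are hypothetical and chosen only for the demonstration), the following sketch compares the output-layer sigmoid activations produced by large (*10) versus small (*0.01) random weights on standard-normal inputs. With large weights, the pre-activations are huge, the sigmoid saturates near 0 or 1, and a wrong prediction incurs a very large loss, which is why training starts off so poorly:

import numpy as np

np.random.seed(1)
n_x, n_h, m = 2, 10, 1000                  # hypothetical layer sizes and number of examples
X = np.random.randn(n_x, m)

def final_activations(scale):
    # One LINEAR -> RELU -> LINEAR -> SIGMOID pass with weights drawn at the given scale
    W1 = np.random.randn(n_h, n_x) * scale
    W2 = np.random.randn(1, n_h) * scale
    A1 = np.maximum(0, W1 @ X)
    return 1 / (1 + np.exp(-(W2 @ A1)))

for scale in (10, 0.01):
    A2 = final_activations(scale)
    saturated = np.mean((A2 < 0.01) | (A2 > 0.99))
    print("scale", scale, "- fraction of near-saturated sigmoid outputs:", round(float(saturated), 2))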


4 - He initialization (understand this)

Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights ?[?]W[l] of sqrt(1./layers_dims[l-1]) where He initialization would use sqrt(2./layers_dims[l-1]).)

Exercise: Implement the following function to initialize your parameters with He initialization.

Hint: This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you will multiply it by sqrt(2./layers_dims[l-1]), which is what He initialization recommends for layers with a ReLU activation.

# GRADED FUNCTION: initialize_parameters_he

def initialize_parameters_he(layers_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """

    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1        # integer representing the number of layers

    for l in range(1, L + 1):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2 / layers_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###

    return parameters
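
Mirroring the earlier sections, you can sanity-check the function on a small layer specification and then train the same model with initialization = "he" (a usage sketch following the pattern of the cells above; the exact printed values are omitted here):

parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))

parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)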
