Deep Neural Networks

  • Overview
  • Deep neural network overview
  • Forward and backward propagation
    • Forward propagation
    • Backward propagation
  • Building blocks of a neural network
  • Hyperparameters
  • Programming assignment: helper functions
    • Initialization
      • For a 2-layer neural network
      • Initializing an L-layer neural network
    • Forward propagation module
      • Linear forward
      • Linear forward with activation
      • L-layer forward pass
    • Cost function
    • Backward pass
      • Linear backward
      • Linear-Activation backward
      • L-layer backward pass
    • Parameter update
  • Programming assignment: building the neural network
    • Loading the dataset
    • Network architecture
      • Two-layer network (single hidden layer)
      • L-layer neural network
    • Implementing the two-layer network
    • Implementing the L-layer network

Overview

This lesson mainly brings the earlier material together so that a complete neural network can be implemented.
For the assignments, the first notebook builds the helper functions; once that is done, the second one follows naturally, and you only need to pay attention to the parameter choices.

The related exercises can be downloaded for free from my resources; I will upload them as soon as possible.

Deep neural network overview

The neural networks introduced before had only one hidden layer; below is a schematic of a four-layer neural network.

The first layer (the first hidden layer) has 5 units, the second also has 5, and the third has 3.
In notation: n^{[0]} = 3, n^{[1]} = 5, n^{[2]} = 5, n^{[3]} = 3, n^{[4]} = 1.
For layer l, a^{[l]} denotes the activations (the post-activation output) of that layer; the forward pass computes a^{[l]}.
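In the list form used by the assignment code later on, these layer sizes would be written as (an illustrative example):

layer_dims = [3, 5, 5, 3, 1]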

Forward and backward propagation

Forward propagation


Forward propagation is initialized with the input X.
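For each layer l, the forward step computes the standard pair of equations from the course:

Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}
A^{[l]} = g^{[l]}(Z^{[l]})

with A^{[0]} = X; the final activation A^{[L]} is the model's prediction (sigmoid in the output layer, ReLU in the hidden layers of this assignment).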

Backward propagation
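The backward pass runs through the layers in reverse, consuming the values cached during the forward pass. For layer l, the standard gradients are:

dZ^{[l]} = dA^{[l]} * g^{[l]'}(Z^{[l]})        (elementwise product)
dW^{[l]} = (1/m) dZ^{[l]} A^{[l-1]T}
db^{[l]} = (1/m) Σ_i dZ^{[l](i)}               (sum over the m examples)
dA^{[l-1]} = W^{[l]T} dZ^{[l]}

These are exactly the quantities computed by linear_backward and linear_activation_backward in the assignment below.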

Building blocks of a neural network
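In short, every layer contributes a pair of building blocks: a forward block that computes Z^{[l]} and A^{[l]} and stores a cache of (A^{[l-1]}, W^{[l]}, b^{[l]}, Z^{[l]}), and a matching backward block that reads that cache to produce dW^{[l]}, db^{[l]} and dA^{[l-1]}. Chaining L-1 LINEAR->RELU blocks followed by one LINEAR->SIGMOID block, and collecting the caches in a list, gives the full model built in the assignment below.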

Hyperparameters

The learning rate α, the number of gradient-descent iterations, the number of hidden layers L, the number of units n^{[l]} in each hidden layer, the choice of activation function, and so on: these hyperparameters ultimately control the final values of W and b.

Programming assignment: helper functions

First, import the required packages.

import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

Before building a complete deep neural network, a few "helper functions" need to be implemented first.

Initialization

For a 2-layer neural network

def initialize_parameters(n_x, n_h, n_y):
    """
    Argument:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer

    Returns:
    parameters -- python dictionary containing your parameters:
                    W1 -- weight matrix of shape (n_h, n_x)
                    b1 -- bias vector of shape (n_h, 1)
                    W2 -- weight matrix of shape (n_y, n_h)
                    b2 -- bias vector of shape (n_y, 1)
    """
    np.random.seed(1)

    ### START CODE HERE ### (≈ 4 lines of code)
    # Note: the shape passed to np.zeros must be a tuple, so mind the extra pair of parentheses!
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))
    ### END CODE HERE ###

    assert(W1.shape == (n_h, n_x))
    assert(b1.shape == (n_h, 1))
    assert(W2.shape == (n_y, n_h))
    assert(b2.shape == (n_y, 1))

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters
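A quick sanity check (the layer sizes here are arbitrary, just for illustration):

parameters = initialize_parameters(3, 2, 1)
print(parameters["W1"].shape)   # (2, 3)
print(parameters["b1"].shape)   # (2, 1)
print(parameters["W2"].shape)   # (1, 2)
print(parameters["b2"].shape)   # (1, 1)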

Initializing an L-layer neural network

For a multi-layer network, make sure the layer dimensions match up during initialization.

def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)            # number of layers in the network

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
        ### END CODE HERE ###

        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))

    return parameters
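For example, with hypothetical layer sizes [5, 4, 3], the function returns W1 of shape (4, 5), b1 of shape (4, 1), W2 of shape (3, 4), and b2 of shape (3, 1):

parameters = initialize_parameters_deep([5, 4, 3])
print(parameters["W1"].shape)   # (4, 5)
print(parameters["W2"].shape)   # (3, 4)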

Forward propagation module

Linear forward

Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, where A^{[0]} = X.

def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called pre-activation parameter
    cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
    """
    ### START CODE HERE ### (≈ 1 line of code)
    Z = np.dot(W, A) + b
    ### END CODE HERE ###

    assert(Z.shape == (W.shape[0], A.shape[1]))
    cache = (A, W, b)

    return Z, cache
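A small shape check with randomly generated inputs (the sizes are illustrative):

A = np.random.randn(3, 2)     # 3 units in the previous layer, 2 examples
W = np.random.randn(1, 3)     # 1 unit in the current layer
b = np.random.randn(1, 1)
Z, linear_cache = linear_forward(A, W, b)
print(Z.shape)                # (1, 2)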

Linear forward with activation

def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value
    cache -- a python dictionary containing "linear_cache" and "activation_cache";
             stored for computing the backward pass efficiently
    """
    if activation == "sigmoid":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        ### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
        ### END CODE HERE ###

    elif activation == "relu":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        ### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)
        ### END CODE HERE ###

    assert (A.shape == (W.shape[0], A_prev.shape[1]))
    cache = (linear_cache, activation_cache)

    return A, cache
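Called twice on the same hypothetical inputs, once per activation:

A_prev = np.random.randn(3, 2)
W = np.random.randn(1, 3)
b = np.random.randn(1, 1)
A_sig, cache_sig = linear_activation_forward(A_prev, W, b, activation="sigmoid")
A_relu, cache_relu = linear_activation_forward(A_prev, W, b, activation="relu")
print(A_sig.shape, A_relu.shape)   # (1, 2) (1, 2)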

L-layer forward pass

def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation

    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- last post-activation value
    caches -- list of caches containing:
                every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
                the cache of linear_sigmoid_forward() (there is one, indexed L-1)
    """
    caches = []
    A = X
    L = len(parameters) // 2                  # number of layers in the neural network

    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    for l in range(1, L):
        A_prev = A
        ### START CODE HERE ### (≈ 2 lines of code)
        A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation = "relu")
        caches.append(cache)
        ### END CODE HERE ###

    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    ### START CODE HERE ### (≈ 2 lines of code)
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation = "sigmoid")
    caches.append(cache)
    ### END CODE HERE ###

    assert(AL.shape == (1, X.shape[1]))

    return AL, caches
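Putting it together with the deep initializer (the sizes below are just an example): the caches list ends up with one entry per layer, which L_model_backward will consume later.

X = np.random.randn(5, 4)                             # 5 input features, 4 examples
parameters = initialize_parameters_deep([5, 4, 3, 1])
AL, caches = L_model_forward(X, parameters)
print(AL.shape)        # (1, 4)
print(len(caches))     # 3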

Cost function

The cost function here is the same cross-entropy formula used before.
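Written out, this is:

J = -(1/m) Σ_{i=1}^{m} [ y^{(i)} log(a^{[L](i)}) + (1 - y^{(i)}) log(1 - a^{[L](i)}) ]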

def compute_cost(AL, Y):
    """
    Implement the cost function defined by equation (7).

    Arguments:
    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
    Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)

    Returns:
    cost -- cross-entropy cost
    """
    m = Y.shape[1]

    # Compute loss from aL and y.
    ### START CODE HERE ### (≈ 1 lines of code)
    cost = (-1/m) * np.sum(np.log(AL) * Y + (1 - Y) * np.log(1 - AL), axis=1, keepdims=True)
    ### END CODE HERE ###

    cost = np.squeeze(cost)      # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
    assert(cost.shape == ())

    return cost

Backward pass

Linear backward

def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = cache
    m = A_prev.shape[1]

    ### START CODE HERE ### (≈ 3 lines of code)
    dW = (1/m) * np.dot(dZ, A_prev.T)
    db = (1/m) * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)    # matrix product, not elementwise multiplication
    ### END CODE HERE ###

    assert (dA_prev.shape == A_prev.shape)
    assert (dW.shape == W.shape)
    assert (db.shape == b.shape)

    return dA_prev, dW, db
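A quick shape check with random values (illustrative sizes):

dZ = np.random.randn(1, 2)
A_prev = np.random.randn(3, 2)
W = np.random.randn(1, 3)
b = np.random.randn(1, 1)
dA_prev, dW, db = linear_backward(dZ, (A_prev, W, b))
print(dA_prev.shape, dW.shape, db.shape)   # (3, 2) (1, 3) (1, 1)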

Linear-Activation backward

def linear_activation_backward(dA, cache, activation):
    """
    Implement the backward propagation for the LINEAR->ACTIVATION layer.

    Arguments:
    dA -- post-activation gradient for current layer l
    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    linear_cache, activation_cache = cache

    if activation == "relu":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
        ### END CODE HERE ###

    elif activation == "sigmoid":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
        ### END CODE HERE ###

    return dA_prev, dW, db

L-layer backward pass

def L_model_backward(AL, Y, caches):
    """
    Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group

    Arguments:
    AL -- probability vector, output of the forward propagation (L_model_forward())
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
    caches -- list of caches containing:
                every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
                the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])

    Returns:
    grads -- A dictionary with the gradients
             grads["dA" + str(l)] = ...
             grads["dW" + str(l)] = ...
             grads["db" + str(l)] = ...
    """
    grads = {}
    L = len(caches) # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL

    # Initializing the backpropagation
    ### START CODE HERE ### (1 line of code)
    dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
    ### END CODE HERE ###

    # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]"
    ### START CODE HERE ### (approx. 2 lines)
    current_cache = caches[L-1]
    grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation="sigmoid")
    ### END CODE HERE ###

    for l in reversed(range(L - 1)):
        # lth layer: (RELU -> LINEAR) gradients.
        # Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)], grads["dW" + str(l + 1)], grads["db" + str(l + 1)]"
        ### START CODE HERE ### (approx. 5 lines)
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l+2)], current_cache, activation = "relu")
        grads["dA" + str(l + 1)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp
        ### END CODE HERE ###

    return grads
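Note the indexing convention in this version of the assignment: grads["dA" + str(l)] actually holds the gradient with respect to the activations of layer l-1 (so grads["dA" + str(L)] is dA^{[L-1]}, produced by the sigmoid layer's backward step), while grads["dW" + str(l)] and grads["db" + str(l)] refer to layer l itself. Since update_parameters only reads the dW and db entries, this offset does not affect the parameter updates.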

Parameter update
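The update applies the standard gradient-descent rule to every layer:

W^{[l]} := W^{[l]} - α dW^{[l]}
b^{[l]} := b^{[l]} - α db^{[l]}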

def update_parameters(parameters, grads, learning_rate):
    """
    Update parameters using gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients, output of L_model_backward

    Returns:
    parameters -- python dictionary containing your updated parameters
                  parameters["W" + str(l)] = ...
                  parameters["b" + str(l)] = ...
    """
    L = len(parameters) // 2 # number of layers in the neural network

    # Update rule for each parameter. Use a for loop.
    ### START CODE HERE ### (≈ 3 lines of code)
    for l in range(L):
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
    ### END CODE HERE ###

    return parameters

Programming assignment: building the neural network

The goal of this exercise is to do supervised learning with a neural network.
As before, start by importing the packages.

import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v2 import *

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

Loading the dataset

train_x_orig, train_y, test_x_orig, test_y, classes = load_data()

The code below displays an example image from the dataset and prints the number of examples and other basic information.

# Example of a picture
index = 6
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") +  " picture.")

# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]

print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))

Before feeding the examples into the network, reshape each image into a feature vector and standardize the pixel values.

# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T   # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T

# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.

print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))

Network architecture

Two-layer network (single hidden layer)

The input feature vector passes through a linear forward step with a ReLU activation, then a second linear forward step with a sigmoid activation, which produces the output.

L-layer neural network


Repeat the linear-activation (ReLU) forward step of the two-layer network L-1 times; the final step is still a linear forward followed by a sigmoid.

Implementing the two-layer network

n_x = 12288     # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)

def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
    """
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    layers_dims -- dimensions of the layers (n_x, n_h, n_y)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- If set to True, this will print the cost every 100 iterations

    Returns:
    parameters -- a dictionary containing W1, W2, b1, and b2
    """
    np.random.seed(1)
    grads = {}
    costs = []                               # to keep track of the cost
    m = X.shape[1]                           # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize parameters dictionary, by calling one of the functions you'd previously implemented
    ### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x, n_h, n_y)
    ### END CODE HERE ###

    # Get W1, b1, W2 and b2 from the dictionary parameters.
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2".
        ### START CODE HERE ### (≈ 2 lines of code)
        A1, cache1 = linear_activation_forward(X, W1, b1, activation="relu")
        A2, cache2 = linear_activation_forward(A1, W2, b2, activation="sigmoid")
        ### END CODE HERE ###

        # Compute cost
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(A2, Y)
        ### END CODE HERE ###

        # Initializing backward propagation
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

        # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
        ### START CODE HERE ### (≈ 2 lines of code)
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation="sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation="relu")
        ### END CODE HERE ###

        # Set grads['dW1'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update parameters.
        ### START CODE HERE ### (approx. 1 line of code)
        # Pass the learning_rate argument through so that changing it actually takes effect.
        parameters = update_parameters(parameters, grads, learning_rate=learning_rate)
        ### END CODE HERE ###

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Print the cost every 100 training examples
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per tens)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters

It is called as follows:

parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)


The two-layer network reaches 72% accuracy, about the same as logistic regression; adding more layers may improve the accuracy.

Implementing the L-layer network

Before building the L-layer network, the helper functions that may be needed are listed first.


Here we build a 5-layer network.

layers_dims = [12288, 20, 7, 5, 1] #  5-layer model

def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False): #lr was 0.009
    """
    Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- data, numpy array of shape (number of examples, num_px * num_px * 3)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(1)
    costs = []                         # keep track of cost

    # Parameters initialization.
    ### START CODE HERE ###
    parameters = initialize_parameters_deep(layers_dims)
    ### END CODE HERE ###

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        ### START CODE HERE ### (≈ 1 line of code)
        AL, caches = L_model_forward(X, parameters)
        ### END CODE HERE ###

        # Compute cost.
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(AL, Y)
        ### END CODE HERE ###

        # Backward propagation.
        ### START CODE HERE ### (≈ 1 line of code)
        grads = L_model_backward(AL, Y, caches)
        ### END CODE HERE ###

        # Update parameters.
        ### START CODE HERE ### (≈ 1 line of code)
        # Pass the learning_rate argument through so that changing it actually takes effect.
        parameters = update_parameters(parameters, grads, learning_rate=learning_rate)
        ### END CODE HERE ###

        # Print the cost every 100 training examples
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per tens)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
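A call mirroring the two-layer case (shown as an example invocation; the hyperparameters follow the setup used above):

parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)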

Problems encountered

The cost curve above clearly looks wrong; as it turns out, the accuracy is 74%, not that much higher than logistic regression.
After changing the learning rate learning_rate to 0.0015, the test-set accuracy rises to 80%.
