Toutiao ID: 钱多多先森. Follow me for more on AI, CV, digital gadgets, and personal finance, and let's grow together.

In deep learning, data, models, parameters, non-linearities, forward-propagation prediction, and backward-differentiation parameter updates are all foundational building blocks. What exactly are these basics, how do they work, and how can they be implemented in Python? That is what this article covers.

The sigmoid activation function

import numpy as np
import matplotlib.pyplot as plt
import h5py
import sklearn
import sklearn.datasets
import sklearn.linear_model
import scipy.io

def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(x)
    """
    s = 1/(1+np.exp(-x))
    return s
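
As a quick sanity check (a minimal sketch that only assumes the sigmoid function defined above), you can verify that sigmoid maps 0 to 0.5 and squashes large negative and positive inputs toward 0 and 1:

# Quick sanity check of sigmoid (uses the definition above)
x = np.array([-10.0, 0.0, 10.0])
print(sigmoid(x))   # roughly [4.5e-05, 0.5, 0.99995]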

The ReLU activation function

def relu(x):
    """
    Compute the relu of x

    Arguments:
    x -- A scalar or numpy array of any size.

    Return:
    s -- relu(x)
    """
    s = np.maximum(0, x)
    return s
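
Similarly for relu (a minimal check using the function above): negative inputs are clipped to 0 and positive inputs pass through unchanged:

# Quick sanity check of relu (uses the definition above)
x = np.array([-3.0, 0.0, 2.5])
print(relu(x))   # [0., 0., 2.5]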

Initializing the layer parameters

Initializing the layer parameters simply means assigning initial values to the weights and biases inside the network model (roughly speaking).


def initialize_parameters(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)

    Tips:
    - For example: the layer_dims for the "Planar Data classification model" would have been [2,2,1].
      This means W1's shape was (2,2), b1 was (2,1), W2 was (1,2) and b2 was (1,1). Now you have to generalize it!
    - In the for loop, use parameters['W' + str(l)] to access Wl, where l is the iterative integer.
    """
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)  # number of layers in the network

    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) / np.sqrt(layer_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))

        assert parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1])
        assert parameters['b' + str(l)].shape == (layer_dims[l], 1)

    return parameters
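
For instance, with a hypothetical 2-4-1 architecture (layer_dims = [2, 4, 1], chosen purely for illustration), the initialized shapes come out as follows:

# Illustrative check of the initialization (layer sizes chosen arbitrarily)
params = initialize_parameters([2, 4, 1])
for name, value in params.items():
    print(name, value.shape)
# W1 (4, 2), b1 (4, 1), W2 (1, 4), b2 (1, 1)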

Forward propagation (FP)

The pass from the network's input to its final output is called the forward pass. Forward propagation involves three pieces: the input, the intermediate network parameters, and the output. The process is implemented below:

def forward_propagation(X, parameters):
    """
    Implements the forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID

    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape ()
                    b1 -- bias vector of shape ()
                    W2 -- weight matrix of shape ()
                    b2 -- bias vector of shape ()
                    W3 -- weight matrix of shape ()
                    b3 -- bias vector of shape ()

    Returns:
    A3 -- output of the final sigmoid activation
    cache -- tuple of intermediate values needed for backward propagation
    """
    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)

    return A3, cache
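
A minimal sketch of calling forward_propagation: since the function expects exactly three layers (W1..W3), the toy layer_dims below has four entries; the input size and number of examples are arbitrary assumptions for illustration:

# Toy forward pass (shapes chosen arbitrarily for illustration)
np.random.seed(1)
X_toy = np.random.randn(2, 5)                  # 2 features, 5 examples
params = initialize_parameters([2, 4, 3, 1])   # yields W1..W3, b1..b3
A3, cache = forward_propagation(X_toy, params)
print(A3.shape)   # (1, 5): one probability per example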

Backward propagation (BP)

Backward propagation is used to optimize the network: starting from the discrepancy between the output layer's result and the true values, the parameters are adjusted layer by layer. This learning process is repeated iteratively.

def backward_propagation(X, Y, cache):
    """
    Implement the backward propagation for the LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID network.

    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat)
    cache -- cache output from forward_propagation()

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y                                # error at the output layer
    dW3 = 1./m * np.dot(dZ3, A2.T)              # matrix product
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims=True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))    # element-wise product; the 0/1 mask is the derivative of ReLU
    dW2 = 1./m * np.dot(dZ2, A1.T)
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T)
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
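
Continuing the toy example above, the gradients returned by backward_propagation have the same shapes as the parameters they correspond to (the labels here are invented purely for the check):

# Toy backward pass (continues the forward-pass example above)
Y_toy = np.array([[1, 0, 1, 0, 1]])   # invented labels for illustration
grads = backward_propagation(X_toy, Y_toy, cache)
print(grads["dW1"].shape == params["W1"].shape)   # True
print(grads["db3"].shape == params["b3"].shape)   # True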

Updating the model parameters (weights w, biases b)

def update_parameters(parameters, grads, learning_rate):
    """
    Update parameters using gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(i)] = Wi
                    parameters['b' + str(i)] = bi
    grads -- python dictionary containing your gradients for each parameter:
                    grads['dW' + str(i)] = dWi
                    grads['db' + str(i)] = dbi
    learning_rate -- the learning rate, scalar.

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    n = len(parameters) // 2  # number of layers in the neural network

    # Update rule for each parameter
    for k in range(n):
        parameters["W" + str(k+1)] = parameters["W" + str(k+1)] - learning_rate * grads["dW" + str(k+1)]
        parameters["b" + str(k+1)] = parameters["b" + str(k+1)] - learning_rate * grads["db" + str(k+1)]

    return parameters
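
One gradient-descent step then looks like this (a sketch continuing the toy example above; the learning rate 0.1 is an arbitrary choice):

# One parameter update on the toy example (learning rate chosen arbitrarily)
W1_before = params["W1"].copy()
params = update_parameters(params, grads, learning_rate=0.1)
print(np.allclose(params["W1"], W1_before - 0.1 * grads["dW1"]))   # True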

Prediction via forward propagation

The network runs a forward pass, and any prediction above the threshold (0.5 here) is set to 1.

def predict(X, y, parameters):
    """
    This function is used to predict the results of an n-layer neural network.

    Arguments:
    X -- data set of examples you would like to label
    y -- true labels, used only to print the accuracy
    parameters -- parameters of the trained model

    Returns:
    p -- predictions for the given dataset X
    """
    m = X.shape[1]
    p = np.zeros((1, m), dtype=int)

    # Forward propagation
    a3, caches = forward_propagation(X, parameters)

    # convert probabilities to 0/1 predictions
    for i in range(0, a3.shape[1]):
        if a3[0, i] > 0.5:
            p[0, i] = 1
        else:
            p[0, i] = 0

    # print results
    # print("predictions: " + str(p[0, :]))
    # print("true labels: " + str(y[0, :]))
    print("Accuracy: " + str(np.mean((p[0, :] == y[0, :]))))

    return p
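
As a side note, the per-example loop above can also be written as a single vectorized thresholding step; this is only an equivalent sketch on the toy example, not a change to the function itself:

# Vectorized equivalent of the 0/1 thresholding loop in predict()
a3, _ = forward_propagation(X_toy, params)
p = (a3 > 0.5).astype(int)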

Computing the cost function

Taking the cross-entropy loss as an example, the cost over m examples is computed as:

J = -(1/m) * Σ_i [ y(i) · log(a3(i)) + (1 − y(i)) · log(1 − a3(i)) ]

def compute_cost(a3, Y):
    """
    Implement the cost function

    Arguments:
    a3 -- post-activation, output of forward propagation
    Y -- "true" labels vector, same shape as a3

    Returns:
    cost -- value of the cost function
    """
    m = Y.shape[1]
    logprobs = np.multiply(-np.log(a3), Y) + np.multiply(-np.log(1 - a3), 1 - Y)
    cost = 1./m * np.nansum(logprobs)

    return cost
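
Putting all of the pieces above together, a minimal training loop might look like the sketch below. The toy data, layer sizes, learning rate, and iteration count are all arbitrary assumptions chosen only for illustration:

# Minimal end-to-end training sketch using the functions defined above
np.random.seed(1)
X_train = np.random.randn(2, 200)                              # 2 features, 200 examples
Y_train = (X_train[0:1, :] * X_train[1:2, :] > 0).astype(int)  # invented labels for illustration

parameters = initialize_parameters([2, 4, 3, 1])
for i in range(2000):
    A3, cache = forward_propagation(X_train, parameters)
    cost = compute_cost(A3, Y_train)
    grads = backward_propagation(X_train, Y_train, cache)
    parameters = update_parameters(parameters, grads, learning_rate=0.5)
    if i % 500 == 0:
        print("iteration {}: cost {:.4f}".format(i, cost))

predictions = predict(X_train, Y_train, parameters)            # prints the training accuracy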

Conclusion

Through this article, you should have a fresh view of the foundational building blocks of deep learning: data, models, parameters, non-linearities, forward-propagation prediction, backward-differentiation parameter updates, and so on. In everyday study, it is not enough to know that tf.sigmoid gives you a non-linearity; digging into the underlying code deepens our understanding of deep learning.

Finally, thank you for following 钱多多先森, a fellow learner focused on AI, CV, digital gadgets, and personal finance. Follow me and let's grow together.

Previous posts:

  1. Python: Say goodbye to print? An excellent debugging tool --- pysnooper
  2. Funds are easy, stock trading is stressful, managing money is hard: a breakdown of my personal finances
  3. Pair an iPad with this keyboard and you get a very usable MacBook
