This article is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization; many thanks to the course authors.

Course link: https://www.deeplearning.ai/deep-learning-specialization/

Welcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from FaceNet. In lecture, we also talked about DeepFace.

Face recognition problems commonly fall into two categories:

  • Face Verification - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
  • Face Recognition - "who is this person?". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.

FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.

In this assignment, you will:

  • Implement the triplet loss function
  • Use a pretrained model to map face images into 128-dimensional encodings
  • Use these encodings to perform face verification and face recognition

In this exercise, we will be using a pre-trained model which represents ConvNet activations using a "channels first" convention, as opposed to the "channels last" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community.
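As a concrete illustration (a minimal NumPy sketch; the array contents are arbitrary), here is how the two conventions lay out the same batch of images:

import numpy as np

# Two conventions for a batch of m = 2 RGB images of size 96x96:
batch_channels_first = np.zeros((2, 3, 96, 96))   # (m, n_C, n_H, n_W) -- this assignment
batch_channels_last = np.zeros((2, 96, 96, 3))    # (m, n_H, n_W, n_C) -- lecture convention

# Converting channels-last to channels-first is just a transpose of the axes:
converted = np.transpose(batch_channels_last, (0, 3, 1, 2))
print(converted.shape)   # (2, 3, 96, 96)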

Let's load the required packages.

Table of Contents

0 - Naive Face Verification

1 - Encoding face images into a 128-dimensional vector

1.1 - Using a ConvNet to compute encodings

1.2 - The Triplet Loss

2 - Loading the trained model

3 - Applying the model

3.1 - Face Verification

3.2 - Face Recognition

References:


from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks import *

%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=10000000)

0 - Naive Face Verification

In Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person!
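Here is a minimal sketch of this naive approach (assuming the two images are loaded as NumPy arrays of the same shape; the threshold value is arbitrary):

def naive_verify(img1, img2, threshold=50.0):
    # Compare two images pixel-by-pixel via the L2 distance between raw pixel values.
    dist = np.linalg.norm(img1.astype(np.float64) - img2.astype(np.float64))
    return dist < threshold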

Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on.

You'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding give more accurate judgments as to whether two pictures are of the same person.


1 - Encoding face images into a 128-dimensional vector

1.1 - Using a ConvNet to compute encodings

The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from Szegedy et al. We have provided an inception network implementation. You can look in the file inception_blocks.py to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook).

The key things you need to know are:

  • This network uses 96x96 dimensional RGB images as its input. Specifically, it inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$
  • It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector

By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
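For instance (a minimal sketch; the image paths are illustrative placeholders, img_to_encoding comes from fr_utils, and FRmodel is the pre-trained model loaded in Section 2):

enc_a = img_to_encoding("images/person_a.jpg", FRmodel)   # shape (1, 128)
enc_b = img_to_encoding("images/person_b.jpg", FRmodel)
dist = np.linalg.norm(enc_a - enc_b)   # small distance => likely the same person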

So, an encoding is a good one if:

  • The encodings of two images of the same person are quite similar to each other
  • The encodings of two images of different persons are very different

The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.


1.2 - The Triplet Loss

For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.

Training will use triplets of images $(A, P, N)$:

  • A is an "Anchor" image--a picture of a person.
  • P is a "Positive" image--a picture of the same person as the Anchor image.
  • N is a "Negative" image--a picture of a different person than the Anchor image.

These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example.

You'd like to make sure that the encoding $f(A^{(i)})$ of an individual's Anchor image is closer to the encoding of the Positive image $f(P^{(i)})$ than to that of the Negative image $f(N^{(i)})$ by at least a margin $\alpha$:

$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$

You would thus like to minimize the following "triplet cost":

$$\mathcal{J} = \sum^{N}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$

Here, we are using the notation "$[z]_+$" to denote $\max(z, 0)$.

Notes:

  • The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
  • The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet; you want this to be relatively large, so it makes sense to have a minus sign preceding it.
  • $\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\alpha = 0.2$.

Most implementations also normalize the encoding vectors to have norm equal to one (i.e., $\mid \mid f(img) \mid \mid_2 = 1$); you won't have to worry about that here.
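If you did want to enforce unit-norm encodings, a minimal NumPy sketch would be:

def normalize_rows(encodings):
    # Rescale each 128-dimensional encoding to unit L2 norm, so that ||f(img)||_2 = 1.
    norms = np.linalg.norm(encodings, axis=1, keepdims=True)
    return encodings / norms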

Exercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps:

  • Compute the distance between the encodings of "anchor" and "positive" :  $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
  • Compute the distance between the encodings of "anchor" and "negative":  $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
  • Compute the formula per training example:  $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
  • Compute the full formula by taking the max with zero and summing over the training examples:

$$\mathcal{J} = \sum^{N}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$

Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.reduce_mean()`, `tf.maximum()`.

def triplet_loss(y_true, y_pred, alpha = 0.2):
    """
    Implementation of the triplet loss as defined by formula (3)

    Arguments:
    y_true -- true labels, required when you define a loss in Keras; you don't need it in this function.
    y_pred -- python list containing three objects:
            anchor -- the encodings for the anchor images, of shape (None, 128)
            positive -- the encodings for the positive images, of shape (None, 128)
            negative -- the encodings for the negative images, of shape (None, 128)

    Returns:
    loss -- real number, value of the loss
    """
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]

    # Step 1: Compute the (encoding) distance between the anchor and the positive
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis = -1)
    # Step 2: Compute the (encoding) distance between the anchor and the negative
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis = -1)
    # Step 3: Subtract the two previous distances and add alpha.
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))

    return loss
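You can sanity-check the implementation by evaluating the loss on random encodings (a sketch using the TF1-style session API this notebook relies on; the means and standard deviations below are arbitrary):

with tf.Session() as test:
    tf.set_random_seed(1)
    y_true = (None, None, None)   # unused by triplet_loss
    y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed=1),
              tf.random_normal([3, 128], mean=1, stddev=1, seed=1),
              tf.random_normal([3, 128], mean=3, stddev=4, seed=1))
    loss = triplet_loss(y_true, y_pred)
    print("loss = " + str(loss.eval()))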

2 - Loading the trained model

FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.

# FRmodel is the Inception network defined in inception_blocks.py, built earlier with, e.g.:
# FRmodel = faceRecoModel(input_shape=(3, 96, 96))
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)

Here are some examples of distances between the encodings of three individuals:


3 - Applying the model

Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment.

However, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food.

So, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a Face verification system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be.

3.1 - Face Verification

Let's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use img_to_encoding(image_path, model) which basically runs the forward propagation of the model on the specified image.
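For example (a minimal sketch; the names and image paths are illustrative placeholders):

database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)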

Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.

Exercise: Implement the verify() function which checks if the front-door camera picture (image_path) is actually the person called "identity". You will have to go through the following steps:

  1. Compute the encoding of the image from image_path
  2. Compute the distance between this encoding and the encoding of the identity image stored in the database
  3. Open the door if the distance is less than 0.7, else do not open.

As presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)

def verify(image_path, identity, database, model):
    """
    Function that verifies if the person on the "image_path" image is "identity".

    Arguments:
    image_path -- path to an image
    identity -- string, name of the person whose identity you'd like to verify. Has to be a resident of the Happy house.
    database -- python dictionary mapping names of allowed people (strings) to their encodings (vectors).
    model -- your Inception model instance in Keras

    Returns:
    dist -- distance between the image_path encoding and the encoding of "identity" in the database.
    door_open -- True, if the door should open. False otherwise.
    """
    # Step 1: Compute the encoding for the image. Use img_to_encoding(), see example above. (≈ 1 line)
    encoding = img_to_encoding(image_path, model)

    # Step 2: Compute distance with identity's image (≈ 1 line)
    dist = np.linalg.norm(encoding - database[identity])

    # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
    if dist < 0.7:
        print("It's " + str(identity) + ", welcome home!")
        door_open = True
    else:
        print("It's not " + str(identity) + ", please go away")
        door_open = False

    return dist, door_open
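For example, if Younes shows up at the door (a sketch; "images/camera_0.jpg" is an illustrative path to the front-door camera picture):

verify("images/camera_0.jpg", "younes", database, FRmodel)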

3.2 - Face Recognition

Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in!

To reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them!

You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input.

Exercise: Implement who_is_it(). You will have to go through the following steps:

  1. Compute the target encoding of the image from image_path
  2. Find the encoding from the database that has the smallest distance to the target encoding.
    • Initialize the min_dist variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.
    • Loop over the database dictionary's names and encodings. To loop use for (name, db_enc) in database.items().
      • Compute L2 distance between the target "encoding" and the current "encoding" from the database.
      • If this distance is less than the min_dist, then set min_dist to dist, and identity to name.
# GRADED FUNCTION: who_is_it

def who_is_it(image_path, database, model):
    """
    Implements face recognition for the happy house by finding who is the person on the image_path image.

    Arguments:
    image_path -- path to an image
    database -- database containing image encodings along with the name of the person on the image
    model -- your Inception model instance in Keras

    Returns:
    min_dist -- the minimum distance between the image_path encoding and the encodings from the database
    identity -- string, the name prediction for the person on image_path
    """
    ## Step 1: Compute the target "encoding" for the image. Use img_to_encoding(), see example above. (≈ 1 line)
    encoding = img_to_encoding(image_path, model)

    ## Step 2: Find the closest encoding ##
    # Initialize "min_dist" to a large value, say 100 (≈ 1 line)
    min_dist = 100

    # Loop over the database dictionary's names and encodings.
    for (name, db_enc) in database.items():
        # Compute L2 distance between the target "encoding" and the current "db_enc" from the database. (≈ 1 line)
        dist = np.linalg.norm(encoding - db_enc)
        # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
        if dist < min_dist:
            min_dist = dist
            identity = name

    if min_dist > 0.7:
        print("Not in the database.")
    else:
        print("it's " + str(identity) + ", the distance is " + str(min_dist))

    return min_dist, identity
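For example (a sketch; the image path is again an illustrative placeholder):

who_is_it("images/camera_0.jpg", database, FRmodel)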

Your Happy House is running well. It only lets in authorized persons, and people don't need to carry an ID card around anymore!

You've now seen how a state-of-the-art face recognition system works.

Although we won't implement it here, here're some ways to further improve the algorithm:

  • Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increase accuracy; see the sketch after this list.
  • Crop the images to just contain the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.
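As an illustration of the first idea, here is a minimal sketch (not part of the original assignment) in which a hypothetical database maps each name to a list of encodings and you compare against the closest one:

def min_dist_to_person(encoding, person_encodings):
    # Distance to the closest of several stored encodings for one person.
    return min(np.linalg.norm(encoding - db_enc) for db_enc in person_encodings)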

**What you should remember**:

  • Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem.
  • The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.
  • The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person.

Congrats on finishing this assignment!

References:

  • Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering
  • Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). DeepFace: Closing the gap to human-level performance in face verification
  • The pretrained model we use is inspired by Victor Sy Wang's implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.
  • Our implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet
