What is face recognition? Or, more basically, what is recognition? When you look at an apple, your mind immediately tells you that it is an apple. That process, your mind telling you that this is an apple, is what recognition is in simple words. So what is face recognition then? I am sure you have guessed it right. When you look at your friend walking down the street, or at a picture of him, you recognize that he is your friend Paulo. Interestingly, when you look at your friend or a picture of him, you look at his face first, before looking at anything else. Ever wondered why you do that? It is so that you can recognize him by looking at his face. Well, that is you doing face recognition.


But the real question is: how does face recognition work? It is quite simple and intuitive. Take a real-life example: when you meet someone for the first time, you don't recognize him, right? While he talks or shakes hands with you, you look at his face, eyes, nose, mouth, color, and overall look. This is your mind learning, or training, for face recognition of that person by gathering face data. Then he tells you that his name is Paulo. At this point, your mind knows that the face data it just learned belongs to Paulo. Now your mind is trained and ready to do face recognition on Paulo's face. The next time you see Paulo, or his face in a picture, you will immediately recognize him. This is how face recognition works. The more you meet Paulo, the more data your mind collects about Paulo, and especially about his face, and the better you become at recognizing him.


Now the next question is how to code face recognition with OpenCV; after all, this is the only reason you are reading this article, right? OK then. You might say that our mind can do these things easily, but coding them into a computer must be difficult? Don't worry, it is not. Thanks to OpenCV, coding face recognition is as easy as it feels. The coding steps for face recognition are the same as the ones we discussed in the real-life example above.


  • Training Data Gathering: Gather face data (face images in this case) of the persons you want to recognize


  • Training of Recognizer: Feed that face data (and respective names of each face) to the face recognizer so that it can learn.


  • Recognition: Feed new faces of the persons and see if the face recognizer you just trained recognizes them.


OpenCV comes equipped with a built-in face recognizer; all you have to do is feed it the face data. It's that simple, and this is how it will look once we are done coding it.


OpenCV Face Recognizers

OpenCV has three built-in face recognizers and thanks to OpenCV’s clean coding, you can use any of them by just changing a single line of code. Below are the names of those face recognizers and their OpenCV calls.


  1. EigenFaces Face Recognizer — cv2.face.createEigenFaceRecognizer()


  2. FisherFaces Face Recognizer — cv2.face.createFisherFaceRecognizer()


  3. Local Binary Patterns Histograms (LBPH) Face Recognizer — cv2.face.createLBPHFaceRecognizer()


We have got three face recognizers, but do you know which one to use and when? Or which one is better? I guess not. So why not go through a brief summary of each, what do you say? I am assuming you said yes :) So let's dive into the theory of each.


EigenFaces Face Recognizer

This algorithm builds on the fact that not all parts of a face are equally important or equally useful. When you look at someone, you recognize him/her by distinct features like the eyes, nose, cheeks, and forehead, and by how they vary with respect to each other. So you are actually focusing on the areas of maximum change (mathematically speaking, this change is variance) of the face. For example, from the eyes to the nose there is a significant change, and the same is the case from the nose to the mouth. When you look at multiple faces, you compare them by looking at these parts of the faces, because these parts are the most useful and important components of a face. They are important because they catch the maximum change among faces, change that helps you differentiate one face from another. This is exactly how the EigenFaces face recognizer works.


The EigenFaces face recognizer looks at all the training images of all the persons as a whole and tries to extract the components which are important and useful (the components that catch the maximum variance/change), discarding the rest. This way it not only extracts the important components from the training data but also saves memory by discarding the less important ones. The important components it extracts are called principal components. Below is an image showing the principal components extracted from a list of faces.


Principal Components (source)

You can see that the principal components actually represent faces and these faces are called eigenfaces and hence the name of the algorithm.


So this is how EigenFaces face recognizer trains itself (by extracting principal components). Remember, it also keeps a record of which principal component belongs to which person. One thing to note in the above image is that the Eigenfaces algorithm also considers illumination as an important component.


Later, during recognition, when you feed a new image to the algorithm, it repeats the same process on that image. It extracts the principal components from the new image, compares them with the list of components it stored during training, finds the best match, and returns the person label associated with that best-match component.
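
To make the idea concrete, below is a minimal NumPy sketch of the eigenfaces idea (an illustration only, not OpenCV's internal implementation), assuming all faces have already been cropped and resized to the same dimensions: compute principal components with an SVD, project every training face onto them, and match a new face to the nearest training projection. All function and variable names here are hypothetical.

import numpy as np

def train_eigenfaces(face_imgs, labels, num_components=20):
    #face_imgs: list of equally sized grayscale faces (e.g. 100x100 numpy arrays)
    X = np.array([f.flatten().astype(np.float64) for f in face_imgs])
    mean_face = X.mean(axis=0)
    X_centered = X - mean_face
    #rows of Vt are the principal components (the "eigenfaces")
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    eigenfaces = Vt[:num_components]
    #project every training face into the eigenface space
    projections = X_centered @ eigenfaces.T
    return mean_face, eigenfaces, projections, np.array(labels)

def recognize_eigenface(face_img, mean_face, eigenfaces, projections, labels):
    #project the new face and return the label of the closest training projection
    weights = (face_img.flatten().astype(np.float64) - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(projections - weights, axis=1)
    return labels[np.argmin(distances)]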


Easy peasy, right? The next one is easier than this one.


FisherFaces Face Recognizer

This algorithm is an improved version of the EigenFaces face recognizer. The Eigenfaces face recognizer looks at all the training faces of all the persons at once and finds principal components from all of them combined. By capturing principal components from all of them combined, you are not focusing on the features that discriminate one person from another, but on the features that represent all the persons in the training data as a whole.


This approach has drawbacks. For example, images with sharp changes (like lighting changes, which are not a useful feature at all) may dominate the rest of the images, and you may end up with features that come from an external source like light and are not useful for discrimination at all. In the end, your principal components will represent lighting changes and not the actual facial features.


The Fisherfaces algorithm, instead of extracting useful features that represent all the faces of all the persons, extracts useful features that discriminate one person from the others. This way the features of one person do not dominate over the others, and you have the features that discriminate one person from the others.
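
To make that contrast concrete, here is a rough sketch (an illustration only, not OpenCV's Fisherfaces implementation) of the kind of class-discriminative projection Fisherfaces is built on, assuming scikit-learn is installed: PCA first reduces the flattened faces to a manageable dimensionality, then Linear Discriminant Analysis finds the directions that best separate the persons.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_fisher_projection(face_imgs, labels, pca_components=50):
    #face_imgs: equally sized grayscale faces; labels: integer person labels
    X = np.array([f.flatten().astype(np.float64) for f in face_imgs])
    #reduce dimensionality first, then find the discriminative directions
    pca = PCA(n_components=min(pca_components, len(X) - 1)).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    return pca, lda

def fisher_features(pca, lda, face_img):
    #project a single face into the discriminative (Fisher) space
    return lda.transform(pca.transform(face_img.flatten().reshape(1, -1)))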


Below is an image of features extracted using the Fisherfaces algorithm.


Fisher Faces (source)

You can see that the extracted features actually represent faces, and these faces are called fisherfaces, hence the name of the algorithm.


One thing to note here is that even with the Fisherfaces algorithm, if multiple persons have images with sharp changes due to external sources such as light, those changes will dominate over the other features and affect recognition accuracy.


Getting bored with this theory? Don’t worry, only one face recognizer is left and then we will dive deep into the coding part.


Local Binary Patterns Histograms (LBPH) Face Recognizer

I wrote a detailed explanation of Local Binary Patterns Histograms in my previous article on face detection using local binary pattern histograms. So here I will just give a brief overview of how it works.


We know that Eigenfaces and Fisherfaces are both affected by light, and in real life we can't guarantee perfect lighting conditions. The LBPH face recognizer is an improvement that overcomes this drawback.


The idea is not to look at the image as a whole, but instead to find the local features of an image. The LBPH algorithm tries to find the local structure of an image, and it does that by comparing each pixel with its neighboring pixels.


Take a 3x3 window and move it over the whole image; at each move (each local part of the image), compare the pixel at the center with its neighboring pixels. Neighbors with an intensity value less than or equal to the center pixel are denoted by 1, the others by 0. Then you read these 0/1 values under the 3x3 window in clockwise order, giving you a binary pattern like 11100011, and this pattern is local to some area of the image. You do this over the whole image and you end up with a list of local binary patterns.


LBP Labeling

Now you see why this algorithm has Local Binary Patterns in its name? Because you get a list of local binary patterns. Now you may be wondering, what about the histogram part of LBPH? Well, after you get a list of local binary patterns, you convert each binary pattern into a decimal number (as shown in the above image) and then you make a histogram of all of those values. A sample histogram looks like this.
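
As an illustration (OpenCV's LBPH recognizer additionally divides the face into a grid of cells and concatenates the per-cell histograms), a minimal NumPy sketch of computing the basic 3x3 LBP codes and their histogram might look like this; the thresholding direction simply follows the description above.

import numpy as np

def lbp_histogram(gray):
    #gray: 2D numpy array holding a grayscale face image
    h, w = gray.shape
    center = gray[1:-1, 1:-1].astype(np.int16)
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    #the 8 neighbors of the center pixel, read in clockwise order
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int16)
        #set this bit where the neighbor is <= the center pixel (as described above)
        codes |= ((neighbor <= center).astype(np.uint8) << bit)
    #histogram over all 256 possible local binary patterns
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()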


Sample Histogram

I guess this answers the question about the histogram part. So in the end you will have one histogram for each face image in the training data set. That means if there were 100 images in training data set then LBPH will extract 100 histograms after training and store them for later recognition. Remember, the algorithm also keeps track of which histogram belongs to which person.


Later, during recognition, when you feed a new image to the recognizer, it generates a histogram for that new image, compares that histogram with the histograms it already has, finds the best-match histogram, and returns the person label associated with that best-match histogram.


Below is a list of faces and their respective local binary pattern images. You can see that the LBP images are not affected by changes in light conditions.


LBP Faces (source)

The theory part is over and now comes the coding part! Ready to dive into coding? Let’s get into it then.


Coding Face Recognition with OpenCV

The Face Recognition process in this tutorial is divided into three steps.


  1. Prepare training data: In this step, we will read training images for each person/subject along with their labels, detect faces from each image, and assign each detected face an integer label of the person it belongs to.


  2. Train Face Recognizer: In this step, we will train OpenCV’s LBPH face recognizer by feeding it the data we prepared in step 1.


  3. Testing: In this step, we will pass some test images to the face recognizer and see if it predicts them correctly.


To detect faces, I will use the code from my previous article on face detection. So if you have not read it, I encourage you to do so to understand how face detection works and its Python coding.


Import Required Modules

Before starting the actual coding we need to import the required modules for coding. So let’s import them first.


  • cv2: is the OpenCV module for Python which we will use for face detection and face recognition.


  • os: We will use this Python module to read our training directories and file names.


  • NumPy: We will use this module to convert Python lists to NumPy arrays as OpenCV face recognizers accept NumPy arrays.


#import OpenCV module
import cv2
#import os module for reading training data directories and paths
import os
#import numpy to convert python lists to numpy arrays as
#it is needed by OpenCV face recognizers
import numpy as np
#matplotlib for displaying our images
import matplotlib.pyplot as plt
%matplotlib inline

Training Data

The more images used in training the better. Normally a lot of images are used for training a face recognizer so that it can learn different looks of the same person, for example with glasses, without glasses, laughing, sad, happy, crying, with a beard, without a beard, etc. To keep our tutorial simple we are going to use only 12 images for each person.


So our training data consists of a total of 2 persons, with 12 images of each person. All training data is inside the training-data folder, which contains one sub-folder for each person. Each sub-folder is named with the format sLabel (e.g. s1, s2), where Label is the integer label assigned to that person. For example, a folder named s1 means that this folder contains images for person 1. The directory structure tree for the training data is as follows:


training-data
|-------------- s1
|                |-- 1.jpg
|                |-- ...
|                |-- 12.jpg
|-------------- s2
|                |-- 1.jpg
|                |-- ...
|                |-- 12.jpg

The test-data folder contains the images that we will use to test our face recognizer after it has been successfully trained.


As the OpenCV face recognizer accepts labels as integers, we need to define a mapping between integer labels and persons' actual names, so below I am defining a mapping of persons' integer labels and their respective names.


Note: As we have not assigned label 0 to any person, the mapping for label 0 is empty.


#there is no label 0 in our training data so subject name for
#index/label 0 is empty
subjects = ["", "Tom Cruise", "Shahrukh Khan"]

Prepare training data

You may be wondering, why data preparation, right? Well, the OpenCV face recognizer accepts data in a specific format. It accepts two vectors: one vector of the faces of all the persons, and a second vector of integer labels for each face, so that when processing a face the face recognizer knows which person that particular face belongs to.


For example, if we had 2 persons and 2 images for each person.


PERSON-1    PERSON-2
img1        img1
img2        img2

Then the prepare data step will produce the following face and label vectors.


FACES                        LABELS
person1_img1_face            1
person1_img2_face            1
person2_img1_face            2
person2_img2_face            2

The prepare data step can be further divided into the following sub-steps.


  1. Read all the folder names of subjects/persons provided in the training data folder. So for example, in this tutorial we have folder names: s1, s2.


  2. For each subject, extract the label number. Do you remember that our folders have a special naming convention? Folder names follow the format sLabel where Label is an integer representing the label we have assigned to that subject. So for example, folder name s1 means that the subject has label 1, s2 means the subject label is 2, and so on. The label extracted in this step is assigned to each face detected in the next step.


  3. Read all the images of the subject, detect a face from each image.
  4. Add each face to the faces vector, with the corresponding subject label (extracted in the above step) added to the labels vector.

Did you read my last article on face detection? No? Then you had better do so right now, because to detect faces I am going to use the code from my previous article on face detection. So if you have not read it, I encourage you to do so to understand how face detection works and its coding. Below is the same code.


#function to detect face using OpenCV
def detect_face(img):
    #convert the test image to gray image as opencv face detector expects gray images
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    #load OpenCV face detector, I am using LBP which is fast
    #there is also a more accurate but slow Haar classifier
    face_cascade = cv2.CascadeClassifier('opencv-files/lbpcascade_frontalface.xml')

    #let's detect multiscale (some images may be closer to camera than others) images
    #result is a list of faces
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)

    #if no faces are detected then return original img
    if (len(faces) == 0):
        return None, None

    #under the assumption that there will be only one face,
    #extract the face area
    (x, y, w, h) = faces[0]

    #return only the face part of the image
    return gray[y:y+h, x:x+w], faces[0]

I am using OpenCV's LBP face detector. On line 4, I convert the image to grayscale because most operations in OpenCV are performed in grayscale; then on line 8, I load the LBP face detector using the cv2.CascadeClassifier class. After that, on line 12, I use the cv2.CascadeClassifier class' detectMultiScale method to detect all the faces in the image. On line 20, from the detected faces, I only pick the first face, because in one image there will be only one face (under the assumption that there will be only one prominent face). As the faces returned by the detectMultiScale method are actually rectangles (x, y, width, height) and not actual face images, we have to extract the face image area from the main image. So on line 23 I extract the face area from the gray image and return both the face image area and the face rectangle.
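
As a quick sanity check (assuming the cascade file above is present and a test image exists at test-data/test1.jpg, the same path used later in this tutorial), you could call the function like this:

sample = cv2.imread("test-data/test1.jpg")
face, rect = detect_face(sample)
if face is None:
    print("No face detected")
else:
    print("Face found at (x, y, w, h):", rect)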


Now you have got a face detector and you know the 4 steps to prepare the data, so are you ready to code the prepare data step? Yes? So let’s do it.


#this function will read all persons' training images, detect face from each image
#and will return two lists of exactly same size, one list
#of faces and another list of labels for each face
def prepare_training_data(data_folder_path):

    #------STEP-1--------
    #get the directories (one directory for each subject) in data folder
    dirs = os.listdir(data_folder_path)

    #list to hold all subject faces
    faces = []
    #list to hold labels for all subjects
    labels = []

    #let's go through each directory and read images within it
    for dir_name in dirs:

        #our subject directories start with letter 's' so
        #ignore any non-relevant directories if any
        if not dir_name.startswith("s"):
            continue

        #------STEP-2--------
        #extract label number of subject from dir_name
        #format of dir name = sLabel
        #so removing letter 's' from dir_name will give us the label
        label = int(dir_name.replace("s", ""))

        #build path of directory containing images for current subject
        #sample subject_dir_path = "training-data/s1"
        subject_dir_path = data_folder_path + "/" + dir_name

        #get the images names that are inside the given subject directory
        subject_images_names = os.listdir(subject_dir_path)

        #------STEP-3--------
        #go through each image name, read image,
        #detect face and add face to list of faces
        for image_name in subject_images_names:

            #ignore system files like .DS_Store
            if image_name.startswith("."):
                continue

            #build image path
            #sample image path = training-data/s1/1.pgm
            image_path = subject_dir_path + "/" + image_name

            #read image
            image = cv2.imread(image_path)

            #display an image window to show the image
            cv2.imshow("Training on image...", image)
            cv2.waitKey(100)

            #detect face
            face, rect = detect_face(image)

            #------STEP-4--------
            #for the purpose of this tutorial
            #we will ignore faces that are not detected
            if face is not None:
                #add face to list of faces
                faces.append(face)
                #add label for this face
                labels.append(label)

    cv2.destroyAllWindows()
    cv2.waitKey(1)
    cv2.destroyAllWindows()

    return faces, labels

I have defined a function that takes, as a parameter, the path where the training subjects' folders are stored. This function follows the same 4 prepare-data sub-steps mentioned above.


(step-1) On line 8, I am using the os.listdir method to read the names of all the folders stored on the path passed to the function as a parameter. On lines 10-13, I am defining the labels and faces vectors.


(step-2) After that, I traverse through all the subjects' folder names, and from each subject's folder name on line 27 I extract the label information. As folder names follow the sLabel naming convention, removing the letter s from a folder name gives us the label assigned to that subject.


(step-3) On line 34, I read all the image names of the current subject being traversed, and on lines 39–66 I traverse those images one by one. On lines 53–54, I am using OpenCV's imshow(window_title, image) along with OpenCV's waitKey(interval) method to display the current image being traversed. The waitKey(interval) method pauses the code flow for the given interval (milliseconds); I am using it with a 100ms interval so that we can view the image window for 100ms. On line 57, I detect the face from the current image being traversed.


(step-4) On lines 62–66, I add the detected face and label to their respective vectors.


But a function can’t do anything unless we call it on some data that it has to prepare, right? Don’t worry, I have got data of two beautiful and famous celebrities. I am sure you will recognize them!


Let’s call this function on images of these beautiful celebrities to prepare data for the training of our Face Recognizer. Below is a simple code to do that.


#let's first prepare our training data
#data will be in two lists of same size
#one list will contain all the faces
#and other list will contain respective labels for each face
print("Preparing data...")
faces, labels = prepare_training_data("training-data")
print("Data prepared")

#print total faces and labels
print("Total faces: ", len(faces))
print("Total labels: ", len(labels))

Output


Preparing data...
Data prepared
Total faces:  23
Total labels:  23

This was probably the boring part, right? Don't worry, the fun stuff is coming up next. It's time to train our own face recognizer so that, once trained, it can recognize new faces of the persons it was trained on. Ready? Ok then, let's train our face recognizer.


Train Face Recognizer

As we know, OpenCV comes equipped with three face recognizers.


  1. EigenFace Recognizer: This can be created with cv2.face.createEigenFaceRecognizer()


  2. FisherFace Recognizer: This can be created with cv2.face.createFisherFaceRecognizer()


  3. Local Binary Patterns Histograms (LBPH) Recognizer: This can be created with cv2.face.createLBPHFaceRecognizer()


I am going to use the LBPH face recognizer, but you can use any face recognizer of your choice. No matter which of OpenCV's face recognizers you use, the code will remain the same. You just have to change one line, the face recognizer initialization line given below.


#create our LBPH face recognizer
face_recognizer = cv2.face.createLBPHFaceRecognizer()

#or use EigenFaceRecognizer by replacing above line with
#face_recognizer = cv2.face.createEigenFaceRecognizer()

#or use FisherFaceRecognizer by replacing above line with
#face_recognizer = cv2.face.createFisherFaceRecognizer()
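
Note: these factory names come from the older cv2.face API; in recent opencv-contrib-python releases (OpenCV 3.3+ and 4.x) the same recognizers are created with a _create suffix instead, for example:

face_recognizer = cv2.face.LBPHFaceRecognizer_create()
#face_recognizer = cv2.face.EigenFaceRecognizer_create()
#face_recognizer = cv2.face.FisherFaceRecognizer_create()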

Now that we have initialized our face recognizer and we also have prepared our training data, it’s time to train the face recognizer. We will do that by calling the train(faces-vector, labels-vector) method of face recognizer.


#train our face recognizer on our training faces
face_recognizer.train(faces, np.array(labels))

Did you notice that instead of passing the labels vector directly to the face recognizer, I first convert it to a numpy array? This is because OpenCV expects the labels vector to be a numpy array.


Still not satisfied? Want to see some action? The next step is the real action, I promise!


Prediction

Now comes my favorite part, the prediction part. This is where we get to see if our algorithm is actually recognizing our trained subjects' faces or not. We will take two test images of our celebrities, detect the faces in each of them, and then pass those faces to our trained face recognizer to see if it recognizes them.


Below are some utility functions that we will use for drawing a bounding box (rectangle) around a face and putting the celebrity's name near the face bounding box.


The first function, draw_rectangle, draws a rectangle on an image based on the passed rectangle coordinates. It uses OpenCV's built-in function cv2.rectangle(img, topLeftPoint, bottomRightPoint, rgbColor, lineWidth) to draw the rectangle. We will use it to draw a rectangle around the face detected in a test image.


The second function, draw_text, uses OpenCV's built-in function cv2.putText(img, text, startPoint, font, fontSize, rgbColor, lineWidth) to draw text on an image.

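The bodies of these two helpers are not shown here; a minimal sketch consistent with the descriptions above (and with the cv2.rectangle and cv2.putText calls they are said to wrap) would be:

#draw a rectangle on the image according to the given (x, y, width, height) coordinates
def draw_rectangle(img, rect):
    (x, y, w, h) = rect
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

#write the given text on the image starting from the given x, y coordinates
def draw_text(img, text, x, y):
    cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 255, 0), 2)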

Now that we have the drawing functions, we just need to call the face recognizer's predict(face) method to test our face recognizer on the test images. The following function does the prediction for us.


#this function recognizes the person in image passed
#and draws a rectangle around detected face with name of the
#subject
def predict(test_img):
    #make a copy of the image as we don't want to change original
    img = test_img.copy()
    #detect face from the image
    face, rect = detect_face(img)
    #predict the image using our face recognizer
    label = face_recognizer.predict(face)
    #get name of respective label returned by face recognizer
    label_text = subjects[label]

    #draw a rectangle around face detected
    draw_rectangle(img, rect)
    #draw name of predicted person
    draw_text(img, label_text, rect[0], rect[1]-5)

    return img
  • line-6 read the test image


  • line-7 detect face from test image


  • line-11 recognize the face by calling the face recognizer's predict(face) method. This method will return a label (see the note after this list)


  • line-12 get the name associated with the label


  • line-16 draw rectangle around the detected face


  • line-18 draw name of predicted subject above face rectangle

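Note: with the cv2.face recognizers in newer OpenCV Python bindings, predict() returns a (label, confidence) pair rather than a bare label, so the prediction line may need to be unpacked like this:

label, confidence = face_recognizer.predict(face)
label_text = subjects[label]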

Now that we have the prediction function well defined, the next step is to actually call this function on our test images and display those test images to see if our face recognizer correctly recognized them. So let's do it. This is what we have been waiting for.


print("Predicting images...")

#load test images
test_img1 = cv2.imread("test-data/test1.jpg")
test_img2 = cv2.imread("test-data/test2.jpg")

#perform a prediction
predicted_img1 = predict(test_img1)
predicted_img2 = predict(test_img2)
print("Prediction complete")

#create a figure of 2 plots (one for each test image)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))

#display test image1 result
ax1.imshow(cv2.cvtColor(predicted_img1, cv2.COLOR_BGR2RGB))

#display test image2 result
ax2.imshow(cv2.cvtColor(predicted_img2, cv2.COLOR_BGR2RGB))

#display both images
cv2.imshow("Tom cruise test", predicted_img1)
cv2.imshow("Shahrukh Khan test", predicted_img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.waitKey(1)
cv2.destroyAllWindows()

Output


Predicting images...
Prediction complete
Face Recognition Results

Woohoo! Isn't it beautiful? Indeed, it is!


The full code can be found on this GitHub link.


End Notes

Face Recognition is a fascinating idea to work on and OpenCV has made it extremely simple and easy for us to code it. It just takes a few lines of code to have a fully working face recognition application and we can switch between all three face recognizers with a single line of code change. It’s that simple.


Although the EigenFaces, FisherFaces, and LBPH face recognizers are good, there are even better ways to perform face recognition, like using Histograms of Oriented Gradients (HOGs) and Neural Networks. So the more advanced face recognition algorithms are nowadays implemented using a combination of OpenCV and machine learning. I have plans to write some articles on those more advanced methods as well, so stay tuned!
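
As a small taste of those more modern approaches, here is a hedged sketch using the third-party face_recognition package (HOG-based detection plus a deep-learning face embedding, built on dlib), assuming it is installed; the image paths simply reuse the folders from this tutorial and are only illustrative.

import face_recognition

#encode one known face and one test face as 128-dimensional embeddings
known_image = face_recognition.load_image_file("training-data/s1/1.jpg")
test_image = face_recognition.load_image_file("test-data/test1.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]
test_encoding = face_recognition.face_encodings(test_image)[0]

#True if the two embeddings are close enough to belong to the same person
print(face_recognition.compare_faces([known_encoding], test_encoding))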


Originally published at https://github.com/informramiz.


Translated from: https://medium.com/swlh/face-recognition-with-opencv-and-python-f51fb0389254

