Android Face Detection

With the release of Google Play services 7.8, Google has brought in the Mobile Vision API that lets you do
Face Detection, Barcode Detection and Text Detection. In this tutorial, we’ll develop an Android face detection application that lets you detect human faces in an image.

Android Face Detection

Android Face detection API tracks face in photos, videos using some landmarks like eyes, nose, ears, cheeks, and mouth.

Rather than detecting the individual features, the API detects the face at once and then if defined, detects the landmarks and classifications. Besides, the API can detect faces at various angles too.

Android Face Detection – Landmarks

A landmark is a point of interest within a face. The left eye, right eye, and nose base are all examples of landmarks. Following are the landmarks that are possible to find currently with the API:

  1. left and right eye
  2. left and right ear
  3. left and right ear tip
  4. base of the nose
  5. left and right cheek
  6. left and right corner of the mouth
  7. base of the mouth

When ‘left’ and ‘right’ are used, they are relative to the subject. For example, the LEFT_EYE landmark is the subject’s left eye, not the eye that is on the left when viewing the image.

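One practical consequence of the subject-relative convention: if you draw landmark positions onto a mirrored front-camera preview, the x coordinates need to be flipped first. The sketch below is our illustration, not part of the Mobile Vision API; `mirrorX` is a hypothetical helper.

```java
public class MirrorDemo {
    // Landmark positions are reported in image coordinates, with
    // "left"/"right" relative to the subject. When the preview is
    // horizontally mirrored (as front-camera previews typically are),
    // flip the x coordinate across the frame width before drawing.
    static float mirrorX(float x, int imageWidth) {
        return imageWidth - x;
    }

    public static void main(String[] args) {
        // A landmark at x = 100 in a 640-px-wide frame draws at x = 540
        // on the mirrored preview.
        System.out.println(mirrorX(100f, 640));
    }
}
```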
Classification

Classification determines whether a certain facial characteristic is present. The Android Face API currently supports two classifications:

  • eyes open: the getIsLeftEyeOpenProbability() and getIsRightEyeOpenProbability() methods are used.
  • smiling: the getIsSmilingProbability() method is used.
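Each of these methods returns a probability between 0 and 1, or -1 when the value can’t be computed (see the note further below). A minimal plain-Java sketch of guarding against that sentinel before displaying the value; `describe` is a hypothetical helper of ours, not an API method:

```java
public class ClassificationDemo {
    // The Mobile Vision classifiers return -1.0f when a probability
    // could not be computed (e.g. the relevant landmark was not found).
    static final float UNCOMPUTED = -1.0f;

    // Format a probability for display, guarding against the sentinel.
    static String describe(float probability) {
        if (probability == UNCOMPUTED) {
            return "unknown";
        }
        return String.format("%.0f%%", probability * 100);
    }

    public static void main(String[] args) {
        System.out.println(describe(0.93f));      // prints "93%"
        System.out.println(describe(UNCOMPUTED)); // prints "unknown"
    }
}
```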

Face Orientation

The orientation of the face is determined using Euler Angles.
These refer to the rotation angle of the face around the X, Y and Z axes.

  • Euler Y tells us if the face is looking left or right.
  • Euler Z tells us if the face is rotated/slanted.
  • Euler X tells us if the face is looking up or down (currently not supported).

Note: If a probability can’t be computed, it’s set to -1.

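The Euler angles can be thresholded into a rough head-pose label. This sketch is our illustration only: the 12-degree threshold is an arbitrary assumption, and the `pose` helper is not part of the Mobile Vision API.

```java
public class HeadPoseDemo {
    // Angle (degrees) below which we treat the head as frontal.
    // The value is an illustrative assumption, not from the API docs.
    static final float THRESHOLD = 12f;

    // Classify a face by its Euler Z (in-plane tilt) and Euler Y (yaw).
    static String pose(float eulerY, float eulerZ) {
        if (Math.abs(eulerZ) > THRESHOLD) return "tilted";
        if (Math.abs(eulerY) > THRESHOLD) return "turned";
        return "frontal";
    }

    public static void main(String[] args) {
        System.out.println(pose(5f, 2f));   // prints "frontal"
        System.out.println(pose(30f, 0f));  // prints "turned"
        System.out.println(pose(0f, -30f)); // prints "tilted"
    }
}
```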
Let’s jump into the business end of this tutorial. Our application shall contain a few sample images along with the functionality to capture your own image.
Note: The API supports face detection only. Face Recognition isn’t available with the current Mobile Vision API.

Android face detection example project structure

Android face detection code

Add the following dependency inside the build.gradle file of your application.

compile 'com.google.android.gms:play-services-vision:11.0.4'

Add the following meta-data inside the application tag in the AndroidManifest.xml file as shown below.

<meta-data
    android:name="com.google.android.gms.vision.DEPENDENCIES"
    android:value="face" />

    This lets the Vision library know that you plan to detect faces within your application.

Add the following inside the manifest tag in the AndroidManifest.xml for the camera feature and storage permission.

<uses-feature
    android:name="android.hardware.camera"
    android:required="true" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

The code for the activity_main.xml layout file is given below.

<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="https://schemas.android.com/apk/res/android"
    xmlns:app="https://schemas.android.com/apk/res-auto"
    xmlns:tools="https://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <android.support.constraint.ConstraintLayout
        xmlns:app="https://schemas.android.com/apk/res-auto"
        xmlns:tools="https://schemas.android.com/tools"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        tools:context="com.journaldev.facedetectionapi.MainActivity">

        <ImageView
            android:id="@+id/imageView"
            android:layout_width="300dp"
            android:layout_height="300dp"
            android:layout_marginTop="8dp"
            android:src="@drawable/sample_1"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toTopOf="parent" />

        <Button
            android:id="@+id/btnProcessNext"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="8dp"
            android:text="PROCESS NEXT"
            app:layout_constraintHorizontal_bias="0.501"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/imageView" />

        <ImageView
            android:id="@+id/imgTakePic"
            android:layout_width="250dp"
            android:layout_height="250dp"
            android:layout_marginTop="8dp"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/txtSampleDescription"
            app:srcCompat="@android:drawable/ic_menu_camera" />

        <Button
            android:id="@+id/btnTakePicture"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="8dp"
            android:text="TAKE PICTURE"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/imgTakePic" />

        <TextView
            android:id="@+id/txtSampleDescription"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_marginBottom="8dp"
            android:layout_marginTop="8dp"
            android:gravity="center"
            app:layout_constraintBottom_toTopOf="@+id/txtTakePicture"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/btnProcessNext"
            app:layout_constraintVertical_bias="0.0" />

        <TextView
            android:id="@+id/txtTakePicture"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="8dp"
            android:gravity="center"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/btnTakePicture" />

    </android.support.constraint.ConstraintLayout>
</ScrollView>

We’ve defined two ImageViews, two TextViews, and two Buttons: one set loops through the sample images and displays the results; the other captures an image from the camera.

The code for the MainActivity.java file is given below.

package com.journaldev.facedetectionapi;

import android.Manifest;
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.net.Uri;
import android.os.Bundle;
import android.os.Environment;
import android.provider.MediaStore;
import android.support.annotation.NonNull;
import android.support.v4.app.ActivityCompat;
import android.support.v7.app.AppCompatActivity;
import android.util.SparseArray;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

import java.io.File;
import java.io.FileNotFoundException;

public class MainActivity extends AppCompatActivity implements View.OnClickListener {

    ImageView imageView, imgTakePicture;
    Button btnProcessNext, btnTakePicture;
    TextView txtSampleDesc, txtTakenPicDesc;
    private FaceDetector detector;
    Bitmap editedBitmap;
    int currentIndex = 0;
    int[] imageArray;
    private Uri imageUri;
    private static final int REQUEST_WRITE_PERMISSION = 200;
    private static final int CAMERA_REQUEST = 101;
    private static final String SAVED_INSTANCE_URI = "uri";
    private static final String SAVED_INSTANCE_BITMAP = "bitmap";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        imageArray = new int[]{R.drawable.sample_1, R.drawable.sample_2, R.drawable.sample_3};
        detector = new FaceDetector.Builder(getApplicationContext())
                .setTrackingEnabled(false)
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
                .build();
        initViews();
    }

    private void initViews() {
        imageView = (ImageView) findViewById(R.id.imageView);
        imgTakePicture = (ImageView) findViewById(R.id.imgTakePic);
        btnProcessNext = (Button) findViewById(R.id.btnProcessNext);
        btnTakePicture = (Button) findViewById(R.id.btnTakePicture);
        txtSampleDesc = (TextView) findViewById(R.id.txtSampleDescription);
        txtTakenPicDesc = (TextView) findViewById(R.id.txtTakePicture);
        processImage(imageArray[currentIndex]);
        currentIndex++;
        btnProcessNext.setOnClickListener(this);
        btnTakePicture.setOnClickListener(this);
        imgTakePicture.setOnClickListener(this);
    }

    @Override
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.btnProcessNext:
                imageView.setImageResource(imageArray[currentIndex]);
                processImage(imageArray[currentIndex]);
                if (currentIndex == imageArray.length - 1)
                    currentIndex = 0;
                else
                    currentIndex++;
                break;
            case R.id.btnTakePicture:
                ActivityCompat.requestPermissions(MainActivity.this,
                        new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, REQUEST_WRITE_PERMISSION);
                break;
            case R.id.imgTakePic:
                ActivityCompat.requestPermissions(MainActivity.this,
                        new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, REQUEST_WRITE_PERMISSION);
                break;
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        switch (requestCode) {
            case REQUEST_WRITE_PERMISSION:
                if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                    startCamera();
                } else {
                    Toast.makeText(getApplicationContext(), "Permission Denied!", Toast.LENGTH_SHORT).show();
                }
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) {
            launchMediaScanIntent();
            try {
                processCameraPicture();
            } catch (Exception e) {
                Toast.makeText(getApplicationContext(), "Failed to load Image", Toast.LENGTH_SHORT).show();
            }
        }
    }

    private void launchMediaScanIntent() {
        Intent mediaScanIntent = new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE);
        mediaScanIntent.setData(imageUri);
        this.sendBroadcast(mediaScanIntent);
    }

    private void startCamera() {
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        File photo = new File(Environment.getExternalStorageDirectory(), "photo.jpg");
        imageUri = Uri.fromFile(photo);
        intent.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
        startActivityForResult(intent, CAMERA_REQUEST);
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        if (imageUri != null) {
            outState.putParcelable(SAVED_INSTANCE_BITMAP, editedBitmap);
            outState.putString(SAVED_INSTANCE_URI, imageUri.toString());
        }
        super.onSaveInstanceState(outState);
    }

    private void processImage(int image) {
        Bitmap bitmap = decodeBitmapImage(image);
        if (detector.isOperational() && bitmap != null) {
            editedBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), bitmap.getConfig());
            float scale = getResources().getDisplayMetrics().density;
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(Color.GREEN);
            paint.setTextSize((int) (16 * scale));
            paint.setShadowLayer(1f, 0f, 1f, Color.WHITE);
            paint.setStyle(Paint.Style.STROKE);
            paint.setStrokeWidth(6f);
            Canvas canvas = new Canvas(editedBitmap);
            canvas.drawBitmap(bitmap, 0, 0, paint);
            Frame frame = new Frame.Builder().setBitmap(editedBitmap).build();
            SparseArray<Face> faces = detector.detect(frame);
            txtSampleDesc.setText(null);
            for (int index = 0; index < faces.size(); ++index) {
                Face face = faces.valueAt(index);
                canvas.drawRect(face.getPosition().x,
                        face.getPosition().y,
                        face.getPosition().x + face.getWidth(),
                        face.getPosition().y + face.getHeight(), paint);
                canvas.drawText("Face " + (index + 1),
                        face.getPosition().x + face.getWidth(),
                        face.getPosition().y + face.getHeight(), paint);
                txtSampleDesc.setText(txtSampleDesc.getText() + "FACE " + (index + 1) + "\n");
                txtSampleDesc.setText(txtSampleDesc.getText() + "Smile probability:" + " " + face.getIsSmilingProbability() + "\n");
                txtSampleDesc.setText(txtSampleDesc.getText() + "Left Eye Is Open Probability: " + " " + face.getIsLeftEyeOpenProbability() + "\n");
                txtSampleDesc.setText(txtSampleDesc.getText() + "Right Eye Is Open Probability: " + " " + face.getIsRightEyeOpenProbability() + "\n\n");
                for (Landmark landmark : face.getLandmarks()) {
                    int cx = (int) (landmark.getPosition().x);
                    int cy = (int) (landmark.getPosition().y);
                    canvas.drawCircle(cx, cy, 8, paint);
                }
            }
            if (faces.size() == 0) {
                txtSampleDesc.setText("Scan Failed: Found nothing to scan");
            } else {
                imageView.setImageBitmap(editedBitmap);
                txtSampleDesc.setText(txtSampleDesc.getText() + "No of Faces Detected: " + " " + String.valueOf(faces.size()));
            }
        } else {
            txtSampleDesc.setText("Could not set up the detector!");
        }
    }

    private Bitmap decodeBitmapImage(int image) {
        int targetW = 300;
        int targetH = 300;
        BitmapFactory.Options bmOptions = new BitmapFactory.Options();
        bmOptions.inJustDecodeBounds = true;
        BitmapFactory.decodeResource(getResources(), image, bmOptions);
        int photoW = bmOptions.outWidth;
        int photoH = bmOptions.outHeight;
        int scaleFactor = Math.min(photoW / targetW, photoH / targetH);
        bmOptions.inJustDecodeBounds = false;
        bmOptions.inSampleSize = scaleFactor;
        return BitmapFactory.decodeResource(getResources(), image, bmOptions);
    }

    private void processCameraPicture() throws Exception {
        Bitmap bitmap = decodeBitmapUri(this, imageUri);
        if (detector.isOperational() && bitmap != null) {
            editedBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), bitmap.getConfig());
            float scale = getResources().getDisplayMetrics().density;
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(Color.GREEN);
            paint.setTextSize((int) (16 * scale));
            paint.setShadowLayer(1f, 0f, 1f, Color.WHITE);
            paint.setStyle(Paint.Style.STROKE);
            paint.setStrokeWidth(6f);
            Canvas canvas = new Canvas(editedBitmap);
            canvas.drawBitmap(bitmap, 0, 0, paint);
            Frame frame = new Frame.Builder().setBitmap(editedBitmap).build();
            SparseArray<Face> faces = detector.detect(frame);
            txtTakenPicDesc.setText(null);
            for (int index = 0; index < faces.size(); ++index) {
                Face face = faces.valueAt(index);
                canvas.drawRect(face.getPosition().x,
                        face.getPosition().y,
                        face.getPosition().x + face.getWidth(),
                        face.getPosition().y + face.getHeight(), paint);
                canvas.drawText("Face " + (index + 1),
                        face.getPosition().x + face.getWidth(),
                        face.getPosition().y + face.getHeight(), paint);
                txtTakenPicDesc.setText("FACE " + (index + 1) + "\n");
                txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "Smile probability:" + " " + face.getIsSmilingProbability() + "\n");
                txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "Left Eye Is Open Probability: " + " " + face.getIsLeftEyeOpenProbability() + "\n");
                txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "Right Eye Is Open Probability: " + " " + face.getIsRightEyeOpenProbability() + "\n\n");
                for (Landmark landmark : face.getLandmarks()) {
                    int cx = (int) (landmark.getPosition().x);
                    int cy = (int) (landmark.getPosition().y);
                    canvas.drawCircle(cx, cy, 8, paint);
                }
            }
            if (faces.size() == 0) {
                txtTakenPicDesc.setText("Scan Failed: Found nothing to scan");
            } else {
                imgTakePicture.setImageBitmap(editedBitmap);
                txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "No of Faces Detected: " + " " + String.valueOf(faces.size()));
            }
        } else {
            txtTakenPicDesc.setText("Could not set up the detector!");
        }
    }

    private Bitmap decodeBitmapUri(Context ctx, Uri uri) throws FileNotFoundException {
        int targetW = 300;
        int targetH = 300;
        BitmapFactory.Options bmOptions = new BitmapFactory.Options();
        bmOptions.inJustDecodeBounds = true;
        BitmapFactory.decodeStream(ctx.getContentResolver().openInputStream(uri), null, bmOptions);
        int photoW = bmOptions.outWidth;
        int photoH = bmOptions.outHeight;
        int scaleFactor = Math.min(photoW / targetW, photoH / targetH);
        bmOptions.inJustDecodeBounds = false;
        bmOptions.inSampleSize = scaleFactor;
        return BitmapFactory.decodeStream(ctx.getContentResolver().openInputStream(uri), null, bmOptions);
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        detector.release();
    }
}

A few inferences drawn from the above code:

  • imageArray holds the sample images that’ll be scanned for faces when the “PROCESS NEXT” button is clicked.
  • The detector is instantiated with the below code snippet:

    FaceDetector detector = new FaceDetector.Builder(getContext())
            .setTrackingEnabled(false)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setMode(FaceDetector.FAST_MODE)
            .build();

    Landmarks add to the computation time, hence they need to be explicitly set.
    The Face Detector can be set to FAST_MODE or ACCURATE_MODE as per our requirements.
    We’ve set tracking to false in the above code since we’re dealing with still images; it can be set to true for detecting faces in a video.

  • The processImage() and processCameraPicture() methods contain the code where we actually detect the faces and draw a rectangle over them.
  • detector.isOperational() is used to check whether the current Google Play services library on your phone supports the vision API (if it doesn’t, Google Play downloads the required native libraries to enable support).
  • The code snippet that actually does the work of face detection is:

    Frame frame = new Frame.Builder().setBitmap(editedBitmap).build();
    SparseArray<Face> faces = detector.detect(frame);

  • Once detected, we loop through the faces array to find the position and attributes of each face.
  • The attributes of each face are appended to the TextView beneath the button.
  • This works the same way when an image is captured by the camera, except that we need to ask for the camera permissions at runtime and save the uri and bitmap returned by the camera application.
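The decodeBitmapImage()/decodeBitmapUri() helpers downscale via BitmapFactory’s inSampleSize, computed as the smaller of the width and height ratios. Here is that arithmetic in isolation as plain Java, mirroring the tutorial’s code (target 300×300 as above):

```java
public class SampleSizeDemo {
    // Mirror of the scale-factor arithmetic in decodeBitmapImage():
    // the smaller of the width and height ratios, using integer division.
    static int scaleFactor(int photoW, int photoH, int targetW, int targetH) {
        return Math.min(photoW / targetW, photoH / targetH);
    }

    public static void main(String[] args) {
        // A 1200x900 photo decoded toward 300x300 is subsampled 3x,
        // because 900 / 300 = 3 is the limiting dimension.
        System.out.println(scaleFactor(1200, 900, 300, 300)); // prints 3
    }
}
```

Note that the integer division yields 0 for images smaller than the target; BitmapFactory treats inSampleSize values below 1 as 1, and rounds other values down to the nearest power of two.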

The output of the above application in action is given below.

Try capturing the photo of a dog and you’ll see that the Vision API doesn’t detect its face (the API detects human faces only).

This brings an end to this tutorial. You can download the final Android Face Detection API Project from the link below.

Download Android Face Detection Project

Reference: Official Documentation

Translated from: https://www.journaldev.com/15629/android-face-detection
