This article is based on the Week 4 programming exercise of the Convolutional Neural Networks course in Andrew Ng's Deep Learning specialization. Face recognition problems are usually divided into two categories: face verification and face recognition. Face verification can serve as a building block for face recognition; for a detailed explanation see the Deep Learning course videos. This article assumes the reader is already familiar with the face recognition problem.

The required third-party libraries are listed below; the dataset and helper code used in this article can be downloaded here.

from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *

np.set_printoptions(threshold=np.nan)

1. Encoding Face Images

1.1 Computing an Encoding with a Convolutional Network

To decide whether two face images show the same person, the most direct approach is to compute a pixel-by-pixel distance and declare a match when the total falls below some threshold. Raw-pixel comparison, however, is easily thrown off by lighting, background, and similar factors, so instead we encode the input image img and compare the encodings f(img).

To save training time, we reuse the weights of an already-trained FaceNet model. The course provides an Inception-based network that maps an input image to a 128-dimensional vector, i.e., each image is encoded as a 128-dimensional embedding. The model is included in the download linked at the top of this article, in the file inception_blocks_v2.py (imported above).

The network takes 96x96x3 images as input. With batch size m, the input tensor has shape (m, n_C, n_H, n_W) = (m, 3, 96, 96), matching the channels-first data format set above. The output has shape (m, 128), since each image is encoded as a 128-dimensional vector.

Call the model faceRecoModel defined in inception_blocks_v2.py:

FRmodel = faceRecoModel(input_shape=(3,96,96))
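As a quick sanity check of the shapes described above, we can push one random channels-first image through the freshly built model (a minimal sketch; the course notebook reports roughly 3.74 million total parameters, though the exact count may vary slightly across Keras versions):

import numpy as np

# one random 96x96 RGB image in channels-first layout
dummy = np.random.rand(1, 3, 96, 96).astype(np.float32)
print(FRmodel.predict(dummy).shape)              # expected: (1, 128)
print("Total Params:", FRmodel.count_params())   # ~3.74M parameters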

The complete code of faceRecoModel is as follows:

import tensorflow as tf
import numpy as np
import os
from numpy import genfromtxt
from keras import backend as K
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
import fr_utils
from keras.layers.core import Lambda, Flatten, Dense

def inception_block_1a(X):
    """
    Implementation of an inception block
    """
    X_3x3 = Conv2D(96, (1, 1), data_format='channels_first', name='inception_3a_3x3_conv1')(X)
    X_3x3 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_3x3_bn1')(X_3x3)
    X_3x3 = Activation('relu')(X_3x3)
    X_3x3 = ZeroPadding2D(padding=(1, 1), data_format='channels_first')(X_3x3)
    X_3x3 = Conv2D(128, (3, 3), data_format='channels_first', name='inception_3a_3x3_conv2')(X_3x3)
    X_3x3 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_3x3_bn2')(X_3x3)
    X_3x3 = Activation('relu')(X_3x3)

    X_5x5 = Conv2D(16, (1, 1), data_format='channels_first', name='inception_3a_5x5_conv1')(X)
    X_5x5 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_5x5_bn1')(X_5x5)
    X_5x5 = Activation('relu')(X_5x5)
    X_5x5 = ZeroPadding2D(padding=(2, 2), data_format='channels_first')(X_5x5)
    X_5x5 = Conv2D(32, (5, 5), data_format='channels_first', name='inception_3a_5x5_conv2')(X_5x5)
    X_5x5 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_5x5_bn2')(X_5x5)
    X_5x5 = Activation('relu')(X_5x5)

    X_pool = MaxPooling2D(pool_size=3, strides=2, data_format='channels_first')(X)
    X_pool = Conv2D(32, (1, 1), data_format='channels_first', name='inception_3a_pool_conv')(X_pool)
    X_pool = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_pool_bn')(X_pool)
    X_pool = Activation('relu')(X_pool)
    X_pool = ZeroPadding2D(padding=((3, 4), (3, 4)), data_format='channels_first')(X_pool)

    X_1x1 = Conv2D(64, (1, 1), data_format='channels_first', name='inception_3a_1x1_conv')(X)
    X_1x1 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_1x1_bn')(X_1x1)
    X_1x1 = Activation('relu')(X_1x1)

    # CONCAT
    inception = concatenate([X_3x3, X_5x5, X_pool, X_1x1], axis=1)

    return inception

def inception_block_1b(X):
    X_3x3 = Conv2D(96, (1, 1), data_format='channels_first', name='inception_3b_3x3_conv1')(X)
    X_3x3 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_3x3_bn1')(X_3x3)
    X_3x3 = Activation('relu')(X_3x3)
    X_3x3 = ZeroPadding2D(padding=(1, 1), data_format='channels_first')(X_3x3)
    X_3x3 = Conv2D(128, (3, 3), data_format='channels_first', name='inception_3b_3x3_conv2')(X_3x3)
    X_3x3 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_3x3_bn2')(X_3x3)
    X_3x3 = Activation('relu')(X_3x3)

    X_5x5 = Conv2D(32, (1, 1), data_format='channels_first', name='inception_3b_5x5_conv1')(X)
    X_5x5 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_5x5_bn1')(X_5x5)
    X_5x5 = Activation('relu')(X_5x5)
    X_5x5 = ZeroPadding2D(padding=(2, 2), data_format='channels_first')(X_5x5)
    X_5x5 = Conv2D(64, (5, 5), data_format='channels_first', name='inception_3b_5x5_conv2')(X_5x5)
    X_5x5 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_5x5_bn2')(X_5x5)
    X_5x5 = Activation('relu')(X_5x5)

    X_pool = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), data_format='channels_first')(X)
    X_pool = Conv2D(64, (1, 1), data_format='channels_first', name='inception_3b_pool_conv')(X_pool)
    X_pool = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_pool_bn')(X_pool)
    X_pool = Activation('relu')(X_pool)
    X_pool = ZeroPadding2D(padding=(4, 4), data_format='channels_first')(X_pool)

    X_1x1 = Conv2D(64, (1, 1), data_format='channels_first', name='inception_3b_1x1_conv')(X)
    X_1x1 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_1x1_bn')(X_1x1)
    X_1x1 = Activation('relu')(X_1x1)

    inception = concatenate([X_3x3, X_5x5, X_pool, X_1x1], axis=1)

    return inception

def inception_block_1c(X):
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_3c_3x3',
                               cv1_out=128,
                               cv1_filter=(1, 1),
                               cv2_out=256,
                               cv2_filter=(3, 3),
                               cv2_strides=(2, 2),
                               padding=(1, 1))

    X_5x5 = fr_utils.conv2d_bn(X,
                               layer='inception_3c_5x5',
                               cv1_out=32,
                               cv1_filter=(1, 1),
                               cv2_out=64,
                               cv2_filter=(5, 5),
                               cv2_strides=(2, 2),
                               padding=(2, 2))

    X_pool = MaxPooling2D(pool_size=3, strides=2, data_format='channels_first')(X)
    X_pool = ZeroPadding2D(padding=((0, 1), (0, 1)), data_format='channels_first')(X_pool)

    inception = concatenate([X_3x3, X_5x5, X_pool], axis=1)

    return inception

def inception_block_2a(X):
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_4a_3x3',
                               cv1_out=96,
                               cv1_filter=(1, 1),
                               cv2_out=192,
                               cv2_filter=(3, 3),
                               cv2_strides=(1, 1),
                               padding=(1, 1))
    X_5x5 = fr_utils.conv2d_bn(X,
                               layer='inception_4a_5x5',
                               cv1_out=32,
                               cv1_filter=(1, 1),
                               cv2_out=64,
                               cv2_filter=(5, 5),
                               cv2_strides=(1, 1),
                               padding=(2, 2))

    X_pool = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), data_format='channels_first')(X)
    X_pool = fr_utils.conv2d_bn(X_pool,
                                layer='inception_4a_pool',
                                cv1_out=128,
                                cv1_filter=(1, 1),
                                padding=(2, 2))
    X_1x1 = fr_utils.conv2d_bn(X,
                               layer='inception_4a_1x1',
                               cv1_out=256,
                               cv1_filter=(1, 1))

    inception = concatenate([X_3x3, X_5x5, X_pool, X_1x1], axis=1)

    return inception

def inception_block_2b(X):
    # inception4e
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_4e_3x3',
                               cv1_out=160,
                               cv1_filter=(1, 1),
                               cv2_out=256,
                               cv2_filter=(3, 3),
                               cv2_strides=(2, 2),
                               padding=(1, 1))
    X_5x5 = fr_utils.conv2d_bn(X,
                               layer='inception_4e_5x5',
                               cv1_out=64,
                               cv1_filter=(1, 1),
                               cv2_out=128,
                               cv2_filter=(5, 5),
                               cv2_strides=(2, 2),
                               padding=(2, 2))

    X_pool = MaxPooling2D(pool_size=3, strides=2, data_format='channels_first')(X)
    X_pool = ZeroPadding2D(padding=((0, 1), (0, 1)), data_format='channels_first')(X_pool)

    inception = concatenate([X_3x3, X_5x5, X_pool], axis=1)

    return inception

def inception_block_3a(X):
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_5a_3x3',
                               cv1_out=96,
                               cv1_filter=(1, 1),
                               cv2_out=384,
                               cv2_filter=(3, 3),
                               cv2_strides=(1, 1),
                               padding=(1, 1))
    X_pool = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), data_format='channels_first')(X)
    X_pool = fr_utils.conv2d_bn(X_pool,
                                layer='inception_5a_pool',
                                cv1_out=96,
                                cv1_filter=(1, 1),
                                padding=(1, 1))
    X_1x1 = fr_utils.conv2d_bn(X,
                               layer='inception_5a_1x1',
                               cv1_out=256,
                               cv1_filter=(1, 1))

    inception = concatenate([X_3x3, X_pool, X_1x1], axis=1)

    return inception

def inception_block_3b(X):
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_5b_3x3',
                               cv1_out=96,
                               cv1_filter=(1, 1),
                               cv2_out=384,
                               cv2_filter=(3, 3),
                               cv2_strides=(1, 1),
                               padding=(1, 1))
    X_pool = MaxPooling2D(pool_size=3, strides=2, data_format='channels_first')(X)
    X_pool = fr_utils.conv2d_bn(X_pool,
                                layer='inception_5b_pool',
                                cv1_out=96,
                                cv1_filter=(1, 1))
    X_pool = ZeroPadding2D(padding=(1, 1), data_format='channels_first')(X_pool)

    X_1x1 = fr_utils.conv2d_bn(X,
                               layer='inception_5b_1x1',
                               cv1_out=256,
                               cv1_filter=(1, 1))

    inception = concatenate([X_3x3, X_pool, X_1x1], axis=1)

    return inception

def faceRecoModel(input_shape):
    """
    Implementation of the Inception model used for FaceNet

    Arguments:
    input_shape -- shape of the images of the dataset

    Returns:
    model -- a Model() instance in Keras
    """
    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # First Block
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(X)
    X = BatchNormalization(axis=1, name='bn1')(X)
    X = Activation('relu')(X)

    # Zero-Padding + MAXPOOL
    X = ZeroPadding2D((1, 1))(X)
    X = MaxPooling2D((3, 3), strides=2)(X)

    # Second Block
    X = Conv2D(64, (1, 1), strides=(1, 1), name='conv2')(X)
    X = BatchNormalization(axis=1, epsilon=0.00001, name='bn2')(X)
    X = Activation('relu')(X)

    # Zero-Padding
    X = ZeroPadding2D((1, 1))(X)

    # Third Block
    X = Conv2D(192, (3, 3), strides=(1, 1), name='conv3')(X)
    X = BatchNormalization(axis=1, epsilon=0.00001, name='bn3')(X)
    X = Activation('relu')(X)

    # Zero-Padding + MAXPOOL
    X = ZeroPadding2D((1, 1))(X)
    X = MaxPooling2D(pool_size=3, strides=2)(X)

    # Inception 1: a/b/c
    X = inception_block_1a(X)
    X = inception_block_1b(X)
    X = inception_block_1c(X)

    # Inception 2: a/b
    X = inception_block_2a(X)
    X = inception_block_2b(X)

    # Inception 3: a/b
    X = inception_block_3a(X)
    X = inception_block_3b(X)

    # Top layer
    X = AveragePooling2D(pool_size=(3, 3), strides=(1, 1), data_format='channels_first')(X)
    X = Flatten()(X)
    X = Dense(128, name='dense_layer')(X)

    # L2 normalization
    X = Lambda(lambda x: K.l2_normalize(x, axis=1))(X)

    # Create model instance
    model = Model(inputs=X_input, outputs=X, name='FaceRecoModel')

    return model

The last layer of the network is a fully connected layer with 128 neurons, which guarantees that the output vector is 128-dimensional. These 128-dimensional output vectors can then be used to compare two face images.
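Concretely, the comparison reduces to an L2 distance between the two encodings. A minimal NumPy sketch, with random vectors standing in for real model outputs:

import numpy as np

# stand-ins for f(img1) and f(img2); in practice these come from FRmodel
enc1 = np.random.rand(128)
enc2 = np.random.rand(128)

dist = np.linalg.norm(enc1 - enc2)   # L2 distance between the two encodings
print(dist < 0.7)                    # the threshold used later in this article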

How do we judge whether an encoding scheme is suitable? Two criteria apply:

  • Encodings of different photos of the same person are very similar
  • Encodings of photos of different people are very different

The triplet loss function embodies exactly these two criteria: it pushes the encodings of two images of the same person closer together while pulling the encodings of images of two different people further apart.

1.2 The Triplet Loss Function

For an input image x, we denote its encoding by f(x), where f is the function computed by the neural network.

Training uses triplets (A, P, N):

A: Anchor, an image of some person's face

P: Positive, an image of the same person as A

N: Negative, an image of a different person than A

These triplets are drawn from the training set; we write (A(i), P(i), N(i)) for the i-th training example. The triplet loss requires that the distance between A(i) and P(i) be smaller than the distance between A(i) and N(i) by at least a margin alpha, typically set to 0.2.

The triplet loss J is:

$$J = \sum_{i=1}^{m} \left[\, \left\| f(A^{(i)}) - f(P^{(i)}) \right\|_2^2 \;-\; \left\| f(A^{(i)}) - f(N^{(i)}) \right\|_2^2 \;+\; \alpha \,\right]_+$$

Note the "+" subscript on the bracket: [z]_+ denotes max(z, 0). The code is as follows:

def triplet_loss(y_true, y_pred, alpha = 0.2):
    '''
    Arguments:
    y_true -- true labels
    y_pred -- python list containing three objects:
        anchor -- shape (None, 128)
        positive -- shape (None, 128)
        negative -- shape (None, 128)
    Returns:
    loss -- value of the loss
    '''
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
    # squared distance between the anchor and the positive encodings
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)))
    # squared distance between the anchor and the negative encodings
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)))
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    # hinge: take max(basic_loss, 0)
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.))
    return loss

with tf.Session() as test:
    tf.set_random_seed(1)
    y_true = (None, None, None)
    y_pred = (tf.random_normal([3, 128], mean = 6, stddev = 0.1, seed = 1),
              tf.random_normal([3, 128], mean = 1, stddev = 1, seed = 1),
              tf.random_normal([3, 128], mean = 3, stddev = 4, seed = 1))
    loss = triplet_loss(y_true, y_pred)
    print("loss = " + str(loss.eval()))
loss = 350.026
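To make the arithmetic concrete, the same computation can be reproduced in plain NumPy (a sketch following the batch-summed convention of the code above; since tf.random_normal and NumPy draw different random sequences, the printed value will not reproduce 350.026 exactly):

import numpy as np

def triplet_loss_np(anchor, positive, negative, alpha=0.2):
    # squared L2 distances, summed over the whole batch (as in the TF code above)
    pos_dist = np.sum(np.square(anchor - positive))
    neg_dist = np.sum(np.square(anchor - negative))
    # hinge: only penalize when pos_dist + alpha exceeds neg_dist
    return np.maximum(pos_dist - neg_dist + alpha, 0.0)

rng = np.random.RandomState(1)
A = rng.normal(6, 0.1, (3, 128))   # anchor encodings
P = rng.normal(1, 1.0, (3, 128))   # positive encodings
N = rng.normal(3, 4.0, (3, 128))   # negative encodings
print(triplet_loss_np(A, P, N))

Note that the formula sums the hinge per training example, whereas this code (like the TF version above) applies the hinge once to the batch-summed distances; the per-example variant would replace np.sum with np.sum(..., axis=-1).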

2. Loading the Pre-trained Model

Training this model requires a large amount of data and computation, so we will not train it from scratch. Instead we load a model that has already been trained, which saves a great deal of time.

FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)

The figure below (from the course assignment, omitted here) shows the encoding distances between three different individuals.

3. Applying the Model

Now we can use this model for face verification and face recognition. Here we extend the Happy House problem: the system should not only detect a happy expression but also recognize the face of each resident.

3.1 Face Verification

First we build a database containing an encoding vector for every person authorized to enter the house. We use the function img_to_encoding(image_path, model) to compute these encodings; it runs a forward pass of the model on the given image.

def img_to_encoding(image_path, model):
    # read the image; OpenCV loads it in BGR channel order
    img1 = cv2.imread(image_path, 1)
    # convert BGR to RGB
    img = img1[...,::-1]
    # move channels first and scale pixel values to [0, 1]
    img = np.around(np.transpose(img, (2,0,1))/255.0, decimals=12)
    x_train = np.array([img])
    # forward propagation to obtain the 128-dimensional encoding
    embedding = model.predict_on_batch(x_train)
    return embedding
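A quick usage check, using one of the images from the dataset referenced below:

enc = img_to_encoding("images/danielle.png", FRmodel)
print(enc.shape)   # (1, 128): one 128-dimensional encoding per input image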

Run the following code to build the database, which maps each person's name to a 128-dimensional encoding of their face:

database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)

We now write a verify() function that checks the image captured by the front-door camera and decides whether the person at the door is authorized. The function works in three steps:

(1) Compute the encoding of the camera image;

(2) Compare it against the stored encoding of the claimed identity by computing the distance between the two encodings;

(3) Open the door if the distance is less than 0.7.

def verify(image_path, identity, database, model):
    # Step 1: compute the encoding of the camera image
    encoding = img_to_encoding(image_path, model)
    # Step 2: L2 distance to the stored encoding of the claimed identity
    dist = np.linalg.norm(encoding - database[identity])
    # Step 3: open the door only if the distance is below the threshold
    if dist < 0.7:
        print("It's " + str(identity) + " welcome home!")
        door_open = True
    else:
        print("It's not " + str(identity) + " please go away")
        door_open = False
    return dist, door_open
verify("images/camera_0.jpg", "younes", database, FRmodel)

Suppose younes wants to enter the Happy House and the camera captures his picture (stored as "images/camera_0.jpg"). Let's run the verify function and see what result we get:

It's younes welcome home!

Now suppose benoit has borrowed kian's ID card and tries to enter the Happy House. The camera captures his picture (stored as "images/camera_2.jpg"). Let's run verify again and see whether he is let in:

verify("images/camera_2.jpg", "kian", database, FRmodel)

It's not kian please go away

3.2 Face Recognition

Our face verification system now works, but it has an awkward failure mode: if someone loses their ID card, they can no longer get home. We can upgrade the verification system to a face recognition system so that no one needs to carry an ID card: the system compares the camera image against every entry in the database and lets the person in when it finds a match.

Below we write a who_is_it() function that identifies the person captured by the front-door camera. It works in two steps:

(1) Compute the encoding of the target image;

(2) Find the database entry whose encoding has the minimum distance to the target encoding.

def who_is_it(image_path, database, model):
    # Step 1: compute the encoding of the target image
    encoding = img_to_encoding(image_path, model)
    # Step 2: scan the database for the entry with the minimum distance
    min_dist = 100
    for (name, db_enc) in database.items():
        dist = np.linalg.norm(encoding - db_enc)
        if dist < min_dist:
            min_dist = dist
            identity = name
    if min_dist > 0.7:
        print("Not in the database.")
    else:
        print("It's " + str(identity) + ", the distance is " + str(min_dist))
    return min_dist, identity

Suppose younes wants to enter the Happy House and the camera captures his picture (stored as "images/camera_0.jpg"):

who_is_it("images/camera_0.jpg", database, FRmodel)

The output is:

It's younes, the distance is 0.6710074

Our face recognition system is now up and running.
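One straightforward way to make the system more robust (one of the improvements suggested in the course assignment) is to store several encodings per person and match against their average. A sketch, where person_encoding is a hypothetical helper and "images/younes_2.jpg" is a hypothetical extra photo:

def person_encoding(image_paths, model):
    # average the encodings of several photos of the same person
    encodings = [img_to_encoding(p, model) for p in image_paths]
    return np.mean(np.concatenate(encodings, axis=0), axis=0, keepdims=True)

# hypothetical: register younes with two photos instead of one
database["younes"] = person_encoding(["images/younes.jpg", "images/younes_2.jpg"], FRmodel)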

4. References

  • Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering
  • Yaniv Taigman, Ming Yang, Marc’Aurelio Ranzato, Lior Wolf (2014). DeepFace: Closing the gap to human-level performance in face verification
  • The pretrained model we use is inspired by Victor Sy Wang’s implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.
  • Our implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet
