Starting from the eye-contour landmarks the model has already produced, this part locates the iris (the dark area in the middle of the eye) and the pupil, and uses a tkinter Frame to embed the video stream in the UI so the window stays tidy.
Pupil detection relies on a mean-grayscale idea: within the eye region, the point whose surrounding neighborhood has the lowest average gray value is taken as the pupil center, and a circle is drawn there to mark the pupil.
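A minimal sketch of that idea (not part of the program below — eye_roi and r are hypothetical names for a grayscale crop of one eye and a rough pupil radius, and OpenCV's boxFilter stands in for the hand-rolled running sums used further down):

import cv2
import numpy as np

def darkest_window_center(eye_roi, r):
    # Average gray value over a (2r+1) x (2r+1) window around every pixel;
    # the window with the lowest mean is taken as the pupil.
    mean = cv2.boxFilter(eye_roi.astype(np.float32), -1, (2 * r + 1, 2 * r + 1))
    y, x = np.unravel_index(np.argmin(mean), mean.shape)
    return x, y  # pupil center, relative to the eye crop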
In the tkinter GUI the video is shown through a Frame widget with a Label placed inside it; both the Frame and the Label need a grid() call. The streaming itself goes into a function: each captured frame is converted to a Tk-compatible format, assigned to that Label, and an after() call on the Label schedules the next frame, so the frames play back continuously.
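Reduced to its core, that display loop looks roughly like this (a sketch with illustrative widget names, not the ones used in the full program below):

import tkinter as tk
import cv2
from PIL import Image, ImageTk

root = tk.Tk()
container = tk.Frame(root)         # Frame that holds the video area
container.grid()
video_label = tk.Label(container)  # the Label actually shows each frame
video_label.grid()
cap = cv2.VideoCapture(0)

def stream():
    ok, frame = cap.read()
    if ok:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
        imgtk = ImageTk.PhotoImage(Image.fromarray(rgb))
        video_label.imgtk = imgtk          # keep a reference so Tk does not drop the image
        video_label.configure(image=imgtk)
    video_label.after(30, stream)          # schedule the next frame (~30 ms later)

stream()
root.mainloop()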
The full code is as follows:

import tkinter as tk
import imutils
from PIL import ImageTk, Image
import cv2
from scipy.spatial import distance as dist
from imutils.video import FileVideoStream
from imutils.video import VideoStream
from imutils import face_utils
import numpy as np
import argparse
import time
import dlib


def eye_aspect_ratio(eye):
    # compute the euclidean distances between the two sets of
    # vertical eye landmarks (x, y)-coordinates
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    # compute the euclidean distance between the horizontal
    # eye landmark (x, y)-coordinates
    C = dist.euclidean(eye[0], eye[3])
    # compute the eye aspect ratio
    ear = (A + B) / (2.0 * C)
    # return the eye aspect ratio
    return ear


# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
# ap.add_argument("-p", "--shape-predictor", required=True,
#   help="path to facial landmark predictor")
ap.add_argument("-v", "--video", type=str, default="",help="path to input video file")
args = vars(ap.parse_args())# define two constants, one for the eye aspect ratio to indicate
# blink and then a second constant for the number of consecutive
# frames the eye must be below the threshold
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 3
# initialize the frame counters and the total number of blinks
COUNTER = 0
TOTAL = 0

# initialize dlib's face detector (HOG-based) and then create
# the facial landmark predictor
print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# grab the indexes of the facial landmarks for the left and
# right eye, respectively
(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
(lbrowStart, lbrowEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eyebrow"]
(rbrowStart, rbrowEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eyebrow"]
(mouthStart, mouthEnd) = face_utils.FACIAL_LANDMARKS_IDXS["mouth"]
(innermouthStart, innermouthEnd) = face_utils.FACIAL_LANDMARKS_IDXS["inner_mouth"]
(noseStart, noseEnd) = face_utils.FACIAL_LANDMARKS_IDXS["nose"]
(jawStart, jawEnd) = face_utils.FACIAL_LANDMARKS_IDXS["jaw"]  # jaw line

root = tk.Tk()
root.title('摄像头窗口')
root.geometry("600x400")

lsize = tk.Label(root, text='窗口大小', font=('Arial', 12), bg='red')
lsize.place(x=0, y=0, width=120, height=30)

# Create a frame
app = tk.Frame(root, bg="white")
app.grid()
app.place(x=0, y=30, width=450, height=337)

# Create a label in the frame
lmain = tk.Label(app)
lmain.grid()

# width=tk.IntVar()
# width.set(0)
# height=tk.IntVar()
# height.set(0)
width = 0
height = 0
k = 0
# Capture from camera
cap = cv2.VideoCapture(0)
# cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)

lsize2 = tk.Label(root, text=str(width) + 'x' + str(height), font=('Arial', 12), bg='green')
lsize2.place(x=120, y=0, width=80, height=30)

vblink = tk.IntVar()
veye = tk.IntVar()
vpupil = tk.IntVar()
viris = tk.IntVar()
vmouth = tk.IntVar()
vinnermouth = tk.IntVar()
veyebrow = tk.IntVar()
vnose = tk.IntVar()
vjaw = tk.IntVar()


def destroy():
    cap.release()
    # root.destroy()
    # cv2.destroyAllWindows()


# function for video streaming
def video_stream():
    # while(cap.isOpened):
    cv2.waitKey(50)
    global k, TOTAL, COUNTER, vblink, veye, vpupil, viris, vmouth, vinnermouth, veyebrow, vjaw
    ret, frame = cap.read()
    # try:
    frame = imutils.resize(frame, width=450)
    global height, width
    if k % 100 == 0:
        (height, width, dimension) = frame.shape
        # height.set(h)
        # width.set(w)
        lsize2.config(text=str(width) + 'x' + str(height))
    k = k + 1
    # print(k)
    # print(frame.shape)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rects = detector(gray, 0)
    print(rects)
    # loop over the face detections
    for rect in rects:
        # determine the facial landmarks for the face region, then
        # convert the facial landmark (x, y)-coordinates to a NumPy
        # array
        (height, width) = gray.shape
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)
        # extract the left and right eye coordinates, then use the
        # coordinates to compute the eye aspect ratio for both eyes
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        # average the eye aspect ratio together for both eyes
        ear = (leftEAR + rightEAR) / 2.0
        lefteyebrow = shape[lbrowStart:lbrowEnd]
        righteyebrow = shape[rbrowStart:rbrowEnd]
        mouth = shape[mouthStart:mouthEnd]
        innermouth = shape[innermouthStart:innermouthEnd]
        nose = shape[noseStart:noseEnd]
        jaw = shape[jawStart:jawEnd]
        # compute the convex hull for the left and right eye, then
        # visualize each of the eyes
        # mark the eye contours in green
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        # print(vkind2)
        if veye.get() > 0:
            cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
            cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
        lefteyebrowHull = cv2.convexHull(lefteyebrow)
        righteyebrowHull = cv2.convexHull(righteyebrow)
        mouthHull = cv2.convexHull(mouth)
        innermouthHull = cv2.convexHull(innermouth)
        noseHull = cv2.convexHull(nose)
        jawHull = cv2.convexHull(jaw)
        if veyebrow.get() > 0:  # eyebrows
            cv2.drawContours(frame, [lefteyebrowHull], -1, (0, 0, 0), 1)
            cv2.drawContours(frame, [righteyebrowHull], -1, (0, 0, 0), 1)
        if vmouth.get() > 0:  # mouth
            cv2.drawContours(frame, [mouthHull], -1, (0, 128, 255), 1)
        if vinnermouth.get() > 0:  # inner mouth
            cv2.drawContours(frame, [innermouthHull], -1, (255, 0, 255), 1)
        if vnose.get() > 0:  # nose
            cv2.drawContours(frame, [noseHull], -1, (128, 128, 255), 1)
        if vjaw.get() > 0:  # jaw
            cv2.drawContours(frame, [jawHull], -1, (0, 128, 200), 1)
        # bounding box of each eye from its landmark points
        lefteyexmin = 1000
        lefteyexmax = 0
        lefteyeymin = 1000
        lefteyeymax = 0
        righteyexmin = 1000
        righteyexmax = 0
        righteyeymin = 1000
        righteyeymax = 0
        for p in leftEye:
            if p[0] < lefteyexmin:
                lefteyexmin = p[0]
            if p[0] > lefteyexmax:
                lefteyexmax = p[0]
            if p[1] < lefteyeymin:
                lefteyeymin = p[1]
            if p[1] > lefteyeymax:
                lefteyeymax = p[1]
        for p in rightEye:
            if p[0] < righteyexmin:
                righteyexmin = p[0]
            if p[0] > righteyexmax:
                righteyexmax = p[0]
            if p[1] < righteyeymin:
                righteyeymin = p[1]
            if p[1] > righteyeymax:
                righteyeymax = p[1]
        print('left range:', lefteyexmin, lefteyeymin, 'to', lefteyexmax, lefteyeymax)
        print('right range:', righteyexmin, righteyeymin, 'to', righteyexmax, righteyeymax)
        # lefteyexmin=np.min(leftEyeHull)
        # lefteyexmax=np.max(leftEyeHull[0])
        # righteyexmin=np.min(rightEyeHull[0])
        # righteyexmax=np.max(rightEyeHull[0])
        # lefteyeymin=np.min(leftEyeHull[1])
        # lefteyeymax=np.max(leftEyeHull[1])
        # righteyeymin=np.min(rightEyeHull[1])
        # righteyeymax=np.max(rightEyeHull[1])
        minxl = 0
        minyl = 0
        minl = 255
        minxr = 0
        minyr = 0
        minr = 255
        lengthxl = lefteyexmax - lefteyexmin
        lengthxr = righteyexmax - righteyexmin
        lengthyl = lefteyeymax - lefteyeymin
        lengthyr = righteyeymax - righteyeymin
        print('left size:', lengthxl, lengthyl, 'right size:', lengthxr, lengthyr)
        # radiusil = int(lengthxl / 6)  # left iris radius
        # radiusir = int(lengthxr / 6)  # right iris radius
        #
        # irislxmin = max(lefteyexmin, 0)
        # irislxmax = min(lefteyexmax, width-1)
        # irislymin = max(lefteyeymin, 0)
        # irislymax = min(lefteyeymax, height-1)
        # irisrxmin = max(lefteyexmin, 0)
        # irisrxmax = min(lefteyexmax, width-1)
        # irisrymin = max(lefteyeymin, 0)
        # irisrymax = min(lefteyeymax, height-1)
        radiuspl = int(lengthxl / 6)  # left pupil search radius
        radiuspr = int(lengthxr / 6)  # right pupil search radius
        # if irislxmax - irislxmin < 15 or irislymax - irislymin < 15:  # too far away: iris too small, skip detection
        #     continue
        pupillxmin = max(int(lefteyexmin + lengthxl / 6), 0)
        pupillxmax = min(int(lefteyexmax - lengthxl / 6), width)
        pupillymin = max(int(lefteyeymin - lengthyl / 4), 0)
        pupillymax = min(int(lefteyeymax + lengthyl / 4), height)
        if pupillxmax - pupillxmin < 3 or pupillymax - pupillymin < 3:
            print('距离太近或者太远')
            continue
        if 2 * radiuspl + 1 > pupillxmax - pupillxmin:
            print('左眼瞳孔半径估计', radiuspl)
            print('左眼瞳孔瞳孔搜索范围x坐标', pupillxmin, 'to', pupillxmax)
            print('左眼太靠左右两边')
            continue
        if 2 * radiuspl + 1 > pupillymax - pupillymin:
            print('左眼瞳孔半径估计', radiuspl)
            print('左眼瞳孔瞳孔搜索范围y坐标', pupillymin, 'to', pupillymax)
            print('左眼太靠上下两边')
            continue
        pupilrxmin = max(int(righteyexmin + lengthxr / 6), 0)
        pupilrxmax = min(int(righteyexmax - lengthxr / 6), width)
        pupilrymin = max(int(righteyeymin - lengthyr / 4), 0)
        pupilrymax = min(int(righteyeymax + lengthyr / 4), height)
        if pupilrxmax - pupilrxmin < 3 or pupilrymax - pupilrymin < 3:
            print('距离太近或者太远')
            continue
        if 2 * radiuspr + 1 > pupilrxmax - pupilrxmin:
            print('右眼瞳孔半径估计', radiuspr)
            print('右眼瞳孔瞳孔搜索范围x坐标', pupilrxmin, 'to', pupilrxmax)
            print('右眼太靠左右两边')
            continue
        if 2 * radiuspr + 1 > pupilrymax - pupilrymin:
            print('右眼瞳孔半径估计', radiuspr)
            print('右眼瞳孔瞳孔搜索范围y坐标', pupilrymin, 'to', pupilrymax)
            print('右眼太靠上下两边')
            continue
        # running (windowed) sums of gray values over the left-eye search window
        sumrowl = np.zeros((pupillymax - pupillymin, pupillxmax - pupillxmin))
        suml = np.zeros((pupillymax - pupillymin, pupillxmax - pupillxmin))
        arearowl = np.zeros((pupillymax - pupillymin, pupillxmax - pupillxmin))
        areal = np.zeros((pupillymax - pupillymin, pupillxmax - pupillxmin))
        # avgl = np.zeros((pupillymax - pupillymin, pupillxmax - pupillxmin))
        for y in range(pupillymin, pupillymax):
            relay = y - pupillymin  # map the image point into the pupil search window (coordinate shift)
            sumrowl[relay][0] = 0
            arearowl[relay][0] = radiuspl + 1
            for x in range(pupillxmin, pupillxmin + radiuspl + 1):  # radiuspl more points to the right of the start
                sumrowl[relay][0] = sumrowl[relay][0] + gray[y][x]
            for x in range(pupillxmin + 1, pupillxmin + radiuspl + 1):
                relax = x - pupillxmin  # map the image point into the pupil search window (coordinate shift)
                sumrowl[relay][relax] = sumrowl[relay][relax - 1] + gray[y][x + radiuspl]
                arearowl[relay][relax] = relax + radiuspl + 1
            for x in range(pupillxmin + radiuspl + 1, pupillxmax - radiuspl):
                relax = x - pupillxmin
                sumrowl[relay][relax] = sumrowl[relay][relax - 1] + gray[y][x + radiuspl] - gray[y][x - radiuspl - 1]
                arearowl[relay][relax] = 2 * radiuspl + 1
            for x in range(pupillxmax - radiuspl, pupillxmax):
                relax = x - pupillxmin  # map the image point into the pupil search window (coordinate shift)
                sumrowl[relay][relax] = sumrowl[relay][relax - 1] - gray[y][x - radiuspl - 1]
                arearowl[relay][relax] = pupillxmax - x + radiuspl
        for x in range(pupillxmin, pupillxmax):
            relax = x - pupillxmin  # map the image point into the pupil search window (coordinate shift)
            suml[0][relax] = 0
            areal[0][relax] = 0
            for y in range(pupillymin, pupillymin + radiuspl + 1):  # radiuspl more points below the start
                relay = y - pupillymin
                suml[0][relax] = suml[0][relax] + sumrowl[relay][relax]
                areal[0][relax] = areal[0][relax] + arearowl[relay][relax]
            for y in range(pupillymin + 1, pupillymin + radiuspl + 1):
                relay = y - pupillymin  # map the image point into the pupil search window (coordinate shift)
                suml[relay][relax] = suml[relay - 1][relax] + sumrowl[relay + radiuspl][relax]
                areal[relay][relax] = areal[relay - 1][relax] + arearowl[relay + radiuspl][relax]
            for y in range(pupillymin + radiuspl + 1, pupillymax - radiuspl):
                relay = y - pupillymin
                suml[relay][relax] = suml[relay - 1][relax] + sumrowl[relay + radiuspl][relax] - \
                    sumrowl[relay - radiuspl - 1][relax]
                areal[relay][relax] = areal[relay - 1][relax]
            for y in range(pupillymax - radiuspl, pupillymax):
                relay = y - pupillymin  # map the image point into the pupil search window (coordinate shift)
                suml[relay][relax] = suml[relay - 1][relax] - sumrowl[relay - radiuspl - 1][relax]
                areal[relay][relax] = areal[relay - 1][relax] - arearowl[relay - radiuspl - 1][relax]
        avgl = suml / areal
        print('suml', suml)
        print('areal', areal)
        print('avgl', avgl)
        for y in range(pupillymin, pupillymax):
            for x in range(pupillxmin, pupillxmax):
                if avgl[y - pupillymin][x - pupillxmin] < minl:
                    minl = avgl[y - pupillymin][x - pupillxmin]
                    minxl = x
                    minyl = y
        # same windowed-mean computation for the right-eye search window
        sumrowr = np.zeros((pupilrymax - pupilrymin, pupilrxmax - pupilrxmin))
        sumr = np.zeros((pupilrymax - pupilrymin, pupilrxmax - pupilrxmin))
        arearowr = np.zeros((pupilrymax - pupilrymin, pupilrxmax - pupilrxmin))
        arear = np.zeros((pupilrymax - pupilrymin, pupilrxmax - pupilrxmin))
        avgr = np.zeros((pupilrymax - pupilrymin, pupilrxmax - pupilrxmin))
        for y in range(pupilrymin, pupilrymax):
            relay = y - pupilrymin  # map the image point into the pupil search window (coordinate shift)
            sumrowr[relay][0] = 0
            arearowr[relay][0] = radiuspr + 1
            for x in range(pupilrxmin, pupilrxmin + radiuspr + 1):  # radiuspr more points to the right of the start
                sumrowr[relay][0] = sumrowr[relay][0] + gray[y][x]
            for x in range(pupilrxmin + 1, pupilrxmin + radiuspr + 1):
                relax = x - pupilrxmin  # map the image point into the pupil search window (coordinate shift)
                sumrowr[relay][relax] = sumrowr[relay][relax - 1] + gray[y][x + radiuspr]
                arearowr[relay][relax] = relax + radiuspr + 1
            for x in range(pupilrxmin + radiuspr + 1, pupilrxmax - radiuspr):
                relax = x - pupilrxmin
                sumrowr[relay][relax] = sumrowr[relay][relax - 1] + gray[y][x + radiuspr] - gray[y][x - radiuspr - 1]
                arearowr[relay][relax] = 2 * radiuspr + 1
            for x in range(pupilrxmax - radiuspr, pupilrxmax):
                relax = x - pupilrxmin  # map the image point into the pupil search window (coordinate shift)
                sumrowr[relay][relax] = sumrowr[relay][relax - 1] - gray[y][x - radiuspr - 1]
                arearowr[relay][relax] = pupilrxmax - x + radiuspr
        for x in range(pupilrxmin, pupilrxmax):
            relax = x - pupilrxmin  # map the image point into the pupil search window (coordinate shift)
            sumr[0][relax] = 0
            arear[0][relax] = 0
            for y in range(pupilrymin, pupilrymin + radiuspr + 1):  # radiuspr more points below the start
                relay = y - pupilrymin
                sumr[0][relax] = sumr[0][relax] + sumrowr[relay][relax]
                arear[0][relax] = arear[0][relax] + arearowr[relay][relax]
            for y in range(pupilrymin + 1, pupilrymin + radiuspr + 1):
                relay = y - pupilrymin  # map the image point into the pupil search window (coordinate shift)
                sumr[relay][relax] = sumr[relay - 1][relax] + sumrowr[relay + radiuspr][relax]
                arear[relay][relax] = arear[relay - 1][relax] + arearowr[relay + radiuspr][relax]
            for y in range(pupilrymin + radiuspr + 1, pupilrymax - radiuspr):
                relay = y - pupilrymin
                sumr[relay][relax] = sumr[relay - 1][relax] + sumrowr[relay + radiuspr][relax] - \
                    sumrowr[relay - radiuspr - 1][relax]
                arear[relay][relax] = arear[relay - 1][relax]
            for y in range(pupilrymax - radiuspr, pupilrymax):
                relay = y - pupilrymin  # map the image point into the pupil search window (coordinate shift)
                sumr[relay][relax] = sumr[relay - 1][relax] - sumrowr[relay - radiuspr - 1][relax]
                arear[relay][relax] = arear[relay - 1][relax] - arearowr[relay - radiuspr - 1][relax]
        avgr = sumr / arear
        print('sumr', sumr)
        print('arear', arear)
        print('avgr', avgr)
        for y in range(pupilrymin, pupilrymax):
            for x in range(pupilrxmin, pupilrxmax):
                if avgr[y - pupilrymin][x - pupilrxmin] < minr:
                    minr = avgr[y - pupilrymin][x - pupilrxmin]
                    minxr = x
                    minyr = y
        if viris.get() > 0:
            # print('left:', minxl, minyl, 'right:', minxr, minyr)
            cv2.circle(frame, (minxl, minyl), int(lengthxl / 5), (255, 0, 0), 2)
            cv2.circle(frame, (minxr, minyr), int(lengthxr / 5), (255, 0, 0), 2)
        # print(vkind3)
        if vpupil.get() > 0:
            cv2.circle(frame, (minxl, minyl), int(lengthxl / 20), (0, 0, 255), 2)
            cv2.circle(frame, (minxr, minyr), int(lengthxr / 20), (0, 0, 255), 2)
        content = 'left:(' + str(minxl) + ',' + str(minyl) + '),right:(' + str(minxr) + ',' + str(minyr) + ')'
        txtfile = open('../OpenCV/1.txt', 'w+')
        print(content, file=txtfile)
        txtfile.close()  # close the handle so it is not leaked on every frame
        cv2.imwrite('1.png', frame)
        # check to see if the eye aspect ratio is below the blink
        # threshold, and if so, increment the blink frame counter
        if ear < EYE_AR_THRESH:
            COUNTER += 1
        # otherwise, the eye aspect ratio is not below the blink
        # threshold
        else:
            # if the eyes were closed for a sufficient number of frames,
            # then increment the total number of blinks
            if COUNTER >= EYE_AR_CONSEC_FRAMES:
                TOTAL += 1
            # reset the eye frame counter
            # COUNTER = 0
        # draw the total number of blinks on the frame along with
        # the computed eye aspect ratio for the frame
        # print(vkind1)
        if vblink.get() > 0:
            cv2.putText(frame, "Blinks: {}".format(TOTAL), (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            # cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
            #             cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
    img = Image.fromarray(cv2image)
    imgtk = ImageTk.PhotoImage(image=img)
    lmain.imgtk = imgtk
    lmain.configure(image=imgtk)
    lmain.after(1, video_stream)
    # except:
    #     cap.release()


def start():
    global cap
    cap = cv2.VideoCapture(0)
    video_stream()


# video_stream()
bstart = tk.Button(root, text='开启摄像头', font=('Arial', 12), bg='gray', command=start)
bstart.place(x=470, y=300, width=100, height=30)

bstop = tk.Button(root, text='暂停', font=('Arial', 12), bg='gray', command=destroy)
bstop.place(x=470, y=350, width=100, height=30)

cblink = tk.Checkbutton(root, text='显示眨眼', font=('Arial', 12), fg='red', variable=vblink, onvalue=1, offvalue=0)
cblink.place(x=470, y=0, width=90, height=30)

ceye = tk.Checkbutton(root, text='显示眼睛', font=('Arial', 12), fg='green', variable=veye, onvalue=1, offvalue=0)
ceye.place(x=470, y=30, width=90, height=30)

cpupil = tk.Checkbutton(root, text='显示瞳孔', font=('Arial', 12), fg='red', variable=vpupil, onvalue=1, offvalue=0)
cpupil.place(x=470, y=60, width=90, height=30)

ciris = tk.Checkbutton(root, text='显示虹膜', font=('Arial', 12), fg='blue', variable=viris, onvalue=1, offvalue=0)
ciris.place(x=470, y=90, width=90, height=30)

# mouth = tk.IntVar()
# vinnermouth = tk.IntVar()
# veyebrow = tk.IntVar()
# vnose = tk.IntVar()
# vjaw = tk.IntVar()
cmouth = tk.Checkbutton(root, text='显示嘴', font=('Arial', 12), fg='orange', variable=vmouth, onvalue=1, offvalue=0)
cmouth.place(x=462, y=120, width=90, height=30)

cnose = tk.Checkbutton(root, text='显示鼻子', font=('Arial', 12), fg='pink', variable=vnose, onvalue=1, offvalue=0)
cnose.place(x=470, y=150, width=90, height=30)

ceyebrow = tk.Checkbutton(root, text='显示眉毛', font=('Arial', 12), fg='black', variable=veyebrow, onvalue=1, offvalue=0)
ceyebrow.place(x=470, y=180, width=90, height=30)

cinnermouth = tk.Checkbutton(root, text='显示内嘴', font=('Arial', 12), fg='purple', variable=vinnermouth, onvalue=1, offvalue=0)
cinnermouth.place(x=470, y=210, width=90, height=30)

cjaw = tk.Checkbutton(root, text='显示下颌', font=('Arial', 12), fg='brown', variable=vjaw, onvalue=1, offvalue=0)
cjaw.place(x=470, y=240, width=90, height=30)

root.mainloop()
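To run the script as written, shape_predictor_68_face_landmarks.dat has to be present in the working directory (it is loaded with a hard-coded path), the webcam is opened as device 0, and the pupil coordinates are written to ../OpenCV/1.txt plus a snapshot to 1.png on every processed frame. The imports imply that dlib, imutils, opencv-python, Pillow, scipy, and numpy are installed.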
