I. Overview

  • OpenPose was originally developed at Carnegie Mellon University, and its implementation is based on the models proposed in several papers published in succession:

    • CVPR 2016: Convolutional Pose Machines (CPM)
    • CVPR 2017: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
    • CVPR 2017: Hand Keypoint Detection in Single Images using Multiview Bootstrapping
  • The original implementation is computationally expensive: it generally has to run on a GPU, and even then the frame rate is low (under 5 fps). A number of improved versions have since appeared.
  • These improved versions mainly modify or prune the model. To get OpenPose running on mobile (e.g., the various dance-pose apps), the algorithm itself was also optimized alongside the network architecture, reducing the computational cost, though with a corresponding drop in accuracy.

II. Simplified OpenPose Implementation

  • Source code on GitHub: human-pose-estimation-opencv
  • The code is fairly simple: the pretrained model (small: 7.8 MB) is stored in graph_opt.pb, and the entire implementation is in openpose.py. The implementation code and test results follow:
# To use Inference Engine backend, specify location of plugins:
# export LD_LIBRARY_PATH=/opt/intel/deeplearning_deploymenttoolkit/deployment_tools/external/mklml_lnx/lib:$LD_LIBRARY_PATH
import cv2 as cv
import numpy as np
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--input', help='Path to image or video. Skip to capture frames from camera')
parser.add_argument('--thr', default=0.2, type=float, help='Threshold value for pose parts heat map')
parser.add_argument('--width', default=368, type=int, help='Resize input to specific width.')
parser.add_argument('--height', default=368, type=int, help='Resize input to specific height.')

args = parser.parse_args()

BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
               "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
               "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
               "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }

POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
               ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
               ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
               ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
               ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]

inWidth = args.width
inHeight = args.height

net = cv.dnn.readNetFromTensorflow("graph_opt.pb")

cap = cv.VideoCapture(args.input if args.input else 0)

while cv.waitKey(1) < 0:
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight),
                                      (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # MobileNet output [1, 57, -1, -1], we only need the first 19 elements

    assert(len(BODY_PARTS) == out.shape[1])

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice heatmap of corresponding body part.
        heatMap = out[0, i, :, :]

        # Originally, we try to find all the local maximums. To simplify a sample
        # we just find a global one. However only a single pose at the same time
        # could be detected this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Add a point if its confidence is higher than threshold.
        points.append((int(x), int(y)) if conf > args.thr else None)

    for pair in POSE_PAIRS:
        partFrom = pair[0]
        partTo = pair[1]
        assert(partFrom in BODY_PARTS)
        assert(partTo in BODY_PARTS)

        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]

        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)

    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))

    cv.imshow('OpenPose using OpenCV', frame)
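
Based on the argparse flags defined in the script, a typical invocation would look like the following (the image path here is only an example; per the help string, omitting --input captures frames from the camera instead):

python openpose.py --input example.jpg --thr 0.2 --width 368 --height 368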

The detection results are as follows:

  • The two images below show good detection results for a single person with the pose spread out:
  • The two images below show poor detection results for multiple people or unusual poses (e.g., wrong limb connections or missed keypoints); a possible direction for the multi-person failure case is sketched after this list:
  • The processing time displayed in the top-left corner shows that inference is slow: roughly 0.5 s per image.
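
The multi-person failures are consistent with the comment in the script itself: it keeps only the global maximum of each part's heatmap, so it can represent at most one person. A minimal sketch of extracting all local maxima per part instead (local_peaks is a hypothetical helper, not part of the original repo) could look like this:

import cv2 as cv
import numpy as np

# Hypothetical helper: find all local maxima of a single part's heatmap
# instead of only the global one, as the sample script does.
def local_peaks(heatMap, thr=0.2):
    # A pixel that equals the max of its 3x3 neighborhood is a local maximum.
    dilated = cv.dilate(heatMap, np.ones((3, 3), np.uint8))
    mask = (heatMap == dilated) & (heatMap > thr)
    ys, xs = np.nonzero(mask)
    # Return (x, y, confidence) for each peak above the threshold.
    return [(int(x), int(y), float(heatMap[y, x])) for x, y in zip(xs, ys)]

Note that turning such per-part peaks into separate skeletons still requires a grouping step; that is exactly what the Part Affinity Fields do in the more complex version below.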

III. More Complex OpenPose Implementation

  • Source code on GitHub: camera-openpose-keras
  • This code is more complex than the previous version. The pretrained model (larger: 200 MB) can be downloaded directly, but since the original host is hard to reach from inside China, it is also mirrored on Baidu Cloud: model.h5
  • The entire implementation is in demo_camera.py. The code below has been modified slightly so that it can be tested by reading in an image:
import argparse
import cv2
import math
import time
import numpy as np
import util
from config_reader import config_reader
from scipy.ndimage.filters import gaussian_filter
from model import get_testing_model

tic = 0

# find connection in the specified sequence, center 29 is in the position 15
limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10],
           [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17],
           [1, 16], [16, 18], [3, 17], [6, 18]]

# the middle joints heatmap correspondence
mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22],
          [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52],
          [55, 56], [37, 38], [45, 46]]

# visualize
colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0],
          [0, 255, 0], [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255],
          [0, 0, 255], [85, 0, 255], [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]]


def process(input_image, params, model_params):
    oriImg = cv2.imread(input_image)  # B,G,R order
    multiplier = [x * model_params['boxsize'] / oriImg.shape[0] for x in params['scale_search']]

    heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19))
    paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38))

    # for m in range(len(multiplier)):
    for m in range(1):
        scale = multiplier[m]

        imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
        imageToTest_padded, pad = util.padRightDownCorner(imageToTest, model_params['stride'],
                                                          model_params['padValue'])

        input_img = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]),
                                 (3, 0, 1, 2))  # required shape (1, width, height, channels)

        output_blobs = model.predict(input_img)

        # extract outputs, resize, and remove padding
        heatmap = np.squeeze(output_blobs[1])  # output 1 is heatmaps
        heatmap = cv2.resize(heatmap, (0, 0), fx=model_params['stride'], fy=model_params['stride'],
                             interpolation=cv2.INTER_CUBIC)
        heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
        heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)

        paf = np.squeeze(output_blobs[0])  # output 0 is PAFs
        paf = cv2.resize(paf, (0, 0), fx=model_params['stride'], fy=model_params['stride'],
                         interpolation=cv2.INTER_CUBIC)
        paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
        paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)

        heatmap_avg = heatmap_avg + heatmap / len(multiplier)
        paf_avg = paf_avg + paf / len(multiplier)

    all_peaks = []
    peak_counter = 0
    prinfTick(1)

    for part in range(18):
        map_ori = heatmap_avg[:, :, part]
        map = gaussian_filter(map_ori, sigma=3)

        map_left = np.zeros(map.shape)
        map_left[1:, :] = map[:-1, :]
        map_right = np.zeros(map.shape)
        map_right[:-1, :] = map[1:, :]
        map_up = np.zeros(map.shape)
        map_up[:, 1:] = map[:, :-1]
        map_down = np.zeros(map.shape)
        map_down[:, :-1] = map[:, 1:]

        peaks_binary = np.logical_and.reduce((map >= map_left, map >= map_right,
                                              map >= map_up, map >= map_down, map > params['thre1']))
        peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0]))  # note reverse
        peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks]
        id = range(peak_counter, peak_counter + len(peaks))
        peaks_with_score_and_id = [peaks_with_score[i] + (id[i],) for i in range(len(id))]

        all_peaks.append(peaks_with_score_and_id)
        peak_counter += len(peaks)

    connection_all = []
    special_k = []
    mid_num = 10
    prinfTick(2)

    for k in range(len(mapIdx)):
        score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]]
        candA = all_peaks[limbSeq[k][0] - 1]
        candB = all_peaks[limbSeq[k][1] - 1]
        nA = len(candA)
        nB = len(candB)
        indexA, indexB = limbSeq[k]
        if (nA != 0 and nB != 0):
            connection_candidate = []
            for i in range(nA):
                for j in range(nB):
                    vec = np.subtract(candB[j][:2], candA[i][:2])
                    norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1])
                    # failure case when 2 body parts overlaps
                    if norm == 0:
                        continue
                    vec = np.divide(vec, norm)

                    startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num),
                                        np.linspace(candA[i][1], candB[j][1], num=mid_num)))

                    vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0]
                                      for I in range(len(startend))])
                    vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1]
                                      for I in range(len(startend))])

                    score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1])
                    score_with_dist_prior = sum(score_midpts) / len(score_midpts) + \
                                            min(0.5 * oriImg.shape[0] / norm - 1, 0)
                    criterion1 = len(np.nonzero(score_midpts > params['thre2'])[0]) > 0.8 * len(score_midpts)
                    criterion2 = score_with_dist_prior > 0
                    if criterion1 and criterion2:
                        connection_candidate.append([i, j, score_with_dist_prior,
                                                     score_with_dist_prior + candA[i][2] + candB[j][2]])

            connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True)
            connection = np.zeros((0, 5))
            for c in range(len(connection_candidate)):
                i, j, s = connection_candidate[c][0:3]
                if (i not in connection[:, 3] and j not in connection[:, 4]):
                    connection = np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]])
                    if (len(connection) >= min(nA, nB)):
                        break

            connection_all.append(connection)
        else:
            special_k.append(k)
            connection_all.append([])

    # last number in each row is the total parts number of that person
    # the second last number in each row is the score of the overall configuration
    subset = -1 * np.ones((0, 20))
    candidate = np.array([item for sublist in all_peaks for item in sublist])
    prinfTick(3)

    for k in range(len(mapIdx)):
        if k not in special_k:
            partAs = connection_all[k][:, 0]
            partBs = connection_all[k][:, 1]
            indexA, indexB = np.array(limbSeq[k]) - 1

            for i in range(len(connection_all[k])):  # = 1:size(temp,1)
                found = 0
                subset_idx = [-1, -1]
                for j in range(len(subset)):  # 1:size(subset,1):
                    if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]:
                        subset_idx[found] = j
                        found += 1

                if found == 1:
                    j = subset_idx[0]
                    if (subset[j][indexB] != partBs[i]):
                        subset[j][indexB] = partBs[i]
                        subset[j][-1] += 1
                        subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
                elif found == 2:  # if found 2 and disjoint, merge them
                    j1, j2 = subset_idx
                    membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2]
                    if len(np.nonzero(membership == 2)[0]) == 0:  # merge
                        subset[j1][:-2] += (subset[j2][:-2] + 1)
                        subset[j1][-2:] += subset[j2][-2:]
                        subset[j1][-2] += connection_all[k][i][2]
                        subset = np.delete(subset, j2, 0)
                    else:  # as like found == 1
                        subset[j1][indexB] = partBs[i]
                        subset[j1][-1] += 1
                        subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
                # if find no partA in the subset, create a new subset
                elif not found and k < 17:
                    row = -1 * np.ones(20)
                    row[indexA] = partAs[i]
                    row[indexB] = partBs[i]
                    row[-1] = 2
                    row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + \
                              connection_all[k][i][2]
                    subset = np.vstack([subset, row])

    # delete some rows of subset which has few parts occur
    deleteIdx = []
    for i in range(len(subset)):
        if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4:
            deleteIdx.append(i)
    subset = np.delete(subset, deleteIdx, axis=0)

    canvas = cv2.imread(input_image)  # B,G,R order
    for i in range(18):
        for j in range(len(all_peaks[i])):
            cv2.circle(canvas, all_peaks[i][j][0:2], 4, colors[i], thickness=-1)

    stickwidth = 4

    for i in range(17):
        for n in range(len(subset)):
            index = subset[n][np.array(limbSeq[i]) - 1]
            if -1 in index:
                continue
            cur_canvas = canvas.copy()
            Y = candidate[index.astype(int), 0]
            X = candidate[index.astype(int), 1]
            mX = np.mean(X)
            mY = np.mean(Y)
            length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5
            angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1]))
            polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth),
                                       int(angle), 0, 360, 1)
            cv2.fillConvexPoly(cur_canvas, polygon, colors[i])
            canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0)

    return canvas


def prinfTick(i):
    toc = time.time()
    print('processing time%d is %.5f' % (i, toc - tic))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--image', type=str, default='sample_images/ski.jpg', help='input image')
    parser.add_argument('--output', type=str, default='result.png', help='output image')
    parser.add_argument('--model', type=str, default='model/keras/model.h5',
                        help='path to the weights file')

    args = parser.parse_args()
    input_image = args.image
    output = args.output
    keras_weights_file = args.model

    tic = time.time()
    print('start processing...')

    # load model
    # authors of original model don't use
    # vgg normalization (subtracting mean) on input images
    model = get_testing_model()
    model.load_weights(keras_weights_file)

    cap = cv2.VideoCapture(0)
    vi = cap.isOpened()
    if (vi == False):
        time.sleep(2)  # this step is required, otherwise it fails
        # fr = cv2.imread('./sample_images/ski2.jpg', 1)
        tic = time.time()
        # cv2.imwrite(input_image, fr)
        params, model_params = config_reader()
        canvas = process(input_image, params, model_params)
        cv2.imshow("capture", canvas)
        # cv2.waitKey(0)
        # modified so that pressing ESC exits the program
        key = cv2.waitKey(0)
        if key == 27:
            cv2.destroyAllWindows()
    if (vi == True):
        cap.set(3, 160)
        cap.set(4, 120)
        time.sleep(2)  # this step is required, otherwise it fails
        while (1):
            tic = time.time()
            ret, frame = cap.read()
            cv2.imwrite(input_image, frame)
            params, model_params = config_reader()
            # generate image with body parts
            canvas = process(input_image, params, model_params)
            cv2.imshow("capture", canvas)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        cap.release()
        cv2.destroyAllWindows()

The program is run as follows:

python demo_camera.py --image ./sample_images/ski.jpg
  • `--image` is followed by the path to the input image
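
Judging from the argparse definitions in the code above, the script also accepts --output and --model flags, so a fully explicit invocation would look like:

python demo_camera.py --image ./sample_images/ski.jpg --output result.png --model model/keras/model.h5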

The detection results are as follows:

  • First, the two images that the simplified version handled poorly were re-run through this model for comparison.
  • Additional test images show that when the person is clearly visible, the joint keypoints are detected quite accurately; a small design note on the capture loop follows below.
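
One side note on the code above: in the webcam branch, every frame is written to disk with cv2.imwrite and then read back inside process() via cv2.imread. A minimal sketch of avoiding that disk round-trip, assuming process() were refactored into a hypothetical process_frame() that accepts an already-decoded BGR frame, might look like:

# Hypothetical refactor: same body as process(), but the frame is passed in
# directly instead of being re-read from disk with cv2.imread.
def process_frame(oriImg, params, model_params):
    multiplier = [x * model_params['boxsize'] / oriImg.shape[0] for x in params['scale_search']]
    ...  # remainder identical to process(), drawing on canvas = oriImg.copy()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    canvas = process_frame(frame, params, model_params)  # no imwrite/imread round trip
    cv2.imshow("capture", canvas)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break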
