Based on the DJI-SDK from GitHub

The following method adds a new pose check, is_arms_V_down, to the Tello pose-recognition module:
```python
def is_arms_V_down(self, points):
    """
    Determine if the person holds at least one arm in the new "V down" pose,
    like:
        |\/|
         \/
         \
    :param points: set of body key points
    :return: if the person detected holds an arm in the V-down pose
    """
    right = False
    if points[2] and points[3] and points[4]:
        # calculate the shoulder angle
        shoulder_angle = self.getAngle(points[2], points[3])
        if -60 < shoulder_angle < -20:
            elbow_angle = self.getAngle(points[3], points[4])
            # if arm is straight
            if abs(elbow_angle - shoulder_angle) < 25:
                right = True

    left = False
    if points[5] and points[6] and points[7]:
        shoulder_angle = self.getAngle(points[5], points[6])
        # correct the dimension
        if shoulder_angle < 0:
            shoulder_angle = shoulder_angle + 360
        if 200 < shoulder_angle < 240:
            elbow_angle = self.getAngle(points[6], points[7])
            if 90 < elbow_angle < 180:
                left = True

    # If at least one arm meets the requirements, it is considered a successful capture
    if left or right:
        return True
    else:
        return False
```
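The angle thresholds above follow the convention of the getAngle helper in tello_pose.py, which returns the integer-degree angle of the vector from the second point back to the first, computed with atan2 in image coordinates (x to the right, y downward). Below is a minimal standalone sketch of that convention; get_angle mirrors Tello_Pose.getAngle, and the keypoint coordinates are made up purely for illustration:

```python
# Illustrative sketch of the angle convention used by the pose checks.
# get_angle() mirrors Tello_Pose.getAngle; the keypoint coordinates are made up.
import math

def get_angle(start, end):
    # angle (integer degrees) of the vector end -> start, in image
    # coordinates where x grows to the right and y grows downward
    return int(math.atan2(start[1] - end[1], start[0] - end[0]) * 180 / math.pi)

# hypothetical pixel coordinates: shoulder above the elbow, wrist continuing
# along the same straight line, i.e. an upper arm pointing down at about 45 degrees
right_shoulder = (200, 100)
right_elbow = (170, 130)
right_wrist = (140, 160)

shoulder_angle = get_angle(right_shoulder, right_elbow)   # -45
elbow_angle = get_angle(right_elbow, right_wrist)         # -45

# the right-arm branch of is_arms_V_down accepts this configuration
print(-60 < shoulder_angle < -20 and abs(elbow_angle - shoulder_angle) < 25)  # True
```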

The new tello_pose.py:

```python
import cv2
import time
import math
import numpy as np
import datetime
import os

'''
Correspondence between the number of the skeleton node and
the human body:
Head - 0, Neck - 1, Right Shoulder - 2, Right Elbow - 3, Right Wrist - 4,
Left Shoulder - 5, Left Elbow - 6, Left Wrist - 7, Right Hip - 8,
Right Knee - 9, Right Ankle - 10, Left Hip - 11, Left Knee - 12,
Left Ankle - 13, Chest - 14, Background - 15
'''


class Tello_Pose:
    def __init__(self):
        # read the path of the trained model of the neural network for pose recognition
        self.protoFile = "model/pose/mpi/pose_deploy_linevec_faster_4_stages.prototxt"
        self.weightsFile = "model/pose/mpi/pose_iter_160000.caffemodel"
        # total number of the skeleton nodes
        self.nPoints = 15
        # read the neural network of the pose recognition
        self.net = cv2.dnn.readNetFromCaffe(self.protoFile, self.weightsFile)

        # count the number of frames, and after every certain number of frames
        # is read, frame_cnt will be cleared and recounted.
        self.frame_cnt = 0
        self.arm_down_45_cnt = 0  # count numbers of the arm_down_45 captured in every certain number of frames
        self.arm_flat_cnt = 0     # count numbers of the arm_flat captured in every certain number of frames
        self.arm_V_cnt = 0        # count numbers of the arm_V captured in every certain number of frames
        self.arm_V_down_cnt = 0   # count numbers of the new arm_V_down pose captured in every certain number of frames

        # the period of pose recognition, it depends on your computer performance
        self.period = 0
        # record how many times the period of pose recognition is calculated.
        self.period_calculate_cnt = 0

    def getAngle(self, start, end):
        """
        Calculate the angle between start and end
        :param start: start point [x, y]
        :param end: end point [x, y]
        :return: the clockwise angle from start to end
        """
        angle = int(math.atan2((start[1] - end[1]), (start[0] - end[0])) * 180 / math.pi)
        return angle

    def is_arms_V_down(self, points):  # new pose
        """
        Determine if the person holds at least one arm in the new "V down" pose,
        like:
            |\/|
             \/
             \
        :param points: set of body key points
        :return: if the person detected holds an arm in the V-down pose
        """
        right = False
        if points[2] and points[3] and points[4]:
            # calculate the shoulder angle
            shoulder_angle = self.getAngle(points[2], points[3])
            if -60 < shoulder_angle < -20:
                elbow_angle = self.getAngle(points[3], points[4])
                # if arm is straight
                if abs(elbow_angle - shoulder_angle) < 25:
                    right = True

        left = False
        if points[5] and points[6] and points[7]:
            shoulder_angle = self.getAngle(points[5], points[6])
            # correct the dimension
            if shoulder_angle < 0:
                shoulder_angle = shoulder_angle + 360
            if 200 < shoulder_angle < 240:
                elbow_angle = self.getAngle(points[6], points[7])
                if 90 < elbow_angle < 180:
                    left = True

        # If at least one arm meets the requirements, it is considered a successful capture
        if left or right:
            return True
        else:
            return False

    def is_arms_down_45(self, points):
        """
        Determine if the person is holding the arms
        like:
             |
            /|\
            / \
        :param points: set of body key points
        :return: if the person detected moves both of his arms down for about 45 degrees
        """
        right = False
        if points[2] and points[3] and points[4]:
            # calculate the shoulder angle
            shoulder_angle = self.getAngle(points[2], points[3])
            if -60 < shoulder_angle < -20:
                elbow_angle = self.getAngle(points[3], points[4])
                # if arm is straight
                if abs(elbow_angle - shoulder_angle) < 25:
                    right = True

        left = False
        if points[5] and points[6] and points[7]:
            shoulder_angle = self.getAngle(points[5], points[6])
            # correct the dimension
            if shoulder_angle < 0:
                shoulder_angle = shoulder_angle + 360
            if 200 < shoulder_angle < 240:
                elbow_angle = self.getAngle(points[6], points[7])
                if elbow_angle < 0:
                    elbow_angle = elbow_angle + 360
                # if arm is straight
                if abs(elbow_angle - shoulder_angle) < 25:
                    left = True

        # If at least one arm meets the requirements, it is considered a successful capture
        if left or right:
            return True
        else:
            return False

    def is_arms_flat(self, points):
        """
        Determine if the person moves his arms flat
        like:
           _ _|_ _
              |
             / \
        :param points: set of body key points
        :return: if the person detected moves both of his arms flat
        """
        right = False
        if points[2] and points[3] and points[4]:
            shoulder_angle = self.getAngle(points[2], points[3])
            # if arm is flat
            if -10 < shoulder_angle < 40:
                elbow_angle = self.getAngle(points[3], points[4])
                # if arm is straight
                if abs(elbow_angle - shoulder_angle) < 30:
                    right = True

        left = False
        if points[5] and points[6] and points[7]:
            shoulder_angle = self.getAngle(points[5], points[6])
            # correct the dimension
            if shoulder_angle < 0:
                shoulder_angle = shoulder_angle + 360
            # if arm is flat
            if 140 < shoulder_angle < 190:
                elbow_angle = self.getAngle(points[6], points[7])
                if elbow_angle < 0:
                    elbow_angle = elbow_angle + 360
                # if arm is straight
                if abs(elbow_angle - shoulder_angle) < 30:
                    left = True

        # If at least one arm meets the requirements, it is considered a successful capture
        if left or right:
            return True
        else:
            return False

    def is_arms_V(self, points):
        """
        Determine if the person has his/her shoulder and elbow to a certain degree
        like:
            |\/|
             \/
            / \
        :param points: set of body key points
        """
        right = False
        if points[2] and points[3] and points[4]:
            shoulder_angle = self.getAngle(points[2], points[3])
            if -60 < shoulder_angle < -20:
                elbow_angle = self.getAngle(points[3], points[4])
                if 0 < elbow_angle < 90:
                    right = True

        left = False
        if points[5] and points[6] and points[7]:
            shoulder_angle = self.getAngle(points[5], points[6])
            # correct the dimension
            if shoulder_angle < 0:
                shoulder_angle = shoulder_angle + 360
            if 200 < shoulder_angle < 240:
                elbow_angle = self.getAngle(points[6], points[7])
                if 90 < elbow_angle < 180:
                    left = True

        # If at least one arm meets the requirements, it is considered a successful capture
        if left or right:
            return True
        else:
            return False

    def detect(self, frame):
        """
        Main operation to recognize body pose using a trained model
        :param frame: raw h264 decoded frame
        :return:
            draw_skeleton_flag: flag that indicates whether the skeleton is
                detected and whether it should be drawn on the picture
            cmd: the command to be received by Tello
            points: the coordinates of the skeleton nodes
        """
        frameWidth = frame.shape[1]
        frameHeight = frame.shape[0]
        frame_cnt_threshold = 0
        prob_threshold = 0.05
        pose_captured_threshold = 0
        draw_skeleton_flag = False

        # input image dimensions for the network
        # IMPORTANT:
        # Greater inWidth and inHeight will result in higher accuracy but longer process time
        # Smaller inWidth and inHeight will result in lower accuracy but shorter process time
        # Play around it by yourself to get best result!
        inWidth = 168
        inHeight = 168
        inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight),
                                        (0, 0, 0), swapRB=False, crop=False)
        self.net.setInput(inpBlob)

        # get the output of the neural network and calculate the period of the process
        period_starttime = time.time()
        output = self.net.forward()
        period_endtime = time.time()

        # calculate the period of pose recognition for 6 times, and get the average value
        if self.period_calculate_cnt <= 5:
            self.period = self.period + period_endtime - period_starttime
            self.period_calculate_cnt = self.period_calculate_cnt + 1
            if self.period_calculate_cnt == 6:
                self.period = self.period / 6

        H = output.shape[2]
        W = output.shape[3]
        # Empty list to store the detected keypoints
        points = []
        for i in range(self.nPoints):
            # confidence map of corresponding body's part.
            probMap = output[0, i, :, :]
            # Find global maxima of the probMap.
            minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)
            # Scale the point to fit on the original image
            x = (frameWidth * point[0]) / W
            y = (frameHeight * point[1]) / H
            if prob > prob_threshold:
                points.append((int(x), int(y)))
                draw_skeleton_flag = True
            else:
                points.append(None)
                draw_skeleton_flag = False

        # check the captured pose
        if self.is_arms_down_45(points):
            self.arm_down_45_cnt += 1
            print "%d:arm down captured" % self.frame_cnt
        if self.is_arms_flat(points):
            self.arm_flat_cnt += 1
            print "%d:arm up captured" % self.frame_cnt
        if self.is_arms_V(points):
            self.arm_V_cnt += 1
            print '%d:arm V captured' % self.frame_cnt
        if self.is_arms_V_down(points):  # new pose
            self.arm_V_down_cnt += 1
            print '%d:arm V & down captured' % self.frame_cnt

        self.frame_cnt += 1

        # set the frame_cnt_threshold and pose_captured_threshold according to
        # the period of the pose recognition
        cmd = ''
        if self.period < 0.3:
            frame_cnt_threshold = 5
            pose_captured_threshold = 4
        elif self.period >= 0.3 and self.period < 0.6:
            frame_cnt_threshold = 4
            pose_captured_threshold = 3
        elif self.period >= 0.6:
            frame_cnt_threshold = 2
            pose_captured_threshold = 2

        # check whether pose control commands are generated once for
        # certain times of pose recognition
        if self.frame_cnt == frame_cnt_threshold:
            if self.arm_down_45_cnt >= pose_captured_threshold:
                print '!!!arm up,move back!!!'
                cmd = 'moveback'
            if self.arm_flat_cnt >= pose_captured_threshold:
                print '!!!arm down,moveforward!!!'
                cmd = 'moveforward'
            if self.arm_V_cnt == self.frame_cnt:
                print '!!!arm V,land!!!'
                cmd = 'land'
            self.frame_cnt = 0
            self.arm_down_45_cnt = 0
            self.arm_flat_cnt = 0
            self.arm_V_cnt = 0
            self.arm_V_down_cnt = 0  # reset the counter for the new pose

        return cmd, draw_skeleton_flag, points
```
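For context, here is a rough sketch of how the detector might be driven frame by frame. It is only an illustration under assumptions: the real DJI sample decodes the Tello's h264 video stream with its own decoder, whereas this sketch grabs frames from cv2.VideoCapture, and send_to_tello() is a hypothetical placeholder rather than an SDK call; the caffe model files under model/pose/mpi/ must be present for Tello_Pose() to load.

```python
# Minimal driver sketch (assumptions: frames come from cv2.VideoCapture instead of
# the Tello h264 stream used by the DJI sample, and send_to_tello() is a made-up
# placeholder; the caffe model files under model/pose/mpi/ must be present).
import cv2
from tello_pose import Tello_Pose

def send_to_tello(cmd):
    # placeholder: forward the command string to whatever code controls the drone
    print('would send: ' + cmd)

pose_detector = Tello_Pose()
cap = cv2.VideoCapture(0)   # stand-in video source for testing the detector

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cmd, draw_skeleton_flag, points = pose_detector.detect(frame)
    if cmd:
        send_to_tello(cmd)  # 'moveback', 'moveforward' or 'land'
    cv2.imshow('tello pose', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

Note that detect() above increments arm_V_down_cnt for the new pose, but the command-generation block never reads it, so the V-down pose does not yet produce a command. To wire it up, the `if self.frame_cnt == frame_cnt_threshold:` block would need one more branch along the lines of `if self.arm_V_down_cnt >= pose_captured_threshold: cmd = 'movedown'`, where `'movedown'` is only an example name that the calling code must be prepared to handle.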
