This project calls the ZED camera directly to perform 3D distance measurement, with no calibration required. Related content:
1. yolov5 + stereo distance measurement (calibration-based ranging)
2. yolov5 calling the ZED camera directly for 3D distance measurement
3. A demo of the results has been published on Bilibili; click this link to view it

Environment
python==3.7
Windows
PyCharm
ZED API (see the ZED API setup steps; a quick install sanity check is sketched below)
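
A minimal sanity check for the ZED Python API, sketched under this article's setup (this snippet is not part of the original project): it simply opens the camera and prints its resolution, using only calls that also appear in the code below.

    # Hedged sketch: verify that pyzed is installed and a ZED camera (id 0) can be opened.
    import pyzed.sl as sl

    zed = sl.Camera()
    init = sl.InitParameters()
    init.camera_resolution = sl.RESOLUTION.HD720
    err = zed.open(init)
    if err != sl.ERROR_CODE.SUCCESS:
        print("ZED open failed:", repr(err))
    else:
        res = zed.get_camera_information().camera_resolution
        print("ZED opened, left image size:", res.width, "x", res.height)
        zed.close()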

1. Load the model

    config_path = "yolov4-tiny.cfg"weight_path = "yolov4-tiny.weights"meta_path = "coco.names"svo_path = Nonezed_id = 0help_str = 'zed_yolo.py -c <config> -w <weight> -m <meta> -s <svo_file> -z <zed_id>'try:opts, args = getopt.getopt(argv, "hc:w:m:s:z:", ["config=", "weight=", "meta=", "svo_file=", "zed_id="])except getopt.GetoptError:log.exception(help_str)sys.exit(2)for opt, arg in opts:if opt == '-h':log.info(help_str)sys.exit()elif opt in ("-c", "--config"):config_path = argelif opt in ("-w", "--weight"):weight_path = argelif opt in ("-m", "--meta"):meta_path = argelif opt in ("-s", "--svo_file"):svo_path = argelif opt in ("-z", "--zed_id"):zed_id = int(arg)weightsPath_tiny = weight_pathconfigPath_tiny = config_pathnet = cv2.dnn.readNet(weightsPath_tiny, configPath_tiny)net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)model = cv2.dnn_DetectionModel(net)

2. Open and configure the ZED camera

    zed = sl.Camera()

    # Set configuration parameters
    input_type = sl.InputType()
    init = sl.InitParameters(input_t=input_type)
    init.camera_resolution = sl.RESOLUTION.HD720
    init.depth_mode = sl.DEPTH_MODE.PERFORMANCE
    init.coordinate_units = sl.UNIT.MILLIMETER

    # Open the camera
    err = zed.open(init)
    if err != sl.ERROR_CODE.SUCCESS:
        print(repr(err))
        zed.close()
        exit(1)

    # Set runtime parameters after opening the camera
    runtime = sl.RuntimeParameters()
    runtime.sensing_mode = sl.SENSING_MODE.STANDARD

    # Prepare new image size to retrieve half-resolution images
    image_size = zed.get_camera_information().camera_resolution
    image_size.width = image_size.width
    image_size.height = image_size.height

    # Declare your sl.Mat matrices
    image_zed = sl.Mat(image_size.width, image_size.height, sl.MAT_TYPE.U8_C4)
    disparity = sl.Mat()  # disparity map
    dep = sl.Mat()  # depth view
    depth_image_zed = sl.Mat(image_size.width, image_size.height, sl.MAT_TYPE.U8_C4)
    point_cloud = sl.Mat()
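
Note that although the comment mentions half-resolution retrieval, the code above leaves image_size unchanged, so frames are retrieved at the full HD720 resolution. If half-resolution retrieval is actually wanted, a minimal sketch (an assumption based on the official ZED depth-sensing sample, not part of the original script) would be:

    # Sketch: halve the retrieval size so retrieve_image/retrieve_measure return smaller buffers.
    image_size = zed.get_camera_information().camera_resolution
    image_size.width = image_size.width // 2
    image_size.height = image_size.height // 2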

3. Image processing

    def YOLOv4_video(pred_image):
        model.setInputParams(size=(416, 416), scale=1 / 255, swapRB=True)
        image_test = cv2.cvtColor(pred_image, cv2.COLOR_RGBA2RGB)
        image = image_test.copy()
        print('image', image.shape)
        confThreshold = 0.5
        nmsThreshold = 0.4
        classes, confidences, boxes = model.detect(image, confThreshold, nmsThreshold)
        return classes, confidences, boxes

    while exit_flag:
        err = zed.grab(runtime)
        if err == sl.ERROR_CODE.SUCCESS:
            i = 0
            # Retrieve the left image and depth image
            zed.retrieve_image(image_zed, sl.VIEW.LEFT, sl.MEM.CPU, image_size)
            zed.retrieve_image(depth_image_zed, sl.VIEW.DEPTH, sl.MEM.CPU, image_size)
            # Retrieve the disparity map
            zed.retrieve_measure(disparity, sl.MEASURE.DISPARITY, sl.MEM.CPU)
            dis_map = disparity.get_data()
            zed.retrieve_image(dep, sl.VIEW.DEPTH)  # depth view
            depth_map = depth_image_zed.get_data()
            dep_map = dep.get_data()
            # Retrieve the RGBA point cloud
            zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA, sl.MEM.CPU, image_size)
            point_map = point_cloud.get_data()
            # Get and print the distance value in mm at the center of each detected box
            # We measure the camera-object distance using the Euclidean distance
            # To recover data from sl.Mat for use with OpenCV, use the get_data() method
            # It returns a numpy array that can be used as a matrix with OpenCV
            image_ocv = image_zed.get_data()
            # depth_image_ocv = depth_image_zed.get_data()
            view = np.concatenate((cv2.resize(image_ocv, (640, 360)), cv2.resize(dep_map, (640, 360))), axis=1)
            cv2.imshow("View", view)
            key = cv2.waitKey(1)
            if key & 0xFF == 27:  # press ESC to quit
                break
            if key & 0xFF == ord('s'):  # save the current view
                savePath = os.path.join("./images", "V{:0>3d}.png".format(i))  # make sure "./images" exists
                cv2.imwrite(savePath, view)
                i = i + 1
            classes, confidences, boxes = YOLOv4_video(image_ocv)
            for cl, score, (left, top, width, height) in zip(classes, confidences, boxes):
                start_point = (int(left), int(top))
                end_point = (int(left + width), int(top + height))
                x = int(left + width / 2)
                y = int(top + height / 2)
                color = COLORS[0]
                img = cv2.rectangle(image_ocv, start_point, end_point, color, 3)
                img = cv2.circle(img, (x, y), 5, [0, 0, 255], 5)
                text = f'{LABELS[cl]}: {score:0.2f}'
                cv2.putText(img, text, (int(left), int(top - 7)), cv2.FONT_ITALIC, 1, COLORS[0], 2)
                x = round(x)
                y = round(y)
                err, point_cloud_value = point_cloud.get_value(x, y)
                distance = math.sqrt(point_cloud_value[0] * point_cloud_value[0] +
                                     point_cloud_value[1] * point_cloud_value[1] +
                                     point_cloud_value[2] * point_cloud_value[2])
                print("Distance to Camera at (class : {0}, score : {1:0.2f}): distance : {2:0.2f} mm".format(
                    LABELS[cl], score, distance), end="\r")
                cv2.putText(img, "Distance: " + str(round(distance / 1000, 2)) + 'm',
                            (int(left), int(top + 25)), cv2.FONT_HERSHEY_COMPLEX, 1, COLORS[1], 2)
                cv2.imshow("Image", img)
                key = cv2.waitKey(2)
                frame_count = frame_count + 1
                if key & 0xFF == 27:  # press ESC to quit
                    break
                if key & 0xFF == ord('s'):  # save the current view
                    savePath = os.path.join("./images", "V{:0>3d}.png".format(i))  # make sure "./images" exists
                    cv2.imwrite(savePath, view)
                    i = i + 1

    cv2.destroyAllWindows()
    zed.close()
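
The per-object distance is simply the Euclidean norm of the XYZ point-cloud value at the box centre. As a hedged sketch (the helper name is illustrative and not part of the original script; point_cloud is the sl.Mat filled by retrieve_measure above), the lookup can be wrapped so that invalid depth values (NaN/inf at occluded or textureless pixels) are rejected instead of being displayed:

    import math

    import pyzed.sl as sl

    def point_distance_mm(point_cloud, x, y):
        """Euclidean distance from the camera to the point at pixel (x, y), in the units set by init.coordinate_units."""
        err, p = point_cloud.get_value(x, y)  # p = [X, Y, Z, RGBA]
        if err != sl.ERROR_CODE.SUCCESS:
            return None
        d = math.sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2])
        return d if math.isfinite(d) else None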

Full code

import os
import sys
import numpy as np
import pyzed.sl as sl
import cv2
import math
import logging
import getopt

log = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def main(argv):
    global img
    config_path = "yolov4-tiny.cfg"
    weight_path = "yolov4-tiny.weights"
    meta_path = "coco.names"
    svo_path = None
    zed_id = 0
    help_str = 'zed_yolo.py -c <config> -w <weight> -m <meta> -s <svo_file> -z <zed_id>'
    try:
        opts, args = getopt.getopt(argv, "hc:w:m:s:z:", ["config=", "weight=", "meta=", "svo_file=", "zed_id="])
    except getopt.GetoptError:
        log.exception(help_str)
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-h':
            log.info(help_str)
            sys.exit()
        elif opt in ("-c", "--config"):
            config_path = arg
        elif opt in ("-w", "--weight"):
            weight_path = arg
        elif opt in ("-m", "--meta"):
            meta_path = arg
        elif opt in ("-s", "--svo_file"):
            svo_path = arg
        elif opt in ("-z", "--zed_id"):
            zed_id = int(arg)

    # Set configuration parameters
    input_type = sl.InputType()
    if svo_path is not None:
        log.info("SVO file : " + svo_path)
        input_type.set_from_svo_file(svo_path)
    else:
        # Launch camera by id
        input_type.set_from_camera_id(zed_id)

    # Create a ZED camera object
    zed = sl.Camera()
    init = sl.InitParameters(input_t=input_type)
    init.camera_resolution = sl.RESOLUTION.HD720
    init.depth_mode = sl.DEPTH_MODE.PERFORMANCE
    init.coordinate_units = sl.UNIT.MILLIMETER

    # Open the camera
    err = zed.open(init)
    if err != sl.ERROR_CODE.SUCCESS:
        print(repr(err))
        zed.close()
        exit(1)

    # Set runtime parameters after opening the camera
    runtime = sl.RuntimeParameters()
    runtime.sensing_mode = sl.SENSING_MODE.STANDARD

    # Prepare new image size to retrieve half-resolution images
    image_size = zed.get_camera_information().camera_resolution
    image_size.width = image_size.width
    image_size.height = image_size.height

    # Declare your sl.Mat matrices
    image_zed = sl.Mat(image_size.width, image_size.height, sl.MAT_TYPE.U8_C4)
    disparity = sl.Mat()  # disparity map
    dep = sl.Mat()  # depth view
    depth_image_zed = sl.Mat(image_size.width, image_size.height, sl.MAT_TYPE.U8_C4)
    point_cloud = sl.Mat()

    # ===================================  yolov4 video test  ===================================
    weightsPath_tiny = weight_path
    configPath_tiny = config_path
    net = cv2.dnn.readNet(weightsPath_tiny, configPath_tiny)
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)
    model = cv2.dnn_DetectionModel(net)

    def YOLOv4_video(pred_image):
        model.setInputParams(size=(416, 416), scale=1 / 255, swapRB=True)
        image_test = cv2.cvtColor(pred_image, cv2.COLOR_RGBA2RGB)
        image = image_test.copy()
        print('image', image.shape)
        confThreshold = 0.5
        nmsThreshold = 0.4
        classes, confidences, boxes = model.detect(image, confThreshold, nmsThreshold)
        return classes, confidences, boxes

    LABELS = []
    with open(meta_path, 'r') as f:
        LABELS = [cname.strip() for cname in f.readlines()]
    COLORS = [[0, 0, 255], [30, 255, 255], [0, 255, 0]]
    frame_count = 0
    exit_flag = True

    while exit_flag:
        err = zed.grab(runtime)
        if err == sl.ERROR_CODE.SUCCESS:
            i = 0
            # Retrieve the left image and depth image
            zed.retrieve_image(image_zed, sl.VIEW.LEFT, sl.MEM.CPU, image_size)
            zed.retrieve_image(depth_image_zed, sl.VIEW.DEPTH, sl.MEM.CPU, image_size)
            # Retrieve the disparity map
            zed.retrieve_measure(disparity, sl.MEASURE.DISPARITY, sl.MEM.CPU)
            dis_map = disparity.get_data()
            zed.retrieve_image(dep, sl.VIEW.DEPTH)  # depth view
            depth_map = depth_image_zed.get_data()
            dep_map = dep.get_data()
            # Retrieve the RGBA point cloud
            zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA, sl.MEM.CPU, image_size)
            point_map = point_cloud.get_data()
            image_ocv = image_zed.get_data()
            # depth_image_ocv = depth_image_zed.get_data()
            view = np.concatenate((cv2.resize(image_ocv, (640, 360)), cv2.resize(dep_map, (640, 360))), axis=1)
            cv2.imshow("View", view)
            key = cv2.waitKey(1)
            if key & 0xFF == 27:  # press ESC to quit
                break
            if key & 0xFF == ord('s'):  # save the current view
                savePath = os.path.join("./images", "V{:0>3d}.png".format(i))  # make sure "./images" exists
                cv2.imwrite(savePath, view)
                i = i + 1
            classes, confidences, boxes = YOLOv4_video(image_ocv)
            for cl, score, (left, top, width, height) in zip(classes, confidences, boxes):
                start_point = (int(left), int(top))
                end_point = (int(left + width), int(top + height))
                x = int(left + width / 2)
                y = int(top + height / 2)
                color = COLORS[0]
                img = cv2.rectangle(image_ocv, start_point, end_point, color, 3)
                img = cv2.circle(img, (x, y), 5, [0, 0, 255], 5)
                text = f'{LABELS[cl]}: {score:0.2f}'
                cv2.putText(img, text, (int(left), int(top - 7)), cv2.FONT_ITALIC, 1, COLORS[0], 2)
                x = round(x)
                y = round(y)
                err, point_cloud_value = point_cloud.get_value(x, y)
                distance = math.sqrt(point_cloud_value[0] * point_cloud_value[0] +
                                     point_cloud_value[1] * point_cloud_value[1] +
                                     point_cloud_value[2] * point_cloud_value[2])
                print("Distance to Camera at (class : {0}, score : {1:0.2f}): distance : {2:0.2f} mm".format(
                    LABELS[cl], score, distance), end="\r")
                cv2.putText(img, "Distance: " + str(round(distance / 1000, 2)) + 'm',
                            (int(left), int(top + 25)), cv2.FONT_HERSHEY_COMPLEX, 1, COLORS[1], 2)
                cv2.imshow("Image", img)
                key = cv2.waitKey(2)
                frame_count = frame_count + 1
                if key & 0xFF == 27:  # press ESC to quit
                    break
                if key & 0xFF == ord('s'):  # save the current view
                    savePath = os.path.join("./images", "V{:0>3d}.png".format(i))  # make sure "./images" exists
                    cv2.imwrite(savePath, view)
                    i = i + 1

    cv2.destroyAllWindows()
    zed.close()
    print("\nFINISH")


if __name__ == "__main__":
    main(sys.argv[1:])
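
Assuming the script is saved as zed_yolo.py next to the yolov4-tiny weights, config, and coco.names files (these paths are the defaults set by the argument parser above), it can be run as, for example, `python zed_yolo.py -c yolov4-tiny.cfg -w yolov4-tiny.weights -m coco.names -z 0`, or with `-s <file.svo>` to replay a recorded SVO file instead of a live camera.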

The ranging results are quite accurate: the chessboard calibration step is skipped and the ZED's built-in factory parameters are used directly. This can be seen in the depth map below, and the calibrated and uncalibrated results can be compared.

Depth view (the left-camera image and the depth map are concatenated side by side)
