NumPy Integration:
Librealsense frames support the buffer protocol. A NumPy array can be constructed using this protocol with no data marshalling overhead:
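The mechanism can be demonstrated without a camera: any object exposing the buffer protocol can be wrapped this way. A minimal sketch, with a standard-library `array.array` of unsigned 16-bit values standing in for `pyrealsense2.BufData`:

```python
import array

import numpy as np

# A flat buffer of uint16 values ('H' = unsigned short), mimicking raw depth data
buf = array.array('H', [0, 100, 200, 300, 400, 500])

# np.asanyarray consumes the buffer protocol, so no data is copied
np_image = np.asanyarray(buf)
print(np_image.dtype, np_image.shape)  # uint16 (6,)

# The ndarray is a view: writing through NumPy mutates the underlying buffer
np_image[0] = 42
print(buf[0])  # 42
```

Because no copy is made, the ndarray stays valid only as long as the buffer owner does; the same caveat applies to arrays built on top of a librealsense frame.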

Converting depth frame data to a NumPy array for processing:

    import numpy as np

    # get_data(self: frame) -> BufData -- "Retrieve data from the frame handle."
    # as_frame(self: frame) -> frame; depth is already a frame, so arguably
    # .as_frame() makes no difference here and depth.get_data() would do.
    depth_data = depth.as_frame().get_data()
    print('type of depth_data:', type(depth_data))
    # type of depth_data: <class 'pyrealsense2.pyrealsense2.BufData'>
    print(depth_data)
    # <pyrealsense2.pyrealsense2.BufData object at 0x0000024F5D07BA40>

    np_image = np.asanyarray(depth_data)
    print('type of np_image:', type(np_image))
    # type of np_image: <class 'numpy.ndarray'>
    print('shape of np_image:', np_image.shape)
    # (480, 640)
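The raw uint16 values in `np_image` are device depth units, not meters. A hedged sketch of the conversion, using a synthetic frame and an assumed scale of 0.001 m per unit (typical for D4xx cameras; on live hardware, query it via `pipeline_profile.get_device().first_depth_sensor().get_depth_scale()`):

```python
import numpy as np

# Synthetic 480x640 uint16 frame standing in for np.asanyarray(depth.get_data())
np_image = np.full((480, 640), 1500, dtype=np.uint16)

# Assumed scale; on real hardware:
#   depth_scale = pipeline_profile.get_device().first_depth_sensor().get_depth_scale()
depth_scale = 0.001  # meters per raw depth unit (typical for D4xx)

# Convert to float before scaling to avoid integer truncation
depth_m = np_image.astype(np.float32) * depth_scale
print(depth_m.shape)  # (480, 640)
print(round(float(depth_m[0, 0]), 3))  # 1.5
```

This one multiplication replaces a per-pixel `get_distance()` call when whole-frame processing is needed.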

Applied to Intel Realsense D435 python (Python Wrapper) example00: streaming using rs.pipeline, the conversion looks like this:

    # First import the library
    import pyrealsense2 as rs
    import numpy as np

    # Create a pipeline object. This object owns the handles to all connected
    # realsense devices. The caller can provide a context created by the
    # application, usually for playback or testing purposes.
    pipeline = rs.pipeline()

    # start() is overloaded: start(), start(config), start(callback) and
    # start(config, callback). The streaming loop captures samples from the
    # device and delivers them to the attached computer vision modules and
    # processing blocks, according to each module's requirements and threading
    # model. During the loop, the application accesses the camera streams via
    # wait_for_frames() or poll_for_frames(); when started with a callback,
    # both of those raise an exception instead. Starting an already started
    # pipeline raises an exception. When an rs2::config is provided, the
    # pipeline tries to activate the config.resolve() result; it fails if the
    # application's requests conflict with the pipeline's computer vision
    # modules or no matching device is available. Available configurations and
    # devices may change between the resolve() call and start(), as devices
    # connect, disconnect, or are acquired by another application.
    pipeline.start()

    try:
        while True:
            # wait_for_frames(timeout_ms=5000) blocks the calling thread until
            # a new set of time-synchronized frames (one per enabled stream) is
            # available; with different stream frame rates, the set may reuse a
            # matching frame of the slower stream from a previous set. Frames
            # produced while the method is not being called are dropped, so
            # call it at least as fast as the device frame rate. Frame handles
            # may be kept to defer processing, but holding too long a history
            # can leave the device without memory to produce new frames.
            frames = pipeline.wait_for_frames()

            # get_depth_frame(): retrieve the first depth frame; if no frame
            # is found, return an empty frame instance.
            depth = frames.get_depth_frame()
            print(type(frames))  # <class 'pyrealsense2.pyrealsense2.composite_frame'>
            print(type(depth))   # <class 'pyrealsense2.pyrealsense2.depth_frame'>

            depth_data = depth.as_frame().get_data()
            print('type of depth_data:', type(depth_data))
            # type of depth_data: <class 'pyrealsense2.pyrealsense2.BufData'>
            np_image = np.asanyarray(depth_data)
            print('type of np_image:', type(np_image))
            # type of np_image: <class 'numpy.ndarray'>
            print('shape of np_image:', np_image.shape)  # (480, 640)

            # If no depth frame was received, skip to the next iteration.
            # An empty frame is falsy: `not depth` is True when depth is empty.
            if not depth:
                continue

            # Print a simple text-based representation of the image, by
            # breaking it into 10x20 pixel regions and approximating the
            # coverage of pixels within one meter.
            coverage = [0] * 64
            for y in range(480):
                for x in range(640):
                    # get_distance(x, y): the depth in meters at the given pixel
                    dist = depth.get_distance(x, y)
                    # Count pixels within 1 m; every 10 pixels of x collapse
                    # into one of the 64 bins (coverage[0] covers x = 0..9).
                    if 0 < dist and dist < 1:
                        coverage[x // 10] += 1
                # Every 20 rows (480 / 20 = 24 text lines), emit one line.
                # (The original used `y % 20 is 19`, which relies on int
                # identity; the correct comparison is ==.)
                if y % 20 == 19:
                    line = ""
                    for c in coverage:
                        # A bin spans 10*20 = 200 pixels, so c // 25 is at
                        # most 8; denser characters mean more in-range pixels.
                        line += " .:nhBXWW"[c // 25]
                    # Reset the bins for the next band of rows
                    coverage = [0] * 64
                    print(line)
    finally:
        pipeline.stop()
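The pixel-by-pixel `get_distance()` double loop above is very slow in Python. Once the frame is a NumPy array of depths in meters, the same 24-line, 64-column text rendering can be computed with a few vectorized operations. A sketch on a synthetic image (on a live camera, `depth_m` would be the frame converted with the device depth scale):

```python
import numpy as np

ROWS, COLS = 480, 640
BLOCK_H, BLOCK_W = 20, 10  # same 10x20 regions as the loop version

# Synthetic depth image in meters: left half at 0.5 m (in range), right half at 2 m
depth_m = np.full((ROWS, COLS), 2.0, dtype=np.float32)
depth_m[:, :COLS // 2] = 0.5

# Boolean mask of pixels strictly between 0 and 1 meter
near = (depth_m > 0) & (depth_m < 1)

# Sum the mask over each 20x10 block: reshape to (24, 20, 64, 10), reduce axes 1 and 3
coverage = near.reshape(ROWS // BLOCK_H, BLOCK_H,
                        COLS // BLOCK_W, BLOCK_W).sum(axis=(1, 3))

# Map each block count (0..200) to the same 9-character density ramp
ramp = np.array(list(" .:nhBXWW"))
text = ["".join(row) for row in ramp[coverage // 25]]
print(len(text), len(text[0]))  # 24 64
print(text[0])  # left half solid 'W', right half blank
```

The reshape trick works because each output row of 20 image rows and each output column of 10 image columns are contiguous blocks; summing the boolean mask over the two inner axes gives exactly the per-bin counts the loop accumulates.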
