Nvidia Deepstream Tips Series: Saving the Pipeline Graph in Deepstream Python

Note: this section focuses on how to save the pipeline graph while a Deepstream Python application is running. Version: Deepstream 6.0.


Table of Contents

  • Nvidia Deepstream Tips Series: Saving the Pipeline Graph in Deepstream Python
    • Steps
    • Code (optional, can be skipped)
    • References

Steps

First, install the dependency: sudo apt-get install graphviz. Also make sure the Python script you want to inspect already runs on its own.

In the import section of the script you are going to run, add:

import os
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp"
os.putenv('GST_DEBUG_DUMP_DOT_DIR', '/tmp')
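
As a side note, GStreamer reads GST_DEBUG_DUMP_DOT_DIR during initialization and does not create the directory for you, so the variable has to be in the environment before Gst.init() is called (which is why setting it at the top of the script works). If you would rather collect the dumps in a dedicated folder instead of /tmp, a minimal variation could look like this (the dump_dir path is just an example):

import os

dump_dir = os.path.expanduser("~/pipeline_dumps")  # example location, any writable folder works
os.makedirs(dump_dir, exist_ok=True)               # GStreamer will not create it for us
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = dump_dir    # must be set before Gst.init()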

Finally, call Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline") shortly before the main loop is started, as shown below:

osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline")

# start play back and listen to events
print("Starting pipeline \n")
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except:
    pass
# cleanup
pipeline.set_state(Gst.State.NULL)
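
Note that Gst.debug_bin_to_dot_file() snapshots the pipeline as it is at the moment of the call, so placing it before pipeline.set_state(Gst.State.PLAYING) shows the topology but not yet the negotiated caps. If you also want a graph of the running pipeline, one option (not part of the original post) is to schedule a second dump a few seconds after playback starts. A minimal sketch, assuming it is used inside main() next to the first dump; the 5-second delay and the "pipeline-playing" file name are arbitrary choices:

from gi.repository import GLib

def schedule_playing_dump(pipeline, delay_s=5, name="pipeline-playing"):
    # One-shot timer: dump the graph again once the pipeline has (hopefully)
    # reached PLAYING and finished caps negotiation.
    def _dump():
        Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, name)
        return False  # returning False removes the timeout after one run
    GLib.timeout_add_seconds(delay_s, _dump)

# usage, right before loop.run():
# schedule_playing_dump(pipeline)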

Once the script is up and running, a pipeline.dot file will appear under /tmp.

Last of all, convert the .dot file into a .png with dot -Tpng pipeline.dot > pipeline.png. Open the image and you will see the full pipeline graph.
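
If you would rather do the conversion from Python instead of the shell, here is a small sketch using subprocess (it assumes the graphviz dot binary installed earlier is on the PATH):

import glob
import subprocess

# Convert every dumped graph under /tmp into a PNG next to its .dot file.
for dot_file in glob.glob("/tmp/*.dot"):
    png_file = dot_file[:-4] + ".png"
    subprocess.run(["dot", "-Tpng", dot_file, "-o", png_file], check=True)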

Code (optional, can be skipped)

Here is the full Python source; it is simply deepstream_test_1.py with a few extra lines added:

#!/usr/bin/env python3
import sys
sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds
import os
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp"
os.putenv('GST_DEBUG_DUMP_DOT_DIR', '/tmp')

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3

def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Intiallizing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try:
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

    # create an event loop and feed gstreamer bus mesages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline")

    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

References

  • Reference 1: Saving the Deepstream pipeline as an image (deepstream保存pipeline图片)
  • Reference 2: Python DeepStream program not generating dot file
