Nvidia DeepStream in Fine Detail: 2. Interpreting DeepStream Python Metadata
In this chapter, we introduce how to use the custom metadata defined inside NVIDIA DeepStream, including NvDsBatchMeta, NvDsFrameMeta, NvDsObjectMeta, NvDsClassifierMeta, and NvDsDisplayMeta.
Table of Contents
- Nvidia DeepStream in Fine Detail: 2. Interpreting DeepStream Python Metadata
- 1. An Example: deepstream_test_1
- 2. NvDsBatchMeta: The Basic Metadata Structure
- 3. Metadata Overview
- 4. Code: NvDsBatchMeta
- 5. Code: NvDsFrameMeta
- 6. Code: NvDsObjectMeta
- 7. Code: NvDsClassifierMeta
- 8. Code: NvDsDisplayMeta
The output of Gst-nvstreammux is a Gst Buffer together with NvDsBatchMeta. See the figure below.
A Gst Buffer is the basic unit of data transfer in GStreamer. Each Gst Buffer has associated metadata. The DeepStream SDK attaches the DeepStream metadata object NvDsBatchMeta, described in the following sections. The NVIDIA DeepStream SDK Python API and the NVIDIA DeepStream SDK API Reference Documentation describe it in detail.
1. An Example: deepstream_test_1
We will use the example below to understand, in detail, how the metadata is used.
def osd_sink_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    # Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE: 0,
        PGIE_CLASS_ID_PERSON: 0,
        PGIE_CLASS_ID_BICYCLE: 0,
        PGIE_CLASS_ID_ROADSIGN: 0
    }
    num_rects = 0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number = frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = \
            "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(
                frame_number, num_rects,
                obj_counter[PGIE_CLASS_ID_VEHICLE],
                obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font, font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
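The probe above repeats one traversal pattern three times: walk a GLib linked list, cast each node's `data`, and treat `StopIteration` as the end of the list. As a minimal sketch, that pattern can be factored into a generator (the helper name `pyds_iter` is my own, not part of pyds; it only assumes nodes expose `.data` and `.next`, as the pyds list nodes do):

```python
def pyds_iter(glist_node, cast_fn):
    """Walk a pyds-style GLib list, yielding each node's data cast with cast_fn.

    Works with any node object exposing .data and .next, e.g.
    batch_meta.frame_meta_list together with pyds.NvDsFrameMeta.cast.
    pyds raises StopIteration at the end of the list; we absorb it here.
    """
    node = glist_node
    while node is not None:
        try:
            yield cast_fn(node.data)
        except StopIteration:
            return
        try:
            node = node.next
        except StopIteration:
            return
```

With this helper, the frame loop of the probe would collapse to `for frame_meta in pyds_iter(batch_meta.frame_meta_list, pyds.NvDsFrameMeta.cast): ...`, and the object loop likewise with `pyds.NvDsObjectMeta.cast`.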
2. NvDsBatchMeta: The Basic Metadata Structure
DeepStream uses an extensible standard structure for metadata. The basic metadata structure, NvDsBatchMeta, starts with batch-level metadata and is created inside the Gst-nvstreammux plugin. Subsidiary metadata structures hold frame, object, classifier, and label data. DeepStream also provides a mechanism for adding user-specific metadata at the batch, frame, or object level.
DeepStream attaches metadata to a Gst buffer by attaching an NvDsBatchMeta structure and setting GstNvDsMetaType.meta_type to NVDS_BATCH_GST_META in the Gst-nvstreammux plugin. When your application processes the Gst buffer, it can iterate over the attached metadata to find NVDS_BATCH_GST_META.
The function gst_buffer_get_nvds_batch_meta() extracts NvDsBatchMeta from the Gst Buffer. (See the declaration in sources/include/gstnvdsmeta.h.) For more details, see the NVIDIA DeepStream SDK API Reference.
3. Metadata Overview
The figure below shows the structure and contents of the metadata types NvDsBatchMeta, NvDsFrameMeta, NvDsObjectMeta, NvDsClassifierMeta, and NvDsDisplayMeta. The logic is simple: first, the Gst-nvstreammux plugin generates NvDsBatchMeta and stores it in the Gst Buffer. NvDsBatchMeta holds the metadata of a single batch; from it, we can extract the metadata of an individual frame, which is called NvDsFrameMeta.
One caveat: the structure fields in the figure above have changed quite a bit since it was drawn. (My guess is that the field names shown date from DeepStream 3.0; some names were changed in the current release.) For the up-to-date names, read nvdsmeta.h under the directory .../sources/includes.
4. Code: NvDsBatchMeta
/**
 * Holds information about a formed batch containing frames from different
 * sources.
 * NOTE: Both Video and Audio metadata uses the same NvDsBatchMeta type.
 * NOTE: Audio batch metadata is formed within nvinferaudio plugin
 * and will not be corresponding to any one buffer output from nvinferaudio.
 * The NvDsBatchMeta for audio is attached to the last input buffer
 * when the audio batch buffering reach configurable threshold
 * (audio frame length) and this is when inference output is available.
 */
typedef struct _NvDsBatchMeta {
  NvDsBaseMeta base_meta;
  /** Holds the maximum number of frames in the batch. */
  guint max_frames_in_batch;
  /** Holds the number of frames now in the batch. */
  guint num_frames_in_batch;
  /** Holds a pointer to a pool of pointers of type @ref NvDsFrameMeta,
      representing a pool of frame metas. */
  NvDsMetaPool *frame_meta_pool;
  /** Holds a pointer to a pool of pointers of type NvDsObjMeta,
      representing a pool of object metas. */
  NvDsMetaPool *obj_meta_pool;
  /** Holds a pointer to a pool of pointers of type @ref NvDsClassifierMeta,
      representing a pool of classifier metas. */
  NvDsMetaPool *classifier_meta_pool;
  /** Holds a pointer to a pool of pointers of type @ref NvDsDisplayMeta,
      representing a pool of display metas. */
  NvDsMetaPool *display_meta_pool;
  /** Holds a pointer to a pool of pointers of type @ref NvDsUserMeta,
      representing a pool of user metas. */
  NvDsMetaPool *user_meta_pool;
  /** Holds a pointer to a pool of pointers of type @ref NvDsLabelInfo,
      representing a pool of label metas. */
  NvDsMetaPool *label_info_meta_pool;
  /** Holds a pointer to a list of pointers of type NvDsFrameMeta
      or NvDsAudioFrameMeta (when the batch represent audio batch),
      representing frame metas used in the current batch. */
  NvDsFrameMetaList *frame_meta_list;
  /** Holds a pointer to a list of pointers of type NvDsUserMeta,
      representing user metas in the current batch. */
  NvDsUserMetaList *batch_user_meta_list;
  /** Holds a lock to be set before accessing metadata to avoid
      simultaneous update by multiple components. */
  GRecMutex meta_mutex;
  /** Holds an array of user-specific batch information. */
  gint64 misc_batch_info[MAX_USER_FIELDS];
  /** For internal use. */
  gint64 reserved[MAX_RESERVED_FIELDS];
} NvDsBatchMeta;
In the example code, we only use l_frame = batch_meta.frame_meta_list from this structure.
5. Code: NvDsFrameMeta
/**
 * Holds metadata for a frame in a batch.
 */
typedef struct _NvDsFrameMeta {
  /** Holds the base metadata for the frame. */
  NvDsBaseMeta base_meta;
  /** Holds the pad or port index of the Gst-streammux plugin for the frame
      in the batch. */
  guint pad_index;
  /** Holds the location of the frame in the batch. The frame's
      @ref NvBufSurfaceParams are at index @a batch_id in the @a surfaceList
      array of @ref NvBufSurface. */
  guint batch_id;
  /** Holds the current frame number of the source. */
  gint frame_num;
  /** Holds the presentation timestamp (PTS) of the frame. */
  guint64 buf_pts;
  /** Holds the ntp timestamp. */
  guint64 ntp_timestamp;
  /** Holds the source ID of the frame in the batch, e.g. the camera ID.
      It need not be in sequential order. */
  guint source_id;
  /** Holds the number of surfaces in the frame, required in case of
      multiple surfaces in the frame. */
  gint num_surfaces_per_frame;
  /** Holds the width of the frame at input to Gst-streammux. */
  guint source_frame_width;
  /** Holds the height of the frame at input to Gst-streammux. */
  guint source_frame_height;
  /** Holds the surface type of the subframe, required in case of
      multiple surfaces in the frame. */
  guint surface_type;
  /** Holds the surface index of the subframe, required in case of
      multiple surfaces in the frame. */
  guint surface_index;
  /** Holds the number of object meta elements attached to current frame. */
  guint num_obj_meta;
  /** Holds a Boolean indicating whether inference is performed on the frame. */
  gboolean bInferDone;
  /** Holds a pointer to a list of pointers of type @ref NvDsObjectMeta
      in use for the frame. */
  NvDsObjectMetaList *obj_meta_list;
  /** Holds a pointer to a list of pointers of type @ref NvDsDisplayMeta
      in use for the frame. */
  NvDisplayMetaList *display_meta_list;
  /** Holds a pointer to a list of pointers of type @ref NvDsUserMeta
      in use for the frame. */
  NvDsUserMetaList *frame_user_meta_list;
  /** Holds additional user-defined frame information. */
  gint64 misc_frame_info[MAX_USER_FIELDS];
  /** Holds the width of the frame at output of Gst-streammux. */
  guint pipeline_width;
  /** Holds the height of the frame at output of Gst-streammux. */
  guint pipeline_height;
  /** For internal use. */
  gint64 reserved[MAX_RESERVED_FIELDS];
} NvDsFrameMeta;
In the example code, we use the following fields:
frame_number=frame_meta.frame_num
num_rects = frame_meta.num_obj_meta
l_obj=frame_meta.obj_meta_list
In fact, NvDsFrameMeta contains several other useful fields, such as:
- pad_index: holds the pad or port index of the Gst-streammux plugin for the frame in the batch. With more than one input video, we can read this field to tell which stream a frame came from. Alternatively, we can use source_id; in my setup the two should agree, since all input videos run at the same FPS and the batch size equals the number of streams;
- source_frame_width, source_frame_height: the image dimensions at the input of Gst-streammux (the input and output image sizes of streammux may differ);
- obj_meta_list, display_meta_list, frame_user_meta_list: through these three lists, NvDsFrameMeta connects to the object, display, and user metadata.
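To illustrate how `source_id` routes a frame back to its input stream, here is a small sketch that tallies frames per source. The function name and the `frame_metas` argument are my own; in a real probe the elements would come from iterating `batch_meta.frame_meta_list` and casting with `pyds.NvDsFrameMeta.cast`:

```python
from collections import Counter

def frames_per_source(frame_metas):
    """Count how many frames in a batch came from each input stream.

    Each element is expected to expose a .source_id attribute, as a
    pyds.NvDsFrameMeta does after casting.
    """
    counts = Counter()
    for fm in frame_metas:
        counts[fm.source_id] += 1
    return dict(counts)
```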
6. Code: NvDsObjectMeta
/**
 * Holds metadata for an object in the frame.
 */
typedef struct _NvDsObjectMeta {
  NvDsBaseMeta base_meta;
  /** Holds a pointer to the parent @ref NvDsObjectMeta. Set to NULL if
      no parent exists. */
  struct _NvDsObjectMeta *parent;
  /** Holds a unique component ID that identifies the metadata
      in this structure. */
  gint unique_component_id;
  /** Holds the index of the object class inferred by the primary
      detector/classifier. */
  gint class_id;
  /** Holds a unique ID for tracking the object. @ref UNTRACKED_OBJECT_ID
      indicates that the object has not been tracked. */
  guint64 object_id;
  /** Holds a structure containing bounding box parameters of the object when
      detected by detector. */
  NvDsComp_BboxInfo detector_bbox_info;
  /** Holds a structure containing bounding box coordinates of the object when
      processed by tracker. */
  NvDsComp_BboxInfo tracker_bbox_info;
  /** Holds a confidence value for the object, set by the inference
      component. confidence will be set to -0.1, if "Group Rectangles" mode of
      clustering is chosen since the algorithm does not preserve confidence
      values. Also, for objects found by tracker and not inference component,
      confidence will be set to -0.1 */
  gfloat confidence;
  /** Holds a confidence value for the object set by nvdcf_tracker.
      tracker_confidence will be set to -0.1 for KLT and IOU tracker */
  gfloat tracker_confidence;
  /** Holds a structure containing positional parameters of the object
      processed by the last component that updates it in the pipeline.
      e.g. If the tracker component is after the detector component in the
      pipeline then positional parameters are from tracker component.
      Positional parameters are clipped so that they do not fall outside frame
      boundary. Can also be used to overlay borders or semi-transparent boxes on
      objects. @see NvOSD_RectParams. */
  NvOSD_RectParams rect_params;
  /** Holds mask parameters for the object. This mask is overlayed on object
      @see NvOSD_MaskParams. */
  NvOSD_MaskParams mask_params;
  /** Holds text describing the object. This text can be overlayed on the
      standard text that identifies the object. @see NvOSD_TextParams. */
  NvOSD_TextParams text_params;
  /** Holds a string describing the class of the detected object. */
  gchar obj_label[MAX_LABEL_SIZE];
  /** Holds a pointer to a list of pointers of type @ref NvDsClassifierMeta. */
  NvDsClassifierMetaList *classifier_meta_list;
  /** Holds a pointer to a list of pointers of type @ref NvDsUserMeta. */
  NvDsUserMetaList *obj_user_meta_list;
  /** Holds additional user-defined object information. */
  gint64 misc_obj_info[MAX_USER_FIELDS];
  /** For internal use. */
  gint64 reserved[MAX_RESERVED_FIELDS];
} NvDsObjectMeta;
NvDsObjectMeta holds the information for each detected object. In the example code, we use only one field: obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0). In fact, NvDsObjectMeta contains several other useful fields, such as:
- class_id: holds the index of the object class inferred by the primary detector/classifier. It tells us which class the detected object belongs to;
- object_id: holds a unique ID for tracking the object. This field is very useful when we need to count objects: some objects stay on screen for a long time, and as long as an object keeps being detected and tracked, its ID does not change;
- rect_params, mask_params, text_params: the bounding box, mask, and text overlay of the detected object, respectively;
- obj_user_meta_list, classifier_meta_list: lists of user metadata and classifier metadata attached to the object.
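The remark about `object_id` suggests a simple counting strategy: accumulate the set of IDs seen so far, so an object that lingers across many frames is counted only once. A sketch under that assumption (the function name and the caller-owned `seen_ids` set are my own):

```python
def count_unique_objects(obj_metas, seen_ids):
    """Add each object's tracking ID to seen_ids and return the running total.

    obj_metas: iterable of objects exposing .object_id, as a
               pyds.NvDsObjectMeta does after casting.
    seen_ids:  a set the caller keeps alive across frames.
    """
    for om in obj_metas:
        seen_ids.add(om.object_id)
    return len(seen_ids)
```

Called once per frame with the same set, the return value is the cumulative count of distinct tracked objects.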
7. Code: NvDsClassifierMeta
typedef struct _NvDsClassifierMeta {
  NvDsBaseMeta base_meta;
  /** Holds the number of outputs/labels produced by the classifier. */
  guint num_labels;
  /** Holds a unique component ID for the classifier metadata. */
  gint unique_component_id;
  /** Holds a pointer to a list of pointers of type @ref NvDsLabelInfo. */
  NvDsLabelInfoList *label_info_list;
} NvDsClassifierMeta;
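NvDsClassifierMeta hangs off each object via classifier_meta_list, and the actual label strings live one level deeper in label_info_list. Here is a hedged sketch of that two-level walk, written against duck-typed list nodes; in real code the cast functions would be `pyds.NvDsClassifierMeta.cast` and `pyds.NvDsLabelInfo.cast`, and label entries expose `result_label`:

```python
def collect_labels(classifier_meta_list, cast_classifier, cast_label):
    """Flatten all classifier labels attached to one object into a list.

    classifier_meta_list: head node of the object's classifier meta list.
    cast_classifier / cast_label: cast functions for each level
    (pyds.NvDsClassifierMeta.cast / pyds.NvDsLabelInfo.cast in real code).
    """
    labels = []
    l_cls = classifier_meta_list
    while l_cls is not None:
        try:
            cls_meta = cast_classifier(l_cls.data)
        except StopIteration:
            break
        # Inner walk: each classifier meta carries its own label list.
        l_label = cls_meta.label_info_list
        while l_label is not None:
            try:
                label_info = cast_label(l_label.data)
            except StopIteration:
                break
            labels.append(label_info.result_label)
            try:
                l_label = l_label.next
            except StopIteration:
                break
        try:
            l_cls = l_cls.next
        except StopIteration:
            break
    return labels
```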
8. Code: NvDsDisplayMeta
typedef struct NvDsDisplayMeta {
  NvDsBaseMeta base_meta;
  /** Holds the number of rectangles described. */
  guint num_rects;
  /** Holds the number of labels (strings) described. */
  guint num_labels;
  /** Holds the number of lines described. */
  guint num_lines;
  /** Holds the number of arrows described. */
  guint num_arrows;
  /** Holds the number of circles described. */
  guint num_circles;
  /** Holds an array of positional parameters for rectangles.
      Used to overlay borders or semi-transparent rectangles,
      as required by the application. @see NvOSD_RectParams. */
  NvOSD_RectParams rect_params[MAX_ELEMENTS_IN_DISPLAY_META];
  /** Holds an array of text parameters for user-defined strings that can be
      overlayed using this structure. @see NvOSD_TextParams. */
  NvOSD_TextParams text_params[MAX_ELEMENTS_IN_DISPLAY_META];
  /** Holds an array of line parameters that the user can use to draw polygons
      in the frame, e.g. to show a RoI in the frame. @see NvOSD_LineParams. */
  NvOSD_LineParams line_params[MAX_ELEMENTS_IN_DISPLAY_META];
  /** Holds an array of arrow parameters that the user can use to draw arrows
      in the frame. @see NvOSD_ArrowParams */
  NvOSD_ArrowParams arrow_params[MAX_ELEMENTS_IN_DISPLAY_META];
  /** Holds an array of circle parameters that the user can use to draw circles
      in the frame. @see NvOSD_CircleParams */
  NvOSD_CircleParams circle_params[MAX_ELEMENTS_IN_DISPLAY_META];
  /** Holds an array of user-defined OSD metadata. */
  gint64 misc_osd_data[MAX_USER_FIELDS];
  /** For internal use. */
  gint64 reserved[MAX_RESERVED_FIELDS];
} NvDsDisplayMeta;
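As an illustration of how these fields get filled in practice, here is a hedged sketch of a helper that configures one text label, mirroring the assignments from the deepstream_test_1 probe (one white Serif label on a black background). The helper name and default values are my own; because it only sets attributes, it works on a real pyds.NvDsDisplayMeta obtained from nvds_acquire_display_meta_from_pool() as well as on any stand-in exposing the same fields:

```python
def set_display_label(display_meta, text, x_offset=10, y_offset=12,
                      font_name="Serif", font_size=10):
    """Configure the first text label of a display-meta-like object."""
    display_meta.num_labels = 1
    tp = display_meta.text_params[0]
    tp.display_text = text
    tp.x_offset = x_offset
    tp.y_offset = y_offset
    tp.font_params.font_name = font_name
    tp.font_params.font_size = font_size
    tp.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)  # white text
    tp.set_bg_clr = 1
    tp.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)  # black background
    return display_meta
```

In a real probe, the configured display meta would then be attached with pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta).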