I posted this question to the NVIDIA forum; the exchange with the moderator is below:

https://forums.developer.nvidia.com/t/can-deepstream-test1-run-on-jetson-nano-2g/179912


xuyeping

Hi,

After I compile deepstream-test1 directly, the program can't run on my Jetson Nano 2G. However, after removing the deep-learning model element nvinfer from the code, the program can run. Is it because my Nano 2G's memory is too small to run the model, or is there another reason?


Fiona.Chen

What kind of error have you met?


xuyeping

When I enter the command ./deepstream-test1-app sample_720p.h264, the deepstream-test1-app window is shown. However, the window only draws the title bar and border; no image is displayed in it. What appears as the window's canvas is actually the Jetson Nano desktop. The output of the program is as follows:

jetson@jetson-desktop:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1$ ./deepstream-test1-app sample_720p.h264
Now playing: sample_720p.h264
Using winsys: x11
Opening in BLOCKING MODE
0:00:11.198570435  8436   0x55934fcad0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:11.218911904  8436   0x55934fcad0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
0:00:11.316350994  8436   0x55934fcad0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
ERROR: [TRT]: ../rtSafe/cuda/reformat.cu (925) - Cuda Error in NCHWToNCHHW2: 719 (unspecified launch failure)
ERROR: [TRT]: FAILED_EXECUTION: std::exception
ERROR: Failed to enqueue inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:12.668284961  8436   0x5593091de0 WARN                 nvinfer gstnvinfer.cpp:1225:gst_nvinfer_input_queue_loop:<primary-nvinference-engine> error: Failed to queue input batch for inferencing
ERROR: Failed to make stream wait on event, cuda err_no:4, err_str:cudaErrorLaunchFailure
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
ERROR from element primary-nvinference-engine: Failed to queue input batch for inferencing
0:00:12.668905003  8436   0x5593091de0 WARN                 nvinfer gstnvinfer.cpp:1225:gst_nvinfer_input_queue_loop:<primary-nvinference-engine> error: Failed to queue input batch for inferencing
ERROR: Failed to make stream wait on event, cuda err_no:4, err_str:cudaErrorLaunchFailure
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:12.669423588  8436   0x5593091de0 WARN                 nvinfer gstnvinfer.cpp:1225:gst_nvinfer_input_queue_loop:<primary-nvinference-engine> error: Failed to queue input batch for inferencing
ERROR: Failed to make stream wait on event, cuda err_no:4, err_str:cudaErrorLaunchFailure
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:12.669702021  8436   0x5593091de0 WARN                 nvinfer gstnvinfer.cpp:1225:gst_nvinfer_input_queue_loop:<primary-nvinference-engine> error: Failed to queue input batch for inferencing
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1225): gst_nvinfer_input_queue_loop (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine
Returned, stopping playback
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
** (deepstream-test1-app:8436): WARNING **: 14:51:19.402: Use gst_egl_image_allocator_alloc() to allocate from this allocator
0:00:13.099598980  8436   0x5593091c00 WARN                 nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:13.099640229  8436   0x5593091c00 WARN                 nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
Frame Number = 1 Number of objects = 0 Vehicle Count = 0 Person Count = 0
** (deepstream-test1-app:8436): WARNING **: 14:51:19.417: Use gst_egl_image_allocator_alloc() to allocate from this allocator
Frame Number = 2 Number of objects = 0 Vehicle Count = 0 Person Count = 0
** (deepstream-test1-app:8436): WARNING **: 14:51:19.429: Use gst_egl_image_allocator_alloc() to allocate from this allocator
Frame Number = 3 Number of objects = 0 Vehicle Count = 0 Person Count = 0
** (deepstream-test1-app:8436): WARNING **: 14:51:19.442: Use gst_egl_image_allocator_alloc() to allocate from this allocator

Fiona.Chen

I just tried deepstream-test1-app on a Nano board; it runs and I can see the video displayed.

Can you provide platform information?

**• Hardware Platform (Jetson / GPU)**
**• DeepStream Version**
**• JetPack Version (valid for Jetson only)**
**• TensorRT Version**
**• NVIDIA GPU Driver Version (valid for GPU only)**

xuyeping

Thank you so much! My running environment is as follows:

**• Hardware Platform: Jetson Nano 2G**
**• DeepStream Version: DeepStream 5.1**
**• JetPack Version: UNKNOWN [L4T 32.4.4] (this information is from jtop)**
**• TensorRT Version: 7.1.3.0**
**• NVIDIA GPU Driver Version: (valid for GPU only)**

The information displayed in jtop is shown in the figure below:


Fiona.Chen

DeepStream 5.1 is based on JetPack 4.5.1 GA (corresponding to the L4T 32.5.1 release).

Please install DeepStream 5.1 and its dependencies correctly.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#jetson-setup


According to Fiona.Chen's reply, the cause should be a software-version mismatch on my Jetson Nano: jtop shows L4T 32.4.4, while DeepStream 5.1 requires JetPack 4.5.1 (L4T 32.5.1). I will download the system image from the URL above and re-flash the system. Later I will write up the results of that experiment in detail and share them with everyone!
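Before re-flashing, the installed L4T release can be compared against what DeepStream 5.1 expects. The sketch below is only an illustration, not an official tool: it assumes the usual header format of /etc/nv_tegra_release on Jetson devices, and the sample header string here is hypothetical (on a real board you would read the file instead).

```shell
# DeepStream 5.1 requires JetPack 4.5.1, i.e. L4T 32.5.1.
required="32.5.1"

# /etc/nv_tegra_release on a Jetson starts with a line like:
#   # R32 (release), REVISION: 5.1, GCID: ..., BOARD: t210ref, ...
# Extract "MAJOR.REVISION" from that header line.
parse_l4t_version() {
  echo "$1" | sed -n 's/^# R\([0-9]*\) (release), REVISION: \([0-9.]*\),.*/\1.\2/p'
}

# On a real device: header=$(head -n1 /etc/nv_tegra_release)
# Hypothetical sample header for illustration:
header="# R32 (release), REVISION: 5.1, GCID: 26202423, BOARD: t210ref"
installed=$(parse_l4t_version "$header")

if [ "$installed" = "$required" ]; then
  echo "L4T $installed matches the DeepStream 5.1 requirement"
else
  echo "L4T $installed does not match required $required - re-flash with JetPack 4.5.1"
fi
```

With L4T 32.4.4 installed (as jtop reported above), this check would report a mismatch, which is consistent with Fiona.Chen's diagnosis.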
