Table of Contents

  • First, a supplement: checking version information
  • Illegal instruction (core dumped)
  • Correspondence between PyTorch and torchvision versions
  • PyTorch and torchvision error: RuntimeError: Couldn't load custom C++ ops.
  • DeepStream runtime error
  • Error when running LD_PRELOAD=./libmyplugins.so deepstream-app -c deepstream_app_config_yoloV5.txt
  • Installing DeepStream 5.0
  • dpkg: error processing archive /var/cache/apt/archives/deepstream-5.0_5.0.1-1_arm64.deb (--unpack):
  1. Reference article: "Jetson Nano: from flashing the system to CSI camera detection with DeepStream + TensorRT + yolov5"
  2. My installation essentially followed that article. It is very detailed, with step-by-step screenshots; working through it one step at a time should get you the intended setup
  3. Note: when copying commands from that article, Bilibili's copy function appends the author name and article link. It is best to press F12 -> F1 first and disable JavaScript, so that only the bare command gets copied!
  4. Also, pay close attention to the version correspondences, otherwise the installation may fail ❗
  5. Below are some errors you may run into, together with the corresponding fixes:

First, a supplement: checking version information

1. Check the JetPack version

jtop
Once the dashboard opens, press 6 to see the versions of JetPack, CUDA, cuDNN, OpenCV, TensorRT, etc.
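If jtop is not installed yet, it is provided by the jetson-stats package (an assumption about how it ended up on this board; the referenced article may install it differently):
sudo -H pip3 install jetson-stats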

2. Check the CUDA version

nvcc -V
release 10.2, V10.2.89

3. Check the cuDNN version:

dpkg -l libcudnn8
8.0.0.180-1

4. Check the OpenCV version:

dpkg -l libopencv
4.1.1-2-gd5a

python3 -c "import cv2; print(cv2.__version__)"
3.3.1         Note: if this shows 4.5.3, later steps may fail
python -c "import cv2; print(cv2.__version__)"
3.3.1

Check the version for OpenCV 3 and below:
pkg-config --modversion opencv
3.3.1
Check the OpenCV 4 version:
pkg-config --modversion opencv4
4.1.1

5. TensorRT version:

dpkg -l tensorrt
7.1.3.0-1

6. OS version:

lsb_release -i -r
Ubuntu 18.04

7. Check the Python version

python -V
Python 2.7.17
python3 -V
Python 3.6.9

8. Check the numpy version

python -c "import numpy; print(numpy.__version__)"
1.16.6
python3 -c "import numpy; print(numpy.__version__)"
1.19.3

9. Check where numpy is installed

python3 -c "import numpy; print(numpy.__file__)"
/home/deng/.local/lib/python3.6/site-packages/numpy/__init__.py
python -c "import numpy; print(numpy.__file__)"
/home/deng/.local/lib/python2.7/site-packages/numpy/__init__.pyc

10. Check the PyTorch version

python3 -c "import torch; print(torch.__version__)"
1.8.0

11. Check the torchvision version

python3 -c "import torchvision; print(torchvision.__version__)"
0.9.0

12. Check the DeepStream version

deepstream-app --version-all
deepstream-app version 5.0.0
DeepStreamSDK 5.0.0
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 7.1
cuDNN Version: 8.0
libNVWarp360 Version: 2.0.1d3

Illegal instruction (core dumped)

  1. Problem: in the python3 interactive shell, the statement import numpy raises this error, and since numpy is broken, importing cv2 or torch fails with the same error
  2. Reference article: https://blog.csdn.net/qq_43711697/article/details/119081630
    That article already explains it clearly: it is a numpy version problem
  3. Fix: delete the numpy 1.13.3 located in /usr/lib/python3/dist-packages/, and replace the numpy 1.19.5 in /home/username (your own user name here)/.local/lib/python3.6/site-packages/ with version 1.19.3

Some commands for reference:

python3 -m pip uninstall numpy      # uninstall the current (latest) numpy
pip install numpy                   # install the latest numpy
pip uninstall numpy                 # uninstall the latest numpy
pip install numpy==1.19.3           # install version 1.19.3
sudo rm -rf <folder name>           # delete that folder
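
Putting step 3 and the commands above together, a possible sequence looks like this (a sketch; the paths are from my machine, adjust the user name to yours):
sudo rm -rf /usr/lib/python3/dist-packages/numpy         # remove the system-wide numpy 1.13.3
python3 -m pip uninstall numpy                           # remove the 1.19.5 copy under ~/.local
python3 -m pip install numpy==1.19.3                     # install 1.19.3 instead
python3 -c "import numpy; print(numpy.__version__)"      # should now print 1.19.3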

Correspondence between PyTorch and torchvision versions

Official reference: https://github.com/pytorch/vision#installation

torch     torchvision   python
1.9.0     0.10.0        >=3.6
1.8.1     0.9.1         >=3.6
1.8.0     0.9.0         >=3.6
1.7.1     0.8.2         >=3.6
1.7.0     0.8.1         >=3.6
1.7.0     0.8.0         >=3.6
1.6.0     0.7.0         >=3.6
1.5.1     0.6.1         >=3.5
1.5.0     0.6.0         >=3.5
1.4.0     0.5.0         ==2.7, >=3.5, <=3.8
1.3.1     0.4.2         ==2.7, >=3.5, <=3.7
1.3.0     0.4.1         ==2.7, >=3.5, <=3.7
1.2.0     0.4.0         ==2.7, >=3.5, <=3.7
1.1.0     0.3.0         ==2.7, >=3.5, <=3.7
<=1.0.1   0.2.2         ==2.7, >=3.5, <=3.7
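
To find which row of the table applies to your machine, print both versions side by side (the same one-liners as in items 10 and 11 above):
python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"
1.8.0 0.9.0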

PyTorch and torchvision error: RuntimeError: Couldn't load custom C++ ops.

Problem: running inference with detect.py fails with the following error:
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
In other words, the PyTorch and torchvision versions do not match ❗

Reference article: https://blog.csdn.net/weixin_48695448/article/details/116424783

Solution: I had previously installed a second torchvision, and the two copies conflicted! My torch version is 1.8.0, while both torchvision 0.10.0 and 0.9.0 were present, so all that is needed is to uninstall the 0.10.0 torchvision:

pip uninstall torchvision
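
After uninstalling, a quick check that only the matching pair remains (expected here: torch 1.8.0 with torchvision 0.9.0; this assumes pip3 points at the same Python 3 environment):
pip3 list | grep -i torch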

DeepStream runtime error

Problem: after installing DeepStream, testing it with deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt produces the following error:
OpenCV Error: Assertion failed (vxGetStatus((vx_reference)img) == VX_SUCCESS) in createVXImageFromCVMat, file /dvs/git/dirty/git-master_linux/multimedia/deep-learning/VisionWorksAPI/1.5/headers/NVX/nvx_opencv_interop.hpp, line 296
An exception occurred. /dvs/git/dirty/git-master_linux/multimedia/deep-learning/VisionWorksAPI/1.5/headers/NVX/nvx_opencv_interop.hpp:296: error: (-215) vxGetStatus((vx_reference)img) == VX_SUCCESS in function createVXImageFromCVMat

Full output:
deng@deng-desktop:/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/…/…/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine open error
0:00:06.643761144 32628 0x7f28002260 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/…/…/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine failed
0:00:06.643953340 32628 0x7f28002260 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/…/…/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine failed, try rebuild
0:00:06.644056799 32628 0x7f28002260 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on DLA:
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on GPU:
INFO: [TRT]: conv1 + activation_1/Relu, block_1a_conv_1 + activation_2/Relu, block_1a_conv_2, block_1a_conv_shortcut + add_1 + activation_3/Relu, block_2a_conv_1 + activation_4/Relu, block_2a_conv_2, block_2a_conv_shortcut + add_2 + activation_5/Relu, block_3a_conv_1 + activation_6/Relu, block_3a_conv_2, block_3a_conv_shortcut + add_3 + activation_7/Relu, block_4a_conv_1 + activation_8/Relu, block_4a_conv_2, block_4a_conv_shortcut + add_4 + activation_9/Relu, conv2d_cov, conv2d_cov/Sigmoid, conv2d_bbox,
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine opened error
0:00:32.702663741 32628 0x7f28002260 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x272x480
1 OUTPUT kFLOAT conv2d_bbox 16x17x30
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x17x30
0:00:32.867504842 32628 0x7f28002260 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/config_infer_primary_nano.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg) FPS 1 (Avg) FPS 2 (Avg) FPS 3 (Avg) FPS 4 (Avg) FPS 5 (Avg) FPS 6 (Avg) FPS 7 (Avg)
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
** INFO: <bus_callback:181>: Pipeline ready
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running
ERROR: [TRT]: …/rtSafe/cuda/caskConvolutionRunner.cpp (317) - Cuda Error in allocateContextResources: 700 (an illegal memory access was encountered)
ERROR: [TRT]: FAILED_EXECUTION: std::exception
ERROR: Failed to enqueue inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:33.827139022 32628 0x15e41a30 WARN nvinfer gstnvinfer.cpp:1225:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1225): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
vxSetImmediateModeTarget returned 0xfffffff4
KLT Tracker Init
OpenCV Error: Assertion failed (vxGetStatus((vx_reference)img) == VX_SUCCESS) in createVXImageFromCVMat, file /dvs/git/dirty/git-master_linux/multimedia/deep-learning/VisionWorksAPI/1.5/headers/NVX/nvx_opencv_interop.hpp, line 296
An exception occurred. /dvs/git/dirty/git-master_linux/multimedia/deep-learning/VisionWorksAPI/1.5/headers/NVX/nvx_opencv_interop.hpp:296: error: (-215) vxGetStatus((vx_reference)img) == VX_SUCCESS in function createVXImageFromCVMat
OpenCV Error: Assertion failed (vxGetStatus((vx_reference)img) == VX_SUCCESS) in createVXImageFromCVMat, file /dvs/git/dirty/git-master_linux/multimedia/deep-learning/VisionWorksAPI/1.5/headers/NVX/nvx_opencv_interop.hpp, line 296
An exception occurred. /dvs/git/dirty/git-master_linux/multimedia/deep-learning/VisionWorksAPI/1.5/headers/NVX/nvx_opencv_interop.hpp:296: error: (-215) vxGetStatus((vx_reference)img) == VX_SUCCESS in function createVXImageFromCVMat

At first, based on the error below, I thought an engine file was missing, possibly because the installation went wrong, and I kept searching online for that file, but got nowhere... (the actual solution is further down ❗)
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/…/…/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine open error

So I simply skipped the DeepStream test for the moment and went straight to the YOLOv5-on-CSI-camera step. That installation went fine, but running LD_PRELOAD=./libmyplugins.so deepstream-app -c deepstream_app_config_yoloV5.txt failed

Error when running LD_PRELOAD=./libmyplugins.so deepstream-app -c deepstream_app_config_yoloV5.txt

The error is:
0:00:05.440110343 14335 0x3aa54800 ERROR nvinfer gstnvinfer.cpp:1111:get_converted_buffer:<primary_gie> cudaMemset2DAsync failed with error cudaErrorInvalidValue while converting buffer
0:00:05.440207400 14335 0x3aa54800 WARN nvinfer gstnvinfer.cpp:1372:gst_nvinfer_process_full_frame:<primary_gie> error: Buffer conversion failed
ERROR from primary_gie: Buffer conversion failed

Full output:
deng@deng-desktop:~/Yolov5-in-Deepstream-5.0/Deepstream 5.0$ LD_PRELOAD=./libmyplugins.so deepstream-app -c deepstream_app_config_yoloV5.txt
Using winsys: x11
0:00:04.112944064 14335 0x3b176350 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/deng/Yolov5-in-Deepstream-5.0/Deepstream 5.0/yolov5s.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x640x640
1 OUTPUT kFLOAT prob 6001x1x1
0:00:04.113348357 14335 0x3b176350 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/deng/Yolov5-in-Deepstream-5.0/Deepstream 5.0/yolov5s.engine
0:00:04.298495025 14335 0x3b176350 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/deng/Yolov5-in-Deepstream-5.0/Deepstream 5.0/config_infer_primary_yoloV5.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
** INFO: <bus_callback:181>: Pipeline ready
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running
0:00:05.440110343 14335 0x3aa54800 ERROR nvinfer gstnvinfer.cpp:1111:get_converted_buffer:<primary_gie> cudaMemset2DAsync failed with error cudaErrorInvalidValue while converting buffer
0:00:05.440207400 14335 0x3aa54800 WARN nvinfer gstnvinfer.cpp:1372:gst_nvinfer_process_full_frame:<primary_gie> error: Buffer conversion failed
ERROR from primary_gie: Buffer conversion failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1372): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting

Reference article: https://forums.developer.nvidia.com/t/problem-facing-with-deepstream-app/176675

Root cause: the DeepStream and JetPack versions do not match.
My Jetson Xavier NX runs JetPack 4.4.1 [L4T 32.4.4], while the DeepStream version installed by following the blog was 5.1; the mismatch between the two is what triggers the error.

Fix:

  • DeepStream 5.1 goes with JetPack 4.5.1 [L4T 32.5.1]
  • while DeepStream 5.0 goes with JetPack 4.4.1 [L4T 32.4.4]
  • so the fix is simply to switch DeepStream to version 5.0 (a quick way to confirm your own L4T release is shown right after this list)
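
To confirm the L4T release on your own board before picking a DeepStream version (JetPack 4.4.1 corresponds to L4T 32.4.4, as noted above), you can read the release file directly; the exact output shown here is my assumption:
cat /etc/nv_tegra_release
On JetPack 4.4.1 the first line should look roughly like: # R32 (release), REVISION: 4.4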

Installing DeepStream 5.0

  • NVIDIA docs: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#jetson-setup
  • Since the official docs describe DeepStream 5.1, follow the two articles below to get DeepStream 5.0 instead
  • Reference article ①: https://blog.csdn.net/weixin_38661447/article/details/107214222
  • Reference article ②: https://www.cnblogs.com/cc1784380709/p/14429294.html
  • Note: after installing the dependencies and librdkafka, you may also need to switch the apt source
  • sudo gedit /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
  • Once that is done, install directly with the following command ❗
sudo apt install deepstream-5.0
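
If you are unsure what the repository actually provides after editing the source list, apt can show the candidate versions first (a hedged check, not taken from the referenced articles):
apt-cache policy deepstream-5.0
If the source is set up correctly, 5.0.1-1 (the same version as the .deb in the next section) should appear as the candidate.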

dpkg: error processing archive /var/cache/apt/archives/deepstream-5.0_5.0.1-1_arm64.deb (--unpack):

The way of installing deepstream-5.0 described above is fine in itself, but on my machine it failed with:
dpkg: error processing archive /var/cache/apt/archives/deepstream-5.0_5.0.1-1_arm64.deb (--unpack):
trying to overwrite /opt/nvidia/deepstream/deepstream, which is also in package deepstream-5.1 5.1.0-1
Errors were encountered while processing:
/var/cache/apt/archives/deepstream-5.0_5.0.1-1_arm64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Reference article: https://blog.csdn.net/weixin_45885232/article/details/109290570

Solution: run the command below to force-overwrite and install deepstream-5.0:
sudo dpkg -i --force-overwrite /var/cache/apt/archives/deepstream-5.0_5.0.1-1_arm64.deb
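
An alternative I did not try (my own assumption, based on the dpkg message naming deepstream-5.1 as the conflicting package): remove the 5.1 package first and then install 5.0 normally:
sudo apt remove deepstream-5.1
sudo apt install deepstream-5.0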


Next article: Jetson Xavier NX object tracking on a CSI camera with yolov5 + deepsort
