Table of contents:

  • 1 Installing the tensorflow-onnx environment and converting a TensorFlow pb model to an ONNX model
    • 1.1 Installing the tensorflow-onnx environment
    • 1.2 Converting a TensorFlow pb model to an ONNX model
      • 1.2.1 Converting a TensorFlow pb model to an ONNX model
      • 1.2.2 Possible conversion error 1: OSError: SavedModel file does not exist at: checkpoints//{saved_model.pbtxt|saved_model.pb}
      • 1.2.3 Possible conversion error 2: `ValueError: StridedSlice: only strides=1 is supported`
      • 1.2.4 Possible conversion error 3: RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel.
  • 2 Installing the onnx-tensorrt environment and converting the ONNX model to a TRT model
    • 2.1 Installing the onnx-tensorrt environment
    • 2.2 Using the `onnx2trt` tool to convert our `ONNX model` to a `TRT model`

First, a note on my environment:

  • 硬件:NVIDIA Jetson Xavier NX
  • tensorflow-gpu==2.2.0
  • onnx==1.7.0
  • tf2onnx==1.7.1
  • cuda=10.2
  • cudnn=7.6.5

My goal is to convert the yolov4-deepsort TensorFlow model to a TensorRT model and deploy it on an NVIDIA Jetson Xavier NX. The yolov4-deepsort project does not document how to convert its model to TensorRT; see this issue.


1 Installing the tensorflow-onnx environment and converting a TensorFlow pb model to an ONNX model

1.1 Installing the tensorflow-onnx environment

1. Find the tensorflow-onnx repository

The official tensorflow-onnx GitHub page: https://github.com/onnx/tensorflow-onnx

2. Clone the repository

git clone https://github.com/onnx/tensorflow-onnx

3. Install tensorflow-onnx

python setup.py install

or, if you want an editable development install:

python setup.py develop

(tensorflow-onnx is also published on PyPI, so pip install tf2onnx works as well.)

1.2 Converting a TensorFlow pb model to an ONNX model

1.2.1 Converting a TensorFlow pb model to an ONNX model

1. First, look at the directory structure of my TensorFlow model files:

(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$ tree
.
└── yolov4-tiny-416
    ├── assets
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index

3 directories, 4 files
(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$
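
Before converting, it is worth a quick check that the SavedModel actually loads and exposes a signature. A minimal sanity check, assuming TensorFlow 2.x and that it is run from the checkpoints directory shown above:

import tensorflow as tf

# Load the SavedModel directory (the one that contains saved_model.pb).
model = tf.saved_model.load("yolov4-tiny-416")

# tf2onnx picks the first signature when --signature_def is not given;
# for this export it should print ['serving_default'].
print(list(model.signatures.keys()))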

2. Convert the pb model to an ONNX model with the following command:

python -m tf2onnx.convert --saved-model checkpoints/ --output saved_model.onnx

Running the command (with --saved-model pointing at the yolov4-tiny-416 directory and --opset 10 added; the error sections below explain why), the output shows the conversion succeeds:

python -m tf2onnx.convert --saved-model yolov4-tiny-416/ --output saved_model.onnx --opset 10

(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$ python -m tf2onnx.convert --saved-model yolov4-tiny-416/ --output saved_model.onnx --opset 10
2020-10-27 20:29:41,173 - WARNING - '--tag' not specified for saved_model. Using --tag serve
2020-10-27 20:29:44,329 - INFO - Signatures found in model: [serving_default].
2020-10-27 20:29:44,329 - WARNING - '--signature_def' not specified, using first signature: serving_default
WARNING:tensorflow:From /home/shl/anaconda3/envs/yolov4/lib/python3.6/site-packages/tf2onnx/tf_loader.py:413: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
2020-10-27 20:29:44,935 - WARNING - From /home/shl/anaconda3/envs/yolov4/lib/python3.6/site-packages/tf2onnx/tf_loader.py:413: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
2020-10-27 20:29:45,264 - INFO - Using tensorflow=2.2.0, onnx=1.7.0, tf2onnx=1.7.1/796841
2020-10-27 20:29:45,264 - INFO - Using opset <onnx, 10>
2020-10-27 20:29:45,686 - INFO - Computed 0 values for constant folding
2020-10-27 20:29:46,825 - INFO - Optimizing ONNX model
2020-10-27 20:29:47,516 - INFO - After optimization: BatchNormalization -19 (19->0), Cast -8 (28->20), Const -210 (282->72), Gather +2 (2->4), Identity -5 (5->0), Mul -1 (12->11), NonZero -1 (2->1), Reshape -1 (12->11), Shape -6 (12->6), Slice -2 (18->16), Squeeze -7 (8->1), Transpose -80 (90->10), Unsqueeze -38 (40->2)
2020-10-27 20:29:47,524 - INFO -
2020-10-27 20:29:47,524 - INFO - Successfully converted TensorFlow model yolov4-tiny-416/ to ONNX
2020-10-27 20:29:47,537 - INFO - ONNX model is saved at saved_model.onnx
(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$ ls

The ONNX model is generated successfully:

(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$ tree
.
├── saved_model.onnx
└── yolov4-tiny-416
    ├── assets
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index

3 directories, 4 files
(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$
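
As an optional sanity check on the exported file, you can load it with the onnx package and run the checker. A minimal sketch, assuming the same Python environment as above:

import onnx

# Load the exported model and validate its structure.
model = onnx.load("saved_model.onnx")
onnx.checker.check_model(model)

# Print the opset the converter used (should show 10 here).
for imp in model.opset_import:
    print(imp.domain or "ai.onnx", imp.version)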

1.2.2 Possible conversion error 1: OSError: SavedModel file does not exist at: checkpoints//{saved_model.pbtxt|saved_model.pb}

If you run the command with --saved-model checkpoints/ as above, it fails with: OSError: SavedModel file does not exist at: checkpoints//{saved_model.pbtxt|saved_model.pb}

The error is caused by a wrong --saved-model path; it should point at the yolov4-tiny-416 directory (see this issue on the official repo).

To fix this error:

  • The --saved-model argument must be the directory that holds the .pb model, not the .pb file itself. For example, --saved-model checkpoint is correct; --saved-model checkpoint/saved_model.pb is wrong.
  • The .pb model file must be named saved_model.pb

The naming and layout of the .pb model file are as follows:

(cuda10) shl@zhihui-mint:~/shl_res/1_project/deep_sort/pred_models$ tree checkpoint/
checkpoint/
└── saved_model.pb

0 directories, 1 file
(cuda10) shl@zhihui-mint:~/shl_res/1_project/deep_sort/pred_models$
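
A small helper along these lines (purely illustrative, the function name is my own) can catch the wrong-path case before you even run the converter:

import os

def is_saved_model_dir(path):
    # tf2onnx's --saved-model flag expects a directory containing
    # saved_model.pb (or saved_model.pbtxt), not the .pb file itself.
    return os.path.isdir(path) and any(
        os.path.exists(os.path.join(path, name))
        for name in ("saved_model.pb", "saved_model.pbtxt"))

print(is_saved_model_dir("checkpoint"))                 # True  -> correct usage
print(is_saved_model_dir("checkpoint/saved_model.pb"))  # False -> wrong usage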

1.2.3 Possible conversion error 2: ValueError: StridedSlice: only strides=1 is supported

1. Possible errors: the two errors below share the same cause

  • ValueError: StridedSlice: only strides=1 is supported
  • NameError: name 'tf2onnx' is not defined
(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$ python -m tf2onnx.convert --saved-model yolov4-tiny-416/ --output saved_model.onnx --opset 1...
Traceback (most recent call last):
  File "/home/shl/anaconda3/envs/yolov4/lib/python3.6/site-packages/tf2onnx/tfonnx.py", line 286, in tensorflow_onnx_mapping
    func(g, node, **kwargs)
  File "/home/shl/anaconda3/envs/yolov4/lib/python3.6/site-packages/tf2onnx/onnx_opset/tensor.py", line 688, in version_1
    raise ValueError("StridedSlice: only strides=1 is supported")
ValueError: StridedSlice: only strides=1 is supported
2020-10-27 20:27:57,379 - ERROR - Unsupported ops: Counter({'FusedBatchNormV3': 19, 'Where': 2, 'ResizeBilinear': 1, 'GreaterEqual': 1})
Traceback (most recent call last):
  File "/home/shl/anaconda3/envs/yolov4/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/shl/anaconda3/envs/yolov4/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/shl/anaconda3/envs/yolov4/lib/python3.6/site-packages/tf2onnx/convert.py", line 185, in <module>
    main()
  File "/home/shl/anaconda3/envs/yolov4/lib/python3.6/site-packages/tf2onnx/convert.py", line 163, in main
    const_node_values=const_node_values)
  File "/home/shl/anaconda3/envs/yolov4/lib/python3.6/site-packages/tf2onnx/tfonnx.py", line 528, in process_tf_graph
    raise exceptions[0]
  File "/home/shl/anaconda3/envs/yolov4/lib/python3.6/site-packages/tf2onnx/tfonnx.py", line 286, in tensorflow_onnx_mapping
    func(g, node, **kwargs)
  File "/home/shl/anaconda3/envs/yolov4/lib/python3.6/site-packages/tf2onnx/onnx_opset/tensor.py", line 969, in version_1
    dst = tf2onnx.utils.ONNX_DTYPE_NAMES[dst]
NameError: name 'tf2onnx' is not defined
(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$ conda deactivate
(base) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$ conda activate yolov4
(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints$ 

2. How to fix the errors

Both errors above come from the value given to the --opset parameter; as suggested in this issue, set it to --opset 10.

1.2.4 Possible conversion error 3: RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel.

The following error did not occur on the Jetson Xavier NX; I hit it while converting a different TensorFlow pb model. That pb model was generated with tensorflow 1.5.0, while tensorflow-onnx requires at least tensorflow 1.12, hence the error: RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`

(cuda10) shl@zhihui-mint:~/shl_res/1_project/deep_sort/pred_models$ python -m tf2onnx.convert --saved-model checkpoint --output mars-small128.onnx
2020-10-28 20:04:30,246 - WARNING - From /home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
2020-10-28 20:04:30,360 - WARNING - '--tag' not specified for saved_model. Using --tag serve
Traceback (most recent call last):
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tf2onnx/convert.py", line 185, in <module>
    main()
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tf2onnx/convert.py", line 136, in main
    args.signature_def, args.concrete_function, args.large_model)
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tf2onnx/tf_loader.py", line 337, in from_saved_model
    _from_saved_model_v1(sess, model_path, input_names, output_names, tag, signatures)
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tf2onnx/tf_loader.py", line 231, in _from_saved_model_v1
    imported = tf.saved_model.loader.load(sess, tag, model_path)
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 269, in load
    return loader.load(sess, tags, import_scope, **saver_kwargs)
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 422, in load
    **saver_kwargs)
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 349, in load_graph
    meta_graph_def = self.get_meta_graph_def_from_tags(tags)
  File "/home/shl/anaconda3/envs/cuda10/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 327, in get_meta_graph_def_from_tags
    "\navailable_tags: " + str(available_tags))
RuntimeError: MetaGraphDef associated with tags 'serve' could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
available_tags: [set()]
(cuda10) shl@zhihui-mint:~/shl_res/1_project/deep_sort/pred_models$ ls
checkpoint  detections  detections-20201028T101110Z-001.zip  networks  networks-20201028T101118Z-001.zip  tensorflow-onnx
(cuda10) shl@zhihui-mint:~/shl_res/1_project/deep_sort/pred_models$
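
The error message suggests inspecting the tag-sets with saved_model_cli; you can also read them straight from the SavedModel protobuf. A minimal sketch, assuming the checkpoint directory above (available_tags: [set()] in the log means the meta graph carries no tags at all):

from tensorflow.core.protobuf.saved_model_pb2 import SavedModel

# Parse the SavedModel protobuf and list the tag-sets it contains;
# tf2onnx defaults to --tag serve, which this old TF 1.5 export lacks.
sm = SavedModel()
with open("checkpoint/saved_model.pb", "rb") as f:
    sm.ParseFromString(f.read())
for mg in sm.meta_graphs:
    print(list(mg.meta_info_def.tags))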

2 Installing the onnx-tensorrt environment and converting the ONNX model to a TRT model

2.1 Installing the onnx-tensorrt environment

My installation process below mainly follows issue #385 on the official repo.

1. Clone the onnx-tensorrt branch you need

git clone -b 7.1 --single-branch https://github.com/onnx/onnx-tensorrt.git

2. Enter the source directory

cd onnx-tensorrt/

zhihui@zhihui-desktop:~$ cd onnx-tensorrt/
zhihui@zhihui-desktop:~/onnx-tensorrt$ ls
CMakeLists.txt       ShapeTensor.hpp            onnx2trt_common.hpp
ImporterContext.hpp  ShapedWeights.cpp          onnx2trt_runtime.hpp
LICENSE              ShapedWeights.hpp          onnx2trt_utils.cpp
LoopHelpers.cpp      Status.hpp                 onnx2trt_utils.hpp
LoopHelpers.hpp      TensorOrWeights.hpp        onnx_backend_test.py
ModelImporter.cpp    builtin_op_importers.cpp   onnx_tensorrt
ModelImporter.hpp    builtin_op_importers.hpp   onnx_trt_backend.cpp
NvOnnxParser.cpp     common.hpp                 onnx_utils.hpp
NvOnnxParser.h       contributing.md            operators.md
OnnxAttrs.cpp        docker                     setup.py
OnnxAttrs.hpp        getSupportedAPITest.cpp    third_party
README.md            libnvonnxparser.version    toposort.hpp
RNNHelpers.cpp       main.cpp                   trt_utils.hpp
RNNHelpers.hpp       nv_onnx_parser_bindings.i  utils.hpp
ShapeTensor.cpp      onnx2trt.hpp
zhihui@zhihui-desktop:~/onnx-tensorrt$

3. Clone the submodules this repository depends on

git submodule update --init --recursive

A network proxy problem made this step fail on the server, so I ran the submodule update on my laptop and then copied the code over to the server!

$ git submodule update --init --recursive
Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx'
Cloning into 'E:/NVIDIAJetson/onnx-tensorrt/third_party/onnx'...
Submodule path 'third_party/onnx': checked out '553df22c67bee5f0fe6599cff60f1afc6748c635'
Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/onnx/third_party/benchmark'
Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx/third_party/pybind11'
Cloning into 'E:/NVIDIAJetson/onnx-tensorrt/third_party/onnx/third_party/benchmark'...
Cloning into 'E:/NVIDIAJetson/onnx-tensorrt/third_party/onnx/third_party/pybind11'...
Submodule path 'third_party/onnx/third_party/benchmark': checked out 'e776aa0275e293707b6a0901e0e8d8a8a3679508'
Submodule path 'third_party/onnx/third_party/pybind11': checked out '09f082940113661256310e3f4811aa7261a9fa05'
Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/onnx/third_party/pybind11/tools/clang'
Cloning into 'E:/NVIDIAJetson/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang'...
Submodule path 'third_party/onnx/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5'
shl@shliangPC MINGW64 /e/NVIDIAJetson/onnx-tensorrt (7.1)

Note 1:

If you hit the error Received HTTP code 503 from proxy after CONNECT while using git or pip, see this blog post.

Note 2:

If the build below fails with a "does not contain a CMakeLists.txt" error, it is because you did not clone the submodules, so make sure the command above has been run!

4. Install protobuf and its dependencies; skip this step if it is already installed:

apt update -y && apt install -y libprotobuf-dev protobuf-compiler

zhihui@zhihui-desktop:~/onnx-tensorrt$ sudo apt update -y && apt install -y libprotobuf-dev protobuf-compiler
Hit:1 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
Hit:5 https://repo.download.nvidia.cn/jetson/common r32.4 InRelease
Hit:6 https://repo.download.nvidia.cn/jetson/t194 r32.4 InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
6 packages can be upgraded. Run 'apt list --upgradable' to see them.
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
zhihui@zhihui-desktop:~/onnx-tensorrt$ sudo su    # switching to the root user avoids the error above
root@zhihui-desktop:/home/zhihui/onnx-tensorrt# sudo apt update -y && apt install -y libprotobuf-dev protobuf-compiler
Hit:1 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
Hit:5 https://repo.download.nvidia.cn/jetson/common r32.4 InRelease
Hit:6 https://repo.download.nvidia.cn/jetson/t194 r32.4 InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
6 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
libprotobuf-dev is already the newest version (3.0.0-9.1ubuntu1).
The following packages were automatically installed and are no longer required:
  activity-log-manager archdetect-deb bamfdaemon bogl-bterm busybox-static
  compiz-core compiz-plugins-default cryptsetup-bin debhelper dh-autoreconf
  dh-strip-nondeterminism dpkg-repack gir1.2-accounts-1.0 gir1.2-gdata-0.0
  gir1.2-gst-plugins-base-1.0 gir1.2-gstreamer-1.0 gir1.2-harfbuzz-0.0
  gir1.2-rb-3.0 gir1.2-signon-1.0 gir1.2-timezonemap-1.0 gir1.2-totem-1.0
  gir1.2-totemplparser-1.0 gir1.2-xkl-1.0 gnome-calculator
  gnome-system-monitor grub-common gtk3-nocsd icu-devtools kde-window-manager
  kpackagetool5 kwayland-data kwin-common kwin-x11 libarchive-cpio-perl
  libatkmm-1.6-1v5 libboost-python1.65.1 libcairomm-1.0-1v5
  libcolumbus1-common libcolumbus1v5 libcompizconfig0 libdebian-installer4
  libdecoration0 libdmapsharing-3.0-2 libegl1-mesa-dev libeigen3-dev
  libfile-stripnondeterminism-perl libgeonames-common libgeonames0
  libgles2-mesa-dev libglibmm-2.4-1v5 libgpod-common libgpod4 libgraphite2-dev
  libgtk3-nocsd0 libgtkmm-3.0-1v5 libharfbuzz-gobject0 libicu-le-hb0
  libiculx60 libkdecorations2-5v5 libkdecorations2private5v5 libkf5activities5
  libkf5declarative-data libkf5declarative5 libkf5globalaccelprivate5
  libkf5idletime5 libkf5kcmutils-data libkf5kcmutils5 libkf5package-data
  libkf5package5 libkf5plasma5 libkf5quickaddons5 libkf5waylandclient5
  libkf5waylandserver5 libkscreenlocker5 libkwin4-effect-builtins1
  libkwineffects11 libkwinglutils11 libkwinxrenderutils11
  libmail-sendmail-perl libnm-gtk0 liborc-0.4-dev liborc-0.4-dev-bin
  libpanel-applet3 libpangomm-1.4-1v5 libpcre16-3 libpcre3-dev libpcre32-3
  libpcrecpp0v5 libpinyin-data libpinyin13 libqt5designer5 libqt5help5
  libqt5positioning5 libqt5sensors5 libqt5sql5 libqt5test5 libqt5webchannel5
  libqt5webkit5 libsgutils2-2 libsignon-glib1 libsys-hostname-long-perl
  libtimezonemap-data libtimezonemap1 libunity-control-center1
  libunity-core-6.0-9 libunity-misc4 libwayland-bin libwayland-dev
  libxcb-composite0 libxcb-cursor0 libxcb-damage0 libxrandr-dev libxrender-dev
  libzeitgeist-1.0-1 os-prober pkg-config po-debconf
  qml-module-org-kde-kquickcontrolsaddons rdate session-shortcuts tasksel
  tasksel-data unity-asset-pool unity-greeter unity-lens-applications
  unity-lens-files unity-lens-music unity-lens-video unity-schemas
  unity-scope-video-remote unity-scopes-master-default unity-scopes-runner
  x11proto-randr-dev
Use 'sudo apt autoremove' to remove them.
The following NEW packages will be installed:
  libprotoc10 protobuf-compiler
0 upgraded, 2 newly installed, 0 to remove and 6 not upgraded.
Need to get 556 kB of archives.
After this operation, 2348 kB of additional disk space will be used.
Get:1 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 libprotoc10 arm64 3.0.0-9.1ubuntu1 [531 kB]
Get:2 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 protobuf-compiler arm64 3.0.0-9.1ubuntu1 [24.4 kB]
Fetched 556 kB in 2s (259 kB/s)
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LC_TIME = "zh_CN.UTF-8",
        LC_MONETARY = "zh_CN.UTF-8",
        LC_ADDRESS = "zh_CN.UTF-8",
        LC_TELEPHONE = "zh_CN.UTF-8",
        LC_NAME = "zh_CN.UTF-8",
        LC_MEASUREMENT = "zh_CN.UTF-8",
        LC_IDENTIFICATION = "zh_CN.UTF-8",
        LC_NUMERIC = "zh_CN.UTF-8",
        LC_PAPER = "zh_CN.UTF-8",
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Selecting previously unselected package libprotoc10:arm64.
(Reading database ... 164315 files and directories currently installed.)
Preparing to unpack .../libprotoc10_3.0.0-9.1ubuntu1_arm64.deb ...
Unpacking libprotoc10:arm64 (3.0.0-9.1ubuntu1) ...
Selecting previously unselected package protobuf-compiler.
Preparing to unpack .../protobuf-compiler_3.0.0-9.1ubuntu1_arm64.deb ...
Unpacking protobuf-compiler (3.0.0-9.1ubuntu1) ...
Setting up libprotoc10:arm64 (3.0.0-9.1ubuntu1) ...
Setting up protobuf-compiler (3.0.0-9.1ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
root@zhihui-desktop:/home/zhihui/onnx-tensorrt# sudo su zhihui

Note:

You may run into the error above, E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied). Two ways to fix it:

  • Method 1: switch to the root user: sudo su
  • Method 2: delete the lock file: sudo rm -rf /var/lib/dpkg/lock-frontend

My team lead later pointed out that protobuf can in fact be installed with apt-get.

First use the command below to search for protobuf-related packages, since we don't know what the library is called in the apt repositories:

apt-cache search protobuf

5. Run cmake with the following commands:

mkdir build && cd build
cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt -DCUDA_INCLUDE_DIRS=/usr/local/cuda/include

The cmake output:

root@zhihui-desktop:/home/zhihui/onnx-tensorrt# sudo su zhihui
zhihui@zhihui-desktop:~/onnx-tensorrt$ mkdir build && cd build
zhihui@zhihui-desktop:~/onnx-tensorrt/build$ cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt -DCUDA_INCLUDE_DIRS=/usr/local/cuda/include
-- The CXX compiler identification is GNU 7.5.0
-- The C compiler identification is GNU 7.5.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Found Protobuf: /usr/local/lib/libprotobuf.so;-lpthread (found version "3.8.0")
-- Build type not set - defaulting to Release
Generated: /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto
Generated: /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx-ml.proto
--
-- ******** Summary ********
--   CMake version         : 3.18.2
--   CMake command         : /usr/local/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/c++
--   C++ compiler version  : 7.5.0
--   CXX flags             :  -Wall -Wno-deprecated-declarations -Wno-unused-function -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     :
--   CMAKE_INSTALL_PREFIX  : /usr/local
--   CMAKE_MODULE_PATH     :
--
--   ONNX version          : 1.6.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--   ONNXIFI_ENABLE_EXT    : OFF
--
--   Protobuf compiler     : /usr/local/bin/protoc
--   Protobuf includes     : /usr/local/include
--   Protobuf libraries    : /usr/local/lib/libprotobuf.so;-lpthread
--   BUILD_ONNX_PYTHON     : OFF
-- Found TensorRT headers at /usr/include/aarch64-linux-gnu
-- Find TensorRT libs at /usr/lib/aarch64-linux-gnu/libnvinfer.so;/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so;/usr/lib/aarch64-linux-gnu/libmyelin.so
-- Found TENSORRT: /usr/include/aarch64-linux-gnu
-- Configuring done
-- Generating done
CMake Warning:
  Manually-specified variables were not used by the project:

    CUDA_INCLUDE_DIRS

-- Build files have been written to: /home/zhihui/onnx-tensorrt/build

Note:

As shown below, you may run into a cmake version that is too old!

(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints/onnx-tensorrt/build$ cmake .. -DTENSORRT_ROOT=../tensorrt_root && make -j
CMake Error at CMakeLists.txt:21 (cmake_minimum_required):
  CMake 3.13 or higher is required.  You are running version 3.10.2

-- Configuring incomplete, errors occurred!
(yolov4) shl@zhihui-mint:~/shl_res/1_project/yolov4-deepsort/checkpoints/onnx-tensorrt/build$

The message above shows that our cmake version is too old and needs upgrading; for how to upgrade cmake, see this blog post.

6. Compile the project:

make -j8

The build output looks like this:

zhihui@zhihui-desktop:~/onnx-tensorrt/build$ make -j8
Scanning dependencies of target gen_onnx_proto
[  1%] Running gen_proto.py on onnx/onnx.in.proto
Processing /home/zhihui/onnx-tensorrt/third_party/onnx/onnx/onnx.in.proto
Writing /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto
Writing /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto3
Writing /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx-ml.pb.h
generating /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx_pb.py
[  2%] Running C++ protocol buffer compiler on /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx_onnx2trt_onnx-ml.proto
[  2%] Built target gen_onnx_proto
[  3%] Running gen_proto.py on onnx/onnx-operators.in.proto
Processing /home/zhihui/onnx-tensorrt/third_party/onnx/onnx/onnx-operators.in.proto
Writing /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx-ml.proto
Writing /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx-ml.proto3
Writing /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators-ml.pb.h
generating /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx_operators_pb.py
[  5%] Running C++ protocol buffer compiler on /home/zhihui/onnx-tensorrt/build/third_party/onnx/onnx/onnx-operators_onnx2trt_onnx-ml.proto
Scanning dependencies of target onnx_proto
[  6%] Building CXX object third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx_onnx2trt_onnx-ml.pb.cc.o
[  7%] Building CXX object third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-operators_onnx2trt_onnx-ml.pb.cc.o
[  8%] Linking CXX static library libonnx_proto.a
[ 11%] Built target onnx_proto
Scanning dependencies of target nvonnxparser_static
Scanning dependencies of target nvonnxparser
Scanning dependencies of target onnx
[ 12%] Building CXX object CMakeFiles/nvonnxparser_static.dir/NvOnnxParser.cpp.o
[ 15%] Building CXX object CMakeFiles/nvonnxparser_static.dir/ModelImporter.cpp.o
[ 14%] Building CXX object CMakeFiles/nvonnxparser_static.dir/builtin_op_importers.cpp.o
[ 16%] Building CXX object CMakeFiles/nvonnxparser.dir/ModelImporter.cpp.o
[ 17%] Building CXX object CMakeFiles/nvonnxparser_static.dir/onnx2trt_utils.cpp.o
[ 19%] Building CXX object CMakeFiles/nvonnxparser.dir/NvOnnxParser.cpp.o
[ 20%] Building CXX object CMakeFiles/nvonnxparser.dir/builtin_op_importers.cpp.o
[ 21%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/checker.cc.o
[ 23%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/assertions.cc.o
[ 24%] Building CXX object CMakeFiles/nvonnxparser_static.dir/ShapedWeights.cpp.o
[ 25%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/interned_strings.cc.o
[ 26%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/ir_pb_converter.cc.o
[ 28%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/model_helpers.cc.o
[ 29%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/status.cc.o
[ 30%] Building CXX object CMakeFiles/nvonnxparser_static.dir/ShapeTensor.cpp.o
[ 32%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/attr_proto_util.cc.o
[ 33%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/controlflow/defs.cc.o
[ 34%] Building CXX object CMakeFiles/nvonnxparser.dir/onnx2trt_utils.cpp.o
[ 35%] Building CXX object CMakeFiles/nvonnxparser_static.dir/LoopHelpers.cpp.o
[ 37%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/controlflow/old.cc.o
[ 38%] Building CXX object CMakeFiles/nvonnxparser_static.dir/RNNHelpers.cpp.o
[ 39%] Building CXX object CMakeFiles/nvonnxparser_static.dir/OnnxAttrs.cpp.o
[ 41%] Building CXX object CMakeFiles/nvonnxparser.dir/ShapedWeights.cpp.o
[ 42%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/data_type_utils.cc.o
[ 43%] Building CXX object CMakeFiles/nvonnxparser.dir/ShapeTensor.cpp.o
[ 44%] Building CXX object CMakeFiles/nvonnxparser.dir/LoopHelpers.cpp.o
[ 46%] Building CXX object CMakeFiles/nvonnxparser.dir/RNNHelpers.cpp.o
[ 47%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/function.cc.o
[ 48%] Building CXX object CMakeFiles/nvonnxparser.dir/OnnxAttrs.cpp.o
[ 50%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/generator/defs.cc.o
[ 51%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/generator/old.cc.o
[ 52%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/logical/defs.cc.o
[ 53%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/logical/old.cc.o
[ 55%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/math/defs.cc.o
[ 56%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/math/old.cc.o
[ 57%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/nn/defs.cc.o
[ 58%] Linking CXX static library libnvonnxparser_static.a
[ 58%] Built target nvonnxparser_static
Scanning dependencies of target getSupportedAPITest
[ 60%] Building CXX object CMakeFiles/getSupportedAPITest.dir/getSupportedAPITest.cpp.o
[ 61%] Building CXX object CMakeFiles/getSupportedAPITest.dir/ModelImporter.cpp.o
[ 62%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/nn/old.cc.o
[ 64%] Linking CXX shared library libnvonnxparser.so
[ 65%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/object_detection/defs.cc.o
[ 65%] Built target nvonnxparser
[ 66%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/object_detection/old.cc.o
[ 67%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/quantization/defs.cc.o
[ 69%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/reduction/defs.cc.o
[ 70%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/reduction/old.cc.o
[ 71%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/rnn/defs.cc.o
[ 73%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/rnn/old.cc.o
[ 74%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/schema.cc.o
[ 75%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/sequence/defs.cc.o
[ 76%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor/defs.cc.o
[ 78%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor/old.cc.o
[ 79%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor/utils.cc.o
[ 80%] Linking CXX executable getSupportedAPITest
[ 82%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor_proto_util.cc.o
[ 82%] Built target getSupportedAPITest
[ 83%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor_util.cc.o
[ 84%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/traditionalml/defs.cc.o
[ 85%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/traditionalml/old.cc.o
[ 87%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/onnxifi_utils.cc.o
[ 88%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/optimizer/optimize.cc.o
[ 89%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/optimizer/pass.cc.o
[ 91%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/optimizer/pass_manager.cc.o
[ 92%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/optimizer/pass_registry.cc.o
[ 93%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/shape_inference/implementation.cc.o
[ 94%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/version_converter/convert.cc.o
[ 96%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/version_converter/helper.cc.o
[ 97%] Linking CXX static library libonnx.a
[ 97%] Built target onnx
Scanning dependencies of target onnx2trt
[ 98%] Building CXX object CMakeFiles/onnx2trt.dir/main.cpp.o
[100%] Linking CXX executable onnx2trt
[100%] Built target onnx2trt

After the build you can see the following files in the build directory:

zhihui@zhihui-desktop:~/onnx-tensorrt/build$ ls
CMakeCache.txt           cmake_install.cmake       libnvonnxparser_static.a
CMakeFiles               getSupportedAPITest       onnx2trt
CPackConfig.cmake        libnvonnxparser.so        third_party
CPackSourceConfig.cmake  libnvonnxparser.so.7
Makefile                 libnvonnxparser.so.7.1.0

7. Check whether the compiled onnx2trt executable works:

zhihui@zhihui-desktop:~/onnx-tensorrt/build$ ./onnx2trt
ONNX to TensorRT model parser
Usage: onnx2trt onnx_model.pb
                [-o engine_file.trt]  (output TensorRT engine)
                [-t onnx_model.pbtxt] (output ONNX text file without weights)
                [-T onnx_model.pbtxt] (output ONNX text file with weights)
                [-m onnx_model_out.pb] (output ONNX model)
                [-b max_batch_size (default 32)]
                [-w max_workspace_size_bytes (default 1 GiB)]
                [-d model_data_type_bit_depth] (32 => float32, 16 => float16)
                [-O passes] (optimize onnx model. Argument is a semicolon-separated list of passes)
                [-p] (list available optimization passes and exit)
                [-l] (list layers and their shapes)
                [-g] (debug mode)
                [-F] (optimize onnx model in fixed mode)
                [-v] (increase verbosity)
                [-q] (decrease verbosity)
                [-V] (show version information)
                [-h] (show help)
zhihui@zhihui-desktop:~/onnx-tensorrt/build$
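
Besides the onnx2trt executable, the onnx-tensorrt repository also ships Python bindings (installed separately with python setup.py install in the repo root). A minimal sketch based on the project's README, assuming the bindings are installed and the model only uses ops TensorRT supports (this particular yolov4 export does not, as the next section shows); the 1x416x416x3 input shape is what this yolov4-tiny export expects:

import numpy as np
import onnx
import onnx_tensorrt.backend as backend

# Build a TensorRT engine from the ONNX model and run one dummy inference.
model = onnx.load("saved_model.onnx")
engine = backend.prepare(model, device="CUDA:0")
dummy = np.random.random((1, 416, 416, 3)).astype(np.float32)
outputs = engine.run(dummy)
print([o.shape for o in outputs])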

2.2 Using the onnx2trt tool to convert our ONNX model to a TRT model

Use onnx2trt to convert the ONNX model into a TRT model with the following command:

./onnx2trt ~/yolov4-deepsort/checkpoints/yolov4-tiny-416.onnx -o ~/yolov4-deepsort/checkpoints/saved_model_engine.trt

During the conversion I hit the problem [ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin NonZero, and in the end I was not able to solve it:

zhihui@zhihui-desktop:~/onnx-tensorrt/build$ ./onnx2trt ~/yolov4-deepsort/checkpoints/yolov4-tiny-416.onnx -o ~/yolov4-deepsort/checkpoints/saved_model_engine.trt
----------------------------------------------------------------
Input filename:   /home/zhihui/yolov4-deepsort/checkpoints/yolov4-tiny-416.onnx
ONNX IR version:  0.0.5
Opset version:    10
Producer name:    tf2onnx
Producer version: 1.8.0
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
Parsing model
[2020-10-28 07:27:28 WARNING] [TRT]/home/zhihui/onnx-tensorrt/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[2020-10-28 07:27:28   ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin NonZero version 1
While parsing node number 128 [NonZero -> "StatefulPartitionedCall/functional_1/tf_op_layer_Where/Where:0"]:
ERROR: /home/zhihui/onnx-tensorrt/builtin_op_importers.cpp:3750 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
zhihui@zhihui-desktop:~/onnx-tensorrt/build$ 

Converting with TensorRT's trtexec tool fails with the same error: INVALID_ARGUMENT: getPluginCreator could not find plugin NonZero version 1

zhihui@zhihui-desktop:/usr/src/tensorrt/bin$ ./trtexec --onnx=/home/zhihui/yolov4-deepsort/checkpoints/saved_model.onnx --explicitBatch --saveEngine=/home/zhihui/yolov4-deepsort/checkpoints/saved_model.trt --fp16
&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=/home/zhihui/yolov4-deepsort/checkpoints/saved_model.onnx --explicitBatch --saveEngine=/home/zhihui/yolov4-deepsort/checkpoints/saved_model.trt --fp16
[10/28/2020-11:23:25] [I] === Model Options ===
[10/28/2020-11:23:25] [I] Format: ONNX
[10/28/2020-11:23:25] [I] Model: /home/zhihui/yolov4-deepsort/checkpoints/saved_model.onnx
[10/28/2020-11:23:25] [I] Output:
[10/28/2020-11:23:25] [I] === Build Options ===
[10/28/2020-11:23:25] [I] Max batch: explicit
[10/28/2020-11:23:25] [I] Workspace: 16 MB
[10/28/2020-11:23:25] [I] minTiming: 1
[10/28/2020-11:23:25] [I] avgTiming: 8
[10/28/2020-11:23:25] [I] Precision: FP32+FP16
[10/28/2020-11:23:25] [I] Calibration:
[10/28/2020-11:23:25] [I] Safe mode: Disabled
[10/28/2020-11:23:25] [I] Save engine: /home/zhihui/yolov4-deepsort/checkpoints/saved_model.trt
[10/28/2020-11:23:25] [I] Load engine:
[10/28/2020-11:23:25] [I] Builder Cache: Enabled
[10/28/2020-11:23:25] [I] NVTX verbosity: 0
[10/28/2020-11:23:25] [I] Inputs format: fp32:CHW
[10/28/2020-11:23:25] [I] Outputs format: fp32:CHW
[10/28/2020-11:23:25] [I] Input build shapes: model
[10/28/2020-11:23:25] [I] Input calibration shapes: model
[10/28/2020-11:23:25] [I] === System Options ===
[10/28/2020-11:23:25] [I] Device: 0
[10/28/2020-11:23:25] [I] DLACore:
[10/28/2020-11:23:25] [I] Plugins:
[10/28/2020-11:23:25] [I] === Inference Options ===
[10/28/2020-11:23:25] [I] Batch: Explicit
[10/28/2020-11:23:25] [I] Input inference shapes: model
[10/28/2020-11:23:25] [I] Iterations: 10
[10/28/2020-11:23:25] [I] Duration: 3s (+ 200ms warm up)
[10/28/2020-11:23:25] [I] Sleep time: 0ms
[10/28/2020-11:23:25] [I] Streams: 1
[10/28/2020-11:23:25] [I] ExposeDMA: Disabled
[10/28/2020-11:23:25] [I] Spin-wait: Disabled
[10/28/2020-11:23:25] [I] Multithreading: Disabled
[10/28/2020-11:23:25] [I] CUDA Graph: Disabled
[10/28/2020-11:23:25] [I] Skip inference: Disabled
[10/28/2020-11:23:25] [I] Inputs:
[10/28/2020-11:23:25] [I] === Reporting Options ===
[10/28/2020-11:23:25] [I] Verbose: Disabled
[10/28/2020-11:23:25] [I] Averages: 10 inferences
[10/28/2020-11:23:25] [I] Percentile: 99
[10/28/2020-11:23:25] [I] Dump output: Disabled
[10/28/2020-11:23:25] [I] Profile: Disabled
[10/28/2020-11:23:25] [I] Export timing to JSON file:
[10/28/2020-11:23:25] [I] Export output to JSON file:
[10/28/2020-11:23:25] [I] Export profile to JSON file:
[10/28/2020-11:23:25] [I]
----------------------------------------------------------------
Input filename:   /home/zhihui/yolov4-deepsort/checkpoints/saved_model.onnx
ONNX IR version:  0.0.5
Opset version:    10
Producer name:    tf2onnx
Producer version: 1.7.1
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
[10/28/2020-11:23:29] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[10/28/2020-11:23:29] [I] [TRT] ModelImporter.cpp:135: No importer registered for op: NonZero. Attempting to import as plugin.
[10/28/2020-11:23:29] [I] [TRT] builtin_op_importers.cpp:3659: Searching for plugin: NonZero, plugin_version: 1, plugin_namespace:
[10/28/2020-11:23:29] [E] [TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin NonZero version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[10/28/2020-11:23:29] [E] Failed to parse onnx file
[10/28/2020-11:23:29] [E] Parsing model failed
[10/28/2020-11:23:29] [E] Engine creation failed
[10/28/2020-11:23:29] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/zhihui/yolov4-deepsort/checkpoints/saved_model.onnx --explicitBatch --saveEngine=/home/zhihui/yolov4-deepsort/checkpoints/saved_model.trt --fp16
zhihui@zhihui-desktop:/usr/src/tensorrt/bin$
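
Both tools fail the same way: this TensorRT version has no NonZero implementation, and the op comes from the model's StatefulPartitionedCall/functional_1/tf_op_layer_Where node, i.e. a tf.where in the post-processing. A small illustrative script (not a fix) to list exactly which nodes introduce the unsupported op:

import onnx

model = onnx.load("saved_model.onnx")

# List every NonZero node so the offending post-processing can be
# tracked down and, ideally, kept out of the exported graph.
for node in model.graph.node:
    if node.op_type == "NonZero":
        print(node.name, list(node.input), "->", list(node.output))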
