Error when converting an ONNX model to a TensorRT .trt engine: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64
While converting an ONNX model to a TensorRT .trt engine, I hit this message: [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Contents:
- 1 Cause analysis
- 2 Solutions
- 2.1 Solution 1 (verified working)
- 2.2 Solution 2: regenerate the ONNX model with INT32 weights (not yet tried)
My environment:
- OS: Ubuntu 18.04.1
- CUDA: 10.2.89
- cuDNN: 7.6.5
- torch: 1.5.0
- torchvision: 0.6.0
- mmcv: 0.5.5
- Project code: mmdetection v2.0.0 (officially released on 2020-05-06)
- TensorRT: 7.0.0.11
- uff: 0.6.5
1 Cause analysis
I ran into this while converting an mmdetection model: the export to ONNX succeeded, but converting that ONNX model to a .trt engine failed. The message "Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32." tells us that:
- the exported ONNX model stores some weights as INT64;
- TensorRT does not natively support INT64;
- TensorRT does support INT32, so one option is to convert the ONNX model's weights to INT32 and run the conversion again.
Note that these INT64 lines are only warnings ([W]); as the log below shows, the conversion actually fails later, at an Upsample node.
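The cast-down is usually harmless: the INT64 tensors PyTorch exports are mostly shape and index constants whose values fit easily in 32 bits. A small numpy sketch (the function name is my own) of the check that decides whether the cast loses information:

```python
import numpy as np

INT32_MIN = np.iinfo(np.int32).min
INT32_MAX = np.iinfo(np.int32).max

def fits_in_int32(arr):
    """Return True if every value survives a cast from int64 to int32."""
    a = np.asarray(arr)
    return bool((a >= INT32_MIN).all() and (a <= INT32_MAX).all())

# Typical shape/axis constants exported by PyTorch fit comfortably:
print(fits_in_int32(np.array([1, 3, 800, 1333], dtype=np.int64)))  # True
# Only values outside the int32 range would actually lose information:
print(fits_in_int32(np.array([2**40], dtype=np.int64)))            # False
```

So the repeated warning alone is not what breaks the build.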
The full trtexec output:
(mmdetection) shl@zfcv:~/TensorRT-7.0.0.11/bin$ ./trtexec --onnx=retinate_hat_hair_beard.onnx --saveEngine=retinate_hat_hair_beard.trt --device=1
&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=retinate_hat_hair_beard.onnx --saveEngine=retinate_hat_hair_beard.trt --device=1
[07/31/2020-13:56:39] [I] === Model Options ===
[07/31/2020-13:56:39] [I] Format: ONNX
[07/31/2020-13:56:39] [I] Model: retinate_hat_hair_beard.onnx
[07/31/2020-13:56:39] [I] Output:
[07/31/2020-13:56:39] [I] === Build Options ===
[07/31/2020-13:56:39] [I] Max batch: 1
[07/31/2020-13:56:39] [I] Workspace: 16 MB
[07/31/2020-13:56:39] [I] minTiming: 1
[07/31/2020-13:56:39] [I] avgTiming: 8
[07/31/2020-13:56:39] [I] Precision: FP32
[07/31/2020-13:56:39] [I] Calibration:
[07/31/2020-13:56:39] [I] Safe mode: Disabled
[07/31/2020-13:56:39] [I] Save engine: retinate_hat_hair_beard.trt
[07/31/2020-13:56:39] [I] Load engine:
[07/31/2020-13:56:39] [I] Inputs format: fp32:CHW
[07/31/2020-13:56:39] [I] Outputs format: fp32:CHW
[07/31/2020-13:56:39] [I] Input build shapes: model
[07/31/2020-13:56:39] [I] === System Options ===
[07/31/2020-13:56:39] [I] Device: 1
[07/31/2020-13:56:39] [I] DLACore:
[07/31/2020-13:56:39] [I] Plugins:
[07/31/2020-13:56:39] [I] === Inference Options ===
[07/31/2020-13:56:39] [I] Batch: 1
[07/31/2020-13:56:39] [I] Iterations: 10
[07/31/2020-13:56:39] [I] Duration: 3s (+ 200ms warm up)
[07/31/2020-13:56:39] [I] Sleep time: 0ms
[07/31/2020-13:56:39] [I] Streams: 1
[07/31/2020-13:56:39] [I] ExposeDMA: Disabled
[07/31/2020-13:56:39] [I] Spin-wait: Disabled
[07/31/2020-13:56:39] [I] Multithreading: Disabled
[07/31/2020-13:56:39] [I] CUDA Graph: Disabled
[07/31/2020-13:56:39] [I] Skip inference: Disabled
[07/31/2020-13:56:39] [I] Input inference shapes: model
[07/31/2020-13:56:39] [I] Inputs:
[07/31/2020-13:56:39] [I] === Reporting Options ===
[07/31/2020-13:56:39] [I] Verbose: Disabled
[07/31/2020-13:56:39] [I] Averages: 10 inferences
[07/31/2020-13:56:39] [I] Percentile: 99
[07/31/2020-13:56:39] [I] Dump output: Disabled
[07/31/2020-13:56:39] [I] Profile: Disabled
[07/31/2020-13:56:39] [I] Export timing to JSON file:
[07/31/2020-13:56:39] [I] Export output to JSON file:
[07/31/2020-13:56:39] [I] Export profile to JSON file:
[07/31/2020-13:56:39] [I]
----------------------------------------------------------------
Input filename: retinate_hat_hair_beard.onnx
ONNX IR version: 0.0.6
Opset version: 9
Producer name: pytorch
Producer version: 1.5
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[07/31/2020-13:56:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
... (the same warning is repeated another 54 times) ...
While parsing node number 191 [Upsample]:
ERROR: builtin_op_importers.cpp:3240 In function importUpsample:
[8] Assertion failed: scales_input.is_weights()
[07/31/2020-13:56:40] [E] Failed to parse onnx file
[07/31/2020-13:56:40] [E] Parsing model failed
[07/31/2020-13:56:40] [E] Engine creation failed
[07/31/2020-13:56:40] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=retinate_hat_hair_beard.onnx --saveEngine=retinate_hat_hair_beard.trt --device=1
2 Solutions
2.1 Solution 1 (verified working)
The graph of the exported .onnx model is probably too complex, so let's simplify it first.
1. Install onnx-simplifier:
pip install onnx-simplifier
2. Convert the previously exported ONNX model into a simpler one:
python -m onnxsim retinate_hat_hair_beard.onnx retinate_hat_hair_beard_sim.onnx
3. Then convert the simplified ONNX model to a TensorRT .trt engine:
./trtexec --onnx=retinate_hat_hair_beard_sim.onnx --saveEngine=retinate_hat_hair_beard_sim.trt --device=1
(mmdetection) shl@zfcv:~/TensorRT-7.0.0.11/bin$ ./trtexec --onnx=retinate_hat_hair_beard_sim.onnx --saveEngine=retinate_hat_hair_beard_sim.trt --device=1
&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=retinate_hat_hair_beard_sim.onnx --saveEngine=retinate_hat_hair_beard_sim.trt --device=1
[07/31/2020-14:15:14] [I] === Model Options ===
[07/31/2020-14:15:14] [I] Format: ONNX
[07/31/2020-14:15:14] [I] Model: retinate_hat_hair_beard_sim.onnx
[07/31/2020-14:15:14] [I] Output:
[07/31/2020-14:15:14] [I] === Build Options ===
[07/31/2020-14:15:14] [I] Max batch: 1
[07/31/2020-14:15:14] [I] Workspace: 16 MB
[07/31/2020-14:15:14] [I] minTiming: 1
[07/31/2020-14:15:14] [I] avgTiming: 8
[07/31/2020-14:15:14] [I] Precision: FP32
[07/31/2020-14:15:14] [I] Calibration:
[07/31/2020-14:15:14] [I] Safe mode: Disabled
[07/31/2020-14:15:14] [I] Save engine: retinate_hat_hair_beard_sim.trt
[07/31/2020-14:15:14] [I] Load engine:
[07/31/2020-14:15:14] [I] Inputs format: fp32:CHW
[07/31/2020-14:15:14] [I] Outputs format: fp32:CHW
[07/31/2020-14:15:14] [I] Input build shapes: model
[07/31/2020-14:15:14] [I] === System Options ===
[07/31/2020-14:15:14] [I] Device: 1
[07/31/2020-14:15:14] [I] DLACore:
[07/31/2020-14:15:14] [I] Plugins:
[07/31/2020-14:15:14] [I] === Inference Options ===
[07/31/2020-14:15:14] [I] Batch: 1
[07/31/2020-14:15:14] [I] Iterations: 10
[07/31/2020-14:15:14] [I] Duration: 3s (+ 200ms warm up)
[07/31/2020-14:15:14] [I] Sleep time: 0ms
[07/31/2020-14:15:14] [I] Streams: 1
[07/31/2020-14:15:14] [I] ExposeDMA: Disabled
[07/31/2020-14:15:14] [I] Spin-wait: Disabled
[07/31/2020-14:15:14] [I] Multithreading: Disabled
[07/31/2020-14:15:14] [I] CUDA Graph: Disabled
[07/31/2020-14:15:14] [I] Skip inference: Disabled
[07/31/2020-14:15:14] [I] Input inference shapes: model
[07/31/2020-14:15:14] [I] Inputs:
[07/31/2020-14:15:14] [I] === Reporting Options ===
[07/31/2020-14:15:14] [I] Verbose: Disabled
[07/31/2020-14:15:14] [I] Averages: 10 inferences
[07/31/2020-14:15:14] [I] Percentile: 99
[07/31/2020-14:15:14] [I] Dump output: Disabled
[07/31/2020-14:15:14] [I] Profile: Disabled
[07/31/2020-14:15:14] [I] Export timing to JSON file:
[07/31/2020-14:15:14] [I] Export output to JSON file:
[07/31/2020-14:15:14] [I] Export profile to JSON file:
[07/31/2020-14:15:14] [I]
----------------------------------------------------------------
Input filename: retinate_hat_hair_beard_sim.onnx
ONNX IR version: 0.0.6
Opset version: 9
Producer name: pytorch
Producer version: 1.5
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[07/31/2020-14:15:16] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[07/31/2020-14:15:57] [I] [TRT] Detected 1 inputs and 10 output network tensors.
[07/31/2020-14:16:03] [W] [TRT] Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
Cuda failure: out of memory
Aborted (core dumped)
After the conversion you can see that the .trt engine file has been generated; the CUDA out-of-memory error above occurred during the timed inference run, after the engine had already been built and saved.
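Why does simplification help? The fatal error was the assertion `scales_input.is_weights()` at the Upsample node: PyTorch exports the interpolation scales as a small subgraph computed from tensor shapes, while the TensorRT ONNX parser requires them to be constant weights. onnx-simplifier constant-folds such subgraphs so the scales become plain constants. A toy sketch of constant folding (the graph representation here is my own, not the onnx API):

```python
# Toy graph: each node maps its name to (op, input_names); known constant
# values live in a separate dict. This mimics, in miniature, what
# onnx-simplifier does: evaluate any node whose inputs are all constants
# and replace the node with the resulting value.

def constant_fold(graph, constants):
    """Fold every node whose inputs are all known constants."""
    folded = dict(constants)
    for name, (op, inputs) in graph.items():
        if all(i in folded for i in inputs):
            vals = [folded[i] for i in inputs]
            if op == "mul":
                folded[name] = vals[0] * vals[1]
            elif op == "div":
                folded[name] = vals[0] / vals[1]
    return folded

# 'scales' is computed as target_size / input_size -- a tiny subgraph
# that folding turns into the constant the TensorRT parser expects.
graph = {"scales": ("div", ["target_size", "input_size"])}
constants = {"target_size": 64.0, "input_size": 32.0}
print(constant_fold(graph, constants)["scales"])  # 2.0
```

After folding, the Upsample node's scales input is an ordinary constant, so `scales_input.is_weights()` holds and the parser gets past node 191.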
2.2 Solution 2: regenerate the ONNX model with INT32 weights (not yet tried)
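I have not tried this route yet, but the usual approach is to post-process the exported model, casting every INT64 initializer down to INT32 when its values fit. The sketch below shows the cast logic on a plain dict of numpy arrays; a real implementation would walk `graph.initializer` with the `onnx` package instead of a dict, so treat the names here as illustrative only:

```python
import numpy as np

def cast_initializers_to_int32(initializers):
    """Cast int64 arrays to int32 where the values fit; leave the rest alone."""
    info = np.iinfo(np.int32)
    out = {}
    for name, arr in initializers.items():
        if arr.dtype == np.int64 and arr.min() >= info.min and arr.max() <= info.max:
            out[name] = arr.astype(np.int32)
        else:
            out[name] = arr  # float weights and oversized ints are untouched
    return out

# Hypothetical initializers: a shape constant (int64) and a float weight.
weights = {
    "shape_const": np.array([1, 256, 7, 7], dtype=np.int64),
    "conv_weight": np.ones((4, 4), dtype=np.float32),
}
casted = cast_initializers_to_int32(weights)
print(casted["shape_const"].dtype)  # int32
```

With the INT64 initializers gone, TensorRT no longer needs to cast anything down itself, and the warnings disappear.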