1. TensorFlow to ONNX

To train a model with TensorFlow and then export it to ONNX format, you generally go through the following steps:

  • Training
  • Freezing the graph into a PB file (Graph Freezing)
  • Model format conversion (Model Conversion)

1.1 Training

To successfully convert a TF model into an ONNX model, you need to prepare three pieces of information:

  1. The TF graph definition, i.e. the network topology only (without weights). To obtain it, insert the following code into the inference script:

    with open("net.proto", "wb") as file:
        graph = tf.get_default_graph().as_graph_def(add_shapes=True)
        file.write(graph.SerializeToString())
    
  2. Shape information. By default, the as_graph_def() method does not export shape information; force it to by passing add_shapes=True, as in the code above.

  3. Checkpoint files (ckpt), i.e. the weight files; TensorFlow normally exports weights as checkpoint files.

In short, after the steps above we end up with a *.proto file recording the network topology and shape information, plus checkpoint files recording the weights.
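
Putting the pieces together, here is a minimal sketch of exporting both artifacts from a TF 1.x training session; the session variable sess is an assumption from your own training code:

import tensorflow as tf

# assumption: `sess` is the tf.Session used during training (TF 1.x style)
with open("net.proto", "wb") as file:
    graph = tf.get_default_graph().as_graph_def(add_shapes=True)
    file.write(graph.SerializeToString())

saver = tf.train.Saver()
saver.save(sess, "./ckpt/model.ckpt")  # writes the checkpoint files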

1.2 Converting to a PB File (Graph Freezing)

As noted above, the standard TF export stores the network definition and the weights in separate files, which is inconvenient for deployment. The official Freeze Graph mechanism packs all model information into a single *.pb file.

The official tool for this is freeze_graph. It is usually added to the bin directory on your PATH when TensorFlow is installed; if you cannot find it, look for tensorflow/python/tools/freeze_graph.py in the TensorFlow source, or invoke the module directly from the command line.

Examples follow; if there are multiple output nodes, separate them with commas:

# 1. Invoke the tool directly
freeze_graph --input_graph=/home/mnist-tf/graph.proto \
    --input_checkpoint=/home/mnist-tf/ckpt/model.ckpt \
    --output_graph=/tmp/frozen_graph.pb \
    --output_node_names=fc2/add \
    --input_binary=True

# 2. Invoke the module
python -m tensorflow.python.tools.freeze_graph \
    --input_graph=my_checkpoint_dir/graphdef.pb \
    --input_binary=true \
    --output_node_names=output \
    --input_checkpoint=my_checkpoint_dir \
    --output_graph=tests/models/fc-layers/frozen.pb

A tricky point is that you need to know the exact name of the output node. My approach is to list all nodes via tf.get_default_graph().as_graph_def().node and then look up the output node name among them:

print([node.name for node in tf.get_default_graph().as_graph_def().node])

1.3 Model Format Conversion (Model Conversion)

Use the tf2onnx tool provided by https://github.com/onnx/tensorflow-onnx, for example:

python -m tf2onnx.convert \
    --input tests/models/fc-layers/frozen.pb \
    --inputs X:0 \
    --outputs output:0 \
    --output tests/models/fc-layers/model.onnx \
    --verbose

Model input and output names must use the node_name:port_id format (e.g. X:0), otherwise the conversion will fail later on.
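
To double-check that the converted model exposes the expected inputs and outputs, a quick sketch using the onnx Python package (the path matches the command above):

import onnx

model = onnx.load("tests/models/fc-layers/model.onnx")
print([i.name for i in model.graph.input])   # graph input names
print([o.name for o in model.graph.output])  # graph output names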

And we're done.

2. PyTorch to ONNX

When loading a model, PyTorch needs the file (.py) that defines the model class.

Method 1

python pytorch2onnx.py ptmodel_path ptmodel_class_py_path insize_n insize_c insize_w insize_h saved_onnx_path_name

The core export logic looks like this:

################# export ###############
import torch
import onnx

output_onnx = 'mobilenetv3.onnx'
x = origin_im_tensor  # an example input tensor prepared earlier
print("==> Exporting model to ONNX format at '{}'".format(output_onnx))
input_names = ["input0"]
output_names = ["output0"]
torch.onnx.export(res_model, x, output_onnx, export_params=True,
                  verbose=False, input_names=input_names, output_names=output_names)

print("==> Loading and checking exported model from '{}'".format(output_onnx))
onnx_model = onnx.load(output_onnx)
onnx.checker.check_model(onnx_model)  # raises an exception on error
print("==> Passed")

Method 2

Step 1: Determine the ONNX version your model needs to be in

This depends on which releases of Windows you are targeting. Newer releases of Windows support newer versions of ONNX. This page lists the opset versions supported by different releases of Windows. ONNX 1.2 (opset 7) is the lowest one supported and will work on all versions of Windows ML. Newer versions of ONNX support more types of models.

Step 2: Export your PyTorch model to that ONNX version

PyTorch’s ONNX export support is documented here. As of PyTorch 1.2, the torch.onnx.export function takes a parameter that lets you specify the ONNX opset version.

import torch
import torchvision

dummy_in = torch.randn(10, 3, 224, 224)
model = torchvision.models.resnet18(pretrained=True)

in_names = ["actual_input_1"] + ["learned_%d" % i for i in range(16)]
out_names = ["output1"]

torch.onnx.export(model, dummy_in, "resnet18.onnx",
                  input_names=in_names, output_names=out_names,
                  opset_version=7, verbose=True)
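
To confirm the exported file actually uses the opset you asked for, a small check with the onnx package:

import onnx

m = onnx.load("resnet18.onnx")
print(m.opset_import)  # lists the opset version(s) the model imports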

Step 3: Integrate the ONNX model into your Windows app

Follow the tutorials and documentation to start using the model in your application. You can code directly against the Windows ML APIs or use the mlgen tool to automatically generate wrapper classes for you.

ref:

https://github.com/onnx/tutorials/blob/master/tutorials/ExportModelFromPyTorchForWinML.md
https://github.com/onnx/tutorials/blob/master/tutorials/PytorchOnnxExport.ipynb

3. Simplifying & Optimizing ONNX

pip3 install onnx-simplifier
python3 -m onnxsim mobilenetv3.onnx mobilenetv3-sim.onnx
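
Besides the command line, onnx-simplifier also exposes a Python API; a minimal sketch:

import onnx
from onnxsim import simplify

model = onnx.load("mobilenetv3.onnx")
model_simp, check = simplify(model)
assert check, "simplified ONNX model could not be validated"
onnx.save(model_simp, "mobilenetv3-sim.onnx")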

ref:
https://github.com/daquexian/onnx-simplifier
https://github.com/onnx/optimizer

4. sklearn to ONNX

https://github.com/onnx/sklearn-onnx

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
clr = RandomForestClassifier()
clr.fit(X_train, y_train)

# Convert into ONNX format
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

initial_type = [('float_input', FloatTensorType([None, 4]))]
onx = convert_sklearn(clr, initial_types=initial_type)
with open("rf_iris.onnx", "wb") as f:
    f.write(onx.SerializeToString())

# Compute the prediction with ONNX Runtime
import onnxruntime as rt
import numpy

sess = rt.InferenceSession("rf_iris.onnx")
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name
pred_onx = sess.run([label_name], {input_name: X_test.astype(numpy.float32)})[0]
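
As a sanity check, the ONNX Runtime predictions can be compared against the original sklearn model; a small sketch reusing clr, X_test, and pred_onx from above:

import numpy as np

pred_skl = clr.predict(X_test)
print("agreement with sklearn:", np.mean(pred_onx == pred_skl))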

ref: http://onnx.ai/sklearn-onnx/index.html

5. TF to ONNX

Install from PyPI:

pip install -U tf2onnx

python -m tf2onnx.convert --saved-model tensorflow-model-path --output model.onnx
The above command uses a default of 8 for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset then you can provide the --opset argument to the command. If you are unsure about which opset to use, refer to the ONNX operator documentation.

python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 10 --output model.onnx

For checkpoint format:

python -m tf2onnx.convert --checkpoint tensorflow-model-meta-file-path --output model.onnx --inputs input0:0,input1:0 --outputs output0:0

For graphdef format:

python -m tf2onnx.convert --graphdef tensorflow-model-graphdef-file --output model.onnx --inputs input0:0,input1:0 --outputs output0:0

5.1 Step 1 - Get TensorFlow model

TensorFlow uses several file formats to represent a model, such as checkpoint files, a graph with weights (called a frozen graph below), and saved_model, and it has APIs to generate each of these files.

tensorflow-onnx accepts all three formats. The "saved_model" format should be preferred, since it does not require the user to specify the graph's input and output names.

import os
import shutil
import tensorflow as tf
from assets.tensorflow_to_onnx_example import create_and_train_mnist

def save_model_to_saved_model(sess, input_tensor, output_tensor):
    from tensorflow.saved_model import simple_save
    save_path = r"./output/saved_model"
    if os.path.exists(save_path):
        shutil.rmtree(save_path)
    simple_save(sess, save_path,
                {input_tensor.name: input_tensor},
                {output_tensor.name: output_tensor})

print("please wait for a while, because the script will train MNIST from scratch")
tf.reset_default_graph()
sess_tf, saver, input_tensor, output_tensor = create_and_train_mnist()
print("save tensorflow in format \"saved_model\"")
save_model_to_saved_model(sess_tf, input_tensor, output_tensor)

Output:

please wait for a while, because the script will train MNIST from scratch
Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz
step 0, training accuracy 0.18
step 1000, training accuracy 0.98
step 2000, training accuracy 0.94
step 3000, training accuracy 1
step 4000, training accuracy 1
test accuracy 0.976
save tensorflow in format "saved_model"

5.2 Step 2 - Convert to ONNX

tensorflow-onnx has several entry points for converting the different TensorFlow formats; this section covers "saved_model" only.

# generating mnist.onnx using saved_model
!python -m tf2onnx.convert \
    --saved-model ./output/saved_model \
    --output ./output/mnist1.onnx \
    --opset 7

Output:

2019-06-17 07:22:03,871 - INFO - Using tensorflow=1.12.0, onnx=1.5.0, tf2onnx=1.5.1/0c735a
2019-06-17 07:22:03,871 - INFO - Using opset <onnx, 7>
2019-06-17 07:22:03,989 - INFO -
2019-06-17 07:22:04,012 - INFO - Optimizing ONNX model
2019-06-17 07:22:04,029 - INFO - After optimization: Add -2 (4->2), Identity -3 (3->0), Transpose -8 (9->1)
2019-06-17 07:22:04,031 - INFO -
2019-06-17 07:22:04,032 - INFO - Successfully converted TensorFlow model ./output/saved_model to ONNX
2019-06-17 07:22:04,044 - INFO - ONNX model is saved at ./output/mnist1.onnx

5.3 Step 3 - Validate

Several frameworks can run models in ONNX format; here ONNX Runtime, open-sourced by Microsoft, is used to make sure the generated ONNX graph behaves well. The input "image.npz" is an image of a handwritten "7", so the model's expected classification result is "7".

import numpy as np
import onnxruntime as ort

img = np.load("./assets/image.npz").reshape([1, 784])
sess_ort = ort.InferenceSession("./output/mnist1.onnx")
res = sess_ort.run(output_names=[output_tensor.name], input_feed={input_tensor.name: img})
print("the expected result is \"7\"")
print("the digit is classified as \"%s\" in ONNXRuntime" % np.argmax(res))

Output:

the expected result is "7"
the digit is classified as "7" in ONNXRuntime

ref:
https://github.com/onnx/tensorflow-onnx
https://github.com/onnx/tutorials/blob/master/tutorials/TensorflowToOnnx-1.ipynb

6. ONNX to TensorRT

ONNX models can be converted to serialized TensorRT engines using the onnx2trt executable:
onnx2trt my_model.onnx -o my_engine.trt

ONNX models can also be converted to human-readable text:
onnx2trt my_model.onnx -t my_model.onnx.txt

ONNX models can also be optimized by ONNX's optimization libraries (added by dsandler). To optimize an ONNX model and output a new one, use -m to specify the output model name and -O to specify a semicolon-separated list of optimization passes to apply:
onnx2trt my_model.onnx -O "pass_1;pass_2;pass_3" -m my_model_optimized.onnx

List all available optimization passes by running:
onnx2trt -p

See more usage information by running:
onnx2trt -h

The ONNX-TensorRT backend can be installed by running:
python3 setup.py install

6.1 Running ONNX with the TensorRT Backend

import onnx
import onnx_tensorrt.backend as backend
import numpy as np

model = onnx.load("/path/to/model.onnx")
engine = backend.prepare(model, device='CUDA:1')
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)

ref:
https://github.com/onnx/onnx-tensorrt

7. Run ONNX with TensorFlow

  • Install:
    pip install onnx-tf
    https://github.com/onnx/onnx-tensorflow
  • Converting models from ONNX to TensorFlow (a Python export sketch follows this list):
    From ONNX to TensorFlow: onnx-tf convert -i /path/to/input.onnx -o /path/to/output

    import onnx
    from onnx_tf.backend import prepare

    onnx_model = onnx.load("input_path")  # load the ONNX model
    output = prepare(onnx_model).run(input)  # run the loaded model on `input`
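
Complementing the onnx-tf convert command above, the same ONNX-to-TensorFlow conversion can also be done from Python via prepare(...).export_graph. A minimal sketch; the paths are placeholders:

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("input_path")  # placeholder path to the ONNX file
tf_rep = prepare(onnx_model)          # wrap the model in the TensorFlow backend
tf_rep.export_graph("output_path")    # write the TensorFlow graph to disk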
    

ONNX model conversion code:

https://github.com/onnx/onnx
More tutorials on ONNX installation, deployment, and usage:
https://github.com/onnx/tutorials

  • onnx models:
    https://github.com/onnx/models

ref:

https://blog.csdn.net/sinat_37532065/article/details/106664724
https://github.com/onnx/tutorials/blob/master/tutorials/OnnxTensorflowExport.ipynb
https://www.jianshu.com/p/8ec3a6c9c453
https://github.com/onnx/tutorials/blob/master/tutorials/PytorchOnnxExport.ipynb
