ONNX MLIR Application Example (with source links)
The Open Neural Network Exchange (ONNX) implemented in MLIR (http://onnx.ai/onnx-mlir/).

Prebuilt Containers
An easy way to get started with ONNX-MLIR is to use one of the prebuilt Docker images. These images are produced by successful merge builds on the trunk, so the latest image represents the tip of the trunk. Docker Hub currently hosts Release- and Debug-mode images for amd64, ppc64le, and s390x under onnxmlirczar/onnx-mlir and onnxmlirczar/onnx-mlir-dev, respectively. To use one of these images, you can pull it directly from Docker Hub, start a container and run an interactive bash shell in it, or use it as the base image in a Dockerfile. The onnx-mlir image contains only the built compiler and can be used to compile models immediately, without any installation. A Python convenience script is provided to run ONNX-MLIR inside a Docker container as if the ONNX-MLIR compiler were running directly on the host. For example,

docker/onnx-mlir.py --EmitLib mnist/model.onnx

505a5a6fb7d0: Pulling fs layer
505a5a6fb7d0: Verifying Checksum
505a5a6fb7d0: Download complete
505a5a6fb7d0: Pull complete
Shared library model.so has been compiled.
If the onnx-mlir image is not available locally, the script pulls the image, mounts the directory containing model.onnx into the container, and compiles and generates model.so in that same directory.
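For reference, a minimal sketch of doing this manually with standard Docker commands (the interactive-shell entrypoint override is an assumption, not taken from the project docs):

docker pull onnxmlirczar/onnx-mlir
docker run -it --rm --entrypoint /bin/bash onnxmlirczar/onnx-mlir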
The onnx-mlir-dev image contains the full build tree, including the prerequisites and a clone of the source code. The source can be modified and onnx-mlir rebuilt inside the container, so the image can serve as a development environment. It is also possible to attach vscode to a running container. In the docs folder you can find an example Dockerfile useful for development, along with vscode configuration files. If the workspace directory or the vscode files do not exist in the directory where you run docker build, the lines referencing them should be commented out or removed.
The example Dockerfile is shown here.
FROM onnxmlirczar/onnx-mlir-dev
WORKDIR /workdir
ENV HOME=/workdir

# 1) Install packages.

ENV PATH=$PATH:/workdir/bin
RUN apt-get update
RUN apt-get install -y python-numpy
RUN apt-get install -y python3-pip
RUN python -m pip install --upgrade pip
RUN apt-get install -y gdb
RUN apt-get install -y lldb
RUN apt-get install -y emacs
RUN apt-get install -y vim

# 2) Install optional packages; uncomment/add as you see fit.

RUN apt-get install -y valgrind

RUN apt-get install -y libeigen3-dev

RUN apt-get install -y clang-format

RUN python -m pip install wheel

RUN python -m pip install numpy

RUN python -m pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html

RUN git clone https://github.com/onnx/tutorials.git

# Install clang-12.

RUN apt-get install -y lsb-release wget software-properties-common

RUN bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"

# 3) When using vscode, copy your .vscode into the Dockerfile dir and
# uncomment the two lines below.

# WORKDIR /workdir/.vscode
# ADD .vscode /workdir/.vscode

# 4) When using a personal workspace folder, set your workspace sub-directory
# in the Dockerfile dir and uncomment the two lines below.

# WORKDIR /workdir/workspace
# ADD workspace /workdir/workspace

# 5) Fix git by reattaching HEAD and making git see branches other than main.

WORKDIR /workdir/onnx-mlir
RUN git checkout main
RUN git fetch --unshallow

# 6) Set the PATH environment vars for make/debug mode. Replace Debug
# with Release in the PATH below when using Release mode.

WORKDIR /workdir
ENV MLIR_DIR=/workdir/llvm-project/build/lib/cmake/mlir
ENV NPROC=4
ENV PATH=$PATH:/workdir/onnx-mlir/build/Debug/bin/:/workdir/onnx-mlir/build/Debug/lib:/workdir/llvm-project/build/bin
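A minimal sketch of building and entering an image based on this Dockerfile (the tag name my-onnx-mlir-dev is illustrative):

docker build -t my-onnx-mlir-dev .
docker run -it my-onnx-mlir-dev /bin/bash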
Prerequisites
gcc >= 6.4
libprotoc >= 3.11.0
cmake >= 3.15.4
ninja >= 1.10.2
GCC can be found here: https://gcc.gnu.org/install/. Alternatively, if you have Homebrew (https://docs.brew.sh/Installation), you can install GCC with brew install gcc. To check the installed gcc version, run gcc --version.
Install libprotoc, or, if you have Homebrew, run brew install protobuf. To check the installed version, run protoc --version.
CMake can be found here: https://cmake.org/download/. However, to use CMake from the command line you need to follow the "How to Install For Command Line Use" tutorial, found in CMake under Tools > How to Install For Command Line Use. To check your version, look under CMake > About in the desktop version, or run cmake --version.
Instructions for installing Ninja can be found here: https://ninja-build.org/. Alternatively, with Homebrew you can run brew install ninja. To check the version, run ninja --version.
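To verify all four prerequisites at once:

gcc --version
protoc --version
cmake --version
ninja --version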
At any given time, ONNX-MLIR depends on a specific commit of the LLVM project that has been shown to work with it. Maintainers need to migrate to newer LLVM levels periodically, which requires updating the commit string in utils/clone-mlir.sh. As a consequence of such a change, the TravisCI build will fail until the Docker images containing the prerequisites are rebuilt. A GitHub workflow rebuilds the images for the amd64 architecture; the ppc64le and s390x images currently must be rebuilt manually. The Dockerfiles to do so are in the repo.
Installation on UNIX
MLIR
Firstly, install MLIR (as a part of LLVM-Project):
git clone https://github.com/llvm/llvm-project.git

Check out a specific branch that is known to work with ONNX MLIR.

cd llvm-project && git checkout 0bf230d4220660af8b2667506f8905df2f716bdf && cd ..
mkdir llvm-project/build
cd llvm-project/build
cmake -G Ninja ../llvm \
   -DLLVM_ENABLE_PROJECTS=mlir \
   -DLLVM_TARGETS_TO_BUILD="host" \
   -DCMAKE_BUILD_TYPE=Release \
   -DLLVM_ENABLE_ASSERTIONS=ON \
   -DLLVM_ENABLE_RTTI=ON

cmake --build . -- ${MAKEFLAGS}
cmake --build . --target check-mlir
ONNX-MLIR (this project)
The following environment variables can be set before building onnx-mlir (or alternatively, they need to be passed as CMake variables):
• MLIR_DIR should point to the mlir cmake module inside an llvm-project build or install directory (e.g., llvm-project/build/lib/cmake/mlir).
This project uses lit (LLVM’s Integrated Tester) for unit tests. When running CMake, we can also specify the path to the lit tool from LLVM using the LLVM_EXTERNAL_LIT define but it is not required as long as MLIR_DIR points to a build directory of llvm-project. If MLIR_DIR points to an install directory of llvm-project, LLVM_EXTERNAL_LIT is required.
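As a hedged sketch, pointing CMake at an install directory together with an explicit lit path might look like this (both paths are illustrative):

cmake -G Ninja .. \
  -DMLIR_DIR=/path/to/llvm-project/install/lib/cmake/mlir \
  -DLLVM_EXTERNAL_LIT=/path/to/llvm-project/build/bin/llvm-lit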
To build ONNX-MLIR, use the following commands:
git clone --recursive https://github.com/onnx/onnx-mlir.git

Export environment variables pointing to LLVM-Projects.

export MLIR_DIR=$(pwd)/llvm-project/build/lib/cmake/mlir

mkdir onnx-mlir/build && cd onnx-mlir/build
cmake -G Ninja ..
cmake --build .

Run lit tests:

export LIT_OPTS=-v
cmake --build . --target check-onnx-lit
If you are running on OSX Big Sur, you need to add -DCMAKE_CXX_COMPILER=/usr/bin/c++ to the cmake .. command due to changes in the compilers. After the above commands succeed, an onnx-mlir executable should appear in the bin directory.
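For instance, on Big Sur the configure step would become:

cmake -G Ninja -DCMAKE_CXX_COMPILER=/usr/bin/c++ ..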
LLVM and ONNX-MLIR CMake variables
The following CMake variables from LLVM and ONNX MLIR can be used when compiling ONNX MLIR.
MLIR_DIR:PATH Path to the mlir cmake module inside an llvm-project build or install directory (e.g., c:/repos/llvm-project/build/lib/cmake/mlir). This is required if MLIR_DIR is not specified as an environment variable.
LLVM_EXTERNAL_LIT:PATH Path to the lit tool. Defaults to an empty string and LLVM will find the tool based on MLIR_DIR if possible. This is required when MLIR_DIR points to an install directory.
Installation on Windows
Building onnx-mlir on Windows requires building some additional prerequisites that are not available by default.
Note that the instructions in this file assume you are using Visual Studio 2019 Community Edition with ninja. It is recommended that you have the Desktop development with C++ and Linux development with C++ workloads installed. This ensures you have all toolchains and libraries needed to compile this project and its dependencies on Windows.
Run all the commands from a shell started from “Developer Command Prompt for VS 2019”.
Protobuf
Build protobuf as a static library.
git clone --recurse-submodules https://github.com/protocolbuffers/protobuf.git
REM Check out a specific branch that is known to work with ONNX MLIR.
REM This corresponds to the v3.11.4 tag
cd protobuf && git checkout d0bfd5221182da1a7cc280f3337b5e41a89539cf && cd ..

set root_dir=%cd%
md protobuf_build
cd protobuf_build
call cmake %root_dir%\protobuf\cmake -G "Ninja" ^
-DCMAKE_INSTALL_PREFIX="%root_dir%\protobuf_install" ^
-DCMAKE_BUILD_TYPE=Release ^
-Dprotobuf_BUILD_EXAMPLES=OFF ^
-Dprotobuf_BUILD_SHARED_LIBS=OFF ^
-Dprotobuf_BUILD_TESTS=OFF ^
-Dprotobuf_MSVC_STATIC_RUNTIME=OFF ^
-Dprotobuf_WITH_ZLIB=OFF

call cmake --build . --config Release
call cmake --build . --config Release --target install
Before running CMake for onnx-mlir, ensure that the bin directory to this protobuf is before any others in your PATH:
set PATH=%root_dir%\protobuf_install\bin;%PATH%
MLIR
Install MLIR (as a part of LLVM-Project):
git clone https://github.com/llvm/llvm-project.git

Check out a specific branch that is known to work with ONNX MLIR.

cd llvm-project && git checkout 0bf230d4220660af8b2667506f8905df2f716bdf && cd ..
set root_dir=%cd%
md llvm-project\build
cd llvm-project\build
call cmake %root_dir%\llvm-project\llvm -G "Ninja" ^
-DCMAKE_INSTALL_PREFIX="%root_dir%\llvm-project\build\install" ^
-DLLVM_ENABLE_PROJECTS=mlir ^
-DLLVM_TARGETS_TO_BUILD="host" ^
-DCMAKE_BUILD_TYPE=Release ^
-DLLVM_ENABLE_ASSERTIONS=ON ^
-DLLVM_ENABLE_RTTI=ON ^
-DLLVM_ENABLE_ZLIB=OFF

call cmake --build . --config Release
call cmake --build . --config Release --target install
call cmake --build . --config Release --target check-mlir
ONNX-MLIR (this project)
The following environment variables can be set before building onnx-mlir (or alternatively, they need to be passed as CMake variables):
• MLIR_DIR should point to the mlir cmake module inside an llvm-project build or install directory (e.g., c:/repos/llvm-project/build/lib/cmake/mlir).
This project uses lit (LLVM’s Integrated Tester) for unit tests. When running CMake, we can also specify the path to the lit tool from LLVM using the LLVM_EXTERNAL_LIT define but it is not required as long as MLIR_DIR points to a build directory of llvm-project. If MLIR_DIR points to an install directory of llvm-project, LLVM_EXTERNAL_LIT is required.
To build ONNX MLIR, use the following commands:
git clone --recursive https://github.com/onnx/onnx-mlir.git

set root_dir=%cd%

md onnx-mlir\build
cd onnx-mlir\build
call cmake %root_dir%\onnx-mlir -G "Ninja" ^
-DCMAKE_BUILD_TYPE=Release ^
-DCMAKE_PREFIX_PATH=%root_dir%\protobuf_install ^
-DLLVM_LIT_ARGS=-v ^
-DMLIR_DIR=%root_dir%\llvm-project\build\lib\cmake\mlir

call cmake --build . --config Release --target onnx-mlir
To run the lit ONNX MLIR tests, use the following command:
call cmake --build . --config Release --target check-onnx-lit
To run the numerical ONNX MLIR tests, use the following command:
call cmake --build . --config Release --target check-onnx-numerical
To run the doc ONNX MLIR tests, use the following command after installing third_party ONNX:
call cmake --build . --config Release --target check-docs
After the above commands succeed, an onnx-mlir executable should appear in the bin directory.
LLVM and ONNX-MLIR CMake variables
The following CMake variables from LLVM and ONNX MLIR can be used when compiling ONNX MLIR.
MLIR_DIR:PATH Path to the mlir cmake module inside an llvm-project build or install directory (e.g., c:/repos/llvm-project/build/lib/cmake/mlir). This is required if MLIR_DIR is not specified as an environment variable.
LLVM_EXTERNAL_LIT:PATH Path to the lit tool. Defaults to an empty string and LLVM will find the tool based on MLIR_DIR if possible. This is required when MLIR_DIR points to an install directory.
Using ONNX-MLIR
The usage of onnx-mlir is as such:
OVERVIEW: ONNX MLIR modular optimizer driver

USAGE: onnx-mlir [options]

OPTIONS:

Generic Options:

--help - Display available options (--help-hidden for more)
--help-list - Display list of available options (--help-list-hidden for more)
--version - Display the version of this program

ONNX MLIR Options:
These are frontend options.

Choose target to emit:
--EmitONNXBasic - Ingest ONNX and emit the basic ONNX operations without inferred shapes.
--EmitONNXIR - Ingest ONNX and emit corresponding ONNX dialect.
--EmitMLIR - Lower model to MLIR built-in transformation dialect.
--EmitLLVMIR - Lower model to LLVM IR (LLVM dialect).
--EmitLib - Lower model to LLVM IR, emit (to file) LLVM bitcode for model, compile and link it to a shared library.
Simple Example
For example, to lower an ONNX model (e.g., add.onnx) to ONNX dialect, use the following command:
./onnx-mlir --EmitONNXIR add.onnx
The output should look like:
module {
func @main_graph(%arg0: tensor<10x10x10xf32>, %arg1: tensor<10x10x10xf32>) -> tensor<10x10x10xf32> {
%0 = "onnx.Add"(%arg0, %arg1) : (tensor<10x10x10xf32>, tensor<10x10x10xf32>) -> tensor<10x10x10xf32>
return %0 : tensor<10x10x10xf32>
}
}
An example based on the add operation is found here; it builds an ONNX model using a python script, and then provides a main program to load the model's values, compute, and print the model's output.
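As a hedged sketch of what such a model-building script might look like (the names and shapes mirror the output above; this is not the referenced example's exact code):

# build_add.py: create add.onnx with a single Add node (illustrative sketch).
import onnx
from onnx import TensorProto, helper

x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [10, 10, 10])
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [10, 10, 10])
z = helper.make_tensor_value_info("z", TensorProto.FLOAT, [10, 10, 10])

# One node computing z = x + y.
add_node = helper.make_node("Add", inputs=["x", "y"], outputs=["z"])
graph = helper.make_graph([add_node], "main_graph", [x, y], [z])
model = helper.make_model(graph)

onnx.checker.check_model(model)
onnx.save(model, "add.onnx")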
End to end example
An end-to-end example is provided here, which trains, compiles, and executes a simple MNIST example.
Troubleshooting
If the latest LLVM project fails to work due to the latest changes to the MLIR subproject, please consider using a slightly older version of LLVM. One such version, which we use, can be found here: https://github.com/clang-ykt/llvm-project.
Installing third_party ONNX for Backend Tests or Rebuilding ONNX Operations
Backend tests are triggered by make check-onnx-backend in the build directory and require a few preliminary steps to run successfully. Similarly, rebuilding the ONNX operations in ONNX-MLIR from their ONNX descriptions is triggered by make OMONNXOpsIncTranslation.
You will need to install python 3.x if it is not the default in your environment, and possibly set the cmake PYTHON_EXECUTABLE variable in your top cmake file.
You will also need pybind11, which may need to be installed (e.g., on Mac: brew install pybind11), and you may need to indicate where to find the software (Mac, POWER, possibly other platforms: export pybind11_DIR=). Then install the third_party/onnx software (Mac: pip install -e third_party/onnx) from the top directory.
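A sketch of those Mac steps (the pybind11_DIR value is an assumption; point it at the directory containing pybind11Config.cmake in your installation):

brew install pybind11
export pybind11_DIR=$(brew --prefix pybind11)/share/cmake/pybind11
pip install -e third_party/onnx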
On Macs/POWER and possibly other platforms, there is currently an issue that arises when installing ONNX. If you get an error during the build, try a fix where you edit the top CMakefile as reported in this PR: https://github.com/onnx/onnx/pull/2482/files.
Slack channel
We have a slack channel established under the Linux Foundation AI and Data Workspace, named #onnx-mlir-discussion. This channel can be used for asking quick questions related to this project.

Reference link:
https://github.com/onnx/onnx-mlir
