

Compile ONNX Models

Published: 2023/11/28


This article explains how to deploy ONNX models using Relay.

First, the ONNX package must be installed.

A quick solution is to install the protobuf compiler, then

pip install onnx --user

Or refer to the official site: https://github.com/onnx/onnx

import onnx
import numpy as np
import tvm
from tvm import te
import tvm.relay as relay
from tvm.contrib.download import download_testdata

Load pretrained ONNX model

The example super-resolution model used here is exactly the same as the one in the ONNX tutorial:

http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html

We skip the PyTorch model-construction step and download the saved ONNX model instead.

model_url = "".join(
    [
        "https://gist.github.com/zhreshold/",
        "bcda4716699ac97ea44f791c24310193/raw/",
        "93672b029103648953c4e5ad3ac3aadf346a4cdc/",
        "super_resolution_0.2.onnx",
    ]
)

model_path = download_testdata(model_url, "super_resolution.onnx", module="onnx")

# now you have super_resolution.onnx on disk

onnx_model = onnx.load(model_path)

Out:

File /workspace/.tvm_test_data/onnx/super_resolution.onnx exists, skip.

Load a test image


A single cat dominates the examples!

from PIL import Image

img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
img_path = download_testdata(img_url, "cat.png", module="data")

img = Image.open(img_path).resize((224, 224))
img_ycbcr = img.convert("YCbCr")  # convert to YCbCr
img_y, img_cb, img_cr = img_ycbcr.split()
x = np.array(img_y)[np.newaxis, np.newaxis, :, :]
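The two np.newaxis insertions above turn the 2-D luminance (Y) array into the four-dimensional NCHW layout (batch, channel, height, width) that the model expects. A minimal sketch with a dummy array standing in for the real image:

```python
import numpy as np

# Stand-in for the 224x224 luminance channel; the tutorial obtains
# the real one from the cat image via PIL's YCbCr split.
img_y = np.zeros((224, 224), dtype=np.uint8)

# Each np.newaxis adds a length-1 axis: batch, then channel -> NCHW.
x = img_y[np.newaxis, np.newaxis, :, :]
assert x.shape == (1, 1, 224, 224)
```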

Out:

File /workspace/.tvm_test_data/data/cat.png exists, skip.

Compile the model with relay

target = "llvm"

input_name = "1"
shape_dict = {input_name: x.shape}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

with tvm.transform.PassContext(opt_level=1):
    intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)

Out:

/workspace/docs/…/python/tvm/relay/frontend/onnx.py:2737: UserWarning: Mismatched attribute type in ' : kernel_shape'

==> Context: Bad node spec: input: "1" input: "2" output: "11" op_type: "Conv" attribute { name: "kernel_shape" ints: 5 ints: 5 } attribute { name: "strides" ints: 1 ints: 1 } attribute { name: "pads" ints: 2 ints: 2 ints: 2 ints: 2 } attribute { name: "dilations" ints: 1 ints: 1 } attribute { name: "group" i: 1 }

warnings.warn(str(e))

Execute on TVM

dtype = "float32"
tvm_output = intrp.evaluate()(tvm.nd.array(x.astype(dtype)), **params).asnumpy()

Display results

We put the input and output images side by side.

from matplotlib import pyplot as plt

out_y = Image.fromarray(np.uint8((tvm_output[0, 0]).clip(0, 255)), mode="L")
out_cb = img_cb.resize(out_y.size, Image.BICUBIC)
out_cr = img_cr.resize(out_y.size, Image.BICUBIC)
result = Image.merge("YCbCr", [out_y, out_cb, out_cr]).convert("RGB")

canvas = np.full((672, 672 * 2, 3), 255)
canvas[0:224, 0:224, :] = np.asarray(img)
canvas[:, 672:, :] = np.asarray(result)

plt.imshow(canvas.astype(np.uint8))
plt.show()
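The canvas logic pastes the 224x224 input into the top-left corner and the 672x672 super-resolved result into the right half of a white canvas. A standalone sketch of the same composition using dummy arrays in place of the real images (no model run needed):

```python
import numpy as np

# Dummy stand-ins: the input is 224x224 RGB, the x3 super-resolved
# output is 672x672 RGB.
inp = np.zeros((224, 224, 3), dtype=np.uint8)
res = np.full((672, 672, 3), 128, dtype=np.uint8)

# White canvas wide enough for two 672-pixel panels side by side.
canvas = np.full((672, 672 * 2, 3), 255)
canvas[0:224, 0:224, :] = inp  # input in the top-left corner
canvas[:, 672:, :] = res       # result fills the right half

assert canvas.shape == (672, 1344, 3)
assert (canvas[0:224, 0:224] == 0).all()
assert (canvas[:, 672:] == 128).all()
```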

Notes

By default, ONNX defines models in terms of dynamic shapes. The ONNX importer retains that dynamism on import, and the compiler attempts to convert the model to a static shape at compile time. If that fails, dynamic operations may remain in the model. Not all TVM kernels currently support dynamic shapes; if you run into errors with dynamic kernels, please file an issue at discuss.tvm.apache.org.
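Conceptually, supplying a static shape_dict (as done above with {"1": x.shape}) pins any symbolic dimensions, such as a free batch dimension, to concrete integers. The helper below is hypothetical and not a TVM API; it only illustrates the kind of substitution the importer and compiler perform internally:

```python
def to_static_shape(shape, overrides):
    """Replace symbolic dims (strings like "batch") with concrete ints.

    Hypothetical helper for illustration only; TVM performs this
    substitution internally when a static shape_dict is supplied.
    """
    return tuple(
        overrides.get(dim, dim) if isinstance(dim, str) else dim
        for dim in shape
    )

# An ONNX model may declare its input as ("batch", 1, 224, 224);
# pinning "batch" to 1 yields the static shape the compiler wants.
static = to_static_shape(("batch", 1, 224, 224), {"batch": 1})
assert static == (1, 1, 224, 224)
```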

https://tvm.apache.org/docs/tutorials/frontend/from_onnx.html#sphx-glr-tutorials-frontend-from-onnx-py

Download Python source code: from_onnx.py

Download Jupyter notebook: from_onnx.ipynb
