An Introduction to OpenVINO Samples
Continuing from the earlier posts in this series: a good way to learn a new tool is to start from the official samples and work up gradually. There are two places to find them.
First, the samples under the OpenVINO installation path. When you install OpenVINO Runtime with the installer, it shows you the installation path, as in the screenshot below:
After installation you can find the C++ and Python samples under this directory (adjust for your own path) and start experimenting with them.
Second, OpenVINO hosts open_model_zoo on GitHub. These days everyone likes to maintain their own model collection; TensorFlow, for example, has its Model Garden (I have another post introducing its object detection library, if you are interested). For how to use the samples in open_model_zoo, see my other post:
https://blog.csdn.net/jiugeshao/article/details/124763586
If you are brand new to OpenVINO, I strongly recommend reading my previous two posts first. This post uses the same software environment as those posts, and Python still runs in the testOpenVINO virtual environment.
OpenVINO使用介紹_竹葉青l(xiāng)vye的博客-CSDN博客_openvino使用
Intel Movidius Neural Computer Stick 2使用(PC-Based Ubuntu)_竹葉青lvye的博客-CSDN博客
For reference, my environment configuration:
Ubuntu 20.04
python3.6.13 (Anaconda)
cuda version: 11.2
cudnn version: cudnn-11.2-linux-x64-v8.1.1.33
I. The bundled samples under the OpenVINO installation path
1. To run the C++ samples, first build the bundled examples as described on the following official page:
Get Started with Sample and Demo Applications — OpenVINO™ documentation
cd to /home/sxhlvye/intel/openvino_2022/samples/cpp (adjust for your own path)
and run the following in a terminal to build:
./build_samples.sh
The build output goes to the path printed near the bottom of the terminal log, so check where the binaries were generated.
Go to /home/sxhlvye/inference_engine_cpp_samples_build/intel64 and you will find one executable per sample.
2. First, run a sample end to end in the Python environment
(1) Install the development packages for the relevant frameworks.
In my previous two posts I only installed the TensorFlow and ONNX extras of the OpenVINO package, so here I install the full set:
python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
(2) Download a model
I want to try the classification_sample_async sample.
The official usage guide for this sample is here:
Image Classification Async Python* Sample — OpenVINO™ documentation
The docs test it with an alexnet model,
but I will test the resnet-50-pytorch model instead. Both of the following pages describe this model:
resnet-50-pytorch — OpenVINO™ documentation
open_model_zoo/models/public/resnet-50-pytorch at master · openvinotoolkit/open_model_zoo · GitHub
All models used here are published by OpenVINO itself; to browse more models, go straight to the model zoo on GitHub:
https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public
or to this page, which lists the same models as the GitHub repository:
resnet-50-pytorch — OpenVINO™ documentation
Before downloading, set the following environment variable in the terminal, otherwise the download will fail:
export OMZ_ROOT=https://github.com/openvinotoolkit/open_model_zoo
This step is also documented on the following official page:
Model Downloader and other automation tools — OpenVINO™ documentation
The model can then be downloaded with:
omz_downloader --name resnet-50-pytorch
(3) Convert the model
The terminal output shows where the model was saved on disk. Since this is a PyTorch .pth model, it needs converting. One option is the mo command, as in step 2 of my earlier post; the other is to run omz_converter from the same directory:
omz_converter --name resnet-50-pytorch
This automatically converts the downloaded model into OpenVINO IR form, producing two precision variants: FP16 and FP32.
The conversion log is long, so here is just part of the FP32 conversion output:
========== Converting resnet-50-pytorch to IR (FP32)
Conversion command: /home/sxhlvye/anaconda3/envs/testOpenVINO/bin/python -- /home/sxhlvye/anaconda3/envs/testOpenVINO/bin/mo --framework=onnx --data_type=FP32 --output_dir=/home/sxhlvye/intel/openvino_2022.1.0.643/samples/python/classification_sample_async/public/resnet-50-pytorch/FP32 --model_name=resnet-50-pytorch --input=data '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.395,57.12,57.375]' --reverse_input_channels --output=prob --input_model=/home/sxhlvye/intel/openvino_2022.1.0.643/samples/python/classification_sample_async/public/resnet-50-pytorch/resnet-v1-50.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 224, 224]'

The screenshot below shows the PyTorch model being converted to ONNX,
and the next one shows the ONNX model being quantized to an OpenVINO IR model at FP32 precision.
As the screenshots show, omz_converter itself calls the mo command under the hood, with a long list of arguments. That full command line makes a good template, so it is well worth keeping for reference. The official docs describe the optional mo arguments here:
Compression of a Model to FP16 — OpenVINO™ documentation
Setting Input Shapes — OpenVINO™ documentation
Changing input shapes — OpenVINO™ documentation
Convert model with Model Optimizer — OpenVINO™ documentation
Embedding Preprocessing Computation — OpenVINO™ documentation
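For intuition, the --mean_values, --scale_values, and --reverse_input_channels arguments in the conversion command above fold standard ImageNet preprocessing into the IR itself. Below is a minimal pure-Python sketch of what that normalization does to a single pixel; the helper name is mine, and the constants come from the conversion log above.

```python
# Sketch of the preprocessing that mo folds into the IR via
# --mean_values, --scale_values and --reverse_input_channels.
# MEAN/SCALE are the ImageNet statistics from the conversion log.

MEAN = [123.675, 116.28, 103.53]   # per-channel mean (RGB order)
SCALE = [58.395, 57.12, 57.375]    # per-channel std  (RGB order)

def preprocess_pixel(bgr):
    """Normalize one BGR pixel the way the converted IR expects.

    reverse_input_channels turns BGR (as read by e.g. OpenCV) into RGB,
    then each channel is mean-subtracted and scaled.
    """
    rgb = bgr[::-1]                          # BGR -> RGB
    return [(v - m) / s for v, m, s in zip(rgb, MEAN, SCALE)]

# Example: a pixel equal to the per-channel means normalizes to zeros
print(preprocess_pixel([103.53, 116.28, 123.675]))
```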
(4) Run the sample and use the converted OpenVINO model to classify an image
cd into the directory containing classification_sample_async.py, copy the FP32 model files there, and run the command from the official docs against a banana image:
python classification_sample_async.py -m resnet-50-pytorch.xml -i banana.jpg -d GPU
The banana test image looks like this:
The result is correct. Checking against imagenet_2012.txt: the highest-scoring class id above is 954, and since ids start at 0, that corresponds to line 955 of the txt file.
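The top-k selection and the 0-based-id-to-1-based-line mapping can be sketched in plain Python (top_k is an illustrative helper, not the sample's actual code; the toy scores just mimic the output above):

```python
# How the sample's "classid 954" maps to the label file: score indices
# start at 0, while text-file lines are counted from 1, so class id 954
# corresponds to line 955 of imagenet_2012.txt.

def top_k(scores, k=5):
    """Return (class_id, score) pairs sorted by descending score."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(i, scores[i]) for i in order[:k]]

# Toy score vector: class 954 gets the highest value
scores = [0.0] * 1000
scores[954] = 14.59
scores[940] = 11.09
best_id, _ = top_k(scores, k=1)[0]
print("classid:", best_id, "-> label file line:", best_id + 1)
```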
Quick check 1:
I also ran the FP16 model generated above; the result:
Compared with FP32, the difference is negligible.
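A small sketch of why the two precisions agree so closely: rounding an FP32 logit through IEEE 754 half precision (Python's struct format 'e') perturbs it by far less than the gap between the top-scoring classes. The values below are illustrative, mirroring the sample's top-3 scores.

```python
import struct

def to_fp16(x):
    """Round a Python float through half precision and back."""
    return struct.unpack('e', struct.pack('e', x))[0]

logits = [14.5919666, 11.0981741, 10.7811651]   # illustrative FP32 top-3
halved = [to_fp16(v) for v in logits]
for fp32, fp16 in zip(logits, halved):
    print(f"{fp32:.7f} -> {fp16:.7f}  (delta {abs(fp32 - fp16):.5f})")
# The ranking, and therefore the predicted class, is unchanged.
```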
Quick check 2:
Step (3) used omz_converter to convert the model downloaded in step (2); here I try converting manually with a raw mo command instead. See the official guide: Converting an ONNX Model — OpenVINO™ documentation
mo --input_model resnet-v1-50.onnx --input=data --mean_values=data[123.675,116.28,103.53] --scale_values=data[58.395,57.12,57.375] --reverse_input_channels --output=prob --layout "nchw->nchw" '--input_shape=[1,3,224,224]'
The conversion log below shows that FP32 is the default precision:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/sxhlvye/intel/openvino_2022.1.0.643/samples/python/classification_sample_async/resnet-v1-50.onnx
- Path for generated IR: /home/sxhlvye/intel/openvino_2022.1.0.643/samples/python/classification_sample_async/.
- IR output name: resnet-v1-50
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: data
- Output layers: prob
- Input shapes: [1,3,224,224]
- Source layout: Not specified
- Target layout: Not specified
- Layout: nchw->nchw
- Mean values: data[123.675,116.28,103.53]
- Scale values: data[58.395,57.12,57.375]
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- User transformations: Not specified
- Reverse input channels: True
- Enable IR generation for fixed input shape: False
- Use the transformations config file: None
Advanced parameters:
- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
OpenVINO runtime found in: /home/sxhlvye/intel/openvino_2022/python/python3.6/openvino
OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/sxhlvye/intel/openvino_2022.1.0.643/samples/python/classification_sample_async/resnet-v1-50.xml
[ SUCCESS ] BIN file: /home/sxhlvye/intel/openvino_2022.1.0.643/samples/python/classification_sample_async/resnet-v1-50.bin
[ SUCCESS ] Total execution time: 0.97 seconds.
[ SUCCESS ] Memory consumed: 296 MB.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html or on the GitHub*
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. Find more information about API v2.0 and IR v11 at https://docs.openvino.ai

Classifying the image again with the IR model produced this way gives the result below, identical to the omz_converter version.
3. Now run the same sample in the C++ environment
With the groundwork above, this part is easy. Section 1 covered building the bundled OpenVINO samples; go to the directory with the compiled executables and copy over the model generated in section 2 along with the test image.
Then run:
./classification_sample_async -m resnet-v1-50.xml -i banana.jpg -d GPU
The result:
[ INFO ] OpenVINO Runtime version ......... 2022.1.0
[ INFO ] Build ........... 2022.1.0-7019-cdb9bec7210-releases/2022/1
[ INFO ]
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] banana.jpg
[ INFO ] Loading model files:
[ INFO ] resnet-v1-50.xml
[ INFO ] model name: torch-jit-export
[ INFO ] inputs
[ INFO ] input name: data
[ INFO ] input type: f32
[ INFO ] input shape: {1, 3, 224, 224}
[ INFO ] outputs
[ INFO ] output name: prob
[ INFO ] output type: f32
[ INFO ] output shape: {1, 1000}
[ INFO ] Read input images
[ WARNING ] Image is resized from (640, 447) to (224, 224)
[ INFO ] Set batch size 1
[ INFO ] model name: torch-jit-export
[ INFO ] inputs
[ INFO ] input name: data
[ INFO ] input type: u8
[ INFO ] input shape: {1, 224, 224, 3}
[ INFO ] outputs
[ INFO ] output name: prob
[ INFO ] output type: f32
[ INFO ] output shape: {1, 1000}
[ INFO ] Loading model to the device GPU
[ INFO ] Create infer request
[ INFO ] Start inference (asynchronous executions)
[ INFO ] Completed 1 async request execution
[ INFO ] Completed 2 async request execution
[ INFO ] Completed 3 async request execution
[ INFO ] Completed 4 async request execution
[ INFO ] Completed 5 async request execution
[ INFO ] Completed 6 async request execution
[ INFO ] Completed 7 async request execution
[ INFO ] Completed 8 async request execution
[ INFO ] Completed 9 async request execution
[ INFO ] Completed 10 async request execution
[ INFO ] Completed async requests execution

Top 10 results:

Image banana.jpg

classid probability
------- -----------
954     14.5919666
940     11.0981741
941     10.7811651
942     10.2868891
951     10.2691641
939     9.9962120
945     9.9133644
953     9.7936678
943     9.1681681
950     8.4921722

As you can see, the classification result matches the Python run.
4. Try some of the other bundled OpenVINO samples
Here I pick hello_reshape_ssd, running only the C++ version to see the effect, following the official docs:
Hello Reshape SSD C++ Sample — OpenVINO™ documentation
My test image:
And the result:
The other samples can be run the same way, so I won't go through them here.
II. The open_model_zoo examples
Part I already covered most of the machinery, such as model download and conversion, so this part skips the details and goes straight to getting results.
1. Download the examples from GitHub
GitHub - openvinotoolkit/open_model_zoo: Pre-trained Deep Learning models and demos (high quality and extremely fast)
I put them under this path (adjust for your own setup):
2. Run an example under Python
(1) Prepare the image, model, and label file
Here I run the classification_demo example, located at the following path (adjust for yours):
I copied the test image, imagenet_2012.txt (the label file), and the test models into this directory.
Note: if you don't know where imagenet_2012.txt lives, look under the following open_model_zoo directory (adjust for your path):
I will test both the resnet-50-pytorch model (already generated in Part I) and the resnet-50-tf model. For resnet-50-tf, just repeat the download and conversion steps above:
omz_downloader --name resnet-50-tf
omz_converter --name resnet-50-tf
Part of the download and conversion logs is shown below (included because the information in them is worth a look):
Notice that unlike the PyTorch model above, this one uses the NHWC layout.
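The layout difference matters when you feed data manually: an NHWC tensor has to be transposed to NCHW (or vice versa). Below is a minimal pure-Python sketch of the transpose on a tiny 2x2 RGB "image" stored as nested lists; hwc_to_chw is my own helper, not part of any OpenVINO API.

```python
def hwc_to_chw(img_hwc):
    """Transpose an H x W x C nested list into C x H x W."""
    h, w, c = len(img_hwc), len(img_hwc[0]), len(img_hwc[0][0])
    return [[[img_hwc[y][x][ch] for x in range(w)] for y in range(h)]
            for ch in range(c)]

# A 2x2 image with 3 channels, NHWC-style (as resnet-50-tf expects)
img = [[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]
chw = hwc_to_chw(img)
print(len(chw), len(chw[0]), len(chw[0][0]))   # channel-first dimensions
```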
(2) Install the dependencies
Follow this page on GitHub:
open_model_zoo/demos at master · openvinotoolkit/open_model_zoo · GitHub
cd to /home/sxhlvye/open_model_zoo-master/demos (adjust for wherever you put open_model_zoo)
and run in a terminal:
python -m pip install --user -r requirements.txt
In addition, follow this page
open_model_zoo/README.md at master · openvinotoolkit/open_model_zoo · GitHub
to set up the openmodelzoo_modelapi package:
cd to /home/sxhlvye/open_model_zoo-master/demos/common/python and run the build command shown there.
When it finishes, you will see a whl file in this directory.
cd there and install it with:
pip install openmodelzoo_modelapi-0.0.0-py3-none-any.whl --force-reinstall
After this, the following import runs without errors;
otherwise you would get this error:
from openvino.model_zoo.model_api.models import Classification, outputTransform
ModuleNotFoundError: No module named 'openvino.model_zoo.model_api'
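As a convenience, you can probe for the package before running a demo; has_model_api below is a hypothetical helper of mine, not part of any OpenVINO tooling, and it merely checks whether the import that failed above can be resolved.

```python
import importlib.util

def has_model_api():
    """Return True if openvino.model_zoo.model_api can be found on sys.path."""
    try:
        return importlib.util.find_spec("openvino.model_zoo.model_api") is not None
    except ModuleNotFoundError:
        # The parent openvino package itself is missing
        return False

print("model_api available:", has_model_api())
```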
(3) Run the example (Classification Python* Demo) in PyCharm
See the GitHub page and the official docs:
https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/classification_demo/python
Classification Python* Demo — OpenVINO™ documentation
Rather than passing command-line arguments as the docs describe, I set the arguments directly in the py file.
Only the following few places in classification_demo need changing.
The result:
OK, done.
3. Run an example under C++
(1) Build the examples first
Follow this page:
open_model_zoo/demos at master · openvinotoolkit/open_model_zoo · GitHub
Do these two steps first.
Nothing special here; below is my record (adjust for your own paths).
The next step is not mentioned in the official docs and is a trap: when you download open_model_zoo, the gflags folder is not included, so you have to fetch it manually from this page:
https://github.com/gflags/gflags/tree/e171aa2d15ed9eb17054558e0b3a6a413bb01067
After downloading, copy its contents into the following directory (adjust for where open_model_zoo lives on your machine):
Otherwise the build fails with:
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
CMake Error at CMakeLists.txt:143 (add_subdirectory):
? The source directory
??? /home/sxhlvye/open_model_zoo-master/demos/thirdparty/gflags
? does not contain a CMakeLists.txt file.
-- Configuring incomplete, errors occurred!
See also "/home/sxhlvye/omz_demos_build/CMakeFiles/CMakeOutput.log".
See also "/home/sxhlvye/omz_demos_build/CMakeFiles/CMakeError.log".
Error on or near line 105; exiting with status 1
Then just cd to /home/sxhlvye/open_model_zoo-master/demos (adjust for your path) and run:
./build_demos.sh
When it finishes, the compiled demo executables appear in the following directory.
Copy the image, model, and label file used in the Python tests over here as well, then run the command below in a terminal to launch the C++ classification example. Reference page:
open_model_zoo/demos/classification_benchmark_demo/cpp at master · openvinotoolkit/open_model_zoo · GitHub
./classification_benchmark_demo -m resnet-50-tf.xml -i banana.jpg -labels imagenet_2012.txt
The result:
That completes the C++ example as well.
4. Try another example
Here I run the C++ object_detection_demo with the yolo-v3-tf model, using coco_80cl.txt as the label file (section 2 above explains where to find label files).
Once everything is in place, run:
./object_detection_demo -d GPU -i dog.jpg -m yolo-v3-tf.xml -labels coco_80cl.txt -at yolo -o output.jpg
The detection result is shown below. Impressive!
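The demo handles post-processing internally, but the general shape of it can be sketched in a few lines. Note the [image_id, label, conf, x1, y1, x2, y2] row format and the label list below are assumptions for illustration (SSD-style output, not the raw yolo-v3-tf tensor), and filter_detections is my own helper:

```python
# Hedged sketch of detection post-processing: drop low-confidence rows
# and attach human-readable label names from a coco_80cl.txt-style list.

def filter_detections(rows, labels, conf_threshold=0.5):
    """Keep rows above the confidence threshold and attach label names."""
    kept = []
    for image_id, label, conf, x1, y1, x2, y2 in rows:
        if conf >= conf_threshold:
            kept.append((labels[int(label)], conf, (x1, y1, x2, y2)))
    return kept

labels = ["person", "bicycle", "car", "dog"]   # illustrative, not the real file order
rows = [
    (0, 3, 0.92, 0.1, 0.2, 0.6, 0.9),   # a confident "dog"
    (0, 2, 0.30, 0.0, 0.0, 0.2, 0.2),   # low-confidence "car", dropped
]
print(filter_detections(rows, labels))
```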
I'll leave the other examples for you to explore on your own.
Summary