Getting Started with Intel OpenVINO (C++ Integration)

發(fā)布時(shí)間:2023/12/15 35 豆豆
生活随笔 收集整理的這篇文章主要介紹了 英特尔OpenVINO使用入门(C++集成方式) 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

一、簡(jiǎn)介

OpenVINO™ is an open-source toolkit from Intel for optimizing and deploying AI inference. It is commonly used to run network inference on Intel integrated graphics.

官網(wǎng)地址:https://docs.openvino.ai

II. Download

Download and installation guide: https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html

針對(duì)不同平臺(tái),在如圖紅框處選擇不同的文檔參考,按照官網(wǎng)文檔一步步執(zhí)行就行。

III. Usage

Note: the version used here is openvino_2021.

假設(shè)你已經(jīng)有了模型的 xml 文件和對(duì)應(yīng)的 bin 文件了,基本代碼流程如下:

#include <stdio.h>
#include <string>
#include <vector>

#include "inference_engine.hpp"

#define LOGD(fmt, ...) printf("[%s][%s][%d]: " fmt "\n", __FILE__, __FUNCTION__, __LINE__, ##__VA_ARGS__)

using namespace InferenceEngine;

int main(int argc, char *argv[]) {
    // 1. Query version information
    const Version* version = GetInferenceEngineVersion();
    LOGD("version description: %s, buildNumber: %s, major.minor: %d.%d",
         version->description, version->buildNumber, version->apiVersion.major, version->apiVersion.minor);

    // 2. Create the inference engine core
    Core ie;
    std::vector<std::string> devices = ie.GetAvailableDevices(); // available devices: CPU, GPU, ...
    for (std::string device : devices) {
        LOGD("GetAvailableDevices: %s", device.c_str());
    }

    // 3. Read the model file
    const std::string input_model_xml = "model.xml";
    CNNNetwork network = ie.ReadNetwork(input_model_xml);

    // 4. Configure input/output information
    InputsDataMap inputs = network.getInputsInfo();
    for (auto& input : inputs) {
        auto& input_name = input.first;               // input is a name/info pair
        InputInfo::Ptr& input_info = input.second;
        input_info->setLayout(Layout::NCHW);          // memory layout
        input_info->setPrecision(Precision::FP32);    // precision: float32
        input_info->getPreProcess().setResizeAlgorithm(ResizeAlgorithm::RESIZE_BILINEAR);
        input_info->getPreProcess().setColorFormat(ColorFormat::RAW); // image color format
    }
    OutputsDataMap outputs = network.getOutputsInfo();
    for (auto& output : outputs) {
        auto& output_name = output.first;             // output is also a name/info pair
        DataPtr& output_info = output.second;
        output_info->setPrecision(Precision::FP32);
        const auto& dims = output_info->getDims();
        LOGD("output shape name: %s, dims: [%zu, %zu, %zu, %zu]",
             output_name.c_str(), dims[0], dims[1], dims[2], dims[3]);
    }

    // 5. Load the network onto a device (CPU, GPU, ...)
    std::string device_name = "CPU"; // query available devices with ie.GetAvailableDevices()
    ExecutableNetwork executable_network = ie.LoadNetwork(network, device_name);

    // 6. Create an inference request
    InferRequest infer_request = executable_network.CreateInferRequest();

    /* Steps 1-6 only need to run once; when inference is executed repeatedly,
       cache these objects instead of recreating them, to save time. */

    // 7. Fill the input data
    for (auto& input : inputs) {
        auto& input_name = input.first;
        Blob::Ptr blob = infer_request.GetBlob(input_name);
        unsigned char* data = blob->buffer().as<unsigned char*>();
        // TODO: fill `data`, e.g. via memcpy
        // readFile(input_path, data);
    }

    // 8. Run inference
    infer_request.Infer();

    // 9. Fetch the outputs
    for (auto& output : outputs) {
        auto& output_name = output.first;
        const Blob::Ptr output_blob = infer_request.GetBlob(output_name);
        LOGD("size: %zu, byte_size: %zu", output_blob->size(), output_blob->byteSize());
        const float* output_data = output_blob->buffer().as<PrecisionTrait<Precision::FP32>::value_type*>();
        // writeFile(path, (void *)output_data, output_blob->byteSize());
    }

    return 0;
}
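The TODO in step 7 is where the application copies its own data into the input blob. As a minimal sketch (not part of the original article), assuming the input was configured as FP32/NCHW as above and the source image is an interleaved 8-bit RGB buffer already resized to the network's input resolution; `fillInputBlob`, `rgb_data`, `width` and `height` are placeholder names:

#include "inference_engine.hpp"

// Repacks an interleaved HWC uint8 image into a planar NCHW FP32 input blob.
// Assumes the image has already been resized to the network input resolution.
void fillInputBlob(InferenceEngine::Blob::Ptr blob,
                   const unsigned char* rgb_data, size_t width, size_t height) {
    using namespace InferenceEngine;
    const SizeVector dims = blob->getTensorDesc().getDims(); // [N, C, H, W]
    const size_t channels = dims[1];
    const size_t blob_h   = dims[2];
    const size_t blob_w   = dims[3];
    if (blob_h != height || blob_w != width) {
        return; // resize beforehand, or rely on setResizeAlgorithm() with SetBlob
    }
    float* blob_data = blob->buffer().as<float*>();
    for (size_t c = 0; c < channels; ++c) {
        for (size_t h = 0; h < blob_h; ++h) {
            for (size_t w = 0; w < blob_w; ++w) {
                blob_data[c * blob_h * blob_w + h * blob_w + w] =
                    static_cast<float>(rgb_data[(h * blob_w + w) * channels + c]);
            }
        }
    }
}

If the model expects mean/scale normalization, it would be applied in the same loop.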

其余更復(fù)雜的使用場(chǎng)景,可以參考下載的SDK中的示例,路徑是 .\openvino_2021\inference_engine\samples\cpp。

IV. Notes on ReadNetwork

1.通過(guò)文件路徑讀取模型

Usually the model is a local file that can simply be loaded by path; the corresponding API is:

/**
 * @brief Reads models from IR and ONNX formats
 * @param modelPath path to model
 * @param binPath path to data file
 * For IR format (*.bin):
 *  * if path is empty, will try to read bin file with the same name as xml and
 *  * if bin file with the same name was not found, will load IR without weights.
 * For ONNX format (*.onnx or *.prototxt):
 *  * binPath parameter is not used.
 * @return CNNNetwork
 */
CNNNetwork ReadNetwork(const std::string& modelPath, const std::string& binPath = {}) const;

If the bin file sits next to the xml file and shares its base name, the second parameter can be omitted, e.g. CNNNetwork network = ie.ReadNetwork("model.xml").
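For example, a minimal sketch of the two call forms (the function name `loadExamples` and the file paths are placeholders):

#include "inference_engine.hpp"

void loadExamples() {
    InferenceEngine::Core ie;

    // Explicit weights path, e.g. when the bin file lives elsewhere or has a different name.
    InferenceEngine::CNNNetwork net_a = ie.ReadNetwork("model.xml", "weights/model.bin");

    // Weights path omitted: "model.bin" next to "model.xml" is picked up automatically.
    InferenceEngine::CNNNetwork net_b = ie.ReadNetwork("model.xml");
}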

2.通過(guò)內(nèi)存地址讀取模型

假設(shè)我們的模型已經(jīng)在內(nèi)存中了,可以通過(guò)如下接口創(chuàng)建:

/**
 * @brief Reads models from IR and ONNX formats
 * @param model string with model in IR or ONNX format
 * @param weights shared pointer to constant blob with weights
 * Reading ONNX models doesn't support loading weights from data blobs.
 * If you are using an ONNX model with external data files, please use the
 * `InferenceEngine::Core::ReadNetwork(const std::string& model, const Blob::CPtr& weights) const`
 * function overload which takes a filesystem path to the model.
 * For ONNX case the second parameter should contain empty blob.
 * @note Created InferenceEngine::CNNNetwork object shares the weights with `weights` object.
 * So, do not create `weights` on temporary data which can be later freed, since the network
 * constant datas become to point to invalid memory.
 * @return CNNNetwork
 */
CNNNetwork ReadNetwork(const std::string& model, const Blob::CPtr& weights) const;

Usage example:

extern unsigned char __res_model_xml[];
extern unsigned int  __res_model_xml_size;
extern unsigned char __res_model_bin[];
extern unsigned int  __res_model_bin_size;

std::string model(__res_model_xml, __res_model_xml + __res_model_xml_size);
CNNNetwork network = ie.ReadNetwork(model,
    InferenceEngine::make_shared_blob<uint8_t>(
        {InferenceEngine::Precision::U8, {__res_model_bin_size}, InferenceEngine::C},
        __res_model_bin));
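Note, as the @note in the comment above states, that the created CNNNetwork shares memory with the weights blob rather than copying it, so the buffer backing __res_model_bin must stay valid for as long as the network (and anything loaded from it) is in use.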
