

图漾 (Percipio) Depth Camera Development: Real-Time PCL Point Cloud Display

Published 2024/3/24 on 生活随笔. This article walks through developing against the 图漾 depth camera to display a live point cloud with PCL.

Contents

  • 1. Starting from the SimpleView_FetchFrame sample
    • What the sample does
    • Code walkthrough
  • 2. Creating your own point cloud program
    • File layout
    • Creating the point cloud
    • Complete code for real-time point cloud display
  • 3. Creating a standalone project

  • Camera model: 图漾科技 (Percipio) FS820 depth camera
    [Parameter information] [Depth camera development documentation] [SDK download]
  • Build environment: Ubuntu 18.04 / C++ / VS Code
  • Required libraries: OpenCV + PCL
  • Basic usage of the 图漾 depth camera is covered in an earlier blog post. Assuming you can already run the camera's sample programs, this article develops against the camera for 3-D point cloud processing; the feature implemented here is real-time point cloud display.

1. Starting from the SimpleView_FetchFrame sample

What the sample does

SimpleView_FetchFrame is a sample in which the depth camera fetches image data and renders it with OpenCV inside the fetch thread. We use it here to explain the image-acquisition flow. [Full image-acquisition flow]

Running the program opens four windows: Color, Depth, LeftIR, and RightIR.

Code walkthrough

Open sample/SimpleView_FetchFrame/main.cpp and read through the code:

Starting from main(), you will notice many LOGD() calls. These print log messages and effectively act as inline commentary: LOGD("Init lib"), for example, signals that the next block initializes the API and sets up data structures such as the device object.

As developers, what we care about is how to obtain the camera's data for further processing — the Loop stage of the acquisition flow. This loop continuously fetches frame data from the camera and processes it (in this sample, the "processing" is simply rendering the Color, Depth, LeftIR, and RightIR windows).

Reading on, we reach the LOGD("While loop to fetch frame") statement; the block that follows is the frame-fetching loop. Here is the code, which we will break down piece by piece:

```cpp
LOGD("While loop to fetch frame");
bool exit_main = false;
TY_FRAME_DATA frame;
int index = 0;
while(!exit_main) {
    int err = TYFetchFrame(hDevice, &frame, -1);
    if( err == TY_STATUS_OK ) {
        LOGD("Get frame %d", ++index);

        int fps = get_fps();
        if (fps > 0){
            LOGI("fps: %d", fps);
        }

        cv::Mat depth, irl, irr, color;
        parseFrame(frame, &depth, &irl, &irr, &color, hColorIspHandle);
        if(!depth.empty()){
            depthViewer.show(depth);
        }
        if(!irl.empty()){ cv::imshow("LeftIR", irl); }
        if(!irr.empty()){ cv::imshow("RightIR", irr); }
        if(!color.empty()){ cv::imshow("Color", color); }

        int key = cv::waitKey(1);
        switch(key & 0xff) {
        case 0xff:
            break;
        case 'q':
            exit_main = true;
            break;
        default:
            LOGD("Unmapped key %d", key);
        }

        TYISPUpdateDevice(hColorIspHandle);
        LOGD("Re-enqueue buffer(%p, %d)", frame.userBuffer, frame.bufferSize);
        ASSERT_OK( TYEnqueueBuffer(hDevice, frame.userBuffer, frame.bufferSize) );
    }
}
```

The bool variable exit_main is the loop's exit flag: while(!exit_main) keeps iterating until exit_main becomes true.

  • Fetch Frame
    This block fetches the camera's frame data into frame:

```cpp
int err = TYFetchFrame(hDevice, &frame, -1);
if( err == TY_STATUS_OK ) {
    LOGD("Get frame %d", ++index);
    int fps = get_fps();
    if (fps > 0){
        LOGI("fps: %d", fps);
    }
```

The core of this block is TYFetchFrame(hDevice, &frame, -1), whose job is to fetch one frame: given the device handle hDevice, it reads one frame of camera data into frame (the third argument is a timeout) and returns TY_STATUS_OK on success.

When err == TY_STATUS_OK (the frame was fetched successfully), the program prints "Get frame" followed by the value of index, indicating which frame this is; index is incremented on every iteration.
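The get_fps() seen above is a helper defined in the sample's common code, not part of the SDK; it measures the achieved frame rate. As a rough, self-contained sketch of the idea (the type and details here are illustrative, not the actual helper):

```cpp
#include <cstdint>

// Hypothetical frame-rate counter: count frames and report the rate once at
// least one second of timestamps has accumulated. The real get_fps() in the
// sample's common code may differ in detail.
struct FpsCounter {
    int64_t start_ms = -1; // timestamp of the first frame in the window
    int frames = 0;        // frames counted since start_ms

    // Feed a frame timestamp in milliseconds; returns the measured fps once
    // >= 1000 ms have elapsed, otherwise -1 (not enough data yet).
    int tick(int64_t now_ms) {
        if (start_ms < 0) { start_ms = now_ms; frames = 0; }
        ++frames;
        int64_t elapsed = now_ms - start_ms;
        if (elapsed < 1000) return -1;
        int fps = static_cast<int>(frames * 1000 / elapsed);
        start_ms = now_ms; // restart the measurement window
        frames = 0;
        return fps;
    }
};
```

In the real loop the timestamps would come from a clock such as std::chrono::steady_clock; this is why "fps:" only appears in the log once per second or so.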

  • Parse Frame
    This block parses the fetched frame:

```cpp
cv::Mat depth, irl, irr, color;
parseFrame(frame, &depth, &irl, &irr, &color, hColorIspHandle);
```

It first declares cv::Mat images for the depth map depth, the color image color, and the left and right infrared images irl and irr, then calls parseFrame() to split frame into the depth, left/right IR, and color images.

  • User Process
    After parsing, we have the camera's depth map depth, color image color, and so on, and can process them however we like. The sample implements simple display, visualizing the depth, left/right IR, and color images:

```cpp
if(!depth.empty()){
    depthViewer.show(depth);
}
if(!irl.empty()){ cv::imshow("LeftIR", irl); }
if(!irr.empty()){ cv::imshow("RightIR", irr); }
if(!color.empty()){ cv::imshow("Color", color); }

int key = cv::waitKey(1);
switch(key & 0xff) {
case 0xff:
    break;
case 'q':
    exit_main = true;
    break;
default:
    LOGD("Unmapped key %d", key);
}
```

Pressing the q key while an OpenCV window has focus sets exit_main = true, ending the frame loop.
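A note on the key & 0xff masking: cv::waitKey() returns -1 when no key arrives within the timeout, and on some platforms the return value carries extra high bits, so the sample masks to the low byte. Since -1 & 0xff equals 0xff, case 0xff is the "no key pressed" branch. A minimal stand-in for the dispatch logic (no OpenCV required) shows this:

```cpp
// Stand-in for the sample's key handling. cv::waitKey() returns -1 on
// timeout; masking with 0xff keeps only the low byte, so -1 maps to 0xff
// and a pressed 'q' stays 'q'. Returns true if a key was pressed.
bool handleKey(int key, bool& exit_main) {
    switch (key & 0xff) {
    case 0xff:              // -1 & 0xff: no key pressed, keep looping
        return false;
    case 'q':               // quit request: end the frame loop
        exit_main = true;
        return true;
    default:                // any other key is logged and ignored
        return true;
    }
}
```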

  • Return Frame Buffer

```cpp
TYISPUpdateDevice(hColorIspHandle);
LOGD("Re-enqueue buffer(%p, %d)", frame.userBuffer, frame.bufferSize);
ASSERT_OK( TYEnqueueBuffer(hDevice, frame.userBuffer, frame.bufferSize) );
```

This updates the device state and pushes the frame buffer back into the frame-buffer queue.
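The re-enqueue step matters because the driver can only fill buffers that the application has handed it (the two buffers allocated during setup); if a fetched buffer is never returned, the free queue eventually runs dry and fetching stalls. A toy illustration of the cycle, with hypothetical types rather than the SDK's:

```cpp
#include <deque>

// Hypothetical illustration of the two-buffer cycle: fetch() hands out a
// buffer for processing (stand-in for TYFetchFrame), and enqueue() returns
// it to the pool (stand-in for TYEnqueueBuffer). Once both buffers are out
// and neither has been returned, fetch() has nothing left to give.
struct BufferPool {
    std::deque<char*> free_q;               // buffers the driver may fill
    void enqueue(char* buf) { free_q.push_back(buf); }
    char* fetch() {
        if (free_q.empty()) return nullptr; // starved: fetch would block
        char* b = free_q.front();
        free_q.pop_front();
        return b;
    }
};
```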

2. Creating your own point cloud program

File layout

The simplest approach is to create a new folder directly under sample, e.g. point3D, put a main.cpp inside it, and add the folder to CMakeLists.txt:

```cmake
set(ALL_SAMPLES
    point3D          # add your own folder here
    DumpAllFeatures
    ListDevices
    ...
```

Then open a terminal in the sample/build directory and rebuild and run:

```shell
cmake ..
make
cd bin
sudo ./point3D
```

Creating the point cloud

From the analysis of the sample we know that parseFrame(frame, &depth, &irl, &irr, &color, hColorIspHandle) yields the camera's depth and color images. Processing the depth map gives each point's position (x, y, z), and processing the color image gives its color (r, g, b); together they produce a point cloud with color information.
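Conceptually, the depth-to-3D step (which the SDK performs for us below via TYMapDepthImageToPoint3d) is a pinhole back-projection using the calibration intrinsics; the SDK additionally handles lens distortion internally. A self-contained sketch of the ideal-pinhole case, with illustrative names and intrinsics:

```cpp
#include <cstdint>

struct Point3f { float x, y, z; };

// Conceptual back-projection of one depth pixel (u, v) with depth value d
// (default unit 1 mm) under an ideal pinhole model with intrinsics
// fx, fy, cx, cy. This only illustrates the geometry; the SDK's mapping
// also applies the lens-distortion model from the calibration data.
Point3f backproject(int u, int v, uint16_t d,
                    float fx, float fy, float cx, float cy) {
    float z = static_cast<float>(d);     // depth along the optical axis, mm
    return { (u - cx) * z / fx,          // X = (u - cx) * Z / fx
             (v - cy) * z / fy,          // Y = (v - cy) * Z / fy
             z };
}
```

For example, the pixel at the principal point (cx, cy) back-projects straight onto the optical axis at (0, 0, d).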

We use the Point Cloud Library (PCL) to handle the point cloud, so the PCL library must be installed first:

  • Install the PCL library

```shell
sudo apt install libpcl-dev
```
  • Add PCL to CMakeLists.txt

Add the following:

```cmake
# ========================================
# === PCL
# ========================================
find_package(PCL 1.8 REQUIRED)
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})
```

and change the target_link_libraries line (fifth from the bottom) to:

```cmake
target_link_libraries(${sample} sample_common ${ABSOLUTE_TARGET_LIB} ${OpenCV_LIBS} ${CLOUD_VIEWER} ${PCL_LIBRARIES})
```
  • Map depth image to 3D points
    Use the depth camera's calibration parameters to map the depth image to a 3-D point cloud.

(1) First obtain the depth camera's calibration parameters; per the official documentation, TYGetStruct() does this:

```cpp
TY_CAMERA_CALIB_INFO depth_calib;
TYGetStruct(hDevice, TY_COMPONENT_DEPTH_CAM, TY_STRUCT_CAM_CALIB_DATA,
            &depth_calib, sizeof(depth_calib)); // fetch the depth camera's calibration data
```

(2) Then convert the depth image to 3-D data (note that p3d must be sized before writing through &p3d[0], as the complete code below does):

```cpp
std::vector<TY_VECT_3F> p3d;     // p3d holds the 3-D data
p3d.resize(depth.size().area()); // one point per depth pixel
TYMapDepthImageToPoint3d(&depth_calib, depth.cols, depth.rows,
                         (uint16_t*)depth.data, &p3d[0]); // depth data + calibration in, 3-D data out
```

p3d[i].x, p3d[i].y, and p3d[i].z are the x, y, and z values of the i-th point.

  • Map original RGB image to depth coordinate RGB image
    Use the color camera's calibration parameters to align the color image with the depth image.

(1) First obtain the color camera's calibration parameters, again via TYGetStruct():

```cpp
TY_CAMERA_CALIB_INFO color_calib;
TYGetStruct(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA,
            &color_calib, sizeof(color_calib)); // fetch the color camera's calibration data
```

(2) Align the color image with the depth image.
First define a function doRgbRegister() that performs the alignment:

```cpp
// doRgbRegister(): align the color image with the depth image
static void doRgbRegister(const TY_CAMERA_CALIB_INFO& depth_calib,
                          const TY_CAMERA_CALIB_INFO& color_calib,
                          const cv::Mat& depth,
                          const cv::Mat& color,
                          cv::Mat& out)
{
    // do rgb undistortion
    TY_IMAGE_DATA src;
    src.width = color.cols;
    src.height = color.rows;
    src.size = color.size().area() * 3;
    src.pixelFormat = TY_PIXEL_FORMAT_RGB;
    src.buffer = color.data;

    cv::Mat undistort_color = cv::Mat(color.size(), CV_8UC3);
    TY_IMAGE_DATA dst;
    dst.width = color.cols;
    dst.height = color.rows;
    dst.size = undistort_color.size().area() * 3;
    dst.buffer = undistort_color.data;
    dst.pixelFormat = TY_PIXEL_FORMAT_RGB;
    TYUndistortImage(&color_calib, &src, NULL, &dst);

    // do register
    out.create(depth.size(), CV_8UC3);
    TYMapRGBImageToDepthCoordinate(
        &depth_calib,
        depth.cols, depth.rows, depth.ptr<uint16_t>(),
        &color_calib,
        undistort_color.cols, undistort_color.rows, undistort_color.ptr<uint8_t>(),
        out.ptr<uint8_t>());
}
```

Then call doRgbRegister() in main():

```cpp
cv::Mat color_data_mat; // color_data_mat is the aligned color image
if (!color.empty()) {
    bool hasColorCalib;
    TYHasFeature(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA, &hasColorCalib); // check whether color-camera calibration data is available
    if (hasColorCalib) {
        doRgbRegister(depth_calib, color_calib, depth, color, color_data_mat); // depth calib, color calib, depth image and color image in; aligned color image out
        cv::cvtColor(color_data_mat, color_data_mat, cv::COLOR_BGR2RGB); // convert BGR to RGB
    }
}
```
  • Build the PointXYZRGB point cloud (core code)

```cpp
pcl::PointCloud<pcl::PointXYZRGB> cloud; // the resulting point cloud
pcl::PointXYZRGB point;
for (int m = 0; m < depth.rows; m++){
    for (int n = 0; n < depth.cols; n++){
        point.x = p3d[m*depth.cols + n].x;
        point.y = p3d[m*depth.cols + n].y;
        point.z = p3d[m*depth.cols + n].z;
        point.r = color_data_mat.at<cv::Vec3b>(m, n)[0];
        point.g = color_data_mat.at<cv::Vec3b>(m, n)[1];
        point.b = color_data_mat.at<cv::Vec3b>(m, n)[2];
        cloud.points.push_back(point); // build the xyzrgb point cloud
    }
}
cloud.width = (uint32_t)cloud.points.size();
cloud.height = 1;
```
  • Visualize the point cloud

```cpp
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer1(new pcl::visualization::PCLVisualizer("3D Viewer"));

pcl::PointCloud<pcl::PointXYZRGB>::Ptr basic_cloud_ptr(new pcl::PointCloud<pcl::PointXYZRGB>);
basic_cloud_ptr = cloud.makeShared(); // convert to the shared pointer basic_cloud_ptr
basic_cloud_ptr->is_dense = false;    // a hand-built cloud is dense by default; without clearing this flag, removeNaNFromPointCloud() does nothing

pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_ptr(new pcl::PointCloud<pcl::PointXYZRGB>);
std::vector<int> mapping;
pcl::removeNaNFromPointCloud(*basic_cloud_ptr, *cloud_ptr, mapping); // remove invalid points

viewer1->removeAllPointClouds(); // clear all current point clouds
viewer1->addPointCloud<pcl::PointXYZRGB>(cloud_ptr, "initial");
viewer1->updatePointCloud(cloud_ptr, "initial");
viewer1->spinOnce(100);
```
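One caveat with the construction loop above: it copies every pixel, including invalid ones, and relies on removeNaNFromPointCloud() afterwards; it would also crash if color_data_mat is empty (e.g. when no color frame arrived). A variant that filters invalid points during construction is sketched below with minimal stand-in types instead of the PCL and SDK ones, purely to show the filtering logic:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct P3 { float x, y, z; };                         // stand-in for TY_VECT_3F
struct RGB { uint8_t r, g, b; };                      // stand-in for a color pixel
struct PointXYZRGB { float x, y, z; uint8_t r, g, b; };

// Variant of the construction loop that skips invalid points up front
// (NaN or non-positive depth) instead of relying on a later filtering
// pass. p3d is laid out row-major, so pixel (m, n) is index m*cols + n.
std::vector<PointXYZRGB> buildCloud(const std::vector<P3>& p3d,
                                    const std::vector<RGB>& rgb,
                                    int rows, int cols) {
    std::vector<PointXYZRGB> cloud;
    cloud.reserve(p3d.size());
    for (int m = 0; m < rows; m++) {
        for (int n = 0; n < cols; n++) {
            const P3& p = p3d[m * cols + n];
            if (std::isnan(p.z) || p.z <= 0) continue; // skip invalid pixel
            const RGB& c = rgb[m * cols + n];
            cloud.push_back({p.x, p.y, p.z, c.r, c.g, c.b});
        }
    }
    return cloud;
}
```

With this kind of pre-filtering the cloud can stay is_dense = true and the removeNaNFromPointCloud() pass becomes unnecessary, at the cost of losing the pixel-grid organization.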

Complete code for real-time point cloud display

```cpp
#include <TYApi.h>
#include "TYImageProc.h"
#include "../common.hpp"

#include <vector>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <pcl/point_types.h>
#include <pcl/io/ply_io.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/io/io.h>
#include <pcl/filters/filter.h>
#include <pcl/common/impl/io.hpp>

static void doRgbRegister(const TY_CAMERA_CALIB_INFO& depth_calib,
                          const TY_CAMERA_CALIB_INFO& color_calib,
                          const cv::Mat& depth,
                          const cv::Mat& color,
                          cv::Mat& out)
{
    // do rgb undistortion
    TY_IMAGE_DATA src;
    src.width = color.cols;
    src.height = color.rows;
    src.size = color.size().area() * 3;
    src.pixelFormat = TY_PIXEL_FORMAT_RGB;
    src.buffer = color.data;

    cv::Mat undistort_color = cv::Mat(color.size(), CV_8UC3);
    TY_IMAGE_DATA dst;
    dst.width = color.cols;
    dst.height = color.rows;
    dst.size = undistort_color.size().area() * 3;
    dst.buffer = undistort_color.data;
    dst.pixelFormat = TY_PIXEL_FORMAT_RGB;
    ASSERT_OK(TYUndistortImage(&color_calib, &src, NULL, &dst));

    // do register
    out.create(depth.size(), CV_8UC3);
    ASSERT_OK(TYMapRGBImageToDepthCoordinate(
        &depth_calib,
        depth.cols, depth.rows, depth.ptr<uint16_t>(),
        &color_calib,
        undistort_color.cols, undistort_color.rows, undistort_color.ptr<uint8_t>(),
        out.ptr<uint8_t>()));
}

void eventCallback(TY_EVENT_INFO *event_info, void *userdata)
{
    if (event_info->eventId == TY_EVENT_DEVICE_OFFLINE) {
        LOGD("=== Event Callback: Device Offline!");
        // Note:
        // Please set TY_BOOL_KEEP_ALIVE_ONOFF feature to false if you need to debug with breakpoint!
    }
    else if (event_info->eventId == TY_EVENT_LICENSE_ERROR) {
        LOGD("=== Event Callback: License Error!");
    }
}

int main(int argc, char* argv[])
{
    std::string ID, IP;
    TY_INTERFACE_HANDLE hIface = NULL;
    TY_ISP_HANDLE hColorIspHandle = NULL;
    TY_DEV_HANDLE hDevice = NULL;
    int32_t color, ir, depth;
    color = ir = depth = 1;

    for(int i = 1; i < argc; i++) {
        if(strcmp(argv[i], "-id") == 0){
            ID = argv[++i];
        } else if(strcmp(argv[i], "-ip") == 0) {
            IP = argv[++i];
        } else if(strcmp(argv[i], "-color=off") == 0) {
            color = 0;
        } else if(strcmp(argv[i], "-depth=off") == 0) {
            depth = 0;
        } else if(strcmp(argv[i], "-ir=off") == 0) {
            ir = 0;
        } else if(strcmp(argv[i], "-h") == 0) {
            LOGI("Usage: SimpleView_FetchFrame [-h] [-id <ID>] [-ip <IP>]");
            return 0;
        }
    }

    LOGD("Init lib");
    ASSERT_OK( TYInitLib() );
    TY_VERSION_INFO ver;
    ASSERT_OK( TYLibVersion(&ver) );
    LOGD(" - lib version: %d.%d.%d", ver.major, ver.minor, ver.patch);

    std::vector<TY_DEVICE_BASE_INFO> selected;
    ASSERT_OK( selectDevice(TY_INTERFACE_ALL, ID, IP, 1, selected) );
    ASSERT(selected.size() > 0);
    TY_DEVICE_BASE_INFO& selectedDev = selected[0];

    ASSERT_OK( TYOpenInterface(selectedDev.iface.id, &hIface) );
    ASSERT_OK( TYOpenDevice(hIface, selectedDev.id, &hDevice) );

    int32_t allComps;
    ASSERT_OK( TYGetComponentIDs(hDevice, &allComps) );

    /// try to enable color camera
    if(allComps & TY_COMPONENT_RGB_CAM && color) {
        LOGD("Has RGB camera, open RGB cam");
        ASSERT_OK( TYEnableComponents(hDevice, TY_COMPONENT_RGB_CAM) );
        // create an isp handle to convert raw image (color bayer format) to rgb image
        ASSERT_OK(TYISPCreate(&hColorIspHandle));
        // Init code can be modified in common.hpp
        // NOTE: Should set RGB image format & size before init ISP
        ASSERT_OK(ColorIspInitSetting(hColorIspHandle, hDevice));
        // You can call the following function to show color isp supported features
#if 0
        ColorIspShowSupportedFeatures(hColorIspHandle);
#endif
        // You can turn on auto exposure as follows, but the frame rate may drop.
        // The device may occasionally freeze for 1~2 seconds while the software adjusts the exposure time.
#if 0
        ASSERT_OK(ColorIspInitAutoExposure(hColorIspHandle, hDevice));
#endif
    }

    if (allComps & TY_COMPONENT_IR_CAM_LEFT && ir) {
        LOGD("Has IR left camera, open IR left cam");
        ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_LEFT));
    }

    if (allComps & TY_COMPONENT_IR_CAM_RIGHT && ir) {
        LOGD("Has IR right camera, open IR right cam");
        ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_RIGHT));
    }

    // try to enable depth map
    LOGD("Configure components, open depth cam");
    if (allComps & TY_COMPONENT_DEPTH_CAM && depth) {
        int32_t image_mode;
        ASSERT_OK(get_default_image_mode(hDevice, TY_COMPONENT_DEPTH_CAM, image_mode));
        LOGD("Select Depth Image Mode: %dx%d", TYImageWidth(image_mode), TYImageHeight(image_mode));
        ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_DEPTH_CAM, TY_ENUM_IMAGE_MODE, image_mode));
        ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_DEPTH_CAM));
        // depth map pixel format is uint16_t, whose default unit is 1 mm
        // the actual depth (mm) = PixelValue * ScaleUnit
    }

    LOGD("Prepare image buffer");
    uint32_t frameSize;
    ASSERT_OK( TYGetFrameBufferSize(hDevice, &frameSize) );
    LOGD(" - Get size of framebuffer, %d", frameSize);

    LOGD(" - Allocate & enqueue buffers");
    char* frameBuffer[2];
    frameBuffer[0] = new char[frameSize];
    frameBuffer[1] = new char[frameSize];
    LOGD(" - Enqueue buffer (%p, %d)", frameBuffer[0], frameSize);
    ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[0], frameSize) );
    LOGD(" - Enqueue buffer (%p, %d)", frameBuffer[1], frameSize);
    ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[1], frameSize) );

    LOGD("Register event callback");
    ASSERT_OK(TYRegisterEventCallback(hDevice, eventCallback, NULL));

    bool hasTrigger;
    ASSERT_OK(TYHasFeature(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &hasTrigger));
    if (hasTrigger) {
        LOGD("Disable trigger mode");
        TY_TRIGGER_PARAM trigger;
        trigger.mode = TY_TRIGGER_MODE_OFF;
        ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));
    }

    LOGD("Start capture");
    ASSERT_OK( TYStartCapture(hDevice) );

    LOGD("While loop to fetch frame");
    TY_FRAME_DATA frame;
    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer1(new pcl::visualization::PCLVisualizer("3D Viewer"));

    while(!viewer1->wasStopped()){
        int err = TYFetchFrame(hDevice, &frame, -1);

        cv::Mat depth, irl, irr, color;
        parseFrame(frame, &depth, &irl, &irr, &color, hColorIspHandle);

        std::vector<TY_VECT_3F> p3d;
        TY_CAMERA_CALIB_INFO depth_calib;
        TY_CAMERA_CALIB_INFO color_calib;
        pcl::PointCloud<pcl::PointXYZRGB> cloud;
        pcl::PointXYZRGB point;

        p3d.resize(depth.size().area());
        TYGetStruct(hDevice, TY_COMPONENT_DEPTH_CAM, TY_STRUCT_CAM_CALIB_DATA, &depth_calib, sizeof(depth_calib)); // fetch the depth camera's calibration data
        TYGetStruct(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA, &color_calib, sizeof(color_calib));   // fetch the RGB camera's calibration data
        TYMapDepthImageToPoint3d(&depth_calib, depth.cols, depth.rows, (uint16_t*)depth.data, &p3d[0]);            // depth image -> xyz point cloud

        cv::Mat color_data_mat;
        if (!color.empty()){
            bool hasColorCalib;
            TYHasFeature(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA, &hasColorCalib);
            if (hasColorCalib){
                doRgbRegister(depth_calib, color_calib, depth, color, color_data_mat);
                cv::cvtColor(color_data_mat, color_data_mat, cv::COLOR_BGR2RGB);
            }
        }

        for (int m = 0; m < depth.rows; m++){
            for (int n = 0; n < depth.cols; n++){
                point.x = p3d[m*depth.cols + n].x;
                point.y = p3d[m*depth.cols + n].y;
                point.z = p3d[m*depth.cols + n].z;
                point.r = color_data_mat.at<cv::Vec3b>(m, n)[0];
                point.g = color_data_mat.at<cv::Vec3b>(m, n)[1];
                point.b = color_data_mat.at<cv::Vec3b>(m, n)[2];
                cloud.points.push_back(point); // build the xyzrgb point cloud
            }
        }
        cloud.width = (uint32_t)cloud.points.size();
        cloud.height = 1;

        pcl::PointCloud<pcl::PointXYZRGB>::Ptr basic_cloud_ptr(new pcl::PointCloud<pcl::PointXYZRGB>);
        basic_cloud_ptr = cloud.makeShared(); // convert to the shared pointer basic_cloud_ptr
        basic_cloud_ptr->is_dense = false;    // a hand-built cloud is dense by default; without clearing this flag, removeNaNFromPointCloud() does nothing

        pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_ptr(new pcl::PointCloud<pcl::PointXYZRGB>);
        std::vector<int> mapping;
        pcl::removeNaNFromPointCloud(*basic_cloud_ptr, *cloud_ptr, mapping); // remove invalid points

        viewer1->removeAllPointClouds(); // clear all current point clouds
        viewer1->addPointCloud<pcl::PointXYZRGB>(cloud_ptr, "initial");
        viewer1->updatePointCloud(cloud_ptr, "initial");
        viewer1->spinOnce(100);

        TYISPUpdateDevice(hColorIspHandle);
        LOGD("Re-enqueue buffer(%p, %d)", frame.userBuffer, frame.bufferSize);
        ASSERT_OK( TYEnqueueBuffer(hDevice, frame.userBuffer, frame.bufferSize) );
    }

    ASSERT_OK( TYStopCapture(hDevice) );
    ASSERT_OK( TYCloseDevice(hDevice) );
    ASSERT_OK( TYCloseInterface(hIface) );
    ASSERT_OK( TYISPRelease(&hColorIspHandle) );
    ASSERT_OK( TYDeinitLib() );
    LOGD("Main done!");
    return 0;
}
```

3. Creating a standalone project

If you would rather not work inside the official SDK's directory tree, you can set up your own project: create a project folder such as TYCamera (the original post shows its file layout in a figure).

Apart from CMakeLists.txt and main.cpp, every file can be copied straight from the official SDK; main.cpp is the complete real-time point cloud display code from the previous section.

Change CMakeLists.txt to the following
(it builds and runs, though I am not very familiar with CMake, so some of it may not be best practice):

```cmake
cmake_minimum_required(VERSION 2.8)
project(rgbd_camera)

if (NOT TARGET tycam) # only build samples
    option (OpenCV_STATIC OFF)
    set(INCLUDE_PATH include)
    include_directories(${INCLUDE_PATH})
    set(ABSOLUTE_TARGET_LIB tycam)
    add_library(${ABSOLUTE_TARGET_LIB} SHARED IMPORTED)
    if(ARCH)
        set_property(TARGET ${ABSOLUTE_TARGET_LIB} PROPERTY IMPORTED_LOCATION ${CMAKE_CURRENT_SOURCE_DIR}/../lib/linux/lib_${ARCH}/libtycam.so)
    else()
        set(ABSOLUTE_TARGET_LIB -ltycam)
    endif()
endif()

# ========================================
# === common, build static lib to speed up
# ========================================
set(COMMON_SOURCES src/MatViewer.cpp src/TYThread.cpp)
add_library(sample_common STATIC ${COMMON_SOURCES})
if(UNIX)
    target_link_libraries(sample_common pthread)
endif()

# ========================================
# === OpenCV
# ========================================
set(OpenCV_DIR "/home/olefine_casia/opencv-4.5.1/build/")
find_package(OpenCV REQUIRED)
if (NOT OpenCV_FOUND)
    message(FATAL_ERROR "OpenCV library not found")
else()
    include_directories(${OpenCV_INCLUDE_DIRS})
    include_directories(${OpenCV2_INCLUDE_DIRS})
    # if(CMAKE_BUILD_TYPE STREQUAL Debug)
    #     link_directories(${OpenCV_LIB_DIR_DBG})
    # else()
    #     link_directories(${OpenCV_LIB_DIR_OPT})
    # endif()
    link_directories(${OpenCV_LIB_DIR})
endif()

# ========================================
# === PCL
# ========================================
find_package(PCL 1.8 REQUIRED)
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

aux_source_directory(./src/ DIR_SRCS)
add_executable(rgbd_camera ${DIR_SRCS})
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)
target_link_libraries(rgbd_camera ${ABSOLUTE_TARGET_LIB} ${OpenCV_LIBS} ${PCL_LIBRARIES})
```

Summary
