Intel Realsense D435 (Python Wrapper) example00: NumPy Integration, converting depth frame data to a NumPy array for processing

Published: 2025/3/19

NumPy Integration:
Librealsense frames support the buffer protocol. A NumPy array can be constructed using this protocol with no data-marshalling overhead:

Converting depth frame data to a NumPy array for processing:

import numpy as np

depth_data = depth.as_frame().get_data()
"""
as_frame(self: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.frame
"""
# Arguably .as_frame() is a no-op here, since depth is already a frame.
"""
get_data(self: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.BufData

Retrieve data from the frame handle.
"""
print('type of depth_data:', type(depth_data))
# type of depth_data: <class 'pyrealsense2.pyrealsense2.BufData'>
print(depth_data)
# <pyrealsense2.pyrealsense2.BufData object at 0x0000024F5D07BA40>

np_image = np.asanyarray(depth_data)
print('type of np_image:', type(np_image))
# print('np_image:', np_image)
print('shape of np_image:', np_image.shape)
# type of np_image: <class 'numpy.ndarray'>
# (480, 640)
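Note that the resulting array holds raw 16-bit depth units, not meters. Below is a minimal, self-contained sketch of converting the raw values to meters using the sensor's depth scale; get_depth_scale() is part of the pyrealsense2 API, while the variable names (depth_scale, depth_m) are mine:

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()  # start() returns a pipeline_profile

# Each uint16 value in the depth image is a multiple of the depth scale
# (typically 0.001 m for the D400 series, i.e. raw units are millimeters).
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()
np_image = np.asanyarray(depth.get_data())  # dtype uint16, shape (480, 640)
depth_m = np_image * depth_scale            # float array of distances in meters
print('distance at image center: %.3f m' % depth_m[240, 320])

pipeline.stop()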

Applied to Intel Realsense D435 (Python Wrapper) example00: streaming using rs.pipeline(235), this becomes:

# First import the library
import pyrealsense2 as rs
import numpy as np

# Create the pipeline object. This object owns the handles to all connected
# RealSense devices. The caller can provide a context created by the
# application, usually for playback or testing purposes.
pipeline = rs.pipeline()

pipeline.start()
"""
start(*args, **kwargs)
Overloaded function.

1. start(self: pyrealsense2.pyrealsense2.pipeline, config: rs2::config) -> rs2::pipeline_profile

   Start the pipeline streaming according to the configuration. The pipeline
   streaming loop captures samples from the device, and delivers them to the
   attached computer vision modules and processing blocks, according to each
   module's requirements and threading model. During the loop execution, the
   application can access the camera streams by calling wait_for_frames() or
   poll_for_frames(). The streaming loop runs until the pipeline is stopped.
   Starting the pipeline is possible only when it is not started. If the
   pipeline was started, an exception is raised. The pipeline selects and
   activates the device upon start, according to the configuration or a
   default configuration. When the rs2::config is provided to the method, the
   pipeline tries to activate the config resolve() result. If the application
   requests conflict with the pipeline's computer vision modules, or no
   matching device is available on the platform, the method fails. Available
   configurations and devices may change between the config resolve() call and
   pipeline start, in case devices are connected or disconnected, or another
   application acquires ownership of a device.

2. start(self: pyrealsense2.pyrealsense2.pipeline) -> rs2::pipeline_profile

   Start the pipeline streaming with its default configuration. The pipeline
   streaming loop captures samples from the device, and delivers them to the
   attached computer vision modules and processing blocks, according to each
   module's requirements and threading model. During the loop execution, the
   application can access the camera streams by calling wait_for_frames() or
   poll_for_frames(). The streaming loop runs until the pipeline is stopped.
   Starting the pipeline is possible only when it is not started. If the
   pipeline was started, an exception is raised.

3. start(self: pyrealsense2.pyrealsense2.pipeline, callback: Callable[[pyrealsense2.pyrealsense2.frame], None]) -> rs2::pipeline_profile

   Start the pipeline streaming with its default configuration. The pipeline
   captures samples from the device, and delivers them through the provided
   frame callback. Starting the pipeline is possible only when it is not
   started. If the pipeline was started, an exception is raised. When starting
   the pipeline with a callback, both wait_for_frames() and poll_for_frames()
   will throw an exception.

4. start(self: pyrealsense2.pyrealsense2.pipeline, config: rs2::config, callback: Callable[[pyrealsense2.pyrealsense2.frame], None]) -> rs2::pipeline_profile

   Start the pipeline streaming according to the configuration. The pipeline
   captures samples from the device, and delivers them through the provided
   frame callback. Starting the pipeline is possible only when it is not
   started. If the pipeline was started, an exception is raised. When starting
   the pipeline with a callback, both wait_for_frames() and poll_for_frames()
   will throw an exception. The pipeline selects and activates the device upon
   start, according to the configuration or a default configuration. When the
   rs2::config is provided to the method, the pipeline tries to activate the
   config resolve() result. If the application requests conflict with the
   pipeline's computer vision modules, or no matching device is available on
   the platform, the method fails. Available configurations and devices may
   change between the config resolve() call and pipeline start, in case
   devices are connected or disconnected, or another application acquires
   ownership of a device.
"""

try:
    while True:
        frames = pipeline.wait_for_frames()
        """
        wait_for_frames(self: pyrealsense2.pyrealsense2.pipeline, timeout_ms: int=5000) -> pyrealsense2.pyrealsense2.composite_frame

        Wait until a new set of frames becomes available. The frames set
        includes time-synchronized frames of each enabled stream in the
        pipeline. In case of different frame rates of the streams, the frames
        set includes a matching frame of the slow stream, which may have been
        included in a previous frames set. The method blocks the calling
        thread, and fetches the latest unread frames set. Device frames which
        were produced while the function wasn't called are dropped. To avoid
        frame drops, this method should be called as fast as the device frame
        rate. The application can keep the frame handles to defer processing.
        However, if the application keeps too long a history, the device may
        lack memory resources to produce new frames, and the following calls
        to this method shall fail to retrieve new frames until resources
        become available.
        """
        depth = frames.get_depth_frame()
        """
        get_depth_frame(self: pyrealsense2.pyrealsense2.composite_frame) -> rs2::depth_frame

        Retrieve the first depth frame; if no frame is found, return an empty frame instance.
        """
        print(type(frames))  # <class 'pyrealsense2.pyrealsense2.composite_frame'>
        print(type(depth))   # <class 'pyrealsense2.pyrealsense2.depth_frame'>
        print(frames)        # <pyrealsense2.pyrealsense2.composite_frame object at 0x000001E4D0AAB7D8>
        print(depth)         # <pyrealsense2.pyrealsense2.depth_frame object at 0x000001E4D0C4B228>

        depth_data = depth.as_frame().get_data()
        print('type of depth_data:', type(depth_data))
        # type of depth_data: <class 'pyrealsense2.pyrealsense2.BufData'>
        print(depth_data)
        # <pyrealsense2.pyrealsense2.BufData object at 0x0000024F5D07BA40>
        np_image = np.asanyarray(depth_data)
        print('type of np_image:', type(np_image))
        # print('np_image:', np_image)
        print('shape of np_image:', np_image.shape)
        # type of np_image: <class 'numpy.ndarray'>
        # (480, 640)

        # If no depth frame was received, skip to the next iteration.
        if not depth:
            continue
        print('not depth:', not depth)
        # not depth: False
        # If depth is empty (False), not depth is True; if depth holds a frame
        # (True), not depth is False.

        # Print a simple text-based representation of the image, by breaking it
        # into 10x20 pixel regions and approximating the coverage of pixels
        # within one meter.
        coverage = [0] * 64
        print(type(coverage))  # <class 'list'>
        print(coverage)        # [0, 0, 0, ..., 0] (64 zeros)
        for y in range(480):
            for x in range(640):
                # Depth (in meters) of the pixel at (x, y) in the current depth image.
                dist = depth.get_distance(x, y)
                """
                get_distance(self: pyrealsense2.pyrealsense2.depth_frame, x: int, y: int) -> float

                Provide the depth in meters at the given pixel
                """
                # If the pixel at (x, y) lies within 1 m, increment the list
                # element responsible for it (e.g. x in 0..9 maps to coverage[0]).
                if 0 < dist and dist < 1:
                    # Every 10 pixels along x collapse into one region
                    # (640/10 = 64 regions); count the qualifying pixels.
                    coverage[x // 10] += 1
            # Every 20 rows along y collapse into one region (480/20 = 24 output lines).
            if y % 20 == 19:
                line = ""
                # A coverage element is at most 200 (all 10x20 pixels of the
                # region lie within the given depth range).
                for c in coverage:
                    # c // 25 is at most 8; approximate the depth image with
                    # characters ordered from least to most ink coverage.
                    line += " .:nhBXWW"[c // 25]
                # Reset the coverage list.
                coverage = [0] * 64
                print(line)
finally:
    pipeline.stop()
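Because the frame converts to a NumPy array with no copying overhead, the per-pixel get_distance() loop above can also be expressed as vectorized array operations. The following is a sketch under the assumption that np_image and depth_scale were obtained as in the earlier snippets; the block sizes, the 1 m threshold, and the character ramp mirror the loop version:

import numpy as np

def coverage_lines(np_image, depth_scale):
    # Convert raw uint16 units to meters, then mark pixels closer than 1 m.
    meters = np_image * depth_scale
    within_1m = (meters > 0) & (meters < 1)  # boolean array, shape (480, 640)
    # Sum over 20x10 blocks: reshape (480, 640) -> (24, 20, 64, 10) and
    # reduce over the in-block axes, giving per-region counts of 0..200.
    blocks = within_1m.reshape(24, 20, 64, 10).sum(axis=(1, 3))
    chars = np.array(list(" .:nhBXWW"))      # count // 25 is at most 8
    return ["".join(chars[row // 25]) for row in blocks]

for line in coverage_lines(np_image, depth_scale):
    print(line)

The reshape works because 480 = 24 x 20 and 640 = 64 x 10, so each (20, 10) block of the boolean image maps to one output character, exactly as the nested loops do.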

