
yunyang tensorflow-yolov3 + Intel Realsense D435 (concurrent): using the locals() function to batch-configure cameras, run detection, and draw bounding boxes (code record, code examples)

發(fā)布時(shí)間:2025/3/19 编程问答 33 豆豆
生活随笔 收集整理的這篇文章主要介紹了 yunyang tensorflow-yolov3 Intel Realsense D435 (并发)使用locals()函数批量配置摄像头运行识别程序并画框(代码记录)(代码示例) 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

Contents

      • 20191126
      • 20191202-1
      • 20191202-2

20191126

# -*- encoding: utf-8 -*-
"""
@File    : test-使用locals()函數批量配置攝像頭運行識別程序並畫框.py
@Time    : 2019/11/26 11:20
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""
import cv2
import numpy as np
import tensorflow as tf
import core.utils as utils
from core.config import cfg
from core.yolov3 import YOLOV3
import pyrealsense2 as rs


class YoloTest(object):
    def __init__(self):
        # D·C 191111: __C.TEST.INPUT_SIZE = 544
        self.input_size = cfg.TEST.INPUT_SIZE
        self.anchor_per_scale = cfg.YOLO.ANCHOR_PER_SCALE
        # Dontla 191106: class-name dictionary read from the class.names file
        self.classes = utils.read_class_names(cfg.YOLO.CLASSES)
        # D·C 191115: number of classes
        self.num_classes = len(self.classes)
        self.anchors = np.array(utils.get_anchors(cfg.YOLO.ANCHORS))
        # D·C 191111: __C.TEST.SCORE_THRESHOLD = 0.3
        self.score_threshold = cfg.TEST.SCORE_THRESHOLD
        # D·C 191120: __C.TEST.IOU_THRESHOLD = 0.45
        self.iou_threshold = cfg.TEST.IOU_THRESHOLD
        self.moving_ave_decay = cfg.YOLO.MOVING_AVE_DECAY
        # D·C 191120: __C.TEST.ANNOT_PATH = "./data/dataset/Dontla/20191023_Artificial_Flower/test.txt"
        self.annotation_path = cfg.TEST.ANNOT_PATH
        # D·C 191120: __C.TEST.WEIGHT_FILE = "./checkpoint/f_g_c_weights_files/yolov3_test_loss=15.8845.ckpt-47"
        self.weight_file = cfg.TEST.WEIGHT_FILE
        # D·C 191115: write flag (bool)
        self.write_image = cfg.TEST.WRITE_IMAGE
        # D·C 191115: __C.TEST.WRITE_IMAGE_PATH = "./data/detection/" (where annotated detection images are written)
        self.write_image_path = cfg.TEST.WRITE_IMAGE_PATH
        # D·C 191116: TEST.SHOW_LABEL is set to True
        self.show_label = cfg.TEST.SHOW_LABEL

        # D·C 191120: create the name scope 'input'
        with tf.name_scope('input'):
            # D·C 191120: create the input placeholders
            self.input_data = tf.placeholder(dtype=tf.float32, name='input_data')
            self.trainable = tf.placeholder(dtype=tf.bool, name='trainable')

        model = YOLOV3(self.input_data, self.trainable)
        self.pred_sbbox, self.pred_mbbox, self.pred_lbbox = model.pred_sbbox, model.pred_mbbox, model.pred_lbbox

        # D·C 191120: name scope for the exponential moving average
        with tf.name_scope('ema'):
            ema_obj = tf.train.ExponentialMovingAverage(self.moving_ave_decay)

        # D·C 191120: allow_soft_placement=True lets TensorFlow pick an available GPU or CPU automatically
        self.sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
        # D·C 191120: variables_to_restore() maps the moving-average shadow variables back onto the variables themselves when loading the model
        self.saver = tf.train.Saver(ema_obj.variables_to_restore())
        # D·C 191120: restore the weights from the checkpoint
        self.saver.restore(self.sess, self.weight_file)

    def predict(self, image):
        # D·C 191107: work on a copy so the original image is not modified in place
        org_image = np.copy(image)
        # D·C 191107: original image size
        org_h, org_w, _ = org_image.shape
        # D·C 191108: letterbox the source image into the square network input (default 544x544, gray padding above and below)
        image_data = utils.image_preprocess(image, [self.input_size, self.input_size])
        # print(image_data.shape)  # (544, 544, 3)
        # D·C 191108: add a batch axis
        image_data = image_data[np.newaxis, ...]
        # print(image_data.shape)  # (1, 544, 544, 3)

        # D·C 191110: the three outputs hold the raw predicted boxes at the three scales (useful, useless and overlapping boxes all mixed together)
        pred_sbbox, pred_mbbox, pred_lbbox = self.sess.run(
            [self.pred_sbbox, self.pred_mbbox, self.pred_lbbox],
            feed_dict={
                self.input_data: image_data,
                self.trainable: False
            })
        # D·C 191110: all three are <class 'numpy.ndarray'>
        # print(pred_sbbox.shape)  # (1, 68, 68, 3, 6)
        # print(pred_mbbox.shape)  # (1, 34, 34, 3, 6)
        # print(pred_lbbox.shape)  # (1, 17, 17, 3, 6)

        # D·C 191110: reshape each scale to (-1, 5 + num_classes) and concatenate them; the last num_classes columns hold the per-class probabilities
        pred_bbox = np.concatenate([np.reshape(pred_sbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_mbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_lbbox, (-1, 5 + self.num_classes))], axis=0)
        # print(pred_bbox.shape)  # (18207, 6)

        # D·C 191111: first filter - drop boxes scoring below score_threshold (far fewer boxes remain afterwards)
        # D·C 191115: bboxes has shape [n, 6]: four coordinates, score, class index
        bboxes = utils.postprocess_boxes(pred_bbox, (org_h, org_w), self.input_size, self.score_threshold)
        # D·C 191111: second filter - non-maximum suppression with iou_threshold
        bboxes = utils.nms(bboxes, self.iou_threshold)
        return bboxes

    def dontla_evaluate_detect(self):
        ctx = rs.context()
        # Check whether all cameras are connected
        cam_num = len(ctx.devices)
        if cam_num < 2:
            print('Not all cameras are connected!')
        else:
            for i in range(cam_num):
                locals()['pipeline' + str(i)] = rs.pipeline()
                locals()['config' + str(i)] = rs.config()
                locals()['serial' + str(i)] = ctx.devices[i].get_info(rs.camera_info.serial_number)
                locals()['config' + str(i)].enable_device(locals()['serial' + str(i)])
                locals()['config' + str(i)].enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
                locals()['config' + str(i)].enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
                locals()['pipeline' + str(i)].start(locals()['config' + str(i)])
                # Create an align object (align depth to color)
                locals()['align' + str(i)] = rs.align(rs.stream.color)
            try:
                while True:
                    for i in range(cam_num):
                        locals()['frames' + str(i)] = locals()['pipeline' + str(i)].wait_for_frames()
                        # Get the aligned frameset
                        locals()['aligned_frames' + str(i)] = locals()['align' + str(i)].process(locals()['frames' + str(i)])
                        # Get the aligned depth and color frames
                        locals()['aligned_depth_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_depth_frame()
                        locals()['color_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_color_frame()
                        if not locals()['aligned_depth_frame' + str(i)] or not locals()['color_frame' + str(i)]:
                            continue
                        # Get the color-frame intrinsics
                        locals()['color_profile' + str(i)] = locals()['color_frame' + str(i)].get_profile()
                        locals()['cvsprofile' + str(i)] = rs.video_stream_profile(locals()['color_profile' + str(i)])
                        locals()['color_intrin' + str(i)] = locals()['cvsprofile' + str(i)].get_intrinsics()
                        locals()['color_intrin_part' + str(i)] = [locals()['color_intrin' + str(i)].ppx,
                                                                  locals()['color_intrin' + str(i)].ppy,
                                                                  locals()['color_intrin' + str(i)].fx,
                                                                  locals()['color_intrin' + str(i)].fy]
                        locals()['color_image' + str(i)] = np.asanyarray(locals()['color_frame' + str(i)].get_data())
                        # D·C 191121: show the raw frame for a quick check
                        # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
                        # cv2.imshow('RealSense', color_frame)
                        # cv2.waitKey(1)
                        locals()['bboxes_pr' + str(i)] = self.predict(locals()['color_image' + str(i)])
                        locals()['image' + str(i)] = utils.draw_bbox(locals()['color_image' + str(i)],
                                                                     locals()['bboxes_pr' + str(i)],
                                                                     locals()['aligned_depth_frame' + str(i)],
                                                                     locals()['color_intrin_part' + str(i)],
                                                                     show_label=self.show_label)
                        # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
                        cv2.imshow('window{}'.format(i), locals()['image' + str(i)])
                        cv2.waitKey(1)
            finally:
                # Note: only the pipeline for the last value of i is stopped here
                locals()['pipeline' + str(i)].stop()


if __name__ == '__main__':
    YoloTest().dontla_evaluate_detect()
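
A note on the pattern used above: the per-camera objects are stored by writing string keys such as 'pipeline0' into the dict returned by locals(). In CPython this happens to work because those keys never collide with real local variable names, but writing to locals() inside a function is implementation-dependent and not guaranteed to behave like a normal namespace. An ordinary dictionary expresses the same batch set-up more safely. The sketch below is illustrative only (the names open_cameras and cams are not from the original script), assuming the same 640x480 depth/color profiles:

# Dictionary-based sketch of the same batch camera set-up (illustrative, not the original code)
import pyrealsense2 as rs

def open_cameras(ctx):
    cams = {}
    for i, dev in enumerate(ctx.query_devices()):
        serial = dev.get_info(rs.camera_info.serial_number)
        config = rs.config()
        config.enable_device(serial)
        config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
        config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
        pipeline = rs.pipeline(ctx)
        pipeline.start(config)
        # One align object per camera, aligning depth to color
        cams[i] = {'serial': serial, 'pipeline': pipeline, 'align': rs.align(rs.stream.color)}
    return cams

Inside the frame loop each camera's objects would then be reached as cams[i]['pipeline'], cams[i]['align'] and so on, without relying on locals() acting as a writable namespace.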

20191202-1

Added a camera initialization mechanism (hardware_reset), a device detection mechanism, a program termination mechanism, and so on.
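
Before the full listing, here is a minimal sketch of just the reset-and-recheck idea described above, separated from the detection code. It is an illustration only: the constant CAM_NUM, the retry parameters and the helper name reset_and_wait are assumptions, not part of the original script.

import sys
import time
import pyrealsense2 as rs

CAM_NUM = 6  # expected number of connected D435 cameras (assumed value)

def reset_and_wait(ctx, retries=10, delay=0.5):
    # Reset every camera, then poll until the expected number re-enumerates
    for dev in ctx.query_devices():
        dev.hardware_reset()
    for _ in range(retries):
        if len(ctx.query_devices()) == CAM_NUM:
            return True
        time.sleep(delay)
    return False

if __name__ == '__main__':
    ctx = rs.context()
    if not reset_and_wait(ctx):
        print('Camera check timed out, please check the connections!')
        sys.exit()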

# -*- coding: utf-8 -*-
"""
@File    : test-multicam_multithreading.py
@Time    : 2019/11/30 15:18
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""
import cv2
import numpy as np
import tensorflow as tf
import core.utils as utils
from core.config import cfg
from core.yolov3 import YOLOV3
import pyrealsense2 as rs
import time
import sys


class YoloTest(object):
    def __init__(self):
        # D·C 191111: __C.TEST.INPUT_SIZE = 544
        self.input_size = cfg.TEST.INPUT_SIZE
        self.anchor_per_scale = cfg.YOLO.ANCHOR_PER_SCALE
        # Dontla 191106: class-name dictionary read from the class.names file
        self.classes = utils.read_class_names(cfg.YOLO.CLASSES)
        # D·C 191115: number of classes
        self.num_classes = len(self.classes)
        self.anchors = np.array(utils.get_anchors(cfg.YOLO.ANCHORS))
        # D·C 191111: __C.TEST.SCORE_THRESHOLD = 0.3
        self.score_threshold = cfg.TEST.SCORE_THRESHOLD
        # D·C 191120: __C.TEST.IOU_THRESHOLD = 0.45
        self.iou_threshold = cfg.TEST.IOU_THRESHOLD
        self.moving_ave_decay = cfg.YOLO.MOVING_AVE_DECAY
        # D·C 191120: __C.TEST.ANNOT_PATH = "./data/dataset/Dontla/20191023_Artificial_Flower/test.txt"
        self.annotation_path = cfg.TEST.ANNOT_PATH
        # D·C 191120: __C.TEST.WEIGHT_FILE = "./checkpoint/f_g_c_weights_files/yolov3_test_loss=15.8845.ckpt-47"
        self.weight_file = cfg.TEST.WEIGHT_FILE
        # D·C 191115: write flag (bool)
        self.write_image = cfg.TEST.WRITE_IMAGE
        # D·C 191115: __C.TEST.WRITE_IMAGE_PATH = "./data/detection/" (where annotated detection images are written)
        self.write_image_path = cfg.TEST.WRITE_IMAGE_PATH
        # D·C 191116: TEST.SHOW_LABEL is set to True
        self.show_label = cfg.TEST.SHOW_LABEL

        # D·C 191120: create the name scope 'input'
        with tf.name_scope('input'):
            # D·C 191120: create the input placeholders
            self.input_data = tf.placeholder(dtype=tf.float32, name='input_data')
            self.trainable = tf.placeholder(dtype=tf.bool, name='trainable')

        model = YOLOV3(self.input_data, self.trainable)
        self.pred_sbbox, self.pred_mbbox, self.pred_lbbox = model.pred_sbbox, model.pred_mbbox, model.pred_lbbox

        # D·C 191120: name scope for the exponential moving average
        with tf.name_scope('ema'):
            ema_obj = tf.train.ExponentialMovingAverage(self.moving_ave_decay)

        # D·C 191120: allow_soft_placement=True lets TensorFlow pick an available GPU or CPU automatically
        self.sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
        # D·C 191120: variables_to_restore() maps the moving-average shadow variables back onto the variables themselves when loading the model
        self.saver = tf.train.Saver(ema_obj.variables_to_restore())
        # D·C 191120: restore the weights from the checkpoint
        self.saver.restore(self.sess, self.weight_file)

    def predict(self, image):
        # D·C 191107: work on a copy so the original image is not modified in place
        org_image = np.copy(image)
        # D·C 191107: original image size
        org_h, org_w, _ = org_image.shape
        # D·C 191108: letterbox the source image into the square network input (default 544x544, gray padding above and below)
        image_data = utils.image_preprocess(image, [self.input_size, self.input_size])
        # print(image_data.shape)  # (544, 544, 3)
        # D·C 191108: add a batch axis
        image_data = image_data[np.newaxis, ...]
        # print(image_data.shape)  # (1, 544, 544, 3)

        # D·C 191110: the three outputs hold the raw predicted boxes at the three scales (useful, useless and overlapping boxes all mixed together)
        pred_sbbox, pred_mbbox, pred_lbbox = self.sess.run(
            [self.pred_sbbox, self.pred_mbbox, self.pred_lbbox],
            feed_dict={
                self.input_data: image_data,
                self.trainable: False
            })
        # D·C 191110: all three are <class 'numpy.ndarray'>
        # print(pred_sbbox.shape)  # (1, 68, 68, 3, 6)
        # print(pred_mbbox.shape)  # (1, 34, 34, 3, 6)
        # print(pred_lbbox.shape)  # (1, 17, 17, 3, 6)

        # D·C 191110: reshape each scale to (-1, 5 + num_classes) and concatenate them; the last num_classes columns hold the per-class probabilities
        pred_bbox = np.concatenate([np.reshape(pred_sbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_mbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_lbbox, (-1, 5 + self.num_classes))], axis=0)
        # print(pred_bbox.shape)  # (18207, 6)

        # D·C 191111: first filter - drop boxes scoring below score_threshold (far fewer boxes remain afterwards)
        # D·C 191115: bboxes has shape [n, 6]: four coordinates, score, class index
        bboxes = utils.postprocess_boxes(pred_bbox, (org_h, org_w), self.input_size, self.score_threshold)
        # D·C 191111: second filter - non-maximum suppression with iou_threshold
        bboxes = utils.nms(bboxes, self.iou_threshold)
        return bboxes

    def dontla_evaluate_detect(self):
        ctx = rs.context()
        # devices = ctx.query_devices()
        # Number of cameras
        cam_num = 6

        # Reset every camera in turn
        # Should there be a delay after hardware_reset()? Without one it raises an error
        for dev in ctx.query_devices():
            dev.hardware_reset()
            while len(ctx.query_devices()) != cam_num:
                time.sleep(0.5)
            print('Camera {} initialized successfully'.format(dev.get_info(rs.camera_info.serial_number)))
        # D·C 191202: I suspect the cameras are unstable for several seconds after a reset; whether that is the same issue as devices disappearing from ctx.query_devices() remains to be verified!
        # D·C 191202: I also suspect that calling ctx.query_devices() itself disturbs the device connections, so it is called as rarely as possible here. Even if it has no effect, it is still better to wait until the devices are stable before continuing.
        # Countdown delay to guard against a failed reset
        # sleep_time = 0
        # for i in range(sleep_time):
        #     print('Countdown {}'.format(sleep_time - i))
        #     time.sleep(1)

        # Repeatedly check whether the camera count is 6; if so, continue and record the actual number of connected cameras; otherwise keep checking (exit the program once the retry limit is exceeded).
        devices = ctx.query_devices()
        connected_cam_num = len(devices)
        veri_times = 10
        while connected_cam_num != cam_num:
            veri_times -= 1
            if veri_times == -1:
                sys.exit()
            devices = ctx.query_devices()
            connected_cam_num = len(devices)
        print('Number of cameras: {}'.format(connected_cam_num))

        # Print each camera's serial number and USB port, and build the window-name strings
        cam_id = 0
        serial_list = []
        for i in devices:
            cam_id += 1
            serial_list.append('camera{}; serials number {}; usb port {}'.format(
                cam_id, i.get_info(rs.camera_info.serial_number),
                i.get_info(rs.camera_info.usb_type_descriptor)))
            print('serial number {}:{};usb port:{}'.format(cam_id, i.get_info(rs.camera_info.serial_number),
                                                           i.get_info(rs.camera_info.usb_type_descriptor)))

        # Configure the basic objects for each camera
        for i in range(connected_cam_num):
            locals()['pipeline' + str(i)] = rs.pipeline()
            locals()['config' + str(i)] = rs.config()
            locals()['serial' + str(i)] = ctx.devices[i].get_info(rs.camera_info.serial_number)
            locals()['config' + str(i)].enable_device(locals()['serial' + str(i)])
            locals()['config' + str(i)].enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
            locals()['config' + str(i)].enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
            locals()['pipeline' + str(i)].start(locals()['config' + str(i)])
            # Create an align object (align depth to color)
            locals()['align' + str(i)] = rs.align(rs.stream.color)

        # Run the streams and detect
        try:
            # break flag so a key press can leave both loops and close the windows
            break2 = False
            while True:
                for i in range(connected_cam_num):
                    locals()['frames' + str(i)] = locals()['pipeline' + str(i)].wait_for_frames()
                    # Get the aligned frameset
                    locals()['aligned_frames' + str(i)] = locals()['align' + str(i)].process(locals()['frames' + str(i)])
                    # Get the aligned depth and color frames
                    locals()['aligned_depth_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_depth_frame()
                    locals()['color_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_color_frame()
                    if not locals()['aligned_depth_frame' + str(i)] or not locals()['color_frame' + str(i)]:
                        continue
                    # Get the color-frame intrinsics
                    locals()['color_profile' + str(i)] = locals()['color_frame' + str(i)].get_profile()
                    locals()['cvsprofile' + str(i)] = rs.video_stream_profile(locals()['color_profile' + str(i)])
                    locals()['color_intrin' + str(i)] = locals()['cvsprofile' + str(i)].get_intrinsics()
                    locals()['color_intrin_part' + str(i)] = [locals()['color_intrin' + str(i)].ppx,
                                                              locals()['color_intrin' + str(i)].ppy,
                                                              locals()['color_intrin' + str(i)].fx,
                                                              locals()['color_intrin' + str(i)].fy]
                    locals()['color_image' + str(i)] = np.asanyarray(locals()['color_frame' + str(i)].get_data())
                    # D·C 191121: show the raw frame for a quick check
                    # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
                    # cv2.imshow('RealSense', color_frame)
                    # cv2.waitKey(1)
                    locals()['bboxes_pr' + str(i)] = self.predict(locals()['color_image' + str(i)])
                    locals()['image' + str(i)] = utils.draw_bbox(locals()['color_image' + str(i)],
                                                                 locals()['bboxes_pr' + str(i)],
                                                                 locals()['aligned_depth_frame' + str(i)],
                                                                 locals()['color_intrin_part' + str(i)],
                                                                 show_label=self.show_label)
                    # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
                    # cv2.imshow('window{}'.format(i), locals()['image' + str(i)])
                    cv2.imshow('{}'.format(serial_list[i]), locals()['image' + str(i)])
                    key = cv2.waitKey(1)
                    # Exit the loop if ESC is pressed
                    if key == 27:
                        # Using return directly would probably work as well
                        # return
                        break2 = True
                        break
                if break2 == True:
                    break
        finally:
            # It seems safer to close the windows before stopping the streams
            # Destroy all windows
            cv2.destroyAllWindows()
            print('All windows closed!')
            # Stop all streams
            # Note: only the pipeline for the last value of i is stopped here
            locals()['pipeline' + str(i)].stop()
            print('Stopping all streams, please wait a few seconds for the program to finish cleanly!')


if __name__ == '__main__':
    YoloTest().dontla_evaluate_detect()
    print('Program finished!')

20191202-2

Added a continuous verification mechanism, optimized part of the structure, and tidied up some of the code.
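
The continuous verification mechanism mentioned above requires the enumerated device count to match the expected count for a number of consecutive polls before the program proceeds, and aborts after a maximum number of polls. Here is a minimal standalone sketch of that logic; the helper name wait_until_stable and the small poll delay are additions for illustration (the original loop polls without sleeping).

import sys
import time
import pyrealsense2 as rs

def wait_until_stable(ctx, cam_num, stable_value=10, max_veri_times=100, delay=0.1):
    # The device count must equal cam_num for `stable_value` consecutive polls;
    # give up after `max_veri_times` polls in total.
    continuous = 0
    for _ in range(max_veri_times):
        if len(ctx.query_devices()) == cam_num:
            continuous += 1
            if continuous == stable_value:
                return True
        else:
            continuous = 0
        time.sleep(delay)
    return False

if __name__ == '__main__':
    ctx = rs.context()
    if not wait_until_stable(ctx, cam_num=6):
        print('Verification timed out, please check the camera connections!')
        sys.exit()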

# -*- coding: utf-8 -*-
"""
@File    : test-multicam_multithreading.py
@Time    : 2019/11/30 15:18
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""
import cv2
import numpy as np
import tensorflow as tf
import core.utils as utils
from core.config import cfg
from core.yolov3 import YOLOV3
import pyrealsense2 as rs
import time
import sys


class YoloTest(object):
    def __init__(self):
        # D·C 191111: __C.TEST.INPUT_SIZE = 544
        self.input_size = cfg.TEST.INPUT_SIZE
        self.anchor_per_scale = cfg.YOLO.ANCHOR_PER_SCALE
        # Dontla 191106: class-name dictionary read from the class.names file
        self.classes = utils.read_class_names(cfg.YOLO.CLASSES)
        # D·C 191115: number of classes
        self.num_classes = len(self.classes)
        self.anchors = np.array(utils.get_anchors(cfg.YOLO.ANCHORS))
        # D·C 191111: __C.TEST.SCORE_THRESHOLD = 0.3
        self.score_threshold = cfg.TEST.SCORE_THRESHOLD
        # D·C 191120: __C.TEST.IOU_THRESHOLD = 0.45
        self.iou_threshold = cfg.TEST.IOU_THRESHOLD
        self.moving_ave_decay = cfg.YOLO.MOVING_AVE_DECAY
        # D·C 191120: __C.TEST.ANNOT_PATH = "./data/dataset/Dontla/20191023_Artificial_Flower/test.txt"
        self.annotation_path = cfg.TEST.ANNOT_PATH
        # D·C 191120: __C.TEST.WEIGHT_FILE = "./checkpoint/f_g_c_weights_files/yolov3_test_loss=15.8845.ckpt-47"
        self.weight_file = cfg.TEST.WEIGHT_FILE
        # D·C 191115: write flag (bool)
        self.write_image = cfg.TEST.WRITE_IMAGE
        # D·C 191115: __C.TEST.WRITE_IMAGE_PATH = "./data/detection/" (where annotated detection images are written)
        self.write_image_path = cfg.TEST.WRITE_IMAGE_PATH
        # D·C 191116: TEST.SHOW_LABEL is set to True
        self.show_label = cfg.TEST.SHOW_LABEL

        # D·C 191120: create the name scope 'input'
        with tf.name_scope('input'):
            # D·C 191120: create the input placeholders
            self.input_data = tf.placeholder(dtype=tf.float32, name='input_data')
            self.trainable = tf.placeholder(dtype=tf.bool, name='trainable')

        model = YOLOV3(self.input_data, self.trainable)
        self.pred_sbbox, self.pred_mbbox, self.pred_lbbox = model.pred_sbbox, model.pred_mbbox, model.pred_lbbox

        # D·C 191120: name scope for the exponential moving average
        with tf.name_scope('ema'):
            ema_obj = tf.train.ExponentialMovingAverage(self.moving_ave_decay)

        # D·C 191120: allow_soft_placement=True lets TensorFlow pick an available GPU or CPU automatically
        self.sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
        # D·C 191120: variables_to_restore() maps the moving-average shadow variables back onto the variables themselves when loading the model
        self.saver = tf.train.Saver(ema_obj.variables_to_restore())
        # D·C 191120: restore the weights from the checkpoint
        self.saver.restore(self.sess, self.weight_file)

    def predict(self, image):
        # D·C 191107: work on a copy so the original image is not modified in place
        org_image = np.copy(image)
        # D·C 191107: original image size
        org_h, org_w, _ = org_image.shape
        # D·C 191108: letterbox the source image into the square network input (default 544x544, gray padding above and below)
        image_data = utils.image_preprocess(image, [self.input_size, self.input_size])
        # print(image_data.shape)  # (544, 544, 3)
        # D·C 191108: add a batch axis
        image_data = image_data[np.newaxis, ...]
        # print(image_data.shape)  # (1, 544, 544, 3)

        # D·C 191110: the three outputs hold the raw predicted boxes at the three scales (useful, useless and overlapping boxes all mixed together)
        pred_sbbox, pred_mbbox, pred_lbbox = self.sess.run(
            [self.pred_sbbox, self.pred_mbbox, self.pred_lbbox],
            feed_dict={
                self.input_data: image_data,
                self.trainable: False
            })
        # D·C 191110: all three are <class 'numpy.ndarray'>
        # print(pred_sbbox.shape)  # (1, 68, 68, 3, 6)
        # print(pred_mbbox.shape)  # (1, 34, 34, 3, 6)
        # print(pred_lbbox.shape)  # (1, 17, 17, 3, 6)

        # D·C 191110: reshape each scale to (-1, 5 + num_classes) and concatenate them; the last num_classes columns hold the per-class probabilities
        pred_bbox = np.concatenate([np.reshape(pred_sbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_mbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_lbbox, (-1, 5 + self.num_classes))], axis=0)
        # print(pred_bbox.shape)  # (18207, 6)

        # D·C 191111: first filter - drop boxes scoring below score_threshold (far fewer boxes remain afterwards)
        # D·C 191115: bboxes has shape [n, 6]: four coordinates, score, class index
        bboxes = utils.postprocess_boxes(pred_bbox, (org_h, org_w), self.input_size, self.score_threshold)
        # D·C 191111: second filter - non-maximum suppression with iou_threshold
        bboxes = utils.nms(bboxes, self.iou_threshold)
        return bboxes

    def dontla_evaluate_detect(self):
        # Number of cameras (set the total number of cameras to use here)
        cam_num = 6

        ctx = rs.context()

        # Continuous verification mechanism
        # D·C 1911202: max_veri_times is the maximum number of checks; continuous_stable_value is the number of consecutive stable checks used to decide that the devices are stable after a reset
        max_veri_times = 100
        continuous_stable_value = 10
        print('\n', end='')
        print('Starting continuous verification, stable value: {}, maximum checks: {}:'.format(continuous_stable_value, max_veri_times))
        continuous_value = 0
        veri_times = 0
        while True:
            devices = ctx.query_devices()
            connected_cam_num = len(devices)
            if connected_cam_num == cam_num:
                continuous_value += 1
                if continuous_value == continuous_stable_value:
                    break
            else:
                continuous_value = 0
            veri_times += 1
            if veri_times == max_veri_times:
                print("Verification timed out, please check the camera connections!")
                sys.exit()
        print('Number of cameras: {}'.format(connected_cam_num))

        # Reset every camera in turn
        # Should there be a delay after hardware_reset()? Without one it raises an error
        print('\n', end='')
        print('Initializing cameras:')
        for dev in ctx.query_devices():
            dev.hardware_reset()
            while len(ctx.query_devices()) != cam_num:
                time.sleep(0.5)
            print('Camera {} initialized successfully'.format(dev.get_info(rs.camera_info.serial_number)))
        # D·C 191202: I suspect the cameras are unstable for several seconds after a reset; whether that is the same issue as devices disappearing from ctx.query_devices() remains to be verified!
        # D·C 191202: I also suspect that calling ctx.query_devices() itself disturbs the device connections, so it is called as rarely as possible here. Even if it has no effect, it is still better to wait until the devices are stable before continuing.

        # Continuous verification mechanism (again, after the reset)
        # D·C 1911202: max_veri_times is the maximum number of checks; continuous_stable_value is the number of consecutive stable checks used to decide that the devices are stable after a reset
        print('\n', end='')
        print('Starting continuous verification, stable value: {}, maximum checks: {}:'.format(continuous_stable_value, max_veri_times))
        continuous_value = 0
        veri_times = 0
        while True:
            devices = ctx.query_devices()
            connected_cam_num = len(devices)
            if connected_cam_num == cam_num:
                continuous_value += 1
                if continuous_value == continuous_stable_value:
                    break
            else:
                continuous_value = 0
            veri_times += 1
            if veri_times == max_veri_times:
                print("Verification timed out, please check the camera connections!")
                sys.exit()
        print('Number of cameras: {}'.format(connected_cam_num))

        # Print each camera's serial number and USB port, and build the window-name strings
        print('\n', end='')
        cam_id = 0
        serial_list = []
        for i in devices:
            cam_id += 1
            serial_list.append('camera{}; serials number {}; usb port {}'.format(
                cam_id, i.get_info(rs.camera_info.serial_number),
                i.get_info(rs.camera_info.usb_type_descriptor)))
            print('serial number {}:{};usb port:{}'.format(cam_id, i.get_info(rs.camera_info.serial_number),
                                                           i.get_info(rs.camera_info.usb_type_descriptor)))

        # Configure the basic objects for each camera
        for i in range(connected_cam_num):
            # D·C 191203: is it necessary to pass ctx here? With or without it there seems to be little difference, but leaving it out makes the IDE show a warning
            locals()['pipeline' + str(i)] = rs.pipeline(ctx)
            locals()['config' + str(i)] = rs.config()
            locals()['serial' + str(i)] = ctx.devices[i].get_info(rs.camera_info.serial_number)
            locals()['config' + str(i)].enable_device(locals()['serial' + str(i)])
            locals()['config' + str(i)].enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
            locals()['config' + str(i)].enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
            locals()['pipeline' + str(i)].start(locals()['config' + str(i)])
            # Create an align object (align depth to color)
            locals()['align' + str(i)] = rs.align(rs.stream.color)

        # Run the streams and detect
        print('\n', end='')
        print('Starting detection:')
        try:
            # break flag so a key press can leave both loops and close the windows
            break2 = False
            while True:
                for i in range(connected_cam_num):
                    locals()['frames' + str(i)] = locals()['pipeline' + str(i)].wait_for_frames()
                    # Get the aligned frameset
                    locals()['aligned_frames' + str(i)] = locals()['align' + str(i)].process(locals()['frames' + str(i)])
                    # Get the aligned depth and color frames
                    locals()['aligned_depth_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_depth_frame()
                    locals()['color_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_color_frame()
                    if not locals()['aligned_depth_frame' + str(i)] or not locals()['color_frame' + str(i)]:
                        continue
                    # Get the color-frame intrinsics
                    locals()['color_profile' + str(i)] = locals()['color_frame' + str(i)].get_profile()
                    locals()['cvsprofile' + str(i)] = rs.video_stream_profile(locals()['color_profile' + str(i)])
                    locals()['color_intrin' + str(i)] = locals()['cvsprofile' + str(i)].get_intrinsics()
                    locals()['color_intrin_part' + str(i)] = [locals()['color_intrin' + str(i)].ppx,
                                                              locals()['color_intrin' + str(i)].ppy,
                                                              locals()['color_intrin' + str(i)].fx,
                                                              locals()['color_intrin' + str(i)].fy]
                    locals()['color_image' + str(i)] = np.asanyarray(locals()['color_frame' + str(i)].get_data())
                    locals()['bboxes_pr' + str(i)] = self.predict(locals()['color_image' + str(i)])
                    locals()['image' + str(i)] = utils.draw_bbox(locals()['color_image' + str(i)],
                                                                 locals()['bboxes_pr' + str(i)],
                                                                 locals()['aligned_depth_frame' + str(i)],
                                                                 locals()['color_intrin_part' + str(i)],
                                                                 show_label=self.show_label)
                    # D·C 191202: I wanted a resizable window with a fixed aspect ratio but could not get it to work - an OpenCV bug?
                    # cv2.namedWindow('{}'.format(serial_list[i]),
                    #                 flags=cv2.WINDOW_NORMAL | cv2.WINDOW_FREERATIO | cv2.WINDOW_GUI_EXPANDED)
                    cv2.imshow('{}'.format(serial_list[i]), locals()['image' + str(i)])
                    key = cv2.waitKey(1)
                    # Exit the loop if ESC is pressed
                    if key == 27:
                        # Using return directly would probably work as well
                        # return
                        break2 = True
                        break
                if break2:
                    break
        finally:
            # It seems safer to close the windows before stopping the streams
            # Destroy all windows
            cv2.destroyAllWindows()
            print('\n', end='')
            print('All windows closed!')
            # Stop all streams
            for i in range(connected_cam_num):
                locals()['pipeline' + str(i)].stop()
            print('Stopping all streams, please wait a few seconds for the program to finish cleanly!')


if __name__ == '__main__':
    YoloTest().dontla_evaluate_detect()
    print('Program finished!')

