

Learning dlib with Python (4): Single-Object Tracking

Published: 2025/3/21

Preface

dlib provides the dlib.correlation_tracker() class for tracking a target.
Official documentation: http://dlib.net/python/index.html#dlib.correlation_tracker
The API isn't complicated, so I won't walk through it; the two annotated programs below show how to use it.

Program 1

# -*- coding: utf-8 -*-
import sys
import dlib
import cv2

tracker = dlib.correlation_tracker()  # create a correlation_tracker instance
cap = cv2.VideoCapture(0)  # open the camera with OpenCV
start_flag = True          # flag: is this the first frame? If so, initialize first
selection = None           # region being dragged with the mouse, updated live
track_window = None        # region containing the object to track
drag_start = None          # flag: has a mouse drag started?

# mouse event callback
def onMouseClicked(event, x, y, flags, param):
    global selection, track_window, drag_start
    if event == cv2.EVENT_LBUTTONDOWN:  # left button pressed
        drag_start = (x, y)
        track_window = None
    if drag_start:  # while dragging, record the current selection
        xMin = min(x, drag_start[0])
        yMin = min(y, drag_start[1])
        xMax = max(x, drag_start[0])
        yMax = max(y, drag_start[1])
        selection = (xMin, yMin, xMax, yMax)
    if event == cv2.EVENT_LBUTTONUP:  # left button released
        drag_start = None
        track_window = selection
        selection = None

if __name__ == '__main__':
    cv2.namedWindow("image", cv2.WINDOW_AUTOSIZE)
    cv2.setMouseCallback("image", onMouseClicked)
    # to convert OpenCV's BGR frame to RGB:
    # b, g, r = cv2.split(frame)
    # frame2 = cv2.merge([r, g, b])
    while True:
        ret, frame = cap.read()  # read one frame from the camera
        if start_flag:  # first frame: initialize the tracker
            # Initialization: the window freezes on the current frame; drag a
            # box with the mouse to mark the region, and the tracker will then
            # follow that target. We have to find the target before we can
            # track it, after all.
            while True:
                img_first = frame.copy()  # copy the frame so the original stays untouched
                if track_window:  # selection finished: draw it
                    cv2.rectangle(img_first, (track_window[0], track_window[1]), (track_window[2], track_window[3]), (0, 0, 255), 1)
                elif selection:  # selection in progress: draw it as the mouse moves
                    cv2.rectangle(img_first, (selection[0], selection[1]), (selection[2], selection[3]), (0, 0, 255), 1)
                cv2.imshow("image", img_first)
                # press Enter to leave the selection loop
                if cv2.waitKey(5) == 13:
                    break
            start_flag = False  # initialization done, no longer the first frame
            tracker.start_track(frame, dlib.rectangle(track_window[0], track_window[1], track_window[2], track_window[3]))  # start tracking the selected region
        else:
            tracker.update(frame)  # update the tracker with the new frame
        box_predict = tracker.get_position()  # predicted position of the target
        cv2.rectangle(frame, (int(box_predict.left()), int(box_predict.top())), (int(box_predict.right()), int(box_predict.bottom())), (0, 255, 255), 1)  # mark it with a rectangle
        cv2.imshow("image", frame)
        # press ESC to quit
        if cv2.waitKey(10) == 27:
            break
    cap.release()
    cv2.destroyAllWindows()

Note: if the program stutters, reduce the delay passed to cv2.waitKey().
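The mouse callback above works no matter which direction the box is dragged, because it normalizes the two corner points with min/max. As a minimal sketch, that logic can be isolated into a small helper (the function name normalize_selection is mine, not from the original code):

```python
def normalize_selection(drag_start, current):
    """Return (xMin, yMin, xMax, yMax) regardless of drag direction."""
    x0, y0 = drag_start
    x1, y1 = current
    return (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))

# dragging down-right and dragging up-left produce the same box
print(normalize_selection((10, 20), (110, 220)))  # (10, 20, 110, 220)
print(normalize_selection((110, 220), (10, 20)))  # (10, 20, 110, 220)
```

This is why the red box follows the cursor correctly even when you start the drag from the bottom-right corner.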

Results

At startup, the window shows only the first frame;
drag a red box around the target with the mouse, then press Enter to set the boxed region as the target;
the target is then tracked in real time and marked with a yellow box;
press ESC to quit.

(CSDN only allows 2 MB image uploads, which is a real pain.)

Program 2

The previous program was written just to get familiar with the API and is awkward to use, so I wrapped it up in a class. It reads much better now.

# -*- coding: utf-8 -*-
import sys
import dlib
import cv2

class myCorrelationTracker(object):
    def __init__(self, windowName='default window', cameraNum=0):
        # a few custom state flags
        self.STATUS_RUN_WITHOUT_TRACKER = 0  # display frames, no tracking
        self.STATUS_RUN_WITH_TRACKER = 1     # display frames and track the target
        self.STATUS_PAUSE = 2                # pause on the current frame
        self.STATUS_BREAK = 3                # quit
        self.status = self.STATUS_RUN_WITHOUT_TRACKER  # current state

        # the same variables as in Program 1
        self.selection = None     # region being dragged with the mouse, updated live
        self.track_window = None  # region containing the object to track
        self.drag_start = None    # flag: has a mouse drag started?
        self.start_flag = True    # flag: first tracked frame, needs initialization

        # create the display window
        cv2.namedWindow(windowName, cv2.WINDOW_AUTOSIZE)
        cv2.setMouseCallback(windowName, self.onMouseClicked)
        self.windowName = windowName
        # open the camera
        self.cap = cv2.VideoCapture(cameraNum)
        # the correlation_tracker instance, as in Program 1
        self.tracker = dlib.correlation_tracker()
        # current frame
        self.frame = None

    # keyboard handler
    def keyEventHandler(self):
        keyValue = cv2.waitKey(5)  # poll the keyboard every 5 ms
        if keyValue == 27:  # ESC
            self.status = self.STATUS_BREAK
        if keyValue == 32:  # space
            if self.status != self.STATUS_PAUSE:
                # first press: pause playback so a tracking region can be selected
                self.status = self.STATUS_PAUSE
            else:
                # second press: resume playback, without tracking unless a
                # region has been selected
                if self.track_window:
                    self.status = self.STATUS_RUN_WITH_TRACKER
                    self.start_flag = True
                else:
                    self.status = self.STATUS_RUN_WITHOUT_TRACKER
        if keyValue == 13:  # Enter
            if self.status == self.STATUS_PAUSE:  # only meaningful while paused
                if self.track_window:  # confirm the selected region as the target
                    self.status = self.STATUS_RUN_WITH_TRACKER
                    self.start_flag = True

    # per-state work
    def processHandler(self):
        # display frames without tracking
        if self.status == self.STATUS_RUN_WITHOUT_TRACKER:
            ret, self.frame = self.cap.read()
            cv2.imshow(self.windowName, self.frame)
        # paused: drag the red box with the mouse to choose the target region,
        # just like in Program 1
        elif self.status == self.STATUS_PAUSE:
            img_first = self.frame.copy()  # copy the frame so the original stays untouched
            if self.track_window:  # selection finished: draw it
                cv2.rectangle(img_first, (self.track_window[0], self.track_window[1]), (self.track_window[2], self.track_window[3]), (0, 0, 255), 1)
            elif self.selection:  # selection in progress: draw it as the mouse moves
                cv2.rectangle(img_first, (self.selection[0], self.selection[1]), (self.selection[2], self.selection[3]), (0, 0, 255), 1)
            cv2.imshow(self.windowName, img_first)
        # quit
        elif self.status == self.STATUS_BREAK:
            self.cap.release()       # release the camera
            cv2.destroyAllWindows()  # close the window
            sys.exit()               # exit the program
        # display frames and track the target
        elif self.status == self.STATUS_RUN_WITH_TRACKER:
            ret, self.frame = self.cap.read()  # read one frame from the camera
            if self.start_flag:  # first tracked frame: initialize the tracker
                self.tracker.start_track(self.frame, dlib.rectangle(self.track_window[0], self.track_window[1], self.track_window[2], self.track_window[3]))  # start tracking
                self.start_flag = False  # no longer the first frame
            else:
                self.tracker.update(self.frame)  # update the tracker
            # get the predicted position of the target and draw it
            box_predict = self.tracker.get_position()
            cv2.rectangle(self.frame, (int(box_predict.left()), int(box_predict.top())), (int(box_predict.right()), int(box_predict.bottom())), (0, 255, 255), 1)
            cv2.imshow(self.windowName, self.frame)

    # mouse event callback
    def onMouseClicked(self, event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN:  # left button pressed
            self.drag_start = (x, y)
            self.track_window = None
        if self.drag_start:  # while dragging, record the current selection
            xMin = min(x, self.drag_start[0])
            yMin = min(y, self.drag_start[1])
            xMax = max(x, self.drag_start[0])
            yMax = max(y, self.drag_start[1])
            self.selection = (xMin, yMin, xMax, yMax)
        if event == cv2.EVENT_LBUTTONUP:  # left button released
            self.drag_start = None
            self.track_window = self.selection
            self.selection = None

    def run(self):
        while True:
            self.keyEventHandler()
            self.processHandler()

if __name__ == '__main__':
    testTracker = myCorrelationTracker(windowName='image', cameraNum=1)
    testTracker.run()

Note: if the program stutters, reduce the delay passed to cv2.waitKey().

Results

The controls have changed a little:
at startup, frames from the camera are captured and displayed automatically;
press space to pause; pressing space again resumes the live display, but without tracking;
while paused, drag the mouse to draw a red box, then press Enter to set the boxed object as the tracking target;
the target is then tracked in real time and marked with a yellow box;
press ESC to quit.
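The key handling in Program 2 is really a small state machine. Stripped of the OpenCV calls, the transitions can be sketched as a pure function (the state and key names mirror the class above, but this standalone next_status helper is my own illustration, not part of the original code):

```python
# the four states from myCorrelationTracker, and the key codes it checks
RUN_WITHOUT_TRACKER, RUN_WITH_TRACKER, PAUSE, BREAK = range(4)
ESC, SPACE, ENTER = 27, 32, 13

def next_status(status, key, track_window):
    """Compute the next state from a key press, as keyEventHandler does."""
    if key == ESC:  # ESC quits from any state
        return BREAK
    if key == SPACE:
        if status != PAUSE:  # first press pauses
            return PAUSE
        # second press resumes, tracking only if a region was selected
        return RUN_WITH_TRACKER if track_window else RUN_WITHOUT_TRACKER
    if key == ENTER and status == PAUSE and track_window:
        return RUN_WITH_TRACKER  # Enter confirms the selection while paused
    return status  # any other key leaves the state unchanged

print(next_status(RUN_WITHOUT_TRACKER, SPACE, None))           # 2 (PAUSE)
print(next_status(PAUSE, ENTER, (10, 20, 110, 220)))           # 1 (RUN_WITH_TRACKER)
print(next_status(RUN_WITH_TRACKER, ESC, (10, 20, 110, 220)))  # 3 (BREAK)
```

Separating the transitions from the per-state work (processHandler) is what keeps the class readable: each method has one job.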

Official Example

#!/usr/bin/python
# The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
#
# This example shows how to use the correlation_tracker from the dlib Python
# library. This object lets you track the position of an object as it moves
# from frame to frame in a video sequence. To use it, you give the
# correlation_tracker the bounding box of the object you want to track in the
# current video frame. Then it will identify the location of the object in
# subsequent frames.
#
# In this particular example, we are going to run on the
# video sequence that comes with dlib, which can be found in the
# examples/video_frames folder. This video shows a juice box sitting on a table
# and someone is waving the camera around. The task is to track the position of
# the juice box as the camera moves around.
#
#
# COMPILING/INSTALLING THE DLIB PYTHON INTERFACE
#   You can install dlib using the command:
#       pip install dlib
#
#   Alternatively, if you want to compile dlib yourself then go into the dlib
#   root folder and run:
#       python setup.py install
#   or
#       python setup.py install --yes USE_AVX_INSTRUCTIONS
#   if you have a CPU that supports AVX instructions, since this makes some
#   things run faster.
#
#   Compiling dlib should work on any operating system so long as you have
#   CMake and boost-python installed. On Ubuntu, this can be done easily by
#   running the command:
#       sudo apt-get install libboost-python-dev cmake
#
#   Also note that this example requires scikit-image which can be installed
#   via the command:
#       pip install scikit-image
#   Or downloaded from http://scikit-image.org/download.html.

import os
import glob

import dlib
from skimage import io

# Path to the video frames
video_folder = os.path.join("..", "examples", "video_frames")

# Create the correlation tracker - the object needs to be initialized
# before it can be used
tracker = dlib.correlation_tracker()

win = dlib.image_window()
# We will track the frames as we load them off of disk
for k, f in enumerate(sorted(glob.glob(os.path.join(video_folder, "*.jpg")))):
    print("Processing Frame {}".format(k))
    img = io.imread(f)

    # We need to initialize the tracker on the first frame
    if k == 0:
        # Start a track on the juice box. If you look at the first frame you
        # will see that the juice box is contained within the bounding
        # box (74, 67, 112, 153).
        tracker.start_track(img, dlib.rectangle(74, 67, 112, 153))
    else:
        # Else we just attempt to track from the previous frame
        tracker.update(img)

    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(tracker.get_position())
    dlib.hit_enter_to_continue()

Aside:
This is the fourth dlib study note I've written. The library really is convenient: it makes basic recognition tasks like face recognition easy. But to get better results on real recognition tasks, the bundled models alone won't cut it, which means training your own. The official documentation also provides APIs for training and for building your own neural networks; I'll write those up when I have time. Between the school sports meet and the math I have to study (matrix theory, convex optimization), squeezing in some time to write code has actually been a nice change of pace.
ヽ(・ω・。)ノ
