

Python 3: Real-Time Face Detection and Facial Landmark Annotation from a Camera with Dlib 19.7

0. Introduction

This project uses Python with the Dlib library to capture faces from a camera and annotate facial landmarks in real time.

Figure 1: Demo of the running project (GIF)

Figure 2: Demo of the running project (static image)

(The implementation is simple and the code is short, so it is a good fit for beginners or hobby learning.)

1. Development Environment

python:  3.6.3

dlib:    19.7

OpenCV, NumPy

import dlib          # dlib, for face detection and recognition
import numpy as np   # NumPy, for numerical processing
import cv2           # OpenCV, for image processing

2. Source Code Walkthrough

The implementation is straightforward and splits into two parts: camera capture and facial landmark annotation.

2.1 Camera Capture

First, a quick look at how OpenCV accesses the camera.

A capture object is created with cap = cv2.VideoCapture(0).

(See the official OpenCV documentation for details.)

# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie

"""
cv2.VideoCapture(): create a cv2 camera object / open the default camera

Python: cv2.VideoCapture() →
Python: cv2.VideoCapture(filename) →
    filename – name of the opened video file (e.g. video.avi) or image sequence
    (e.g. img_%02d.jpg, which will read samples like img_00.jpg, img_01.jpg, img_02.jpg, ...)
Python: cv2.VideoCapture(device) →
    device – id of the opened video capturing device (i.e. a camera index).
    If there is a single camera connected, just pass 0.
"""
cap = cv2.VideoCapture(0)

"""
cv2.VideoCapture.set(propId, value): set a video parameter

propId:
    CV_CAP_PROP_POS_MSEC         Current position of the video file in milliseconds.
    CV_CAP_PROP_POS_FRAMES       0-based index of the frame to be decoded/captured next.
    CV_CAP_PROP_POS_AVI_RATIO    Relative position of the video file: 0 - start of the film, 1 - end of the film.
    CV_CAP_PROP_FRAME_WIDTH      Width of the frames in the video stream.
    CV_CAP_PROP_FRAME_HEIGHT     Height of the frames in the video stream.
    CV_CAP_PROP_FPS              Frame rate.
    CV_CAP_PROP_FOURCC           4-character code of codec.
    CV_CAP_PROP_FRAME_COUNT      Number of frames in the video file.
    CV_CAP_PROP_FORMAT           Format of the Mat objects returned by retrieve().
    CV_CAP_PROP_MODE             Backend-specific value indicating the current capture mode.
    CV_CAP_PROP_BRIGHTNESS       Brightness of the image (only for cameras).
    CV_CAP_PROP_CONTRAST         Contrast of the image (only for cameras).
    CV_CAP_PROP_SATURATION       Saturation of the image (only for cameras).
    CV_CAP_PROP_HUE              Hue of the image (only for cameras).
    CV_CAP_PROP_GAIN             Gain of the image (only for cameras).
    CV_CAP_PROP_EXPOSURE         Exposure (only for cameras).
    CV_CAP_PROP_CONVERT_RGB      Boolean flag indicating whether images should be converted to RGB.
    CV_CAP_PROP_WHITE_BALANCE_U  The U value of the white balance setting (note: only supported by the DC1394 v2.x backend currently).
    CV_CAP_PROP_WHITE_BALANCE_V  The V value of the white balance setting (note: only supported by the DC1394 v2.x backend currently).
    CV_CAP_PROP_RECTIFICATION    Rectification flag for stereo cameras (note: only supported by the DC1394 v2.x backend currently).
    CV_CAP_PROP_ISO_SPEED        The ISO speed of the camera (note: only supported by the DC1394 v2.x backend currently).
    CV_CAP_PROP_BUFFERSIZE       Amount of frames stored in internal buffer memory (note: only supported by the DC1394 v2.x backend currently).

value: the value to set / Value of the property
"""
cap.set(3, 480)

"""
cv2.VideoCapture.isOpened(): check whether camera initialization succeeded.
Returns True or False.
"""
cap.isOpened()

"""
cv2.VideoCapture.read([image]) -> retval, image: grabs, decodes and returns the next video frame.

Returns two values:
    a boolean, True/False, indicating whether the read succeeded / whether the end of the video was reached;
    the image object, a 3-D matrix of the frame.
"""
flag, im_rd = cap.read()
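For reference, the numeric propId such as the 3 in cap.set(3, 480) maps to the constants listed above. A minimal sketch of that mapping (the values mirror OpenCV's enum order; in real code, prefer the named cv2.CAP_PROP_* constants over magic numbers):

```python
# Numeric values of common VideoCapture propIds (OpenCV enum order).
# Prefer the named constants, e.g. cv2.CAP_PROP_FRAME_WIDTH, over raw numbers.
CAP_PROP = {
    "POS_MSEC": 0,       # current position in milliseconds
    "POS_FRAMES": 1,     # 0-based index of the next frame
    "POS_AVI_RATIO": 2,  # relative position: 0 = start, 1 = end
    "FRAME_WIDTH": 3,    # frame width in pixels
    "FRAME_HEIGHT": 4,   # frame height in pixels
    "FPS": 5,            # frame rate
    "FOURCC": 6,         # 4-character codec code
    "FRAME_COUNT": 7,    # number of frames in the file
}

# cap.set(3, 480) is therefore equivalent to
# cap.set(cv2.CAP_PROP_FRAME_WIDTH, 480): request a frame width of 480 px.
print(CAP_PROP["FRAME_WIDTH"])  # → 3
```

Note that cap.set() is only a request: whether it is honored depends on the camera driver and backend.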

2.2 Facial Landmark Annotation

The landmarks are located with the pre-trained predictor "shape_predictor_68_face_landmarks.dat". This is a model trained by dlib that can be called directly to annotate 68 facial landmarks.
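The 68 points follow a fixed layout, so contiguous index ranges correspond to facial regions. A small sketch of grouping predictor output by region (the index ranges are those of the standard 68-point annotation scheme this model is trained on; the pts array below is a stand-in for real coordinates taken from predictor(...).parts()):

```python
import numpy as np

# Index ranges of the standard 68-point layout (0-based, end-exclusive).
REGIONS = {
    "jaw": (0, 17),
    "right_eyebrow": (17, 22),
    "left_eyebrow": (22, 27),
    "nose": (27, 36),
    "right_eye": (36, 42),
    "left_eye": (42, 48),
    "mouth": (48, 68),
}

def split_landmarks(landmarks):
    """Group a (68, 2) array of (x, y) points by facial region."""
    return {name: landmarks[a:b] for name, (a, b) in REGIONS.items()}

# Stand-in for real predictor output: 68 dummy (x, y) pairs.
pts = np.arange(68 * 2).reshape(68, 2)
regions = split_landmarks(pts)
print(regions["left_eye"].shape)  # → (6, 2)
```

Slicing like this is handy when you only need one region, e.g. the eye points for a blink detector.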

2.3 Full Source Code

The approach is simple:

Create the camera object with cv2.VideoCapture(), then read frames with flag, im_rd = cap.read(); im_rd is a single frame of the video.

Each frame is then processed like a single still image: dlib locates the facial landmarks in im_rd and they are drawn onto the frame.

Press the s key to save a screenshot of the current frame, or the q key to quit.

# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie
# github: https://github.com/coneypo/Dlib_face_detection_from_camera

import dlib          # dlib, for face detection and recognition
import numpy as np   # NumPy, for numerical processing
import cv2           # OpenCV, for image processing

# dlib detector and landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# create the cv2 camera object
cap = cv2.VideoCapture(0)

# cap.set(propId, value)
# set a video parameter: propId is the parameter to set, value is its new value
cap.set(3, 480)

# screenshot counter
cnt = 0

# cap.isOpened() returns True/False: check whether initialization succeeded
while cap.isOpened():

    # cap.read() returns two values:
    #   a boolean, True/False: whether the read succeeded / end of video reached
    #   the image object, a 3-D matrix of the frame
    flag, im_rd = cap.read()

    # wait 1 ms between frames; with a delay of 0 only a static frame is shown
    k = cv2.waitKey(1)

    # convert to grayscale (OpenCV frames are BGR)
    img_gray = cv2.cvtColor(im_rd, cv2.COLOR_BGR2GRAY)

    # detected faces
    rects = detector(img_gray, 0)
    # print(len(rects))

    # font for the annotations below
    font = cv2.FONT_HERSHEY_SIMPLEX

    # annotate the 68 landmarks
    if len(rects) != 0:
        # at least one face detected
        for i in range(len(rects)):
            landmarks = np.matrix([[p.x, p.y] for p in predictor(im_rd, rects[i]).parts()])

            for idx, point in enumerate(landmarks):
                # coordinates of each of the 68 points
                pos = (point[0, 0], point[0, 1])

                # draw a circle on each landmark with cv2.circle, 68 in total
                cv2.circle(im_rd, pos, 2, color=(0, 255, 0))

                # label the points 1-68 with cv2.putText
                cv2.putText(im_rd, str(idx + 1), pos, font, 0.2, (0, 0, 255), 1, cv2.LINE_AA)

        cv2.putText(im_rd, "faces: " + str(len(rects)), (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)
    else:
        # no face detected
        cv2.putText(im_rd, "no face", (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)

    # on-screen instructions
    im_rd = cv2.putText(im_rd, "s: screenshot", (20, 400), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
    im_rd = cv2.putText(im_rd, "q: quit", (20, 450), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)

    # press s to save a screenshot
    if k == ord('s'):
        cnt += 1
        cv2.imwrite("screenshot" + str(cnt) + ".jpg", im_rd)

    # press q to quit
    if k == ord('q'):
        break

    # show the frame in a window
    cv2.imshow("camera", im_rd)

# release the camera
cap.release()

# destroy the created windows
cv2.destroyAllWindows()
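One detail worth noting: landmarks is built as an np.matrix, so iterating over it yields 1x2 row matrices, which is why each point is read as point[0, 0], point[0, 1] rather than point[0], point[1]. A minimal illustration, with dummy coordinates in place of real predictor output:

```python
import numpy as np

# Dummy landmark coordinates standing in for predictor output.
landmarks = np.matrix([[110, 220], [130, 240], [150, 260]])

positions = []
for idx, point in enumerate(landmarks):
    # Each row of an np.matrix is itself a 1xN matrix,
    # hence the two-index access point[0, 0] / point[0, 1].
    pos = (int(point[0, 0]), int(point[0, 1]))
    positions.append(pos)

print(positions)  # → [(110, 220), (130, 240), (150, 260)]
```

With a plain np.array instead, each row would be one-dimensional and point[0], point[1] would suffice; np.matrix is shown here only to match the script above.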

If this project helped you, feel free to star it on GitHub.

