

Liveness Detection in Python: Is Liveness Detection Complicated? You Can Build It with Just OpenCV! (Source Code Included)

Published: 2023/12/10

Abstract: Liveness detection is widely used across many industries. How do you build a liveness detection system? It used to be hard, but today it can be done with little more than OpenCV — give it a try.

What is liveness detection, and why do we need it?

Face recognition systems are becoming more common than ever: face unlock on smartphones, face-based attendance, access-control systems, and more. However, these systems are easily fooled by "non-real" faces. For example, holding a photo of a person up to the camera can trick a face recognition system into accepting it as a live face.

To make face recognition more secure, the system must not only recognize a face but also verify that it is a real, live face — this is where liveness detection comes in.

There are many approaches to liveness detection today, including:

  • Texture analysis, e.g. computing Local Binary Patterns (LBP) over facial regions and using an SVM to classify faces as real or fake;
  • Frequency analysis, e.g. examining the Fourier domain of the face;
  • Variable focusing analysis, e.g. examining the change in pixel values between two consecutive frames;
  • Heuristic-based algorithms, including eye movement, lip movement, and blink detection;
  • Optical Flow algorithms, i.e. examining the differences and properties of optical flow generated from 3D objects versus 2D planes;
  • 3D face shape, similar to what Apple's iPhone Face ID uses, which lets the system distinguish a real face from a printed photo of one.
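To make the first item above concrete, here is a minimal numpy-only sketch of a basic 3×3 LBP (not part of this tutorial's code — real pipelines typically use scikit-image's `local_binary_pattern` or an OpenCV implementation): each interior pixel gets an 8-bit code, one bit per neighbour, and the histogram of codes becomes the SVM feature vector.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern: each of the 8 neighbours of an
    interior pixel contributes one bit, set when neighbour >= centre."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    # neighbour offsets in clockwise order, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neighbour >= center).astype(np.uint8) << bit)
    return out

gray = np.array([[10, 20, 30],
                 [40, 50, 60],
                 [70, 80, 90]], dtype=np.uint8)
codes = lbp_image(gray)          # one code for the single interior pixel
hist = np.bincount(codes.ravel(), minlength=256)  # LBP histogram -> SVM features
```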

A face recognition engineer can mix and match the methods above to suit a particular application. This tutorial instead takes a common image-processing approach — a convolutional neural network (CNN) — and builds a deep network (called "LivenessNet") that distinguishes real faces from fake ones, treating liveness detection as a binary classification problem.

First, let's look at the dataset.

Liveness detection videos

To keep the example simple and clear, the liveness detector built here focuses on distinguishing real faces from spoofed faces shown on a screen. The approach extends easily to other spoof types, including printouts and high-resolution prints.

How the liveness detection dataset was gathered:

  • Took an iPhone in portrait/selfie mode;
  • Recorded roughly 25 seconds of walking around the office;
  • Replayed the same 25-second video and re-recorded the replay with the iPhone;
  • This yields two example videos, one for "real" faces and one for "fake/spoofed" faces;
  • Finally, face detection was applied to both videos to extract the individual face regions for the two classes.
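A quick back-of-the-envelope check on what those clips can yield (the frame rate is an assumption for illustration, not a measured value):

```python
# Rough estimate of how many face crops a 25-second clip yields when
# sampling every 4th frame (as with a --skip style flag). The FPS value
# is an assumption; some frames will contain no detectable face.
fps = 30
duration_s = 25
skip = 4
total_frames = fps * duration_s      # 750 frames
max_crops = total_frames // skip     # 187 candidate frames at most
```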

Project structure

$ tree --dirsfirst --filelimit 10
.
├── dataset
│   ├── fake [150 entries]
│   └── real [161 entries]
├── face_detector
│   ├── deploy.prototxt
│   └── res10_300x300_ssd_iter_140000.caffemodel
├── pyimagesearch
│   ├── __init__.py
│   └── livenessnet.py
├── videos
│   ├── fake.mp4
│   └── real.mov
├── gather_examples.py
├── train_liveness.py
├── liveness_demo.py
├── le.pickle
├── liveness.model
└── plot.png

6 directories, 12 files

The project contains four main directories:

  • dataset/: the dataset directory, containing two classes of images — fake faces captured by screen-recording a phone while it replays a face video, and real faces from the original selfie video;
  • face_detector/: the pre-trained Caffe face detector;
  • pyimagesearch/: the module containing the LivenessNet class;
  • videos/: the input "real" and "fake" videos.

There are also three Python scripts:

  • gather_examples.py: extracts face regions from the input video files and builds the deep-learning face dataset;
  • train_liveness.py: trains the LivenessNet classifier, producing the following files: le.pickle, liveness.model, and plot.png;
  • liveness_demo.py: the demo script, which starts a webcam stream for real-time face liveness detection.

Detecting and extracting face regions from the training dataset

The extracted data ends up in two directories: dataset/fake/ and dataset/real/.

Open gather_examples.py and insert the following code:

# import the necessary packages
import numpy as np
import argparse
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", type=str, required=True,
    help="path to input video")
ap.add_argument("-o", "--output", type=str, required=True,
    help="path to output directory of cropped faces")
ap.add_argument("-d", "--detector", type=str, required=True,
    help="path to OpenCV's deep learning face detector")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
    help="minimum probability to filter weak detections")
ap.add_argument("-s", "--skip", type=int, default=16,
    help="# of frames to skip before applying face detection")
args = vars(ap.parse_args())

The script first imports the required packages.

Lines 8-19 parse five command-line arguments: --input, --output, --detector, --confidence, and --skip.

Next, load the face detector and initialize the video stream:

# load our serialized face detector from disk
print("[INFO] loading face detector...")
protoPath = os.path.sep.join([args["detector"], "deploy.prototxt"])
modelPath = os.path.sep.join([args["detector"],
    "res10_300x300_ssd_iter_140000.caffemodel"])
net = cv2.dnn.readNetFromCaffe(protoPath, modelPath)

# open a pointer to the video file stream and initialize the total
# number of frames read and saved thus far
vs = cv2.VideoCapture(args["input"])
read = 0
saved = 0

Two counters are also initialized: the number of frames read, and the number of frames saved as the loop runs.

Create a loop to process the frames:

# loop over frames from the video file stream
while True:
    # grab the frame from the file
    (grabbed, frame) = vs.read()

    # if the frame was not grabbed, then we have reached the end
    # of the stream
    if not grabbed:
        break

    # increment the total number of frames read thus far
    read += 1

    # check to see if we should process this frame
    if read % args["skip"] != 0:
        continue

Next comes face detection:

    # grab the frame dimensions and construct a blob from the frame
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
        (300, 300), (104.0, 177.0, 123.0))

    # pass the blob through the network and obtain the detections and
    # predictions
    net.setInput(blob)
    detections = net.forward()

    # ensure at least one face was found
    if len(detections) > 0:
        # we're making the assumption that each image has only ONE
        # face, so find the bounding box with the largest probability
        i = np.argmax(detections[0, 0, :, 2])
        confidence = detections[0, 0, i, 2]

To run face detection, a 300×300 blob is constructed from the frame to match the input size the Caffe face detector expects.

The script also assumes each frame contains only one face, which helps prevent false positives: it takes the index of the detection with the highest probability, uses that index to extract the detection's confidence, filters out low-probability detections, and writes the result to disk:
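The "highest probability" indexing can be illustrated with a mocked-up detector output (the confidence values below are made up for illustration; the real array comes from net.forward()):

```python
import numpy as np

# Mock of the SSD face detector output: shape (1, 1, N, 7), where
# column 2 holds each detection's confidence score.
detections = np.zeros((1, 1, 3, 7))
detections[0, 0, 0, 2] = 0.30
detections[0, 0, 1, 2] = 0.95    # the strongest detection
detections[0, 0, 2, 2] = 0.10

i = np.argmax(detections[0, 0, :, 2])   # index of the most confident face
confidence = detections[0, 0, i, 2]
```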

        # ensure that the detection with the largest probability also
        # meets our minimum probability test (thus helping filter out
        # weak detections)
        if confidence > args["confidence"]:
            # compute the (x, y)-coordinates of the bounding box for
            # the face and extract the face ROI
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")
            face = frame[startY:endY, startX:endX]

            # write the frame to disk
            p = os.path.sep.join([args["output"],
                "{}.png".format(saved)])
            cv2.imwrite(p, face)
            saved += 1
            print("[INFO] saved {} to disk".format(p))

# do a bit of cleanup
vs.release()
cv2.destroyAllWindows()

Once a face passes the filter, its bounding-box coordinates give the face ROI. The script then builds a path + filename for the ROI and writes it to disk.
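The `box * np.array([w, h, w, h])` step deserves a note: the detector returns coordinates normalized to [0, 1], so multiplying by the frame size recovers pixels. A small sketch with illustrative numbers (not a real detection):

```python
import numpy as np

# Recover pixel coordinates from normalized box coordinates.
w, h = 600, 450
box = np.array([0.25, 0.20, 0.75, 0.80])   # startX, startY, endX, endY
(startX, startY, endX, endY) = (box * np.array([w, h, w, h])).astype("int")
# the face ROI would then be frame[startY:endY, startX:endX]
```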

Building the liveness detection image dataset

Open a terminal and run the following command to extract face images for the "fake/spoofed" class:

$ python gather_examples.py --input videos/fake.mp4 --output dataset/fake --detector face_detector --skip 1
[INFO] loading face detector...
[INFO] saved dataset/fake/0.png to disk
[INFO] saved dataset/fake/1.png to disk
[INFO] saved dataset/fake/2.png to disk
[INFO] saved dataset/fake/3.png to disk
[INFO] saved dataset/fake/4.png to disk
[INFO] saved dataset/fake/5.png to disk
...
[INFO] saved dataset/fake/145.png to disk
[INFO] saved dataset/fake/146.png to disk
[INFO] saved dataset/fake/147.png to disk
[INFO] saved dataset/fake/148.png to disk
[INFO] saved dataset/fake/149.png to disk

Similarly, run the following command to get face images for the "real" class:

$ python gather_examples.py --input videos/real.mov --output dataset/real --detector face_detector --skip 4
[INFO] loading face detector...
[INFO] saved dataset/real/0.png to disk
[INFO] saved dataset/real/1.png to disk
[INFO] saved dataset/real/2.png to disk
[INFO] saved dataset/real/3.png to disk
[INFO] saved dataset/real/4.png to disk
...
[INFO] saved dataset/real/156.png to disk
[INFO] saved dataset/real/157.png to disk
[INFO] saved dataset/real/158.png to disk
[INFO] saved dataset/real/159.png to disk
[INFO] saved dataset/real/160.png to disk

Note: make sure the two classes stay reasonably balanced.

After running both commands, count the images:

  • Fake: 150 images
  • Real: 161 images
  • Total: 311 images
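The balance check above can be automated; here is a sketch with mocked paths (in practice the training script derives them from paths.list_images on the dataset directory):

```python
from collections import Counter

# Count images per class from their directory names. The paths are
# mocked to mirror the dataset layout; real code would list the files.
imagePaths = (["dataset/fake/{}.png".format(i) for i in range(150)]
              + ["dataset/real/{}.png".format(i) for i in range(161)])
counts = Counter(p.split("/")[-2] for p in imagePaths)
# counts -> Counter({'real': 161, 'fake': 150}); roughly balanced
```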

Implementing "LivenessNet", the deep-learning liveness detection model

LivenessNet is really just a simple convolutional neural network. It is deliberately kept as shallow as possible, with as few parameters as possible, for two reasons:

  • to reduce the chance of overfitting;
  • to ensure the liveness detector can run in real time.

Open livenessnet.py and insert the following code:

# import the necessary packages
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dropout
from keras.layers.core import Dense
from keras import backend as K

class LivenessNet:
    @staticmethod
    def build(width, height, depth, classes):
        # initialize the model along with the input shape to be
        # "channels last" and the channels dimension itself
        model = Sequential()
        inputShape = (height, width, depth)
        chanDim = -1

        # if we are using "channels first", update the input shape
        # and channels dimension
        if K.image_data_format() == "channels_first":
            inputShape = (depth, height, width)
            chanDim = 1

        # first CONV => RELU => CONV => RELU => POOL layer set
        model.add(Conv2D(16, (3, 3), padding="same",
            input_shape=inputShape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(16, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        # second CONV => RELU => CONV => RELU => POOL layer set
        model.add(Conv2D(32, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(32, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        # first (and only) set of FC => RELU layers
        model.add(Flatten())
        model.add(Dense(64))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))

        # softmax classifier
        model.add(Dense(classes))
        model.add(Activation("softmax"))

        # return the constructed network architecture
        return model
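As a sanity check on the "few parameters" claim, the convolution and dense layers above can be tallied by hand (a rough sketch assuming a 32×32×3 input; BatchNormalization parameters are omitted for brevity):

```python
# Back-of-the-envelope parameter count for LivenessNet's conv/dense layers.
def conv_params(k, in_ch, filters):
    return (k * k * in_ch + 1) * filters      # kernel weights + biases

def dense_params(in_units, out_units):
    return (in_units + 1) * out_units         # weights + biases

total = (conv_params(3, 3, 16)        # 448
         + conv_params(3, 16, 16)     # 2,320
         + conv_params(3, 16, 32)     # 4,640
         + conv_params(3, 32, 32)     # 9,248
         + dense_params(8 * 8 * 32, 64)   # two 2x2 pools: 32x32 -> 8x8 spatial
         + dense_params(64, 2))           # softmax over real/fake
# total -> 147,922 parameters: tiny by modern CNN standards
```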

Creating the liveness detector training script

Open train_liveness.py and insert the following code:

# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from pyimagesearch.livenessnet import LivenessNet
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
from keras.utils import np_utils
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import pickle
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
    help="path to input dataset")
ap.add_argument("-m", "--model", type=str, required=True,
    help="path to trained model")
ap.add_argument("-l", "--le", type=str, required=True,
    help="path to label encoder")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
    help="path to output loss/accuracy plot")
args = vars(ap.parse_args())

This script accepts four command-line arguments: --dataset, --model, --le, and --plot.

The next code block performs initialization and builds the data:

# initialize the initial learning rate, batch size, and number of
# epochs to train for
INIT_LR = 1e-4
BS = 8
EPOCHS = 50

# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class labels
print("[INFO] loading images...")
imagePaths = list(paths.list_images(args["dataset"]))
data = []
labels = []

for imagePath in imagePaths:
    # extract the class label from the filename, load the image and
    # resize it to be a fixed 32x32 pixels, ignoring aspect ratio
    label = imagePath.split(os.path.sep)[-2]
    image = cv2.imread(imagePath)
    image = cv2.resize(image, (32, 32))

    # update the data and labels lists, respectively
    data.append(image)
    labels.append(label)

# convert the data into a NumPy array, then preprocess it by scaling
# all pixel intensities to the range [0, 1]
data = np.array(data, dtype="float") / 255.0

Next, one-hot encode the labels and split the data into training (75%) and testing (25%) sets:

# encode the labels (which are currently strings) as integers and then
# one-hot encode them
le = LabelEncoder()
labels = le.fit_transform(labels)
labels = np_utils.to_categorical(labels, 2)

# partition the data into training and testing splits using 75% of
# the data for training and the remaining 25% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=0.25, random_state=42)
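For readers unfamiliar with the encoding step, here is a numpy-only sketch of what LabelEncoder plus to_categorical do to the string labels (the actual script uses the scikit-learn and Keras utilities):

```python
import numpy as np

# String labels -> integer codes -> one-hot rows.
labels = np.array(["fake", "real", "real", "fake"])
classes, encoded = np.unique(labels, return_inverse=True)  # 'fake'->0, 'real'->1
one_hot = np.eye(len(classes))[encoded]
# one_hot[1] -> [0., 1.], i.e. a "real" sample
```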

Then augment the data, and compile and train the model:

# construct the training image generator for data augmentation
aug = ImageDataGenerator(rotation_range=20, zoom_range=0.15,
    width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15,
    horizontal_flip=True, fill_mode="nearest")

# initialize the optimizer and model
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model = LivenessNet.build(width=32, height=32, depth=3,
    classes=len(le.classes_))
model.compile(loss="binary_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# train the network
print("[INFO] training network for {} epochs...".format(EPOCHS))
H = model.fit_generator(aug.flow(trainX, trainY, batch_size=BS),
    validation_data=(testX, testY), steps_per_epoch=len(trainX) // BS,
    epochs=EPOCHS)
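A note on the decay argument: older Keras applies lr / (1 + decay * iterations) at each update, so with decay set to INIT_LR / EPOCHS the learning rate barely moves over the whole run. A small sketch of that schedule (the 29 steps/epoch figure comes from the training log below):

```python
# Effective learning-rate schedule under Keras' time-based decay.
INIT_LR = 1e-4
EPOCHS = 50
decay = INIT_LR / EPOCHS          # 2e-6

def lr_at(iteration):
    return INIT_LR / (1.0 + decay * iteration)

final_lr = lr_at(EPOCHS * 29)     # after all ~1,450 updates
# final_lr is still ~99.7% of INIT_LR
```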

After training, evaluate the model and generate a training plot:

# evaluate the network
print("[INFO] evaluating network...")
predictions = model.predict(testX, batch_size=BS)
print(classification_report(testY.argmax(axis=1),
    predictions.argmax(axis=1), target_names=le.classes_))

# save the network to disk
print("[INFO] serializing network to '{}'...".format(args["model"]))
model.save(args["model"])

# save the label encoder to disk
f = open(args["le"], "wb")
f.write(pickle.dumps(le))
f.close()

# plot the training loss and accuracy
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, EPOCHS), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, EPOCHS), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, EPOCHS), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, EPOCHS), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy on Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])

Training the liveness detector

Run the following command to start training:

$ python train_liveness.py --dataset dataset --model liveness.model --le le.pickle
[INFO] loading images...
[INFO] compiling model...
[INFO] training network for 50 epochs...
Epoch 1/50
29/29 [==============================] - 2s 58ms/step - loss: 1.0113 - acc: 0.5862 - val_loss: 0.4749 - val_acc: 0.7436
Epoch 2/50
29/29 [==============================] - 1s 21ms/step - loss: 0.9418 - acc: 0.6127 - val_loss: 0.4436 - val_acc: 0.7949
Epoch 3/50
29/29 [==============================] - 1s 21ms/step - loss: 0.8926 - acc: 0.6472 - val_loss: 0.3837 - val_acc: 0.8077
...
Epoch 48/50
29/29 [==============================] - 1s 21ms/step - loss: 0.2796 - acc: 0.9094 - val_loss: 0.0299 - val_acc: 1.0000
Epoch 49/50
29/29 [==============================] - 1s 21ms/step - loss: 0.3733 - acc: 0.8792 - val_loss: 0.0346 - val_acc: 0.9872
Epoch 50/50
29/29 [==============================] - 1s 21ms/step - loss: 0.2660 - acc: 0.9008 - val_loss: 0.0322 - val_acc: 0.9872
[INFO] evaluating network...
              precision    recall  f1-score   support

        fake       0.97      1.00      0.99        35
        real       1.00      0.98      0.99        43

   micro avg       0.99      0.99      0.99        78
   macro avg       0.99      0.99      0.99        78
weighted avg       0.99      0.99      0.99        78

[INFO] serializing network to 'liveness.model'...

From these results, the detector reaches 99% accuracy on the test set!
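Those headline numbers can be reconstructed from the confusion matrix the report implies (this reconstruction — all 35 fake faces caught, 1 of 43 real faces misflagged as fake — is an inference consistent with the printed precision/recall, not something the log states directly):

```python
# Recompute the report's numbers from the implied confusion matrix.
tp_fake, fp_fake = 35, 1          # the misflagged real face is a fake FP
tp_real, fn_real = 42, 1

precision_fake = tp_fake / (tp_fake + fp_fake)    # ~0.97
recall_real = tp_real / (tp_real + fn_real)       # ~0.98
accuracy = (tp_fake + tp_real) / 78               # ~0.99
```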

Putting it all together: liveness detection with OpenCV

The final step combines all the pieces:

  • access the webcam/video stream;
  • apply face detection to each frame;
  • for each detected face, apply the liveness detector model.

Open liveness_demo.py and insert the following code:

# import the necessary packages
from imutils.video import VideoStream
from keras.preprocessing.image import img_to_array
from keras.models import load_model
import numpy as np
import argparse
import imutils
import pickle
import time
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", type=str, required=True,
    help="path to trained model")
ap.add_argument("-l", "--le", type=str, required=True,
    help="path to label encoder")
ap.add_argument("-d", "--detector", type=str, required=True,
    help="path to OpenCV's deep learning face detector")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

The code above imports the necessary packages and parses the command-line arguments that locate the models.

Next, initialize the face detector, the LivenessNet model, and the video stream:

# load our serialized face detector from disk
print("[INFO] loading face detector...")
protoPath = os.path.sep.join([args["detector"], "deploy.prototxt"])
modelPath = os.path.sep.join([args["detector"],
    "res10_300x300_ssd_iter_140000.caffemodel"])
net = cv2.dnn.readNetFromCaffe(protoPath, modelPath)

# load the liveness detector model and label encoder from disk
print("[INFO] loading liveness detector...")
model = load_model(args["model"])
le = pickle.loads(open(args["le"], "rb").read())

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
time.sleep(2.0)

Then loop over every frame of the video to check whether each face is real:

# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 600 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=600)

    # grab the frame dimensions and convert it to a blob
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
        (300, 300), (104.0, 177.0, 123.0))

    # pass the blob through the network and obtain the detections and
    # predictions
    net.setInput(blob)
    detections = net.forward()

OpenCV's blobFromImage function builds a blob from the frame, which is then passed through the face detector network for inference. The core code follows:

    # loop over the detections
    for i in range(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with
        # the prediction
        confidence = detections[0, 0, i, 2]

        # filter out weak detections
        if confidence > args["confidence"]:
            # compute the (x, y)-coordinates of the bounding box for
            # the face and extract the face ROI
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            # ensure the detected bounding box does not fall outside
            # the dimensions of the frame
            startX = max(0, startX)
            startY = max(0, startY)
            endX = min(w, endX)
            endY = min(h, endY)

            # extract the face ROI and then preprocess it in the exact
            # same manner as our training data
            face = frame[startY:endY, startX:endX]
            face = cv2.resize(face, (32, 32))
            face = face.astype("float") / 255.0
            face = img_to_array(face)
            face = np.expand_dims(face, axis=0)

            # pass the face ROI through the trained liveness detector
            # model to determine if the face is "real" or "fake"
            preds = model.predict(face)[0]
            j = np.argmax(preds)
            label = le.classes_[j]

            # draw the label and bounding box on the frame
            label = "{}: {:.4f}".format(label, preds[j])
            cv2.putText(frame, label, (startX, startY - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
            cv2.rectangle(frame, (startX, startY), (endX, endY),
                (0, 0, 255), 2)

This first filters out weak detections, then extracts and preprocesses each face ROI exactly as during training before passing it to the liveness detector model to decide whether the face is "real" or "fake/spoofed". Finally, it draws the label and a bounding box on the frame, then displays the result and cleans up.

    # show the output frame and wait for a key press
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
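One subtle step in the detection loop above is worth isolating: the bounding-box clamp. A detection can extend past the frame edge, and slicing a numpy image with negative indices would silently wrap around to the other side. A standalone sketch:

```python
# Clamp a face bounding box to the frame, as the demo loop does.
def clamp_box(startX, startY, endX, endY, w, h):
    return (max(0, startX), max(0, startY), min(w, endX), min(h, endY))

# a face half off the left edge of a 600x450 frame:
box = clamp_box(-40, 120, 210, 480, w=600, h=450)
# box -> (0, 120, 210, 450)
```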

Applying the liveness detector to real-time video

Open a terminal and run:

$ python liveness_demo.py --model liveness.model --le le.pickle --detector face_detector
Using TensorFlow backend.
[INFO] loading face detector...
[INFO] loading liveness detector...
[INFO] starting video stream...

As you can see, the liveness detector successfully distinguishes real from fake faces. A longer demonstration is available in the accompanying video (video link).

Further work

The system described here has limitations, the main one being the limited dataset — only 311 images in total. One of the first extensions of this work would be to simply gather more training data: more people, and people of other skin tones and ethnicities.

Also, the liveness detector was trained only on on-screen spoof attacks; it has not been trained on printed images or photos. Adding more types of image sources is therefore recommended.

Finally, note that there is no single best liveness detection method, only the most suitable one for a given application. Good liveness detectors combine multiple detection methods.

Summary

In this tutorial you learned how to perform liveness detection with OpenCV. With this liveness detector you can spot fake faces and perform anti-face-spoofing in your own face recognition systems. Building it drew on OpenCV, deep learning, and Python. The overall process:

  • Step one: gather a real-vs-fake dataset. The data came from:
    • a video of yourself recorded with a smartphone (the "real" class);
    • a recording of that video replayed on a phone screen (the "fake" class);
    • face detection applied to both sets of videos to form the final dataset.
  • Step two: with the dataset in hand, implement the "LivenessNet" network, kept deliberately shallow to:
    • reduce the chance of overfitting the small dataset;
    • let the model run in real time.

Overall, the liveness detector reaches 99% accuracy on the validation set, and it also runs on real-time video streams.
