

LearnOpenCV Study Notes: The Average Face

Published: 2023/12/18

Original article: http://www.learnopencv.com/average-face-opencv-c-python-tutorial/

First, the result: the face on the far right is the average of the six faces on the left.

Figure 2 : Average face of US Presidents : Carter to Obama.

Implementation steps:

1. Facial landmark detection

Use dlib's landmark detector to obtain 68 keypoints on each face.

2. Coordinate transformation

Because the input images come in different sizes and the face regions also differ in size, the face images must be normalized with a coordinate transformation.
Here, each face region is warped to 600×600 pixels so that the left outer eye corner sits at (180, 200) and the right outer eye corner at (420, 200).
These positions were chosen because we want the center of the face at one third of the image height with the two eyes on a horizontal line, so we place the left eye corner at (0.3 × width, height / 3) and the right eye corner at (0.7 × width, height / 3).
In dlib's landmark detector, the left outer eye corner is point 36 and the right outer eye corner is point 45, as shown in the figure below:

These two eye corners are used to compute the transformation matrix for the image (a similarity transform), which is a 2×3 matrix. If we want to transform a rectangle by scaling it by a factor s, rotating it by an angle θ, and then translating it by tx in x and ty in y, the transform matrix is:

    M = | s·cosθ   −s·sinθ   tx |
        | s·sinθ    s·cosθ   ty |

The first two columns handle scaling and rotation; the third column is the translation.
Given a point (x, y), the similarity transform above moves it to the point (x′, y′) using:

    x′ = s·cosθ·x − s·sinθ·y + tx
    y′ = s·sinθ·x + s·cosθ·y + ty
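As a quick sanity check on this matrix form, here is a small NumPy sketch (the function names and sample values are my own, not from the tutorial) that builds the 2×3 matrix and pushes a point through it:

```python
import numpy as np

def similarity_matrix(s, theta, tx, ty):
    """Build the 2x3 similarity transform: scale s, rotation theta, translation (tx, ty)."""
    c, si = s * np.cos(theta), s * np.sin(theta)
    return np.array([[c, -si, tx],
                     [si,  c, ty]], dtype=np.float64)

def apply_similarity(M, pt):
    """Map a 2D point through the 2x3 matrix using homogeneous coordinates."""
    x, y = pt
    return M @ np.array([x, y, 1.0])

M = similarity_matrix(s=2.0, theta=0.0, tx=10.0, ty=5.0)
print(apply_similarity(M, (3.0, 4.0)))  # scale by 2, then translate: [16. 13.]
```

With θ = 0 the matrix reduces to pure scale-plus-shift, which makes the expected output easy to verify by hand.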

OpenCV's estimateRigidTransform can compute such a matrix, but unfortunately it requires at least three pairs of points, so we have to construct a third point ourselves. We choose it so that it forms an equilateral triangle with the two existing points: rotating the first point 60° about the second gives the third point's coordinates:

    x3 = cos60°·(x1 − x2) − sin60°·(y1 − y2) + x2
    y3 = sin60°·(x1 − x2) + cos60°·(y1 − y2) + y2
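The construction can be checked with a few lines of plain Python: rotating one eye corner 60° about the other yields a point equidistant from both, so the three points form an equilateral triangle. (The coordinates below are the target eye-corner positions from step 2.)

```python
import math

def third_point(p1, p2):
    """Rotate p1 by 60 degrees around p2 to complete an equilateral triangle."""
    s60, c60 = math.sin(math.pi / 3), math.cos(math.pi / 3)
    x = c60 * (p1[0] - p2[0]) - s60 * (p1[1] - p2[1]) + p2[0]
    y = s60 * (p1[0] - p2[0]) + c60 * (p1[1] - p2[1]) + p2[1]
    return (x, y)

left_eye, right_eye = (180.0, 200.0), (420.0, 200.0)
p3 = third_point(left_eye, right_eye)

dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
# All three side lengths come out equal (240 pixels), confirming the triangle is equilateral.
print(dist(left_eye, right_eye), dist(left_eye, p3), dist(right_eye, p3))
```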

Once the similarity transform matrix has been computed, it can be used to transform both the image and its landmarks: the image is transformed with warpAffine and the points with cv2.transform.

3. Face alignment

After step 2, all images are the same size and the two eyes are in the same positions. If we computed the average face now, we would get the result below: only the eyes are aligned, and because none of the other points line up, the average is badly blurred.

Figure 5 : Result of naive face averaging

How do we align the remaining points? We already know the positions of the 68 points in each input image; if we also knew their positions in the output image, alignment would be straightforward. Unfortunately, apart from the eyes, whose positions we fixed in advance, it is hard to give a sensible predefined position for any other point.
The solution is Delaunay triangulation, which works as follows:

(1) Calculate mean face points
Average the landmark positions over the N similarity-transformed output images: the i-th landmark of the average face is the mean of the i-th landmarks of all the transformed images.
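Computing the mean landmarks is a single reduction over the per-image landmark arrays. A minimal NumPy sketch with toy coordinates (real usage has N images × 76 points):

```python
import numpy as np

# Toy data: 3 images, 4 landmarks each.
landmarks = np.array([
    [[10, 10], [20, 10], [10, 20], [20, 20]],   # image 1
    [[12, 11], [22, 11], [12, 21], [22, 21]],   # image 2
    [[11, 12], [21, 12], [11, 22], [21, 22]],   # image 3
], dtype=np.float32)

# The i-th average landmark is the mean of the i-th landmark across all images.
points_avg = landmarks.mean(axis=0)
print(points_avg[0])  # -> [11. 11.]
```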

(2) Calculate Delaunay triangulation
Using the 68 landmarks of the average face plus 8 points on the image boundary, compute a Delaunay triangulation, as shown in the figure below. The result is a list of triangles, each represented by indices into the 76-point array (68 face points + 8 boundary points).
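The tutorial builds this index list by matching cv2.Subdiv2D output back to the points array (see the full code below). For illustration, scipy.spatial.Delaunay produces the same index-triple representation directly; here it is run on a toy point set standing in for the 76 landmarks:

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy point set: 4 corners of a square plus its centre.
pts = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [50, 50]], dtype=np.float64)

tri = Delaunay(pts)
# tri.simplices lists triangles as index triples into `pts` -- the same
# representation the tutorial's calculateDelaunayTriangles() returns.
print(tri.simplices)
```

For this configuration the centre point connects to all four corners, giving four triangles.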

Figure 6 : Delaunay Triangulation of average landmark points.

(3) Warp triangles
Compute the Delaunay triangulation for each input image (after its similarity transform) and for the average face, as in the left and middle images of Figure 7. Triangle 1 in the left image corresponds to triangle 1 in the middle image, and from this pair of triangles we can compute an affine transform that maps every pixel inside triangle 1 of the input image to triangle 1 of the average face. Repeating this for every triangle produces the result shown in the right image.

Figure 7 : Image Warping based on Delaunay Triangulation.
The left image shows Delaunay triangles on the transformed input image.
The middle image shows the triangulation on the average landmarks.
The right image is simply the left image warped to the average face.
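The per-triangle affine transform is fully determined by the three vertex correspondences (the tutorial obtains it from cv2.getAffineTransform). The same 2×3 matrix can be recovered with a small linear solve, shown here on made-up triangles:

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Solve for the 2x3 affine M with M @ [x, y, 1] = [x', y'] at all 3 vertices."""
    A = np.hstack([np.asarray(src_tri, dtype=np.float64),
                   np.ones((3, 1))])            # 3x3: rows are (x, y, 1)
    B = np.asarray(dst_tri, dtype=np.float64)   # 3x2: rows are (x', y')
    # A @ M.T = B, so solve for M.T and transpose.
    return np.linalg.solve(A, B).T

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (4, 3), (2, 5)]  # src scaled by 2 and shifted by (2, 3)
M = affine_from_triangles(src, dst)
print(M)  # -> [[2. 0. 2.], [0. 2. 3.]]
```

Applying this matrix to every pixel of one triangle carries it onto the corresponding triangle of the average face.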

4. Face averaging

After warping, sum the corresponding pixel intensities of all the warped faces and divide by the number of faces; the result is the average face image.
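With every warped face already converted to float in [0, 1] (as readImages does), the final step is just pixel-wise accumulation and division. A toy sketch with 2×2 stand-in "images":

```python
import numpy as np

# Toy stand-ins for warped face images, already float32 in [0, 1].
faces = [
    np.array([[0.2, 0.4], [0.6, 0.8]], dtype=np.float32),
    np.array([[0.4, 0.6], [0.8, 1.0]], dtype=np.float32),
]

# Sum pixel intensities, then divide by the number of faces.
output = sum(faces) / len(faces)
print(output)  # -> [[0.3 0.5], [0.7 0.9]]
```

Accumulating in float avoids the clipping and rounding that integer uint8 arithmetic would introduce.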

The original author's results:

Figure 8 : Facial Average of Mark Zuckerberg, Larry Page, Elon Musk and Jeff Bezos


Figure 9 : Facial Average of last four best actress winners : Brie Larson, Julianne Moore, Cate Blanchett and Jennifer Lawrence

One more fun experiment: averaging an image of Obama with its left-right mirror image produces a symmetric face, shown below:

Figure 10 : President Obama made symmetric (center) by averaging his image (left) with its mirror reflection (right).

Complete code:

#!/usr/bin/env python

# Copyright (c) 2016 Satya Mallick <spmallick@learnopencv.com>
# All rights reserved. No warranty, explicit or implicit, provided.
# (Lightly updated for Python 3 / OpenCV 4: range instead of xrange,
#  int instead of the removed np.int, and estimateAffinePartial2D in
#  place of the removed estimateRigidTransform.)

import os
import math
import cv2
import numpy as np

# Read points from text files in directory
def readPoints(path):
    pointsArray = []
    for filePath in os.listdir(path):
        if filePath.endswith(".txt"):
            points = []
            with open(os.path.join(path, filePath)) as file:
                for line in file:
                    x, y = line.split()
                    points.append((int(x), int(y)))
            pointsArray.append(points)
    return pointsArray

# Read all jpg images in folder.
def readImages(path):
    imagesArray = []
    for filePath in os.listdir(path):
        if filePath.endswith(".jpg"):
            img = cv2.imread(os.path.join(path, filePath))
            # Convert to floating point in [0, 1]
            img = np.float32(img) / 255.0
            imagesArray.append(img)
    return imagesArray

# Compute similarity transform given two sets of two points.
# OpenCV requires 3 pairs of corresponding points, so we fake the
# third one by completing an equilateral triangle.
def similarityTransform(inPoints, outPoints):
    s60 = math.sin(60 * math.pi / 180)
    c60 = math.cos(60 * math.pi / 180)

    inPts = np.copy(inPoints).tolist()
    outPts = np.copy(outPoints).tolist()

    xin = c60 * (inPts[0][0] - inPts[1][0]) - s60 * (inPts[0][1] - inPts[1][1]) + inPts[1][0]
    yin = s60 * (inPts[0][0] - inPts[1][0]) + c60 * (inPts[0][1] - inPts[1][1]) + inPts[1][1]
    inPts.append([int(xin), int(yin)])

    xout = c60 * (outPts[0][0] - outPts[1][0]) - s60 * (outPts[0][1] - outPts[1][1]) + outPts[1][0]
    yout = s60 * (outPts[0][0] - outPts[1][0]) + c60 * (outPts[0][1] - outPts[1][1]) + outPts[1][1]
    outPts.append([int(xout), int(yout)])

    # estimateRigidTransform was removed in OpenCV 4; estimateAffinePartial2D
    # computes the same 4-DOF similarity transform.
    tform, _ = cv2.estimateAffinePartial2D(np.array([inPts]), np.array([outPts]))
    return tform

# Check if a point is inside a rectangle
def rectContains(rect, point):
    if point[0] < rect[0]:
        return False
    elif point[1] < rect[1]:
        return False
    elif point[0] > rect[2]:
        return False
    elif point[1] > rect[3]:
        return False
    return True

# Calculate Delaunay triangles
def calculateDelaunayTriangles(rect, points):
    # Create subdiv and insert points into it
    subdiv = cv2.Subdiv2D(rect)
    for p in points:
        subdiv.insert((float(p[0]), float(p[1])))

    # List of triangles. Each triangle is a list of 3 points (6 numbers).
    triangleList = subdiv.getTriangleList()

    # Find the indices of triangles in the points array
    delaunayTri = []
    for t in triangleList:
        pt = [(t[0], t[1]), (t[2], t[3]), (t[4], t[5])]
        pt1 = (t[0], t[1])
        pt2 = (t[2], t[3])
        pt3 = (t[4], t[5])
        if rectContains(rect, pt1) and rectContains(rect, pt2) and rectContains(rect, pt3):
            ind = []
            for j in range(0, 3):
                for k in range(0, len(points)):
                    if abs(pt[j][0] - points[k][0]) < 1.0 and abs(pt[j][1] - points[k][1]) < 1.0:
                        ind.append(k)
            if len(ind) == 3:
                delaunayTri.append((ind[0], ind[1], ind[2]))
    return delaunayTri

def constrainPoint(p, w, h):
    p = (min(max(p[0], 0), w - 1), min(max(p[1], 0), h - 1))
    return p

# Apply affine transform calculated using srcTri and dstTri to src and
# output an image of the given size.
def applyAffineTransform(src, srcTri, dstTri, size):
    # Given a pair of triangles, find the affine transform.
    warpMat = cv2.getAffineTransform(np.float32(srcTri), np.float32(dstTri))
    # Apply the affine transform just found to the src image
    dst = cv2.warpAffine(src, warpMat, (size[0], size[1]), None,
                         flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101)
    return dst

# Warps and alpha-blends triangular regions from img1 into img2
def warpTriangle(img1, img2, t1, t2):
    # Find bounding rectangle for each triangle
    r1 = cv2.boundingRect(np.float32([t1]))
    r2 = cv2.boundingRect(np.float32([t2]))

    # Offset points by left top corner of the respective rectangles
    t1Rect = []
    t2Rect = []
    t2RectInt = []
    for i in range(0, 3):
        t1Rect.append(((t1[i][0] - r1[0]), (t1[i][1] - r1[1])))
        t2Rect.append(((t2[i][0] - r2[0]), (t2[i][1] - r2[1])))
        t2RectInt.append(((t2[i][0] - r2[0]), (t2[i][1] - r2[1])))

    # Get mask by filling triangle
    mask = np.zeros((r2[3], r2[2], 3), dtype=np.float32)
    cv2.fillConvexPoly(mask, np.int32(t2RectInt), (1.0, 1.0, 1.0), 16, 0)

    # Apply warp to the small rectangular patch
    img1Rect = img1[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    size = (r2[2], r2[3])
    img2Rect = applyAffineTransform(img1Rect, t1Rect, t2Rect, size)
    img2Rect = img2Rect * mask

    # Copy triangular region of the rectangular patch to the output image
    img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] = \
        img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] * ((1.0, 1.0, 1.0) - mask)
    img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] = \
        img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] + img2Rect

if __name__ == '__main__':
    path = 'presidents/'

    # Dimensions of output image
    w = 600
    h = 600

    # Read points and images for all faces
    allPoints = readPoints(path)
    images = readImages(path)

    # Target eye corners in the output image
    eyecornerDst = [(int(0.3 * w), int(h / 3)), (int(0.7 * w), int(h / 3))]

    imagesNorm = []
    pointsNorm = []

    # Add boundary points for Delaunay triangulation
    boundaryPts = np.array([(0, 0), (w / 2, 0), (w - 1, 0), (w - 1, h / 2),
                            (w - 1, h - 1), (w / 2, h - 1), (0, h - 1), (0, h / 2)])

    # Initialize location of average points to 0s
    pointsAvg = np.array([(0, 0)] * (len(allPoints[0]) + len(boundaryPts)), np.float32)

    n = len(allPoints[0])
    numImages = len(images)

    # Warp images and transform landmarks to the output coordinate system,
    # and find the average of the transformed landmarks.
    for i in range(0, numImages):
        points1 = allPoints[i]

        # Corners of the eyes in the input image
        eyecornerSrc = [allPoints[i][36], allPoints[i][45]]

        # Compute similarity transform
        tform = similarityTransform(eyecornerSrc, eyecornerDst)

        # Apply similarity transform to the image
        img = cv2.warpAffine(images[i], tform, (w, h))

        # Apply similarity transform to the points
        points2 = np.reshape(np.array(points1), (68, 1, 2))
        points = cv2.transform(points2, tform)
        points = np.float32(np.reshape(points, (68, 2)))

        # Append boundary points; used in Delaunay triangulation
        points = np.append(points, boundaryPts, axis=0)

        # Accumulate the average landmark locations
        pointsAvg = pointsAvg + points / numImages

        pointsNorm.append(points)
        imagesNorm.append(img)

    # Delaunay triangulation of the average landmarks
    rect = (0, 0, w, h)
    dt = calculateDelaunayTriangles(rect, np.array(pointsAvg))

    # Output image
    output = np.zeros((h, w, 3), np.float32)

    # Warp input images to average image landmarks
    for i in range(0, len(imagesNorm)):
        img = np.zeros((h, w, 3), np.float32)
        # Transform triangles one by one
        for j in range(0, len(dt)):
            tin = []
            tout = []
            for k in range(0, 3):
                pIn = pointsNorm[i][dt[j][k]]
                pIn = constrainPoint(pIn, w, h)
                pOut = pointsAvg[dt[j][k]]
                pOut = constrainPoint(pOut, w, h)
                tin.append(pIn)
                tout.append(pOut)
            warpTriangle(imagesNorm[i], img, tin, tout)

        # Add image intensities for averaging
        output = output + img

    # Divide by numImages to get the average
    output = output / numImages

    # Display result
    cv2.imshow('image', output)
    cv2.waitKey(0)

