
OpenCV Study Diary 5

2017-05-27 10:44:35  1000sprites  2339 reads

Column: Computer Vision

Copyright notice: this is an original post by the blogger, licensed under CC 4.0 BY-SA; please include a link to the original source and this notice when reposting.

Original link: https://blog.csdn.net/shengshengwang/article/details/72779289

1. solvePnP, cvPOSIT (deprecated), solvePnPRansac [1][2]

Explanation: given a set of 3D object points, the corresponding 2D image points, and the camera intrinsics, these functions compute the 3D pose of the object. Both solvePnP and cvPOSIT output a rotation and a translation vector, but solvePnP gives an exact solution while cvPOSIT gives an approximation. solvePnP calls cvFindExtrinsicCameraParams2, solving for the unknown extrinsics from the known intrinsics, whereas cvPOSIT approximates the perspective projection model with an affine projection model and iterates toward an estimate (the algorithm may fail to converge when the depth variation of the object is large relative to its distance from the camera).

The prototypes of solvePnP and solvePnPRansac are:

(1) cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) → retval, rvec, tvec

(2) cv2.solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, iterationsCount[, reprojectionError[, minInliersCount[, inliers[, flags]]]]]]]]) → rvec, tvec, inliers


2. Epipolar Geometry

Explanation:

In a binocular stereo vision system, two cameras photograph the same physical point from different angles, producing one image point in each of the two images. Stereo matching means: given one of those image points, find its corresponding point in the other image. The epipolar constraint is a commonly used matching constraint; it is a point-to-line constraint that reduces the search for a corresponding point from the whole image to a single line. An example:

import cv2
import numpy as np
from matplotlib import pyplot as plt

img1 = cv2.imread('myleft.jpg', 0)   # queryimage, left image
img2 = cv2.imread('myright.jpg', 0)  # trainimage, right image

sift = cv2.SIFT()  # OpenCV 2.4 API; in OpenCV >= 4.4 use cv2.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

good = []
pts1 = []
pts2 = []

# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.8 * n.distance:
        good.append(m)
        pts2.append(kp2[m.trainIdx].pt)
        pts1.append(kp1[m.queryIdx].pt)

pts1 = np.float32(pts1)
pts2 = np.float32(pts2)
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)

# select only inlier points
pts1 = pts1[mask.ravel() == 1]
pts2 = pts2[mask.ravel() == 1]

def drawlines(img1, img2, lines, pts1, pts2):
    ''' img1 - image on which we draw the epilines for the points in img2
        lines - corresponding epilines '''
    r, c = img1.shape
    img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
    img2 = cv2.cvtColor(img2, cv2.COLOR_GRAY2BGR)
    for r, pt1, pt2 in zip(lines, pts1, pts2):
        color = tuple(np.random.randint(0, 255, 3).tolist())
        x0, y0 = map(int, [0, -r[2] / r[1]])
        x1, y1 = map(int, [c, -(r[2] + r[0] * c) / r[1]])
        img1 = cv2.line(img1, (x0, y0), (x1, y1), color, 1)
        img1 = cv2.circle(img1, tuple(pt1), 5, color, -1)
        img2 = cv2.circle(img2, tuple(pt2), 5, color, -1)
    return img1, img2

# find epilines corresponding to points in the right image (second image)
# and draw them on the left image
lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1, 1, 2), 2, F)
lines1 = lines1.reshape(-1, 3)
img5, img6 = drawlines(img1, img2, lines1, pts1, pts2)

# find epilines corresponding to points in the left image (first image)
# and draw them on the right image
lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
lines2 = lines2.reshape(-1, 3)
img3, img4 = drawlines(img2, img1, lines2, pts2, pts1)

plt.subplot(121), plt.imshow(img5)
plt.subplot(122), plt.imshow(img3)
plt.show()

The output is as follows:

Note: the prototypes of findFundamentalMat and computeCorrespondEpilines are:

(1) cv2.findFundamentalMat(points1, points2[, method[, param1[, param2[, mask]]]]) → retval, mask

(2) cv2.computeCorrespondEpilines(points, whichImage, F[, lines]) → lines
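The constraint these functions encode can be checked numerically: for a fundamental matrix F and corresponding homogeneous points x1, x2, we have x2ᵀ F x1 = 0, and F x1 is the epipolar line in the second image. A small numpy sketch with a made-up camera pair (all numbers are illustrative):

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]x, so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

# Hypothetical stereo pair: shared intrinsics K, second camera rotated
# 0.1 rad about the y axis with baseline t along x.
K = np.array([[700., 0., 320.],
              [0., 700., 240.],
              [0., 0., 1.]])
a = 0.1
R = np.array([[np.cos(a), 0., np.sin(a)],
              [0., 1., 0.],
              [-np.sin(a), 0., np.cos(a)]])
t = np.array([1., 0., 0.])

# Fundamental matrix from the calibrated geometry: F = K^-T [t]x R K^-1.
Kinv = np.linalg.inv(K)
F = Kinv.T @ skew(t) @ R @ Kinv

# Project one 3D world point into both cameras (homogeneous pixels).
X = np.array([0.3, -0.2, 4.0])
x1 = K @ X              # camera 1 sits at the world origin
x2 = K @ (R @ X + t)    # camera 2 pose: x_cam2 = R x_world + t

# Epipolar constraint: x2^T F x1 = 0, and F @ x1 is the epipolar line
# in image 2 on which x2 must lie.
print(x2 @ F @ x1)      # ~0 up to floating-point error
```

This is exactly why the epipolar constraint shrinks the matching search from the whole image to one line: x2 can only lie on the line F x1.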


3. Depth Maps from Stereo Images

Explanation: given two images of the same scene, the depth information of the scene can be recovered. Building a depth map from a stereo pair:

import cv2
from matplotlib import pyplot as plt

imgL = cv2.imread('tsukuba_l.png', 0)
imgR = cv2.imread('tsukuba_r.png', 0)
stereo = cv2.createStereoBM(numDisparities=16, blockSize=15)  # cv2.StereoBM_create in OpenCV >= 3.1
disparity = stereo.compute(imgL, imgR)
plt.imshow(disparity, 'gray')
plt.show()

The output is as follows:

Note: the left side is the original image and the right side is the depth image. The noise in the result can be reduced by tuning numDisparities and blockSize. The prototype of createStereoBM is cv2.createStereoBM([numDisparities[, blockSize]]) → retval.
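The disparity values produced above relate to metric depth through the stereo geometry Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A quick numpy illustration with made-up numbers:

```python
import numpy as np

f = 700.0   # focal length in pixels (made-up value)
B = 0.06    # baseline in metres (made-up, ~6 cm)

# A tiny fake disparity map; larger disparity means a closer object.
disparity = np.array([[10., 20.],
                      [40., 70.]])

depth = f * B / disparity   # depth in metres
print(depth)                # → [[4.2 2.1] [1.05 0.6]]
```

Note the inverse relationship: depth resolution degrades quickly for distant objects, since a one-pixel disparity error at small d corresponds to a large change in Z.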


4. The BRIEF Descriptor [5]

Explanation: BRIEF (Binary Robust Independent Elementary Features) is a fast method for computing and matching feature point descriptors. It does not provide a way to detect the features themselves; the original paper recommends the CenSurE feature detector. BRIEF is neither rotation invariant nor scale invariant, and it is sensitive to noise. An example:

import cv2

img = cv2.imread('simple.jpg', 0)
# initiate STAR detector (OpenCV 2.4 API; in 3.x these live in
# cv2.xfeatures2d as StarDetector_create / BriefDescriptorExtractor_create)
star = cv2.FeatureDetector_create("STAR")
# initiate BRIEF extractor
brief = cv2.DescriptorExtractor_create("BRIEF")
# find the keypoints with STAR
kp = star.detect(img, None)
# compute the descriptors with BRIEF
kp, des = brief.compute(img, kp)
print(brief.getInt('bytes'))
print(des.shape)

Note: in OpenCV the CenSurE detector is called the STAR detector.
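Because BRIEF descriptors are bit strings, matching them reduces to the Hamming distance: XOR the descriptor bytes and count the set bits. A small sketch, independent of OpenCV, with two made-up 32-byte (256-bit) descriptors:

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two uint8 descriptor arrays."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

# Two made-up 32-byte descriptors, the size BRIEF produces by default.
a = np.zeros(32, dtype=np.uint8)
b = np.zeros(32, dtype=np.uint8)
b[0] = 0b00000111   # differ in 3 bits of the first byte
b[5] = 0b10000000   # and in 1 bit of the sixth byte

print(hamming(a, b))   # → 4
```

This is why binary descriptors match so quickly: XOR and popcount are single machine instructions, versus the floating-point distances needed for SIFT-style descriptors (in practice one would use cv2.BFMatcher with cv2.NORM_HAMMING rather than a hand-rolled loop).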


5. opencv-4.0.0 on Windows 7: ImportError: ERROR: recursion is detected during loading of "cv2" binary extensions. Check OpenCV installation

Explanation:

(1) Rename D:\opencv-4.0.0\build\python\cv2\python-3.6\cv2.cp36-win_amd64.pyd to cv2.pyd and copy it to the D:\Anaconda3\Lib\site-packages directory.

(2) Copy the .dll files under D:\opencv-4.0.0\build\x64\vc15\bin to the D:\Anaconda3\Library\bin directory.


References:

[1] Camera Calibration and 3D Reconstruction: http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html

[2] 3D pose: solvePnP vs cvPOSIT: http://blog.csdn.net/abc20002929/article/details/8520063

[3] Epipolar constraint: http://blog.csdn.net/tianwaifeimao/article/details/19544861

[4] createStereoBM: http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#cv2.createStereoBM

[5] Feature engineering: BRIEF: http://dnntool.com/2017/03/27/brief/
