Camera Calibration with OpenCV

1. Calibration with OpenCV
Cameras have been around for a long time, but with the introduction of cheap pinhole cameras in the late 20th century they became a common part of our everyday life. Unfortunately, this cheapness comes with a price: significant distortion. Luckily, these distortions are constant, and with calibration and some remapping we can correct them. Furthermore, with calibration you can also determine the relation between the camera's pixels and real-world units such as millimeters.
Principle:

For the distortion, OpenCV takes into account the radial and tangential lens distortions.
For the radial distortion the following formulas are used:

x_corrected = x (1 + k1 r^2 + k2 r^4 + k3 r^6)
y_corrected = y (1 + k1 r^2 + k2 r^4 + k3 r^6)
So for an old pixel point (x, y) in the input image, its position in the corrected output image will be (x_corrected, y_corrected). The presence of radial distortion manifests itself as a "barrel" or "fish-eye" effect.
Tangential distortion occurs because the lens is not perfectly parallel to the imaging plane. It can be corrected via the formulas:

x_corrected = x + [2 p1 x y + p2 (r^2 + 2 x^2)]
y_corrected = y + [p1 (r^2 + 2 y^2) + 2 p2 x y]
So we have five distortion parameters, which in OpenCV are presented as a one-row matrix with five columns:

distortion_coefficients = (k1  k2  p1  p2  k3)
Now for the unit conversion we use the following formula:

[x]   [fx  0   cx] [X]
[y] = [0   fy  cy] [Y]
[w]   [0   0   1 ] [Z]
Here the presence of w is explained by the use of homogeneous coordinates (and w = Z). The unknown parameters are fx and fy (the camera focal lengths) and (cx, cy), the optical center expressed in pixel coordinates. If a common focal length is used for both axes with a given aspect ratio a (usually 1), then fy = fx * a and in the formula above we have a single focal length f, i.e. fx = fy = f. The matrix containing these four parameters is referred to as the camera matrix. While the distortion coefficients are the same regardless of the camera resolution used, the camera matrix parameters should be scaled together with the current resolution relative to the calibrated resolution.
The process of determining these two matrices is the calibration. These parameters are computed through basic geometrical equations, whose form depends on the chosen calibration object. Currently OpenCV supports three types of objects for calibration:

- the classical black-white chessboard
- a symmetrical circle pattern
- an asymmetrical circle pattern
Basically, you need to take snapshots of these patterns with your camera and let OpenCV find them. Each found pattern results in a new equation. To solve the equations you need at least a predetermined number of pattern snapshots to form a well-posed equation system. This number is higher for the chessboard pattern and smaller for the circle ones. For example, in theory the chessboard pattern requires at least two snapshots. However, in practice our input images contain a fair amount of noise, so for good results you will probably need at least 10 good snapshots of the pattern taken from different positions.
Goal:

The sample application will:
- determine the distortion matrix
- determine the camera matrix
- take input from a camera, a video file or an image list
- read the configuration from a file
- save the results into XML/YAML files
- calculate the re-projection error
Source code:

You may also find the source code in the samples/cpp/tutorial_code/calib3d/camera_calibration/ folder of the OpenCV source library, or download it from here.
The program has a single argument: the name of its configuration file. If none is given, it will try to open the one named "default.xml". Here's a sample configuration file in XML format.
In the configuration file you may choose to use a camera, a video file or an image list as input. If you opt for the last one, you will need to create a configuration file in which you enumerate the images to use. Here's an example of this. The important part to remember is that the images need to be specified using either an absolute path or a path relative to your application's working directory. You may find all this in the samples directory mentioned above.
The application starts up by reading the settings from the configuration file. Although this is an important part of it, it has nothing to do with the subject of this tutorial: camera calibration. Therefore, I've chosen not to post the code for that part here. Technical background on how to do this can be found in the "File Input and Output using XML and YAML files" tutorial.
Explanation:

1. Read the settings.
Settings s;
const string inputSettingsFile = argc > 1 ? argv[1] : "default.xml";
FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settings
if (!fs.isOpened())
{
    cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
    return -1;
}
fs["Settings"] >> s;
fs.release();                                         // close Settings file

if (!s.goodInput)
{
    cout << "Invalid input detected. Application stopping. " << endl;
    return -1;
}

(Settings is the settings class.)
For this I've used simple OpenCV class input operations. After reading the file I have an additional post-processing function that checks the validity of the input. Only if all inputs are good will the goodInput variable be true.
Get the next input; if it fails or we have enough of them, calibrate. After this we have a big loop where we do the following operations: get the next image from the image list, camera or video file. If this fails or we have enough images, we run the calibration process. In the case of an image list we step out of the loop; otherwise the remaining frames will be undistorted (if the option is set) by switching from DETECTION mode to CALIBRATED mode.
for(int i = 0;;++i)
{
    Mat view;
    bool blinkOutput = false;

    view = s.nextImage();

    //----- If no more image, or got enough, then stop calibration and show result -------------
    if( mode == CAPTURING && imagePoints.size() >= (unsigned)s.nrFrames )
    {
        if( runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints))
            mode = CALIBRATED;
        else
            mode = DETECTION;
    }
    if(view.empty())          // If no more images then run calibration, save and stop loop.
    {
        if( imagePoints.size() > 0 )
            runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints);
        break;
    }
    imageSize = view.size();  // Format input image.
    if( s.flipVertical )    flip( view, view, 0 );

For some cameras we may need to flip the input image. Here we do this too.
Find the pattern in the current input. The formation of the equations mentioned above aims at finding the major patterns in the input: in the case of the chessboard these are the corners of the squares, and for the circles, well, the circles themselves. Their positions form the result, which will be written into the pointBuf vector.
vector<Point2f> pointBuf;

bool found;
switch( s.calibrationPattern ) // Find feature points on the input format
{
case Settings::CHESSBOARD:
    found = findChessboardCorners( view, s.boardSize, pointBuf,
        CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FAST_CHECK | CV_CALIB_CB_NORMALIZE_IMAGE);
    break;
case Settings::CIRCLES_GRID:
    found = findCirclesGrid( view, s.boardSize, pointBuf );
    break;
case Settings::ASYMMETRIC_CIRCLES_GRID:
    found = findCirclesGrid( view, s.boardSize, pointBuf, CALIB_CB_ASYMMETRIC_GRID );
    break;
}

Depending on the type of the input pattern you use either the findChessboardCorners or the findCirclesGrid function. For both of them you pass the current image and the size of the board, and you'll get the positions of the patterns. Furthermore, they return a boolean variable which states whether the pattern was found in the input (we only need to take into account those images where this is true!).
Then again, in the case of cameras we only take camera images when an input delay time has passed. This is done in order to allow the user to move the chessboard around and get different images. Similar images result in similar equations, and similar equations at the calibration step will form an ill-posed problem, so the calibration will fail. For square images the positions of the corners are only approximate. We may improve this by calling the cornerSubPix function, which will produce a better calibration result. After this we add a valid input's result to the imagePoints vector to collect all of the equations into a single container. Finally, for visualization feedback purposes we draw the found points on the input image using the drawChessboardCorners function.
if ( found)                // If done with success,
{
    // improve the found corners' coordinate accuracy for chessboard
    if( s.calibrationPattern == Settings::CHESSBOARD)
    {
        Mat viewGray;
        cvtColor(view, viewGray, CV_BGR2GRAY);
        cornerSubPix( viewGray, pointBuf, Size(11,11),
            Size(-1,-1), TermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1 ));
    }

    if( mode == CAPTURING &&  // For camera only take new samples after delay time
        (!s.inputCapture.isOpened() || clock() - prevTimestamp > s.delay*1e-3*CLOCKS_PER_SEC) )
    {
        imagePoints.push_back(pointBuf);
        prevTimestamp = clock();
        blinkOutput = s.inputCapture.isOpened();
    }

    // Draw the corners.
    drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
}

Show state and result to the user, plus command line control of the application. This part shows text output on the image.
//----------------------------- Output Text ------------------------------------------------
string msg = (mode == CAPTURING) ? "100/100" :
              mode == CALIBRATED ? "Calibrated" : "Press 'g' to start";
int baseLine = 0;
Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);
Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);

if( mode == CAPTURING )
{
    if(s.showUndistorsed)
        msg = format( "%d/%d Undist", (int)imagePoints.size(), s.nrFrames );
    else
        msg = format( "%d/%d", (int)imagePoints.size(), s.nrFrames );
}

putText( view, msg, textOrigin, 1, 1, mode == CALIBRATED ? GREEN : RED);

if( blinkOutput )
    bitwise_not(view, view);

If we ran the calibration and got the camera matrix with the distortion coefficients, we may want to correct the image using the undistort function:
//------------------------- Video capture output undistorted ------------------------------
if( mode == CALIBRATED && s.showUndistorsed )
{
    Mat temp = view.clone();
    undistort(temp, view, cameraMatrix, distCoeffs);
}
//------------------------------ Show image and check for input commands -------------------
imshow("Image View", view);

Then we wait for an input key: if it is u we toggle the distortion removal, if it is g we start the detection process again, and finally for the ESC key we quit the application:
char key = waitKey(s.inputCapture.isOpened() ? 50 : s.delay);
if( key == ESC_KEY )
    break;

if( key == 'u' && mode == CALIBRATED )
    s.showUndistorsed = !s.showUndistorsed;

if( s.inputCapture.isOpened() && key == 'g' )
{
    mode = CAPTURING;
    imagePoints.clear();
}

Show the distortion removal for the images too. When you work with an image list, it is not possible to remove the distortion inside the loop. Therefore, you must do this after the loop. Taking advantage of this, I'll now expand the undistort function, which in fact first calls initUndistortRectifyMap to find the transformation matrices and then performs the transformation with the remap function. Because after a successful calibration the map calculation needs to be done only once, using this expanded form you may speed up your application:
if( s.inputType == Settings::IMAGE_LIST && s.showUndistorsed )
{
    Mat view, rview, map1, map2;
    initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
        getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
        imageSize, CV_16SC2, map1, map2);

    for(int i = 0; i < (int)s.imageList.size(); i++ )
    {
        view = imread(s.imageList[i], 1);
        if(view.empty())
            continue;
        remap(view, rview, map1, map2, INTER_LINEAR);
        imshow("Image View", rview);
        char c = waitKey();
        if( c == ESC_KEY || c == 'q' || c == 'Q' )
            break;
    }
}
Calibration and save:
Because the calibration needs to be done only once per camera, it makes sense to save the result after a successful calibration. This way, later on you can just load these values into your program. Due to this we first run the calibration, and if it succeeds we save the result into an OpenCV-style XML or YAML file, depending on the extension you give in the configuration file.
Therefore in the first function we just split up these two processes. Because we want to save many of the calibration variables, we create these variables here and pass them on to both the calibration and the saving function. Again, I'll not show the saving part, as that has little in common with the calibration. Explore the source file to find out how and what:
bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs, vector<vector<Point2f> > imagePoints )
{
    vector<Mat> rvecs, tvecs;
    vector<float> reprojErrs;
    double totalAvgErr = 0;

    bool ok = runCalibration(s, imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs,
                             reprojErrs, totalAvgErr);
    cout << (ok ? "Calibration succeeded" : "Calibration failed")
         << ". avg re projection error = " << totalAvgErr ;

    if( ok )   // save only if the calibration was done with success
        saveCameraParams( s, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, reprojErrs,
                          imagePoints, totalAvgErr);
    return ok;
}

We do the calibration with the help of the calibrateCamera function. It has the following parameters:
- The object points. This is a vector of Point3f vectors that for each input image describes how the pattern should look. If we have a planar pattern (like a chessboard), we can simply set all Z coordinates to zero. This is a collection of the points where these important points are present. Because we use a single pattern for all the input images, we can calculate this just once and replicate it for all the other input views. We calculate the corner points with the calcBoardCornerPositions function as:
void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
                              Settings::Pattern patternType /*= Settings::CHESSBOARD*/)
{
    corners.clear();

    switch(patternType)
    {
    case Settings::CHESSBOARD:
    case Settings::CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; ++i )
            for( int j = 0; j < boardSize.width; ++j )
                corners.push_back(Point3f(float( j*squareSize ), float( i*squareSize ), 0));
        break;

    case Settings::ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(Point3f(float((2*j + i % 2)*squareSize), float(i*squareSize), 0));
        break;
    }
}

And then multiply it as:
vector<vector<Point3f> > objectPoints(1);
calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);
objectPoints.resize(imagePoints.size(), objectPoints[0]);

- The image points. This is a vector of Point2f vectors which for each input image contains the coordinates of the important points (corners for the chessboard and centers of the circles for the circle patterns). We have already collected this from the findChessboardCorners or findCirclesGrid function; we just need to pass it on.
- The size of the image acquired from the camera, video file or the images.
- The camera matrix. If we used the fixed aspect ratio option, we need to initialize fx accordingly (the code below fixes the fx/fy ratio at 1.0):
cameraMatrix = Mat::eye(3, 3, CV_64F);
if( s.flag & CV_CALIB_FIX_ASPECT_RATIO )
    cameraMatrix.at<double>(0,0) = 1.0;

- The distortion coefficient matrix. Initialize with zeros:
distCoeffs = Mat::zeros(8, 1, CV_64F);

- For all the views the function will calculate rotation and translation vectors which transform the object points (given in the model coordinate space) to the image points (given in the world coordinate space). The 7th and 8th parameters are output vectors of matrices containing at the i-th position the rotation and translation vectors for the i-th object point to the i-th image point.
- The final argument is the flag. Here you specify options such as fixing the aspect ratio for the focal length, assuming zero tangential distortion, or fixing the principal point.
double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs,
                             rvecs, tvecs, s.flag|CV_CALIB_FIX_K4|CV_CALIB_FIX_K5);

- The function returns the average re-projection error. This number gives a good estimation of the precision of the found parameters, and should be as close to zero as possible. Given the intrinsic, distortion, rotation and translation matrices, we may calculate the error for one view by using projectPoints to first transform the object points to image points. Then we calculate the absolute norm between what we got with our transformation and the corner/circle finding algorithm. To find the average error we calculate the arithmetical mean of the errors calculated for all the calibration images.
double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,
                                  const vector<vector<Point2f> >& imagePoints,
                                  const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                                  const Mat& cameraMatrix , const Mat& distCoeffs,
                                  vector<float>& perViewErrors)
{
    vector<Point2f> imagePoints2;
    int i, totalPoints = 0;
    double totalErr = 0, err;
    perViewErrors.resize(objectPoints.size());

    for( i = 0; i < (int)objectPoints.size(); ++i )
    {
        projectPoints( Mat(objectPoints[i]), rvecs[i], tvecs[i], cameraMatrix,  // project
                       distCoeffs, imagePoints2);
        err = norm(Mat(imagePoints[i]), Mat(imagePoints2), CV_L2);              // difference

        int n = (int)objectPoints[i].size();
        perViewErrors[i] = (float) std::sqrt(err*err/n);                        // save for this view
        totalErr        += err*err;                                             // sum it up
        totalPoints     += n;
    }

    return std::sqrt(totalErr/totalPoints);                                     // calculate the arithmetical mean
}
Results:
Let there be this input chessboard pattern, which has a size of 9 × 6. I've used an AXIS IP camera to create a couple of snapshots of the board and saved them into a VID5 directory. I've put this inside the images/CameraCalibration folder of my working directory and created the following VID5.XML file that describes which images to use:
<?xml version="1.0"?>
<opencv_storage>
<images>
images/CameraCalibration/VID5/xx1.jpg
images/CameraCalibration/VID5/xx2.jpg
images/CameraCalibration/VID5/xx3.jpg
images/CameraCalibration/VID5/xx4.jpg
images/CameraCalibration/VID5/xx5.jpg
images/CameraCalibration/VID5/xx6.jpg
images/CameraCalibration/VID5/xx7.jpg
images/CameraCalibration/VID5/xx8.jpg
</images>
</opencv_storage>

Then I passed images/CameraCalibration/VID5/VID5.XML as an input in the configuration file. Here's a chessboard pattern found during the runtime of the application:
After applying the distortion removal we get:
The same works for this asymmetrical circle pattern by setting the input width to 4 and height to 11. This time I've used a live camera feed by specifying its ID ("1") for the input. Here's how a detected pattern should look:
In both cases, in the specified output XML/YAML file you'll find the camera and distortion coefficient matrices:
<Camera_Matrix type_id="opencv-matrix">
  <rows>3</rows>
  <cols>3</cols>
  <dt>d</dt>
  <data>
    6.5746697944293521e+002 0. 3.1950000000000000e+002
    0. 6.5746697944293521e+002 2.3950000000000000e+002
    0. 0. 1.</data></Camera_Matrix>
<Distortion_Coefficients type_id="opencv-matrix">
  <rows>5</rows>
  <cols>1</cols>
  <dt>d</dt>
  <data>
    -4.1802327176423804e-001 5.0715244063187526e-001 0. 0.
    -5.7843597214487474e-001</data></Distortion_Coefficients>

Add these values as constants to your program, call the initUndistortRectifyMap and remap functions to remove distortion, and enjoy distortion-free inputs even from cheap, low-quality cameras.
>> Original OpenCV camera calibration tutorial:
http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
It finally worked. Things to note:
If VS2008 does not work, try VS2010, because the compiled libraries must match your compiler. If Debug does not work, try Release. In short, try several combinations.
Detailed walkthrough:
Place the files in the Debug folder.
Here in_VID5.xml contains the input parameters:
BoardSize_Width is the number of inner corners along the board's width and BoardSize_Height the number along its height. Square_Size is the size of one square in a user-defined coordinate system (usually the real size in mm).
VID5.xml is the image index.
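For reference, the relevant fragment of in_VID5.xml might look like the following. The field names follow the OpenCV sample's configuration schema, but the values here are illustrative (a 7×7-inner-corner board with 40 mm squares):

```xml
<?xml version="1.0"?>
<opencv_storage>
<Settings>
  <!-- Number of inner corners per row and column of the board -->
  <BoardSize_Width>7</BoardSize_Width>
  <BoardSize_Height>7</BoardSize_Height>
  <!-- Size of one square in user-defined units (e.g. mm) -->
  <Square_Size>40</Square_Size>
  <!-- Path to the image-list file -->
  <Input>"images/CameraCalibration/VID5/VID5.xml"</Input>
</Settings>
</opencv_storage>
```

This is the node read by fs["Settings"] >> s in the startup code above.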
The final output:
Six frames in total; the board is 7 corners wide and 7 corners high; square size 40; fixed aspect ratio 1; followed by the camera matrix, the distortion coefficients and the average re-projection error.
From the output you can read fx = 4.85×10^2, cx = 3.195×10^2, fy = 4.85×10^2, cy = 1.795×10^2,
k1 = -1.964×10^-2, k2 = -1.45×10^-1, k3 = 4.856×10^-1.
The above was done with the sample program shipped with OpenCV 3.1. You can also refer to the OpenCV 2.4.6 version (a modified opencv calibration program), but running it against OpenCV 3.1 produces memory errors.
>> Further reading
1. Software "Multi-baseline Close-range Photogrammetry Software", camera check X: reports "invalid photo"
2. Software "Photomodeler Scanner", camera calibrate
3. C++ photogrammetry, depending on OpenCV
Dependency libraries:
opengl32.lib
glu32.lib
glaux.lib
cximagecrtd.lib
cv.lib
highgui.lib
cxcore.lib
BLASd.lib
clapackd.lib
libf2cd.lib
tmglibd.lib
libumfpack.lib
libamd.lib
UI design (MFC UI library):
1. Create a new project and choose to add files.
Name it "H"; a project folder named H is created on drive D.
Select the default image group [0] and click OK to load the image group.
The image group consists of 34 images: eight calibration images (0-7) and twenty-six capture images (0-25).
Click marker-circle detection.
This completes the marker-circle detection; the marker circles come from a home-made calibration target.
After detection succeeds, the marker circles in each image are marked with red crosses; the number in the second column is the count of marker circles found per image. As can be seen, some marker circles were not correctly detected.
Then click camera calibration to solve for the camera's interior and exterior orientation elements.
Then clicking the "multi-view reconstruction" button produced an error.
4. Halcon
Zhang Zhengyou's camera calibration method
5. Matlab
Question 1: do initial values need to be set?
The initial values of the camera's iterative parameters do not need to be set.
Additional dependencies:
opencv_calib3d310d.lib
opencv_core310d.lib
opencv_features2d310d.lib
opencv_flann310d.lib
opencv_highgui310d.lib
opencv_imgcodecs310d.lib
opencv_imgproc310d.lib
opencv_ml310d.lib
opencv_objdetect310d.lib
opencv_photo310d.lib
opencv_shape310d.lib
opencv_stitching310d.lib
opencv_superres310d.lib
opencv_ts310d.lib
opencv_video310d.lib
opencv_videoio310d.lib
opencv_videostab310d.lib
Full list:
opencv_calib3d310d.lib opencv_core310d.lib opencv_features2d310d.lib opencv_flann310d.lib opencv_highgui310d.lib opencv_imgcodecs310d.lib opencv_imgproc310d.lib opencv_ml310d.lib opencv_objdetect310d.lib opencv_photo310d.lib opencv_shape310d.lib opencv_stitching310d.lib opencv_superres310d.lib opencv_ts310d.lib opencv_video310d.lib opencv_videoio310d.lib opencv_videostab310d.lib pcl_kdtree_debug.lib pcl_io_debug.lib pcl_search_debug.lib pcl_segmentation_debug.lib pcl_apps_debug.lib pcl_features_debug.lib pcl_filters_debug.lib pcl_visualization_debug.lib pcl_common_debug.lib pcl_kdtree_release.lib pcl_io_release.lib pcl_search_release.lib pcl_segmentation_release.lib pcl_apps_release.lib pcl_features_release.lib pcl_filters_release.lib pcl_visualization_release.lib pcl_common_release.lib flann_cpp_s-gd.lib boost_date_time-vc100-mt-1_49.lib boost_date_time-vc100-mt-gd-1_49.lib boost_filesystem-vc100-mt-1_49.lib boost_filesystem-vc100-mt-gd-1_49.lib boost_iostreams-vc100-mt-1_49.lib boost_iostreams-vc100-mt-gd-1_49.lib boost_serialization-vc100-mt-1_49.lib boost_serialization-vc100-mt-gd-1_49.lib boost_system-vc100-mt-1_49.lib boost_system-vc100-mt-gd-1_49.lib boost_thread-vc100-mt-1_49.lib boost_thread-vc100-mt-gd-1_49.lib boost_wserialization-vc100-mt-1_49.lib boost_wserialization-vc100-mt-gd-1_49.lib libboost_date_time-vc100-mt-1_49.lib libboost_date_time-vc100-mt-gd-1_49.lib libboost_filesystem-vc100-mt-1_49.lib libboost_filesystem-vc100-mt-gd-1_49.lib libboost_iostreams-vc100-mt-1_49.lib libboost_iostreams-vc100-mt-gd-1_49.lib libboost_serialization-vc100-mt-1_49.lib libboost_serialization-vc100-mt-gd-1_49.lib libboost_system-vc100-mt-1_49.lib libboost_system-vc100-mt-gd-1_49.lib libboost_thread-vc100-mt-1_49.lib libboost_thread-vc100-mt-gd-1_49.lib libboost_wserialization-vc100-mt-1_49.lib libboost_wserialization-vc100-mt-gd-1_49.lib openNI.lib OpenNI.jni.lib NiSampleModule.lib NiSampleExtensionModule.lib vtkalglib-gd.lib vtkCharts-gd.lib vtkCommon-gd.lib 
vtkDICOMParser-gd.lib vtkexoIIc-gd.lib vtkexpat-gd.lib vtkFiltering-gd.lib vtkfreetype-gd.lib vtkftgl-gd.lib vtkGenericFiltering-gd.lib vtkGeovis-gd.lib vtkGraphics-gd.lib vtkhdf5-gd.lib vtkHybrid-gd.lib vtkImaging-gd.lib vtkInfovis-gd.lib vtkIO-gd.lib vtkjpeg-gd.lib vtklibxml2-gd.lib vtkmetaio-gd.lib vtkNetCDF-gd.lib vtkNetCDF_cxx-gd.lib vtkpng-gd.lib vtkproj4-gd.lib vtkRendering-gd.lib vtksqlite-gd.lib vtksys-gd.lib vtktiff-gd.lib vtkverdict-gd.lib vtkViews-gd.lib vtkVolumeRendering-gd.lib vtkWidgets-gd.lib vtkzlib-gd.lib

Executable Directories:
E:\QQDownload\PCL1_6\PCL 1.6.0\bin;E:\opencv_c\install\x86\vc10\bin;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\VTK\bin;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Qhull\bin;E:\QQDownload\OpenNI\Bin;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\FLANN\bin;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Eigen\bin;$(ExecutablePath)

Include Directories:
E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\VTK\include\vtk-5.8;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Qhull\include;E:\QQDownload\OpenNI\Include;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\FLANN\include;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Eigen\include;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Boost\include;E:\QQDownload\PCL1_6\PCL 1.6.0\include\pcl-1.6;E:\opencv_c\install\include;$(IncludePath)

Library Directories:
E:\QQDownload\OpenNI\Lib;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\VTK\lib\vtk-5.8;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Qhull\lib;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\FLANN\lib;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Boost\lib;E:\QQDownload\PCL1_6\PCL 1.6.0\lib;E:\opencv_c\install\x86\vc10\staticlib;E:\opencv_c\install\x86\vc10\lib;$(LibraryPath)

VS2010, Release mode, with OpenNI, OpenCV, VTK, Boost, PCL... all built with the 2010 compiler, 32-bit. Running the Release build succeeded.
g2o
g2o_cli.lib g2o_core.lib g2o_csparse_extension.lib g2o_ext_csparse.lib g2o_ext_freeglut_minimal.lib g2o_interface.lib g2o_opengl_helper.lib g2o_parser.lib g2o_simulator.lib g2o_solver_csparse.lib g2o_solver_dense.lib g2o_solver_pcg.lib g2o_solver_slam2d_linear.lib g2o_solver_structure_only.lib g2o_stuff.lib g2o_types_data.lib g2o_types_icp.lib g2o_types_sba.lib g2o_types_sclam2d.lib g2o_types_sim3.lib g2o_types_slam2d.lib g2o_types_slam2d_addons.lib g2o_types_slam3d.lib g2o_types_slam3d_addons.lib g2o_viewer.lib
Reposted from: https://www.cnblogs.com/2008nmj/p/6341410.html
總結(jié)