Camera Calibration with OpenCV
1. Calibration with OpenCV
Cameras have been around for a long while. However, with the introduction of cheap pinhole cameras in the late 20th century, they have become a common occurrence in our everyday lives. Unfortunately, this cheapness comes with a price: significant distortion. Luckily, these distortions are constant, and with calibration and some remapping we can correct them. Furthermore, with calibration you can also determine the relation between the camera's pixels and real-world units such as millimeters.
Principle:

For lens distortion, OpenCV takes into account the radial and tangential factors.

For the radial distortion the following formulas are used:

$$x_{corrected} = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$
$$y_{corrected} = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$

So for an old pixel point (x, y) in the input image, its position in the corrected output image will be (x_corrected, y_corrected). The presence of radial distortion manifests itself as a "barrel" or "fish-eye" effect.

Tangential distortion occurs because the lens is not perfectly parallel to the imaging plane. It can be corrected via the formulas:

$$x_{corrected} = x + [2 p_1 x y + p_2 (r^2 + 2 x^2)]$$
$$y_{corrected} = y + [p_1 (r^2 + 2 y^2) + 2 p_2 x y]$$

So we have five distortion parameters, which in OpenCV are presented as a matrix with one row and five columns:

$$distortion\_coefficients = (k_1 \quad k_2 \quad p_1 \quad p_2 \quad k_3)$$

Now for the unit conversion we use the following formula:

$$\begin{bmatrix} x \\ y \\ w \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$

Here the presence of w is explained by the use of a homography coordinate system (and w = Z). The unknown parameters are fx and fy (the camera focal lengths) and (cx, cy), the optical center expressed in pixel coordinates. If for both axes a common focal length is used with a given aspect ratio a (usually 1), then fy = fx * a, and in the formula above we have a single focal length f, i.e. fx = fy = f. The matrix containing these four parameters is referred to as the camera matrix. While the distortion coefficients are the same regardless of the camera resolution used, the fx, fy, cx and cy values of the camera matrix must be scaled along with the current resolution from the calibrated resolution.
The process of determining these two matrices is the calibration. These parameters are computed through basic geometrical equations; the equations used depend on the chosen calibration object. Currently OpenCV supports three types of objects for calibration:

- Classical black-white chessboard
- Symmetrical circle pattern
- Asymmetrical circle pattern

Basically, you need to take snapshots of these patterns with your camera and let OpenCV find them. Each found pattern results in a new equation. To solve the equations you need at least a predetermined number of pattern snapshots to form a well-posed equation system. This number is higher for the chessboard pattern and lower for the circle ones. For example, in theory the chessboard pattern requires at least two snapshots. However, in practice our input images contain a fair amount of noise, so for good results you will probably need at least 10 good snapshots of the pattern from different positions.
Goal:

The sample application will:

- Determine the distortion matrix
- Determine the camera matrix
- Take input from a camera, a video file or an image file list
- Read the configuration from a file
- Save the results into an XML/YAML file
- Calculate the re-projection error
Source code:

You may also find the source code in the samples/cpp/tutorial_code/calib3d/camera_calibration/ folder of the OpenCV source library or download it from here.
The program has a single argument: the name of its configuration file. If none is given then it will try to open the one named "default.xml". Here's a sample configuration file in XML format.
In the configuration file you may choose to use a camera as an input, a video file or an image list. If you opt for the last one, you will need to create a configuration file where you enumerate the images to use. Here's an example of this. The important part to remember is that the images need to be specified using the absolute path or the relative one from your application's working directory. You may find all this in the samples directory mentioned above.
The application starts up by reading the settings from the configuration file. Although this is an important part of it, it has nothing to do with the subject of this tutorial: camera calibration. Therefore, I've chosen not to post the code for that part here. Technical background on how to do this can be found in the File Input and Output using XML and YAML files tutorial.
Explanation:

1. Read the settings.

```cpp
Settings s;
const string inputSettingsFile = argc > 1 ? argv[1] : "default.xml";
FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settings
if (!fs.isOpened())
{
    cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
    return -1;
}
fs["Settings"] >> s;
fs.release();                                         // close Settings file

if (!s.goodInput)
{
    cout << "Invalid input detected. Application stopping. " << endl;
    return -1;
}
```

Here `Settings` is the settings class.
For this I've used a simple OpenCV class input operation. After reading the file there is an additional post-processing function that checks the validity of the input. Only if all inputs are good will the goodInput variable be true.
Get the next input, and if it fails or we have enough of them, calibrate. After this we have a big loop where we do the following operations: get the next image from the image list, camera or video file. If this fails or we have enough images, we run the calibration process. In the case of an image list we step out of the loop; otherwise the remaining frames will be undistorted (if the option is set) by changing from DETECTION mode to CALIBRATED.
```cpp
for(int i = 0;;++i)
{
    Mat view;
    bool blinkOutput = false;

    view = s.nextImage();

    //-----  If no more image, or got enough, then stop calibration and show result -------------
    if( mode == CAPTURING && imagePoints.size() >= (unsigned)s.nrFrames )
    {
        if( runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints))
            mode = CALIBRATED;
        else
            mode = DETECTION;
    }
    if(view.empty())          // If no more images then run calibration, save and stop loop.
    {
        if( imagePoints.size() > 0 )
            runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints);
        break;
    }
    imageSize = view.size();  // Format input image.
    if( s.flipVertical )    flip( view, view, 0 );
```

For some cameras we may need to flip the input image. Here we do this too.
Find the pattern in the current input. The formation of the equations I mentioned above aims to find major patterns in the input: in the case of the chessboard these are the corners of the squares, and for the circles, well, the circles themselves. The positions of these form the result, which is written into the pointBuf vector.
```cpp
vector<Point2f> pointBuf;

bool found;
switch( s.calibrationPattern ) // Find feature points on the input format
{
case Settings::CHESSBOARD:
    found = findChessboardCorners( view, s.boardSize, pointBuf,
        CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FAST_CHECK | CV_CALIB_CB_NORMALIZE_IMAGE);
    break;
case Settings::CIRCLES_GRID:
    found = findCirclesGrid( view, s.boardSize, pointBuf );
    break;
case Settings::ASYMMETRIC_CIRCLES_GRID:
    found = findCirclesGrid( view, s.boardSize, pointBuf, CALIB_CB_ASYMMETRIC_GRID );
    break;
}
```

Depending on the type of the input pattern you use either the findChessboardCorners or the findCirclesGrid function. For both of them you pass the current image and the size of the board, and you'll get the positions of the patterns. Furthermore, they return a boolean variable which states whether the pattern was found in the input (we only need to take into account those images where this is true!).
Then, again in the case of cameras, we only take camera images after an input delay time has passed. This is done to allow the user to move the chessboard around and acquire different images. Similar images result in similar equations, and similar equations at the calibration step will form an ill-posed problem, so the calibration will fail. For square images the positions of the corners are only approximate. We may improve this by calling the cornerSubPix function, which produces a better calibration result. After this we add a valid input's result to the imagePoints vector, collecting all the equations into a single container. Finally, for visualization feedback purposes, we draw the found points on the input image using the drawChessboardCorners function.
```cpp
if ( found)                // If done with success,
{
    // improve the found corners' coordinate accuracy for chessboard
    if( s.calibrationPattern == Settings::CHESSBOARD)
    {
        Mat viewGray;
        cvtColor(view, viewGray, CV_BGR2GRAY);
        cornerSubPix( viewGray, pointBuf, Size(11,11),
            Size(-1,-1), TermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1 ));
    }

    if( mode == CAPTURING &&  // For camera only take new samples after delay time
        (!s.inputCapture.isOpened() || clock() - prevTimestamp > s.delay*1e-3*CLOCKS_PER_SEC) )
    {
        imagePoints.push_back(pointBuf);
        prevTimestamp = clock();
        blinkOutput = s.inputCapture.isOpened();
    }

    // Draw the corners.
    drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
}
```

Show the state and result to the user, plus command line control of the application. This part shows text output on the image.
```cpp
//----------------------------- Output Text ------------------------------------------------
string msg = (mode == CAPTURING) ? "100/100" :
              mode == CALIBRATED ? "Calibrated" : "Press 'g' to start";
int baseLine = 0;
Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);
Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);

if( mode == CAPTURING )
{
    if(s.showUndistorsed)
        msg = format( "%d/%d Undist", (int)imagePoints.size(), s.nrFrames );
    else
        msg = format( "%d/%d", (int)imagePoints.size(), s.nrFrames );
}

putText( view, msg, textOrigin, 1, 1, mode == CALIBRATED ?  GREEN : RED);

if( blinkOutput )
    bitwise_not(view, view);
```

If we ran the calibration and got the camera matrix with the distortion coefficients, we may want to correct the image using the undistort function:
```cpp
//------------------------- Video capture  output  undistorted ------------------------------
if( mode == CALIBRATED && s.showUndistorsed )
{
    Mat temp = view.clone();
    undistort(temp, view, cameraMatrix, distCoeffs);
}
//------------------------------ Show image and check for input commands -------------------
imshow("Image View", view);
```

Then we wait for an input key: if it is u we toggle the distortion removal, if it is g we restart the detection process, and finally for the ESC key we quit the application:
```cpp
char key = waitKey(s.inputCapture.isOpened() ? 50 : s.delay);
if( key == ESC_KEY )
    break;

if( key == 'u' && mode == CALIBRATED )
    s.showUndistorsed = !s.showUndistorsed;

if( s.inputCapture.isOpened() && key == 'g' )
{
    mode = CAPTURING;
    imagePoints.clear();
}
```

Show the distortion removal for the images too. When you work with an image list it is not possible to remove the distortion inside the loop. Therefore, you must do this after the loop. Taking advantage of this, I'll now expand the undistort function, which in fact first calls initUndistortRectifyMap to find the transformation matrices and then performs the transformation using the remap function. Because after a successful calibration the map calculation needs to be done only once, using this expanded form may speed up your application:
```cpp
if( s.inputType == Settings::IMAGE_LIST && s.showUndistorsed )
{
    Mat view, rview, map1, map2;
    initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
        getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
        imageSize, CV_16SC2, map1, map2);

    for(int i = 0; i < (int)s.imageList.size(); i++ )
    {
        view = imread(s.imageList[i], 1);
        if(view.empty())
            continue;
        remap(view, rview, map1, map2, INTER_LINEAR);
        imshow("Image View", rview);
        char c = waitKey();
        if( c == ESC_KEY || c == 'q' || c == 'Q' )
            break;
    }
}
```
Calibration and save:

Because the calibration needs to be done only once per camera, it makes sense to save it after a successful calibration. This way you can later just load these values into your program. Due to this we first make the calibration, and if it succeeds we save the result into an OpenCV-style XML or YAML file, depending on the extension you give in the configuration file.
Therefore in the first function we just split up these two processes. Because we want to save many of the calibration variables, we create these variables here and pass both of them on to the calibration and saving functions. Again, I'll not show the saving part, as that has little in common with the calibration. Explore the source file in order to find out how and what:
```cpp
bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                           vector<vector<Point2f> > imagePoints )
{
    vector<Mat> rvecs, tvecs;
    vector<float> reprojErrs;
    double totalAvgErr = 0;

    bool ok = runCalibration(s, imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs,
                             reprojErrs, totalAvgErr);
    cout << (ok ? "Calibration succeeded" : "Calibration failed")
         << ". avg re projection error = " << totalAvgErr ;

    if( ok )   // save only if the calibration was done with success
        saveCameraParams( s, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, reprojErrs,
                          imagePoints, totalAvgErr);
    return ok;
}
```

We do the calibration with the help of the calibrateCamera function. It has the following parameters:
- The object points. This is a vector of Point3f vectors that for each input image describes how the pattern should look. If we have a planar pattern (like a chessboard) then we can simply set all Z coordinates to zero. This is a collection of the points where these important points are present. Because we use a single pattern for all the input images, we can calculate this just once and multiply it for all the other input views. We calculate the corner points with the calcBoardCornerPositions function as:
```cpp
void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
                              Settings::Pattern patternType /*= Settings::CHESSBOARD*/)
{
    corners.clear();

    switch(patternType)
    {
    case Settings::CHESSBOARD:
    case Settings::CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; ++i )
            for( int j = 0; j < boardSize.width; ++j )
                corners.push_back(Point3f(float( j*squareSize ), float( i*squareSize ), 0));
        break;

    case Settings::ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(Point3f(float((2*j + i % 2)*squareSize), float(i*squareSize), 0));
        break;
    }
}
```

And then multiply it as:
```cpp
vector<vector<Point3f> > objectPoints(1);
calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);
objectPoints.resize(imagePoints.size(), objectPoints[0]);
```

- The image points. This is a vector of Point2f vectors which for each input image contains the coordinates of the important points (corners for the chessboard and centers of the circles for the circle patterns). We have already collected this from the findChessboardCorners or findCirclesGrid function. We just need to pass it on.
- The size of the image acquired from the camera, video file or the images.
- The camera matrix. If we used the fixed aspect ratio option, we need to set fx (the code fixes it to 1.0, so only the fx/fy ratio is held):
```cpp
cameraMatrix = Mat::eye(3, 3, CV_64F);
if( s.flag & CV_CALIB_FIX_ASPECT_RATIO )
    cameraMatrix.at<double>(0,0) = 1.0;
```

- The distortion coefficient matrix. Initialize with zero.
```cpp
distCoeffs = Mat::zeros(8, 1, CV_64F);
```

For all the views the function will calculate rotation and translation vectors, which transform the object points (given in the model coordinate space) to the image points (given in the world coordinate space). The 7th and 8th parameters are the output vectors of matrices containing, in the i-th position, the rotation and translation vector for the i-th object point to the i-th image point.
- The final argument is the flag. You need to specify here options like fixing the aspect ratio for the focal length, assuming zero tangential distortion, or fixing the principal point.
```cpp
double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,
                             distCoeffs, rvecs, tvecs, s.flag|CV_CALIB_FIX_K4|CV_CALIB_FIX_K5);
```

- The function returns the average re-projection error. This number gives a good estimation of the precision of the found parameters and should be as close to zero as possible. Given the intrinsic, distortion, rotation and translation matrices, we may calculate the error for one view by using projectPoints to first transform the object points to image points. Then we calculate the absolute norm between what we got with our transformation and the corner/circle finding algorithm. To find the average error we calculate the arithmetical mean of the errors calculated for all the calibration images.
```cpp
double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,
                                  const vector<vector<Point2f> >& imagePoints,
                                  const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                                  const Mat& cameraMatrix , const Mat& distCoeffs,
                                  vector<float>& perViewErrors)
{
    vector<Point2f> imagePoints2;
    int i, totalPoints = 0;
    double totalErr = 0, err;
    perViewErrors.resize(objectPoints.size());

    for( i = 0; i < (int)objectPoints.size(); ++i )
    {
        projectPoints( Mat(objectPoints[i]), rvecs[i], tvecs[i], cameraMatrix,  // project
                       distCoeffs, imagePoints2);
        err = norm(Mat(imagePoints[i]), Mat(imagePoints2), CV_L2);              // difference

        int n = (int)objectPoints[i].size();
        perViewErrors[i] = (float) std::sqrt(err*err/n);                        // save for this view
        totalErr        += err*err;                                             // sum it up
        totalPoints     += n;
    }

    return std::sqrt(totalErr/totalPoints);              // calculate the arithmetical mean
}
```
Results:

Let there be this input chessboard pattern, which has a size of 9 x 6. I've used an AXIS IP camera to create a couple of snapshots of the board and saved them into a VID5 directory. I've put this inside the images/CameraCalibration folder of my working directory and created the following VID5.XML file that describes which images to use:
```xml
<?xml version="1.0"?>
<opencv_storage>
<images>
images/CameraCalibration/VID5/xx1.jpg
images/CameraCalibration/VID5/xx2.jpg
images/CameraCalibration/VID5/xx3.jpg
images/CameraCalibration/VID5/xx4.jpg
images/CameraCalibration/VID5/xx5.jpg
images/CameraCalibration/VID5/xx6.jpg
images/CameraCalibration/VID5/xx7.jpg
images/CameraCalibration/VID5/xx8.jpg
</images>
</opencv_storage>
```

Then I passed images/CameraCalibration/VID5/VID5.XML as an input in the configuration file. Here's a chessboard pattern found during the runtime of the application:
After applying the distortion removal we get:
The same works for this asymmetrical circle pattern by setting the input width to 4 and the height to 11. This time I've used a live camera feed by specifying its ID ("1") for the input. Here's how a detected pattern should look:
In both cases, in the specified output XML/YAML file you'll find the camera and distortion coefficients matrices:
```xml
<Camera_Matrix type_id="opencv-matrix">
  <rows>3</rows>
  <cols>3</cols>
  <dt>d</dt>
  <data>6.5746697944293521e+002 0. 3.1950000000000000e+002
        0. 6.5746697944293521e+002 2.3950000000000000e+002
        0. 0. 1.</data>
</Camera_Matrix>
<Distortion_Coefficients type_id="opencv-matrix">
  <rows>5</rows>
  <cols>1</cols>
  <dt>d</dt>
  <data>-4.1802327176423804e-001 5.0715244063187526e-001 0. 0.
        -5.7843597214487474e-001</data>
</Distortion_Coefficients>
```

Add these values as constants to your program, call the initUndistortRectifyMap and remap functions to remove the distortion, and enjoy distortion-free inputs from cheap and low-quality cameras.
>> Original tutorial:
http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
It finally works. Notes: if VS2008 does not work, try VS2010, since the compiler must match the one your libraries were built with. If Debug does not work, try Release. In short, try a few combinations.
Detailed walkthrough:

Put the files in the Debug folder. Here in_VID5.xml holds the input parameters:

BoardSize_Width is the number of inner corners along the width and BoardSize_Height the number along the height. Square_Size is the size of one square in a user-defined coordinate system (usually its real size in mm).
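For reference, the shape of in_VID5.xml is roughly as follows. The values are examples matching the run described below, and the exact field set is defined by the sample's Settings class, so treat this as a sketch rather than a verbatim copy:

```xml
<?xml version="1.0"?>
<opencv_storage>
<Settings>
  <!-- Number of inner corners per row and column -->
  <BoardSize_Width>7</BoardSize_Width>
  <BoardSize_Height>7</BoardSize_Height>
  <!-- Size of one square in user-defined units (e.g. mm) -->
  <Square_Size>40</Square_Size>
  <Calibrate_Pattern>"CHESSBOARD"</Calibrate_Pattern>
  <!-- Camera ID, a video file, or an image-list XML such as VID5.xml -->
  <Input>"images/CameraCalibration/VID5/VID5.xml"</Input>
</Settings>
</opencv_storage>
```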
VID5.xml is the image index.

Final output:

Six frames in total, 7 corners wide, 7 corners high, square size 40, fixed aspect ratio 1, followed by the camera matrix, the distortion coefficients and the average re-projection error.

From the output we can read fx = 4.85*10^2, cx = 3.195*10^2, fy = 4.85*10^2, cy = 1.795*10^2, and k1 = -1.964*10^(-2), k2 = -1.45*10^(-1), k3 = 4.856*10^(-1).
The above used the sample program shipped with OpenCV 3.1. You can also refer to the OpenCV 2.4.6 version (a modified opencv calibration program), but running it on OpenCV 3.1 produces memory errors.
>> Further reading

1. The "Multi-Baseline Close-Range Photogrammetry" software, camera calibration X: reports "invalid photo"
2. The "Photomodeler Scanner" software: camera calibrate
3. C++ photogrammetry, depending on OpenCV
Dependencies:
opengl32.lib
glu32.lib
glaux.lib
cximagecrtd.lib
cv.lib
highgui.lib
cxcore.lib
BLASd.lib
clapackd.lib
libf2cd.lib
tmglibd.lib
libumfpack.lib
libamd.lib
UI design (MFC UI library):
1. Create a new project and choose Add Files.
Name it "H"; a project folder named H is created on drive D.
Choose the default image group [0] and click OK to load the image group.
The image group consists of 34 images: calibration images 0-7 and capture images 0-25.
Click marker-circle detection.
This completes the marker-circle detection; the marker circles are a self-made calibration target.
After successful detection, each image's marker circles are marked with a red cross. The number in the second column is the count of marker circles in each image; some circles are visibly not detected correctly.
Then click camera calibration to compute the camera's interior and exterior orientation elements.
Then clicking the "multi-view reconstruction" button produces an error.
4. Halcon
Zhang Zhengyou's camera calibration method
5. Matlab
Question 1: do initial values need to be set?
No: initial values for the camera's iterative parameters do not need to be set.
Additional dependencies:
opencv_calib3d310d.lib
opencv_core310d.lib
opencv_features2d310d.lib
opencv_flann310d.lib
opencv_highgui310d.lib
opencv_imgcodecs310d.lib
opencv_imgproc310d.lib
opencv_ml310d.lib
opencv_objdetect310d.lib
opencv_photo310d.lib
opencv_shape310d.lib
opencv_stitching310d.lib
opencv_superres310d.lib
opencv_ts310d.lib
opencv_video310d.lib
opencv_videoio310d.lib
opencv_videostab310d.lib
Full reference list:
opencv_calib3d310d.lib opencv_core310d.lib opencv_features2d310d.lib opencv_flann310d.lib opencv_highgui310d.lib opencv_imgcodecs310d.lib opencv_imgproc310d.lib opencv_ml310d.lib opencv_objdetect310d.lib opencv_photo310d.lib opencv_shape310d.lib opencv_stitching310d.lib opencv_superres310d.lib opencv_ts310d.lib opencv_video310d.lib opencv_videoio310d.lib opencv_videostab310d.lib pcl_kdtree_debug.lib pcl_io_debug.lib pcl_search_debug.lib pcl_segmentation_debug.lib pcl_apps_debug.lib pcl_features_debug.lib pcl_filters_debug.lib pcl_visualization_debug.lib pcl_common_debug.lib pcl_kdtree_release.lib pcl_io_release.lib pcl_search_release.lib pcl_segmentation_release.lib pcl_apps_release.lib pcl_features_release.lib pcl_filters_release.lib pcl_visualization_release.lib pcl_common_release.lib flann_cpp_s-gd.lib boost_date_time-vc100-mt-1_49.lib boost_date_time-vc100-mt-gd-1_49.lib boost_filesystem-vc100-mt-1_49.lib boost_filesystem-vc100-mt-gd-1_49.lib boost_iostreams-vc100-mt-1_49.lib boost_iostreams-vc100-mt-gd-1_49.lib boost_serialization-vc100-mt-1_49.lib boost_serialization-vc100-mt-gd-1_49.lib boost_system-vc100-mt-1_49.lib boost_system-vc100-mt-gd-1_49.lib boost_thread-vc100-mt-1_49.lib boost_thread-vc100-mt-gd-1_49.lib boost_wserialization-vc100-mt-1_49.lib boost_wserialization-vc100-mt-gd-1_49.lib libboost_date_time-vc100-mt-1_49.lib libboost_date_time-vc100-mt-gd-1_49.lib libboost_filesystem-vc100-mt-1_49.lib libboost_filesystem-vc100-mt-gd-1_49.lib libboost_iostreams-vc100-mt-1_49.lib libboost_iostreams-vc100-mt-gd-1_49.lib libboost_serialization-vc100-mt-1_49.lib libboost_serialization-vc100-mt-gd-1_49.lib libboost_system-vc100-mt-1_49.lib libboost_system-vc100-mt-gd-1_49.lib libboost_thread-vc100-mt-1_49.lib libboost_thread-vc100-mt-gd-1_49.lib libboost_wserialization-vc100-mt-1_49.lib libboost_wserialization-vc100-mt-gd-1_49.lib openNI.lib OpenNI.jni.lib NiSampleModule.lib NiSampleExtensionModule.lib vtkalglib-gd.lib vtkCharts-gd.lib vtkCommon-gd.lib 
vtkDICOMParser-gd.lib vtkexoIIc-gd.lib vtkexpat-gd.lib vtkFiltering-gd.lib vtkfreetype-gd.lib vtkftgl-gd.lib vtkGenericFiltering-gd.lib vtkGeovis-gd.lib vtkGraphics-gd.lib vtkhdf5-gd.lib vtkHybrid-gd.lib vtkImaging-gd.lib vtkInfovis-gd.lib vtkIO-gd.lib vtkjpeg-gd.lib vtklibxml2-gd.lib vtkmetaio-gd.lib vtkNetCDF-gd.lib vtkNetCDF_cxx-gd.lib vtkpng-gd.lib vtkproj4-gd.lib vtkRendering-gd.lib vtksqlite-gd.lib vtksys-gd.lib vtktiff-gd.lib vtkverdict-gd.lib vtkViews-gd.lib vtkVolumeRendering-gd.lib vtkWidgets-gd.lib vtkzlib-gd.lib

Executable Directories:
E:\QQDownload\PCL1_6\PCL 1.6.0\bin;E:\opencv_c\install\x86\vc10\bin;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\VTK\bin;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Qhull\bin;E:\QQDownload\OpenNI\Bin;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\FLANN\bin;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Eigen\bin;$(ExecutablePath)

Include Directories:
E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\VTK\include\vtk-5.8;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Qhull\include;E:\QQDownload\OpenNI\Include;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\FLANN\include;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Eigen\include;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Boost\include;E:\QQDownload\PCL1_6\PCL 1.6.0\include\pcl-1.6;E:\opencv_c\install\include;$(IncludePath)

Library Directories:
E:\QQDownload\OpenNI\Lib;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\VTK\lib\vtk-5.8;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Qhull\lib;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\FLANN\lib;E:\QQDownload\PCL1_6\PCL 1.6.0\3rdParty\Boost\lib;E:\QQDownload\PCL1_6\PCL 1.6.0\lib;E:\opencv_c\install\x86\vc10\staticlib;E:\opencv_c\install\x86\vc10\lib;$(LibraryPath)

VS2010, Release mode, with OpenNI, OpenCV, VTK, Boost, PCL..., all built with the VS2010 compiler, 32-bit. The Release build ran successfully.
g2o
g2o_cli.lib g2o_core.lib g2o_csparse_extension.lib g2o_ext_csparse.lib g2o_ext_freeglut_minimal.lib g2o_interface.lib g2o_opengl_helper.lib g2o_parser.lib g2o_simulator.lib g2o_solver_csparse.lib g2o_solver_dense.lib g2o_solver_pcg.lib g2o_solver_slam2d_linear.lib g2o_solver_structure_only.lib g2o_stuff.lib g2o_types_data.lib g2o_types_icp.lib g2o_types_sba.lib g2o_types_sclam2d.lib g2o_types_sim3.lib g2o_types_slam2d.lib g2o_types_slam2d_addons.lib g2o_types_slam3d.lib g2o_types_slam3d_addons.lib g2o_viewer.lib
Reposted from: https://www.cnblogs.com/2008nmj/p/6341410.html