Cameras have been around for a long time. With the introduction of cheap pinhole cameras in the late twentieth century, they became commonplace in everyday life. Although cheap, such cameras produce images with significant distortion. Fortunately, these distortions are constant, and with calibration and remapping we can correct them. Furthermore, with calibration you can also determine the relation between the camera's natural units (pixels) and real-world units (for example, millimeters).

Theory

For distortion, OpenCV takes into account radial and tangential factors. The radial distortion factor uses the following formula:

\[
x_{distorted} = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\
y_{distorted} = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
\]

So for an undistorted pixel at coordinates (x, y), its position in the distorted image is (x_distorted, y_distorted). Radial distortion typically manifests as a "barrel" or "fish-eye" effect.

Tangential distortion occurs because the lens that takes the image is not perfectly parallel to the image plane. It can be represented by the formulas:

\[
x_{distorted} = x + [2 p_1 x y + p_2 (r^2 + 2 x^2)] \\
y_{distorted} = y + [p_1 (r^2 + 2 y^2) + 2 p_2 x y]
\]

So OpenCV has five distortion parameters, usually presented as a row matrix with five elements:

\[
\text{distortion\_coefficients} = (k_1 \quad k_2 \quad p_1 \quad p_2 \quad k_3)
\]
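
As a minimal sketch (not part of the OpenCV sample listed below), the two distortion formulas can be applied to a normalized image point like this; the helper name and the coefficient layout are just for illustration:

// Apply the radial + tangential distortion model to a normalized point (x, y),
// i.e. a point already divided by Z and expressed relative to the optical axis.
// k[0..2] = k1, k2, k3   and   p[0..1] = p1, p2.
static void distortPoint(double x, double y, const double k[3], const double p[2],
                         double& xd, double& yd)
{
    const double r2 = x*x + y*y;
    const double r4 = r2*r2, r6 = r4*r2;
    const double radial = 1.0 + k[0]*r2 + k[1]*r4 + k[2]*r6;
    xd = x*radial + 2.0*p[0]*x*y + p[1]*(r2 + 2.0*x*x);   // x_distorted
    yd = y*radial + p[0]*(r2 + 2.0*y*y) + 2.0*p[1]*x*y;   // y_distorted
}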

Now, for the unit conversion we use the following formula:

\[
\begin{bmatrix} x \\ y \\ w \end{bmatrix} =
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
\]

The presence of w here is explained by the use of the homogeneous coordinate system (with w = Z). The unknown parameters are f_x and f_y (the camera focal lengths) and (c_x, c_y), the optical center expressed in pixel coordinates. If a common focal length is used for both axes with a given aspect ratio a (usually 1), then f_y = f_x * a and the formula above contains a single focal length f. The matrix containing these four parameters is referred to as the camera matrix (the camera intrinsics). While the distortion coefficients are the same regardless of the camera resolution used, the camera matrix should be scaled along with the current resolution from the calibrated resolution.
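
The projection above can be written out directly; the following illustrative helper (not from the original tutorial) maps a 3D point given in camera coordinates to pixel coordinates using a cv::Matx33d camera matrix:

#include <opencv2/core.hpp>

// Project a 3D point given in camera coordinates to pixel coordinates
// using the pinhole model  [x y w]^T = K [X Y Z]^T,  u = x/w, v = y/w.
cv::Point2d projectWithIntrinsics(const cv::Point3d& P, const cv::Matx33d& K)
{
    cv::Vec3d xyw = K * cv::Vec3d(P.x, P.y, P.z);   // homogeneous pixel coordinates, w == Z
    return cv::Point2d(xyw[0] / xyw[2], xyw[1] / xyw[2]);
}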

The process of determining these two matrices is the calibration. These parameters are computed from basic geometrical equations; the equations used depend on the chosen calibration object. Currently OpenCV supports three types of objects for calibration:

  • Classical black-white chessboard
  • Symmetrical circle pattern
  • Asymmetrical circle pattern

First, you need to take snapshots of these patterns with the camera to be calibrated and let OpenCV find them. Each recognized pattern yields a new set of equations. You need at least a predetermined number of pattern snapshots to form a well-posed equation system; the chessboard pattern requires more snapshots, while the circle patterns need fewer. In theory, for example, the chessboard pattern requires only two snapshots. In practice, however, experience shows that you should take at least 10 snapshots of the pattern in different positions to get a good calibration result.
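
As a rough, standalone illustration (not part of the official sample that follows), detecting the pattern in a single snapshot comes down to one call to cv::findChessboardCorners, optionally refined with cv::cornerSubPix; the image path and board size below are placeholders:

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

bool detectChessboard()
{
    cv::Mat view = cv::imread("snapshot.jpg");            // hypothetical example image
    std::vector<cv::Point2f> corners;                     // one detected pattern = one set of corners
    cv::Size boardSize(9, 6);                             // inner corners per row and column
    bool found = cv::findChessboardCorners(view, boardSize, corners,
                     cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
    if (found)
    {
        // refine the corner locations to sub-pixel accuracy
        cv::Mat gray;
        cv::cvtColor(view, gray, cv::COLOR_BGR2GRAY);
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));
    }
    return found;   // each successful detection contributes one equation set to the calibration
}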

Goal:

The sample demonstrates how to:

  • Determine the distortion matrix
  • Determine the camera matrix
  • Take input from a camera, a video, or an image file list
  • Read configuration from an XML/YAML file
  • Save the results into an XML/YAML file
  • Calculate the re-projection error

Source code

#include <iostream>
#include <sstream>
#include <string>
#include <ctime>
#include <cstdio>

#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;
using namespace std;

static void help()
{
    cout <<  "This is a camera calibration sample." << endl
         <<  "Usage: camera_calibration [configuration_file -- default ./default.xml]"  << endl
         <<  "Near the sample file you'll find the configuration file, which has detailed help of "
             "how to edit it.  It may be any OpenCV supported file format XML/YAML." << endl;
}
class Settings
{
public:
    Settings() : goodInput(false) {}
    enum Pattern { NOT_EXISTING, CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
    enum InputType { INVALID, CAMERA, VIDEO_FILE, IMAGE_LIST };

    void write(FileStorage& fs) const                        //Write serialization for this class
    {
        fs << "{"
           << "BoardSize_Width"  << boardSize.width
           << "BoardSize_Height" << boardSize.height
           << "Square_Size"         << squareSize
           << "Calibrate_Pattern" << patternToUse
           << "Calibrate_NrOfFrameToUse" << nrFrames
           << "Calibrate_FixAspectRatio" << aspectRatio
           << "Calibrate_AssumeZeroTangentialDistortion" << calibZeroTangentDist
           << "Calibrate_FixPrincipalPointAtTheCenter" << calibFixPrincipalPoint
           << "Write_DetectedFeaturePoints" << writePoints
           << "Write_extrinsicParameters"   << writeExtrinsics
           << "Write_outputFileName"  << outputFileName
           << "Show_UndistortedImage" << showUndistorsed
           << "Input_FlipAroundHorizontalAxis" << flipVertical
           << "Input_Delay" << delay
           << "Input" << input
           << "}";
    }
    void read(const FileNode& node)                          //Read serialization for this class
    {
        node["BoardSize_Width" ] >> boardSize.width;
        node["BoardSize_Height"] >> boardSize.height;
        node["Calibrate_Pattern"] >> patternToUse;
        node["Square_Size"]  >> squareSize;
        node["Calibrate_NrOfFrameToUse"] >> nrFrames;
        node["Calibrate_FixAspectRatio"] >> aspectRatio;
        node["Write_DetectedFeaturePoints"] >> writePoints;
        node["Write_extrinsicParameters"] >> writeExtrinsics;
        node["Write_outputFileName"] >> outputFileName;
        node["Calibrate_AssumeZeroTangentialDistortion"] >> calibZeroTangentDist;
        node["Calibrate_FixPrincipalPointAtTheCenter"] >> calibFixPrincipalPoint;
        node["Calibrate_UseFisheyeModel"] >> useFisheye;
        node["Input_FlipAroundHorizontalAxis"] >> flipVertical;
        node["Show_UndistortedImage"] >> showUndistorsed;
        node["Input"] >> input;
        node["Input_Delay"] >> delay;
        node["Fix_K1"] >> fixK1;
        node["Fix_K2"] >> fixK2;
        node["Fix_K3"] >> fixK3;
        node["Fix_K4"] >> fixK4;
        node["Fix_K5"] >> fixK5;

        validate();
    }
    void validate()
    {
        goodInput = true;
        if (boardSize.width <= 0 || boardSize.height <= 0)
        {
            cerr << "Invalid Board size: " << boardSize.width << " " << boardSize.height << endl;
            goodInput = false;
        }
        if (squareSize <= 10e-6)
        {
            cerr << "Invalid square size " << squareSize << endl;
            goodInput = false;
        }
        if (nrFrames <= 0)
        {
            cerr << "Invalid number of frames " << nrFrames << endl;
            goodInput = false;
        }

        if (input.empty())      // Check for valid input
            inputType = INVALID;
        else
        {
            if (input[0] >= '0' && input[0] <= '9')
            {
                stringstream ss(input);
                ss >> cameraID;
                inputType = CAMERA;
            }
            else
            {
                if (readStringList(input, imageList))
                {
                    inputType = IMAGE_LIST;
                    nrFrames = (nrFrames < (int)imageList.size()) ? nrFrames : (int)imageList.size();
                }
                else
                    inputType = VIDEO_FILE;
            }
            if (inputType == CAMERA)
                inputCapture.open(cameraID);
            if (inputType == VIDEO_FILE)
                inputCapture.open(input);
            if (inputType != IMAGE_LIST && !inputCapture.isOpened())
                inputType = INVALID;
        }
        if (inputType == INVALID)
        {
            cerr << " Input does not exist: " << input;
            goodInput = false;
        }

        flag = 0;
        if(calibFixPrincipalPoint) flag |= CALIB_FIX_PRINCIPAL_POINT;
        if(calibZeroTangentDist)   flag |= CALIB_ZERO_TANGENT_DIST;
        if(aspectRatio)            flag |= CALIB_FIX_ASPECT_RATIO;
        if(fixK1)                  flag |= CALIB_FIX_K1;
        if(fixK2)                  flag |= CALIB_FIX_K2;
        if(fixK3)                  flag |= CALIB_FIX_K3;
        if(fixK4)                  flag |= CALIB_FIX_K4;
        if(fixK5)                  flag |= CALIB_FIX_K5;

        if (useFisheye) {
            // the fisheye model has its own enum, so overwrite the flags
            flag = fisheye::CALIB_FIX_SKEW | fisheye::CALIB_RECOMPUTE_EXTRINSIC;
            if(fixK1)                  flag |= fisheye::CALIB_FIX_K1;
            if(fixK2)                  flag |= fisheye::CALIB_FIX_K2;
            if(fixK3)                  flag |= fisheye::CALIB_FIX_K3;
            if(fixK4)                  flag |= fisheye::CALIB_FIX_K4;
        }

        calibrationPattern = NOT_EXISTING;
        if (!patternToUse.compare("CHESSBOARD")) calibrationPattern = CHESSBOARD;
        if (!patternToUse.compare("CIRCLES_GRID")) calibrationPattern = CIRCLES_GRID;
        if (!patternToUse.compare("ASYMMETRIC_CIRCLES_GRID")) calibrationPattern = ASYMMETRIC_CIRCLES_GRID;
        if (calibrationPattern == NOT_EXISTING)
        {
            cerr << " Camera calibration mode does not exist: " << patternToUse << endl;
            goodInput = false;
        }
        atImageList = 0;
    }
    Mat nextImage()
    {
        Mat result;
        if( inputCapture.isOpened() )
        {
            Mat view0;
            inputCapture >> view0;
            view0.copyTo(result);
        }
        else if( atImageList < imageList.size() )
            result = imread(imageList[atImageList++], IMREAD_COLOR);

        return result;
    }

    static bool readStringList( const string& filename, vector<string>& l )
    {
        l.clear();
        FileStorage fs(filename, FileStorage::READ);
        if( !fs.isOpened() )
            return false;
        FileNode n = fs.getFirstTopLevelNode();
        if( n.type() != FileNode::SEQ )
            return false;
        FileNodeIterator it = n.begin(), it_end = n.end();
        for( ; it != it_end; ++it )
            l.push_back((string)*it);
        return true;
    }
public:
    Size boardSize;              // The size of the board -> Number of items by width and height
    Pattern calibrationPattern;  // One of the Chessboard, circles, or asymmetric circle pattern
    float squareSize;            // The size of a square in your defined unit (point, millimeter,etc).
    int nrFrames;                // The number of frames to use from the input for calibration
    float aspectRatio;           // The aspect ratio
    int delay;                   // In case of a video input
    bool writePoints;            // Write detected feature points
    bool writeExtrinsics;        // Write extrinsic parameters
    bool calibZeroTangentDist;   // Assume zero tangential distortion
    bool calibFixPrincipalPoint; // Fix the principal point at the center
    bool flipVertical;           // Flip the captured images around the horizontal axis
    string outputFileName;       // The name of the file where to write
    bool showUndistorsed;        // Show undistorted images after calibration
    string input;                // The input ->
    bool useFisheye;             // use fisheye camera model for calibration
    bool fixK1;                  // fix K1 distortion coefficient
    bool fixK2;                  // fix K2 distortion coefficient
    bool fixK3;                  // fix K3 distortion coefficient
    bool fixK4;                  // fix K4 distortion coefficient
    bool fixK5;                  // fix K5 distortion coefficient

    int cameraID;
    vector<string> imageList;
    size_t atImageList;
    VideoCapture inputCapture;
    InputType inputType;
    bool goodInput;
    int flag;

private:
    string patternToUse;
};

static inline void read(const FileNode& node, Settings& x, const Settings& default_value = Settings())
{
    if(node.empty())
        x = default_value;
    else
        x.read(node);
}

static inline void write(FileStorage& fs, const String&, const Settings& s )
{
    s.write(fs);
}

enum { DETECTION = 0, CAPTURING = 1, CALIBRATED = 2 };

bool runCalibrationAndSave(Settings& s, Size imageSize, Mat&  cameraMatrix, Mat& distCoeffs,
                           vector<vector<Point2f> > imagePoints );

int main(int argc, char* argv[])
{
    help();

    //! [file_read]
    Settings s;
    const string inputSettingsFile = argc > 1 ? argv[1] : "default.xml";
    FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settings
    if (!fs.isOpened())
    {
        cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
        return -1;
    }
    fs["Settings"] >> s;
    fs.release();                                         // close Settings file
    //! [file_read]

    //FileStorage fout("settings.yml", FileStorage::WRITE); // write config as YAML
    //fout << "Settings" << s;

    if (!s.goodInput)
    {
        cout << "Invalid input detected. Application stopping. " << endl;
        return -1;
    }

    vector<vector<Point2f> > imagePoints;
    Mat cameraMatrix, distCoeffs;
    Size imageSize;
    int mode = s.inputType == Settings::IMAGE_LIST ? CAPTURING : DETECTION;
    clock_t prevTimestamp = 0;
    const Scalar RED(0,0,255), GREEN(0,255,0);
    const char ESC_KEY = 27;

    //! [get_input]
    for(;;)
    {
        Mat view;
        bool blinkOutput = false;

        view = s.nextImage();

        //-----  If no more image, or got enough, then stop calibration and show result -------------
        if( mode == CAPTURING && imagePoints.size() >= (size_t)s.nrFrames )
        {
            if( runCalibrationAndSave(s, imageSize,  cameraMatrix, distCoeffs, imagePoints))
                mode = CALIBRATED;
            else
                mode = DETECTION;
        }
        if(view.empty())          // If there are no more images stop the loop
        {
            // if calibration threshold was not reached yet, calibrate now
            if( mode != CALIBRATED && !imagePoints.empty() )
                runCalibrationAndSave(s, imageSize,  cameraMatrix, distCoeffs, imagePoints);
            break;
        }
        //! [get_input]

        imageSize = view.size();  // Format input image.
        if( s.flipVertical )    flip( view, view, 0 );

        //! [find_pattern]
        vector<Point2f> pointBuf;

        bool found;

        int chessBoardFlags = CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE;

        if(!s.useFisheye) {
            // fast check erroneously fails with high distortions like fisheye
            chessBoardFlags |= CALIB_CB_FAST_CHECK;
        }

        switch( s.calibrationPattern ) // Find feature points on the input format
        {
        case Settings::CHESSBOARD:
            found = findChessboardCorners( view, s.boardSize, pointBuf, chessBoardFlags);
            break;
        case Settings::CIRCLES_GRID:
            found = findCirclesGrid( view, s.boardSize, pointBuf );
            break;
        case Settings::ASYMMETRIC_CIRCLES_GRID:
            found = findCirclesGrid( view, s.boardSize, pointBuf, CALIB_CB_ASYMMETRIC_GRID );
            break;
        default:
            found = false;
            break;
        }
        //! [find_pattern]
        //! [pattern_found]
        if ( found)                // If done with success,
        {
            // improve the found corners' coordinate accuracy for chessboard
            if( s.calibrationPattern == Settings::CHESSBOARD)
            {
                Mat viewGray;
                cvtColor(view, viewGray, COLOR_BGR2GRAY);
                cornerSubPix( viewGray, pointBuf, Size(11,11),
                    Size(-1,-1), TermCriteria( TermCriteria::EPS+TermCriteria::COUNT, 30, 0.1 ));
            }

            if( mode == CAPTURING &&  // For camera only take new samples after delay time
                (!s.inputCapture.isOpened() || clock() - prevTimestamp > s.delay*1e-3*CLOCKS_PER_SEC) )
            {
                imagePoints.push_back(pointBuf);
                prevTimestamp = clock();
                blinkOutput = s.inputCapture.isOpened();
            }

            // Draw the corners.
            drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
        }
        //! [pattern_found]
        //----------------------------- Output Text ------------------------------------------------
        //! [output_text]
        string msg = (mode == CAPTURING) ? "100/100" :
                      mode == CALIBRATED ? "Calibrated" : "Press 'g' to start";
        int baseLine = 0;
        Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);
        Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);

        if( mode == CAPTURING )
        {
            if(s.showUndistorsed)
                msg = format( "%d/%d Undist", (int)imagePoints.size(), s.nrFrames );
            else
                msg = format( "%d/%d", (int)imagePoints.size(), s.nrFrames );
        }

        putText( view, msg, textOrigin, 1, 1, mode == CALIBRATED ?  GREEN : RED);

        if( blinkOutput )
            bitwise_not(view, view);
        //! [output_text]
        //------------------------- Video capture  output  undistorted ------------------------------
        //! [output_undistorted]
        if( mode == CALIBRATED && s.showUndistorsed )
        {
            Mat temp = view.clone();
            if (s.useFisheye)
                cv::fisheye::undistortImage(temp, view, cameraMatrix, distCoeffs);
            else
                undistort(temp, view, cameraMatrix, distCoeffs);
        }
        //! [output_undistorted]
        //------------------------------ Show image and check for input commands -------------------
        //! [await_input]
        imshow("Image View", view);
        char key = (char)waitKey(s.inputCapture.isOpened() ? 50 : s.delay);

        if( key  == ESC_KEY )
            break;

        if( key == 'u' && mode == CALIBRATED )
            s.showUndistorsed = !s.showUndistorsed;

        if( s.inputCapture.isOpened() && key == 'g' )
        {
            mode = CAPTURING;
            imagePoints.clear();
        }
        //! [await_input]
    }

    // -----------------------Show the undistorted image for the image list ------------------------
    //! [show_results]
    if( s.inputType == Settings::IMAGE_LIST && s.showUndistorsed )
    {
        Mat view, rview, map1, map2;

        if (s.useFisheye)
        {
            Mat newCamMat;
            fisheye::estimateNewCameraMatrixForUndistortRectify(cameraMatrix, distCoeffs, imageSize,
                                                                Matx33d::eye(), newCamMat, 1);
            fisheye::initUndistortRectifyMap(cameraMatrix, distCoeffs, Matx33d::eye(), newCamMat, imageSize,
                                             CV_16SC2, map1, map2);
        }
        else
        {
            initUndistortRectifyMap(
                cameraMatrix, distCoeffs, Mat(),
                getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0), imageSize,
                CV_16SC2, map1, map2);
        }

        for(size_t i = 0; i < s.imageList.size(); i++ )
        {
            view = imread(s.imageList[i], IMREAD_COLOR);
            if(view.empty())
                continue;
            remap(view, rview, map1, map2, INTER_LINEAR);
            imshow("Image View", rview);
            char c = (char)waitKey();
            if( c  == ESC_KEY || c == 'q' || c == 'Q' )
                break;
        }
    }
    //! [show_results]

    return 0;
}

//! [compute_errors]
static double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,
                                         const vector<vector<Point2f> >& imagePoints,
                                         const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                                         const Mat& cameraMatrix , const Mat& distCoeffs,
                                         vector<float>& perViewErrors, bool fisheye)
{
    vector<Point2f> imagePoints2;
    size_t totalPoints = 0;
    double totalErr = 0, err;
    perViewErrors.resize(objectPoints.size());

    for(size_t i = 0; i < objectPoints.size(); ++i )
    {
        if (fisheye)
        {
            fisheye::projectPoints(objectPoints[i], imagePoints2, rvecs[i], tvecs[i], cameraMatrix,
                                   distCoeffs);
        }
        else
        {
            projectPoints(objectPoints[i], rvecs[i], tvecs[i], cameraMatrix, distCoeffs, imagePoints2);
        }
        err = norm(imagePoints[i], imagePoints2, NORM_L2);

        size_t n = objectPoints[i].size();
        perViewErrors[i] = (float) std::sqrt(err*err/n);
        totalErr        += err*err;
        totalPoints     += n;
    }

    return std::sqrt(totalErr/totalPoints);
}
//! [compute_errors]
//! [board_corners]
static void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
                                     Settings::Pattern patternType /*= Settings::CHESSBOARD*/)
{
    corners.clear();

    switch(patternType)
    {
    case Settings::CHESSBOARD:
    case Settings::CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; ++i )
            for( int j = 0; j < boardSize.width; ++j )
                corners.push_back(Point3f(j*squareSize, i*squareSize, 0));
        break;

    case Settings::ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(Point3f((2*j + i % 2)*squareSize, i*squareSize, 0));
        break;
    default:
        break;
    }
}
//! [board_corners]
static bool runCalibration( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                            vector<vector<Point2f> > imagePoints, vector<Mat>& rvecs, vector<Mat>& tvecs,
                            vector<float>& reprojErrs,  double& totalAvgErr)
{
    //! [fixed_aspect]
    cameraMatrix = Mat::eye(3, 3, CV_64F);
    if( s.flag & CALIB_FIX_ASPECT_RATIO )
        cameraMatrix.at<double>(0,0) = s.aspectRatio;
    //! [fixed_aspect]
    if (s.useFisheye) {
        distCoeffs = Mat::zeros(4, 1, CV_64F);
    } else {
        distCoeffs = Mat::zeros(8, 1, CV_64F);
    }

    vector<vector<Point3f> > objectPoints(1);
    calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);

    objectPoints.resize(imagePoints.size(),objectPoints[0]);

    //Find intrinsic and extrinsic camera parameters
    double rms;

    if (s.useFisheye) {
        Mat _rvecs, _tvecs;
        rms = fisheye::calibrate(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, _rvecs,
                                 _tvecs, s.flag);

        rvecs.reserve(_rvecs.rows);
        tvecs.reserve(_tvecs.rows);
        for(int i = 0; i < int(objectPoints.size()); i++){
            rvecs.push_back(_rvecs.row(i));
            tvecs.push_back(_tvecs.row(i));
        }
    } else {
        rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs,
                              s.flag);
    }

    cout << "Re-projection error reported by calibrateCamera: "<< rms << endl;

    bool ok = checkRange(cameraMatrix) && checkRange(distCoeffs);

    totalAvgErr = computeReprojectionErrors(objectPoints, imagePoints, rvecs, tvecs, cameraMatrix,
                                            distCoeffs, reprojErrs, s.useFisheye);

    return ok;
}

// Print camera parameters to the output file
static void saveCameraParams( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                              const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                              const vector<float>& reprojErrs, const vector<vector<Point2f> >& imagePoints,
                              double totalAvgErr )
{
    FileStorage fs( s.outputFileName, FileStorage::WRITE );

    time_t tm;
    time( &tm );
    struct tm *t2 = localtime( &tm );
    char buf[1024];
    strftime( buf, sizeof(buf), "%c", t2 );

    fs << "calibration_time" << buf;

    if( !rvecs.empty() || !reprojErrs.empty() )
        fs << "nr_of_frames" << (int)std::max(rvecs.size(), reprojErrs.size());
    fs << "image_width" << imageSize.width;
    fs << "image_height" << imageSize.height;
    fs << "board_width" << s.boardSize.width;
    fs << "board_height" << s.boardSize.height;
    fs << "square_size" << s.squareSize;

    if( s.flag & CALIB_FIX_ASPECT_RATIO )
        fs << "fix_aspect_ratio" << s.aspectRatio;

    if (s.flag)
    {
        std::stringstream flagsStringStream;
        if (s.useFisheye)
        {
            flagsStringStream << "flags:"
                << (s.flag & fisheye::CALIB_FIX_SKEW ? " +fix_skew" : "")
                << (s.flag & fisheye::CALIB_FIX_K1 ? " +fix_k1" : "")
                << (s.flag & fisheye::CALIB_FIX_K2 ? " +fix_k2" : "")
                << (s.flag & fisheye::CALIB_FIX_K3 ? " +fix_k3" : "")
                << (s.flag & fisheye::CALIB_FIX_K4 ? " +fix_k4" : "")
                << (s.flag & fisheye::CALIB_RECOMPUTE_EXTRINSIC ? " +recompute_extrinsic" : "");
        }
        else
        {
            flagsStringStream << "flags:"
                << (s.flag & CALIB_USE_INTRINSIC_GUESS ? " +use_intrinsic_guess" : "")
                << (s.flag & CALIB_FIX_ASPECT_RATIO ? " +fix_aspectRatio" : "")
                << (s.flag & CALIB_FIX_PRINCIPAL_POINT ? " +fix_principal_point" : "")
                << (s.flag & CALIB_ZERO_TANGENT_DIST ? " +zero_tangent_dist" : "")
                << (s.flag & CALIB_FIX_K1 ? " +fix_k1" : "")
                << (s.flag & CALIB_FIX_K2 ? " +fix_k2" : "")
                << (s.flag & CALIB_FIX_K3 ? " +fix_k3" : "")
                << (s.flag & CALIB_FIX_K4 ? " +fix_k4" : "")
                << (s.flag & CALIB_FIX_K5 ? " +fix_k5" : "");
        }
        fs.writeComment(flagsStringStream.str());
    }

    fs << "flags" << s.flag;

    fs << "fisheye_model" << s.useFisheye;

    fs << "camera_matrix" << cameraMatrix;
    fs << "distortion_coefficients" << distCoeffs;

    fs << "avg_reprojection_error" << totalAvgErr;
    if (s.writeExtrinsics && !reprojErrs.empty())
        fs << "per_view_reprojection_errors" << Mat(reprojErrs);

    if(s.writeExtrinsics && !rvecs.empty() && !tvecs.empty() )
    {
        CV_Assert(rvecs[0].type() == tvecs[0].type());
        Mat bigmat((int)rvecs.size(), 6, CV_MAKETYPE(rvecs[0].type(), 1));
        bool needReshapeR = rvecs[0].depth() != 1 ? true : false;
        bool needReshapeT = tvecs[0].depth() != 1 ? true : false;

        for( size_t i = 0; i < rvecs.size(); i++ )
        {
            Mat r = bigmat(Range(int(i), int(i+1)), Range(0,3));
            Mat t = bigmat(Range(int(i), int(i+1)), Range(3,6));

            if(needReshapeR)
                rvecs[i].reshape(1, 1).copyTo(r);
            else
            {
                //*.t() is MatExpr (not Mat) so we can use assignment operator
                CV_Assert(rvecs[i].rows == 3 && rvecs[i].cols == 1);
                r = rvecs[i].t();
            }

            if(needReshapeT)
                tvecs[i].reshape(1, 1).copyTo(t);
            else
            {
                CV_Assert(tvecs[i].rows == 3 && tvecs[i].cols == 1);
                t = tvecs[i].t();
            }
        }
        fs.writeComment("a set of 6-tuples (rotation vector + translation vector) for each view");
        fs << "extrinsic_parameters" << bigmat;
    }

    if(s.writePoints && !imagePoints.empty() )
    {
        Mat imagePtMat((int)imagePoints.size(), (int)imagePoints[0].size(), CV_32FC2);
        for( size_t i = 0; i < imagePoints.size(); i++ )
        {
            Mat r = imagePtMat.row(int(i)).reshape(2, imagePtMat.cols);
            Mat imgpti(imagePoints[i]);
            imgpti.copyTo(r);
        }
        fs << "image_points" << imagePtMat;
    }
}

//! [run_and_save]
bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                           vector<vector<Point2f> > imagePoints)
{
    vector<Mat> rvecs, tvecs;
    vector<float> reprojErrs;
    double totalAvgErr = 0;

    bool ok = runCalibration(s, imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs, reprojErrs,
                             totalAvgErr);
    cout << (ok ? "Calibration succeeded" : "Calibration failed")
         << ". avg re projection error = " << totalAvgErr << endl;

    if (ok)
        saveCameraParams(s, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, reprojErrs, imagePoints,
                         totalAvgErr);
    return ok;
}
//! [run_and_save]

The application takes only one argument: the name of its configuration file. If none is given, it tries to open the file named "default.xml". An example configuration file in XML format looks like this:

<?xml version="1.0"?>
<opencv_storage>
<Settings>
  <!-- Number of inner corners per a item row and column. (square, circle) -->
  <BoardSize_Width> 9</BoardSize_Width>
  <BoardSize_Height>6</BoardSize_Height>

  <!-- The size of a square in some user defined metric system (pixel, millimeter)-->
  <Square_Size>50</Square_Size>

  <!-- The type of input used for camera calibration. One of: CHESSBOARD CIRCLES_GRID ASYMMETRIC_CIRCLES_GRID -->
  <Calibrate_Pattern>"CHESSBOARD"</Calibrate_Pattern>

  <!-- The input to use for calibration.
       To use an input camera -> give the ID of the camera, like "1"
       To use an input video  -> give the path of the input video, like "/tmp/x.avi"
       To use an image list   -> give the path to the XML or YAML file containing the list of the images, like "/tmp/circles_list.xml"
  -->
  <Input>"images/CameraCalibration/VID5/VID5.xml"</Input>

  <!--  If true (non-zero) we flip the input images around the horizontal axis.-->
  <Input_FlipAroundHorizontalAxis>0</Input_FlipAroundHorizontalAxis>

  <!-- Time delay between frames in case of camera. -->
  <Input_Delay>100</Input_Delay>

  <!-- How many frames to use, for calibration. -->
  <Calibrate_NrOfFrameToUse>25</Calibrate_NrOfFrameToUse>

  <!-- Consider only fy as a free parameter, the ratio fx/fy stays the same as in the input cameraMatrix.
       Use or not setting. 0 - False Non-Zero - True-->
  <Calibrate_FixAspectRatio> 1 </Calibrate_FixAspectRatio>

  <!-- If true (non-zero) tangential distortion coefficients  are set to zeros and stay zero.-->
  <Calibrate_AssumeZeroTangentialDistortion>1</Calibrate_AssumeZeroTangentialDistortion>

  <!-- If true (non-zero) the principal point is not changed during the global optimization.-->
  <Calibrate_FixPrincipalPointAtTheCenter> 1 </Calibrate_FixPrincipalPointAtTheCenter>

  <!-- The name of the output log file. -->
  <Write_outputFileName>"out_camera_data.xml"</Write_outputFileName>

  <!-- If true (non-zero) we write to the output file the feature points.-->
  <Write_DetectedFeaturePoints>1</Write_DetectedFeaturePoints>

  <!-- If true (non-zero) we write to the output file the extrinsic camera parameters.-->
  <Write_extrinsicParameters>1</Write_extrinsicParameters>

  <!-- If true (non-zero) we show after calibration the undistorted images.-->
  <Show_UndistortedImage>1</Show_UndistortedImage>

  <!-- If true (non-zero) will be used fisheye camera model.-->
  <Calibrate_UseFisheyeModel>0</Calibrate_UseFisheyeModel>

  <!-- If true (non-zero) distortion coefficient k1 will be equals to zero.-->
  <Fix_K1>0</Fix_K1>
  <!-- If true (non-zero) distortion coefficient k2 will be equals to zero.-->
  <Fix_K2>0</Fix_K2>
  <!-- If true (non-zero) distortion coefficient k3 will be equals to zero.-->
  <Fix_K3>0</Fix_K3>
  <!-- If true (non-zero) distortion coefficient k4 will be equals to zero.-->
  <Fix_K4>1</Fix_K4>
  <!-- If true (non-zero) distortion coefficient k5 will be equals to zero.-->
  <Fix_K5>1</Fix_K5>
</Settings>
</opencv_storage>

In the configuration file you may choose a camera, a video file, or an image list as input. If you choose an image list, you need to create another configuration file listing the images to use, for example:

<?xml version="1.0"?>
<opencv_storage>
<images>
images/CameraCalibraation/VID5/xx1.jpg
images/CameraCalibraation/VID5/xx2.jpg
images/CameraCalibraation/VID5/xx3.jpg
images/CameraCalibraation/VID5/xx4.jpg
images/CameraCalibraation/VID5/xx5.jpg
images/CameraCalibraation/VID5/xx6.jpg
images/CameraCalibraation/VID5/xx7.jpg
images/CameraCalibraation/VID5/xx8.jpg
</images>
</opencv_storage>

The paths given in this file must be absolute, or relative to the working directory from which the application is launched, so that the images can be found.

The application starts by reading the configuration file. For details on reading and writing XML and YAML files, see the tutorial "Reading and Writing XML and YAML Files in OpenCV".
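
As a rough sketch of that step (simplified compared with what the sample's Settings class actually does), reading a few values from the XML/YAML configuration with cv::FileStorage looks like this; the file name and the fields read are only examples:

#include <opencv2/core.hpp>
#include <iostream>
#include <string>

int main()
{
    // Open the configuration file (path is just an example) and pull a few values.
    cv::FileStorage fs("default.xml", cv::FileStorage::READ);
    if (!fs.isOpened())
        return -1;

    cv::FileNode settings = fs["Settings"];
    int boardWidth = 0, boardHeight = 0;
    std::string pattern;
    settings["BoardSize_Width"]   >> boardWidth;
    settings["BoardSize_Height"]  >> boardHeight;
    settings["Calibrate_Pattern"] >> pattern;

    std::cout << "Board: " << boardWidth << "x" << boardHeight
              << ", pattern: " << pattern << std::endl;
    fs.release();   // close the file when done
    return 0;
}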

Results

Chessboard pattern

Download link for the chessboard pattern (9 x 6): chessboard

Camera: AXIS IP camera

Image list configuration file: VID5.XML

<?xml version="1.0"?>
<opencv_storage>
<images>
images/CameraCalibration/VID5/xx1.jpg
images/CameraCalibration/VID5/xx2.jpg
images/CameraCalibration/VID5/xx3.jpg
images/CameraCalibration/VID5/xx4.jpg
images/CameraCalibration/VID5/xx5.jpg
images/CameraCalibration/VID5/xx6.jpg
images/CameraCalibration/VID5/xx7.jpg
images/CameraCalibration/VID5/xx8.jpg
</images>
</opencv_storage>

The detected chessboard pattern looks like this:

After the distortion removal, the result is:

Asymmetrical circle pattern

Download link: this asymmetrical circle pattern

Input: live camera

With a board width of 4 and a height of 11, the result is:

Calibration output

The stored output XML/YAML file looks like this:

<camera_matrix type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>d</dt>
<data>6.5746697944293521e+002 0. 3.1950000000000000e+002 0.
  6.5746697944293521e+002 2.3950000000000000e+002 0. 0. 1.</data></camera_matrix>
<distortion_coefficients type_id="opencv-matrix">
<rows>5</rows>
<cols>1</cols>
<dt>d</dt>
<data>-4.1802327176423804e-001 5.0715244063187526e-001 0. 0.
  -5.7843597214487474e-001</data></distortion_coefficients>

Put these parameters into your program as constants, then call cv::initUndistortRectifyMap and cv::remap to remove the distortion.
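
A minimal sketch of that last step, with the matrices above hard-coded (the input image path is a placeholder, and the original camera matrix is simply reused as the new camera matrix here):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>

int main()
{
    // Intrinsic parameters and distortion coefficients copied from the output file above.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        6.5746697944293521e+002, 0.,                      3.1950000000000000e+002,
        0.,                      6.5746697944293521e+002, 2.3950000000000000e+002,
        0.,                      0.,                      1.);
    cv::Mat distCoeffs = (cv::Mat_<double>(5, 1) <<
        -4.1802327176423804e-001, 5.0715244063187526e-001, 0., 0., -5.7843597214487474e-001);

    cv::Mat view = cv::imread("input.jpg");   // placeholder input image
    if (view.empty())
        return -1;

    // Precompute the undistortion maps once, then remap every frame that follows.
    cv::Mat map1, map2, undistorted;
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                                cameraMatrix, view.size(), CV_16SC2, map1, map2);
    cv::remap(view, undistorted, map1, map2, cv::INTER_LINEAR);

    cv::imwrite("undistorted.jpg", undistorted);
    return 0;
}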
