For the theory, refer to the Chinese edition of Learning OpenCV (watch out for possible errors in the formulas and function descriptions). Also, skip the Chinese edition of Learning OpenCV 3; read the English original, Learning OpenCV 3, instead, which itself has a few minor errors to watch for.

The code and results follow directly below:

Note: since this is my first time working with cameras, the code is heavily commented and may contain mistakes; corrections are welcome!


Single-camera calibration (for the undistortion step, create the output folder yourself)

/*
Single-camera calibration
Parameters:
    imageList               txt file listing the calibration image names
    singleCalibrateResult   txt file that receives the calibration results
    undisortion_path        folder the undistorted images are written to (create it beforehand)
    objectPoints            point coordinates in the world coordinate system
    corners_seq             corners detected in the images, reused later for stereo calibration
    cameraMatrix            camera intrinsic matrix
    distCoeffs              camera distortion coefficients
    imageSize               size of the input images (pixels)
    patternSize             inner corners per row, inner corners per column (9, 6)
    chessboardSize          side length of one chessboard square (mm)
Note: cornerSubPix accepts single-channel 8-bit or floating-point images. Depending on the type of the input
images, also mind the data types used to initialize the intrinsic matrix and the distortion coefficient
matrix passed to the calibration functions below.
*/
bool singleCameraCalibrate(const char* imageList, const char* singleCalibrateResult, const char* undisortion_path,
    vector<vector<Point3f>>& objectPoints, vector<vector<Point2f>>& corners_seq,
    Mat& cameraMatrix, Mat& distCoeffs, Size& imageSize, Size patternSize, Size chessboardSize)
{
    int n_boards = 0;
    ifstream imageStore(imageList);              // txt file listing the calibration images
    ofstream resultStore(singleCalibrateResult); // txt file that receives the calibration results
    // Extract the corner coordinates
    vector<Point2f> corners; // corners of one image
    string imageName;        // name of the calibration image being read
    while (getline(imageStore, imageName)) // each line of the txt holds one calibration image name
    {
        n_boards++;
        Mat imageInput = imread(imageName);
        cvtColor(imageInput, imageInput, COLOR_BGR2GRAY); // imread loads BGR, so convert BGR to gray
        imageSize.width = imageInput.cols;  // image width
        imageSize.height = imageInput.rows; // image height
        // Find the chessboard corners
        bool found = findChessboardCorners(imageInput, patternSize, corners); // the last parameter (int flags) defaults to CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE
        // Sub-pixel refinement. findChessboardCorners already calls cornerSubPix internally;
        // we call it again ourselves for extra precision.
        if (found) // all corners were found
        {
            TermCriteria criteria = TermCriteria(TermCriteria::EPS + TermCriteria::MAX_ITER, 40, 0.001); // stop after 40 iterations or at 0.001 pixel accuracy
            // Our images are fairly large, so use a larger search window; (11, 11) is half the real window,
            // i.e. the real size is (11*2+1, 11*2+1) = (23, 23)
            cornerSubPix(imageInput, corners, Size(11, 11), Size(-1, -1), criteria);
            corners_seq.push_back(corners); // store the corners
            // Draw the corners
            //drawChessboardCorners(imageInput, patternSize, corners, true);
            //imshow("cornersframe", imageInput);
            //waitKey(500); // pause 0.5 s
        }
    }
    //destroyWindow("cornersframe");
    // Run the camera calibration
    // Compute the 3D coordinates corresponding to the corners
    int pic, i, j;
    for (pic = 0; pic < n_boards; pic++)
    {
        vector<Point3f> realPointSet;
        for (i = 0; i < patternSize.height; i++)
        {
            for (j = 0; j < patternSize.width; j++)
            {
                Point3f realPoint;
                // Assume the chessboard lies in the Z = 0 plane of the world coordinate system
                realPoint.x = j * chessboardSize.width;
                realPoint.y = i * chessboardSize.height;
                realPoint.z = 0;
                realPointSet.push_back(realPoint);
            }
        }
        objectPoints.push_back(realPointSet);
    }
    // Run the calibration
    vector<Mat> rvec; // rotation vectors
    vector<Mat> tvec; // translation vectors
    calibrateCamera(objectPoints, corners_seq, imageSize, cameraMatrix, distCoeffs, rvec, tvec, 0);
    // Save the calibration results
    resultStore << "相机内参数矩阵" << endl;
    resultStore << cameraMatrix << endl << endl;
    resultStore << "相机畸变系数" << endl;
    resultStore << distCoeffs << endl << endl;
    // Reproject the corners and compare them with the detected corners to obtain the error
    double errPerImage = 0.;            // error of one image
    double totalErr = 0.;               // sum of all errors
    vector<Point2f> projectImagePoints; // reprojected points
    for (i = 0; i < n_boards; i++)
    {
        vector<Point3f> tempObjectPoints = objectPoints[i]; // 3D points of this image
        // Compute the reprojected points
        projectPoints(tempObjectPoints, rvec[i], tvec[i], cameraMatrix, distCoeffs, projectImagePoints);
        // Compute the error between the reprojected points and the detected corners
        vector<Point2f> tempCornersPoints = corners_seq[i]; // detected corners of this image
        Mat tempCornersPointsMat = Mat(1, tempCornersPoints.size(), CV_32FC2); // two-channel Mats make the error computation convenient
        Mat projectImagePointsMat = Mat(1, projectImagePoints.size(), CV_32FC2);
        // Fill the Mats
        for (int j = 0; j < tempCornersPoints.size(); j++)
        {
            projectImagePointsMat.at<Vec2f>(0, j) = Vec2f(projectImagePoints[j].x, projectImagePoints[j].y);
            tempCornersPointsMat.at<Vec2f>(0, j) = Vec2f(tempCornersPoints[j].x, tempCornersPoints[j].y);
        }
        // cv::norm computes (X1-X2)^2 for each channel, sums everything and takes the square root
        errPerImage = norm(tempCornersPointsMat, projectImagePointsMat, NORM_L2) / (patternSize.width * patternSize.height);
        totalErr += errPerImage;
        resultStore << "第" << i + 1 << "张图像的平均误差为:" << errPerImage << endl;
    }
    resultStore << "全局平均误差为:" << totalErr / n_boards << endl;
    imageStore.close();
    resultStore.close();
    // Undistort the calibration images
    Mat undistortImage;
    string imageRoute;
    int k = 1;
    ifstream imageStore_(imageList);
    string imageName_;
    while (getline(imageStore_, imageName_))
    {
        stringstream StrStm;
        string temp;
        Mat distortImage = imread(imageName_);
        cvtColor(distortImage, distortImage, COLOR_BGR2GRAY);
        undistortImage = distortImage.clone();
        undistort(distortImage, undistortImage, cameraMatrix, distCoeffs);
        StrStm << k++;
        StrStm >> temp;
        imageRoute = undisortion_path + temp + ".jpg";
        imwrite(imageRoute, undistortImage);
        StrStm.clear();
        imageRoute.clear();
    }
    cout << "已矫正!" << endl;
    imageStore_.close();
    return true;
}
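
For reference, here is a minimal sketch (assuming the same includes and using-directives as the complete source at the end of this post) of how this standalone version, with its extra undisortion_path parameter, could be called. The folder name "undistorted_L/" is my own assumption, not something from the original project, and it must exist beforehand because imwrite does not create directories.

// Hypothetical usage of the standalone version above; "undistorted_L/" is an assumed
// output folder and must be created manually before running.
vector<vector<Point3f>> objectPoints_L;
vector<vector<Point2f>> corners_seq_L;
Mat cameraMatrix_L = Mat(3, 3, CV_32FC1, Scalar::all(0));
Mat distCoeffs_L = Mat(1, 5, CV_32FC1, Scalar::all(0));
Size imageSize;
singleCameraCalibrate("caliberationpics_L.txt", "calibrationresults_L.txt", "undistorted_L/",
    objectPoints_L, corners_seq_L, cameraMatrix_L, distCoeffs_L,
    imageSize, Size(9, 6), Size(30, 30));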

Results:


Stereo calibration (it reuses the image points and 3D points from the single-camera calibration, so the stereo calibration function is called directly)

/*
Stereo calibration: compute the relative rotation matrix R and translation vector T between the two cameras,
the essential matrix E and the fundamental matrix F
Parameters:
    stereoCalibrateResult   txt file that receives the stereo calibration results
    objectPoints            3D points
    imagePoints             2D points in the images
    cameraMatrix            camera intrinsics
    distCoeffs              camera distortion coefficients
    imageSize               image size
    R                       rotation matrix from the left camera to the right camera
    T                       translation vector from the left camera to the right camera
    E                       essential matrix
    F                       fundamental matrix
*/
bool stereoCalibrate(const char* stereoCalibrateResult, vector<vector<Point3f>> objectPoints,
    vector<vector<Point2f>> imagePoints1, vector<vector<Point2f>> imagePoints2,
    Mat& cameraMatrix1, Mat& distCoeffs1, Mat& cameraMatrix2, Mat& distCoeffs2,
    Size& imageSize, Mat& R, Mat& T, Mat& E, Mat& F)
{
    ofstream stereoStore(stereoCalibrateResult);
    TermCriteria criteria = TermCriteria(TermCriteria::COUNT | TermCriteria::EPS, 30, 1e-6); // termination criteria
    // Explicitly qualify the OpenCV function to avoid confusion with this wrapper of the same name.
    // Mind the parameter order; check the saved file to make sure nothing is mixed up.
    cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2, cameraMatrix1, distCoeffs1,
        cameraMatrix2, distCoeffs2, imageSize, R, T, E, F, CALIB_FIX_INTRINSIC, criteria);
    stereoStore << "左相机内参数:" << endl;
    stereoStore << cameraMatrix1 << endl;
    stereoStore << "右相机内参数:" << endl;
    stereoStore << cameraMatrix2 << endl;
    stereoStore << "左相机畸变系数:" << endl;
    stereoStore << distCoeffs1 << endl;
    stereoStore << "右相机畸变系数:" << endl;
    stereoStore << distCoeffs2 << endl;
    stereoStore << "旋转矩阵:" << endl;
    stereoStore << R << endl;
    stereoStore << "平移向量:" << endl;
    stereoStore << T << endl;
    stereoStore << "本征矩阵:" << endl;
    stereoStore << E << endl;
    stereoStore << "基础矩阵:" << endl;
    stereoStore << F << endl;
    stereoStore.close();
    return true;
}

Results:


Stereo rectification

/*
Stereo rectification
Parameters:
    stereoRectifyParams     txt file that receives the rectification results
    cameraMatrix            camera intrinsics
    distCoeffs              camera distortion coefficients
    imageSize               image size
    R                       rotation matrix from the left camera to the right camera
    T                       translation vector from the left camera to the right camera
    R1, R2                  rectification rotations that row-align the two images
    P1, P2                  projection matrices of the left and right cameras
    Q                       reprojection (disparity-to-depth) matrix
    mapl, mapr              rectification maps for the left and right images
    validRoi                valid regions of the two rectified images (output)
Note: the original version returned "validRoi[0], validRoi[1]", but the comma operator only returns the
second Rect, so the valid ROIs are returned through the validRoi output parameter instead.
*/
bool stereoRectification(const char* stereoRectifyParams, Mat& cameraMatrix1, Mat& distCoeffs1, Mat& cameraMatrix2, Mat& distCoeffs2,
    Size& imageSize, Mat& R, Mat& T, Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q,
    Mat& mapl1, Mat& mapl2, Mat& mapr1, Mat& mapr2, Rect validRoi[2])
{
    ofstream stereoStore(stereoRectifyParams);
    stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imageSize,
        R, T, R1, R2, P1, P2, Q, 0, -1, imageSize, &validRoi[0], &validRoi[1]);
    // Write the rectification parameters to the txt file and to the console
    stereoStore << "R1:" << endl << R1 << endl;
    stereoStore << "R2:" << endl << R2 << endl;
    stereoStore << "P1:" << endl << P1 << endl;
    stereoStore << "P2:" << endl << P2 << endl;
    stereoStore << "Q:" << endl << Q << endl;
    stereoStore.close();
    cout << "R1:" << endl << R1 << endl;
    cout << "R2:" << endl << R2 << endl;
    cout << "P1:" << endl << P1 << endl;
    cout << "P2:" << endl << P2 << endl;
    cout << "Q:" << endl << Q << endl;
    // Build the rectification maps for the left and right images
    initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, P1, imageSize, CV_32FC1, mapl1, mapl2);
    initUndistortRectifyMap(cameraMatrix2, distCoeffs2, R2, P2, imageSize, CV_32FC1, mapr1, mapr2);
    return true;
}

Results:


Disparity map

/*
Compute the disparity map
Parameters:
    imageName1        image taken by the left camera
    imageName2        image taken by the right camera
    img1_rectified    rectified (remapped) left image
    img2_rectified    rectified (remapped) right image
    mapl, mapr        rectification maps
    validRoi          valid regions of the rectified images
    disparity         output disparity map
*/
bool computeDisparityImage(const char* imageName1, const char* imageName2, Mat& img1_rectified,
    Mat& img2_rectified, Mat& mapl1, Mat& mapl2, Mat& mapr1, Mat& mapr2, Rect validRoi[2], Mat& disparity)
{
    // First remap (rectify) the two camera images
    Mat img1 = imread(imageName1);
    Mat img2 = imread(imageName2);
    if (img1.empty() || img2.empty())
    {
        cout << "图像为空" << endl;
        return false;
    }
    Mat gray_img1, gray_img2;
    cvtColor(img1, gray_img1, COLOR_BGR2GRAY);
    cvtColor(img2, gray_img2, COLOR_BGR2GRAY);
    Mat canvas(imageSize.height, imageSize.width * 2, CV_8UC1); // mind the data type
    Mat canLeft = canvas(Rect(0, 0, imageSize.width, imageSize.height));
    Mat canRight = canvas(Rect(imageSize.width, 0, imageSize.width, imageSize.height));
    gray_img1.copyTo(canLeft);
    gray_img2.copyTo(canRight);
    imwrite("校正前左右相机图像.jpg", canvas);
    remap(gray_img1, img1_rectified, mapl1, mapl2, INTER_LINEAR);
    remap(gray_img2, img2_rectified, mapr1, mapr2, INTER_LINEAR);
    imwrite("左相机校正图像.jpg", img1_rectified);
    imwrite("右相机校正图像.jpg", img2_rectified);
    img1_rectified.copyTo(canLeft);
    img2_rectified.copyTo(canRight);
    rectangle(canLeft, validRoi[0], Scalar(255, 255, 255), 5, 8);
    rectangle(canRight, validRoi[1], Scalar(255, 255, 255), 5, 8);
    // Draw horizontal lines to check that corresponding rows are aligned after rectification
    for (int j = 0; j < canvas.rows; j += 16)
        line(canvas, Point(0, j), Point(canvas.cols, j), Scalar(0, 255, 0), 1, 8);
    imwrite("校正后左右相机图像.jpg", canvas);
    // Stereo matching
    Ptr<StereoBM> bm = StereoBM::create(16, 9); // Ptr<> is a smart pointer
    bm->compute(img1_rectified, img2_rectified, disparity); // compute the disparity map (fixed-point, scaled by 16)
    disparity.convertTo(disparity, CV_32F, 1.0 / 16);
    // Normalize the disparity map for display; note that this rescales the values, so for exact metric
    // depth the un-normalized disparity should be passed to reprojectImageTo3D.
    normalize(disparity, disparity, 0, 256, NORM_MINMAX, -1);
    return true;
}
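
The matcher above runs with mostly default settings (numDisparities = 16, blockSize = 9). If the disparity map turns out noisy, StereoBM exposes a few parameters that are commonly tuned; the sketch below uses illustrative values that are my own assumptions, not the settings used in this post.

// A sketch of commonly tuned StereoBM parameters (the values are assumptions, not the post's settings).
// numDisparities must be a positive multiple of 16 and blockSize must be odd.
Ptr<StereoBM> bm = StereoBM::create(64, 15);
bm->setMinDisparity(0);        // minimum possible disparity
bm->setUniquenessRatio(10);    // reject matches that are not clearly better than the second best
bm->setSpeckleWindowSize(100); // filter out small disparity blobs
bm->setSpeckleRange(32);       // maximum disparity variation within a speckle window
bm->compute(img1_rectified, img2_rectified, disparity);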

Results:


Mouse callback

// Mouse callback: click on the disparity map to print the 3D coordinates (the third component is the depth Z)
void onMouse(int event, int x, int y, int flags, void* param)
{
    Point point;
    point.x = x;
    point.y = y;
    if (event == EVENT_LBUTTONDOWN)
    {
        cout << result3DImage.at<Vec3f>(point) << endl;
    }
}

Main function

int main()
{
    singleCameraCalibrate(imageList_L, singleCalibrate_result_L, objectPoints_L, corners_seq_L, cameraMatrix_L,
        distCoeffs_L, imageSize, patternSize, chessboardSize);
    cout << "已完成左相机的标定!" << endl;
    singleCameraCalibrate(imageList_R, singleCalibrate_result_R, objectPoints_R, corners_seq_R, cameraMatrix_R,
        distCoeffs_R, imageSize, patternSize, chessboardSize);
    cout << "已完成右相机的标定!" << endl;
    stereoCalibrate(stereoCalibrate_result_L, objectPoints_L, corners_seq_L, corners_seq_R, cameraMatrix_L, distCoeffs_L,
        cameraMatrix_R, distCoeffs_R, imageSize, R, T, E, F);
    cout << "相机立体标定完成!" << endl;
    //stereoCalibrate(stereoCalibrate_result_R, objectPoints_R, corners_seq_L, corners_seq_R, cameraMatrix_L, distCoeffs_L,
    //    cameraMatrix_R, distCoeffs_R, imageSize, R2, T2, E2, F2);
    //cout << "右相机立体标定完成!" << endl;
    stereoRectification(stereoRectifyParams, cameraMatrix_L, distCoeffs_L, cameraMatrix_R, distCoeffs_R,
        imageSize, R, T, R1, R2, P1, P2, Q, mapl1, mapl2, mapr1, mapr2, validRoi);
    cout << "已创建图像重投影映射表!" << endl;
    computeDisparityImage(imageName_L, imageName_R, img1_rectified, img2_rectified, mapl1, mapl2, mapr1, mapr2, validRoi, disparity);
    cout << "视差图建立完成!" << endl;
    // Recover the depth map by reprojecting the disparity to 3D
    reprojectImageTo3D(disparity, result3DImage, Q);
    imshow("视差图", disparity);
    setMouseCallback("视差图", onMouse);
    waitKey(0);
    //destroyAllWindows();
    return 0;
}

Complete source code

#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <fstream>

using namespace cv;
using namespace std;

const char* imageName_L = "1.jpg";  // left image used for the depth test
const char* imageName_R = "26.jpg"; // right image used for the depth test
const char* imageList_L = "caliberationpics_L.txt"; // list of calibration image names for the left camera
const char* imageList_R = "caliberationpics_R.txt"; // list of calibration image names for the right camera
const char* singleCalibrate_result_L = "calibrationresults_L.txt"; // calibration results of the left camera
const char* singleCalibrate_result_R = "calibrationresults_R.txt"; // calibration results of the right camera
const char* stereoCalibrate_result_L = "stereocalibrateresult_L.txt"; // stereo calibration results
const char* stereoCalibrate_result_R = "stereocalibrateresult_R.txt";
const char* stereoRectifyParams = "stereoRectifyParams.txt"; // stereo rectification parameters
vector<vector<Point2f>> corners_seq_L; // all corner coordinates
vector<vector<Point2f>> corners_seq_R;
vector<vector<Point3f>> objectPoints_L; // 3D coordinates
vector<vector<Point3f>> objectPoints_R;
Mat cameraMatrix_L = Mat(3, 3, CV_32FC1, Scalar::all(0)); // camera intrinsic matrix (left)
Mat cameraMatrix_R = Mat(3, 3, CV_32FC1, Scalar::all(0)); // camera intrinsic matrix (right)
Mat distCoeffs_L = Mat(1, 5, CV_32FC1, Scalar::all(0)); // camera distortion coefficients (left)
Mat distCoeffs_R = Mat(1, 5, CV_32FC1, Scalar::all(0)); // camera distortion coefficients (right)
Mat R, T, E, F; // stereo calibration parameters
Mat R1, R2, P1, P2, Q; // stereo rectification parameters
Mat mapl1, mapl2, mapr1, mapr2; // rectification maps
Mat img1_rectified, img2_rectified, disparity, result3DImage; // rectified images, disparity map, depth map
Size patternSize = Size(9, 6); // number of inner corners per row and per column
Size chessboardSize = Size(30, 30); // each chessboard square is 30 mm
Size imageSize; // image size
Rect validRoi[2];

/*
Single-camera calibration
Parameters:
    imageList               txt file listing the calibration image names
    singleCalibrateResult   txt file that receives the calibration results
    objectPoints            point coordinates in the world coordinate system
    corners_seq             corners detected in the images, reused later for stereo calibration
    cameraMatrix            camera intrinsic matrix
    distCoeffs              camera distortion coefficients
    imageSize               size of the input images (pixels)
    patternSize             inner corners per row, inner corners per column (9, 6)
    chessboardSize          side length of one chessboard square (mm)
Note: cornerSubPix accepts single-channel 8-bit or floating-point images. Depending on the type of the input
images, also mind the data types used to initialize the intrinsic matrix and the distortion coefficient
matrix passed to the calibration functions below.
*/
bool singleCameraCalibrate(const char* imageList, const char* singleCalibrateResult,
    vector<vector<Point3f>>& objectPoints, vector<vector<Point2f>>& corners_seq,
    Mat& cameraMatrix, Mat& distCoeffs, Size& imageSize, Size patternSize, Size chessboardSize)
{
    int n_boards = 0;
    ifstream imageStore(imageList);              // txt file listing the calibration images
    ofstream resultStore(singleCalibrateResult); // txt file that receives the calibration results
    // Extract the corner coordinates
    vector<Point2f> corners; // corners of one image
    string imageName;        // name of the calibration image being read
    while (getline(imageStore, imageName)) // each line of the txt holds one calibration image name
    {
        n_boards++;
        Mat imageInput = imread(imageName);
        cvtColor(imageInput, imageInput, COLOR_BGR2GRAY); // imread loads BGR, so convert BGR to gray
        imageSize.width = imageInput.cols;  // image width
        imageSize.height = imageInput.rows; // image height
        // Find the chessboard corners
        bool found = findChessboardCorners(imageInput, patternSize, corners); // the last parameter (int flags) defaults to CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE
        // Sub-pixel refinement. findChessboardCorners already calls cornerSubPix internally;
        // we call it again ourselves for extra precision.
        if (found) // all corners were found
        {
            TermCriteria criteria = TermCriteria(TermCriteria::EPS + TermCriteria::MAX_ITER, 40, 0.001); // stop after 40 iterations or at 0.001 pixel accuracy
            // Our images are fairly large, so use a larger search window; (11, 11) is half the real window,
            // i.e. the real size is (11*2+1, 11*2+1) = (23, 23)
            cornerSubPix(imageInput, corners, Size(11, 11), Size(-1, -1), criteria);
            corners_seq.push_back(corners); // store the corners
            // Draw the corners
            //drawChessboardCorners(imageInput, patternSize, corners, true);
            //imshow("cornersframe", imageInput);
            //waitKey(500); // pause 0.5 s
        }
    }
    //destroyWindow("cornersframe");
    // Run the camera calibration
    // Compute the 3D coordinates corresponding to the corners
    int pic, i, j;
    for (pic = 0; pic < n_boards; pic++)
    {
        vector<Point3f> realPointSet;
        for (i = 0; i < patternSize.height; i++)
        {
            for (j = 0; j < patternSize.width; j++)
            {
                Point3f realPoint;
                // Assume the chessboard lies in the Z = 0 plane of the world coordinate system
                realPoint.x = j * chessboardSize.width;
                realPoint.y = i * chessboardSize.height;
                realPoint.z = 0;
                realPointSet.push_back(realPoint);
            }
        }
        objectPoints.push_back(realPointSet);
    }
    // Run the calibration
    vector<Mat> rvec; // rotation vectors
    vector<Mat> tvec; // translation vectors
    calibrateCamera(objectPoints, corners_seq, imageSize, cameraMatrix, distCoeffs, rvec, tvec, 0);
    // Save the calibration results
    resultStore << "相机内参数矩阵" << endl;
    resultStore << cameraMatrix << endl << endl;
    resultStore << "相机畸变系数" << endl;
    resultStore << distCoeffs << endl << endl;
    // Reproject the corners and compare them with the detected corners to obtain the error
    double errPerImage = 0.;            // error of one image
    double totalErr = 0.;               // sum of all errors
    vector<Point2f> projectImagePoints; // reprojected points
    for (i = 0; i < n_boards; i++)
    {
        vector<Point3f> tempObjectPoints = objectPoints[i]; // 3D points of this image
        // Compute the reprojected points
        projectPoints(tempObjectPoints, rvec[i], tvec[i], cameraMatrix, distCoeffs, projectImagePoints);
        // Compute the error between the reprojected points and the detected corners
        vector<Point2f> tempCornersPoints = corners_seq[i]; // detected corners of this image
        Mat tempCornersPointsMat = Mat(1, tempCornersPoints.size(), CV_32FC2); // two-channel Mats make the error computation convenient
        Mat projectImagePointsMat = Mat(1, projectImagePoints.size(), CV_32FC2);
        // Fill the Mats
        for (int j = 0; j < tempCornersPoints.size(); j++)
        {
            projectImagePointsMat.at<Vec2f>(0, j) = Vec2f(projectImagePoints[j].x, projectImagePoints[j].y);
            tempCornersPointsMat.at<Vec2f>(0, j) = Vec2f(tempCornersPoints[j].x, tempCornersPoints[j].y);
        }
        // cv::norm computes (X1-X2)^2 for each channel, sums everything and takes the square root
        errPerImage = norm(tempCornersPointsMat, projectImagePointsMat, NORM_L2) / (patternSize.width * patternSize.height);
        totalErr += errPerImage;
        resultStore << "第" << i + 1 << "张图像的平均误差为:" << errPerImage << endl;
    }
    resultStore << "全局平均误差为:" << totalErr / n_boards << endl;
    imageStore.close();
    resultStore.close();
    return true;
}

/*
Stereo calibration: compute the relative rotation matrix R and translation vector T between the two cameras,
the essential matrix E and the fundamental matrix F
Parameters:
    stereoCalibrateResult   txt file that receives the stereo calibration results
    objectPoints            3D points
    imagePoints             2D points in the images
    cameraMatrix            camera intrinsics
    distCoeffs              camera distortion coefficients
    imageSize               image size
    R                       rotation matrix from the left camera to the right camera
    T                       translation vector from the left camera to the right camera
    E                       essential matrix
    F                       fundamental matrix
*/
bool stereoCalibrate(const char* stereoCalibrateResult, vector<vector<Point3f>> objectPoints,
    vector<vector<Point2f>> imagePoints1, vector<vector<Point2f>> imagePoints2,
    Mat& cameraMatrix1, Mat& distCoeffs1, Mat& cameraMatrix2, Mat& distCoeffs2,
    Size& imageSize, Mat& R, Mat& T, Mat& E, Mat& F)
{
    ofstream stereoStore(stereoCalibrateResult);
    TermCriteria criteria = TermCriteria(TermCriteria::COUNT | TermCriteria::EPS, 30, 1e-6); // termination criteria
    // Explicitly qualify the OpenCV function to avoid confusion with this wrapper of the same name.
    // Mind the parameter order; check the saved file to make sure nothing is mixed up.
    cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2, cameraMatrix1, distCoeffs1,
        cameraMatrix2, distCoeffs2, imageSize, R, T, E, F, CALIB_FIX_INTRINSIC, criteria);
    stereoStore << "左相机内参数:" << endl;
    stereoStore << cameraMatrix1 << endl;
    stereoStore << "右相机内参数:" << endl;
    stereoStore << cameraMatrix2 << endl;
    stereoStore << "左相机畸变系数:" << endl;
    stereoStore << distCoeffs1 << endl;
    stereoStore << "右相机畸变系数:" << endl;
    stereoStore << distCoeffs2 << endl;
    stereoStore << "旋转矩阵:" << endl;
    stereoStore << R << endl;
    stereoStore << "平移向量:" << endl;
    stereoStore << T << endl;
    stereoStore << "本征矩阵:" << endl;
    stereoStore << E << endl;
    stereoStore << "基础矩阵:" << endl;
    stereoStore << F << endl;
    stereoStore.close();
    return true;
}

/*
Stereo rectification
Parameters:
    stereoRectifyParams     txt file that receives the rectification results
    cameraMatrix            camera intrinsics
    distCoeffs              camera distortion coefficients
    imageSize               image size
    R                       rotation matrix from the left camera to the right camera
    T                       translation vector from the left camera to the right camera
    R1, R2                  rectification rotations that row-align the two images
    P1, P2                  projection matrices of the left and right cameras
    Q                       reprojection (disparity-to-depth) matrix
    mapl, mapr              rectification maps for the left and right images
    validRoi                valid regions of the two rectified images (output)
Note: the original version returned "validRoi[0], validRoi[1]", but the comma operator only returns the
second Rect, so the valid ROIs are returned through the validRoi output parameter instead.
*/
bool stereoRectification(const char* stereoRectifyParams, Mat& cameraMatrix1, Mat& distCoeffs1, Mat& cameraMatrix2, Mat& distCoeffs2,
    Size& imageSize, Mat& R, Mat& T, Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q,
    Mat& mapl1, Mat& mapl2, Mat& mapr1, Mat& mapr2, Rect validRoi[2])
{
    ofstream stereoStore(stereoRectifyParams);
    stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imageSize,
        R, T, R1, R2, P1, P2, Q, 0, -1, imageSize, &validRoi[0], &validRoi[1]);
    // Write the rectification parameters to the txt file and to the console
    stereoStore << "R1:" << endl << R1 << endl;
    stereoStore << "R2:" << endl << R2 << endl;
    stereoStore << "P1:" << endl << P1 << endl;
    stereoStore << "P2:" << endl << P2 << endl;
    stereoStore << "Q:" << endl << Q << endl;
    stereoStore.close();
    cout << "R1:" << endl << R1 << endl;
    cout << "R2:" << endl << R2 << endl;
    cout << "P1:" << endl << P1 << endl;
    cout << "P2:" << endl << P2 << endl;
    cout << "Q:" << endl << Q << endl;
    // Build the rectification maps for the left and right images
    initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, P1, imageSize, CV_32FC1, mapl1, mapl2);
    initUndistortRectifyMap(cameraMatrix2, distCoeffs2, R2, P2, imageSize, CV_32FC1, mapr1, mapr2);
    return true;
}

/*
Compute the disparity map
Parameters:
    imageName1        image taken by the left camera
    imageName2        image taken by the right camera
    img1_rectified    rectified (remapped) left image
    img2_rectified    rectified (remapped) right image
    mapl, mapr        rectification maps
    validRoi          valid regions of the rectified images
    disparity         output disparity map
*/
bool computeDisparityImage(const char* imageName1, const char* imageName2, Mat& img1_rectified,
    Mat& img2_rectified, Mat& mapl1, Mat& mapl2, Mat& mapr1, Mat& mapr2, Rect validRoi[2], Mat& disparity)
{
    // First remap (rectify) the two camera images
    Mat img1 = imread(imageName1);
    Mat img2 = imread(imageName2);
    if (img1.empty() || img2.empty())
    {
        cout << "图像为空" << endl;
        return false;
    }
    Mat gray_img1, gray_img2;
    cvtColor(img1, gray_img1, COLOR_BGR2GRAY);
    cvtColor(img2, gray_img2, COLOR_BGR2GRAY);
    Mat canvas(imageSize.height, imageSize.width * 2, CV_8UC1); // mind the data type
    Mat canLeft = canvas(Rect(0, 0, imageSize.width, imageSize.height));
    Mat canRight = canvas(Rect(imageSize.width, 0, imageSize.width, imageSize.height));
    gray_img1.copyTo(canLeft);
    gray_img2.copyTo(canRight);
    imwrite("校正前左右相机图像.jpg", canvas);
    remap(gray_img1, img1_rectified, mapl1, mapl2, INTER_LINEAR);
    remap(gray_img2, img2_rectified, mapr1, mapr2, INTER_LINEAR);
    imwrite("左相机校正图像.jpg", img1_rectified);
    imwrite("右相机校正图像.jpg", img2_rectified);
    img1_rectified.copyTo(canLeft);
    img2_rectified.copyTo(canRight);
    rectangle(canLeft, validRoi[0], Scalar(255, 255, 255), 5, 8);
    rectangle(canRight, validRoi[1], Scalar(255, 255, 255), 5, 8);
    // Draw horizontal lines to check that corresponding rows are aligned after rectification
    for (int j = 0; j < canvas.rows; j += 16)
        line(canvas, Point(0, j), Point(canvas.cols, j), Scalar(0, 255, 0), 1, 8);
    imwrite("校正后左右相机图像.jpg", canvas);
    // Stereo matching
    Ptr<StereoBM> bm = StereoBM::create(16, 9); // Ptr<> is a smart pointer
    bm->compute(img1_rectified, img2_rectified, disparity); // compute the disparity map (fixed-point, scaled by 16)
    disparity.convertTo(disparity, CV_32F, 1.0 / 16);
    // Normalize the disparity map for display; note that this rescales the values, so for exact metric
    // depth the un-normalized disparity should be passed to reprojectImageTo3D.
    normalize(disparity, disparity, 0, 256, NORM_MINMAX, -1);
    return true;
}

// Mouse callback: click on the disparity map to print the 3D coordinates (the third component is the depth Z)
void onMouse(int event, int x, int y, int flags, void* param)
{
    Point point;
    point.x = x;
    point.y = y;
    if (event == EVENT_LBUTTONDOWN)
    {
        cout << result3DImage.at<Vec3f>(point) << endl;
    }
}

int main()
{
    singleCameraCalibrate(imageList_L, singleCalibrate_result_L, objectPoints_L, corners_seq_L, cameraMatrix_L,
        distCoeffs_L, imageSize, patternSize, chessboardSize);
    cout << "已完成左相机的标定!" << endl;
    singleCameraCalibrate(imageList_R, singleCalibrate_result_R, objectPoints_R, corners_seq_R, cameraMatrix_R,
        distCoeffs_R, imageSize, patternSize, chessboardSize);
    cout << "已完成右相机的标定!" << endl;
    stereoCalibrate(stereoCalibrate_result_L, objectPoints_L, corners_seq_L, corners_seq_R, cameraMatrix_L, distCoeffs_L,
        cameraMatrix_R, distCoeffs_R, imageSize, R, T, E, F);
    cout << "相机立体标定完成!" << endl;
    //stereoCalibrate(stereoCalibrate_result_R, objectPoints_R, corners_seq_L, corners_seq_R, cameraMatrix_L, distCoeffs_L,
    //    cameraMatrix_R, distCoeffs_R, imageSize, R2, T2, E2, F2);
    //cout << "右相机立体标定完成!" << endl;
    stereoRectification(stereoRectifyParams, cameraMatrix_L, distCoeffs_L, cameraMatrix_R, distCoeffs_R,
        imageSize, R, T, R1, R2, P1, P2, Q, mapl1, mapl2, mapr1, mapr2, validRoi);
    cout << "已创建图像重投影映射表!" << endl;
    computeDisparityImage(imageName_L, imageName_R, img1_rectified, img2_rectified, mapl1, mapl2, mapr1, mapr2, validRoi, disparity);
    cout << "视差图建立完成!" << endl;
    // Recover the depth map by reprojecting the disparity to 3D
    reprojectImageTo3D(disparity, result3DImage, Q);
    imshow("视差图", disparity);
    setMouseCallback("视差图", onMouse);
    waitKey(0);
    //destroyAllWindows();
    return 0;
}
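
For completeness: the list files read by singleCameraCalibrate (caliberationpics_L.txt and caliberationpics_R.txt) are plain text files with one calibration image path per line, consumed with getline. A hypothetical example (the file names are made up):

left01.jpg
left02.jpg
left03.jpg

and so on, one line per calibration image.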

Depth readings from the disparity map at two resolutions: the first image is 640*480, the second is 5448*2050.

Click on the disparity map to print the value.

From the results, images at different resolutions give different depth values. The depth is only an estimate and deviates somewhat from the true distance; in this post the error relative to the actual distance is roughly 10-20 cm.
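
For intuition about these numbers: reprojectImageTo3D maps (x, y, disparity) through the Q matrix, which for a rectified pair reduces to the pinhole relation Z = f*B/d, where f is the focal length in pixels, B the baseline (the magnitude of T) and d the disparity in pixels. Since f in pixels scales with image resolution, and the disparity map above is min-max normalized before reprojection, the values printed by onMouse are rough estimates rather than exact metric depths. A small sketch of the relation, with numbers that are purely illustrative assumptions:

#include <iostream>

// Pinhole stereo depth relation Z = f * B / d (f in pixels, B and Z in the same unit, d in pixels).
// A minimal standalone sketch with made-up numbers; it is not part of the post's program.
double depthFromDisparity(double fx_pixels, double baseline_mm, double disparity_pixels)
{
    if (disparity_pixels <= 0.0)
        return -1.0; // invalid or occluded pixel
    return fx_pixels * baseline_mm / disparity_pixels; // depth in mm
}

int main()
{
    double fx = 800.0;      // assumed focal length in pixels at 640*480
    double baseline = 60.0; // assumed baseline in mm (|T| from the stereo calibration)
    double d = 40.0;        // assumed disparity in pixels
    std::cout << depthFromDisparity(fx, baseline, d) << " mm" << std::endl; // prints 1200 mm
    return 0;
}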

The whole project has been packaged and uploaded: https://download.csdn.net/download/sweetwind1996/11192277. (I did not set any download points, so why does it show 5 points?)
