Stereo Vision Rectification and Calibration with OpenCV 3.4.8
Contents
- Overview
- Code
- Results
- References
Overview
1. Pipeline: first call calibrateCamera() to determine each camera's intrinsic matrix and distortion vector; then stereoCalibrate() to determine the rotation R and translation T of the right camera relative to the left, along with the essential matrix E and fundamental matrix F; then stereoRectify() to compute R1, R2, P1, P2, and Q, and remap the images with R1 and R2 to obtain rectified, row-aligned (coplanar) image pairs; then run StereoBM or StereoSGBM to compute the disparity map; finally call reprojectImageTo3D() to recover the 3D coordinates of every pixel in the disparity map (in mm, because the baseline Tx between the two cameras is given in mm).
2. Relationship between the essential matrix E and the fundamental matrix F: E encodes all the geometry between the two cameras but none of their intrinsics, while F adds the intrinsic parameters of both cameras on top of E.
   1) The essential matrix E relates the physical (camera-frame) position of a point P observed by the left camera to the position of the same point as seen by the right camera.
   2) The fundamental matrix F relates a point on one camera's image plane to the corresponding point on the other camera's image plane, in pixel (image) coordinates.
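In symbols (the standard epipolar relations, with K_l and K_r the left/right intrinsic matrices; this derivation is implied but not written out in the original):

```latex
% Essential matrix: normalized (camera-frame) coordinates p
p_r^{\top} E \, p_l = 0, \qquad E = [T]_{\times} R
% Fundamental matrix: pixel coordinates q = K p
q_r^{\top} F \, q_l = 0, \qquad F = K_r^{-\top} E \, K_l^{-1}
```

This is exactly why F "adds the intrinsics on top of E": substituting q = K p into the essential constraint produces the fundamental one.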
3. Stereo calibration with stereoCalibrate():
Principle: do not rely on this function to estimate every parameter in one shot, as that can produce degenerate results. Instead:
   1) Use calibrateCamera() to compute each camera's intrinsic matrix and distortion vector.
   2) Each calibration image pair then yields one candidate (R, T) from the per-view extrinsics.
   3) A robust Levenberg-Marquardt iteration minimizes the reprojection error and yields the best R and T, which are taken as the stereo calibration parameters.
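Concretely, if pair i gives board-to-camera extrinsics (R_l, T_l) for the left view and (R_r, T_r) for the right view, the candidate stereo pose follows from composing the two transforms (standard derivation, not spelled out in the original):

```latex
R = R_r R_l^{\top}, \qquad T = T_r - R\,T_l
```

since x_l = R_l X + T_l and x_r = R_r X + T_r for a board point X, hence x_r = R x_l + T.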
4. The mathematics of stereo measurement: (see the references)
5. Measuring 3D depth: the reconstructed 3D coordinate is (X/W, Y/W, Z/W); implemented by reprojectImageTo3D().
Code
#include <opencv2/opencv.hpp>
#include <iostream>
#include <math.h>

using namespace cv;
using namespace std;

void createCalibImg();
bool SingleCalib(string calibdata, string calibimgname, Size Pattern_size, int Square_size, int& imgNum,
                 Mat& CameraMatrix, Mat& DistCoeffs, vector<Mat>& Rvecs, vector<Mat>& Tvecs, double& totalerror,
                 vector<vector<Point3f>>& _objectPoints, vector<vector<Point2f>>& _imgPoints, Size& _image_size);
double Stereo_Calib_Quanlity_Check(vector<vector<Point2f>>& _leftimgPoints, int& leftimgNum,
                                   vector<vector<Point2f>>& _rightimgPoints, int& rightimgNum,
                                   const Mat& leftCameraMatrix, const Mat& leftDistCoeffs, const Mat& _R1, const Mat& _P1,
                                   const Mat& rightCameraMatrix, const Mat& rightDistCoeffs, const Mat& _R2, const Mat& _P2,
                                   Mat& _F, Size _pattern_size);

int main(int argc, char** argv)
{
    /* 1. Create the image-list file used for reading */
    //createCalibImg();

    /* 2. Calibrate the left and right cameras (intrinsic matrix, distortion vector) */
    Size board_size = Size(6, 9);  // inner corner count per chessboard row/column
    int square_size = 50;          // chessboard square size
    int leftimgNum = 0, rightimgNum = 0;
    Mat leftCamMatrix = Mat(3, 3, CV_32FC1, Scalar::all(0));
    Mat rightCamMatrix = Mat(3, 3, CV_32FC1, Scalar::all(0));
    Mat leftDistCoeffs = Mat(1, 5, CV_32FC1, Scalar::all(0));
    Mat rightDistCoeffs = Mat(1, 5, CV_32FC1, Scalar::all(0));
    vector<Mat> leftRvec, leftTvec;
    vector<Mat> rightRvec, rightTvec;
    vector<vector<Point3f>> leftobjPoints;  // reused by the stereo calibration in step 3
    vector<vector<Point3f>> rightobjPoints;
    vector<vector<Point2f>> leftimgPoints;
    vector<vector<Point2f>> rightimgPoints;
    Size leftImgSize;
    Size rightImgSize;
    double leftTotalerror = 0.0, rightTotalerror = 0.0;
    SingleCalib("calibimg.yaml", "leftimg", board_size, square_size, leftimgNum,
                leftCamMatrix, leftDistCoeffs, leftRvec, leftTvec, leftTotalerror,
                leftobjPoints, leftimgPoints, leftImgSize);     // left camera calibration
    SingleCalib("calibimg.yaml", "rightimg", board_size, square_size, rightimgNum,
                rightCamMatrix, rightDistCoeffs, rightRvec, rightTvec, rightTotalerror,
                rightobjPoints, rightimgPoints, rightImgSize);  // right camera calibration

    /* 3. Stereo calibration (rotation and translation of the right camera relative to the left) */
    Mat R = Mat(3, 3, CV_32FC1, Scalar::all(0));
    Mat T = Mat(3, 1, CV_32FC1, Scalar::all(0));
    Mat E = Mat(3, 3, CV_32FC1, Scalar::all(0));  // 3 x 3 matrix
    Mat F = Mat(3, 3, CV_32FC1, Scalar::all(0));
    stereoCalibrate(leftobjPoints, leftimgPoints, rightimgPoints, leftCamMatrix, leftDistCoeffs,
                    rightCamMatrix, rightDistCoeffs, leftImgSize, R, T, E, F, CALIB_FIX_INTRINSIC);

    /* 4. Stereo rectification (Bouguet's algorithm): compute R1, R2, P1, P2, Q */
    Mat R1, R2, P1, P2, Q;
    stereoRectify(leftCamMatrix, leftDistCoeffs, rightCamMatrix, rightDistCoeffs, leftImgSize,
                  R, T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY);  // compute the rectification parameters

    // 4.1 Measure the rectification quality
    double Stereoerr = Stereo_Calib_Quanlity_Check(leftimgPoints, leftimgNum, rightimgPoints, rightimgNum,
                                                   leftCamMatrix, leftDistCoeffs, R1, P1,
                                                   rightCamMatrix, rightDistCoeffs, R2, P2, F, board_size);
    cout << "stereo calib quanlity:" << Stereoerr << endl;

    Mat lmap1, lmap2;
    Mat rmap1, rmap2;
    initUndistortRectifyMap(leftCamMatrix, leftDistCoeffs, R1, P1, leftImgSize, CV_16SC2, lmap1, lmap2);     // left rectification map
    initUndistortRectifyMap(rightCamMatrix, rightDistCoeffs, R2, P2, rightImgSize, CV_16SC2, rmap1, rmap2);  // right rectification map

    // 4.2 Produce the rectified, row-aligned image pairs
    FileStorage fs2 = FileStorage("calibimg.yaml", FileStorage::READ);
    FileNode leftimgSeq = fs2["leftimg"];
    FileNode rightimgSeq = fs2["rightimg"];
    FileNodeIterator lit = leftimgSeq.begin(), lit_end = leftimgSeq.end();  // reused in step 5
    FileNodeIterator rit = rightimgSeq.begin(), rit_end = rightimgSeq.end();
    Mat lsrc, ldst, rsrc, rdst;
    for (; lit != lit_end && rit != rit_end; lit++, rit++)
    {
        lsrc = imread((string)(*lit));
        rsrc = imread((string)(*rit));
        Mat s = Mat(lsrc.rows, (lsrc.cols * 2), CV_8UC3, Scalar(0, 0, 0));
        Mat part1 = Mat(s, Rect(0, 0, lsrc.cols, lsrc.rows));  // shallow copy
        Mat part2 = Mat(s, Rect(lsrc.cols, 0, lsrc.cols, lsrc.rows));
        remap(lsrc, ldst, lmap1, lmap2, INTER_LINEAR);
        remap(rsrc, rdst, rmap1, rmap2, INTER_LINEAR);
        resize(ldst, part1, part1.size(), 0, 0, INTER_LINEAR);
        resize(rdst, part2, part2.size(), 0, 0, INTER_LINEAR);
        Mat d = Mat(lsrc.rows, (lsrc.cols * 2), CV_8UC3, Scalar(0, 0, 0));
        Mat dpart1 = d(Rect(0, 0, lsrc.cols, lsrc.rows));
        Mat dpart2 = d(Rect(lsrc.cols, 0, lsrc.cols, lsrc.rows));
        lsrc.copyTo(dpart1);
        rsrc.copyTo(dpart2);
        for (int i = 0; i < lsrc.rows; i += 20)
        {
            line(s, Point(0, i), Point(s.cols, i), Scalar(0, 255, 0), 1);  // draw scanlines
            line(d, Point(0, i), Point(s.cols, i), Scalar(0, 255, 0), 1);  // draw scanlines
        }
        imshow("original", d);
        imshow("stereo rectify", s);
        waitKey(0);
    }

    /* 5. Compute disparity maps (block matching, StereoBM) */
    Ptr<StereoBM> bm = StereoBM::create();
    bm->setBlockSize(15);        // SAD window size of 15
    bm->setNumDisparities(64);   // maximum disparity search range of 64
    bm->setMinDisparity(0);      // minimum disparity
    bm->setUniquenessRatio(10);  // required margin between best and second-best match, typically 5-15
    // The remaining parameters are less critical; ROI1/ROI2 are not set because
    // stereoRectify() was not asked to output them here.
    bm->setDisp12MaxDiff(-1);    // maximum allowed left/right disparity-map difference (default -1)
    bm->setPreFilterCap(31);     // pre-filter saturation threshold
    bm->setPreFilterSize(15);    // pre-filter window size in NORMALIZED_RESPONSE mode
    bm->setPreFilterType(StereoBM::PREFILTER_NORMALIZED_RESPONSE);  // only normalize intensities within the window
    bm->setSpeckleRange(32);        // speckle threshold: zero windows whose disparity varies more than this
    bm->setSpeckleWindowSize(200);  // speckle window size
    bm->setTextureThreshold(10);    // texture threshold, to ensure enough texture to overcome noise

    // Compute and display the disparity maps
    Mat dist, dist8U;  // disparity map storage
    lit = leftimgSeq.begin();  // reset the iterators
    rit = rightimgSeq.begin();
    for (; lit != lit_end && rit != rit_end; lit++, rit++)
    {
        lsrc = imread((string)(*lit), IMREAD_GRAYSCALE);  // note: StereoBM only handles grayscale images
        rsrc = imread((string)(*rit), IMREAD_GRAYSCALE);
        bm->compute(lsrc, rsrc, dist);           // dist is CV_16S
        dist.convertTo(dist, CV_32F, 1.0 / 16);  // divide by 16 to get the true disparity
        // Normalize for display (imshow needs an 8-bit image)
        dist8U = Mat(dist.size(), CV_8UC1);
        normalize(dist, dist8U, 0, 255, NORM_MINMAX, CV_8UC1);
        imshow("Disparity Image", dist8U);
        waitKey(0);
    }

    /* 6. Reproject to 3D: compute the 3D coordinate of every pixel of the left image (reprojectImageTo3D()) */
    Mat objxyz;  // 3D coordinates from reprojection
    //vector<Mat> objxyz_set;  // 3D coordinates of all 13 images
    lit = leftimgSeq.begin();  // reset the iterators
    rit = rightimgSeq.begin();
    for (; lit != lit_end && rit != rit_end; lit++, rit++)
    {
        lsrc = imread((string)(*lit), IMREAD_GRAYSCALE);  // note: StereoBM only handles grayscale images
        rsrc = imread((string)(*rit), IMREAD_GRAYSCALE);
        bm->compute(lsrc, rsrc, dist);           // dist is CV_16S
        dist.convertTo(dist, CV_32F, 1.0 / 16);  // divide by 16 to get the true disparity
        reprojectImageTo3D(dist, objxyz, Q, true);  // objxyz is CV_32FC3: per-pixel 3D coordinates in the
                                                    // camera frame; outliers map to very large coordinates
        //objxyz_set.push_back(objxyz);
        cout << (string)(*lit) + " : " << objxyz.at<Vec3f>(100, 100) << endl << endl;  // 3D coordinate at pixel (100,100)
    }
    fs2.release();

    // Save all results
    FileStorage fs1 = FileStorage("calibresult.yaml", FileStorage::WRITE);
    fs1 << "leftCamMatrix" << leftCamMatrix;
    fs1 << "leftDistCoeffs" << leftDistCoeffs;
    fs1 << "leftTotalerror" << leftTotalerror;
    fs1 << "rightCamMatrix" << rightCamMatrix;
    fs1 << "rightDistCoeffs" << rightDistCoeffs;
    fs1 << "rightTotalerror" << rightTotalerror;
    fs1 << "Camera Rotation Matrix" << R;
    fs1 << "Camera Translation Matrix" << T;
    fs1 << "Essential Matrix" << E;
    fs1 << "Fundamental Matrix" << F;
    fs1 << "left rectification matrix" << R1;
    fs1 << "right rectification matrix" << R2;
    fs1 << "left Projection matrix" << P1;
    fs1 << "right Projection matrix" << P2;
    fs1 << "Q " << Q;
    fs1 << "stereo calib error" << Stereoerr;
    fs1.release();
    return 0;
}
// Create the image-list file used for reading and writing
void createCalibImg()
{
    FileStorage fs = FileStorage("calibimg.yaml", FileStorage::WRITE);
    fs << "leftimg" << "[";
    fs << "left01.jpg" << "left02.jpg" << "left03.jpg" << "left04.jpg" << "left05.jpg"
       << "left06.jpg" << "left07.jpg" << "left08.jpg" << "left09.jpg" << "left11.jpg"
       << "left12.jpg" << "left13.jpg" << "left14.jpg";
    fs << "]";
    fs << "rightimg" << "[";
    fs << "right01.jpg" << "right02.jpg" << "right03.jpg" << "right04.jpg" << "right05.jpg"
       << "right06.jpg" << "right07.jpg" << "right08.jpg" << "right09.jpg" << "right11.jpg"
       << "right12.jpg" << "right13.jpg" << "right14.jpg";
    fs << "]";
    fs.release();
    std::cout << "Image list written! Check the project directory..." << endl;
}

/* Calibrate a single camera and model its distortion (solve for the intrinsic matrix and distortion vector)
   string calibdata                        YAML file name (contains the calibration image names)
   string calibimgname                     sequence name inside the YAML file
   Size Pattern_size                       inner corner count per chessboard row/column
   int Square_size                         chessboard square size (width == height assumed, in mm)
   int& imgNum                             returns the number of images processed
   Mat& CameraMatrix                       output camera intrinsic matrix
   Mat& DistCoeffs                         output camera distortion vector
   vector<Mat>& Rvecs                      output rotation vector (3x1) per image
   vector<Mat>& Tvecs                      output translation vector per image
   double& totalerror                      returns the overall calibration error
   vector<vector<Point3f>>& _objectPoints  returns the chessboard 3D coordinates
   vector<vector<Point2f>>& _imgPoints     returns the chessboard corner 2D coordinates (uncorrected points)
*/
bool SingleCalib(string calibdata, string calibimgname, Size Pattern_size, int Square_size, int& imgNum,
                 Mat& CameraMatrix, Mat& DistCoeffs, vector<Mat>& Rvecs, vector<Mat>& Tvecs, double& totalerror,
                 vector<vector<Point3f>>& _objectPoints, vector<vector<Point2f>>& _imgPoints, Size& _image_size)
{
    FileStorage fs(calibdata, FileStorage::READ);  // open the file
    FileNode calibimg = fs[calibimgname];          // read the corresponding sequence
    if (calibimg.type() != FileNode::SEQ)
    {
        cerr << "error! input is not a sequence..." << endl;
        return false;
    }
    FileNodeIterator it = calibimg.begin(), it_end = calibimg.end();  // create the iterators
    Size pattern_size = Pattern_size;  // inner corner count per row/column
    Mat inputimg;
    Mat grayimg;
    vector<Point2f> corners;              // chessboard corners of one image
    vector<vector<Point2f>> corners_set;  // chessboard corners of all images
    Size imagesize;   // image size
    int imgnum = 0;   // number of calibration images
    namedWindow(calibimgname, CV_WINDOW_AUTOSIZE);  // create a window
    for (; it != it_end; it++)
    {
        imgnum++;  // count the calibration images
        inputimg = imread((string)(*it), IMREAD_COLOR);  // load the image
        cvtColor(inputimg, grayimg, CV_BGR2GRAY);
        if (calibimg.begin() == it)  // record the image size from the first image
        {
            imagesize.width = inputimg.cols;
            imagesize.height = inputimg.rows;
        }
        if (false == findChessboardCorners(grayimg, pattern_size, corners))
        {
            // report images where not all corners were detected
            cerr << "not all corners were detected: " << (string)(*it) << endl;
            continue;
        }
        else
        {
            find4QuadCornerSubpix(grayimg, corners, Size(11, 11));  // refine corners to sub-pixel accuracy
            corners_set.push_back(corners);  // store the corners of each image
            drawChessboardCorners(inputimg, pattern_size, corners, true);  // draw the corners (all were found)
            imshow(calibimgname, inputimg);  // show the result
            waitKey(100);
        }
    }
    destroyWindow(calibimgname);  // destroy the window when done
    fs.release();                 // release the file after reading
    std::cout << imgnum << endl;
    int squaresize = Square_size;          // chessboard square size
    vector<vector<Point3f>> objectpoints;  // 3D coordinates of all chessboard corners
    // Fill in the chessboard coordinates (objectpoints)
    for (int count = 0; count < imgnum; count++)
    {
        vector<Point3f> temppoint;
        for (int i = 0; i < pattern_size.height; i++)
        {
            for (int j = 0; j < pattern_size.width; j++)
            {
                Point3f realPoint;
                realPoint.x = j * squaresize;  // column index j drives x, row index i drives y
                realPoint.y = i * squaresize;
                realPoint.z = 0;  // the chessboard plane is Z = 0
                temppoint.push_back(realPoint);  // all coordinates of one image
            }
        }
        objectpoints.push_back(temppoint);  // coordinates of all boards
    }
    // Camera calibration
    Mat cameraMatrix = Mat(3, 3, CV_32FC1, Scalar::all(0));
    Mat distCoeffs = Mat(1, 5, CV_32FC1, Scalar::all(0));
    vector<Mat> rvecs;  // rotation vectors (3x1) of all images relative to the camera frame
    vector<Mat> tvecs;  // translation vectors of all images relative to the camera frame
    cv::calibrateCamera(objectpoints, corners_set, imagesize, cameraMatrix, distCoeffs, rvecs, tvecs, 0);  // defaults for the rest
    //vector<Mat> rvecs_trans(rvecs.size());  // converted rotation matrices (3x3)
    /*for (int i = 0; i < rvecs.size(); i++)
    {
        Rodrigues(rvecs[i], rvecs_trans[i]);
    }*/
    // Evaluate the calibration result
    double err = 0.0;
    double total_err = 0.0;
    vector<Point3f> objecttemp;
    vector<Point2f> imagetemp;
    vector<Point2f> projectionpoints;
    for (int i = 0; i < imgnum; i++)
    {
        objecttemp = objectpoints[i];  // chessboard points of one image
        projectPoints(objecttemp, rvecs[i], tvecs[i], cameraMatrix, distCoeffs, projectionpoints);  // reproject the points
        imagetemp = corners_set[i];  // detected corners of the same image
        /* Compute the reprojection error */
        Mat proobj = Mat(1, projectionpoints.size(), CV_32FC2);  // note: 32-bit 2-channel, sized to the reprojected points
        Mat img = Mat(1, imagetemp.size(), CV_32FC2);
        for (int j = 0; j < imagetemp.size(); j++)  // fill in the data
        {
            proobj.at<Vec2f>(0, j) = Vec2f(projectionpoints[j].x, projectionpoints[j].y);  // the reprojected image points
            img.at<Vec2f>(0, j) = Vec2f(imagetemp[j].x, imagetemp[j].y);
        }
        err = norm(proobj, img, NORM_L2);  // matrix norm
        total_err += (err /= (pattern_size.width * pattern_size.height));
    }
    total_err = (total_err / imgnum);
    std::cout << total_err << endl;
    // Return the computed parameters
    CameraMatrix = cameraMatrix;
    DistCoeffs = distCoeffs;
    Rvecs = rvecs;
    Tvecs = tvecs;
    totalerror = total_err;
    imgNum = imgnum;
    _objectPoints = objectpoints;  // chessboard 3D points, for the next step
    _imgPoints = corners_set;      // chessboard image points, for the next step
    _image_size = imagesize;       // image size
    return true;
}

/* Evaluate the stereo rectification quality
   vector<vector<Point2f>>& _leftimgPoints   left camera view corner sets (undistorted inside the function)
   int& leftimgNum                           number of left camera views
   vector<vector<Point2f>>& _rightimgPoints  right camera view corner sets (undistorted inside the function)
   int& rightimgNum                          number of right camera views
   Mat& _F                                   fundamental matrix
   Size _pattern_size                        chessboard corner count per row/column
   returns                                   the average distance from each point to its epipolar line
*/
double Stereo_Calib_Quanlity_Check(vector<vector<Point2f>>& _leftimgPoints, int& leftimgNum,
                                   vector<vector<Point2f>>& _rightimgPoints, int& rightimgNum,
                                   const Mat& leftCameraMatrix, const Mat& leftDistCoeffs, const Mat& _R1, const Mat& _P1,
                                   const Mat& rightCameraMatrix, const Mat& rightDistCoeffs, const Mat& _R2, const Mat& _P2,
                                   Mat& _F, Size _pattern_size)
{
    double err = 0.0, totalerr = 0.0;
    int board_n = (_pattern_size.width * _pattern_size.height);  // total corner count per image
    float a, b, c;
    float x, y;
    vector<Point2f> rimg, limg;
    Mat leftLine = Mat(1, board_n, CV_32FC3, Scalar::all(0));   // epipolar lines in each left image
    Mat rightLine = Mat(1, board_n, CV_32FC3, Scalar::all(0));  // epipolar lines in each right image
    /* An alternative way to store the epipolar lines:
       vector<Point3f> leftLine;   // (x, y, z components hold a, b, c)
       vector<Point3f> rightLine; */
    vector<vector<Point2f>> leftPoints;  // undistorted image point sets
    vector<vector<Point2f>> rightPoints;
    vector<Point2f> tempPoint;
    for (int i = 0; i < _leftimgPoints.size(); i++)
    {
        undistortPoints(_leftimgPoints[i], tempPoint, leftCameraMatrix, leftDistCoeffs, Mat(), leftCameraMatrix);  // undistort
        leftPoints.push_back(tempPoint);  // save the points
    }
    for (int i = 0; i < _rightimgPoints.size(); i++)
    {
        undistortPoints(_rightimgPoints[i], tempPoint, rightCameraMatrix, rightDistCoeffs, Mat(), rightCameraMatrix);
        rightPoints.push_back(tempPoint);
    }
    // Distance from left image points to their epipolar lines
    for (int i = 0; i < rightimgNum; i++)
    {
        rimg = rightPoints[i];  // note: the right image points generate the epipolar lines in the left image
        limg = leftPoints[i];
        computeCorrespondEpilines(rimg, 2, _F, leftLine);  // epipolar lines in the left view
        float* l_temp = (float*)leftLine.ptr(0);  // row pointer
        for (int j = 0; j < board_n; j++)  // 54 epipolar lines in total here
        {
            a = (*l_temp);  // line coefficients
            b = (*(l_temp + 1));
            c = (*(l_temp + 2));
            l_temp += 3;  // advance the pointer
            x = limg[j].x;  // undistorted left image point
            y = limg[j].y;
            err = fabs(a * x + b * y + c) / sqrt(a * a + b * b);  // point-to-line distance formula
            cout << "left: " << err << endl;
            totalerr += err;
        }
    }
    // Distance from right image points to their epipolar lines
    for (int i = 0; i < leftimgNum; i++)
    {
        rimg = rightPoints[i];  // note: the left image points generate the epipolar lines in the right image
        limg = leftPoints[i];
        computeCorrespondEpilines(limg, 1, _F, rightLine);  // epipolar lines in the right view
        float* r_temp = (float*)rightLine.ptr(0);  // point to the matrix head
        for (int j = 0; j < board_n; j++)  // distance of each point to its epipolar line
        {
            a = *r_temp;
            b = *(r_temp + 1);
            c = *(r_temp + 2);
            r_temp += 3;
            x = rimg[j].x;
            y = rimg[j].y;
            err = fabs(a * x + b * y + c) / sqrt(a * a + b * b);  // point-to-line distance formula
            cout << "right: " << err << endl;
            totalerr += err;
        }
    }
    totalerr /= (2 * leftimgNum * board_n);  // average point-to-epiline distance
    return totalerr;
}
Results
1. Detected corners:
2. Uncorrected images:
3. Rectified, row-aligned images:
4. Disparity map (CV_8UC1):
5. 3D coordinates measured at pixel (350, 170) of each left image; a Z value of 10000 marks a failed measurement, and only left01.jpg and left05.jpg yielded a valid depth.
6. Stereo rectification offset error: 0.344624 pixels (row-aligned).
7. Saved file:
%YAML:1.0
---
leftCamMatrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 5.3172114589370665e+02, 0., 3.4059397869967120e+02, 0.,
       5.3165275202126293e+02, 2.3218163956721483e+02, 0., 0., 1. ]
leftDistCoeffs: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ -2.6496241080749372e-01, -6.1791447271371112e-02,
       7.4441869557696826e-04, -6.7734037466286705e-04,
       3.5892739140212027e-01 ]
leftTotalerror: 5.8422314361454647e-02
rightCamMatrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 5.3707652474832639e+02, 0., 3.2446264774840478e+02, 0.,
       5.3688888158513885e+02, 2.4813174334653809e+02, 0., 0., 1. ]
rightDistCoeffs: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ -2.9752913190311114e-01, 1.5768191747141277e-01,
       -6.4179876358640912e-04, -1.2016815825458044e-04,
       -7.8165194533429322e-02 ]
rightTotalerror: 5.9504322884020584e-02
Camera Rotation Matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 9.9997369979298278e-01, 3.7417641627110268e-03,
       6.2128031744264536e-03, -3.6870887231558363e-03,
       9.9995458103248225e-01, -8.7887000688689539e-03,
       -6.2454062382753806e-03, 8.7655617677141213e-03,
       9.9994207823644232e-01 ]
Camera Translation Matrix: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [ -1.6648015327782122e+02, 1.9779789412307625e+00,
       2.7109730298627279e+00 ]
Essential Matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ -2.3576839315534621e-03, -2.6935118036821470e+00,
       2.0016903020564629e+00, 1.6711655428804784e+00,
       1.4694358883844729e+00, 1.6648735319559174e+02,
       -1.3640998241952227e+00, -1.6647999305186403e+02,
       1.4508553407330742e+00 ]
Fundamental Matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 2.7112467616625021e-09, 3.0978344954601152e-06,
       -1.9441354262530049e-03, -1.9224484139607071e-06,
       -1.6906033714389786e-06, -1.0078848076102437e-01,
       1.3186327782052762e-03, 1.0224855455965888e-01, 1. ]
left rectification matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 9.9991607855733788e-01, -8.2799227516537500e-03,
       -9.9638708212500676e-03, 8.2361509727174598e-03,
       9.9995628680007076e-01, -4.4260937822668822e-03,
       1.0000082983181544e-02, 4.3436583937348523e-03,
       9.9994056371970830e-01 ]
right rectification matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 9.9979689547098194e-01, -1.1878756511288764e-02,
       -1.6280753985364708e-02, 1.1950036508966637e-02,
       9.9991940200931673e-01, 4.2879030731850926e-03,
       1.6228506832756312e-02, -4.4815877851695426e-03,
       9.9985826542410650e-01 ]
left Projection matrix: !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [ 5.3427081680320089e+02, 0., 3.4392805862426758e+02, 0., 0.,
       5.3427081680320089e+02, 2.4199363708496094e+02, 0., 0., 0., 1.,
       0. ]
right Projection matrix: !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [ 5.3427081680320089e+02, 0., 3.4392805862426758e+02,
       -8.8963556374480846e+04, 0., 5.3427081680320089e+02,
       2.4199363708496094e+02, 0., 0., 0., 1., 0. ]
Q : !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 1., 0., 0., -3.4392805862426758e+02, 0., 1., 0.,
       -2.4199363708496094e+02, 0., 0., 0., 5.3427081680320089e+02, 0.,
       0., 6.0055020120177681e-03, 0. ]
stereo calib error: 3.4462384683656455e-01
References
1. https://blog.csdn.net/xuxunjie147/article/details/79219774
2. https://blog.csdn.net/lijiayu2015/article/details/53079661
3. https://www.cnblogs.com/polly333/p/5130375.html
4. Learning OpenCV 3