1. Introduction

Global BA optimizes all keyframes and map points. This global BA contains only visual residuals and is used in the pure-visual case, or when the IMU has not yet been initialized. It is called in two places:

  • Monocular initialization: in CreateInitialMapMonocular, one global optimization is run after the initial poses and map points have been computed.
  • Loop correction: one global optimization is run after the essential-graph optimization.

Vertex:

  • g2o::VertexSE3Expmap(), i.e. the keyframe pose Tcw
  • g2o::VertexSBAPointXYZ(), the map point's mWorldPos (its 3D coordinates)

Edge:

  • g2o::EdgeSE3ProjectXYZ(), the reprojection error of each observation

Vertex: Tcw of the keyframes to be optimized
Vertex: mWorldPos of the map points to be optimized
measurement: the 2D position (u, v) of the map point in the keyframe
InfoMatrix: invSigma2 (depends on the pyramid level of the keypoint; see the sketch below)
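
To make the items above concrete, here is a small self-contained sketch (plain Eigen, independent of g2o and ORB-SLAM3; the intrinsics and all numbers are made up for illustration) of what one reprojection edge measures: the difference between the detected pixel (u, v) and the projection of the world point through Tcw, and the chi2 value obtained after weighting by invSigma2.

#include <Eigen/Dense>
#include <iostream>

// Reprojection residual of one observation: e = (u,v) - pi(R_cw * X_w + t_cw).
// This is what a mono reprojection edge evaluates for a pinhole camera.
Eigen::Vector2d ReprojError(const Eigen::Matrix3d &Rcw, const Eigen::Vector3d &tcw,
                            const Eigen::Vector3d &Xw, const Eigen::Vector2d &obs,
                            double fx, double fy, double cx, double cy)
{
    const Eigen::Vector3d Xc = Rcw * Xw + tcw;   // point in the camera frame
    const double u = fx * Xc.x() / Xc.z() + cx;  // pinhole projection
    const double v = fy * Xc.y() / Xc.z() + cy;
    return obs - Eigen::Vector2d(u, v);
}

int main()
{
    // Illustrative values only
    Eigen::Matrix3d Rcw = Eigen::Matrix3d::Identity();
    Eigen::Vector3d tcw(0.0, 0.0, 0.0);
    Eigen::Vector3d Xw(0.1, -0.05, 2.0);
    Eigen::Vector2d obs(343.0, 229.0);
    double fx = 450, fy = 450, cx = 320, cy = 240;

    Eigen::Vector2d e = ReprojError(Rcw, tcw, Xw, obs, fx, fy, cx, cy);

    // The information matrix is invSigma2 * I2, so the cost this edge contributes is
    // chi2 = e^T * (invSigma2 * I2) * e = invSigma2 * |e|^2.
    double invSigma2 = 1.0;  // a level-0 keypoint
    double chi2 = invSigma2 * e.squaredNorm();
    std::cout << "residual: " << e.transpose() << ", chi2: " << chi2 << std::endl;
    return 0;
}

A chi2 below 5.991 (the 95% quantile of a 2-DOF chi-square distribution) is treated as an inlier, which is exactly the threshold used in the code below.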

2. Code analysis

Global BA: bundle adjustment over all MapPoints and keyframes in pMap.

This global BA is used in two places in the program:

  • Monocular initialization: the CreateInitialMapMonocular function
  • Loop closing: the RunGlobalBundleAdjustment function

void Optimizer::GlobalBundleAdjustemnt(Map *pMap, int nIterations, bool *pbStopFlag, const unsigned long nLoopKF, const bool bRobust)
{
    // Get all keyframes in the map
    vector<KeyFrame *> vpKFs = pMap->GetAllKeyFrames();
    // Get all map points in the map
    vector<MapPoint *> vpMP = pMap->GetAllMapPoints();
    // Delegate to the generic BundleAdjustment routine (the GBA)
    BundleAdjustment(vpKFs, vpMP, nIterations, pbStopFlag, nLoopKF, bRobust);
}
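
For orientation, the two call sites listed above invoke this wrapper roughly as follows (a hedged illustration, not a verbatim quote of Tracking::CreateInitialMapMonocular or LoopClosing::RunGlobalBundleAdjustment; the map pointer, iteration counts and flag names may differ in the actual code base):

// Monocular initialization (illustrative): optimize the brand-new two-keyframe map.
// 20 iterations, no stop flag, nLoopKF = 0 (initialization case), robust kernel enabled.
Optimizer::GlobalBundleAdjustemnt(mpAtlas->GetCurrentMap(), 20, NULL, 0, true);

// Loop closing (illustrative): full BA in a background thread, abortable through mbStopGBA,
// tagged with the id of the loop keyframe, robust kernel disabled.
Optimizer::GlobalBundleAdjustemnt(pActiveMap, 10, &mbStopGBA, nLoopKF, false);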

The bundle adjustment optimization process:

void Optimizer::BundleAdjustment(const vector<KeyFrame *> &vpKFs, const vector<MapPoint *> &vpMP, int nIterations, bool *pbStopFlag, const unsigned long nLoopKF, const bool bRobust)
{
    // Map points that end up not taking part in the optimization; used below
    vector<bool> vbNotIncludedMP;
    vbNotIncludedMP.resize(vpMP.size());

    Map *pMap = vpKFs[0]->GetMap();

    // Step 1: set up the g2o optimizer
    g2o::SparseOptimizer optimizer;
    g2o::BlockSolver_6_3::LinearSolverType *linearSolver;
    linearSolver = new g2o::LinearSolverEigen<g2o::BlockSolver_6_3::PoseMatrixType>();
    g2o::BlockSolver_6_3 *solver_ptr = new g2o::BlockSolver_6_3(linearSolver);
    // Optimize with the Levenberg-Marquardt algorithm
    g2o::OptimizationAlgorithmLevenberg *solver = new g2o::OptimizationAlgorithmLevenberg(solver_ptr);
    optimizer.setAlgorithm(solver);
    optimizer.setVerbose(false);

    // If an external stop has been requested at this point, terminate.
    // Note: once this statement has run, a later external stop request can no longer abort the BA.
    if (pbStopFlag)
        optimizer.setForceStopFlag(pbStopFlag);

    // Largest keyframe id among the vertices added to the optimizer
    long unsigned int maxKFid = 0;

    const int nExpectedSize = (vpKFs.size()) * vpMP.size();

    vector<ORB_SLAM3::EdgeSE3ProjectXYZ *> vpEdgesMono;
    vpEdgesMono.reserve(nExpectedSize);
    vector<ORB_SLAM3::EdgeSE3ProjectXYZToBody *> vpEdgesBody;
    vpEdgesBody.reserve(nExpectedSize);
    vector<KeyFrame *> vpEdgeKFMono;
    vpEdgeKFMono.reserve(nExpectedSize);
    vector<KeyFrame *> vpEdgeKFBody;
    vpEdgeKFBody.reserve(nExpectedSize);
    vector<MapPoint *> vpMapPointEdgeMono;
    vpMapPointEdgeMono.reserve(nExpectedSize);
    vector<MapPoint *> vpMapPointEdgeBody;
    vpMapPointEdgeBody.reserve(nExpectedSize);
    vector<g2o::EdgeStereoSE3ProjectXYZ *> vpEdgesStereo;
    vpEdgesStereo.reserve(nExpectedSize);
    vector<KeyFrame *> vpEdgeKFStereo;
    vpEdgeKFStereo.reserve(nExpectedSize);
    vector<MapPoint *> vpMapPointEdgeStereo;
    vpMapPointEdgeStereo.reserve(nExpectedSize);

    // Step 2: add vertices to the optimizer
    // Set KeyFrame vertices
    // Step 2.1: add the keyframe pose vertices
    // For every keyframe in the current map
    for (size_t i = 0; i < vpKFs.size(); i++)
    {
        KeyFrame *pKF = vpKFs[i];
        // Skip bad keyframes
        if (pKF->isBad())
            continue;
        // Build an SE3 vertex for every usable keyframe: its current pose
        g2o::VertexSE3Expmap *vSE3 = new g2o::VertexSE3Expmap();
        vSE3->setEstimate(Converter::toSE3Quat(pKF->GetPose()));
        vSE3->setId(pKF->mnId);
        // Only the initial keyframe is kept fixed (it is the reference)
        vSE3->setFixed(pKF->mnId == pMap->GetInitKFid());
        // Add the vertex and update maxKFid
        optimizer.addVertex(vSE3);
        if (pKF->mnId > maxKFid)
            maxKFid = pKF->mnId;
    }

    // Chi-square thresholds at 95% confidence
    const float thHuber2D = sqrt(5.99);  // 2 degrees of freedom
    const float thHuber3D = sqrt(7.815); // 3 degrees of freedom

    // Set MapPoint vertices
    // Step 2.2: add the MapPoint vertices
    // Iterate over all map points
    for (size_t i = 0; i < vpMP.size(); i++)
    {
        MapPoint *pMP = vpMP[i];
        // Skip bad map points
        if (pMP->isBad())
            continue;

        // Create the vertex
        g2o::VertexSBAPointXYZ *vPoint = new g2o::VertexSBAPointXYZ();
        // The map-point position is stored as cv::Mat, so it has to be converted to Eigen::Vector3d
        vPoint->setEstimate(Converter::toVector3d(pMP->GetWorldPos()));
        // This is where the maxKFid recorded above is used
        const int id = pMP->mnId + maxKFid + 1;
        vPoint->setId(id);
        // In g2o BA all map points must be Schur-marginalized, otherwise the solve fails.
        // The reason is that g2o::LinearSolver<BalBlockSolver::PoseMatrixType> is used as the
        // linear solver, whose template parameter is the dimension of the camera-pose block,
        // so the linear system obtained after Schur elimination may only contain camera-pose
        // variables. Ceres does not have this restriction.
        vPoint->setMarginalized(true);
        optimizer.addVertex(vPoint);

        // The edges encode the observations between this point and the keyframes
        const map<KeyFrame *, tuple<int, int>> observations = pMP->GetObservations();

        // Edge counter
        int nEdges = 0;

        // SET EDGES
        // Step 3: add the projection edges (done while iterating the map points and adding their vertices)
        // Iterate over all keyframes observing the current map point
        for (map<KeyFrame *, tuple<int, int>>::const_iterator mit = observations.begin(); mit != observations.end(); mit++)
        {
            KeyFrame *pKF = mit->first;
            // Filter out invalid keyframes
            if (pKF->isBad() || pKF->mnId > maxKFid)
                continue;
            if (optimizer.vertex(id) == NULL || optimizer.vertex(pKF->mnId) == NULL)
                continue;
            nEdges++;

            const int leftIndex = get<0>(mit->second);

            if (leftIndex != -1 && pKF->mvuRight[get<0>(mit->second)] < 0)
            {
                // Monocular observation
                const cv::KeyPoint &kpUn = pKF->mvKeysUn[leftIndex];
                Eigen::Matrix<double, 2, 1> obs;
                obs << kpUn.pt.x, kpUn.pt.y;

                // Create the edge
                ORB_SLAM3::EdgeSE3ProjectXYZ *e = new ORB_SLAM3::EdgeSE3ProjectXYZ();
                // Fill it in to build the constraint
                e->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(id)));
                e->setVertex(1, dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(pKF->mnId)));
                e->setMeasurement(obs);
                // The information matrix (inverse covariance) expresses how much this observation is
                // trusted in each dimension (x, y); for a single keypoint both coordinates are trusted
                // equally. The trust depends on the pyramid level of the keypoint: the higher the
                // level, the less reliable the observation. sigma^(-2) is used so the information
                // matrix never contains negative entries.
                const float &invSigma2 = pKF->mvInvLevelSigma2[kpUn.octave];
                e->setInformation(Eigen::Matrix2d::Identity() * invSigma2);

                // If robust kernels are requested
                if (bRobust)
                {
                    g2o::RobustKernelHuber *rk = new g2o::RobustKernelHuber;
                    e->setRobustKernel(rk);
                    // The reprojection error has 2 degrees of freedom, so the 2-DOF chi-square
                    // threshold is used: once the reprojection error grows beyond roughly one pixel,
                    // the point is considered unreliable. The kernel prevents the squared error term
                    // from growing without bound.
                    rk->setDelta(thHuber2D);
                }
                // Set the camera model (intrinsics)
                e->pCamera = pKF->mpCamera;

                optimizer.addEdge(e);

                vpEdgesMono.push_back(e);
                vpEdgeKFMono.push_back(pKF);
                vpMapPointEdgeMono.push_back(pMP);
            }
            else if (leftIndex != -1 && pKF->mvuRight[leftIndex] >= 0) // Stereo observation
            {
                // Stereo or RGB-D observation.
                // A stereo observation has three components: the projected x coordinate, the projected
                // y coordinate, and the x coordinate of the projection in the right image (the y
                // coordinates are assumed to be aligned already).
                const cv::KeyPoint &kpUn = pKF->mvKeysUn[leftIndex];

                Eigen::Matrix<double, 3, 1> obs;
                const float kp_ur = pKF->mvuRight[get<0>(mit->second)];
                // Why only one value from the right image? Because the cameras are assumed to be
                // stereo-rectified, so the same point has the same y value in both images.
                obs << kpUn.pt.x, kpUn.pt.y, kp_ur;

                // g2o provides a dedicated error edge for stereo input
                g2o::EdgeStereoSE3ProjectXYZ *e = new g2o::EdgeStereoSE3ProjectXYZ();
                // Fill it in
                e->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(id)));
                e->setVertex(1, dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(pKF->mnId)));
                e->setMeasurement(obs);
                // The information matrix is built the same way, from the pyramid level of the
                // left-image keypoint
                const float &invSigma2 = pKF->mvInvLevelSigma2[kpUn.octave];
                Eigen::Matrix3d Info = Eigen::Matrix3d::Identity() * invSigma2;
                e->setInformation(Info);

                // If robust kernels are requested
                if (bRobust)
                {
                    g2o::RobustKernelHuber *rk = new g2o::RobustKernelHuber;
                    e->setRobustKernel(rk);
                    // The observation now has three components, so the reprojection error is a sum of
                    // three squared terms; the corresponding chi-square distribution has 3 degrees of
                    // freedom, hence the 3-DOF threshold.
                    rk->setDelta(thHuber3D);
                }

                // Fill in the basic camera parameters
                e->fx = pKF->fx;
                e->fy = pKF->fy;
                e->cx = pKF->cx;
                e->cy = pKF->cy;
                e->bf = pKF->mbf;

                optimizer.addEdge(e);

                vpEdgesStereo.push_back(e);
                vpEdgeKFStereo.push_back(pKF);
                vpMapPointEdgeStereo.push_back(pMP);
            }

            if (pKF->mpCamera2)
            {
                // Observation in the second (right) camera of the rig
                int rightIndex = get<1>(mit->second);

                if (rightIndex != -1 && rightIndex < pKF->mvKeysRight.size())
                {
                    rightIndex -= pKF->NLeft;

                    Eigen::Matrix<double, 2, 1> obs;
                    cv::KeyPoint kp = pKF->mvKeysRight[rightIndex];
                    obs << kp.pt.x, kp.pt.y;

                    ORB_SLAM3::EdgeSE3ProjectXYZToBody *e = new ORB_SLAM3::EdgeSE3ProjectXYZToBody();

                    e->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(id)));
                    e->setVertex(1, dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(pKF->mnId)));
                    e->setMeasurement(obs);
                    const float &invSigma2 = pKF->mvInvLevelSigma2[kp.octave];
                    e->setInformation(Eigen::Matrix2d::Identity() * invSigma2);

                    g2o::RobustKernelHuber *rk = new g2o::RobustKernelHuber;
                    e->setRobustKernel(rk);
                    rk->setDelta(thHuber2D);

                    e->mTrl = Converter::toSE3Quat(pKF->mTrl);
                    e->pCamera = pKF->mpCamera2;

                    optimizer.addEdge(e);
                    vpEdgesBody.push_back(e);
                    vpEdgeKFBody.push_back(pKF);
                    vpMapPointEdgeBody.push_back(pMP);
                }
            }
        }

        // If, for some reason, no keyframe actually observes this map point, remove its vertex
        // again; the point then takes no part in the optimization
        if (nEdges == 0)
        {
            optimizer.removeVertex(vPoint);
            vbNotIncludedMP[i] = true;
        }
        else
        {
            vbNotIncludedMP[i] = false;
        }
    }

    // Optimize!
    // Step 4: run the optimization
    optimizer.setVerbose(false);
    optimizer.initializeOptimization();
    optimizer.optimize(nIterations);
    Verbose::PrintMess("BA: End of the optimization", Verbose::VERBOSITY_NORMAL);

    // Recover optimized data
    // Step 5: retrieve the results
    // Step 5.1 Keyframes
    // Iterate over all keyframes
    for (size_t i = 0; i < vpKFs.size(); i++)
    {
        KeyFrame *pKF = vpKFs[i];
        if (pKF->isBad())
            continue;
        // Fetch the optimization result
        g2o::VertexSE3Expmap *vSE3 = static_cast<g2o::VertexSE3Expmap *>(optimizer.vertex(pKF->mnId));

        g2o::SE3Quat SE3quat = vSE3->estimate();
        if (nLoopKF == pMap->GetOriginKF()->mnId)
        {
            // In principle "the current loop keyframe is keyframe 0" should not happen; if it does,
            // this global BA was called while creating the initial map. The map then contains only
            // two keyframes, and the optimized pose can be written directly into the keyframe.
            // (This is the visual-initialization case.)
            pKF->SetPose(Converter::toCvMat(SE3quat));
        }
        else
        {
            // Normal case: store the optimized pose in the dedicated member mTcwGBA for later use
            pKF->mTcwGBA.create(4, 4, CV_32F);
            Converter::toCvMat(SE3quat).copyTo(pKF->mTcwGBA);
            pKF->mnBAGlobalForKF = nLoopKF; // mark that this keyframe took part in this global BA

            // Everything below is debugging: it measures the translation before vs. after BA
            cv::Mat mTwc = pKF->GetPoseInverse();
            cv::Mat mTcGBA_c = pKF->mTcwGBA * mTwc;
            cv::Vec3d vector_dist = mTcGBA_c.rowRange(0, 3).col(3);
            double dist = cv::norm(vector_dist);
            if (dist > 1)
            {
                int numMonoBadPoints = 0, numMonoOptPoints = 0;
                int numStereoBadPoints = 0, numStereoOptPoints = 0;
                vector<MapPoint *> vpMonoMPsOpt, vpStereoMPsOpt;

                for (size_t i = 0, iend = vpEdgesMono.size(); i < iend; i++)
                {
                    ORB_SLAM3::EdgeSE3ProjectXYZ *e = vpEdgesMono[i];
                    MapPoint *pMP = vpMapPointEdgeMono[i];
                    KeyFrame *pKFedge = vpEdgeKFMono[i];
                    if (pKF != pKFedge)
                    {
                        continue;
                    }
                    if (pMP->isBad())
                        continue;
                    if (e->chi2() > 5.991 || !e->isDepthPositive())
                    {
                        numMonoBadPoints++;
                    }
                    else
                    {
                        numMonoOptPoints++;
                        vpMonoMPsOpt.push_back(pMP);
                    }
                }

                for (size_t i = 0, iend = vpEdgesStereo.size(); i < iend; i++)
                {
                    g2o::EdgeStereoSE3ProjectXYZ *e = vpEdgesStereo[i];
                    MapPoint *pMP = vpMapPointEdgeStereo[i];
                    // Note: the source indexes vpEdgeKFMono here; vpEdgeKFStereo was probably intended
                    KeyFrame *pKFedge = vpEdgeKFMono[i];
                    if (pKF != pKFedge)
                    {
                        continue;
                    }
                    if (pMP->isBad())
                        continue;
                    if (e->chi2() > 7.815 || !e->isDepthPositive())
                    {
                        numStereoBadPoints++;
                    }
                    else
                    {
                        numStereoOptPoints++;
                        vpStereoMPsOpt.push_back(pMP);
                    }
                }
            }
        }
    }

    // Step 5.2: iterate over all map points, skipping those that did not take part in the optimization
    for (size_t i = 0; i < vpMP.size(); i++)
    {
        if (vbNotIncludedMP[i])
            continue;

        MapPoint *pMP = vpMP[i];

        if (pMP->isBad())
            continue;
        // Fetch the optimized map-point position
        g2o::VertexSBAPointXYZ *vPoint = static_cast<g2o::VertexSBAPointXYZ *>(optimizer.vertex(pMP->mnId + maxKFid + 1));

        if (nLoopKF == pMap->GetOriginKF()->mnId)
        {
            // If this GBA was called while creating the initial map, the map-point position can also
            // be written back directly
            pMP->SetWorldPos(Converter::toCvMat(vPoint->estimate()));
            pMP->UpdateNormalAndDepth();
        }
        else
        {
            // Otherwise (a normal loop-closing call), store it temporarily
            pMP->mPosGBA.create(3, 1, CV_32F);
            Converter::toCvMat(vPoint->estimate()).copyTo(pMP->mPosGBA);
            pMP->mnBAGlobalForKF = nLoopKF;
        }
    }
}
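
Two numerical details of the function above are worth isolating: the information weight invSigma2 drops with the pyramid level of the keypoint, and an edge counts as an outlier once its chi2 exceeds 5.991 (2 DOF) or 7.815 (3 DOF), i.e. the squares of thHuber2D and thHuber3D. The sketch below reproduces both numbers; the scale factor 1.2 and the 8 levels are typical ORB configuration values, not something fixed by BundleAdjustment itself.

#include <cmath>
#include <cstdio>

int main()
{
    // Assumed ORB pyramid settings (typical config values, not dictated by BundleAdjustment)
    const double scaleFactor = 1.2;
    const int nLevels = 8;

    // mvInvLevelSigma2[octave] = 1 / (scaleFactor^octave)^2: keypoints detected on higher
    // pyramid levels are less precise, so their reprojection edges receive less weight.
    for (int octave = 0; octave < nLevels; ++octave)
    {
        double sigma2 = std::pow(scaleFactor, 2.0 * octave);
        double invSigma2 = 1.0 / sigma2;
        printf("octave %d: invSigma2 = %.3f\n", octave, invSigma2);
    }

    // Outlier thresholds: chi2 = e^T * Info * e is compared against the 95% quantile of a
    // chi-square distribution with 2 DOF (mono) or 3 DOF (stereo). The Huber deltas are the
    // square roots of the same constants, so the kernel starts to flatten exactly at the
    // inlier/outlier boundary.
    const double chi2Mono = 5.991, chi2Stereo = 7.815;
    printf("thHuber2D = %.3f, thHuber3D = %.3f\n", std::sqrt(chi2Mono), std::sqrt(chi2Stereo));
    return 0;
}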

FullInertialBA implements the optimization used during IMU initialization: LocalMapping::InitializeIMU (and, once the IMU has been initialized, LoopClosing::RunGlobalBundleAdjustment) uses it to run BA over the entire map.

In other words, it is GlobalBundleAdjustemnt with the IMU preintegration terms added.
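
Before reading the function it helps to write down the vertex-id scheme it uses; the helpers below simply mirror the setId calls in the code that follows (they are an illustration of the numbering, not part of ORB-SLAM3):

#include <cstdio>

// Vertex-id layout used by FullInertialBA (mirrors the setId calls in the function below).
// maxKFid is the largest keyframe id in the map; mnId is a keyframe or map-point id.
unsigned long PoseId(unsigned long mnId)                            { return mnId; }
unsigned long VelId(unsigned long maxKFid, unsigned long mnId)      { return maxKFid + 3 * mnId + 1; }
unsigned long GyroId(unsigned long maxKFid, unsigned long mnId)     { return maxKFid + 3 * mnId + 2; } // only if !bInit
unsigned long AccId(unsigned long maxKFid, unsigned long mnId)      { return maxKFid + 3 * mnId + 3; } // only if !bInit
unsigned long SharedGyroId(unsigned long maxKFid)                   { return 4 * maxKFid + 2; }        // only if bInit
unsigned long SharedAccId(unsigned long maxKFid)                    { return 4 * maxKFid + 3; }        // only if bInit
unsigned long MapPointId(unsigned long maxKFid, unsigned long mnId) { return mnId + 5 * maxKFid + 1; } // iniMPid = 5 * maxKFid

int main()
{
    const unsigned long maxKFid = 100, kf = 7, mp = 42;
    printf("pose %lu, vel %lu, gyro %lu, acc %lu, point %lu\n",
           PoseId(kf), VelId(maxKFid, kf), GyroId(maxKFid, kf), AccId(maxKFid, kf),
           MapPointId(maxKFid, mp));
    return 0;
}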

void Optimizer::FullInertialBA(Map *pMap, int its, const bool bFixLocal, const long unsigned int nLoopId, bool *pbStopFlag, bool bInit, float priorG, float priorA, Eigen::VectorXd *vSingVal, bool *bHess)
{
    // Get all map points and keyframes in the map, and the largest keyframe id
    long unsigned int maxKFid = pMap->GetMaxKFid();
    const vector<KeyFrame *> vpKFs = pMap->GetAllKeyFrames();
    const vector<MapPoint *> vpMPs = pMap->GetAllMapPoints();

    // Setup optimizer
    // 1. The usual optimizer setup
    g2o::SparseOptimizer optimizer;
    g2o::BlockSolverX::LinearSolverType *linearSolver;
    linearSolver = new g2o::LinearSolverEigen<g2o::BlockSolverX::PoseMatrixType>();
    g2o::BlockSolverX *solver_ptr = new g2o::BlockSolverX(linearSolver);
    g2o::OptimizationAlgorithmLevenberg *solver = new g2o::OptimizationAlgorithmLevenberg(solver_ptr);
    solver->setUserLambdaInit(1e-5);
    optimizer.setAlgorithm(solver);
    optimizer.setVerbose(false);

    if (pbStopFlag)
        optimizer.setForceStopFlag(pbStopFlag);

    int nNonFixed = 0;

    // 2. Add the keyframe and bias vertices
    // Set KeyFrame vertices
    KeyFrame *pIncKF; // the last keyframe in vpKFs whose id is within range
    for (size_t i = 0; i < vpKFs.size(); i++)
    {
        KeyFrame *pKFi = vpKFs[i];
        if (pKFi->mnId > maxKFid)
            continue;
        VertexPose *VP = new VertexPose(pKFi);
        VP->setId(pKFi->mnId);
        pIncKF = pKFi;
        bool bFixed = false;
        if (bFixLocal)
        {
            bFixed = (pKFi->mnBALocalForKF >= (maxKFid - 1)) || (pKFi->mnBAFixedForKF >= (maxKFid - 1));
            if (!bFixed)
                nNonFixed++;
            VP->setFixed(bFixed); // fixed; probably related to loop closing
        }
        optimizer.addVertex(VP);
        // For the keyframes created from IMU initialization onwards, add a velocity vertex
        if (pKFi->bImu)
        {
            VertexVelocity *VV = new VertexVelocity(pKFi);
            VV->setId(maxKFid + 3 * (pKFi->mnId) + 1);
            VV->setFixed(bFixed);
            optimizer.addVertex(VV);
            // When priorA == 0.f, bInit is false, so per-keyframe bias vertices are also added
            if (!bInit)
            {
                VertexGyroBias *VG = new VertexGyroBias(pKFi);
                VG->setId(maxKFid + 3 * (pKFi->mnId) + 2);
                VG->setFixed(bFixed);
                optimizer.addVertex(VG);
                VertexAccBias *VA = new VertexAccBias(pKFi);
                VA->setId(maxKFid + 3 * (pKFi->mnId) + 3);
                VA->setFixed(bFixed);
                optimizer.addVertex(VA);
            }
        }
    }
    // When priorA != 0.f, bInit is true and a single shared pair of bias vertices is added
    if (bInit)
    {
        VertexGyroBias *VG = new VertexGyroBias(pIncKF);
        VG->setId(4 * maxKFid + 2);
        VG->setFixed(false);
        optimizer.addVertex(VG);
        VertexAccBias *VA = new VertexAccBias(pIncKF);
        VA->setId(4 * maxKFid + 3);
        VA->setFixed(false);
        optimizer.addVertex(VA);
    }
    // bFixLocal is false here
    if (bFixLocal)
    {
        if (nNonFixed < 3)
            return;
    }

    // 3. Add the IMU edges
    // IMU links
    for (size_t i = 0; i < vpKFs.size(); i++)
    {
        KeyFrame *pKFi = vpKFs[i];
        // A previous keyframe is required (this check seems to overlap with the condition below)
        if (!pKFi->mPrevKF)
        {
            Verbose::PrintMess("NOT INERTIAL LINK TO PREVIOUS FRAME!", Verbose::VERBOSITY_NORMAL);
            continue;
        }

        if (pKFi->mPrevKF && pKFi->mnId <= maxKFid)
        {
            if (pKFi->isBad() || pKFi->mPrevKF->mnId > maxKFid)
                continue;
            // Both keyframes must be post-IMU-initialization keyframes
            if (pKFi->bImu && pKFi->mPrevKF->bImu)
            {
                // 3.1 Set this keyframe's new preintegration bias from the previous keyframe's bias
                pKFi->mpImuPreintegrated->SetNewBias(pKFi->mPrevKF->GetImuBias());
                // 3.2 Fetch the vertices
                g2o::HyperGraph::Vertex *VP1 = optimizer.vertex(pKFi->mPrevKF->mnId);
                g2o::HyperGraph::Vertex *VV1 = optimizer.vertex(maxKFid + 3 * (pKFi->mPrevKF->mnId) + 1);

                g2o::HyperGraph::Vertex *VG1;
                g2o::HyperGraph::Vertex *VA1;
                g2o::HyperGraph::Vertex *VG2;
                g2o::HyperGraph::Vertex *VA2;
                // Pick the bias vertices according to the configuration
                if (!bInit)
                {
                    VG1 = optimizer.vertex(maxKFid + 3 * (pKFi->mPrevKF->mnId) + 2);
                    VA1 = optimizer.vertex(maxKFid + 3 * (pKFi->mPrevKF->mnId) + 3);
                    VG2 = optimizer.vertex(maxKFid + 3 * (pKFi->mnId) + 2);
                    VA2 = optimizer.vertex(maxKFid + 3 * (pKFi->mnId) + 3);
                }
                else
                {
                    VG1 = optimizer.vertex(4 * maxKFid + 2);
                    VA1 = optimizer.vertex(4 * maxKFid + 3);
                }

                g2o::HyperGraph::Vertex *VP2 = optimizer.vertex(pKFi->mnId);
                g2o::HyperGraph::Vertex *VV2 = optimizer.vertex(maxKFid + 3 * (pKFi->mnId) + 1);

                if (!bInit)
                {
                    if (!VP1 || !VV1 || !VG1 || !VA1 || !VP2 || !VV2 || !VG2 || !VA2)
                    {
                        cout << "Error" << VP1 << ", " << VV1 << ", " << VG1 << ", " << VA1 << ", " << VP2 << ", " << VV2 << ", " << VG2 << ", " << VA2 << endl;
                        continue;
                    }
                }
                else
                {
                    if (!VP1 || !VV1 || !VG1 || !VA1 || !VP2 || !VV2)
                    {
                        cout << "Error" << VP1 << ", " << VV1 << ", " << VG1 << ", " << VA1 << ", " << VP2 << ", " << VV2 << endl;
                        continue;
                    }
                }
                // 3.3 Build the edges
                EdgeInertial *ei = new EdgeInertial(pKFi->mpImuPreintegrated);
                ei->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex *>(VP1));
                ei->setVertex(1, dynamic_cast<g2o::OptimizableGraph::Vertex *>(VV1));
                ei->setVertex(2, dynamic_cast<g2o::OptimizableGraph::Vertex *>(VG1));
                ei->setVertex(3, dynamic_cast<g2o::OptimizableGraph::Vertex *>(VA1));
                ei->setVertex(4, dynamic_cast<g2o::OptimizableGraph::Vertex *>(VP2));
                ei->setVertex(5, dynamic_cast<g2o::OptimizableGraph::Vertex *>(VV2));

                g2o::RobustKernelHuber *rki = new g2o::RobustKernelHuber;
                ei->setRobustKernel(rki);
                // Chi-square test with 9 degrees of freedom (0.05 significance)
                rki->setDelta(sqrt(16.92));

                optimizer.addEdge(ei);

                // When each keyframe carries its own bias, the bias random walk between the two
                // keyframes is optimized as well
                if (!bInit)
                {
                    EdgeGyroRW *egr = new EdgeGyroRW();
                    egr->setVertex(0, VG1);
                    egr->setVertex(1, VG2);
                    cv::Mat cvInfoG = pKFi->mpImuPreintegrated->C.rowRange(9, 12).colRange(9, 12).inv(cv::DECOMP_SVD);
                    Eigen::Matrix3d InfoG;
                    for (int r = 0; r < 3; r++)
                        for (int c = 0; c < 3; c++)
                            InfoG(r, c) = cvInfoG.at<float>(r, c);
                    egr->setInformation(InfoG);
                    egr->computeError();
                    optimizer.addEdge(egr);

                    EdgeAccRW *ear = new EdgeAccRW();
                    ear->setVertex(0, VA1);
                    ear->setVertex(1, VA2);
                    cv::Mat cvInfoA = pKFi->mpImuPreintegrated->C.rowRange(12, 15).colRange(12, 15).inv(cv::DECOMP_SVD);
                    Eigen::Matrix3d InfoA;
                    for (int r = 0; r < 3; r++)
                        for (int c = 0; c < 3; c++)
                            InfoA(r, c) = cvInfoA.at<float>(r, c);
                    ear->setInformation(InfoA);
                    ear->computeError();
                    optimizer.addEdge(ear);
                }
            }
            else
            {
                cout << pKFi->mnId << " or " << pKFi->mPrevKF->mnId << " no imu" << endl;
            }
        }
    }

    // Only the biases of pIncKF were added; prior edges pull them towards zero
    if (bInit)
    {
        g2o::HyperGraph::Vertex *VG = optimizer.vertex(4 * maxKFid + 2);
        g2o::HyperGraph::Vertex *VA = optimizer.vertex(4 * maxKFid + 3);

        // Add prior to common biases
        EdgePriorAcc *epa = new EdgePriorAcc(cv::Mat::zeros(3, 1, CV_32F));
        epa->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex *>(VA));
        double infoPriorA = priorA;
        epa->setInformation(infoPriorA * Eigen::Matrix3d::Identity());
        optimizer.addEdge(epa);

        EdgePriorGyro *epg = new EdgePriorGyro(cv::Mat::zeros(3, 1, CV_32F));
        epg->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex *>(VG));
        double infoPriorG = priorG;
        epg->setInformation(infoPriorG * Eigen::Matrix3d::Identity());
        optimizer.addEdge(epg);
    }

    const float thHuberMono = sqrt(5.991);
    const float thHuberStereo = sqrt(7.815);

    const unsigned long iniMPid = maxKFid * 5;

    vector<bool> vbNotIncludedMP(vpMPs.size(), false);

    // 4. Add the map-point vertices and edges; this part is easy to follow: the classic visual
    // reprojection error
    for (size_t i = 0; i < vpMPs.size(); i++)
    {
        MapPoint *pMP = vpMPs[i];
        g2o::VertexSBAPointXYZ *vPoint = new g2o::VertexSBAPointXYZ();
        vPoint->setEstimate(Converter::toVector3d(pMP->GetWorldPos()));
        unsigned long id = pMP->mnId + iniMPid + 1;
        vPoint->setId(id);
        vPoint->setMarginalized(true);
        optimizer.addVertex(vPoint);

        const map<KeyFrame *, tuple<int, int>> observations = pMP->GetObservations();

        bool bAllFixed = true;

        // Set edges
        // Iterate over all keyframes that observe this point
        for (map<KeyFrame *, tuple<int, int>>::const_iterator mit = observations.begin(), mend = observations.end(); mit != mend; mit++)
        {
            KeyFrame *pKFi = mit->first;

            if (pKFi->mnId > maxKFid)
                continue;

            if (!pKFi->isBad())
            {
                const int leftIndex = get<0>(mit->second);
                cv::KeyPoint kpUn;

                // Add the edge
                if (leftIndex != -1 && pKFi->mvuRight[get<0>(mit->second)] < 0) // Monocular observation
                {
                    kpUn = pKFi->mvKeysUn[leftIndex];
                    Eigen::Matrix<double, 2, 1> obs;
                    obs << kpUn.pt.x, kpUn.pt.y;

                    EdgeMono *e = new EdgeMono(0);

                    g2o::OptimizableGraph::Vertex *VP = dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(pKFi->mnId));
                    if (bAllFixed)
                        if (!VP->fixed())
                            bAllFixed = false;

                    e->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(id)));
                    e->setVertex(1, VP);
                    e->setMeasurement(obs);
                    const float invSigma2 = pKFi->mvInvLevelSigma2[kpUn.octave];

                    e->setInformation(Eigen::Matrix2d::Identity() * invSigma2);

                    g2o::RobustKernelHuber *rk = new g2o::RobustKernelHuber;
                    e->setRobustKernel(rk);
                    rk->setDelta(thHuberMono);

                    optimizer.addEdge(e);
                }
                else if (leftIndex != -1 && pKFi->mvuRight[leftIndex] >= 0) // Stereo observation
                {
                    kpUn = pKFi->mvKeysUn[leftIndex];
                    const float kp_ur = pKFi->mvuRight[leftIndex];
                    Eigen::Matrix<double, 3, 1> obs;
                    obs << kpUn.pt.x, kpUn.pt.y, kp_ur;

                    EdgeStereo *e = new EdgeStereo(0);

                    g2o::OptimizableGraph::Vertex *VP = dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(pKFi->mnId));
                    if (bAllFixed)
                        if (!VP->fixed())
                            bAllFixed = false;

                    e->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(id)));
                    e->setVertex(1, VP);
                    e->setMeasurement(obs);
                    const float invSigma2 = pKFi->mvInvLevelSigma2[kpUn.octave];

                    e->setInformation(Eigen::Matrix3d::Identity() * invSigma2);

                    g2o::RobustKernelHuber *rk = new g2o::RobustKernelHuber;
                    e->setRobustKernel(rk);
                    rk->setDelta(thHuberStereo);

                    optimizer.addEdge(e);
                }

                if (pKFi->mpCamera2)
                { // Monocular right observation
                    int rightIndex = get<1>(mit->second);

                    if (rightIndex != -1 && rightIndex < pKFi->mvKeysRight.size())
                    {
                        rightIndex -= pKFi->NLeft;

                        Eigen::Matrix<double, 2, 1> obs;
                        kpUn = pKFi->mvKeysRight[rightIndex];
                        obs << kpUn.pt.x, kpUn.pt.y;

                        EdgeMono *e = new EdgeMono(1);

                        g2o::OptimizableGraph::Vertex *VP = dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(pKFi->mnId));
                        if (bAllFixed)
                            if (!VP->fixed())
                                bAllFixed = false;

                        e->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex *>(optimizer.vertex(id)));
                        e->setVertex(1, VP);
                        e->setMeasurement(obs);
                        const float invSigma2 = pKFi->mvInvLevelSigma2[kpUn.octave];
                        e->setInformation(Eigen::Matrix2d::Identity() * invSigma2);

                        g2o::RobustKernelHuber *rk = new g2o::RobustKernelHuber;
                        e->setRobustKernel(rk);
                        rk->setDelta(thHuberMono);

                        optimizer.addEdge(e);
                    }
                }
            }
        }

        // Normally false
        if (bAllFixed)
        {
            optimizer.removeVertex(vPoint);
            vbNotIncludedMP[i] = true;
        }
    }

    if (pbStopFlag)
        if (*pbStopFlag)
            return;

    optimizer.initializeOptimization();
    optimizer.optimize(its);

    // 5. Retrieve the optimized values and write them back
    // Recover optimized data
    // Keyframes
    for (size_t i = 0; i < vpKFs.size(); i++)
    {
        KeyFrame *pKFi = vpKFs[i];
        if (pKFi->mnId > maxKFid)
            continue;
        VertexPose *VP = static_cast<VertexPose *>(optimizer.vertex(pKFi->mnId));
        if (nLoopId == 0)
        {
            cv::Mat Tcw = Converter::toCvSE3(VP->estimate().Rcw[0], VP->estimate().tcw[0]);
            pKFi->SetPose(Tcw);
        }
        else
        {
            pKFi->mTcwGBA = cv::Mat::eye(4, 4, CV_32F);
            Converter::toCvMat(VP->estimate().Rcw[0]).copyTo(pKFi->mTcwGBA.rowRange(0, 3).colRange(0, 3));
            Converter::toCvMat(VP->estimate().tcw[0]).copyTo(pKFi->mTcwGBA.rowRange(0, 3).col(3));
            pKFi->mnBAGlobalForKF = nLoopId;
        }
        if (pKFi->bImu)
        {
            VertexVelocity *VV = static_cast<VertexVelocity *>(optimizer.vertex(maxKFid + 3 * (pKFi->mnId) + 1));
            if (nLoopId == 0)
            {
                pKFi->SetVelocity(Converter::toCvMat(VV->estimate()));
            }
            else
            {
                pKFi->mVwbGBA = Converter::toCvMat(VV->estimate());
            }

            VertexGyroBias *VG;
            VertexAccBias *VA;
            if (!bInit)
            {
                VG = static_cast<VertexGyroBias *>(optimizer.vertex(maxKFid + 3 * (pKFi->mnId) + 2));
                VA = static_cast<VertexAccBias *>(optimizer.vertex(maxKFid + 3 * (pKFi->mnId) + 3));
            }
            else
            {
                VG = static_cast<VertexGyroBias *>(optimizer.vertex(4 * maxKFid + 2));
                VA = static_cast<VertexAccBias *>(optimizer.vertex(4 * maxKFid + 3));
            }

            Vector6d vb;
            vb << VG->estimate(), VA->estimate();
            IMU::Bias b(vb[3], vb[4], vb[5], vb[0], vb[1], vb[2]);
            if (nLoopId == 0)
            {
                pKFi->SetNewBias(b);
            }
            else
            {
                pKFi->mBiasGBA = b;
            }
        }
    }

    // Points
    for (size_t i = 0; i < vpMPs.size(); i++)
    {
        if (vbNotIncludedMP[i])
            continue;

        MapPoint *pMP = vpMPs[i];
        g2o::VertexSBAPointXYZ *vPoint = static_cast<g2o::VertexSBAPointXYZ *>(optimizer.vertex(pMP->mnId + iniMPid + 1));

        if (nLoopId == 0)
        {
            pMP->SetWorldPos(Converter::toCvMat(vPoint->estimate()));
            pMP->UpdateNormalAndDepth();
        }
        else
        {
            pMP->mPosGBA.create(3, 1, CV_32F);
            Converter::toCvMat(vPoint->estimate()).copyTo(pMP->mPosGBA);
            pMP->mnBAGlobalForKF = nLoopId;
        }
    }

    pMap->IncreaseChangeIndex();
}
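
One detail of the recovery step deserves a note: vb stacks the gyroscope bias first and the accelerometer bias second (vb << VG->estimate(), VA->estimate()), while the IMU::Bias constructor, judging from the call above, takes the accelerometer components first, hence IMU::Bias b(vb[3], vb[4], vb[5], vb[0], vb[1], vb[2]). Below is a standalone sketch of that reordering (the Bias struct here is a stand-in for illustration, not the real ORB-SLAM3 class):

#include <Eigen/Dense>
#include <iostream>

// Stand-in for IMU::Bias: constructed as (acc_x, acc_y, acc_z, gyro_x, gyro_y, gyro_z),
// matching the argument order implied by the recovery code above.
struct Bias
{
    double bax, bay, baz, bwx, bwy, bwz;
    Bias(double ax, double ay, double az, double wx, double wy, double wz)
        : bax(ax), bay(ay), baz(az), bwx(wx), bwy(wy), bwz(wz) {}
};

int main()
{
    Eigen::Vector3d bg(0.01, -0.02, 0.03);  // optimized gyroscope bias (VG->estimate())
    Eigen::Vector3d ba(0.10, 0.05, -0.08);  // optimized accelerometer bias (VA->estimate())

    // vb = [gyro; acc], exactly as FullInertialBA stacks the two vertex estimates
    Eigen::Matrix<double, 6, 1> vb;
    vb << bg, ba;

    // Same reordering as IMU::Bias b(vb[3], vb[4], vb[5], vb[0], vb[1], vb[2])
    Bias b(vb[3], vb[4], vb[5], vb[0], vb[1], vb[2]);
    std::cout << "acc bias: " << b.bax << " " << b.bay << " " << b.baz
              << ", gyro bias: " << b.bwx << " " << b.bwy << " " << b.bwz << std::endl;
    return 0;
}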
