We continue with the tracking part from the previous section. Tracking combines many computational algorithms from multi-view geometry, so making sense of the author's source code means cracking open the tome "Multiple View Geometry". After the initialization covered in the previous section, Tracking must continuously supply poses to the mapping module. Visual SLAM systems generally need a motion prior to seed feature matching and pose estimation; IMU preintegration is one such prior. Returning to the Track function: the author designed two tracking modes for the camera, a constant-velocity motion model and a reference-keyframe matching mode.
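
Before diving in, here is a condensed sketch of how Track() dispatches between the two modes. It is simplified from the source for orientation only (the real function also handles the localization-only mode and several state transitions):

bool bOK;
if(mState==OK)
{
    // Prefer the motion model when a velocity estimate exists; otherwise
    // (or right after a relocalization) match against the reference keyframe
    if(mVelocity.empty() || mCurrentFrame.mnId<mnLastRelocFrameId+2)
        bOK = TrackReferenceKeyFrame();
    else
    {
        bOK = TrackWithMotionModel();
        if(!bOK)
            bOK = TrackReferenceKeyFrame();  // fall back if the motion model fails
    }
}
else
    bOK = Relocalization();  // mState==LOST, see section II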

I. Continuous Localization

In a typical SLAM localization pipeline such as LeGO-LOAM, localization happens in two stages, scan-to-scan and scan-to-map. The structure here is similar: frame-to-frame tracking followed by frame-to-local-map tracking.

1. Frame-to-Frame Matching

The constant-velocity model applies when a velocity has already been computed at the previous frame; it is simply carried over to the next frame.
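
For reference, this is roughly how that velocity is maintained at the end of a successful Track() iteration: the relative transform from the last frame to the current one is cached and replayed as the prior for the next frame (a sketch following the member names used below):

// After a successful track: cache T_{current<-last} as the "velocity".
// mTcw maps world->camera, so Twc = [Rwc | twc] is its inverse.
cv::Mat LastTwc = cv::Mat::eye(4,4,CV_32F);
mLastFrame.GetRotationInverse().copyTo(LastTwc.rowRange(0,3).colRange(0,3));
mLastFrame.GetCameraCenter().copyTo(LastTwc.rowRange(0,3).col(3));
mVelocity = mCurrentFrame.mTcw*LastTwc;  // replayed next frame as mVelocity*mLastFrame.mTcw

With that prior in hand, TrackWithMotionModel proceeds as follows.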

bool Tracking::TrackWithMotionModel()
{
    ORBmatcher matcher(0.9,true);

    // Update last frame pose according to its reference keyframe
    // Create "visual odometry" points if in Localization Mode
    UpdateLastFrame();

    // The velocity acts as a prior: since images are sampled at a constant rate,
    // it is the transform from the frame before last to the last frame,
    // propagated forward as straight-line motion
    mCurrentFrame.SetPose(mVelocity*mLastFrame.mTcw);

    fill(mCurrentFrame.mvpMapPoints.begin(),mCurrentFrame.mvpMapPoints.end(),static_cast<MapPoint*>(NULL));

    // Project points seen in previous frame
    int th;
    if(mSensor!=System::STEREO)
        th=15;
    else
        th=7;
    int nmatches = matcher.SearchByProjection(mCurrentFrame,mLastFrame,th,mSensor==System::MONOCULAR);

    // If few matches, uses a wider window search
    if(nmatches<20)
    {
        fill(mCurrentFrame.mvpMapPoints.begin(),mCurrentFrame.mvpMapPoints.end(),static_cast<MapPoint*>(NULL));
        nmatches = matcher.SearchByProjection(mCurrentFrame,mLastFrame,2*th,mSensor==System::MONOCULAR);
    }

    if(nmatches<20)
        return false;

    // Optimize frame pose with all matches
    Optimizer::PoseOptimization(&mCurrentFrame);

    // Discard outliers
    int nmatchesMap = 0;
    for(int i =0; i<mCurrentFrame.N; i++)
    {
        if(mCurrentFrame.mvpMapPoints[i])
        {
            if(mCurrentFrame.mvbOutlier[i])
            {
                MapPoint* pMP = mCurrentFrame.mvpMapPoints[i];

                mCurrentFrame.mvpMapPoints[i]=static_cast<MapPoint*>(NULL);
                mCurrentFrame.mvbOutlier[i]=false;
                pMP->mbTrackInView = false;
                pMP->mnLastFrameSeen = mCurrentFrame.mnId;
                nmatches--;
            }
            else if(mCurrentFrame.mvpMapPoints[i]->Observations()>0)
                nmatchesMap++;
        }
    }

    if(mbOnlyTracking)
    {
        mbVO = nmatchesMap<10;
        return nmatches>20;
    }

    return nmatchesMap>=10;
}

PoseOptimization is the subsequent pose refinement step. The function re-optimizes a single pose; strictly speaking, it fits the pose to the observed feature points while judging whether the frame meets our requirements. It will be used several times in this article.
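
Spelled out, the quantity PoseOptimization minimizes is the robustified reprojection error over the frame's matched map points. With $\mathbf{T}_{cw}$ the pose being optimized, $\mathbf{X}_{w,i}$ the fixed world position of map point $i$, $\mathbf{u}_i$ its measured pixel location, and $\pi(\cdot)$ the pinhole projection:

$$\mathbf{T}_{cw}^{*} = \arg\min_{\mathbf{T}_{cw}} \sum_{i} \rho\!\left( \mathbf{e}_i^{\top} \Omega_i \mathbf{e}_i \right), \qquad \mathbf{e}_i = \mathbf{u}_i - \pi\!\left( \mathbf{T}_{cw}\, \mathbf{X}_{w,i} \right)$$

Here $\rho$ is the Huber kernel with the deltaMono/deltaStereo cutoffs from the code below, and $\Omega_i$ is the identity scaled by invSigma2, i.e. the inverse variance of the keypoint's pyramid level.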

int Optimizer::PoseOptimization(Frame *pFrame)
{
    // This function re-optimizes a single pose
    g2o::SparseOptimizer optimizer;
    g2o::BlockSolver_6_3::LinearSolverType * linearSolver;

    linearSolver = new g2o::LinearSolverDense<g2o::BlockSolver_6_3::PoseMatrixType>();

    g2o::BlockSolver_6_3 * solver_ptr = new g2o::BlockSolver_6_3(linearSolver);

    // Define a solver based on the Levenberg-Marquardt algorithm
    g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg(solver_ptr);
    optimizer.setAlgorithm(solver);

    int nInitialCorrespondences=0;

    // Set Frame vertex
    g2o::VertexSE3Expmap * vSE3 = new g2o::VertexSE3Expmap();  // each frame is one vertex
    vSE3->setEstimate(Converter::toSE3Quat(pFrame->mTcw));     // convert the current pose estimate to g2o's SE3 form
    vSE3->setId(0);                                            // the current frame is the first (and only) vertex
    vSE3->setFixed(false);                                     // the current pose is what we optimize, so it is not fixed
    optimizer.addVertex(vSE3);

    // Set MapPoint vertices
    const int N = pFrame->N;  // the frame's map points also enter the optimization

    vector<g2o::EdgeSE3ProjectXYZOnlyPose*> vpEdgesMono;  // pixel projections of the observed map points serve as measurement constraints
    vector<size_t> vnIndexEdgeMono;
    vpEdgesMono.reserve(N);
    vnIndexEdgeMono.reserve(N);

    vector<g2o::EdgeStereoSE3ProjectXYZOnlyPose*> vpEdgesStereo;
    vector<size_t> vnIndexEdgeStereo;
    vpEdgesStereo.reserve(N);
    vnIndexEdgeStereo.reserve(N);

    const float deltaMono = sqrt(5.991);    // 95% chi-square threshold for 2 DOF
    const float deltaStereo = sqrt(7.815);  // 95% chi-square threshold for 3 DOF

    {
    unique_lock<mutex> lock(MapPoint::mGlobalMutex);

    for(int i=0; i<N; i++)
    {
        MapPoint* pMP = pFrame->mvpMapPoints[i];
        if(pMP)
        {
            // Monocular observation
            // The monocular and stereo paths are similar; we focus on the monocular one first
            if(pFrame->mvuRight[i]<0)
            {
                nInitialCorrespondences++;  // count the correspondences up front
                pFrame->mvbOutlier[i] = false;

                Eigen::Matrix<double,2,1> obs;
                const cv::KeyPoint &kpUn = pFrame->mvKeysUn[i];
                obs << kpUn.pt.x, kpUn.pt.y;

                // Use g2o's projection measurement model directly
                g2o::EdgeSE3ProjectXYZOnlyPose* e = new g2o::EdgeSE3ProjectXYZOnlyPose();

                e->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex*>(optimizer.vertex(0)));
                e->setMeasurement(obs);  // the measurement is the map point's pixel position in this frame
                const float invSigma2 = pFrame->mvInvLevelSigma2[kpUn.octave];
                e->setInformation(Eigen::Matrix2d::Identity()*invSigma2);

                g2o::RobustKernelHuber* rk = new g2o::RobustKernelHuber;
                e->setRobustKernel(rk);
                rk->setDelta(deltaMono);

                e->fx = pFrame->fx;
                e->fy = pFrame->fy;
                e->cx = pFrame->cx;
                e->cy = pFrame->cy;
                cv::Mat Xw = pMP->GetWorldPos();
                e->Xw[0] = Xw.at<float>(0);
                e->Xw[1] = Xw.at<float>(1);
                e->Xw[2] = Xw.at<float>(2);
                // the map point's xyz position and the camera intrinsics are set as
                // fixed quantities; reprojection against them forms the error

                optimizer.addEdge(e);  // adding the measurement as an edge completes the setup

                vpEdgesMono.push_back(e);
                vnIndexEdgeMono.push_back(i);
            }
            else  // Stereo observation
            {
                nInitialCorrespondences++;
                pFrame->mvbOutlier[i] = false;

                //SET EDGE
                Eigen::Matrix<double,3,1> obs;
                const cv::KeyPoint &kpUn = pFrame->mvKeysUn[i];
                const float &kp_ur = pFrame->mvuRight[i];
                obs << kpUn.pt.x, kpUn.pt.y, kp_ur;

                g2o::EdgeStereoSE3ProjectXYZOnlyPose* e = new g2o::EdgeStereoSE3ProjectXYZOnlyPose();

                e->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex*>(optimizer.vertex(0)));
                e->setMeasurement(obs);
                const float invSigma2 = pFrame->mvInvLevelSigma2[kpUn.octave];
                Eigen::Matrix3d Info = Eigen::Matrix3d::Identity()*invSigma2;
                e->setInformation(Info);

                g2o::RobustKernelHuber* rk = new g2o::RobustKernelHuber;
                e->setRobustKernel(rk);
                rk->setDelta(deltaStereo);

                e->fx = pFrame->fx;
                e->fy = pFrame->fy;
                e->cx = pFrame->cx;
                e->cy = pFrame->cy;
                e->bf = pFrame->mbf;
                cv::Mat Xw = pMP->GetWorldPos();
                e->Xw[0] = Xw.at<float>(0);
                e->Xw[1] = Xw.at<float>(1);
                e->Xw[2] = Xw.at<float>(2);

                optimizer.addEdge(e);

                vpEdgesStereo.push_back(e);
                vnIndexEdgeStereo.push_back(i);
            }
        }
    }
    }

    if(nInitialCorrespondences<3)
        return 0;

    // We perform 4 optimizations, after each optimization we classify observation as inlier/outlier
    // At the next optimization, outliers are not included, but at the end they can be classified as inliers again.
    const float chi2Mono[4]={5.991,5.991,5.991,5.991};
    const float chi2Stereo[4]={7.815,7.815,7.815,7.815};
    const int its[4]={10,10,10,10};

    int nBad=0;
    for(size_t it=0; it<4; it++)
    {
        // Run 4 successive optimizations, removing outliers from the graph after each round
        vSE3->setEstimate(Converter::toSE3Quat(pFrame->mTcw));
        optimizer.initializeOptimization(0);
        optimizer.optimize(its[it]);

        nBad=0;
        for(size_t i=0, iend=vpEdgesMono.size(); i<iend; i++)
        {
            g2o::EdgeSE3ProjectXYZOnlyPose* e = vpEdgesMono[i];

            const size_t idx = vnIndexEdgeMono[i];

            if(pFrame->mvbOutlier[idx])
            {
                e->computeError();
            }

            const float chi2 = e->chi2();

            if(chi2>chi2Mono[it])  // edges with error above the threshold are flagged as outliers
            {
                pFrame->mvbOutlier[idx]=true;
                e->setLevel(1);
                nBad++;
            }
            else
            {
                pFrame->mvbOutlier[idx]=false;
                e->setLevel(0);
            }

            if(it==2)
                e->setRobustKernel(0);
        }

        for(size_t i=0, iend=vpEdgesStereo.size(); i<iend; i++)
        {
            g2o::EdgeStereoSE3ProjectXYZOnlyPose* e = vpEdgesStereo[i];

            const size_t idx = vnIndexEdgeStereo[i];

            if(pFrame->mvbOutlier[idx])
            {
                e->computeError();
            }

            const float chi2 = e->chi2();

            if(chi2>chi2Stereo[it])
            {
                pFrame->mvbOutlier[idx]=true;
                e->setLevel(1);
                nBad++;
            }
            else
            {
                e->setLevel(0);
                pFrame->mvbOutlier[idx]=false;
            }

            if(it==2)
                e->setRobustKernel(0);
        }

        if(optimizer.edges().size()<10)  // abort once too few usable edges remain
            break;
    }

    // Recover optimized pose and return number of inliers
    g2o::VertexSE3Expmap* vSE3_recov = static_cast<g2o::VertexSE3Expmap*>(optimizer.vertex(0));
    g2o::SE3Quat SE3quat_recov = vSE3_recov->estimate();
    cv::Mat pose = Converter::toCvMat(SE3quat_recov);
    pFrame->SetPose(pose);  // the pose after optimization

    return nInitialCorrespondences-nBad;
}

2. Frame-to-Map Matching

Normally we refine the pose once more against the recent local map. TrackLocalMap takes the initially optimized pose, retrieves the keyframes near it in the map, uses the covisibility relations determined from this pose to pull in map points from other keyframes that may be visible, and matches them again, so that the optimization covers a much wider range.
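
The two helpers UpdateLocalMap() and SearchLocalPoints() do the heavy lifting. As an illustration of the idea, here is a simplified sketch, not the verbatim source (the real UpdateLocalMap also ranks covisible keyframes, adds their best neighbours, and deduplicates points with per-frame bookkeeping):

// Hypothetical condensed version of local-map assembly; member names mirror
// the real class but the body is simplified for illustration.
void Tracking::UpdateLocalMapSketch()
{
    // 1) Keyframes observing the current frame's map points form the local keyframes
    map<KeyFrame*,int> keyframeCounter;
    for(int i=0; i<mCurrentFrame.N; i++)
        if(MapPoint* pMP = mCurrentFrame.mvpMapPoints[i])
            for(const auto &obs : pMP->GetObservations())  // map<KeyFrame*,size_t>
                keyframeCounter[obs.first]++;

    mvpLocalKeyFrames.clear();
    for(const auto &kfc : keyframeCounter)
        mvpLocalKeyFrames.push_back(kfc.first);

    // 2) Every point held by a local keyframe becomes a local map point
    mvpLocalMapPoints.clear();
    for(KeyFrame* pKF : mvpLocalKeyFrames)
        for(MapPoint* pMP : pKF->GetMapPointMatches())
            if(pMP && !pMP->isBad())
                mvpLocalMapPoints.push_back(pMP);  // (duplicates are filtered in the real code)
}

SearchLocalPoints() then projects each local point into the current frame and matches it against still-unmatched ORB features, so the subsequent PoseOptimization sees constraints from the whole local map rather than only from the last frame.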

bool Tracking::TrackLocalMap()
{
    // We have an estimation of the camera pose and some map points tracked in the frame.
    // We retrieve the local map and try to find matches to points in the local map.

    UpdateLocalMap();

    SearchLocalPoints();

    // Optimize Pose
    Optimizer::PoseOptimization(&mCurrentFrame);
    mnMatchesInliers = 0;

    // Update MapPoints Statistics
    for(int i=0; i<mCurrentFrame.N; i++)
    {
        if(mCurrentFrame.mvpMapPoints[i])
        {
            if(!mCurrentFrame.mvbOutlier[i])
            {
                mCurrentFrame.mvpMapPoints[i]->IncreaseFound();
                if(!mbOnlyTracking)
                {
                    if(mCurrentFrame.mvpMapPoints[i]->Observations()>0)
                        mnMatchesInliers++;
                }
                else
                    mnMatchesInliers++;
            }
            else if(mSensor==System::STEREO)
                mCurrentFrame.mvpMapPoints[i] = static_cast<MapPoint*>(NULL);
        }
    }

    // Decide if the tracking was succesful
    // More restrictive if there was a relocalization recently
    if(mCurrentFrame.mnId<mnLastRelocFrameId+mMaxFrames && mnMatchesInliers<50)
        return false;

    if(mnMatchesInliers<30)
        return false;
    else
        return true;
}

II. Relocalization

Relocalization kicks in when mState becomes LOST. It is solved mainly with PnP (specifically EPnP). As the code shows, the current frame's bag-of-words vector is first matched against all keyframes in the database, the suitable ones are selected as candidates, and each is refined by optimization; as soon as the current frame is judged to match some keyframe, the loop ends and the pose jumps to the neighbourhood of that keyframe.
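
For intuition about the PnP step, here is what verifying one candidate could look like with OpenCV's off-the-shelf EPnP + RANSAC solver. This is an illustration only: ORB-SLAM2 ships its own PnPsolver class, and the thresholds here are loosely modeled on the SetRansacParameters call in the code below.

#include <opencv2/calib3d.hpp>
#include <vector>

// Illustration only: candidate verification via OpenCV instead of ORB-SLAM2's PnPsolver.
bool VerifyCandidateSketch(const std::vector<cv::Point3f>& mapPoints,  // matched 3D map points
                           const std::vector<cv::Point2f>& keypoints,  // their 2D detections
                           const cv::Mat& K,                           // camera intrinsics
                           cv::Mat& rvec, cv::Mat& tvec)
{
    std::vector<int> inliers;
    bool ok = cv::solvePnPRansac(mapPoints, keypoints, K, cv::Mat(),
                                 rvec, tvec,
                                 /*useExtrinsicGuess=*/false,
                                 /*iterationsCount=*/300,
                                 /*reprojectionError=*/4.0f,  // pixels, illustrative
                                 /*confidence=*/0.99,
                                 inliers,
                                 cv::SOLVEPNP_EPNP);
    // Same spirit as the nGood>=50 acceptance test in Relocalization()
    return ok && inliers.size() >= 50;
}

The actual implementation follows.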

bool Tracking::Relocalization()
{
    // Compute Bag of Words Vector
    mCurrentFrame.ComputeBoW();  // compute the bag-of-words vector

    // Relocalization is performed when tracking is lost
    // Track Lost: Query KeyFrame Database for keyframe candidates for relocalisation
    vector<KeyFrame*> vpCandidateKFs = mpKeyFrameDB->DetectRelocalizationCandidates(&mCurrentFrame);
    // search all keyframes in the database and keep every candidate above the similarity threshold

    if(vpCandidateKFs.empty())
        return false;

    const int nKFs = vpCandidateKFs.size();

    // We perform first an ORB matching with each candidate
    // If enough matches are found we setup a PnP solver
    ORBmatcher matcher(0.75,true);

    vector<PnPsolver*> vpPnPsolvers;
    vpPnPsolvers.resize(nKFs);

    vector<vector<MapPoint*> > vvpMapPointMatches;
    vvpMapPointMatches.resize(nKFs);

    vector<bool> vbDiscarded;  // prepare for both outcomes: discard or keep computing
    vbDiscarded.resize(nKFs);

    int nCandidates=0;

    for(int i=0; i<nKFs; i++)
    {
        KeyFrame* pKF = vpCandidateKFs[i];
        if(pKF->isBad())
            vbDiscarded[i] = true;
        else
        {
            int nmatches = matcher.SearchByBoW(pKF,mCurrentFrame,vvpMapPointMatches[i]);
            if(nmatches<15)
            {
                vbDiscarded[i] = true;
                continue;
            }
            else
            {
                PnPsolver* pSolver = new PnPsolver(mCurrentFrame,vvpMapPointMatches[i]);
                pSolver->SetRansacParameters(0.99,10,300,4,0.5,5.991);
                vpPnPsolvers[i] = pSolver;
                nCandidates++;
            }
        }
    }

    // Alternatively perform some iterations of P4P RANSAC
    // Until we found a camera pose supported by enough inliers
    bool bMatch = false;
    ORBmatcher matcher2(0.9,true);

    while(nCandidates>0 && !bMatch)
    {
        for(int i=0; i<nKFs; i++)
        {
            if(vbDiscarded[i])
                continue;

            // Perform 5 Ransac Iterations
            vector<bool> vbInliers;
            int nInliers;
            bool bNoMore;

            PnPsolver* pSolver = vpPnPsolvers[i];
            // solved with EPnP; since I only know the brute-force approach, the detailed computation is skipped for now...
            cv::Mat Tcw = pSolver->iterate(5,bNoMore,vbInliers,nInliers);

            // If Ransac reachs max. iterations discard keyframe
            if(bNoMore)
            {
                vbDiscarded[i]=true;
                nCandidates--;
            }

            // If a Camera Pose is computed, optimize
            if(!Tcw.empty())
            {
                Tcw.copyTo(mCurrentFrame.mTcw);

                set<MapPoint*> sFound;  // a container for map points that may qualify, pending further optimization

                const int np = vbInliers.size();

                for(int j=0; j<np; j++)
                {
                    if(vbInliers[j])
                    {
                        mCurrentFrame.mvpMapPoints[j]=vvpMapPointMatches[i][j];
                        sFound.insert(vvpMapPointMatches[i][j]);
                    }
                    else
                        mCurrentFrame.mvpMapPoints[j]=NULL;
                }

                // optimize the frame pose; the return value counts map points that still pass the threshold
                int nGood = Optimizer::PoseOptimization(&mCurrentFrame);

                // candidates that clearly fail are not processed further
                if(nGood<10)
                    continue;

                for(int io =0; io<mCurrentFrame.N; io++)
                    if(mCurrentFrame.mvbOutlier[io])
                        mCurrentFrame.mvpMapPoints[io]=static_cast<MapPoint*>(NULL);

                // If few inliers, search by projection in a coarse window and optimize again
                // candidates worth rescuing are reprojected to harvest additional feature matches
                if(nGood<50)
                {
                    int nadditional =matcher2.SearchByProjection(mCurrentFrame,vpCandidateKFs[i],sFound,10,100);

                    if(nadditional+nGood>=50)
                    {
                        nGood = Optimizer::PoseOptimization(&mCurrentFrame);

                        // If many inliers but still not enough, search by projection again in a narrower window
                        // the camera has been already optimized with many points
                        if(nGood>30 && nGood<50)
                        {
                            sFound.clear();
                            for(int ip =0; ip<mCurrentFrame.N; ip++)
                                if(mCurrentFrame.mvpMapPoints[ip])
                                    sFound.insert(mCurrentFrame.mvpMapPoints[ip]);
                            nadditional =matcher2.SearchByProjection(mCurrentFrame,vpCandidateKFs[i],sFound,3,64);

                            // Final optimization
                            if(nGood+nadditional>=50)
                            {
                                nGood = Optimizer::PoseOptimization(&mCurrentFrame);

                                for(int io =0; io<mCurrentFrame.N; io++)
                                    if(mCurrentFrame.mvbOutlier[io])
                                        mCurrentFrame.mvpMapPoints[io]=NULL;
                            }
                        }
                    }
                }

                // If the pose is supported by enough inliers stop ransacs and continue
                if(nGood>=50)
                {
                    bMatch = true;
                    break;
                }
            }
        }
    }

    if(!bMatch)
    {
        return false;
    }
    else
    {
        mnLastRelocFrameId = mCurrentFrame.mnId;
        return true;
    }
}

In summary, the Tracking module builds on feature matching to provide the camera pose estimate, and at the same time feeds raw map points and keyframes to the mapping and back-end modules. It is the backbone of the whole ORB-SLAM system; once you are familiar with it, the remaining parts fall into place.
