I. Prerequisites

  1. Bag-of-Words
  2. ORB feature extraction and matching

5. Bag of Words

Purpose:

Accelerating matching, and loop closure.

In the tracking thread, the bag-of-words representation is computed for every frame as soon as it arrives.
The bag-of-words model consists of Nodes and Words, both of which are descriptors (cluster centers in descriptor space).

Each word carries a weight. IDF reflects how rarely a word occurs during training: IDF = log(N / n_i), where N is the total count and n_i is the number of occurrences of word i. TF-IDF additionally multiplies by term frequency: TF-IDF = TF × IDF, where TF is the number of times the word occurs in the image (normalized by the image's feature count). A minimal sketch of both weightings follows.
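A minimal C++ sketch of the two weightings, assuming descriptors are already quantized to word ids. Whether IDF counts occurrences over images or over all training features varies by formulation (this sketch counts images); the names computeIdf / tfIdf are illustrative, not DBoW2's API.

#include <cmath>
#include <map>
#include <vector>

using WordId = unsigned int;

// IDF: words that occur in fewer training images get larger weights.
std::map<WordId, double> computeIdf(const std::vector<std::vector<WordId>>& trainingImages)
{
    std::map<WordId, int> imagesWithWord;
    for (const auto& img : trainingImages)
    {
        std::map<WordId, bool> seen;
        for (WordId w : img)
            if (!seen[w]) { seen[w] = true; ++imagesWithWord[w]; }
    }
    std::map<WordId, double> idf;
    for (const auto& [w, count] : imagesWithWord)
        idf[w] = std::log((double)trainingImages.size() / count);  // log(total / occurrences)
    return idf;
}

// TF-IDF for one image: normalized occurrence count times the word's IDF.
std::map<WordId, double> tfIdf(const std::vector<WordId>& image,
                               const std::map<WordId, double>& idf)
{
    std::map<WordId, double> v;
    for (WordId w : image) v[w] += 1.0;                 // raw term counts
    for (auto& [w, weight] : v)
    {
        auto it = idf.find(w);
        weight = (weight / image.size()) * (it != idf.end() ? it->second : 0.0);
    }
    return v;
}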

Four core steps (items 5 and 6 below are supporting topics):

  1. Build the vocabulary. Clustering is organized as a tree: with branching factor $k$ and depth $d$ (i.e., $d$ levels), the number of leaf clusters is $M = k^d$. For example, $k = 10$ and $d = 6$ give a vocabulary of $10^6$ words.

  2. Form the bag of words. Once trained, the vocabulary is fixed, but a bag of words is computed for every image: each frame's features are pushed down the vocabulary tree by nearest-neighbor search.

  3. Direct index (accelerates matching): each image records which node and which word each of its feature points falls into.

  4. Inverse index (for loop detection): every word in the vocabulary records which frames contain it and with what weight.

  5. K-means clustering (a minimal sketch follows this list)

    • Randomly pick several cluster centers.
    • Compute each point's distance to the centers and assign the point to the cluster of its nearest center.
    • Recompute each cluster's center as the mean of the points assigned to it.
    • Reassign every point to its nearest new center.
    • Iterate until the current centers equal the previous centers.
    • Note: to avoid the slow convergence caused by initial centers that happen to lie too close together, K-means++ picks one center first and then chooses each subsequent center with probability proportional to its distance from the existing centers (roulette-wheel selection), keeping the centers well spread out.
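A minimal 1-D K-means sketch of the loop above. It is illustrative only: DBoW2 clusters binary descriptors using F::distance / F::meanValue instead of scalar arithmetic, and seeds with K-means++ rather than the naive initialization used here.

#include <cmath>
#include <limits>
#include <vector>

struct KMeansResult { std::vector<double> centers; std::vector<int> assignment; };

KMeansResult kmeans(const std::vector<double>& pts, int k)
{
    // naive initialization: first k points (K-means++ would spread these out)
    std::vector<double> centers(pts.begin(), pts.begin() + k);
    std::vector<int> assign(pts.size(), -1), last;
    while (assign != last)                    // stop when assignments no longer change
    {
        last = assign;
        // 1. assign every point to its nearest center
        for (std::size_t i = 0; i < pts.size(); ++i)
        {
            double best = std::numeric_limits<double>::max();
            for (int c = 0; c < k; ++c)
            {
                double d = std::abs(pts[i] - centers[c]);
                if (d < best) { best = d; assign[i] = c; }
            }
        }
        // 2. recompute every center as the mean of its cluster
        std::vector<double> sum(k, 0.0);
        std::vector<int> cnt(k, 0);
        for (std::size_t i = 0; i < pts.size(); ++i) { sum[assign[i]] += pts[i]; ++cnt[assign[i]]; }
        for (int c = 0; c < k; ++c)
            if (cnt[c] > 0) centers[c] = sum[c] / cnt[c];
    }
    return {centers, assign};
}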
  6. Extension: the nearest-neighbor search problem

Search forms:

  • K-NN
  • Radius-NN

Data structures:

  • Binary search tree (BST)

    Search: O(log n) to O(n); space: O(n)

    Balance principle: the tree should be kept as balanced as possible so its depth stays low; the depth is what places the search cost between O(log n) and O(n).

    Node{ key , left , right , value}

    Tree construction: a recursive function. If the key equals the node's key or the node is empty, return the node; if the key is smaller than the node's key, recurse into (and return from) the left child; if larger, recurse into the right child.

    In-order traversal: inorder(root) { if root is not null { inorder(root.left); print(root); inorder(root.right) } }

    Nearest-neighbor search: maintain a worst distance. 1-NN keeps a single worst distance; K-NN keeps an array of K worst distances; Radius-NN keeps one fixed worst distance (the radius) and collects everything that falls inside it. A sketch follows.
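A minimal BST sketch of the construction, and of 1-NN search with a maintained worst distance (1-D keys; the names are illustrative).

#include <cmath>
#include <memory>

struct Node
{
    double key;
    std::unique_ptr<Node> left, right;
    explicit Node(double k) : key(k) {}
};

// Recursive insertion as described: go left if smaller, right if larger.
void insert(std::unique_ptr<Node>& root, double key)
{
    if (!root)                { root = std::make_unique<Node>(key); return; }
    if (key < root->key)      insert(root->left, key);
    else if (key > root->key) insert(root->right, key);
    // equal: the key is already present, nothing to do
}

// 1-NN: descend toward the query, then backtrack into the other subtree only
// if it could still contain something closer than the current worst distance.
void nearest(const Node* root, double query, double& worstDist, double& best)
{
    if (!root) return;
    double d = std::abs(root->key - query);
    if (d < worstDist) { worstDist = d; best = root->key; }
    const Node* near = (query < root->key) ? root->left.get()  : root->right.get();
    const Node* far  = (query < root->key) ? root->right.get() : root->left.get();
    nearest(near, query, worstDist, best);
    if (std::abs(root->key - query) < worstDist)   // does the ball cross this split?
        nearest(far, query, worstDist, best);
}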

  • kd-tree

    Build: O(n log n) to O(n log n · log n), depending on how the median is found; space: O(kn + n log n)

    leaf_size: the point-count threshold at which a cell stops being split.

    A BST makes its cuts along a single one-dimensional axis.

    A kd-tree cuts a 2-D (or higher-dimensional) space with planes perpendicular to alternating coordinate axes.

    Level 1: look at the x coordinates and split at the median into two halves; level 2: within each half, look at the y coordinates and split at the median; and so on.

    Node { axis value left right point_ID}

    Tree construction: if the cell still holds more than leaf_size points, pick a dimension, sort to find the median, then recurse on the left points and on the right points.

    Nearest-neighbor search has a forward and a backward pass. Because of leaf_size, points live in block-shaped cells. The forward pass descends to a leaf and finds the closest point there, which defines a ball around the query; wherever that ball intersects other cells, the backward pass re-enters those regions, and the ball's radius shrinks each time a closer point is found. A sketch follows.
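A minimal 2-D kd-tree sketch of the build and of the forward/backward search (leaf_size = 1, i.e., one point per leaf; the names are illustrative).

#include <algorithm>
#include <array>
#include <memory>
#include <vector>

using Point = std::array<double, 2>;

struct KdNode
{
    int axis;                        // splitting dimension at this node
    Point point;                     // the median point stored here
    std::unique_ptr<KdNode> left, right;
};

std::unique_ptr<KdNode> build(std::vector<Point>& pts, int lo, int hi, int depth)
{
    if (lo >= hi) return nullptr;
    int axis = depth % 2;            // alternate x / y
    int mid = (lo + hi) / 2;
    std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
                     [axis](const Point& a, const Point& b) { return a[axis] < b[axis]; });
    auto node = std::make_unique<KdNode>();
    node->axis  = axis;
    node->point = pts[mid];
    node->left  = build(pts, lo, mid, depth + 1);
    node->right = build(pts, mid + 1, hi, depth + 1);
    return node;
}

// Forward pass descends toward the query; the backward pass re-enters the far
// subtree only when the splitting plane cuts the current best ball.
void nearest(const KdNode* n, const Point& q, double& bestDist2, Point& best)
{
    if (!n) return;
    double dx = n->point[0] - q[0], dy = n->point[1] - q[1];
    double d2 = dx * dx + dy * dy;
    if (d2 < bestDist2) { bestDist2 = d2; best = n->point; }
    double diff = q[n->axis] - n->point[n->axis];
    const KdNode* near = diff < 0 ? n->left.get()  : n->right.get();
    const KdNode* far  = diff < 0 ? n->right.get() : n->left.get();
    nearest(near, q, bestDist2, best);
    if (diff * diff < bestDist2)     // the ball crosses the splitting plane
        nearest(far, q, bestDist2, best);
}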

  • Octree

    leaf_size and min_extent: a minimum point-count limit and a minimum edge-length limit for stopping subdivision.

    Octree {self, child, center, extent, point_ID, is_leaf}

    Slightly more involved than the kd-tree: the circle-vs-rectangle intersection test becomes a sphere-vs-cube intersection test, sketched below.
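A sketch of the sphere-vs-cube test used when deciding whether backtracking must enter an octree cell (an axis-aligned cube of half edge length extent centered at center); the names are illustrative.

#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

bool sphereIntersectsCube(const Vec3& q, double radius,
                          const Vec3& center, double extent)
{
    double dist2 = 0.0;
    for (int i = 0; i < 3; ++i)
    {
        // per-axis distance from the query to the cube, clamped to 0 inside it
        double d = std::max(std::abs(q[i] - center[i]) - extent, 0.0);
        dist2 += d * d;
    }
    return dist2 < radius * radius;   // overlap: the search ball reaches the cell
}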

Code:

ORB-SLAM uses the third-party library DBoW2.

  1. The key outputs are the BowVector and the FeatureVector.

Steps

  1. Train the vocabulary offline on a dataset by hierarchical k-means: cluster the descriptors, then recursively cluster each group at the next level.
  2. Form each frame's bag of words: convert all feature points of every incoming frame into a BowVector and a FeatureVector.
  3. BowVector <WordId, WordValue> holds, for one frame, the vocabulary words it contains and their TF-IDF weights.
  4. FeatureVector <NodeId, std::vector<unsigned int>> holds, for one frame, the NodeIds it touches and which of the frame's features fall under each NodeId.
  5. Index each feature down the vocabulary tree to locate its position in the tree.
/***********************/
//1. Offline vocabulary training
//K-means builds the vocabulary tree
//Words are created for the leaf nodes
//A weight is assigned to every node
template<class TDescriptor, class F>
void TemplatedVocabulary<TDescriptor,F>::create(const std::vector<std::vector<TDescriptor> > &training_features)
{
  m_nodes.clear();
  m_words.clear();

  // expected_nodes = Sum_{i=0..L} ( k^i )
  int expected_nodes =
    (int)((pow((double)m_k, (double)m_L + 1) - 1)/(m_k - 1));

  m_nodes.reserve(expected_nodes); // avoid allocations when creating the tree

  vector<pDescriptor> features;
  getFeatures(training_features, features);

  // create root
  m_nodes.push_back(Node(0)); // root

  // create the tree
  HKmeansStep(0, features, 1);

  // create the words, i.e. the leaf nodes of the tree
  createWords();

  // and set the weight of each node of the tree
  setNodeWeights(training_features);
}
/***********************************************/
template<class TDescriptor, class F>
void TemplatedVocabulary<TDescriptor,F>::HKmeansStep(NodeId parent_id, const vector<pDescriptor> &descriptors, int current_level)
{
  if(descriptors.empty()) return;

  // features associated to each cluster
  vector<TDescriptor> clusters;
  vector<vector<unsigned int> > groups; // groups[i] = [j1, j2, ...]
  // j1, j2, ... indices of descriptors associated to cluster i

  clusters.reserve(m_k);
  groups.reserve(m_k);

  //const int msizes[] = { m_k, descriptors.size() };
  //cv::SparseMat assoc(2, msizes, CV_8U);
  //cv::SparseMat last_assoc(2, msizes, CV_8U);
  // assoc.row(cluster_idx).col(descriptor_idx) = 1 iif associated

  if((int)descriptors.size() <= m_k)
  {
    // trivial case: one cluster per feature
    groups.resize(descriptors.size());

    for(unsigned int i = 0; i < descriptors.size(); i++)
    {
      groups[i].push_back(i);
      clusters.push_back(*descriptors[i]);
    }
  }
  else
  {
    // select clusters and groups with kmeans
    bool first_time = true;
    bool goon = true;

    // to check if clusters move after iterations
    vector<int> last_association, current_association;

    while(goon)
    {
      // 1. Calculate clusters
      if(first_time)
      {
        // random sample
        initiateClusters(descriptors, clusters);
      }
      else
      {
        // calculate cluster centres
        for(unsigned int c = 0; c < clusters.size(); ++c)
        {
          vector<pDescriptor> cluster_descriptors;
          cluster_descriptors.reserve(groups[c].size());

          /*
          for(unsigned int d = 0; d < descriptors.size(); ++d)
          {
            if( assoc.find<unsigned char>(c, d) )
            {
              cluster_descriptors.push_back(descriptors[d]);
            }
          }
          */

          vector<unsigned int>::const_iterator vit;
          for(vit = groups[c].begin(); vit != groups[c].end(); ++vit)
          {
            cluster_descriptors.push_back(descriptors[*vit]);
          }

          F::meanValue(cluster_descriptors, clusters[c]);
        }
      } // if(!first_time)

      // 2. Associate features with clusters
      // calculate distances to cluster centers
      groups.clear();
      groups.resize(clusters.size(), vector<unsigned int>());
      current_association.resize(descriptors.size());

      //assoc.clear();

      typename vector<pDescriptor>::const_iterator fit;
      //unsigned int d = 0;
      for(fit = descriptors.begin(); fit != descriptors.end(); ++fit)//, ++d)
      {
        double best_dist = F::distance(*(*fit), clusters[0]);
        unsigned int icluster = 0;

        for(unsigned int c = 1; c < clusters.size(); ++c)
        {
          double dist = F::distance(*(*fit), clusters[c]);
          if(dist < best_dist)
          {
            best_dist = dist;
            icluster = c;
          }
        }

        //assoc.ref<unsigned char>(icluster, d) = 1;

        groups[icluster].push_back(fit - descriptors.begin());
        current_association[ fit - descriptors.begin() ] = icluster;
      }

      // kmeans++ ensures all the clusters has any feature associated with them

      // 3. check convergence
      if(first_time)
      {
        first_time = false;
      }
      else
      {
        //goon = !eqUChar(last_assoc, assoc);

        goon = false;
        for(unsigned int i = 0; i < current_association.size(); i++)
        {
          // compare the previous and current assignments; if they differ, iterate again
          if(current_association[i] != last_association[i])
          {
            goon = true;
            break;
          }
        }
      }

      if(goon)
      {
        // copy last feature-cluster association
        last_association = current_association;
        //last_assoc = assoc.clone();
      }

    } // while(goon)

  } // if must run kmeans

  // create nodes
  for(unsigned int i = 0; i < clusters.size(); ++i)
  {
    NodeId id = m_nodes.size();
    m_nodes.push_back(Node(id));
    m_nodes.back().descriptor = clusters[i];
    m_nodes.back().parent = parent_id;
    m_nodes[parent_id].children.push_back(id);
  }

  // go on with the next level
  if(current_level < m_L)
  {
    // iterate again with the resulting clusters
    const vector<NodeId> &children_ids = m_nodes[parent_id].children;
    for(unsigned int i = 0; i < clusters.size(); ++i)
    {
      NodeId id = children_ids[i];

      vector<pDescriptor> child_features;
      child_features.reserve(groups[i].size());

      vector<unsigned int>::const_iterator vit;
      for(vit = groups[i].begin(); vit != groups[i].end(); ++vit)
      {
        child_features.push_back(descriptors[*vit]);
      }

      if(child_features.size() > 1)
      {
        HKmeansStep(id, child_features, current_level + 1);
      }
    }
  }
}
// a set of transform() overloads
/**************************************/
//Input: the feature descriptors and the level at which to record NodeIds
//Output: the BowVector and the FeatureVector
//BowVector <WordId, WordValue> holds the word ids and their weights
//FeatureVector <NodeId, std::vector<unsigned int> > records the NodeIds and the indices of the features under each
template<class TDescriptor, class F>
void TemplatedVocabulary<TDescriptor,F>::transform(const std::vector<TDescriptor>& features,BowVector &v, FeatureVector &fv, int levelsup) const
{
  v.clear();
  fv.clear();

  if(empty()) // safe for subclasses
  {
    return;
  }

  // normalize
  LNorm norm;
  bool must = m_scoring_object->mustNormalize(norm);

  typename vector<TDescriptor>::const_iterator fit;

  if(m_weighting == TF || m_weighting == TF_IDF)
  {
    unsigned int i_feature = 0;
    for(fit = features.begin(); fit < features.end(); ++fit, ++i_feature)
    {
      WordId id;
      NodeId nid;
      WordValue w; // w is the idf value if TF_IDF, 1 if TF
      transform(*fit, id, w, &nid, levelsup);

      if(w > 0) // not stopped
      {
        // if the BowVector already has this word, accumulate its TF-IDF weight; otherwise add a new entry
        v.addWeight(id, w);
        // if the FeatureVector already has this node, append this feature under it; otherwise add a new entry
        fv.addFeature(nid, i_feature);
      }
    }

    if(!v.empty() && !must)
    {
      // unnecessary when normalizing
      const double nd = v.size();
      for(BowVector::iterator vit = v.begin(); vit != v.end(); vit++)
        vit->second /= nd;
    }
  }
  else // IDF || BINARY
  {
    unsigned int i_feature = 0;
    for(fit = features.begin(); fit < features.end(); ++fit, ++i_feature)
    {
      WordId id;
      NodeId nid;
      WordValue w; // w is idf if IDF, or 1 if BINARY
      transform(*fit, id, w, &nid, levelsup);

      if(w > 0) // not stopped
      {
        v.addIfNotExist(id, w);
        fv.addFeature(nid, i_feature);
      }
    }
  } // if m_weighting == ...

  if(must) v.normalize(norm);
}
/***********************************/
//Compute which vocabulary word a feature belongs to
//Input: the feature descriptor and the level (levelsup) at which to record the NodeId; output: the word id and weight, plus the NodeId
//By default the NodeId is taken at the fourth level up from the words
//Walk down the tree level by level, at each level choosing the child whose descriptor gives the best (smallest) Hamming distance; when the recording level is reached, store that node
//Keep descending until a leaf is reached, which yields the best word id and its weight
template<class TDescriptor, class F>
void TemplatedVocabulary<TDescriptor,F>::transform(const TDescriptor &feature, WordId &word_id, WordValue &weight, NodeId *nid, int levelsup) const
{
  // propagate the feature down the tree
  vector<NodeId> nodes;
  typename vector<NodeId>::const_iterator nit;

  // level at which the node must be stored in nid, if given
  const int nid_level = m_L - levelsup;
  if(nid_level <= 0 && nid != NULL) *nid = 0; // root

  NodeId final_id = 0; // root
  int current_level = 0;

  do
  {
    ++current_level;
    nodes = m_nodes[final_id].children;
    final_id = nodes[0];

    double best_d = F::distance(feature, m_nodes[final_id].descriptor);

    for(nit = nodes.begin() + 1; nit != nodes.end(); ++nit)
    {
      NodeId id = *nit;
      double d = F::distance(feature, m_nodes[id].descriptor);
      if(d < best_d)
      {
        best_d = d;
        final_id = id;
      }
    }

    if(nid != NULL && current_level == nid_level)
      *nid = final_id;

  } while( !m_nodes[final_id].isLeaf() );

  // turn node id into word id
  word_id = m_nodes[final_id].word_id;
  weight = m_nodes[final_id].weight;
}
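Putting the pieces together, this is roughly how one frame's descriptors become the two vectors. A sketch modeled on Frame::ComputeBoW in ORB-SLAM2; the wrapper function and its name are assumptions, while ORBVocabulary and the 4-argument transform shown above are the real API.

#include <vector>
#include <opencv2/core/core.hpp>
#include "Thirdparty/DBoW2/DBoW2/BowVector.h"
#include "Thirdparty/DBoW2/DBoW2/FeatureVector.h"
#include "ORBVocabulary.h"

// Sketch: one frame's ORB descriptor matrix (one 32-byte row per feature)
// becomes a BowVector and a FeatureVector. ORB-SLAM2 splits the matrix with
// Converter::toDescriptorVector; here it is done inline.
void computeBoW(const ORB_SLAM2::ORBVocabulary& voc, const cv::Mat& descriptors,
                DBoW2::BowVector& bowVec, DBoW2::FeatureVector& featVec)
{
    std::vector<cv::Mat> vDesc;
    vDesc.reserve(descriptors.rows);
    for (int i = 0; i < descriptors.rows; ++i)
        vDesc.push_back(descriptors.row(i));

    // levelsup = 4: record NodeIds four levels up from the leaves; SearchByBoW
    // later buckets candidate matches by these NodeIds.
    voc.transform(vDesc, bowVec, featVec, 4);
}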

6. ORB feature extraction and matching

ORB features

  1. FAST corners: take a circle of radius 3 pixels around a pixel; if enough consecutive pixels on that circle differ from the center pixel's intensity by more than a threshold, the pixel is a FAST corner.

  2. BRIEF descriptor: around the corner, pick many point pairs according to a fixed pattern; comparing the intensities within each pair produces a string of 1/0 bits.

  3. Oriented: the intensity-centroid method provides rotation invariance. The orientation is the direction from the corner to the intensity centroid (draw a circle around the corner and take the intensity centroid inside it, a computer-graphics-style moment computation; see the sketch after this list).

    ORB feature extraction

  4. Extract per pyramid level, dividing each level into cells

    Pyramid levels: given the total number $N$ of features to extract, distribute them across levels in a geometric series. With $s = 1/\text{scaleFactor}$ and $L$ levels, the finest level receives $N_0 = N\frac{1-s}{1-s^L}$ features, and each coarser level receives $s$ times the share of the level below.

    Quadtree cell division: divide into slightly more cells than requested features (e.g., about 600 cells for 500 points) and keep one feature per cell. If a cell contains no FAST corner, lower the FAST threshold and retry; once a cell holds only a single feature even at the lowest threshold, stop subdividing it. From each cell keep the FAST corner with the best response.

  5. C++ polymorphism: functions with different behaviors behind one interface.

  6. Functors: classes that can be invoked like functions.

  7. First compute the dominant rotation direction, then apply Gaussian blur, then compute the BRIEF descriptor relative to that dominant direction.
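A sketch of the intensity-centroid orientation from item 3, simplified to a square patch (ORB-SLAM2 uses a circular patch with per-row extents and cv::fastAtan2); it assumes the patch lies fully inside the image.

#include <cmath>
#include <opencv2/core/core.hpp>

float intensityCentroidAngle(const cv::Mat& imageGray, cv::Point center, int halfPatch)
{
    // m10 = sum x*I(x,y), m01 = sum y*I(x,y) over the patch
    double m10 = 0.0, m01 = 0.0;
    for (int dy = -halfPatch; dy <= halfPatch; ++dy)
        for (int dx = -halfPatch; dx <= halfPatch; ++dx)
        {
            uchar I = imageGray.at<uchar>(center.y + dy, center.x + dx);
            m10 += dx * I;
            m01 += dy * I;
        }
    // orientation = direction from the corner to the intensity centroid
    return static_cast<float>(std::atan2(m01, m10));   // radians
}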

/***************************************************************/
ORBextractor.cpp
//0. ORBextractor::ORBextractor
//The constructor mainly computes each pyramid level's scale and the circular patch extents for Oriented BRIEF
/*******************************************************************/
//1. Extract ORB feature points from one image
//This is a functor: it overloads the call operator (), runs fast, and carries its own member state
//Steps:
//1.1 Build the image pyramid with ComputePyramid
//1.2 Find FAST corners with a quadtree and compute their orientations. ComputeKeyPointsOctTree calls DistributeOctTree: each pyramid level is first divided into many cells, and FAST is extracted per cell, falling back to the minimum threshold when nothing is found; the quadtree then keeps subdividing (no recursion needed) until the number of cells matches the number of requested features, or the next subdivision is predicted to exceed it (cells with no features are deleted, and a cell down to a single feature stops subdividing). With slightly more cells than requested features, the cells are sorted and picked starting from those with the fewest features; from each cell the FAST corner with the highest response is chosen, and the selected corners are returned.
//computeOrientation inside ComputeKeyPointsOctTree computes each corner's rotation direction: following the intensity-centroid idea, over a circular patch take the arctangent of (sum of intensity*y) over (sum of intensity*x); the normalizations by total intensity cancel.
//1.3 GaussianBlur first: a Gaussian-kernel convolution that smooths away noise (extraction uses the sharp image, but description works on the blurred one)
//1.4 computeDescriptors: BRIEF along the dominant direction, 32*8 = 256 bits in total; each of the 32 bytes encodes eight point-pair intensity comparisons (0/1)
//1.5 Scale the keypoints of every level back to level 0
void ORBextractor::operator()( InputArray _image, InputArray _mask, vector<KeyPoint>& _keypoints,OutputArray _descriptors)
{
    if(_image.empty())
        return;

    Mat image = _image.getMat();
    assert(image.type() == CV_8UC1 );

    // Pre-compute the scale pyramid
    // Build the image pyramid (each level padded by an EDGE_THRESHOLD border;
    // borders can be extended in several ways: mirrored, replicated, constant)
    ComputePyramid(image);

    // Compute the interest points of every pyramid level
    vector < vector<KeyPoint> > allKeypoints; // vector<vector<KeyPoint>>
    ComputeKeyPointsOctTree(allKeypoints);
    //ComputeKeyPointsOld(allKeypoints);

    Mat descriptors;

    int nkeypoints = 0;
    for (int level = 0; level < nlevels; ++level)
        nkeypoints += (int)allKeypoints[level].size();
    if( nkeypoints == 0 )
        _descriptors.release();
    else
    {
        _descriptors.create(nkeypoints, 32, CV_8U);
        descriptors = _descriptors.getMat();
    }

    _keypoints.clear();
    _keypoints.reserve(nkeypoints);

    int offset = 0;
    for (int level = 0; level < nlevels; ++level)
    {
        vector<KeyPoint>& keypoints = allKeypoints[level];
        int nkeypointsLevel = (int)keypoints.size();

        if(nkeypointsLevel==0)
            continue;

        // preprocess the resized image: apply Gaussian blur
        Mat workingMat = mvImagePyramid[level].clone();
        GaussianBlur(workingMat, workingMat, Size(7, 7), 2, 2, BORDER_REFLECT_101);

        // Compute the descriptors
        Mat desc = descriptors.rowRange(offset, offset + nkeypointsLevel);
        computeDescriptors(workingMat, keypoints, desc, pattern);

        offset += nkeypointsLevel;

        // Scale keypoint coordinates back to level 0
        if (level != 0)
        {
            float scale = mvScaleFactor[level]; //getScale(level, firstLevel, scaleFactor);
            for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                 keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                keypoint->pt *= scale;
        }
        // And add the keypoints to the output
        _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
    }
}
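A minimal usage sketch of the functor; the parameter values are illustrative, and the constructor signature follows ORBextractor(nfeatures, scaleFactor, nlevels, iniThFAST, minThFAST) in ORB-SLAM2.

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include "ORBextractor.h"

int main()
{
    cv::Mat gray = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    // 1000 features, scale factor 1.2, 8 levels, FAST thresholds 20 / 7
    ORB_SLAM2::ORBextractor extractor(1000, 1.2f, 8, 20, 7);
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    extractor(gray, cv::Mat(), keypoints, descriptors);  // the functor call
    return 0;
}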

ORB feature matching

Matching strategies:

  1. Match within adjacent pyramid levels
  2. Bag-of-words tree
  3. Project 3D points and search the nearby region

Outlier rejection:

  1. Dominant rotation direction (histogram of angle differences; keep the three largest bins): fast and effective, used for matching between consecutive frames (other strategies exist as well, e.g., consensus-angle checks)
  2. Epipolar constraint (because of distortion the epipolar line is not exact; the tolerance band around it may need to be widened)

Code:
Summary:

  1. First accelerate the matching, then filter out mismatches.
  2. Acceleration: when a projection relation or geometric constraint is available, match by projection first; when none is available, match by BoW first.
  3. After BoW matching (via the FeatureVector), compute an Rt or a Sim3 and run a second pass to fill in the missed matches.
  4. Outlier filtering: the dominant-rotation check is almost always applied; when an F matrix is available the epipolar constraint is applied too; features already matched or of poor quality are skipped.
  5. Modules that use matching: initialization (small-window matching), tracking (adjacent frames; current frame vs. local map), loop closing (between keyframes; keyframe vs. local map), relocalization (accelerated matching of the current frame against a keyframe's 3D points: BoW first, then projection).
  6. Fuse is a nice touch.

SearchByProjection (projection): the 3D points corresponding to 2D features in one frame are projected into the other frame and matched against its 2D features

  1. Tracking

Matching the current frame against the local map's 3D points

SearchByProjection: the local map's 3D points are reprojected into the current frame and matching 2D features are searched for nearby. The search radius r varies with the viewing angle and is small when the camera looks straight at the point. Candidates are scored with DescriptorDistance, and the ratio of the best to the second-best distance is checked; the closer the ratio is to 0, the more distinctive the match. The ratio threshold is set to 0.9 during initialization and can be set to 0.7 during tracking. This applies whenever there are MapPoint 3D points to project into the image; a sketch of the ratio test follows.
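A sketch of the best/second-best ratio test described above. queryDesc stands for the MapPoint's descriptor, frameDescs for the frame's descriptor matrix, candidates for the feature indices found inside the search radius; distance would be ORBmatcher::DescriptorDistance; the wrapper function itself is an assumption.

#include <vector>
#include <opencv2/core/core.hpp>

int ratioTestMatch(const cv::Mat& queryDesc, const cv::Mat& frameDescs,
                   const std::vector<size_t>& candidates,
                   int (*distance)(const cv::Mat&, const cv::Mat&),
                   float nnRatio,            // 0.9 for initialization, 0.7 for tracking
                   int thHigh = 100)
{
    int bestDist = 256, bestDist2 = 256, bestIdx = -1;
    for (size_t idx : candidates)
    {
        int dist = distance(queryDesc, frameDescs.row((int)idx));
        if (dist < bestDist)       { bestDist2 = bestDist; bestDist = dist; bestIdx = (int)idx; }
        else if (dist < bestDist2) { bestDist2 = dist; }
    }
    // accept only if good in absolute terms AND clearly better than the runner-up
    if (bestDist <= thHigh && bestDist < nnRatio * bestDist2)
        return bestIdx;
    return -1;   // rejected
}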

Matching the current frame against the previous frame's 3D points

Unlike the case above, frame-to-frame matching can use not only projection to shrink the search window and speed up matching, but also the dominant-rotation check to reject mismatches!

SearchByProjection tracks the previous frame's MapPoints (3D points) from the current frame, which chains observations together: multiple frames end up tracking the same local-map MapPoint.

Open question: how is the local map maintained, and which frame contributes the descriptor of a local-map 3D point? Worth coming back to.

  2. Loop closing

Matching a keyframe against the local map

SearchByProjection matches a keyframe against local 3D points; once a loop has been established, the local map's 3D points are used to accelerate feature matching between the current frame and the keyframe.

  3. Relocalization

Accelerated matching of the current frame against a keyframe's 3D points

SearchByBoW (bag of words): matching between two frames accelerated by the vocabulary tree

Unlike projection, the vocabulary tree needs no prior Rt (nothing is projected), but it is somewhat slower. It is the fallback when nothing better is available, e.g., loop detection.

  1. Relocalization / when projection-based tracking fails

Updating the MapPoint associations between the current frame and a keyframe

Via the FeatureVector (which records NodeIds and their corresponding feature indices), features under the same NodeId in the two frames are matched against each other, then filtered by the dominant-rotation check.

  2. Loop closing

Feature matching between the two keyframes of a loop-closure candidate pair

Via the FeatureVector (which records NodeIds and their corresponding feature indices), features under the same NodeId in the two frames are matched against each other, then filtered by the dominant-rotation check.

SearchBySim3 (projection): a second pass after SearchByBoW to fill in missed matches

Sim3 carries scale information, and the search can run in both directions: both keyframes have MapPoints, so each side can be matched through the Sim3. A sketch of applying a Sim3 follows.
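A sketch of moving a 3D point between the two keyframes' camera frames with a Sim3 (s12, R12, t12), the same transform SearchBySim3 applies internally; the helper names are assumptions and p1/p2 are 3x1 CV_32F points.

#include <opencv2/core/core.hpp>

cv::Mat sim3Apply(float s12, const cv::Mat& R12, const cv::Mat& t12, const cv::Mat& p2)
{
    return s12 * R12 * p2 + t12;                  // camera2 -> camera1
}

cv::Mat sim3ApplyInverse(float s12, const cv::Mat& R12, const cv::Mat& t12, const cv::Mat& p1)
{
    return (1.0f / s12) * R12.t() * (p1 - t12);   // camera1 -> camera2
}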

Loop closing

  1. Use BoW-accelerated descriptor matching, then compute a rough Sim3 between the current keyframe and the loop-candidate keyframe with RANSAC (current keyframe - loop candidate).
  2. With the estimated Sim3, project the 3D points to find more matches, then refine the Sim3 by optimization (current keyframe - loop candidate).
  3. Match the MapPoints of the loop-candidate keyframe and of its covisible keyframes against the current keyframe's points (current keyframe - loop candidate + connected keyframes).

SearchForInitialization (projection-like): each feature of one frame is matched inside a window around the same location in the other frame

Initialization

Matching during initialization: given two frames and their feature points, draw a window around each feature of the first frame and match inside that window in the second frame.

SearchForTriangulation (bag of words + epipolar geometry): usable once the F matrix has been computed

Using the fundamental matrix F12, find feature matches between pKF1 and pKF2. Purpose: when a feature in pKF1 has no associated 3D point, the matched features are triangulated into new 3D points.

Each feature of the left image is checked against all features of the right image under the same node; a pair that satisfies the epipolar constraint is a match, and mismatches are removed by the distance threshold and angle voting.

Fuse: project MapPoints into a keyframe

Used by local mapping and loop closing

If the keyframe does not yet observe the point, the point is added to the keyframe; if the keyframe already has a point there, the two are compared and the one with more observations replaces the one with fewer.

Very long code follows…

ORBmatcher.cpp
/**
* This file is part of ORB-SLAM2.
*
* Copyright (C) 2014-2016 Raúl Mur-Artal <raulmur at unizar dot es> (University of Zaragoza)
* For more information see <https://github.com/raulmur/ORB_SLAM2>
*
* ORB-SLAM2 is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* ORB-SLAM2 is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with ORB-SLAM2. If not, see <http://www.gnu.org/licenses/>.
*/

#include "ORBmatcher.h"

#include<limits.h>

#include<opencv2/core/core.hpp>
#include<opencv2/features2d/features2d.hpp>

#include "Thirdparty/DBoW2/DBoW2/FeatureVector.h"

#include<stdint.h>

using namespace std;

namespace ORB_SLAM2
{

const int ORBmatcher::TH_HIGH = 100;
const int ORBmatcher::TH_LOW = 50;
const int ORBmatcher::HISTO_LENGTH = 30;

/**
 * Constructor
 * @param nnratio  ratio of the best and the second score
 * @param checkOri check orientation
 */
ORBmatcher::ORBmatcher(float nnratio, bool checkOri): mfNNratio(nnratio), mbCheckOrientation(checkOri)
{
}

/**
 * @brief For each local 3D point, project it into the frame and search a small window
 * for the best-matching 2D feature, so that the Frame tracks the Local MapPoints.
 * Used during tracking to track local 3D points from the current frame.
 *
 * Projects the Local MapPoints into the current frame, thereby adding MapPoints to it.
 * SearchLocalPoints() has already reprojected the Local MapPoints (isInFrustum()) into
 * the current frame and flagged whether each lies in the current view (mbTrackInView).
 * For these MapPoints, matches near the projections are selected by descriptor distance
 * and finally pruned by the rotation-voting scheme.
 * @param  F           current frame
 * @param  vpMapPoints Local MapPoints
 * @param  th          search-range factor: r = r * th * ScaleFactor
 * @return             number of successful matches
 * @see SearchLocalPoints() isInFrustum()
 */
int ORBmatcher::SearchByProjection(Frame &F, const vector<MapPoint*> &vpMapPoints, const float th)
{int nmatches=0;const bool bFactor = th!=1.0;for(size_t iMP=0; iMP<vpMapPoints.size(); iMP++){MapPoint* pMP = vpMapPoints[iMP];// 判断该点是否要投影if(!pMP->mbTrackInView)continue;if(pMP->isBad())continue;// step1:通过距离预测特征点所在的金字塔层数const int &nPredictedLevel = pMP->mnTrackScaleLevel;// The size of the window will depend on the viewing direction// step2:根据观测到该3D点的视角确定搜索窗口的大小, 若相机正对这该3D点则r取一个较小的值(mTrackViewCos>0.998?2.5:4.0)float r = RadiusByViewingCos(pMP->mTrackViewCos);if(bFactor)r*=th;// (pMP->mTrackProjX, pMP->mTrackProjY):图像特征点坐标// r*F.mvScaleFactors[nPredictedLevel]:搜索范围// nPredictedLevel-1:miniLevel// nPredictedLevel:maxLevel// step3:在2D投影点附近一定范围内搜索属于miniLevel~maxLevel层的特征点 ---> vIndicesconst vector<size_t> vIndices =F.GetFeaturesInArea(pMP->mTrackProjX,pMP->mTrackProjY,r*F.mvScaleFactors[nPredictedLevel],nPredictedLevel-1,nPredictedLevel);if(vIndices.empty())continue;const cv::Mat MPdescriptor = pMP->GetDescriptor();int bestDist=256;int bestLevel= -1;int bestDist2=256;int bestLevel2 = -1;int bestIdx =-1 ;// Get best and second matches with near keypoints// step4:在vIndices内找到最佳匹配与次佳匹配,如果最优匹配误差小于阈值,且最优匹配明显优于次优匹配,则匹配3D点-2D特征点匹配关联成功for(vector<size_t>::const_iterator vit=vIndices.begin(), vend=vIndices.end(); vit!=vend; vit++){const size_t idx = *vit;// 如果Frame中的该兴趣点已经有对应的MapPoint了,则退出该次循环if(F.mvpMapPoints[idx])if(F.mvpMapPoints[idx]->Observations()>0)continue;if(F.mvuRight[idx]>0){const float er = fabs(pMP->mTrackProjXR-F.mvuRight[idx]);if(er>r*F.mvScaleFactors[nPredictedLevel])continue;}const cv::Mat &d = F.mDescriptors.row(idx);const int dist = DescriptorDistance(MPdescriptor,d);// 记录最优匹配和次优匹配if(dist<bestDist){bestDist2=bestDist;bestDist=dist;bestLevel2 = bestLevel;bestLevel = F.mvKeysUn[idx].octave;bestIdx=idx;}else if(dist<bestDist2){bestLevel2 = F.mvKeysUn[idx].octave;bestDist2=dist;}}// Apply ratio to second match (only if best and second are in the same scale level)if(bestDist<=TH_HIGH){if(bestLevel==bestLevel2 && bestDist>mfNNratio*bestDist2)continue;F.mvpMapPoints[bestIdx]=pMP; // 为Frame中的兴趣点增加对应的MapPointnmatches++;}}return nmatches;
}

// Choose the SearchByProjection search radius according to the viewing angle
float ORBmatcher::RadiusByViewingCos(const float &viewCos)
{
    if(viewCos>0.998)
        return 2.5;
    else
        return 4.0;
}

bool ORBmatcher::CheckDistEpipolarLine(const cv::KeyPoint &kp1,const cv::KeyPoint &kp2,const cv::Mat &F12,const KeyFrame* pKF2)
{
    // Epipolar line in second image l = x1'F12 = [a b c]
    // the epipolar line in pKF2 corresponding to kp1
    const float a = kp1.pt.x*F12.at<float>(0,0)+kp1.pt.y*F12.at<float>(1,0)+F12.at<float>(2,0);
    const float b = kp1.pt.x*F12.at<float>(0,1)+kp1.pt.y*F12.at<float>(1,1)+F12.at<float>(2,1);
    const float c = kp1.pt.x*F12.at<float>(0,2)+kp1.pt.y*F12.at<float>(1,2)+F12.at<float>(2,2);

    // distance from kp2 to the epipolar line:
    // line l: ax + by + c = 0
    // distance from (u,v) to l: |au+bv+c| / sqrt(a^2+b^2)
    const float num = a*kp2.pt.x+b*kp2.pt.y+c;
    const float den = a*a+b*b;

    if(den==0)
        return false;

    const float dsqr = num*num/den;

    // the larger the scale, the larger the tolerance:
    // at the bottom pyramid level a pixel is one pixel; one level up, a pixel
    // covers 1.2 pixels of the bottom level (for a pyramid scale of 1.2)
    return dsqr<3.84*pKF2->mvLevelSigma2[kp2.octave];
}

/**
 * @brief Feature matching between a keyframe and the current frame, accelerated by the vocabulary tree
 *
 * Quickly matches the features of pKF and F via BoW (features not under the same node skip matching entirely).
 * Features under the same node are matched by descriptor distance.
 * For each match, the MapPoint of the pKF feature updates the MapPoint of the corresponding F feature.
 * Every feature corresponds to one MapPoint, so each pKF feature's MapPoint becomes the MapPoint of its match in F.
 * Mismatches are pruned by the distance threshold, the ratio threshold, and angle voting.
 * @param  pKF               KeyFrame
 * @param  F                 Current Frame
 * @param  vpMapPointMatches matches for the MapPoints in F; NULL means unmatched
 * @return                   number of successful matches
 */
int ORBmatcher::SearchByBoW(KeyFrame* pKF,Frame &F, vector<MapPoint*> &vpMapPointMatches)
{const vector<MapPoint*> vpMapPointsKF = pKF->GetMapPointMatches();vpMapPointMatches = vector<MapPoint*>(F.N,static_cast<MapPoint*>(NULL));const DBoW2::FeatureVector &vFeatVecKF = pKF->mFeatVec;int nmatches=0;vector<int> rotHist[HISTO_LENGTH];for(int i=0;i<HISTO_LENGTH;i++)rotHist[i].reserve(500);const float factor = HISTO_LENGTH/360.0f;// We perform the matching over ORB that belong to the same vocabulary node (at a certain level)// 将属于同一节点(特定层)的ORB特征进行匹配DBoW2::FeatureVector::const_iterator KFit = vFeatVecKF.begin();DBoW2::FeatureVector::const_iterator Fit = F.mFeatVec.begin();DBoW2::FeatureVector::const_iterator KFend = vFeatVecKF.end();DBoW2::FeatureVector::const_iterator Fend = F.mFeatVec.end();while(KFit != KFend && Fit != Fend){if(KFit->first == Fit->first) //步骤1:分别取出属于同一node的ORB特征点(只有属于同一node,才有可能是匹配点){const vector<unsigned int> vIndicesKF = KFit->second;const vector<unsigned int> vIndicesF = Fit->second;// 步骤2:遍历KF中属于该node的特征点for(size_t iKF=0; iKF<vIndicesKF.size(); iKF++){const unsigned int realIdxKF = vIndicesKF[iKF];MapPoint* pMP = vpMapPointsKF[realIdxKF]; // 取出KF中该特征对应的MapPointif(!pMP)continue;if(pMP->isBad())continue;const cv::Mat &dKF= pKF->mDescriptors.row(realIdxKF); // 取出KF中该特征对应的描述子int bestDist1=256; // 最好的距离(最小距离)int bestIdxF =-1 ;int bestDist2=256; // 倒数第二好距离(倒数第二小距离)// 步骤3:遍历F中属于该node的特征点,找到了最佳匹配点for(size_t iF=0; iF<vIndicesF.size(); iF++){const unsigned int realIdxF = vIndicesF[iF];if(vpMapPointMatches[realIdxF])// 表明这个点已经被匹配过了,不再匹配,加快速度continue;const cv::Mat &dF = F.mDescriptors.row(realIdxF); // 取出F中该特征对应的描述子const int dist =  DescriptorDistance(dKF,dF); // 求描述子的距离if(dist<bestDist1)// dist < bestDist1 < bestDist2,更新bestDist1 bestDist2{bestDist2=bestDist1;bestDist1=dist;bestIdxF=realIdxF;}else if(dist<bestDist2)// bestDist1 < dist < bestDist2,更新bestDist2{bestDist2=dist;}}// 步骤4:根据阈值 和 角度投票剔除误匹配if(bestDist1<=TH_LOW) // 匹配距离(误差)小于阈值{// trick!// 最佳匹配比次佳匹配明显要好,那么最佳匹配才真正靠谱if(static_cast<float>(bestDist1)<mfNNratio*static_cast<float>(bestDist2)){// 步骤5:更新特征点的MapPointvpMapPointMatches[bestIdxF]=pMP;const cv::KeyPoint &kp = pKF->mvKeysUn[realIdxKF];if(mbCheckOrientation){// trick!// angle:每个特征点在提取描述子时的旋转主方向角度,如果图像旋转了,这个角度将发生改变// 所有的特征点的角度变化应该是一致的,通过直方图统计得到最准确的角度变化值float rot = kp.angle-F.mvKeys[bestIdxF].angle;// 该特征点的角度变化值if(rot<0.0)rot+=360.0f;int bin = round(rot*factor);// 将rot分配到bin组if(bin==HISTO_LENGTH)bin=0;assert(bin>=0 && bin<HISTO_LENGTH);rotHist[bin].push_back(bestIdxF);}nmatches++;}}}KFit++;Fit++;}else if(KFit->first < Fit->first){KFit = vFeatVecKF.lower_bound(Fit->first);}else{Fit = F.mFeatVec.lower_bound(KFit->first);}}// 根据方向剔除误匹配的点if(mbCheckOrientation){int ind1=-1;int ind2=-1;int ind3=-1;// 计算rotHist中最大的三个的indexComputeThreeMaxima(rotHist,HISTO_LENGTH,ind1,ind2,ind3);for(int i=0; i<HISTO_LENGTH; i++){// 如果特征点的旋转角度变化量属于这三个组,则保留if(i==ind1 || i==ind2 || i==ind3)continue;// 将除了ind1 ind2 ind3以外的匹配点去掉for(size_t j=0, jend=rotHist[i].size(); j<jend; j++){vpMapPointMatches[rotHist[i][j]]=static_cast<MapPoint*>(NULL);nmatches--;}}}return nmatches;
}

// Used in loop detection to associate MapPoints with a keyframe's features.
// Each MapPoint in vpPoints is projected into pKF via the Sim3 transform Scw, a search region
// is determined from the scale, and the MapPoint's descriptor is matched against the features
// in that region; if the matching error is below TH_LOW the match succeeds and vpMatched is updated
int ORBmatcher::SearchByProjection(KeyFrame* pKF, cv::Mat Scw, const vector<MapPoint*> &vpPoints, vector<MapPoint*> &vpMatched, int th)
{// Get Calibration Parameters for later projectionconst float &fx = pKF->fx;const float &fy = pKF->fy;const float &cx = pKF->cx;const float &cy = pKF->cy;// Decompose Scw// Scwd的形式为[sR, st]cv::Mat sRcw = Scw.rowRange(0,3).colRange(0,3);const float scw = sqrt(sRcw.row(0).dot(sRcw.row(0)));   // 计算得到尺度scv::Mat Rcw = sRcw/scw;cv::Mat tcw = Scw.rowRange(0,3).col(3)/scw; // pKF坐标系下,世界坐标系到pKF的位移,方向由世界坐标系指向pKFcv::Mat Ow = -Rcw.t()*tcw;  // 世界坐标系下,pKF到世界坐标系的位移(世界坐标系原点相对pKF的位置),方向由pKF指向世界坐标系// Set of MapPoints already found in the KeyFrame// 使用set类型,并去除没有匹配的点,用于快速检索某个MapPoint是否有匹配set<MapPoint*> spAlreadyFound(vpMatched.begin(), vpMatched.end());spAlreadyFound.erase(static_cast<MapPoint*>(NULL));int nmatches=0;// For each Candidate MapPoint Project and Match// 遍历所有的MapPoints,尝试为每个MapPoint找到匹配的特征点for(int iMP=0, iendMP=vpPoints.size(); iMP<iendMP; iMP++){MapPoint* pMP = vpPoints[iMP];// Discard Bad MapPoints and already found// 丢弃坏的MapPoints和已经匹配上的MapPointsif(pMP->isBad() || spAlreadyFound.count(pMP))continue;// Get 3D Coords.cv::Mat p3Dw = pMP->GetWorldPos();// Transform into Camera Coords.cv::Mat p3Dc = Rcw*p3Dw+tcw;// Depth must be positiveif(p3Dc.at<float>(2)<0.0)continue;// Project into Imageconst float invz = 1/p3Dc.at<float>(2);const float x = p3Dc.at<float>(0)*invz;const float y = p3Dc.at<float>(1)*invz;const float u = fx*x+cx;const float v = fy*y+cy;// Point must be inside the imageif(!pKF->IsInImage(u,v))continue;// Depth must be inside the scale invariance region of the point// 根据是否满足MapPoint的有效距离范围,来推断这个MapPoint是否有可能被该关键帧观测到const float maxDistance = pMP->GetMaxDistanceInvariance();const float minDistance = pMP->GetMinDistanceInvariance();cv::Mat PO = p3Dw-Ow;const float dist = cv::norm(PO);if(dist<minDistance || dist>maxDistance)continue;// Viewing angle must be less than 60 degcv::Mat Pn = pMP->GetNormal();// 该关键帧观测该MapPoint的角度和观测到该MapPoint的平均观测角度差别太大if(PO.dot(Pn)<0.5*dist)continue;int nPredictedLevel = pMP->PredictScale(dist,pKF);// Search in a radius// 根据尺度确定搜索半径const float radius = th*pKF->mvScaleFactors[nPredictedLevel];const vector<size_t> vIndices = pKF->GetFeaturesInArea(u,v,radius);if(vIndices.empty())continue;// Match to the most similar keypoint in the radiusconst cv::Mat dMP = pMP->GetDescriptor();int bestDist = 256;int bestIdx = -1;// 遍历搜索区域内所有特征点,与该MapPoint的描述子进行匹配for(vector<size_t>::const_iterator vit=vIndices.begin(), vend=vIndices.end(); vit!=vend; vit++){const size_t idx = *vit;if(vpMatched[idx])continue;const int &kpLevel= pKF->mvKeysUn[idx].octave;if(kpLevel<nPredictedLevel-1 || kpLevel>nPredictedLevel)continue;const cv::Mat &dKF = pKF->mDescriptors.row(idx);const int dist = DescriptorDistance(dMP,dKF);if(dist<bestDist){bestDist = dist;bestIdx = idx;}}// 该MapPoint与bestIdx对应的特征点匹配成功if(bestDist<=TH_LOW){vpMatched[bestIdx]=pMP;nmatches++;}}return nmatches;
}

// During initialization F1 and F2 are assumed to differ little, so matching is restricted
// to a windowSize neighborhood; the external caller uses windowSize = 100
int ORBmatcher::SearchForInitialization(Frame &F1, Frame &F2, vector<cv::Point2f> &vbPrevMatched, vector<int> &vnMatches12, int windowSize)
{int nmatches=0;vnMatches12 = vector<int>(F1.mvKeysUn.size(),-1);vector<int> rotHist[HISTO_LENGTH];for(int i=0;i<HISTO_LENGTH;i++)rotHist[i].reserve(500);const float factor = HISTO_LENGTH/360.0f;vector<int> vMatchedDistance(F2.mvKeysUn.size(),INT_MAX);vector<int> vnMatches21(F2.mvKeysUn.size(),-1);for(size_t i1=0, iend1=F1.mvKeysUn.size(); i1<iend1; i1++){cv::KeyPoint kp1 = F1.mvKeysUn[i1];int level1 = kp1.octave;if(level1>0)continue;vector<size_t> vIndices2 = F2.GetFeaturesInArea(vbPrevMatched[i1].x,vbPrevMatched[i1].y, windowSize,level1,level1);if(vIndices2.empty())continue;cv::Mat d1 = F1.mDescriptors.row(i1);int bestDist = INT_MAX;int bestDist2 = INT_MAX;int bestIdx2 = -1;for(vector<size_t>::iterator vit=vIndices2.begin(); vit!=vIndices2.end(); vit++){size_t i2 = *vit;cv::Mat d2 = F2.mDescriptors.row(i2);int dist = DescriptorDistance(d1,d2);if(vMatchedDistance[i2]<=dist)continue;if(dist<bestDist){bestDist2=bestDist;bestDist=dist;bestIdx2=i2;}else if(dist<bestDist2){bestDist2=dist;}}// 详见SearchByBoW(KeyFrame* pKF,Frame &F, vector<MapPoint*> &vpMapPointMatches)函数步骤4if(bestDist<=TH_LOW){if(bestDist<(float)bestDist2*mfNNratio){if(vnMatches21[bestIdx2]>=0){vnMatches12[vnMatches21[bestIdx2]]=-1;nmatches--;}vnMatches12[i1]=bestIdx2;vnMatches21[bestIdx2]=i1;vMatchedDistance[bestIdx2]=bestDist;nmatches++;if(mbCheckOrientation){float rot = F1.mvKeysUn[i1].angle-F2.mvKeysUn[bestIdx2].angle;if(rot<0.0)rot+=360.0f;int bin = round(rot*factor);if(bin==HISTO_LENGTH)bin=0;assert(bin>=0 && bin<HISTO_LENGTH);rotHist[bin].push_back(i1);}}}}if(mbCheckOrientation){int ind1=-1;int ind2=-1;int ind3=-1;ComputeThreeMaxima(rotHist,HISTO_LENGTH,ind1,ind2,ind3);for(int i=0; i<HISTO_LENGTH; i++){if(i==ind1 || i==ind2 || i==ind3)continue;for(size_t j=0, jend=rotHist[i].size(); j<jend; j++){int idx1 = rotHist[i][j];if(vnMatches12[idx1]>=0){vnMatches12[idx1]=-1;nmatches--;}}}}//Update prev matchedfor(size_t i1=0, iend1=vnMatches12.size(); i1<iend1; i1++)if(vnMatches12[i1]>=0)vbPrevMatched[i1]=F2.mvKeysUn[vnMatches12[i1]].pt;return nmatches;
}

/**
 * @brief Feature matching between two keyframes, accelerated by the vocabulary tree.
 * Used for keyframe-to-keyframe matching during loop detection.
 *
 * Quickly matches the features of pKF1 and pKF2 via BoW (features not under the same node skip matching entirely).
 * Features under the same node are matched by descriptor distance, and vpMatches12 is updated accordingly.
 * Mismatches are pruned by the distance threshold, the ratio threshold, and angle voting.
 * @param  pKF1        KeyFrame1
 * @param  pKF2        KeyFrame2
 * @param  vpMatches12 MapPoints of pKF2 matched against pKF1; null means no match
 * @return             number of successful matches
 */
int ORBmatcher::SearchByBoW(KeyFrame *pKF1, KeyFrame *pKF2, vector<MapPoint *> &vpMatches12)
{// 详细注释可参见:SearchByBoW(KeyFrame* pKF,Frame &F, vector<MapPoint*> &vpMapPointMatches)const vector<cv::KeyPoint> &vKeysUn1 = pKF1->mvKeysUn;const DBoW2::FeatureVector &vFeatVec1 = pKF1->mFeatVec;const vector<MapPoint*> vpMapPoints1 = pKF1->GetMapPointMatches();const cv::Mat &Descriptors1 = pKF1->mDescriptors;const vector<cv::KeyPoint> &vKeysUn2 = pKF2->mvKeysUn;const DBoW2::FeatureVector &vFeatVec2 = pKF2->mFeatVec;const vector<MapPoint*> vpMapPoints2 = pKF2->GetMapPointMatches();const cv::Mat &Descriptors2 = pKF2->mDescriptors;vpMatches12 = vector<MapPoint*>(vpMapPoints1.size(),static_cast<MapPoint*>(NULL));vector<bool> vbMatched2(vpMapPoints2.size(),false);vector<int> rotHist[HISTO_LENGTH];for(int i=0;i<HISTO_LENGTH;i++)rotHist[i].reserve(500);const float factor = HISTO_LENGTH/360.0f;int nmatches = 0;DBoW2::FeatureVector::const_iterator f1it = vFeatVec1.begin();DBoW2::FeatureVector::const_iterator f2it = vFeatVec2.begin();DBoW2::FeatureVector::const_iterator f1end = vFeatVec1.end();DBoW2::FeatureVector::const_iterator f2end = vFeatVec2.end();while(f1it != f1end && f2it != f2end){if(f1it->first == f2it->first)//步骤1:分别取出属于同一node的ORB特征点(只有属于同一node,才有可能是匹配点){// 步骤2:遍历KF中属于该node的特征点for(size_t i1=0, iend1=f1it->second.size(); i1<iend1; i1++){const size_t idx1 = f1it->second[i1];MapPoint* pMP1 = vpMapPoints1[idx1];if(!pMP1)continue;if(pMP1->isBad())continue;const cv::Mat &d1 = Descriptors1.row(idx1);int bestDist1=256;int bestIdx2 =-1 ;int bestDist2=256;// 步骤3:遍历F中属于该node的特征点,找到了最佳匹配点for(size_t i2=0, iend2=f2it->second.size(); i2<iend2; i2++){const size_t idx2 = f2it->second[i2];MapPoint* pMP2 = vpMapPoints2[idx2];if(vbMatched2[idx2] || !pMP2)continue;if(pMP2->isBad())continue;const cv::Mat &d2 = Descriptors2.row(idx2);int dist = DescriptorDistance(d1,d2);if(dist<bestDist1){bestDist2=bestDist1;bestDist1=dist;bestIdx2=idx2;}else if(dist<bestDist2){bestDist2=dist;}}// 步骤4:根据阈值 和 角度投票剔除误匹配// 详见SearchByBoW(KeyFrame* pKF,Frame &F, vector<MapPoint*> &vpMapPointMatches)函数步骤4if(bestDist1<TH_LOW){if(static_cast<float>(bestDist1)<mfNNratio*static_cast<float>(bestDist2)){vpMatches12[idx1]=vpMapPoints2[bestIdx2];vbMatched2[bestIdx2]=true;if(mbCheckOrientation){float rot = vKeysUn1[idx1].angle-vKeysUn2[bestIdx2].angle;if(rot<0.0)rot+=360.0f;int bin = round(rot*factor);if(bin==HISTO_LENGTH)bin=0;assert(bin>=0 && bin<HISTO_LENGTH);rotHist[bin].push_back(idx1);}nmatches++;}}}f1it++;f2it++;}else if(f1it->first < f2it->first){f1it = vFeatVec1.lower_bound(f2it->first);}else{f2it = vFeatVec2.lower_bound(f1it->first);}}if(mbCheckOrientation){int ind1=-1;int ind2=-1;int ind3=-1;ComputeThreeMaxima(rotHist,HISTO_LENGTH,ind1,ind2,ind3);for(int i=0; i<HISTO_LENGTH; i++){if(i==ind1 || i==ind2 || i==ind3)continue;for(size_t j=0, jend=rotHist[i].size(); j<jend; j++){vpMatches12[rotHist[i][j]]=static_cast<MapPoint*>(NULL);nmatches--;}}}return nmatches;
}

/**
 * @brief Using the fundamental matrix F12, find feature matches between pKF1 and pKF2.
 * Purpose: when features in pKF1 have no associated 3D points, the matched feature
 * pairs are used to triangulate new 3D points.
 *
 * @param pKF1          keyframe 1
 * @param pKF2          keyframe 2
 * @param F12           fundamental matrix
 * @param vMatchedPairs matched feature pairs, each feature represented by its index in its keyframe
 * @param bOnlyStereo   for stereo/RGB-D, require the features to have right-image matches
 * @return              number of successful matches
 */
int ORBmatcher::SearchForTriangulation(KeyFrame *pKF1, KeyFrame *pKF2, cv::Mat F12,vector<pair<size_t, size_t> > &vMatchedPairs, const bool bOnlyStereo)
{const DBoW2::FeatureVector &vFeatVec1 = pKF1->mFeatVec;const DBoW2::FeatureVector &vFeatVec2 = pKF2->mFeatVec;// Compute epipole in second image// 计算KF1的相机中心在KF2图像平面的坐标,即极点坐标cv::Mat Cw = pKF1->GetCameraCenter(); // tw2c1cv::Mat R2w = pKF2->GetRotation();    // Rc2wcv::Mat t2w = pKF2->GetTranslation(); // tc2wcv::Mat C2 = R2w*Cw+t2w; // tc2c1 KF1的相机中心在KF2坐标系的表示const float invz = 1.0f/C2.at<float>(2);// 步骤0:得到KF1的相机光心在KF2中的坐标(KF1在KF2中的极点坐标)const float ex =pKF2->fx*C2.at<float>(0)*invz+pKF2->cx;const float ey =pKF2->fy*C2.at<float>(1)*invz+pKF2->cy;// Find matches between not tracked keypoints// Matching speed-up by ORB Vocabulary// Compare only ORB that share the same nodeint nmatches=0;vector<bool> vbMatched2(pKF2->N,false);vector<int> vMatches12(pKF1->N,-1);vector<int> rotHist[HISTO_LENGTH];for(int i=0;i<HISTO_LENGTH;i++)rotHist[i].reserve(500);const float factor = HISTO_LENGTH/360.0f;// We perform the matching over ORB that belong to the same vocabulary node (at a certain level)// 将属于同一节点(特定层)的ORB特征进行匹配// FeatureVector的数据结构类似于:{(node1,feature_vector1) (node2,feature_vector2)...}// f1it->first对应node编号,f1it->second对应属于该node的所有特特征点编号DBoW2::FeatureVector::const_iterator f1it = vFeatVec1.begin();DBoW2::FeatureVector::const_iterator f2it = vFeatVec2.begin();DBoW2::FeatureVector::const_iterator f1end = vFeatVec1.end();DBoW2::FeatureVector::const_iterator f2end = vFeatVec2.end();// 步骤1:遍历pKF1和pKF2中的node节点while(f1it!=f1end && f2it!=f2end){// 如果f1it和f2it属于同一个node节点if(f1it->first == f2it->first){// 步骤2:遍历该node节点下(f1it->first)的所有特征点for(size_t i1=0, iend1=f1it->second.size(); i1<iend1; i1++){// 获取pKF1中属于该node节点的所有特征点索引const size_t idx1 = f1it->second[i1];// 步骤2.1:通过特征点索引idx1在pKF1中取出对应的MapPointMapPoint* pMP1 = pKF1->GetMapPoint(idx1);// If there is already a MapPoint skip// !!!!!!由于寻找的是未匹配的特征点,所以pMP1应该为NULLif(pMP1)continue;// 如果mvuRight中的值大于0,表示是双目,且该特征点有深度值const bool bStereo1 = pKF1->mvuRight[idx1]>=0;// 不考虑双目深度的有效性if(bOnlyStereo)if(!bStereo1)continue;// 步骤2.2:通过特征点索引idx1在pKF1中取出对应的特征点const cv::KeyPoint &kp1 = pKF1->mvKeysUn[idx1];// 步骤2.3:通过特征点索引idx1在pKF1中取出对应的特征点的描述子const cv::Mat &d1 = pKF1->mDescriptors.row(idx1);int bestDist = TH_LOW;int bestIdx2 = -1;// 步骤3:遍历该node节点下(f2it->first)的所有特征点for(size_t i2=0, iend2=f2it->second.size(); i2<iend2; i2++){// 获取pKF2中属于该node节点的所有特征点索引size_t idx2 = f2it->second[i2];// 步骤3.1:通过特征点索引idx2在pKF2中取出对应的MapPointMapPoint* pMP2 = pKF2->GetMapPoint(idx2);// If we have already matched or there is a MapPoint skip// 如果pKF2当前特征点索引idx2已经被匹配过或者对应的3d点非空// 那么这个索引idx2就不能被考虑if(vbMatched2[idx2] || pMP2)continue;const bool bStereo2 = pKF2->mvuRight[idx2]>=0;if(bOnlyStereo)if(!bStereo2)continue;// 步骤3.2:通过特征点索引idx2在pKF2中取出对应的特征点的描述子const cv::Mat &d2 = pKF2->mDescriptors.row(idx2);// 计算idx1与idx2在两个关键帧中对应特征点的描述子距离const int dist = DescriptorDistance(d1,d2);if(dist>TH_LOW || dist>bestDist)continue;// 步骤3.3:通过特征点索引idx2在pKF2中取出对应的特征点const cv::KeyPoint &kp2 = pKF2->mvKeysUn[idx2];if(!bStereo1 && !bStereo2){const float distex = ex-kp2.pt.x;const float distey = ey-kp2.pt.y;// !!!!该特征点距离极点太近,表明kp2对应的MapPoint距离pKF1相机太近if(distex*distex+distey*distey<100*pKF2->mvScaleFactors[kp2.octave])continue;}// 步骤4:计算特征点kp2到kp1极线(kp1对应pKF2的一条极线)的距离是否小于阈值if(CheckDistEpipolarLine(kp1,kp2,F12,pKF2)){bestIdx2 = idx2;bestDist = dist;}}// 步骤1、2、3、4总结下来就是:将左图像的每个特征点与右图像同一node节点的所有特征点// 依次检测,判断是否满足对极几何约束,满足约束就是匹配的特征点// 详见SearchByBoW(KeyFrame* pKF,Frame &F, vector<MapPoint*> &vpMapPointMatches)函数步骤4if(bestIdx2>=0){const cv::KeyPoint &kp2 = 
pKF2->mvKeysUn[bestIdx2];vMatches12[idx1]=bestIdx2;vbMatched2[bestIdx2]=true;nmatches++;if(mbCheckOrientation){float rot = kp1.angle-kp2.angle;if(rot<0.0)rot+=360.0f;int bin = round(rot*factor);if(bin==HISTO_LENGTH)bin=0;assert(bin>=0 && bin<HISTO_LENGTH);rotHist[bin].push_back(idx1);}}}f1it++;f2it++;}else if(f1it->first < f2it->first){f1it = vFeatVec1.lower_bound(f2it->first);}else{f2it = vFeatVec2.lower_bound(f1it->first);}}if(mbCheckOrientation){int ind1=-1;int ind2=-1;int ind3=-1;ComputeThreeMaxima(rotHist,HISTO_LENGTH,ind1,ind2,ind3);for(int i=0; i<HISTO_LENGTH; i++){if(i==ind1 || i==ind2 || i==ind3)continue;for(size_t j=0, jend=rotHist[i].size(); j<jend; j++){vMatches12[rotHist[i][j]]=-1;nmatches--;}}}vMatchedPairs.clear();vMatchedPairs.reserve(nmatches);for(size_t i=0, iend=vMatches12.size(); i<iend; i++){if(vMatches12[i]<0)continue;vMatchedPairs.push_back(make_pair(i,vMatches12[i]));}return nmatches;
}

/**
 * @brief Project MapPoints (using the keyframe's pose) into keyframe pKF and check for duplicate MapPoints
 * 1. If a MapPoint matches a keyframe feature that already has its own MapPoint, merge the two (keep the one with more observations)
 * 2. If a MapPoint matches a keyframe feature that has no MapPoint yet, add it as that feature's MapPoint
 * @param  pKF         neighboring keyframe
 * @param  vpMapPoints the current keyframe's MapPoints
 * @param  th          search-radius factor
 * @return             number of duplicate MapPoints
 */
int ORBmatcher::Fuse(KeyFrame *pKF, const vector<MapPoint *> &vpMapPoints, const float th)
{cv::Mat Rcw = pKF->GetRotation();cv::Mat tcw = pKF->GetTranslation();const float &fx = pKF->fx;const float &fy = pKF->fy;const float &cx = pKF->cx;const float &cy = pKF->cy;const float &bf = pKF->mbf;cv::Mat Ow = pKF->GetCameraCenter();int nFused=0;const int nMPs = vpMapPoints.size();// 遍历所有的MapPointsfor(int i=0; i<nMPs; i++){MapPoint* pMP = vpMapPoints[i];if(!pMP)continue;if(pMP->isBad() || pMP->IsInKeyFrame(pKF))continue;cv::Mat p3Dw = pMP->GetWorldPos();cv::Mat p3Dc = Rcw*p3Dw + tcw;// Depth must be positiveif(p3Dc.at<float>(2)<0.0f)continue;const float invz = 1/p3Dc.at<float>(2);const float x = p3Dc.at<float>(0)*invz;const float y = p3Dc.at<float>(1)*invz;const float u = fx*x+cx;const float v = fy*y+cy;// 步骤1:得到MapPoint在图像上的投影坐标// Point must be inside the imageif(!pKF->IsInImage(u,v))continue;const float ur = u-bf*invz;const float maxDistance = pMP->GetMaxDistanceInvariance();const float minDistance = pMP->GetMinDistanceInvariance();cv::Mat PO = p3Dw-Ow;const float dist3D = cv::norm(PO);// Depth must be inside the scale pyramid of the imageif(dist3D<minDistance || dist3D>maxDistance )continue;// Viewing angle must be less than 60 degcv::Mat Pn = pMP->GetNormal();if(PO.dot(Pn)<0.5*dist3D)continue;int nPredictedLevel = pMP->PredictScale(dist3D,pKF);// Search in a radiusconst float radius = th*pKF->mvScaleFactors[nPredictedLevel];// 步骤2:根据MapPoint的深度确定尺度,从而确定搜索范围const vector<size_t> vIndices = pKF->GetFeaturesInArea(u,v,radius);if(vIndices.empty())continue;// Match to the most similar keypoint in the radiusconst cv::Mat dMP = pMP->GetDescriptor();int bestDist = 256;int bestIdx = -1;for(vector<size_t>::const_iterator vit=vIndices.begin(), vend=vIndices.end(); vit!=vend; vit++)// 步骤3:遍历搜索范围内的features{const size_t idx = *vit;const cv::KeyPoint &kp = pKF->mvKeysUn[idx];const int &kpLevel= kp.octave;if(kpLevel<nPredictedLevel-1 || kpLevel>nPredictedLevel)continue;// 计算MapPoint投影的坐标与这个区域特征点的距离,如果偏差很大,直接跳过特征点匹配if(pKF->mvuRight[idx]>=0){// Check reprojection error in stereoconst float &kpx = kp.pt.x;const float &kpy = kp.pt.y;const float &kpr = pKF->mvuRight[idx];const float ex = u-kpx;const float ey = v-kpy;const float er = ur-kpr;const float e2 = ex*ex+ey*ey+er*er;if(e2*pKF->mvInvLevelSigma2[kpLevel]>7.8)continue;}else{const float &kpx = kp.pt.x;const float &kpy = kp.pt.y;const float ex = u-kpx;const float ey = v-kpy;const float e2 = ex*ex+ey*ey;// 基于卡方检验计算出的阈值(假设测量有一个像素的偏差)if(e2*pKF->mvInvLevelSigma2[kpLevel]>5.99)continue;}const cv::Mat &dKF = pKF->mDescriptors.row(idx);const int dist = DescriptorDistance(dMP,dKF);if(dist<bestDist)// 找MapPoint在该区域最佳匹配的特征点{bestDist = dist;bestIdx = idx;}}// If there is already a MapPoint replace otherwise add new measurementif(bestDist<=TH_LOW)// 找到了MapPoint在该区域最佳匹配的特征点{MapPoint* pMPinKF = pKF->GetMapPoint(bestIdx);if(pMPinKF)// 如果这个点有对应的MapPoint{if(!pMPinKF->isBad())// 如果这个MapPoint不是bad,选择哪一个呢?{if(pMPinKF->Observations()>pMP->Observations())pMP->Replace(pMPinKF);elsepMPinKF->Replace(pMP);}}else// 如果这个点没有对应的MapPoint{pMP->AddObservation(pKF,bestIdx);pKF->AddMapPoint(pMP,bestIdx);}nFused++;}}return nFused;
}

// Project MapPoints into the KeyFrame (using the Sim3 Scw) and check for duplicate MapPoints.
// Scw is the Sim3 from the world frame to the pKF body frame, used to transform the
// world-frame vpPoints into the body frame
int ORBmatcher::Fuse(KeyFrame *pKF, cv::Mat Scw, const vector<MapPoint *> &vpPoints, float th, vector<MapPoint *> &vpReplacePoint)
{// Get Calibration Parameters for later projectionconst float &fx = pKF->fx;const float &fy = pKF->fy;const float &cx = pKF->cx;const float &cy = pKF->cy;// Decompose Scw// 将Sim3转化为SE3并分解cv::Mat sRcw = Scw.rowRange(0,3).colRange(0,3);const float scw = sqrt(sRcw.row(0).dot(sRcw.row(0)));// 计算得到尺度scv::Mat Rcw = sRcw/scw;// 除掉scv::Mat tcw = Scw.rowRange(0,3).col(3)/scw;// 除掉scv::Mat Ow = -Rcw.t()*tcw;// Set of MapPoints already found in the KeyFrameconst set<MapPoint*> spAlreadyFound = pKF->GetMapPoints();int nFused=0;const int nPoints = vpPoints.size();// For each candidate MapPoint project and match// 遍历所有的MapPointsfor(int iMP=0; iMP<nPoints; iMP++){MapPoint* pMP = vpPoints[iMP];// Discard Bad MapPoints and already foundif(pMP->isBad() || spAlreadyFound.count(pMP))continue;// Get 3D Coords.cv::Mat p3Dw = pMP->GetWorldPos();// Transform into Camera Coords.cv::Mat p3Dc = Rcw*p3Dw+tcw;// Depth must be positiveif(p3Dc.at<float>(2)<0.0f)continue;// Project into Imageconst float invz = 1.0/p3Dc.at<float>(2);const float x = p3Dc.at<float>(0)*invz;const float y = p3Dc.at<float>(1)*invz;const float u = fx*x+cx;const float v = fy*y+cy;// 得到MapPoint在图像上的投影坐标// Point must be inside the imageif(!pKF->IsInImage(u,v))continue;// Depth must be inside the scale pyramid of the image// 根据距离是否在图像合理金字塔尺度范围内和观测角度是否小于60度判断该MapPoint是否正常const float maxDistance = pMP->GetMaxDistanceInvariance();const float minDistance = pMP->GetMinDistanceInvariance();cv::Mat PO = p3Dw-Ow;const float dist3D = cv::norm(PO);if(dist3D<minDistance || dist3D>maxDistance)continue;// Viewing angle must be less than 60 degcv::Mat Pn = pMP->GetNormal();if(PO.dot(Pn)<0.5*dist3D)continue;// Compute predicted scale levelconst int nPredictedLevel = pMP->PredictScale(dist3D,pKF);// Search in a radius// 计算搜索范围const float radius = th*pKF->mvScaleFactors[nPredictedLevel];// 收集pKF在该区域内的特征点const vector<size_t> vIndices = pKF->GetFeaturesInArea(u,v,radius);if(vIndices.empty())continue;// Match to the most similar keypoint in the radiusconst cv::Mat dMP = pMP->GetDescriptor();int bestDist = INT_MAX;int bestIdx = -1;for(vector<size_t>::const_iterator vit=vIndices.begin(); vit!=vIndices.end(); vit++){const size_t idx = *vit;const int &kpLevel = pKF->mvKeysUn[idx].octave;if(kpLevel<nPredictedLevel-1 || kpLevel>nPredictedLevel)continue;const cv::Mat &dKF = pKF->mDescriptors.row(idx);int dist = DescriptorDistance(dMP,dKF);if(dist<bestDist){bestDist = dist;bestIdx = idx;}}// If there is already a MapPoint replace otherwise add new measurementif(bestDist<=TH_LOW){MapPoint* pMPinKF = pKF->GetMapPoint(bestIdx);// 如果这个MapPoint已经存在,则替换,// 这里不能轻易替换(因为MapPoint一般会被很多帧都可以观测到),这里先记录下来,之后调用Replace函数来替换if(pMPinKF){if(!pMPinKF->isBad())vpReplacePoint[iMP] = pMPinKF;// 记录下来该pMPinKF需要被替换掉}else// 如果这个MapPoint不存在,直接添加{pMP->AddObservation(pKF,bestIdx);pKF->AddMapPoint(pMP,bestIdx);}nFused++;}}return nFused;
}

// Via the Sim3 transform, determine the rough region of pKF2 where each pKF1 feature should lie,
// and likewise the region of pKF1 for each pKF2 feature.
// Matching by descriptor inside those regions captures feature matches that were missed earlier,
// updating vpMatches12 (the earlier SearchByBoW pass leaves some matches undiscovered)
int ORBmatcher::SearchBySim3(KeyFrame *pKF1, KeyFrame *pKF2, vector<MapPoint*> &vpMatches12,const float &s12, const cv::Mat &R12, const cv::Mat &t12, const float th)
{// 步骤1:变量初始化const float &fx = pKF1->fx;const float &fy = pKF1->fy;const float &cx = pKF1->cx;const float &cy = pKF1->cy;// Camera 1 from world// 从world到camera的变换cv::Mat R1w = pKF1->GetRotation();cv::Mat t1w = pKF1->GetTranslation();//Camera 2 from worldcv::Mat R2w = pKF2->GetRotation();cv::Mat t2w = pKF2->GetTranslation();//Transformation between camerascv::Mat sR12 = s12*R12;cv::Mat sR21 = (1.0/s12)*R12.t();cv::Mat t21 = -sR21*t12;const vector<MapPoint*> vpMapPoints1 = pKF1->GetMapPointMatches();const int N1 = vpMapPoints1.size();const vector<MapPoint*> vpMapPoints2 = pKF2->GetMapPointMatches();const int N2 = vpMapPoints2.size();vector<bool> vbAlreadyMatched1(N1,false);// 用于记录该特征点是否被处理过vector<bool> vbAlreadyMatched2(N2,false);// 用于记录该特征点是否在pKF1中有匹配// 步骤2:用vpMatches12更新vbAlreadyMatched1和vbAlreadyMatched2for(int i=0; i<N1; i++){MapPoint* pMP = vpMatches12[i];if(pMP){vbAlreadyMatched1[i]=true;// 该特征点已经判断过int idx2 = pMP->GetIndexInKeyFrame(pKF2);if(idx2>=0 && idx2<N2)vbAlreadyMatched2[idx2]=true;// 该特征点在pKF1中有匹配}}vector<int> vnMatch1(N1,-1);vector<int> vnMatch2(N2,-1);// Transform from KF1 to KF2 and search// 步骤3.1:通过Sim变换,确定pKF1的特征点在pKF2中的大致区域,//         在该区域内通过描述子进行匹配捕获pKF1和pKF2之前漏匹配的特征点,更新vpMatches12//         (之前使用SearchByBoW进行特征点匹配时会有漏匹配)for(int i1=0; i1<N1; i1++){MapPoint* pMP = vpMapPoints1[i1];if(!pMP || vbAlreadyMatched1[i1])// 该特征点已经有匹配点了,直接跳过continue;if(pMP->isBad())continue;cv::Mat p3Dw = pMP->GetWorldPos();cv::Mat p3Dc1 = R1w*p3Dw + t1w;// 把pKF1系下的MapPoint从world坐标系变换到camera1坐标系cv::Mat p3Dc2 = sR21*p3Dc1 + t21;// 再通过Sim3将该MapPoint从camera1变换到camera2坐标系// Depth must be positiveif(p3Dc2.at<float>(2)<0.0)continue;// 投影到camera2图像平面const float invz = 1.0/p3Dc2.at<float>(2);const float x = p3Dc2.at<float>(0)*invz;const float y = p3Dc2.at<float>(1)*invz;const float u = fx*x+cx;const float v = fy*y+cy;// Point must be inside the imageif(!pKF2->IsInImage(u,v))continue;const float maxDistance = pMP->GetMaxDistanceInvariance();const float minDistance = pMP->GetMinDistanceInvariance();const float dist3D = cv::norm(p3Dc2);// Depth must be inside the scale invariance regionif(dist3D<minDistance || dist3D>maxDistance )continue;// Compute predicted octave// 预测该MapPoint对应的特征点在图像金字塔哪一层const int nPredictedLevel = pMP->PredictScale(dist3D,pKF2);// Search in a radius// 计算特征点搜索半径const float radius = th*pKF2->mvScaleFactors[nPredictedLevel];// 取出该区域内的所有特征点const vector<size_t> vIndices = pKF2->GetFeaturesInArea(u,v,radius);if(vIndices.empty())continue;// Match to the most similar keypoint in the radiusconst cv::Mat dMP = pMP->GetDescriptor();int bestDist = INT_MAX;int bestIdx = -1;// 遍历搜索区域内的所有特征点,与pMP进行描述子匹配for(vector<size_t>::const_iterator vit=vIndices.begin(), vend=vIndices.end(); vit!=vend; vit++){const size_t idx = *vit;const cv::KeyPoint &kp = pKF2->mvKeysUn[idx];if(kp.octave<nPredictedLevel-1 || kp.octave>nPredictedLevel)continue;const cv::Mat &dKF = pKF2->mDescriptors.row(idx);const int dist = DescriptorDistance(dMP,dKF);if(dist<bestDist){bestDist = dist;bestIdx = idx;}}if(bestDist<=TH_HIGH){vnMatch1[i1]=bestIdx;}}// Transform from KF2 to KF1 and search// 步骤3.2:通过Sim变换,确定pKF2的特征点在pKF1中的大致区域,//         在该区域内通过描述子进行匹配捕获pKF1和pKF2之前漏匹配的特征点,更新vpMatches12//         (之前使用SearchByBoW进行特征点匹配时会有漏匹配)for(int i2=0; i2<N2; i2++){MapPoint* pMP = vpMapPoints2[i2];if(!pMP || vbAlreadyMatched2[i2])continue;if(pMP->isBad())continue;cv::Mat p3Dw = pMP->GetWorldPos();cv::Mat p3Dc2 = R2w*p3Dw + t2w;cv::Mat p3Dc1 = sR12*p3Dc2 + t12;// Depth must be positiveif(p3Dc1.at<float>(2)<0.0)continue;const float invz = 
1.0/p3Dc1.at<float>(2);const float x = p3Dc1.at<float>(0)*invz;const float y = p3Dc1.at<float>(1)*invz;const float u = fx*x+cx;const float v = fy*y+cy;// Point must be inside the imageif(!pKF1->IsInImage(u,v))continue;const float maxDistance = pMP->GetMaxDistanceInvariance();const float minDistance = pMP->GetMinDistanceInvariance();const float dist3D = cv::norm(p3Dc1);// Depth must be inside the scale pyramid of the imageif(dist3D<minDistance || dist3D>maxDistance)continue;// Compute predicted octaveconst int nPredictedLevel = pMP->PredictScale(dist3D,pKF1);// Search in a radius of 2.5*sigma(ScaleLevel)const float radius = th*pKF1->mvScaleFactors[nPredictedLevel];const vector<size_t> vIndices = pKF1->GetFeaturesInArea(u,v,radius);if(vIndices.empty())continue;// Match to the most similar keypoint in the radiusconst cv::Mat dMP = pMP->GetDescriptor();int bestDist = INT_MAX;int bestIdx = -1;for(vector<size_t>::const_iterator vit=vIndices.begin(), vend=vIndices.end(); vit!=vend; vit++){const size_t idx = *vit;const cv::KeyPoint &kp = pKF1->mvKeysUn[idx];if(kp.octave<nPredictedLevel-1 || kp.octave>nPredictedLevel)continue;const cv::Mat &dKF = pKF1->mDescriptors.row(idx);const int dist = DescriptorDistance(dMP,dKF);if(dist<bestDist){bestDist = dist;bestIdx = idx;}}if(bestDist<=TH_HIGH){vnMatch2[i2]=bestIdx;}}// Check agreementint nFound = 0;for(int i1=0; i1<N1; i1++){int idx2 = vnMatch1[i1];if(idx2>=0){int idx1 = vnMatch2[idx2];if(idx1==i1){vpMatches12[i1] = vpMapPoints2[idx2];nFound++;}}}return nFound;
}

/**
 * @brief For each 3D point of the last frame, project it and search a small window for the
 * best-matching 2D feature, so that CurrentFrame tracks LastFrame's 3D points.
 * Used for frame-to-frame tracking.
 *
 * The last frame carries MapPoints; tracking those MapPoints adds MapPoints to the current frame.
 * 1. Project the last frame's MapPoints into the current frame (Tcw can be estimated from the motion model)
 * 2. Select matches near the projections by descriptor distance, then prune with the rotation-voting scheme
 * @param  CurrentFrame current frame
 * @param  LastFrame    last frame
 * @param  th           threshold
 * @param  bMono        whether the sensor is monocular
 * @return              number of successful matches
 * @see SearchByBoW()
 */
int ORBmatcher::SearchByProjection(Frame &CurrentFrame, const Frame &LastFrame, const float th, const bool bMono)
{int nmatches = 0;// Rotation Histogram (to check rotation consistency)vector<int> rotHist[HISTO_LENGTH];for(int i=0;i<HISTO_LENGTH;i++)rotHist[i].reserve(500);const float factor = HISTO_LENGTH/360.0f;const cv::Mat Rcw = CurrentFrame.mTcw.rowRange(0,3).colRange(0,3);const cv::Mat tcw = CurrentFrame.mTcw.rowRange(0,3).col(3);const cv::Mat twc = -Rcw.t()*tcw; // twc(w)const cv::Mat Rlw = LastFrame.mTcw.rowRange(0,3).colRange(0,3);const cv::Mat tlw = LastFrame.mTcw.rowRange(0,3).col(3); // tlw(l)// vector from LastFrame to CurrentFrame expressed in LastFrameconst cv::Mat tlc = Rlw*twc+tlw; // Rlw*twc(w) = twc(l), twc(l) + tlw(l) = tlc(l)// 判断前进还是后退,并以此预测特征点在当前帧所在的金字塔层数const bool bForward = tlc.at<float>(2)>CurrentFrame.mb && !bMono; // 非单目情况,如果Z大于基线,则表示前进const bool bBackward = -tlc.at<float>(2)>CurrentFrame.mb && !bMono; // 非单目情况,如果Z小于基线,则表示前进for(int i=0; i<LastFrame.N; i++){MapPoint* pMP = LastFrame.mvpMapPoints[i];if(pMP){if(!LastFrame.mvbOutlier[i]){// 对上一帧有效的MapPoints进行跟踪// Projectcv::Mat x3Dw = pMP->GetWorldPos();cv::Mat x3Dc = Rcw*x3Dw+tcw;const float xc = x3Dc.at<float>(0);const float yc = x3Dc.at<float>(1);const float invzc = 1.0/x3Dc.at<float>(2);if(invzc<0)continue;float u = CurrentFrame.fx*xc*invzc+CurrentFrame.cx;float v = CurrentFrame.fy*yc*invzc+CurrentFrame.cy;if(u<CurrentFrame.mnMinX || u>CurrentFrame.mnMaxX)continue;if(v<CurrentFrame.mnMinY || v>CurrentFrame.mnMaxY)continue;int nLastOctave = LastFrame.mvKeys[i].octave;// Search in a window. Size depends on scalefloat radius = th*CurrentFrame.mvScaleFactors[nLastOctave]; // 尺度越大,搜索范围越大vector<size_t> vIndices2;// NOTE 尺度越大,图像越小// 以下可以这么理解,例如一个有一定面积的圆点,在某个尺度n下它是一个特征点// 当前进时,圆点的面积增大,在某个尺度m下它是一个特征点,由于面积增大,则需要在更高的尺度下才能检测出来// 因此m>=n,对应前进的情况,nCurOctave>=nLastOctave。后退的情况可以类推if(bForward) // 前进,则上一帧兴趣点在所在的尺度nLastOctave<=nCurOctavevIndices2 = CurrentFrame.GetFeaturesInArea(u,v, radius, nLastOctave);else if(bBackward) // 后退,则上一帧兴趣点在所在的尺度0<=nCurOctave<=nLastOctavevIndices2 = CurrentFrame.GetFeaturesInArea(u,v, radius, 0, nLastOctave);else // 在[nLastOctave-1, nLastOctave+1]中搜索vIndices2 = CurrentFrame.GetFeaturesInArea(u,v, radius, nLastOctave-1, nLastOctave+1);if(vIndices2.empty())continue;const cv::Mat dMP = pMP->GetDescriptor();int bestDist = 256;int bestIdx2 = -1;// 遍历满足条件的特征点for(vector<size_t>::const_iterator vit=vIndices2.begin(), vend=vIndices2.end(); vit!=vend; vit++){// 如果该特征点已经有对应的MapPoint了,则退出该次循环const size_t i2 = *vit;if(CurrentFrame.mvpMapPoints[i2])if(CurrentFrame.mvpMapPoints[i2]->Observations()>0)continue;if(CurrentFrame.mvuRight[i2]>0){// 双目和rgbd的情况,需要保证右图的点也在搜索半径以内const float ur = u - CurrentFrame.mbf*invzc;const float er = fabs(ur - CurrentFrame.mvuRight[i2]);if(er>radius)continue;}const cv::Mat &d = CurrentFrame.mDescriptors.row(i2);const int dist = DescriptorDistance(dMP,d);if(dist<bestDist){bestDist=dist;bestIdx2=i2;}}// 详见SearchByBoW(KeyFrame* pKF,Frame &F, vector<MapPoint*> &vpMapPointMatches)函数步骤4if(bestDist<=TH_HIGH){CurrentFrame.mvpMapPoints[bestIdx2]=pMP; // 为当前帧添加MapPointnmatches++;if(mbCheckOrientation){float rot = LastFrame.mvKeysUn[i].angle-CurrentFrame.mvKeysUn[bestIdx2].angle;if(rot<0.0)rot+=360.0f;int bin = round(rot*factor);if(bin==HISTO_LENGTH)bin=0;assert(bin>=0 && bin<HISTO_LENGTH);rotHist[bin].push_back(bestIdx2);}}}}}//Apply rotation consistencyif(mbCheckOrientation){int ind1=-1;int ind2=-1;int ind3=-1;ComputeThreeMaxima(rotHist,HISTO_LENGTH,ind1,ind2,ind3);for(int i=0; i<HISTO_LENGTH; i++){if(i!=ind1 && i!=ind2 && i!=ind3){for(size_t j=0, jend=rotHist[i].size(); j<jend; 
j++){CurrentFrame.mvpMapPoints[rotHist[i][j]]=static_cast<MapPoint*>(NULL);nmatches--;}}}}return nmatches;
}

// For each of the keyframe's 3D points, project it into the current frame and search a small
// window for the best-matching 2D feature, so that CurrentFrame tracks the keyframe's 3D points.
// Used for feature matching during relocalization
int ORBmatcher::SearchByProjection(Frame &CurrentFrame, KeyFrame *pKF, const set<MapPoint*> &sAlreadyFound, const float th , const int ORBdist)
{int nmatches = 0;const cv::Mat Rcw = CurrentFrame.mTcw.rowRange(0,3).colRange(0,3);const cv::Mat tcw = CurrentFrame.mTcw.rowRange(0,3).col(3);const cv::Mat Ow = -Rcw.t()*tcw;// Rotation Histogram (to check rotation consistency)vector<int> rotHist[HISTO_LENGTH];for(int i=0;i<HISTO_LENGTH;i++)rotHist[i].reserve(500);const float factor = HISTO_LENGTH/360.0f;const vector<MapPoint*> vpMPs = pKF->GetMapPointMatches();for(size_t i=0, iend=vpMPs.size(); i<iend; i++){MapPoint* pMP = vpMPs[i];if(pMP){if(!pMP->isBad() && !sAlreadyFound.count(pMP)){//Projectcv::Mat x3Dw = pMP->GetWorldPos();cv::Mat x3Dc = Rcw*x3Dw+tcw;const float xc = x3Dc.at<float>(0);const float yc = x3Dc.at<float>(1);const float invzc = 1.0/x3Dc.at<float>(2);const float u = CurrentFrame.fx*xc*invzc+CurrentFrame.cx;const float v = CurrentFrame.fy*yc*invzc+CurrentFrame.cy;if(u<CurrentFrame.mnMinX || u>CurrentFrame.mnMaxX)continue;if(v<CurrentFrame.mnMinY || v>CurrentFrame.mnMaxY)continue;// Compute predicted scale levelcv::Mat PO = x3Dw-Ow;float dist3D = cv::norm(PO);const float maxDistance = pMP->GetMaxDistanceInvariance();const float minDistance = pMP->GetMinDistanceInvariance();// Depth must be inside the scale pyramid of the imageif(dist3D<minDistance || dist3D>maxDistance)continue;int nPredictedLevel = pMP->PredictScale(dist3D,&CurrentFrame);// Search in a windowconst float radius = th*CurrentFrame.mvScaleFactors[nPredictedLevel];const vector<size_t> vIndices2 = CurrentFrame.GetFeaturesInArea(u, v, radius, nPredictedLevel-1, nPredictedLevel+1);if(vIndices2.empty())continue;const cv::Mat dMP = pMP->GetDescriptor();int bestDist = 256;int bestIdx2 = -1;for(vector<size_t>::const_iterator vit=vIndices2.begin(); vit!=vIndices2.end(); vit++){const size_t i2 = *vit;if(CurrentFrame.mvpMapPoints[i2])continue;const cv::Mat &d = CurrentFrame.mDescriptors.row(i2);const int dist = DescriptorDistance(dMP,d);if(dist<bestDist){bestDist=dist;bestIdx2=i2;}}if(bestDist<=ORBdist){CurrentFrame.mvpMapPoints[bestIdx2]=pMP;nmatches++;// 详见SearchByBoW(KeyFrame* pKF,Frame &F, vector<MapPoint*> &vpMapPointMatches)函数步骤4if(mbCheckOrientation){float rot = pKF->mvKeysUn[i].angle-CurrentFrame.mvKeysUn[bestIdx2].angle;if(rot<0.0)rot+=360.0f;int bin = round(rot*factor);if(bin==HISTO_LENGTH)bin=0;assert(bin>=0 && bin<HISTO_LENGTH);rotHist[bin].push_back(bestIdx2);}}}}}if(mbCheckOrientation){int ind1=-1;int ind2=-1;int ind3=-1;ComputeThreeMaxima(rotHist,HISTO_LENGTH,ind1,ind2,ind3);for(int i=0; i<HISTO_LENGTH; i++){if(i!=ind1 && i!=ind2 && i!=ind3){for(size_t j=0, jend=rotHist[i].size(); j<jend; j++){CurrentFrame.mvpMapPoints[rotHist[i][j]]=NULL;nmatches--;}}}}return nmatches;
}

// Extract the indices of the three largest bins in the histogram
void ORBmatcher::ComputeThreeMaxima(vector<int>* histo, const int L, int &ind1, int &ind2, int &ind3)
{
    int max1=0;
    int max2=0;
    int max3=0;

    for(int i=0; i<L; i++)
    {
        const int s = histo[i].size();
        if(s>max1)
        {
            max3=max2;
            max2=max1;
            max1=s;
            ind3=ind2;
            ind2=ind1;
            ind1=i;
        }
        else if(s>max2)
        {
            max3=max2;
            max2=s;
            ind3=ind2;
            ind2=i;
        }
        else if(s>max3)
        {
            max3=s;
            ind3=i;
        }
    }

    if(max2<0.1f*(float)max1)
    {
        ind2=-1;
        ind3=-1;
    }
    else if(max3<0.1f*(float)max1)
    {
        ind3=-1;
    }
}

// Bit set count operation from
// http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel
int ORBmatcher::DescriptorDistance(const cv::Mat &a, const cv::Mat &b)
{
    const int *pa = a.ptr<int32_t>();
    const int *pb = b.ptr<int32_t>();

    int dist=0;

    for(int i=0; i<8; i++, pa++, pb++)
    {
        unsigned  int v = *pa ^ *pb;
        v = v - ((v >> 1) & 0x55555555);
        v = (v & 0x33333333) + ((v >> 2) & 0x33333333);
        dist += (((v + (v >> 4)) & 0xF0F0F0F) * 0x1010101) >> 24;
    }

    return dist;
}

} //namespace ORB_SLAM
