Contents

  • 1. The cv::KeyPoint keypoint class
  • 2. cv::Feature2D: detecting keypoints and computing descriptors
  • 3. The cv::DMatch object
  • 4. cv::DescriptorMatcher: the keypoint matching class
  • 5. Core keypoint detection methods
    • 5.1 The Harris/Shi-Tomasi corner detector
    • 5.2 The simple blob detector
    • 5.3 The FAST feature detector
    • 5.4 The SIFT feature detector
    • 5.5 The SURF feature detector
    • 5.6 The Star/CenSurE feature detector
    • 5.7 The BRIEF descriptor extractor
    • 5.8 The BRISK algorithm
    • 5.9 The ORB feature detector
    • 5.10 The FREAK descriptor extractor
    • 5.11 The dense feature detector
    • 5.12 AKAZE feature extraction
  • 6. Keypoint filtering
  • 7. Matching methods
    • 7.1 Brute-force matching with cv::BFMatcher
    • 7.2 Fast approximate nearest neighbors and cv::FlannBasedMatcher
  • 8. Displaying results
    • 8.1 Drawing keypoints with cv::drawKeypoints
    • 8.2 Drawing matches with cv::drawMatches
  • 9. Applications
    • 9.1 Locating a known object via descriptor matching
  • References

The three main application scenarios for keypoints and descriptors are tracking, object recognition, and stereo reconstruction. Whichever the use case, the underlying processing logic is similar: first find the keypoints in the image, then compute a descriptor for each of them, and finally find matches based on those descriptors. In short: detect, describe, match.

1. The cv::KeyPoint keypoint class

class cv::KeyPoint {
public:
    cv::Point2f pt;    // coordinates of the keypoint
    float size;        // diameter of the meaningful keypoint neighborhood
    float angle;       // orientation of the keypoint (-1 if none)
    float response;    // strength of the detector response at this keypoint;
                       // can sometimes be interpreted as the probability
                       // that the feature actually exists
    int octave;        // octave (pyramid layer) the keypoint was extracted from
    int class_id;      // object id, can be used to cluster keypoints by object

    cv::KeyPoint(cv::Point2f _pt, float _size, float _angle = -1,
                 float _response = 0, int _octave = 0, int _class_id = -1);
    cv::KeyPoint(float x, float y, float _size, float _angle = -1,
                 float _response = 0, int _octave = 0, int _class_id = -1);
    ...
};

2. cv::Feature2D: detecting keypoints and computing descriptors

class cv::Feature2D : public cv::Algorithm {
public:
    // detect: compute keypoints
    virtual void detect(
        cv::InputArray image,                    // Image on which to detect
        vector<cv::KeyPoint>& keypoints,         // Array of found keypoints
        cv::InputArray mask = cv::noArray()) const;
    virtual void detect(
        cv::InputArrayOfArrays images,           // Images on which to detect
        vector<vector<cv::KeyPoint>>& keypoints, // keypoints for each image
        cv::InputArrayOfArrays masks = cv::noArray()) const;

    // compute: compute descriptors
    virtual void compute(
        cv::InputArray image,                    // Image where keypoints are located
        std::vector<cv::KeyPoint>& keypoints,    // input/output vector of keypoints
        cv::OutputArray descriptors);            // computed descriptors, M x N matrix,
                                                 // where M is the number of keypoints
                                                 // and N is the descriptor size
    virtual void compute(
        cv::InputArrayOfArrays image,            // Images where keypoints are located
        std::vector<std::vector<cv::KeyPoint>>& keypoints, // I/O vec of keypoints
        cv::OutputArrayOfArrays descriptors);    // computed descriptors, vector of
                                                 // (Mi x N) matrices, where Mi is the
                                                 // number of keypoints in the i-th
                                                 // image and N is the descriptor size

    // Different keypoint detection algorithms often produce different results on
    // the same image, and they typically build a special internal image
    // representation that is expensive to compute; running detect and compute
    // separately would repeat that work. If you need descriptors, it is therefore
    // usually best to call detectAndCompute directly.
    virtual void detectAndCompute(
        cv::InputArray image,                    // Image on which to detect
        cv::InputArray mask,                     // Optional region of interest mask
        std::vector<cv::KeyPoint>& keypoints,    // found or provided keypoints
        cv::OutputArray descriptors,             // computed descriptors
        bool useProvidedKeypoints = false);      // if true, the provided keypoints
                                                 // are used; otherwise they are detected

    // length of the descriptor vector
    virtual int descriptorSize() const;          // size of each descriptor in elements
    // type of the descriptor elements
    virtual int descriptorType() const;          // type of descriptor elements
    // how descriptors should be compared: for binary (0/1) descriptors use
    // NORM_HAMMING; for SIFT and SURF use NORM_L2 or NORM_L1
    virtual int defaultNorm() const;             // the recommended norm for comparing
                                                 // descriptors: usually NORM_HAMMING
                                                 // for binary descriptors and NORM_L2
                                                 // for all others

    virtual void read(const cv::FileNode&);
    virtual void write(cv::FileStorage&) const;
    ...
};

3. The cv::DMatch object

A matcher tries to match the keypoints of one image against another image or a set of images; each successful match is described by a cv::DMatch entry in the returned list.

class cv::DMatch {
public:
    DMatch();  // sets this->distance to std::numeric_limits<float>::max()
    DMatch(int _queryIdx, int _trainIdx, float _distance);
    DMatch(int _queryIdx, int _trainIdx, int _imgIdx, float _distance);

    int queryIdx;    // index into the query ("new") image's keypoint list
    int trainIdx;    // index into the train ("old") image's keypoint list
    int imgIdx;      // train image index
    float distance;  // distance between the descriptors (match quality)

    bool operator<(const DMatch& m) const;  // comparison operator based on 'distance'
};

4. cv::DescriptorMatcher: the keypoint matching class

Matchers are typically used in two scenarios: object recognition and tracking. For object recognition we first train the matcher, supplying the descriptors that best distinguish the known objects; afterwards, given query descriptors, the matcher reports which dictionary descriptors they match. For tracking we instead supply two descriptor lists and ask for the matches between them.
cv::DescriptorMatcher provides three functions — match(), knnMatch(), and radiusMatch() — each in two variants, one for recognition and one for tracking: the recognition variant takes one feature list plus the trained dictionary, while the tracking variant takes two feature lists.

class cv::DescriptorMatcher {
public:
    // Add train descriptors: each element is a cv::Mat whose rows are
    // descriptors and whose column count is the descriptor dimension
    virtual void add(InputArrayOfArrays descriptors);

    // Clear train descriptors / check whether any have been added
    virtual void clear();
    virtual bool empty() const;  // true if no descriptors

    // Once all descriptors are loaded, call train(). Based on the matching
    // method in use, it builds the data structure that accelerates later
    // matching. Whenever a matcher provides train(), it must be called
    // before any of the matching methods.
    void train();

    virtual bool isMaskSupported() const = 0;  // true if supports masks

    // Get the descriptors that have been added
    const vector<cv::Mat>& getTrainDescriptors() const;

    // Methods to match descriptors from one list vs. the "trained" set (recognition)
    void match(
        InputArray queryDescriptors,
        vector<cv::DMatch>& matches,
        InputArrayOfArrays masks = noArray());
    void knnMatch(
        InputArray queryDescriptors,
        vector<vector<cv::DMatch>>& matches,
        int k,
        InputArrayOfArrays masks = noArray(),
        bool compactResult = false);
    void radiusMatch(
        InputArray queryDescriptors,
        vector<vector<cv::DMatch>>& matches,
        float maxDistance,
        InputArrayOfArrays masks = noArray(),
        bool compactResult = false);

    // Methods to match descriptors from two lists (tracking)
    //
    // Find one best match for each query descriptor
    void match(
        InputArray queryDescriptors,
        InputArray trainDescriptors,
        vector<cv::DMatch>& matches,
        InputArray mask = noArray()) const;
    // Find k best matches for each query descriptor (in increasing
    // order of distance)
    void knnMatch(
        InputArray queryDescriptors,
        InputArray trainDescriptors,
        vector<vector<cv::DMatch>>& matches,
        int k,
        InputArray mask = noArray(),
        bool compactResult = false) const;
    // Find all matches for each query descriptor with distance less
    // than maxDistance
    void radiusMatch(
        InputArray queryDescriptors,
        InputArray trainDescriptors,
        vector<vector<cv::DMatch>>& matches,
        float maxDistance,
        InputArray mask = noArray(),
        bool compactResult = false) const;

    virtual void read(const FileNode&);     // Reads matcher from a file node
    virtual void write(FileStorage&) const; // Writes matcher to a file storage

    virtual cv::Ptr<cv::DescriptorMatcher> clone(bool emptyTrainData = false) const = 0;
    static cv::Ptr<cv::DescriptorMatcher> create(const string& descriptorMatcherType);
    ...
};

5. Core keypoint detection methods

Reference: "OpenCV探索之路(二十三): 特征检测和特征匹配方法汇总" (a survey of feature detection and feature matching methods)

5.1 The Harris/Shi-Tomasi corner detector

GFTTDetector only detects keypoints; it cannot compute descriptors.

class cv::GFTTDetector : public cv::Feature2D {
public:
    static Ptr<GFTTDetector> create(
        int maxCorners = 1000,          // Keep this many corners
        double qualityLevel = 0.01,     // fraction of largest eigenvalue
        double minDistance = 1,         // Discard corners closer than this
        int blockSize = 3,              // Neighborhood used
        bool useHarrisDetector = false, // If false, use Shi-Tomasi
        double k = 0.04                 // Used for Harris metric
    );
    ...
};
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    Mat src = imread("images/test1.png");
    auto keypoint_detector = GFTTDetector::create(1000, 0.01, 1.0, 3, false, 0.04);
    vector<KeyPoint> kpts;
    keypoint_detector->detect(src, kpts);
    Mat result = src.clone();
    drawKeypoints(src, kpts, result, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    imshow("GFTT-Keypoint-Detect", result);
    imwrite("D:/result.png", result);
    waitKey(0);
    return 0;
}

5.2 The simple blob detector

The simple blob detector first converts the input image to grayscale, then computes a series of thresholded (binary) images determined by a minimum threshold, a maximum threshold, and a threshold step. Connected components are extracted from each binary image (e.g. with cv::findContours()) and the center of each contour becomes a candidate blob center. Candidate centers that lie close to each other in space (controlled by minDistBetweenBlobs) and that come from adjacent thresholds in the threshold list (one step apart) are then grouped together, and a center and radius are computed for each resulting group. Once blobs have been localized this way, filters are applied to reduce their number.
Blobs can be filtered by the following criteria:
Color: for a single-channel grayscale image, the intensity
Size: the area in pixels covered by the blob
Circularity: the ratio of the blob's actual pixel area to the area of a circle with the blob's computed effective radius
Inertia ratio: the ratio of the eigenvalues of the second-moment matrix
Convexity: the ratio of the blob's area to the area of its convex hull


class SimpleBlobDetector : public Feature2D {
public:
    struct Params {
        Params();
        float minThreshold;        // First threshold to use
        float maxThreshold;        // Highest threshold to use
        float thresholdStep;       // Step between thresholds
        size_t minRepeatability;   // Blob must appear in this many images
        float minDistBetweenBlobs; // Blob must be this far from others

        bool filterByColor;        // True to use color filter
        uchar blobColor;           // always 0 or 255

        bool filterByArea;         // True to use area filter
        float minArea, maxArea;    // min and max area to accept

        // True to filter on "circularity", and min/max ratio to circle area
        bool filterByCircularity;
        float minCircularity, maxCircularity;

        // True to filter on "inertia", and min/max eigenvalue ratio
        bool filterByInertia;
        float minInertiaRatio, maxInertiaRatio;

        // True to filter on convexity, and min/max ratio to hull area
        bool filterByConvexity;
        float minConvexity, maxConvexity;

        void read(const FileNode& fn);
        void write(FileStorage& fs) const;
    };

    static Ptr<SimpleBlobDetector> create(
        const SimpleBlobDetector::Params& parameters = SimpleBlobDetector::Params());
    virtual void read(const FileNode& fn);
    virtual void write(FileStorage& fs) const;
    ...
};
#include "opencv2/opencv.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    // Load the image
    Mat src = imread("images/blob2.png");
    Mat gray;
    cvtColor(src, gray, COLOR_BGR2GRAY);

    // Set up the detector parameters
    SimpleBlobDetector::Params params;
    params.minThreshold = 10;
    params.maxThreshold = 200;
    params.filterByArea = true;
    params.minArea = 100;
    params.filterByCircularity = true;
    params.minCircularity = 0.1;
    params.filterByConvexity = true;
    params.minConvexity = 0.87;
    params.filterByInertia = true;
    params.minInertiaRatio = 0.01;

    // Create the blob detector
    Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);

    // Run blob analysis and display the result
    Mat result;
    vector<KeyPoint> keypoints;
    detector->detect(gray, keypoints);
    drawKeypoints(src, keypoints, result, Scalar(0, 0, 255),
                  DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    imshow("Blob Detection Demo", result);
    waitKey(0);
}

5.3 The FAST feature detector

FAST compares the brightness of the center point against points on a ring at a fixed distance around it: if enough contiguous ring pixels are all darker or all brighter than the center by more than a threshold (more than half the ring in the default 9-of-16 configuration), the point is declared a keypoint. To avoid selecting several similar keypoints in the same region, a score is computed and only the highest-scoring keypoint in the region is kept.

class cv::FastFeatureDetector : public cv::Feature2D {
public:
    enum {
        TYPE_5_8  = 0, // 8 points, requires 5 in a row
        TYPE_7_12 = 1, // 12 points, requires 7 in a row
        TYPE_9_16 = 2  // 16 points, requires 9 in a row
    };
    static Ptr<FastFeatureDetector> create(
        int threshold = 10,            // center-to-periphery difference
        bool nonmaxSuppression = true, // suppress non-max corners
                                       // (merge nearby keypoints)?
        int type = TYPE_9_16           // Size of circle (see enum)
    );
    ...
};

5.4 The SIFT feature detector

SIFT (Scale-Invariant Feature Transform) is invariant to rotation, scale, and illumination changes, making it a very stable local feature. It is expensive to compute, but highly expressive, and well suited to tracking and recognition tasks.
See also: https://www.cnblogs.com/tianyalu/p/5467813.html

class SIFT : public Feature2D {
public:
    static Ptr<SIFT> create(
        int nfeatures = 0,               // rank the detected features and keep
                                         // the best nfeatures (0 = keep all)
        int nOctaveLayers = 3,           // layers per octave in the pyramid
        double contrastThreshold = 0.04, // threshold for rejecting weak features
        double edgeThreshold = 10,       // threshold for rejecting edge responses
        double sigma = 1.6               // Gaussian sigma for pyramid level 0
    );
    int descriptorSize() const; // descriptor size, always 128
    int descriptorType() const; // descriptor type, always CV_32F
    ...
};
#include "opencv2/opencv.hpp"
#include <iostream>

using namespace cv;

int main() {
    Mat src = imread("images/flower.png");
    namedWindow("input", WINDOW_AUTOSIZE);
    imshow("input", src);
    auto sift = SIFT::create();
    std::vector<KeyPoint> keypoints;
    sift->detect(src, keypoints);
    Mat result;
    // Note: Scalar(0, 0, 255), not (0, 0, 255) - the latter is the comma
    // operator and would not produce a red color
    drawKeypoints(src, keypoints, result, Scalar(0, 0, 255),
                  DrawMatchesFlags::DEFAULT);
    imshow("sift-detector", result);
    waitKey(0);
    destroyAllWindows();
}

5.5 The SURF feature detector

To address SIFT's heavy computation and slow speed, SURF approximates the difference of two Gaussian kernels directly with box filters (similar to Haar wavelets), which can be evaluated very quickly using integral images.

class cv::xfeatures2d::SURF : public cv::Feature2D {
public:
    static Ptr<SURF> create(
        double hessianThreshold = 100, // Keep features above this
        int nOctaves = 4,              // Number of pyramid octaves
        int nOctaveLayers = 3,         // Number of images in each octave
        bool extended = false,         // false: 64-element descriptors,
                                       // true: 128-element descriptors
        bool upright = false           // true: don't compute orientation
                                       // (which is much faster)
    );
    int descriptorSize() const; // descriptor size, 64 or 128
    int descriptorType() const; // descriptor type, always CV_32F
    ...
};
typedef SURF SurfFeatureDetector;
typedef SURF SurfDescriptorExtractor;

5.6 The Star/CenSurE feature detector

The Star feature, also known as Center Surround Extremum (CenSurE), was originally developed for visual odometry. Harris corners and FAST are not scale invariant, while SIFT's image pyramid prevents precise localization; Star was designed to provide both scale invariance and accurate localization. It works in two stages: the first uses a SIFT-like difference of Gaussians (DoG) to extract local extrema, and the second uses a scale-adapted version of the Harris measure to remove edge-like features.

// Constructor for the Star detector object:
//
class cv::xfeatures2d::StarDetector : public cv::Feature2D {
public:
    static Ptr<StarDetector> create(
        int maxSize = 45,                // Largest feature considered
        int responseThreshold = 30,      // Minimum wavelet response
        int lineThresholdProjected = 10, // Threshold on Harris measure
        int lineThresholdBinarized = 8,  // Threshold on binarized Harris
        int suppressNonmaxSize = 5       // Keep only the best feature
                                         // in a region of this size
    );
    ...
};

5.7 The BRIEF descriptor extractor

BRIEF is a descriptor built from a series of binary tests: it compares the brightness of individual pixel pairs within the feature patch and records a 0/1 result for each pair. To reduce the influence of noise, the image is usually smoothed with a Gaussian filter first. Because the descriptor is a binary string, it is not only faster to compute and store, but can also be compared very efficiently (via Hamming distance).

class cv::xfeatures2d::BriefDescriptorExtractor : public cv::Feature2D {
public:
    static Ptr<BriefDescriptorExtractor> create(
        int bytes = 32,              // can be 16, 32, or 64 bytes
        bool use_orientation = false // true if point pairs are "rotated"
                                     // according to keypoint orientation
    );
    virtual int descriptorSize() const; // number of bytes per feature
    virtual int descriptorType() const; // Always returns CV_8UC1
};

5.8 The BRISK algorithm

BRISK improves on BRIEF in two ways. First, BRIEF is only a method for computing descriptors, whereas BRISK provides both a detector and a descriptor. Second, although BRISK is similar to BRIEF in spirit, it uses a structured neighborhood sampling pattern: discretized Bresenham circles of several radii, concentric around the feature point, with N equally spaced sample points on each circle.

class cv::BRISK : public cv::Feature2D {
public:
    static Ptr<BRISK> create(
        int thresh = 30,          // Threshold passed to FAST
        int octaves = 3,          // Number of doublings in the pyramid
        float patternScale = 1.0f // Rescale default pattern
    );
    int descriptorSize() const; // descriptor size
    int descriptorType() const; // descriptor type

    static Ptr<BRISK> create(   // Compute BRISK features
        const vector<float>& radiusList, // Radii of sample circles
        const vector<int>& numberList,   // Sample points per circle
        float dMax = 5.85f,              // Max distance for short pairs
        float dMin = 8.2f,               // Min distance for long pairs
        const vector<int>& indexChange = std::vector<int>() // Unused
    );
};
// demo
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    Mat box = imread("D:/images/box.png");
    Mat box_in_sence = imread("D:/images/box_in_scene.png");

    // Create the BRISK detector
    auto brisk_detector = BRISK::create();
    vector<KeyPoint> kpts_01, kpts_02;
    Mat descriptors1, descriptors2;
    brisk_detector->detectAndCompute(box, Mat(), kpts_01, descriptors1);
    brisk_detector->detectAndCompute(box_in_sence, Mat(), kpts_02, descriptors2);

    // Descriptor matching - brute force
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(DescriptorMatcher::BRUTEFORCE);
    std::vector<DMatch> matches;
    matcher->match(descriptors1, descriptors2, matches);

    // Draw the matches
    Mat img_matches;
    drawMatches(box, kpts_01, box_in_sence, kpts_02, matches, img_matches);
    imshow("BRISK-Matches", img_matches);
    imwrite("D:/result.png", img_matches);
    waitKey(0);
    return 0;
}

5.9 The ORB feature detector

ORB targets real-time applications, combining a FAST-based keypoint detector with a BRIEF-based descriptor. It first uses FAST to obtain a set of candidate keypoints; to overcome FAST's tendency to respond to edges as well as corners, it then ranks candidates by the Harris corner measure, which constrains the difference between the two eigenvalues. Unlike FAST, ORB also assigns an orientation to each feature.

class ORB : public Feature2D {
public:
    // the size of the signature in bytes
    enum { kBytes = 32, HARRIS_SCORE = 0, FAST_SCORE = 1 };

    static Ptr<ORB> create(
        int nfeatures = 500,      // Maximum features to compute
        float scaleFactor = 1.2f, // Pyramid ratio (greater than 1.0)
        int nlevels = 8,          // Number of pyramid levels to use
        int edgeThreshold = 31,   // Size of no-search border
        int firstLevel = 0,       // Always '0'
        int WTA_K = 2,            // Points in each comparison: 2, 3, or 4
        int scoreType = 0,        // Either HARRIS_SCORE or FAST_SCORE
        int patchSize = 31,       // Size of patch for each descriptor
        int fastThreshold = 20    // Threshold for FAST detector
    );
    int descriptorSize() const; // descriptor size (bytes), always 32
    int descriptorType() const; // descriptor type, always CV_8U
};
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    Mat src = imread("D:/images/grad.png");
    auto orb_detector = ORB::create(1000);
    vector<KeyPoint> kpts;
    orb_detector->detect(src, kpts);
    Mat result = src.clone();
    drawKeypoints(src, kpts, result, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    imshow("ORB-detector", result);
    imwrite("D:/result.png", result);
    waitKey(0);
    return 0;
}

5.10 The FREAK descriptor extractor

FREAK (Fast Retina Keypoint) samples point pairs according to the layout of the retina: dense near the center, increasingly sparse farther out. The descriptor is built coarse-to-fine, using an exhaustive greedy search to select pairs with low mutual correlation: from 43 receptive fields (roughly a thousand candidate pairs), the best 512 are kept. These 512 bits are split into four groups of 128; the first group has the lowest correlation and encodes the coarsest information, with the later groups progressively finer. Matching can therefore proceed as a cascade: compare only the first 16 bytes (the coarse part) first, and continue with the rest only if that distance is below a threshold; otherwise the candidate is rejected immediately.

class FREAK : public Feature2D {
public:
    static Ptr<FREAK> create(
        bool orientationNormalized = true, // enable orientation normalization
        bool scaleNormalized = true,       // enable scale normalization
        float patternScale = 22.0f,        // scaling of the description pattern
        int nOctaves = 4,                  // octaves covered by detected keypoints
        const vector<int>& selectedPairs = vector<int>() // user-selected pairs
    );
    virtual int descriptorSize() const; // returns the descriptor length in bytes
    virtual int descriptorType() const; // returns the descriptor type
    ...
};

5.11 The dense feature detector

The dense "detector" does not actually detect features; it divides the image into a regular grid and generates a feature for each cell. In many applications this turns out to be not merely sufficient but preferable.

class cv::DenseFeatureDetector : public cv::FeatureDetector {
public:
    explicit DenseFeatureDetector(
        float initFeatureScale = 1.f,      // Size of first-layer features
        int featureScaleLevels = 1,        // Number of layers
        float featureScaleMul = 0.1f,      // Scale factor between layers
        int initXyStep = 6,                // Spacing between features
        int initImgBound = 0,              // No-generate boundary
        bool varyXyStepWithScale = true,   // if true, scale 'initXyStep'
        bool varyImgBoundWithScale = false // if true, scale 'initImgBound'
    );
    cv::AlgorithmInfo* info() const;
    ...
};

5.12 AKAZE feature extraction

AKAZE is a local feature descriptor algorithm and an accelerated version of KAZE. Features such as SIFT and SURF detect features in a linear (Gaussian) scale space, where every point at a given scale is transformed identically; because the Gaussian is a low-pass filter, it smooths image edges and destroys much fine detail. AKAZE addresses this by building a nonlinear scale space, which loses very little edge information across scale changes and thus preserves image detail far better. It is more stable and faster than SIFT and SURF, and is only available in newer versions of OpenCV.
See this analysis of the KAZE algorithm's principles and source code: https://blog.csdn.net/chenyusiyuan/article/details/8711684

Ptr<KAZE> create(
    bool extended = false,
    bool upright = false,
    float threshold = 0.001f,
    int nOctaves = 4,
    int nOctaveLayers = 4,
    int diffusivity = KAZE::DIFF_PM_G2);

Ptr<AKAZE> create(
    int descriptor_type = AKAZE::DESCRIPTOR_MLDB,
    int descriptor_size = 0,
    int descriptor_channels = 3,
    float threshold = 0.001f,
    int nOctaves = 4,
    int nOctaveLayers = 4,
    int diffusivity = KAZE::DIFF_PM_G2);
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    Mat box = imread("D:/images/box.png");
    Mat box_in_sence = imread("D:/images/box_in_scene.png");

    // Create the AKAZE detector (swap AKAZE for KAZE to run the KAZE algorithm)
    auto akaze_detector = AKAZE::create();
    vector<KeyPoint> kpts_01, kpts_02;
    Mat descriptors1, descriptors2;
    akaze_detector->detectAndCompute(box, Mat(), kpts_01, descriptors1);
    akaze_detector->detectAndCompute(box_in_sence, Mat(), kpts_02, descriptors2);

    // Descriptor matching - brute force
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(DescriptorMatcher::BRUTEFORCE);
    std::vector<DMatch> matches;
    matcher->match(descriptors1, descriptors2, matches);

    // Draw the matches
    Mat img_matches;
    drawMatches(box, kpts_01, box_in_sence, kpts_02, matches, img_matches);
    imshow("AKAZE-Matches", img_matches);
    imwrite("D:/result.png", img_matches);
    waitKey(0);
    return 0;
}

6. Keypoint filtering

Keypoint filters select better keypoints from an existing list or remove duplicate keypoints.

class cv::KeyPointsFilter {
public:
    // Remove keypoints too close to the image border; the imageSize used
    // during detection must be supplied
    static void runByImageBorder(
        vector<cv::KeyPoint>& keypoints, // in/out list of keypoints
        cv::Size imageSize,              // Size of original image
        int borderSize                   // Size of border in pixels
    );
    // Remove keypoints smaller than minSize or larger than maxSize
    static void runByKeypointSize(
        vector<cv::KeyPoint>& keypoints, // in/out list of keypoints
        float minSize,                   // Smallest keypoint to keep
        float maxSize = FLT_MAX          // Largest one to keep
    );
    // Remove keypoints where the mask is zero
    static void runByPixelsMask(
        vector<cv::KeyPoint>& keypoints, // in/out list of keypoints
        const cv::Mat& mask              // Keep where mask is nonzero
    );
    // Remove duplicate keypoints
    static void removeDuplicated(
        vector<cv::KeyPoint>& keypoints  // in/out list of keypoints
    );
    // Remove keypoints until only npoints remain
    static void retainBest(
        vector<cv::KeyPoint>& keypoints, // in/out list of keypoints
        int npoints                      // Keep this many
    );
};

7. Matching methods

7.1 Brute-force matching with cv::BFMatcher

Brute-force search simply compares the query set exhaustively against the training set; the only thing to specify is the distance metric (normType). If crossCheck is set, a cross check is performed: a pair is returned only when each descriptor is the other's nearest neighbor. This effectively removes many false matches, at the cost of extra matching time.

class cv::BFMatcher : public cv::DescriptorMatcher {
public:
    BFMatcher(
        int normType = cv::NORM_L2, // distance metric, e.g. NORM_L2,
                                    // NORM_L1, or NORM_HAMMING
        bool crossCheck = false     // if true, only return pairs that are
                                    // mutually nearest neighbors
    );
    virtual bool isMaskSupported() const;
    virtual cv::Ptr<cv::DescriptorMatcher> clone(bool emptyTrainData = false) const;
    ...
};

7.2 Fast approximate nearest neighbors and cv::FlannBasedMatcher

class cv::FlannBasedMatcher : public cv::DescriptorMatcher {
public:
    FlannBasedMatcher(
        const cv::Ptr<cv::flann::IndexParams>& indexParams
            = new cv::flann::KDTreeIndexParams(),
        const cv::Ptr<cv::flann::SearchParams>& searchParams
            = new cv::flann::SearchParams()
    );
    virtual void add(const vector<Mat>& descriptors);
    virtual void clear();
    virtual void train();
    virtual bool isMaskSupported() const;
    virtual void read(const FileNode&);     // Read from file node
    virtual void write(FileStorage&) const; // Write to file storage
    virtual cv::Ptr<DescriptorMatcher> clone(bool emptyTrainData = false) const;
    ...
};

// The search-time parameters:
struct cv::flann::SearchParams : public cv::flann::IndexParams {
    SearchParams(
        int checks = 32,   // Limit on NN candidates to check
        float eps = 0,     // (Not used right now)
        bool sorted = true // Sort multiple returns if 'true'
    );
};
// indexParams options:

// Linear indexing with cv::flann::LinearIndexParams: equivalent to cv::BFMatcher
cv::FlannBasedMatcher matcher(
    new cv::flann::LinearIndexParams(), // Default index parameters
    new cv::flann::SearchParams()       // Default search parameters
);

// KD-tree indexing with cv::flann::KDTreeIndexParams: match using randomized
// kd-trees; the default is 4 trees, and 16 or more is a common setting
cv::FlannBasedMatcher matcher(
    new cv::flann::KDTreeIndexParams(16), // Index using 16 kd-trees
    new cv::flann::SearchParams()         // Default search parameters
);

// Hierarchical k-means tree indexing with cv::flann::KMeansIndexParams:
// index using hierarchical k-means clustering
struct cv::flann::KMeansIndexParams : public cv::flann::IndexParams {
    KMeansIndexParams(
        int branching = 32,   // Branching factor for tree
        int iterations = 11,  // Max for k-means stage
        float cb_index = 0.2, // Probably don't mess with
        cv::flann::flann_centers_init_t centers_init = cv::flann::CENTERS_RANDOM
    );
};

// Combining KD-trees and k-means with cv::flann::CompositeIndexParams:
// use the kd-tree and k-means methods together
struct cv::flann::CompositeIndexParams : public cv::flann::IndexParams {
    CompositeIndexParams(
        int trees = 4,        // Number of trees
        int branching = 32,   // Branching factor for tree
        int iterations = 11,  // Max for k-means stage
        float cb_index = 0.2, // Usually leave as-is
        cv::flann::flann_centers_init_t centers_init = cv::flann::CENTERS_RANDOM
    );
};

// Locality-sensitive hashing (LSH) with cv::flann::LshIndexParams: hash
// functions place similar items into the same buckets; usable only with
// binary features compared by Hamming distance
struct cv::flann::LshIndexParams : public cv::flann::IndexParams {
    LshIndexParams(
        unsigned int table_number,     // Number of hash tables, usually 10 to 30
        unsigned int key_size,         // key bits, usually 10 to 20
        unsigned int multi_probe_level // Best to just set this to 2
    );
};

// Automatic index selection with cv::flann::AutotunedIndexParams: let the
// algorithm choose a suitable index method on its own
struct cv::flann::AutotunedIndexParams : public cv::flann::IndexParams {
    AutotunedIndexParams(
        float target_precision = 0.9, // Fraction of searches required
                                      // to return an exact result
        float build_weight = 0.01,    // Priority for building fast
        float memory_weight = 0.0,    // Priority for saving memory
        float sample_fraction = 0.1   // Fraction of training data to use
    );
};
// demo
#include <opencv2/opencv.hpp>
#include <iostream>
#include <math.h>

#define RATIO 0.4

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    Mat box = imread("images/电表/box.bmp");
    Mat scene = imread("images/电表/scene.jpg");
    if (scene.empty()) {
        printf("could not load image...\n");
        return -1;
    }
    namedWindow("input image", WINDOW_NORMAL);
    resizeWindow("input image", 900, 600);
    imshow("input image", scene);

    vector<KeyPoint> keypoints_obj, keypoints_sence;
    Mat descriptors_box, descriptors_sence;
    Ptr<ORB> detector = ORB::create();
    detector->detectAndCompute(scene, Mat(), keypoints_sence, descriptors_sence);
    detector->detectAndCompute(box, Mat(), keypoints_obj, descriptors_box);

    // Initialize the FLANN matcher; the default (kd-tree) index is a poor
    // fit for ORB's binary descriptors, so use locality-sensitive hashing
    // Ptr<FlannBasedMatcher> matcher = FlannBasedMatcher::create();
    Ptr<DescriptorMatcher> matcher =
        makePtr<FlannBasedMatcher>(makePtr<flann::LshIndexParams>(12, 20, 2));
    vector<DMatch> matches;
    matcher->match(descriptors_box, descriptors_sence, matches);

    // Keep the good matches
    vector<DMatch> goodMatches;
    printf("total match points : %d\n", (int)matches.size());
    float maxdist = 0;
    for (unsigned int i = 0; i < matches.size(); ++i) {
        printf("dist : %.2f \n", matches[i].distance);
        maxdist = max(maxdist, matches[i].distance);
    }
    for (unsigned int i = 0; i < matches.size(); ++i) {
        if (matches[i].distance < maxdist * RATIO)
            goodMatches.push_back(matches[i]);
    }

    Mat dst;
    drawMatches(box, keypoints_obj, scene, keypoints_sence, goodMatches, dst);
    namedWindow("output", WINDOW_NORMAL);
    resizeWindow("output", 900, 600);
    imshow("output", dst);
    waitKey(0);
    return 0;
}

8. Displaying results

8.1 Drawing keypoints with cv::drawKeypoints

void cv::drawKeypoints(
    const cv::Mat& image,                      // Image on which to draw
    const vector<cv::KeyPoint>& keypoints,     // List of keypoints to draw
    cv::Mat& outImg,                           // image with keypoints drawn
    const Scalar& color = cv::Scalar::all(-1), // -1: pick a different color
                                               // for each keypoint automatically
    int flags = cv::DrawMatchesFlags::DEFAULT  // DEFAULT draws small circles;
                                               // DRAW_RICH_KEYPOINTS draws circles
                                               // scaled to 'size' with a line
                                               // indicating 'angle'
);

8.2 Drawing matches with cv::drawMatches

void cv::drawMatches(
    const cv::Mat& img1,                    // "Left" image
    const vector<cv::KeyPoint>& keypoints1, // Keypoints (left image)
    const cv::Mat& img2,                    // "Right" image
    const vector<cv::KeyPoint>& keypoints2, // Keypoints (right image)
    const vector<cv::DMatch>& matches1to2,  // List of matches
    cv::Mat& outImg,                        // Result image
    const cv::Scalar& matchColor = cv::Scalar::all(-1),
    const cv::Scalar& singlePointColor = cv::Scalar::all(-1),
    const vector<char>& matchesMask = vector<char>(),
    int flags = cv::DrawMatchesFlags::DEFAULT
);

void cv::drawMatches(
    const cv::Mat& img1,                    // "Left" image
    const vector<cv::KeyPoint>& keypoints1, // Keypoints (left image)
    const cv::Mat& img2,                    // "Right" image
    const vector<cv::KeyPoint>& keypoints2, // Keypoints (right image)
    const vector<vector<cv::DMatch>>& matches1to2, // List of lists of matches
    cv::Mat& outImg,                        // Result image
    const cv::Scalar& matchColor            // Matched keypoints are connected by
        = cv::Scalar::all(-1),              // lines of this color
    const cv::Scalar& singlePointColor      // Unmatched keypoints are drawn
        = cv::Scalar::all(-1),              // in this color
    const vector<vector<char>>& matchesMask // only draw where nonzero
        = vector<vector<char>>(),
    int flags = cv::DrawMatchesFlags::DEFAULT
    // The result is written to outImg, with keypoints marked by small circles.
    // flags:
    //   cv::DrawMatchesFlags::DRAW_OVER_OUTIMG - do not reallocate outImg, so
    //     repeated calls to cv::drawMatches() can accumulate on one image;
    //   cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS - do not draw unmatched
    //     keypoints;
    //   cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS - draw keypoints as circles
    //     with scale and orientation.
);

9. Applications

9.1 Locating a known object via descriptor matching

Using OpenCV's descriptor matching we obtain the distances between descriptors; the smaller the distance, the better the match. We set a distance threshold (typically about 1/5 to 1/4 of the maximum match distance) and keep all matches below it. These good matches are then used to estimate the homography H relating the object plane to the scene plane; applying a perspective transform with H maps the corners of the object image into the scene image, which gives the object's location, and finally we draw that location.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <math.h>

#define RATIO 0.4

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    Mat box = imread("D:/images/box.png");
    Mat scene = imread("D:/images/box_in_scene.png");
    if (scene.empty()) {
        printf("could not load image...\n");
        return -1;
    }
    imshow("input image", scene);

    vector<KeyPoint> keypoints_obj, keypoints_sence;
    Mat descriptors_box, descriptors_sence;
    Ptr<ORB> detector = ORB::create();
    detector->detectAndCompute(scene, Mat(), keypoints_sence, descriptors_sence);
    detector->detectAndCompute(box, Mat(), keypoints_obj, descriptors_box);

    // Initialize the FLANN matcher; the default index is a poor fit for
    // ORB's binary descriptors, so use locality-sensitive hashing (LSH)
    // Ptr<FlannBasedMatcher> matcher = FlannBasedMatcher::create();
    Ptr<DescriptorMatcher> matcher =
        makePtr<FlannBasedMatcher>(makePtr<flann::LshIndexParams>(12, 20, 2));
    vector<DMatch> matches;
    matcher->match(descriptors_box, descriptors_sence, matches);

    // Keep the good matches
    vector<DMatch> goodMatches;
    printf("total match points : %d\n", (int)matches.size());
    float maxdist = 0;
    for (unsigned int i = 0; i < matches.size(); ++i) {
        printf("dist : %.2f \n", matches[i].distance);
        maxdist = max(maxdist, matches[i].distance);
    }
    for (unsigned int i = 0; i < matches.size(); ++i) {
        if (matches[i].distance < maxdist * RATIO)
            goodMatches.push_back(matches[i]);
    }
    Mat dst;
    drawMatches(box, keypoints_obj, scene, keypoints_sence, goodMatches, dst);
    imshow("output", dst);

    //-- Localize the object
    std::vector<Point2f> obj_pts;
    std::vector<Point2f> scene_pts;
    for (size_t i = 0; i < goodMatches.size(); i++) {
        //-- Get the keypoints from the good matches
        obj_pts.push_back(keypoints_obj[goodMatches[i].queryIdx].pt);
        scene_pts.push_back(keypoints_sence[goodMatches[i].trainIdx].pt);
    }
    Mat H = findHomography(obj_pts, scene_pts, RHO);
    // Mat H = findHomography(obj_pts, scene_pts, RANSAC); // failed to register here

    //-- Get the corners from image_1 (the object to be "detected")
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = Point(0, 0);
    obj_corners[1] = Point(box.cols, 0);
    obj_corners[2] = Point(box.cols, box.rows);
    obj_corners[3] = Point(0, box.rows);
    std::vector<Point2f> scene_corners(4);
    perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene - image_2)
    line(dst, scene_corners[0] + Point2f(box.cols, 0), scene_corners[1] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
    line(dst, scene_corners[1] + Point2f(box.cols, 0), scene_corners[2] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
    line(dst, scene_corners[2] + Point2f(box.cols, 0), scene_corners[3] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);
    line(dst, scene_corners[3] + Point2f(box.cols, 0), scene_corners[0] + Point2f(box.cols, 0), Scalar(0, 255, 0), 4);

    //-- Show detected matches
    imshow("Good Matches & Object detection", dst);
    imwrite("D:/result.png", dst);
    waitKey(0);
    return 0;
}

References

  1. https://blog.csdn.net/a40850273/article/details/86604603
  2. https://www.cnblogs.com/tianyalu/p/5467813.html
