Once feature points have been detected, matching them between images is the crucial next step. For image registration, matching accuracy is what matters most: fewer matches are acceptable, false matches are not.

For this image registration task I use KNN matching. The following definition is quoted from Baidu Baike:

The k-nearest-neighbor (kNN, k-NearestNeighbor) classification algorithm is one of the simplest methods in data-mining classification. "K nearest neighbors" literally means the k closest neighbors: each sample can be represented by the k samples nearest to it.
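The quoted definition can be sketched in a few lines of C++. This is a toy majority-vote classifier on labeled 2-D points; `Sample` and `knnClassify` are illustrative names, not part of the registration code below.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A labeled 2-D sample: (x, y) coordinates plus a class label.
struct Sample { double x, y; int label; };

// Classify the query point (qx, qy) by majority vote among its k nearest
// samples under Euclidean distance. Assumes labels are small non-negative
// integers (< 16) to keep the vote table simple.
int knnClassify(std::vector<Sample> data, double qx, double qy, int k) {
    // Sort samples by distance to the query point.
    std::sort(data.begin(), data.end(), [&](const Sample& a, const Sample& b) {
        return std::hypot(a.x - qx, a.y - qy) < std::hypot(b.x - qx, b.y - qy);
    });
    // Tally the labels of the k closest samples.
    std::vector<int> votes(16, 0);
    for (int i = 0; i < k && i < (int)data.size(); ++i)
        ++votes[data[i].label];
    // The most frequent label wins.
    return (int)(std::max_element(votes.begin(), votes.end()) - votes.begin());
}
```

With k = 5, a query whose five nearest samples split four-to-one is assigned the majority class, exactly the w_1 vs. w_3 situation described next.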


As the figure above shows, if k = 5, then among the five points nearest the unknown point x_u, four belong to w_1 and one belongs to w_3, so we conclude that x_u belongs to w_1.
In the feature-matching setting, the baseline rule is 1NN: match each feature to the one whose descriptor vector is nearest. Because this alone produces false matches, Lowe added a second rule: a match is accepted only when d(1NN)/d(2NN) < ratio, i.e. the nearest neighbor must be clearly closer than the second-nearest. Lowe's recommended ratio is 0.8.
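The ratio test itself is tiny. As a minimal sketch (plain C++, no OpenCV; `ratioTest` is an illustrative name), each entry holds the distances to the 1st and 2nd nearest neighbors, as a 2-NN descriptor matcher would return them:

```cpp
#include <utility>
#include <vector>

// Lowe's ratio test: keep match i only when its best distance is clearly
// smaller than its second-best one (d1 < ratio * d2). Each pair holds the
// distances of the 1st and 2nd nearest neighbors for one query descriptor.
std::vector<int> ratioTest(const std::vector<std::pair<float, float>>& knn,
                           float ratio) {
    std::vector<int> kept;  // indices of matches that pass the test
    for (int i = 0; i < (int)knn.size(); ++i)
        if (knn[i].first < ratio * knn[i].second)
            kept.push_back(i);
    return kept;
}
```

The full program later in this post applies the same test to the output of `knnMatch(..., 2)`, with a much stricter ratio of 0.3.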

Given the resulting set of matched points, the next step is to compute the transformation matrix relating the two images (the code estimates a homography). RANSAC is used for this.
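The RANSAC idea can be sketched on a much simpler model than a homography, a pure 2-D translation: repeatedly hypothesize a model from a minimal random sample, count how many matches agree with it, and keep the best hypothesis. This toy (`ransacTranslation`, an illustrative name) is only a sketch of the loop; the actual program delegates the real work to `cv::findHomography` with the RANSAC flag.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

// Estimate a pure translation mapping src[i] -> dst[i], tolerating outliers.
// One matched pair is a minimal sample for a translation, so each iteration
// hypothesizes from a single random pair, then scores it by inlier count.
Pt ransacTranslation(const std::vector<Pt>& src, const std::vector<Pt>& dst,
                     double tol, int iters) {
    Pt best{0, 0};
    int bestInliers = -1;
    for (int it = 0; it < iters; ++it) {
        int i = std::rand() % (int)src.size();        // minimal random sample
        Pt t{dst[i].x - src[i].x, dst[i].y - src[i].y};
        int inliers = 0;                              // score the hypothesis
        for (std::size_t j = 0; j < src.size(); ++j) {
            double ex = src[j].x + t.x - dst[j].x;
            double ey = src[j].y + t.y - dst[j].y;
            if (std::hypot(ex, ey) < tol) ++inliers;
        }
        if (inliers > bestInliers) { bestInliers = inliers; best = t; }
    }
    return best;
}
```

An outlier pair yields a hypothesis few other pairs agree with, so it never wins the inlier count; this is exactly how RANSAC discards the false matches that survive the ratio test.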

The final image-matching source code:


#include <iostream>
//#include <opencv2/xfeatures2d.hpp>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main()
{
	double start, duration_ms;
	cv::Mat mask1, mask2, imgshowcanny;
	cv::Mat imgShow1, imgShow2, imgShow3;

	// Read the ten source images from disk (as grayscale)
	cv::Mat rgb1 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\1.jpg", 0);
	cv::Mat rgb2 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\2.jpg", 0);
	cv::Mat rgb3 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\3.jpg", 0);
	cv::Mat rgb4 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\4.jpg", 0);
	cv::Mat rgb5 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\5.jpg", 0);
	cv::Mat rgb6 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\6.jpg", 0);
	cv::Mat rgb7 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\7.jpg", 0);
	cv::Mat rgb8 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\8.jpg", 0);
	cv::Mat rgb9 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\9.jpg", 0);
	cv::Mat rgb10 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\10.jpg", 0);
	vector<cv::KeyPoint> kp1, kp2, kp3, kp4, kp5, kp6, kp7, kp8, kp9, kp10;  // key points

	// Edge detection: build a detection mask from the Canny edges of image 1
	mask1 = rgb1.clone();
	//cout << "Canny edge detection" << endl;
	//mask1 = Mat::zeros(rgb1.size(), CV_8UC1);
	//mask2 = Mat::zeros(rgb2.size(), CV_8UC1);
	cv::Canny(mask1, mask1, 50, 200, 3);
	//cv::Canny(rgb2, mask2, 50, 150, 3);
	cv::Mat element = getStructuringElement(MORPH_RECT, Size(3, 3));
	dilate(mask1, mask1, element);
	cv::imwrite("F:/study work/Visual Odometry dev/VO_practice_2016.4/算法测试/KAZE算法/data/mask1.jpg", mask1);
	imgshowcanny = mask1.clone();
	cv::resize(imgshowcanny, imgshowcanny, cv::Size(imgshowcanny.cols / 2, imgshowcanny.rows / 2));
	cv::imshow("mask", imgshowcanny);

	// Feature detector: AKAZE
	cv::Ptr<cv::AKAZE> akaze = AKAZE::create(AKAZE::DESCRIPTOR_MLDB, 0, 3, 0.001f, 4, 4, KAZE::DIFF_PM_G2);
	Mat descriptors_1, descriptors_2, descriptors_3, descriptors_4, descriptors_5,
	    descriptors_6, descriptors_7, descriptors_8, descriptors_9, descriptors_10;
	start = double(getTickCount());
	akaze->detectAndCompute(rgb1, mask1, kp1, descriptors_1, false);
	duration_ms = (double(getTickCount()) - start) * 1000 / getTickFrequency();  // timing
	std::cout << "It took " << duration_ms << " ms to detect features in pic1 using AKAZE." << std::endl;
	start = double(getTickCount());
	akaze->detectAndCompute(rgb2, cv::Mat(), kp2, descriptors_2, false);
	duration_ms = (double(getTickCount()) - start) * 1000 / getTickFrequency();  // timing
	std::cout << "It took " << duration_ms << " ms to detect features in pic2 using AKAZE." << std::endl;
	akaze->detectAndCompute(rgb3, cv::Mat(), kp3, descriptors_3, false);
	akaze->detectAndCompute(rgb4, cv::Mat(), kp4, descriptors_4, false);
	akaze->detectAndCompute(rgb5, cv::Mat(), kp5, descriptors_5, false);
	akaze->detectAndCompute(rgb6, cv::Mat(), kp6, descriptors_6, false);
	akaze->detectAndCompute(rgb7, cv::Mat(), kp7, descriptors_7, false);
	akaze->detectAndCompute(rgb8, cv::Mat(), kp8, descriptors_8, false);
	akaze->detectAndCompute(rgb9, cv::Mat(), kp9, descriptors_9, false);
	akaze->detectAndCompute(rgb10, cv::Mat(), kp10, descriptors_10, false);

	// Visualization: show the detected key points
	cout << "Key points of two AKAZE images: " << kp1.size() << ", " << kp2.size() << endl;
	cv::drawKeypoints(rgb1, kp1, imgShow1, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
	cv::drawKeypoints(rgb2, kp2, imgShow2, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
	cv::drawKeypoints(rgb3, kp3, imgShow3, cv::Scalar::all(-1), cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
	cv::imwrite("F:/study work/Visual Odometry dev/VO_practice_2016.4/算法测试/KAZE算法/data/akaze1.jpg", imgShow1);
	cv::imwrite("F:/study work/Visual Odometry dev/VO_practice_2016.4/算法测试/KAZE算法/data/akaze2.jpg", imgShow2);
	cv::resize(imgShow2, imgShow2, cv::Size(imgShow2.cols / 2, imgShow2.rows / 2));
	cv::resize(imgShow1, imgShow1, cv::Size(imgShow1.cols / 2, imgShow1.rows / 2));
	cv::imshow("AKAZE_keypoints2", imgShow2);
	cv::imshow("AKAZE_keypoints1", imgShow1);
	cv::waitKey(0);  // pause until a key is pressed

	// Match the features with descriptor matchers
	BFMatcher bf_matcher;
	FlannBasedMatcher flann_matcher;
	cv::Ptr<cv::DescriptorMatcher> knn_matcher = cv::DescriptorMatcher::create("BruteForce-Hamming");
	std::vector<DMatch> flann_matches;
	std::vector<DMatch> bf_matches1;
	std::vector<DMatch> knn_match;
	std::vector<std::vector<cv::DMatch>> knn_matches1, knn_matches2, knn_matches3, knn_matches4,
	    knn_matches5, knn_matches6, knn_matches7, knn_matches8, knn_matches9;
	start = double(getTickCount());
	bf_matcher.match(descriptors_1, descriptors_2, bf_matches1);
	//flann_matcher.match(descriptors_1, descriptors_2, flann_matches);
	duration_ms = (double(getTickCount()) - start) * 1000 / getTickFrequency();  // timing
	std::cout << "It took " << duration_ms << " ms to do brute-force match on AKAZE features." << std::endl;
	start = double(getTickCount());
	knn_matcher->knnMatch(descriptors_1, descriptors_2, knn_matches1, 2);
	duration_ms = (double(getTickCount()) - start) * 1000 / getTickFrequency();  // timing
	std::cout << "It took " << duration_ms << " ms to do knn-match on AKAZE features." << std::endl;
	knn_matcher->knnMatch(descriptors_1, descriptors_3, knn_matches2, 2);
	knn_matcher->knnMatch(descriptors_1, descriptors_4, knn_matches3, 2);
	knn_matcher->knnMatch(descriptors_1, descriptors_5, knn_matches4, 2);
	knn_matcher->knnMatch(descriptors_1, descriptors_6, knn_matches5, 2);
	knn_matcher->knnMatch(descriptors_1, descriptors_7, knn_matches6, 2);
	knn_matcher->knnMatch(descriptors_1, descriptors_8, knn_matches7, 2);
	knn_matcher->knnMatch(descriptors_1, descriptors_9, knn_matches8, 2);
	knn_matcher->knnMatch(descriptors_1, descriptors_10, knn_matches9, 2);

	// Reject unreliable BF matches by estimating a homography with RANSAC
	vector<DMatch> inliers1, inliers2;
	cout << "Computing homography (RANSAC)" << endl;
	vector<Point2f> points1(bf_matches1.size());
	vector<Point2f> points2(bf_matches1.size());
	for (size_t i = 0; i < bf_matches1.size(); i++)
	{
		points1[i] = kp1[bf_matches1[i].queryIdx].pt;
		points2[i] = kp2[bf_matches1[i].trainIdx].pt;
	}
	// Compute the homography and mark the inliers
	vector<uchar> flag1(points1.size(), 0);
	Mat H1 = findHomography(points1, points2, cv::RANSAC, 3.0, flag1);
	Mat H2 = findHomography(points2, points1, cv::RANSAC, 3.0, flag1);
	cout << "points1_shape: " << points1.size() << endl;
	cout << "points2_shape: " << points2.size() << endl;
	cout << "H2_shape: " << H2.size() << endl;
	for (size_t i = 0; i < bf_matches1.size(); i++)
	{
		if (flag1[i])
			inliers1.push_back(bf_matches1[i]);
	}
	cout << "AKAZE BF matches inliers size = " << inliers1.size() << endl;

	// Reject unreliable Hamming-KNN matches with the ratio test
	vector<cv::DMatch> goodMatches1;
	for (size_t i = 0; i < knn_matches1.size(); i++)
		if (knn_matches1[i][0].distance < 0.3 * knn_matches1[i][1].distance)
			goodMatches1.push_back(knn_matches1[i][0]);
	vector<cv::DMatch> goodMatches2;
	for (size_t i = 0; i < knn_matches2.size(); i++)
		if (knn_matches2[i][0].distance < 0.3 * knn_matches2[i][1].distance)
			goodMatches2.push_back(knn_matches2[i][0]);
	vector<cv::DMatch> goodMatches3;
	for (size_t i = 0; i < knn_matches3.size(); i++)
		if (knn_matches3[i][0].distance < 0.3 * knn_matches3[i][1].distance)
			goodMatches3.push_back(knn_matches3[i][0]);
	vector<cv::DMatch> goodMatches4;
	for (size_t i = 0; i < knn_matches4.size(); i++)
		if (knn_matches4[i][0].distance < 0.3 * knn_matches4[i][1].distance)
			goodMatches4.push_back(knn_matches4[i][0]);
	vector<cv::DMatch> goodMatches5;
	for (size_t i = 0; i < knn_matches5.size(); i++)
		if (knn_matches5[i][0].distance < 0.3 * knn_matches5[i][1].distance)
			goodMatches5.push_back(knn_matches5[i][0]);
	vector<cv::DMatch> goodMatches6;
	for (size_t i = 0; i < knn_matches6.size(); i++)
		if (knn_matches6[i][0].distance < 0.3 * knn_matches6[i][1].distance)
			goodMatches6.push_back(knn_matches6[i][0]);
	vector<cv::DMatch> goodMatches7;
	for (size_t i = 0; i < knn_matches7.size(); i++)
		if (knn_matches7[i][0].distance < 0.3 * knn_matches7[i][1].distance)
			goodMatches7.push_back(knn_matches7[i][0]);
	vector<cv::DMatch> goodMatches8;
	for (size_t i = 0; i < knn_matches8.size(); i++)
		if (knn_matches8[i][0].distance < 0.3 * knn_matches8[i][1].distance)
			goodMatches8.push_back(knn_matches8[i][0]);
	vector<cv::DMatch> goodMatches9;
	for (size_t i = 0; i < knn_matches9.size(); i++)
		if (knn_matches9[i][0].distance < 0.3 * knn_matches9[i][1].distance)
			goodMatches9.push_back(knn_matches9[i][0]);
	cout << "AKAZE good knn matches size = " << goodMatches1.size() << endl;

	// Collect the matched point coordinates for each pair (1-2, 1-3, ..., 1-10)
	vector<Point2f> points3(goodMatches1.size()), points4(goodMatches1.size());
	for (size_t i = 0; i < goodMatches1.size(); i++)
	{
		points3[i] = kp1[goodMatches1[i].queryIdx].pt;
		points4[i] = kp2[goodMatches1[i].trainIdx].pt;
	}
	vector<Point2f> points5(goodMatches2.size()), points6(goodMatches2.size());
	for (size_t i = 0; i < goodMatches2.size(); i++)
	{
		points5[i] = kp1[goodMatches2[i].queryIdx].pt;
		points6[i] = kp3[goodMatches2[i].trainIdx].pt;
	}
	vector<Point2f> points7(goodMatches3.size()), points8(goodMatches3.size());
	for (size_t i = 0; i < goodMatches3.size(); i++)
	{
		points7[i] = kp1[goodMatches3[i].queryIdx].pt;
		points8[i] = kp4[goodMatches3[i].trainIdx].pt;
	}
	vector<Point2f> points9(goodMatches4.size()), points10(goodMatches4.size());
	for (size_t i = 0; i < goodMatches4.size(); i++)
	{
		points9[i] = kp1[goodMatches4[i].queryIdx].pt;
		points10[i] = kp5[goodMatches4[i].trainIdx].pt;
	}
	vector<Point2f> points11(goodMatches5.size()), points12(goodMatches5.size());
	for (size_t i = 0; i < goodMatches5.size(); i++)
	{
		points11[i] = kp1[goodMatches5[i].queryIdx].pt;
		points12[i] = kp6[goodMatches5[i].trainIdx].pt;
	}
	vector<Point2f> points13(goodMatches6.size()), points14(goodMatches6.size());
	for (size_t i = 0; i < goodMatches6.size(); i++)
	{
		points13[i] = kp1[goodMatches6[i].queryIdx].pt;
		points14[i] = kp7[goodMatches6[i].trainIdx].pt;
	}
	vector<Point2f> points15(goodMatches7.size()), points16(goodMatches7.size());
	for (size_t i = 0; i < goodMatches7.size(); i++)
	{
		points15[i] = kp1[goodMatches7[i].queryIdx].pt;
		points16[i] = kp8[goodMatches7[i].trainIdx].pt;
	}
	vector<Point2f> points17(goodMatches8.size()), points18(goodMatches8.size());
	for (size_t i = 0; i < goodMatches8.size(); i++)
	{
		points17[i] = kp1[goodMatches8[i].queryIdx].pt;
		points18[i] = kp9[goodMatches8[i].trainIdx].pt;
	}
	vector<Point2f> points19(goodMatches9.size()), points20(goodMatches9.size());
	for (size_t i = 0; i < goodMatches9.size(); i++)
	{
		points19[i] = kp1[goodMatches9[i].queryIdx].pt;
		points20[i] = kp10[goodMatches9[i].trainIdx].pt;
	}

	// Homographies mapping images 2..10 back into image 1's frame
	Mat H3 = findHomography(points4, points3, cv::RANSAC, 3.0, flag1);
	Mat H4 = findHomography(points6, points5, cv::RANSAC, 3.0, flag1);
	Mat H5 = findHomography(points8, points7, cv::RANSAC, 3.0, flag1);
	Mat H6 = findHomography(points10, points9, cv::RANSAC, 3.0, flag1);
	Mat H7 = findHomography(points12, points11, cv::RANSAC, 3.0, flag1);
	Mat H8 = findHomography(points14, points13, cv::RANSAC, 3.0, flag1);
	Mat H9 = findHomography(points16, points15, cv::RANSAC, 3.0, flag1);
	Mat H10 = findHomography(points18, points17, cv::RANSAC, 3.0, flag1);
	Mat H11 = findHomography(points20, points19, cv::RANSAC, 3.0, flag1);

	// Reload the color images, warp 2..10 into image 1's frame, and average
	cv::Mat imdst_2, imdst_3, imdst_4, imdst_5, imdst_6, imdst_7, imdst_8, imdst_9, imdst_10,
	    imsrc_1, imsrc_2, imsrc_3, imsrc_4, imsrc_5, imsrc_6, imsrc_7, imsrc_8, imsrc_9, imsrc_10,
	    warp_dst, imresult;
	imsrc_1 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\1.jpg");
	imsrc_2 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\2.jpg");
	imsrc_3 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\3.jpg");
	imsrc_4 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\4.jpg");
	imsrc_5 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\5.jpg");
	imsrc_6 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\6.jpg");
	imsrc_7 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\7.jpg");
	imsrc_8 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\8.jpg");
	imsrc_9 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\9.jpg");
	imsrc_10 = cv::imread("C:\\opencv-cpp\\opencvtest\\img_source\\1\\10.jpg");
	//warpAffine(im_src, warp_dst, H2, im_src.size());
	warpPerspective(imsrc_2, imdst_2, H3, imsrc_2.size());
	warpPerspective(imsrc_3, imdst_3, H4, imsrc_3.size());
	warpPerspective(imsrc_4, imdst_4, H5, imsrc_4.size());
	warpPerspective(imsrc_5, imdst_5, H6, imsrc_5.size());
	warpPerspective(imsrc_6, imdst_6, H7, imsrc_6.size());
	warpPerspective(imsrc_7, imdst_7, H8, imsrc_7.size());
	warpPerspective(imsrc_8, imdst_8, H9, imsrc_8.size());
	warpPerspective(imsrc_9, imdst_9, H10, imsrc_9.size());
	warpPerspective(imsrc_10, imdst_10, H11, imsrc_10.size());
	cv::Mat imsrc_1_, imdst_2_, imdst_3_, imdst_4_, imdst_5_, imdst_6_, imdst_7_, imdst_8_, imdst_9_, imdst_10_;
	imsrc_1.convertTo(imsrc_1_, CV_32F, 1.0 / 255.0);
	imdst_2.convertTo(imdst_2_, CV_32F, 1.0 / 255.0);
	imdst_3.convertTo(imdst_3_, CV_32F, 1.0 / 255.0);
	imdst_4.convertTo(imdst_4_, CV_32F, 1.0 / 255.0);
	imdst_5.convertTo(imdst_5_, CV_32F, 1.0 / 255.0);
	imdst_6.convertTo(imdst_6_, CV_32F, 1.0 / 255.0);
	imdst_7.convertTo(imdst_7_, CV_32F, 1.0 / 255.0);
	imdst_8.convertTo(imdst_8_, CV_32F, 1.0 / 255.0);
	imdst_9.convertTo(imdst_9_, CV_32F, 1.0 / 255.0);
	imdst_10.convertTo(imdst_10_, CV_32F, 1.0 / 255.0);
	imresult = (imsrc_1_ + imdst_2_ + imdst_3_ + imdst_4_ + imdst_5_ + imdst_6_
	            + imdst_7_ + imdst_8_ + imdst_9_ + imdst_10_) / 10;
	cv::imshow("result", imresult);
	cv::imwrite("D:\\opencv_cpp\\opencv_cpp\\source_images\\after_transactions\\2_after_transaction.jpg", imdst_2);
	cv::imwrite("D:\\opencv_cpp\\opencv_cpp\\source_images\\after_transactions\\3_after_transaction.jpg", imdst_3);

	// Visualization: show the matches
	cv::Mat AKAZE_BF_imgMatches;
	cv::Mat AKAZE_KNN_imgMatches;
	// (Alternative filter, unused: drop matches whose distance is much larger
	// than the minimum distance over all matches)
	//vector<cv::DMatch> goodMatches;
	//double minDis = 9999;
	//for (size_t i = 0; i < matches.size(); i++)
	//	if (matches[i].distance < minDis)
	//		minDis = matches[i].distance;
	//for (size_t i = 0; i < matches.size(); i++)
	//	if (matches[i].distance < 8 * minDis)
	//		goodMatches.push_back(matches[i]);
	// Show the good matches
	cv::drawMatches(rgb1, kp1, rgb2, kp2, inliers1, AKAZE_BF_imgMatches);
	cv::imwrite("F:/study work/Visual Odometry dev/VO_practice_2016.4/算法测试/KAZE算法/data/AKAZE-BF-Matches-inliers.jpg", AKAZE_BF_imgMatches);
	cv::drawMatches(rgb1, kp1, rgb2, kp2, goodMatches1, AKAZE_KNN_imgMatches);
	cv::imwrite("F:/study work/Visual Odometry dev/VO_practice_2016.4/算法测试/KAZE算法/data/AKAZE-Good-KNN-Matches.jpg", AKAZE_KNN_imgMatches);
	cv::resize(AKAZE_BF_imgMatches, AKAZE_BF_imgMatches, Size(AKAZE_BF_imgMatches.cols / 2, AKAZE_BF_imgMatches.rows / 2));
	cv::imshow("AKAZE-BF-Matches-inliers", AKAZE_BF_imgMatches);
	cv::resize(AKAZE_KNN_imgMatches, AKAZE_KNN_imgMatches, Size(AKAZE_KNN_imgMatches.cols / 2, AKAZE_KNN_imgMatches.rows / 2));
	cv::imshow("AKAZE-Good-KNN-Matches", AKAZE_KNN_imgMatches);
	cv::waitKey(0);
	return 0;
}

The code registers ten images, warping images 2 through 10 into the frame of image 1 and averaging them. The final result:
