A detailed, annotated image-stitching example. The code as a whole is rather messy; I plan to publish a cleaner, better-organized version later.
The complete code and data are available here: code and data link

#include <iostream>
#include <fstream>
#include <string>
#include "opencv2/opencv_modules.hpp"
#include <opencv2/core/utility.hpp>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/stitching/detail/autocalib.hpp"
#include "opencv2/stitching/detail/blenders.hpp"
#include "opencv2/stitching/detail/timelapsers.hpp"
#include "opencv2/stitching/detail/camera.hpp"
#include "opencv2/stitching/detail/exposure_compensate.hpp"
#include "opencv2/stitching/detail/matchers.hpp"
#include "opencv2/stitching/detail/motion_estimators.hpp"
#include "opencv2/stitching/detail/seam_finders.hpp"
#include "opencv2/stitching/detail/warpers.hpp"
#include "opencv2/stitching/warpers.hpp"
#include "opencv2/nonfree/nonfree.hpp" // OpenCV 2.4-style non-free (SURF) module; with 3.x SURF lives in opencv2/xfeatures2d

#define ENABLE_LOG 1
#define LOG(msg) std::cout << msg
#define LOGLN(msg) std::cout << msg << std::endl

using namespace std;
using namespace cv;
using namespace cv::detail;

static void printUsage()
{
    cout <<
        "Rotation model images stitcher.\n\n"
        "stitching_detailed img1 img2 [...imgN] [flags]\n\n"
        "Flags:\n"
        "  --preview\n"
        "      Run stitching in the preview mode. Works faster than usual mode,\n"
        "      but output image will have lower resolution.\n"
        "  --try_cuda (yes|no)\n"
        "      Try to use CUDA. The default value is 'no'. All default values\n"
        "      are for CPU mode.\n"
        "\nMotion Estimation Flags:\n"
        "  --work_megapix <float>\n"
        "      Resolution for image registration step. The default is 0.6 Mpx.\n"
        "  --features (surf|orb)\n"
        "      Type of features used for images matching. The default is surf.\n"
        "  --matcher (homography|affine)\n"
        "      Matcher used for pairwise image matching.\n"
        "  --estimator (homography|affine)\n"
        "      Type of estimator used for transformation estimation.\n"
        "  --match_conf <float>\n"
        "      Confidence for feature matching step. The default is 0.65 for surf and 0.3 for orb.\n"
        "  --conf_thresh <float>\n"
        "      Threshold for two images are from the same panorama confidence.\n"
        "      The default is 1.0.\n"
        "  --ba (no|reproj|ray|affine)\n"
        "      Bundle adjustment cost function. The default is ray.\n"
        "  --ba_refine_mask (mask)\n"
        "      Set refinement mask for bundle adjustment. It looks like 'x_xxx',\n"
        "      where 'x' means refine respective parameter and '_' means don't\n"
        "      refine one, and has the following format:\n"
        "      <fx><skew><ppx><aspect><ppy>. The default mask is 'xxxxx'. If bundle\n"
        "      adjustment doesn't support estimation of selected parameter then\n"
        "      the respective flag is ignored.\n"
        "  --wave_correct (no|horiz|vert)\n"
        "      Perform wave effect correction. The default is 'horiz'.\n"
        "  --save_graph <file_name>\n"
        "      Save matches graph represented in DOT language to <file_name> file.\n"
        "      Labels description: Nm is number of matches, Ni is number of inliers,\n"
        "      C is confidence.\n"
        "\nCompositing Flags:\n"
        "  --warp (affine|plane|cylindrical|spherical|fisheye|stereographic|compressedPlaneA2B1|compressedPlaneA1.5B1|compressedPlanePortraitA2B1|compressedPlanePortraitA1.5B1|paniniA2B1|paniniA1.5B1|paniniPortraitA2B1|paniniPortraitA1.5B1|mercator|transverseMercator)\n"
        "      Warp surface type. The default is 'spherical'.\n"
        "  --seam_megapix <float>\n"
        "      Resolution for seam estimation step. The default is 0.1 Mpx.\n"
        "  --seam (no|voronoi|gc_color|gc_colorgrad)\n"
        "      Seam estimation method. The default is 'gc_color'.\n"
        "  --compose_megapix <float>\n"
        "      Resolution for compositing step. Use -1 for original resolution.\n"
        "      The default is -1.\n"
        "  --expos_comp (no|gain|gain_blocks)\n"
        "      Exposure compensation method. The default is 'gain_blocks'.\n"
        "  --blend (no|feather|multiband)\n"
        "      Blending method. The default is 'multiband'.\n"
        "  --blend_strength <float>\n"
        "      Blending strength from [0,100] range. The default is 5.\n"
        "  --output <result_img>\n"
        "      The default is 'result.jpg'.\n"
        "  --timelapse (as_is|crop) \n"
        "      Output warped images separately as frames of a time lapse movie, with 'fixed_' prepended to input file names.\n"
        "  --rangewidth <int>\n"
        "      uses range_width to limit number of images to match with.\n";
}

// Default command line args
vector<String> img_names;
bool preview = false;        // preview mode runs faster but lowers the output resolution
bool try_cuda = false;       // whether to try CUDA acceleration
double work_megapix = 0.6;   // resolution (in Mpx) for the image registration step
double seam_megapix = 0.1;   // resolution (in Mpx) for the seam estimation step
double compose_megapix = -1; // resolution for the compositing step; -1 means original resolution
float conf_thresh = 1.f;     // confidence threshold for two images belonging to the same panorama
string features_type = "surf";        // feature type: SURF or ORB
string matcher_type = "homography";   // matcher: affine or homography (projective)
string estimator_type = "homography"; // initial transform estimator: affine or homography (projective)
string ba_cost_func = "ray";     // bundle adjustment cost function (no|reproj|ray|affine)
string ba_refine_mask = "xxxxx"; // mask over the camera intrinsic matrix K marking which parameters bundle adjustment refines
bool do_wave_correct = true;     // wave correction (no|horiz|vert); default is horizontal
WaveCorrectKind wave_correct = detail::WAVE_CORRECT_HORIZ;
bool save_graph = true; // save the matches graph to <file_name>; in its labels Nm is the number of matches, Ni the number of inliers, C the confidence
std::string save_graph_to;
string warp_type = "spherical"; // warp surface (affine|plane|cylindrical|spherical|fisheye|stereographic|...)
int expos_comp_type = ExposureCompensator::GAIN_BLOCKS; // exposure compensation (no|gain|gain_blocks)
float match_conf = 0.3f; // confidence threshold for feature matches
string seam_find_type = "gc_color";
int blend_type = Blender::MULTI_BAND; // blending method (no|feather|multiband)
int timelapse_type = Timelapser::AS_IS;
float blend_strength = 5; // blend band width as a percentage of the panorama size; for multi-band it determines the number of pyramid levels
string result_name = "result.jpg";
bool timelapse = false; // timelapse (as_is|crop)
int range_width = -1;   // limit on how many neighboring images each image is matched against

static int parseCmdArgs(int argc, char** argv)
{
    if (argc == 1)
    {
        printUsage();
        return -1;
    }
    for (int i = 1; i < argc; ++i)
    {
        if (string(argv[i]) == "--help" || string(argv[i]) == "/?")
        {
            printUsage();
            return -1;
        }
        else if (string(argv[i]) == "--preview")
        {
            preview = true;
        }
        else if (string(argv[i]) == "--try_cuda")
        {
            if (string(argv[i + 1]) == "no")
                try_cuda = false;
            else if (string(argv[i + 1]) == "yes")
                try_cuda = true;
            else
            {
                cout << "Bad --try_cuda flag value\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--work_megapix")
        {
            work_megapix = atof(argv[i + 1]);
            i++;
        }
        else if (string(argv[i]) == "--seam_megapix")
        {
            seam_megapix = atof(argv[i + 1]);
            i++;
        }
        else if (string(argv[i]) == "--compose_megapix")
        {
            compose_megapix = atof(argv[i + 1]);
            i++;
        }
        else if (string(argv[i]) == "--result")
        {
            result_name = argv[i + 1];
            i++;
        }
        else if (string(argv[i]) == "--features")
        {
            features_type = argv[i + 1];
            if (features_type == "orb")
                match_conf = 0.3f;
            i++;
        }
        else if (string(argv[i]) == "--matcher")
        {
            if (string(argv[i + 1]) == "homography" || string(argv[i + 1]) == "affine")
                matcher_type = argv[i + 1];
            else
            {
                cout << "Bad --matcher flag value\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--estimator")
        {
            if (string(argv[i + 1]) == "homography" || string(argv[i + 1]) == "affine")
                estimator_type = argv[i + 1];
            else
            {
                cout << "Bad --estimator flag value\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--match_conf")
        {
            match_conf = static_cast<float>(atof(argv[i + 1]));
            i++;
        }
        else if (string(argv[i]) == "--conf_thresh")
        {
            conf_thresh = static_cast<float>(atof(argv[i + 1]));
            i++;
        }
        else if (string(argv[i]) == "--ba")
        {
            ba_cost_func = argv[i + 1];
            i++;
        }
        else if (string(argv[i]) == "--ba_refine_mask")
        {
            ba_refine_mask = argv[i + 1];
            if (ba_refine_mask.size() != 5)
            {
                cout << "Incorrect refinement mask length.\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--wave_correct")
        {
            if (string(argv[i + 1]) == "no")
                do_wave_correct = false;
            else if (string(argv[i + 1]) == "horiz")
            {
                do_wave_correct = true;
                wave_correct = detail::WAVE_CORRECT_HORIZ;
            }
            else if (string(argv[i + 1]) == "vert")
            {
                do_wave_correct = true;
                wave_correct = detail::WAVE_CORRECT_VERT;
            }
            else
            {
                cout << "Bad --wave_correct flag value\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--save_graph")
        {
            save_graph = true;
            save_graph_to = argv[i + 1];
            i++;
        }
        else if (string(argv[i]) == "--warp")
        {
            warp_type = string(argv[i + 1]);
            i++;
        }
        else if (string(argv[i]) == "--expos_comp")
        {
            if (string(argv[i + 1]) == "no")
                expos_comp_type = ExposureCompensator::NO;
            else if (string(argv[i + 1]) == "gain")
                expos_comp_type = ExposureCompensator::GAIN;
            else if (string(argv[i + 1]) == "gain_blocks")
                expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
            else
            {
                cout << "Bad exposure compensation method\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--seam")
        {
            if (string(argv[i + 1]) == "no" ||
                string(argv[i + 1]) == "voronoi" ||
                string(argv[i + 1]) == "gc_color" ||
                string(argv[i + 1]) == "gc_colorgrad" ||
                string(argv[i + 1]) == "dp_color" ||
                string(argv[i + 1]) == "dp_colorgrad")
                seam_find_type = argv[i + 1];
            else
            {
                cout << "Bad seam finding method\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--blend")
        {
            if (string(argv[i + 1]) == "no")
                blend_type = Blender::NO;
            else if (string(argv[i + 1]) == "feather")
                blend_type = Blender::FEATHER;
            else if (string(argv[i + 1]) == "multiband")
                blend_type = Blender::MULTI_BAND;
            else
            {
                cout << "Bad blending method\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--timelapse")
        {
            timelapse = true;
            if (string(argv[i + 1]) == "as_is")
                timelapse_type = Timelapser::AS_IS;
            else if (string(argv[i + 1]) == "crop")
                timelapse_type = Timelapser::CROP;
            else
            {
                cout << "Bad timelapse method\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--rangewidth")
        {
            range_width = atoi(argv[i + 1]);
            i++;
        }
        else if (string(argv[i]) == "--blend_strength")
        {
            blend_strength = static_cast<float>(atof(argv[i + 1]));
            i++;
        }
        else if (string(argv[i]) == "--output")
        {
            result_name = argv[i + 1];
            i++;
        }
        else
            img_names.push_back(argv[i]);
    }
    if (preview)
    {
        compose_megapix = 0.6;
    }
    return 0;
}

int main(int argc, char* argv[])
{
#if ENABLE_LOG
    int64 app_start_time = getTickCount(); // time the whole run
#endif
#if 0
    cv::setBreakOnError(true);
#endif
    int retval = parseCmdArgs(argc, argv);
    if (retval)
        return retval;

    // Check if have enough images
    int num_images = static_cast<int>(img_names.size());
    if (num_images < 2)
    {
        LOGLN("Need more images"); // LOG and LOGLN are just cout, #defined above
        return -1;
    }

    double work_scale = 1, seam_scale = 1, compose_scale = 1;
    bool is_work_scale_set = false, is_seam_scale_set = false, is_compose_scale_set = false;

    LOGLN("Finding features...");
#if ENABLE_LOG
    int64 t = getTickCount();
#endif

    // Step 1: find features (SURF or ORB)
    cv::initModule_nonfree(); // registers the non-free SURF module (OpenCV 2.4 API)
    Ptr<FeaturesFinder> finder; // Ptr is OpenCV's smart pointer
    if (features_type == "surf")
    {
#ifdef HAVE_OPENCV_XFEATURES2D
        if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
            finder = makePtr<SurfFeaturesFinderGpu>();
        else
#endif
            finder = makePtr<SurfFeaturesFinder>();
    }
    else if (features_type == "orb")
    {
        finder = makePtr<OrbFeaturesFinder>();
    }
    else
    {
        cout << "Unknown 2D features type: '" << features_type << "'.\n";
        return -1;
    }

    Mat full_img, img;
    vector<ImageFeatures> features(num_images);
    vector<Mat> images(num_images);
    vector<Size> full_img_sizes(num_images); // size of every input image
    double seam_work_aspect = 1;

    for (int i = 0; i < num_images; ++i)
    {
        full_img = imread(img_names[i]);
        full_img_sizes[i] = full_img.size();

        if (full_img.empty())
        {
            LOGLN("Can't open image " << img_names[i]);
            return -1;
        }
        if (work_megapix < 0)
        {
            img = full_img;
            work_scale = 1;
            is_work_scale_set = true;
        }
        else
        {
            if (!is_work_scale_set)
            {
                work_scale = min(1.0, sqrt(work_megapix * 1e6 / full_img.size().area()));
                is_work_scale_set = true;
            }
            // work_scale scales width and height so that registration runs at roughly work_megapix (0.6 Mpx by default)
            resize(full_img, img, Size(), work_scale, work_scale, INTER_LINEAR_EXACT);
        }
        if (!is_seam_scale_set)
        {
            // the seam estimation resolution is defined the same way
            seam_scale = min(1.0, sqrt(seam_megapix * 1e6 / full_img.size().area()));
            seam_work_aspect = seam_scale / work_scale;
            is_seam_scale_set = true;
        }

        (*finder)(img, features[i]); // store the detection result in features
        features[i].img_idx = i;
        LOGLN("Features in image #" << i+1 << ": " << features[i].keypoints.size());

        // optional visualization of the detected keypoints:
//        vector<Mat> img_feature(num_images);
//        drawKeypoints(img, features[i].keypoints, img_feature[i], Scalar::all(-1));
//        namedWindow("feature");
//        imshow("feature", img_feature[i]);
//        waitKey(500);

        // from here on work at the seam estimation resolution
        resize(full_img, img, Size(), seam_scale, seam_scale, INTER_LINEAR_EXACT);
        images[i] = img.clone();
    }

    // free memory
    finder->collectGarbage();
    full_img.release();
    img.release();

    LOGLN("Finding features, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    LOG("Pairwise matching");
#if ENABLE_LOG
    t = getTickCount();
#endif
    vector<MatchesInfo> pairwise_matches;
    Ptr<FeaturesMatcher> matcher; // smart pointer again
    if (matcher_type == "affine")
        // 2-NN matching: a pair (a, b) counts as a match when d(a)/d(b) < 1 - match_conf,
        // where d(a) and d(b) are the two nearest descriptor distances; if fewer than
        // two candidates are found the point is discarded
        matcher = makePtr<AffineBestOf2NearestMatcher>(false, try_cuda, match_conf);
    else if (range_width == -1)
        matcher = makePtr<BestOf2NearestMatcher>(try_cuda, match_conf); // makePtr<T> builds a Ptr<T>
    else
        // range_width limits how many neighboring images each image is matched against
        matcher = makePtr<BestOf2NearestRangeMatcher>(range_width, try_cuda, match_conf);

    (*matcher)(features, pairwise_matches);
    matcher->collectGarbage();

    LOGLN("Pairwise matching, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    // Check if we should save the matches graph
    if (save_graph)
    {
        LOGLN("Saving matches graph...");
        ofstream f(save_graph_to.c_str());
        f << matchesGraphAsString(img_names, pairwise_matches, conf_thresh);
    }

    // Leave only images we are sure are from the same panorama.
    // leaveBiggestComponent updates features and pairwise_matches in place
    // (see the source walkthrough at https://www.cnblogs.com/jsxyhelu/p/6810964.html):
    // a pair is accepted when its confidence c = Ni / (8 + 0.3 * Nm) exceeds conf_thresh,
    // and the largest stitchable subset of images is kept
    vector<int> indices = leaveBiggestComponent(features, pairwise_matches, conf_thresh);
    vector<Mat> img_subset;
    vector<String> img_names_subset;
    vector<Size> full_img_sizes_subset;
    // restrict the image set to the stitchable subset
    for (size_t i = 0; i < indices.size(); ++i)
    {
        img_names_subset.push_back(img_names[indices[i]]);
        img_subset.push_back(images[indices[i]]);
        full_img_sizes_subset.push_back(full_img_sizes[indices[i]]);
    }

    images = img_subset;
    img_names = img_names_subset;
    full_img_sizes = full_img_sizes_subset;

    // Check if we still have enough images
    num_images = static_cast<int>(img_names.size());
    if (num_images < 2)
    {
        LOGLN("Need more images");
        return -1;
    }

    Ptr<Estimator> estimator;
    if (estimator_type == "affine")
        estimator = makePtr<AffineBasedEstimator>();
    else
        estimator = makePtr<HomographyBasedEstimator>();

    // the camera parameters hold the intrinsics plus the rotation/translation;
    // this is only an initial estimate, refined below by bundle adjustment
    vector<CameraParams> cameras;
    if (!(*estimator)(features, pairwise_matches, cameras))
    {
        cout << "Homography estimation failed.\n";
        return -1;
    }

    for (size_t i = 0; i < cameras.size(); ++i)
    {
        Mat R;
        cameras[i].R.convertTo(R, CV_32F); // convert the rotation matrix to float
        cameras[i].R = R;
        LOGLN("Initial camera intrinsics #" << indices[i]+1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R);
    }

    Ptr<detail::BundleAdjusterBase> adjuster;
    if (ba_cost_func == "reproj") adjuster = makePtr<detail::BundleAdjusterReproj>();
    else if (ba_cost_func == "ray") adjuster = makePtr<detail::BundleAdjusterRay>();
    else if (ba_cost_func == "affine") adjuster = makePtr<detail::BundleAdjusterAffinePartial>();
    else if (ba_cost_func == "no") adjuster = makePtr<NoBundleAdjuster>();
    else
    {
        cout << "Unknown bundle adjustment cost function: '" << ba_cost_func << "'.\n";
        return -1;
    }
    // arguably redundant: leaveBiggestComponent has already filtered with conf_thresh
    adjuster->setConfThresh(conf_thresh);
    // mask over the intrinsic matrix K marking the parameters bundle adjustment may refine
    Mat_<uchar> refine_mask = Mat::zeros(3, 3, CV_8U);
    if (ba_refine_mask[0] == 'x') refine_mask(0,0) = 1;
    if (ba_refine_mask[1] == 'x') refine_mask(0,1) = 1;
    if (ba_refine_mask[2] == 'x') refine_mask(0,2) = 1;
    if (ba_refine_mask[3] == 'x') refine_mask(1,1) = 1;
    if (ba_refine_mask[4] == 'x') refine_mask(1,2) = 1;
    adjuster->setRefinementMask(refine_mask);
    if (!(*adjuster)(features, pairwise_matches, cameras))
    {
        cout << "Camera parameters adjusting failed.\n";
        return -1;
    }

    // Find median focal length (the mean would work as well)
    vector<double> focals;
    for (size_t i = 0; i < cameras.size(); ++i)
    {
        LOGLN("Camera #" << indices[i]+1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R);
        focals.push_back(cameras[i].focal);
    }

    sort(focals.begin(), focals.end());
    float warped_image_scale;
    if (focals.size() % 2 == 1)
        warped_image_scale = static_cast<float>(focals[focals.size() / 2]);
    else
        warped_image_scale = static_cast<float>(focals[focals.size() / 2 - 1] + focals[focals.size() / 2]) * 0.5f;

    // wave correction, analogous to the "up vector" in the paper
    if (do_wave_correct)
    {
        vector<Mat> rmats;
        for (size_t i = 0; i < cameras.size(); ++i)
            rmats.push_back(cameras[i].R.clone());
        waveCorrect(rmats, wave_correct);
        for (size_t i = 0; i < cameras.size(); ++i)
            cameras[i].R = rmats[i];
    }

    // The photos were taken from different orientations; pasting them together directly
    // would break visual consistency, so they are first warped onto a common surface.
    LOGLN("Warping images (auxiliary)... ");
#if ENABLE_LOG
    t = getTickCount();
#endif
    vector<Point> corners(num_images);      // top-left corner of each warped image
    vector<UMat> masks_warped(num_images);  // warped image masks
    vector<UMat> images_warped(num_images); // warped images
    vector<Size> sizes(num_images);         // warped image sizes
    vector<UMat> masks(num_images);         // masks at the source (seam-scale) size

    // Prepare images masks
    for (int i = 0; i < num_images; ++i)
    {
        masks[i].create(images[i].size(), CV_8U);
        masks[i].setTo(Scalar::all(255)); // use every pixel of the source image
    }

    // Warp images and their masks onto the chosen surface (plane, cylinder, sphere, ...)
    Ptr<WarperCreator> warper_creator;
#ifdef HAVE_OPENCV_CUDAWARPING
    if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
    {
        if (warp_type == "plane")
            warper_creator = makePtr<cv::PlaneWarperGpu>();
        else if (warp_type == "cylindrical")
            warper_creator = makePtr<cv::CylindricalWarperGpu>();
        else if (warp_type == "spherical")
            warper_creator = makePtr<cv::SphericalWarperGpu>();
    }
    else
#endif
    {
        if (warp_type == "plane")
            warper_creator = makePtr<cv::PlaneWarper>();
        else if (warp_type == "affine")
            warper_creator = makePtr<cv::AffineWarper>();
        else if (warp_type == "cylindrical")
            warper_creator = makePtr<cv::CylindricalWarper>();
        else if (warp_type == "spherical")
            warper_creator = makePtr<cv::SphericalWarper>();
        else if (warp_type == "fisheye")
            warper_creator = makePtr<cv::FisheyeWarper>();
        else if (warp_type == "stereographic")
            warper_creator = makePtr<cv::StereographicWarper>();
        else if (warp_type == "compressedPlaneA2B1")
            warper_creator = makePtr<cv::CompressedRectilinearWarper>(2.0f, 1.0f);
        else if (warp_type == "compressedPlaneA1.5B1")
            warper_creator = makePtr<cv::CompressedRectilinearWarper>(1.5f, 1.0f);
        else if (warp_type == "compressedPlanePortraitA2B1")
            warper_creator = makePtr<cv::CompressedRectilinearPortraitWarper>(2.0f, 1.0f);
        else if (warp_type == "compressedPlanePortraitA1.5B1")
            warper_creator = makePtr<cv::CompressedRectilinearPortraitWarper>(1.5f, 1.0f);
        else if (warp_type == "paniniA2B1")
            warper_creator = makePtr<cv::PaniniWarper>(2.0f, 1.0f);
        else if (warp_type == "paniniA1.5B1")
            warper_creator = makePtr<cv::PaniniWarper>(1.5f, 1.0f);
        else if (warp_type == "paniniPortraitA2B1")
            warper_creator = makePtr<cv::PaniniPortraitWarper>(2.0f, 1.0f);
        else if (warp_type == "paniniPortraitA1.5B1")
            warper_creator = makePtr<cv::PaniniPortraitWarper>(1.5f, 1.0f);
        else if (warp_type == "mercator")
            warper_creator = makePtr<cv::MercatorWarper>();
        else if (warp_type == "transverseMercator")
            warper_creator = makePtr<cv::TransverseMercatorWarper>();
    }

    if (!warper_creator)
    {
        cout << "Can't create the following warper '" << warp_type << "'\n";
        return 1;
    }

    // the warper scale is the (median) focal length; it is multiplied by
    // seam_work_aspect because we are still at the seam estimation resolution
    Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>(warped_image_scale * seam_work_aspect));

    for (int i = 0; i < num_images; ++i)
    {
        Mat_<float> K; // camera intrinsics
        cameras[i].K().convertTo(K, CV_32F);
        float swa = (float)seam_work_aspect;
        K(0,0) *= swa; K(0,2) *= swa;
        K(1,1) *= swa; K(1,2) *= swa;

        // warp using the camera intrinsics and extrinsics; returns the warped
        // image and the coordinates of its top-left corner
        corners[i] = warper->warp(images[i], K, cameras[i].R, INTER_LINEAR, BORDER_REFLECT, images_warped[i]);
        sizes[i] = images_warped[i].size(); // warped image size

        // warped image mask
        warper->warp(masks[i], K, cameras[i].R, INTER_NEAREST, BORDER_CONSTANT, masks_warped[i]);
    }

    vector<UMat> images_warped_f(num_images);
    for (int i = 0; i < num_images; ++i)
        images_warped[i].convertTo(images_warped_f[i], CV_32F);

    LOGLN("Warping images, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type);
    compensator->feed(corners, images_warped, masks_warped);

    // seam finder
    Ptr<SeamFinder> seam_finder;
    if (seam_find_type == "no")
        seam_finder = makePtr<detail::NoSeamFinder>();
    else if (seam_find_type == "voronoi")
        seam_finder = makePtr<detail::VoronoiSeamFinder>();
    else if (seam_find_type == "gc_color")
    {
#ifdef HAVE_OPENCV_CUDALEGACY
        if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
            seam_finder = makePtr<detail::GraphCutSeamFinderGpu>(GraphCutSeamFinderBase::COST_COLOR);
        else
#endif
            seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR);
    }
    else if (seam_find_type == "gc_colorgrad")
    {
#ifdef HAVE_OPENCV_CUDALEGACY
        if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
            seam_finder = makePtr<detail::GraphCutSeamFinderGpu>(GraphCutSeamFinderBase::COST_COLOR_GRAD);
        else
#endif
            seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR_GRAD);
    }
    else if (seam_find_type == "dp_color")
        seam_finder = makePtr<detail::DpSeamFinder>(DpSeamFinder::COLOR);
    else if (seam_find_type == "dp_colorgrad")
        seam_finder = makePtr<detail::DpSeamFinder>(DpSeamFinder::COLOR_GRAD);
    if (!seam_finder)
    {
        cout << "Can't create the following seam finder '" << seam_find_type << "'\n";
        return 1;
    }

    // compute the seam-line masks
    seam_finder->find(images_warped_f, corners, masks_warped);

    // Release unused memory
    images.clear();
    images_warped.clear();
    images_warped_f.clear();
    masks.clear();

    LOGLN("Compositing...");
#if ENABLE_LOG
    t = getTickCount();
#endif
    // compositing runs at its own resolution, so the warped geometry has to be recomputed
    Mat img_warped, img_warped_s;
    Mat dilated_mask, seam_mask, mask, mask_warped;
    Ptr<Blender> blender;
    Ptr<Timelapser> timelapser;
    //double compose_seam_aspect = 1;
    double compose_work_aspect = 1;

    for (int img_idx = 0; img_idx < num_images; ++img_idx)
    {
        LOGLN("Compositing image #" << indices[img_idx]+1);

        // Read image and resize it if necessary
        full_img = imread(img_names[img_idx]);
        if (!is_compose_scale_set)
        {
            if (compose_megapix > 0)
                compose_scale = min(1.0, sqrt(compose_megapix * 1e6 / full_img.size().area()));
            is_compose_scale_set = true;

            // Compute relative scales
            //compose_seam_aspect = compose_scale / seam_scale;
            compose_work_aspect = compose_scale / work_scale;

            // Update warped image scale (warped_image_scale is the focal length)
            warped_image_scale *= static_cast<float>(compose_work_aspect);
            warper = warper_creator->create(warped_image_scale);

            // Update corners and sizes
            for (int i = 0; i < num_images; ++i)
            {
                // Update intrinsics
                cameras[i].focal *= compose_work_aspect;
                cameras[i].ppx *= compose_work_aspect; // principal point x
                cameras[i].ppy *= compose_work_aspect; // principal point y

                // Update corner and size
                Size sz = full_img_sizes[i];
                if (std::abs(compose_scale - 1) > 1e-1)
                {
                    sz.width = cvRound(full_img_sizes[i].width * compose_scale);
                    sz.height = cvRound(full_img_sizes[i].height * compose_scale);
                }

                Mat K;
                cameras[i].K().convertTo(K, CV_32F);
                Rect roi = warper->warpRoi(sz, K, cameras[i].R); // minimum bounding box of the projected image
                corners[i] = roi.tl(); // top-left corner
                sizes[i] = roi.size(); // size
            }
        }
        if (abs(compose_scale - 1) > 1e-1)
            resize(full_img, img, Size(), compose_scale, compose_scale, INTER_LINEAR_EXACT);
        else
            img = full_img;
        full_img.release();
        Size img_size = img.size();

        Mat K;
        cameras[img_idx].K().convertTo(K, CV_32F);

        // warp again, this time at the compositing resolution
        warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

        // Warp the current image mask
        mask.create(img_size, CV_8U);
        mask.setTo(Scalar::all(255));
        warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

        // Compensate exposure
        compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);

        img_warped.convertTo(img_warped_s, CV_16S);
        img_warped.release();
        img.release();
        mask.release();

        // Blending matters most around the seam line, and the boundary of the mask
        // produced by the seam finder is exactly the seam. Dilating that mask
        // (Mat() means the default 3x3 structuring element) opens up a band on both
        // sides of the seam for blending, which is especially important for feathering.
        dilate(masks_warped[img_idx], dilated_mask, Mat());
        resize(dilated_mask, seam_mask, mask_warped.size(), 0, 0, INTER_LINEAR_EXACT);
        // AND the warped mask with the dilated seam mask so the widened region stays
        // confined to the seam's neighborhood and the outer borders are unaffected
        mask_warped = seam_mask & mask_warped;

        if (!blender && !timelapse)
        {
            blender = Blender::createDefault(blend_type, try_cuda);
            Size dst_sz = resultRoi(corners, sizes).size();
            float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;
            if (blend_width < 1.f)
                blender = Blender::createDefault(Blender::NO, try_cuda);
            else if (blend_type == Blender::MULTI_BAND)
            {
                MultiBandBlender* mb = dynamic_cast<MultiBandBlender*>(blender.get());
                // number of bands, i.e. pyramid levels
                mb->setNumBands(static_cast<int>(ceil(log(blend_width)/log(2.)) - 1.));
                LOGLN("Multi-band blender, number of bands: " << mb->numBands());
            }
            else if (blend_type == Blender::FEATHER)
            {
                FeatherBlender* fb = dynamic_cast<FeatherBlender*>(blender.get());
                fb->setSharpness(1.f/blend_width); // feathering sharpness
                LOGLN("Feather blender, sharpness: " << fb->sharpness());
            }
            blender->prepare(corners, sizes);
        }
        else if (!timelapser && timelapse)
        {
            timelapser = Timelapser::createDefault(timelapse_type);
            timelapser->initialize(corners, sizes);
        }

        // Blend the current image
        if (timelapse)
        {
            timelapser->process(img_warped_s, Mat::ones(img_warped_s.size(), CV_8UC1), corners[img_idx]);
            String fixedFileName;
            size_t pos_s = String(img_names[img_idx]).find_last_of("/\\");
            if (pos_s == String::npos)
            {
                fixedFileName = "fixed_" + img_names[img_idx];
            }
            else
            {
                fixedFileName = "fixed_" + String(img_names[img_idx]).substr(pos_s + 1, String(img_names[img_idx]).length() - pos_s);
            }
            imwrite(fixedFileName, timelapser->getDst());
        }
        else
        {
            blender->feed(img_warped_s, mask_warped, corners[img_idx]);
        }
    }

    if (!timelapse)
    {
        Mat result, result_mask;
        blender->blend(result, result_mask);
        LOGLN("Compositing, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");
        imwrite(result_name, result);
    }

    LOGLN("Finished, total time: " << ((getTickCount() - app_start_time) / getTickFrequency()) << " sec");
    return 0;
}
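
For comparison, everything above is what OpenCV's high-level Stitcher class wraps: registration, warping, exposure compensation, seam finding and blending, just without exposing the individual parameters. When you do not need to tune those steps, the whole pipeline collapses to a few lines. A minimal sketch, assuming OpenCV 3.x (in 4.x, cv::Stitcher::create takes only the mode argument; in 2.4 use Stitcher::createDefault instead):

#include <iostream>
#include <vector>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/stitching.hpp"

int main(int argc, char* argv[])
{
    // load all images given on the command line
    std::vector<cv::Mat> imgs;
    for (int i = 1; i < argc; ++i)
        imgs.push_back(cv::imread(argv[i]));

    // PANORAMA mode assumes a rotating camera, matching the homography
    // model that stitching_detailed uses by default
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA, false);

    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(imgs, pano);
    if (status != cv::Stitcher::OK)
    {
        std::cout << "Stitching failed, status = " << int(status) << std::endl;
        return -1;
    }
    cv::imwrite("result.jpg", pano);
    return 0;
}

The trade-off is control: Stitcher picks sensible defaults for the feature type, seam finder and blender, while the detailed version lets you swap each stage via the command-line flags described above.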

Stitching result:
