If you are not yet familiar with OpenCV's image stitching methods, you may want to start with the following four blog posts I have put together:

  • Common OpenCV Image Stitching Methods (1): Direct Concatenation ("Hard" Stitching)

  • Common OpenCV Image Stitching Methods (2): Stitching Based on Template Matching

  • Common OpenCV Image Stitching Methods (3): Stitching Based on Feature Matching

  • Common OpenCV Image Stitching Methods (4): Stitching Based on the Stitcher Class

This post is an extended introduction to the Stitcher class. By walking through the usage and parameters of the sample program stitching_detailed.cpp, it shows the concrete steps and methods behind Stitcher-based stitching. First, take a look at its internal pipeline diagram (below):

The location of stitching_detailed.cpp is shown below; you can find it in your own OpenCV installation directory. I am using OpenCV 4.4 here.

The full source of stitching_detailed.cpp is as follows:

// 05_Image_Stitch_Stitching_Detailed.cpp : This file contains the "main" function. Program execution begins and ends there.
//
#include "pch.h"
#include <iostream>
#include <fstream>
#include <string>
#include "opencv2/opencv_modules.hpp"
#include <opencv2/core/utility.hpp>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/stitching/detail/autocalib.hpp"
#include "opencv2/stitching/detail/blenders.hpp"
#include "opencv2/stitching/detail/timelapsers.hpp"
#include "opencv2/stitching/detail/camera.hpp"
#include "opencv2/stitching/detail/exposure_compensate.hpp"
#include "opencv2/stitching/detail/matchers.hpp"
#include "opencv2/stitching/detail/motion_estimators.hpp"
#include "opencv2/stitching/detail/seam_finders.hpp"
#include "opencv2/stitching/detail/warpers.hpp"
#include "opencv2/stitching/warpers.hpp"

#ifdef HAVE_OPENCV_XFEATURES2D
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/xfeatures2d/nonfree.hpp"
#endif

#define ENABLE_LOG 1
#define LOG(msg) std::cout << msg
#define LOGLN(msg) std::cout << msg << std::endl

using namespace std;
using namespace cv;
using namespace cv::detail;

static void printUsage(char** argv)
{
    cout <<
        "Rotation model images stitcher.\n\n"
        << argv[0] << " img1 img2 [...imgN] [flags]\n\n"
        "Flags:\n"
        "  --preview\n"
        "      Run stitching in the preview mode. Works faster than usual mode,\n"
        "      but output image will have lower resolution.\n"
        "  --try_cuda (yes|no)\n"
        "      Try to use CUDA. The default value is 'no'. All default values\n"
        "      are for CPU mode.\n"
        "\nMotion Estimation Flags:\n"
        "  --work_megapix <float>\n"
        "      Resolution for image registration step. The default is 0.6 Mpx.\n"
        "  --features (surf|orb|sift|akaze)\n"
        "      Type of features used for images matching.\n"
        "      The default is surf if available, orb otherwise.\n"
        "  --matcher (homography|affine)\n"
        "      Matcher used for pairwise image matching.\n"
        "  --estimator (homography|affine)\n"
        "      Type of estimator used for transformation estimation.\n"
        "  --match_conf <float>\n"
        "      Confidence for feature matching step. The default is 0.65 for surf and 0.3 for orb.\n"
        "  --conf_thresh <float>\n"
        "      Threshold for two images are from the same panorama confidence.\n"
        "      The default is 1.0.\n"
        "  --ba (no|reproj|ray|affine)\n"
        "      Bundle adjustment cost function. The default is ray.\n"
        "  --ba_refine_mask (mask)\n"
        "      Set refinement mask for bundle adjustment. It looks like 'x_xxx',\n"
        "      where 'x' means refine respective parameter and '_' means don't\n"
        "      refine one, and has the following format:\n"
        "      <fx><skew><ppx><aspect><ppy>. The default mask is 'xxxxx'. If bundle\n"
        "      adjustment doesn't support estimation of selected parameter then\n"
        "      the respective flag is ignored.\n"
        "  --wave_correct (no|horiz|vert)\n"
        "      Perform wave effect correction. The default is 'horiz'.\n"
        "  --save_graph <file_name>\n"
        "      Save matches graph represented in DOT language to <file_name> file.\n"
        "      Labels description: Nm is number of matches, Ni is number of inliers,\n"
        "      C is confidence.\n"
        "\nCompositing Flags:\n"
        "  --warp (affine|plane|cylindrical|spherical|fisheye|stereographic|compressedPlaneA2B1|compressedPlaneA1.5B1|compressedPlanePortraitA2B1|compressedPlanePortraitA1.5B1|paniniA2B1|paniniA1.5B1|paniniPortraitA2B1|paniniPortraitA1.5B1|mercator|transverseMercator)\n"
        "      Warp surface type. The default is 'spherical'.\n"
        "  --seam_megapix <float>\n"
        "      Resolution for seam estimation step. The default is 0.1 Mpx.\n"
        "  --seam (no|voronoi|gc_color|gc_colorgrad)\n"
        "      Seam estimation method. The default is 'gc_color'.\n"
        "  --compose_megapix <float>\n"
        "      Resolution for compositing step. Use -1 for original resolution.\n"
        "      The default is -1.\n"
        "  --expos_comp (no|gain|gain_blocks|channels|channels_blocks)\n"
        "      Exposure compensation method. The default is 'gain_blocks'.\n"
        "  --expos_comp_nr_feeds <int>\n"
        "      Number of exposure compensation feed. The default is 1.\n"
        "  --expos_comp_nr_filtering <int>\n"
        "      Number of filtering iterations of the exposure compensation gains.\n"
        "      Only used when using a block exposure compensation method.\n"
        "      The default is 2.\n"
        "  --expos_comp_block_size <int>\n"
        "      BLock size in pixels used by the exposure compensator.\n"
        "      Only used when using a block exposure compensation method.\n"
        "      The default is 32.\n"
        "  --blend (no|feather|multiband)\n"
        "      Blending method. The default is 'multiband'.\n"
        "  --blend_strength <float>\n"
        "      Blending strength from [0,100] range. The default is 5.\n"
        "  --output <result_img>\n"
        "      The default is 'result.jpg'.\n"
        "  --timelapse (as_is|crop) \n"
        "      Output warped images separately as frames of a time lapse movie, with 'fixed_' prepended to input file names.\n"
        "  --rangewidth <int>\n"
        "      uses range_width to limit number of images to match with.\n";
}

// Default command line args
vector<String> img_names;
bool preview = false;
bool try_cuda = false;
double work_megapix = 0.6;
double seam_megapix = 0.1;
double compose_megapix = -1;
float conf_thresh = 1.f;
#ifdef HAVE_OPENCV_XFEATURES2D
string features_type = "surf";
float match_conf = 0.65f;
#else
string features_type = "orb";
float match_conf = 0.3f;
#endif
string matcher_type = "homography";
string estimator_type = "homography";
string ba_cost_func = "ray";
string ba_refine_mask = "xxxxx";
bool do_wave_correct = true;
WaveCorrectKind wave_correct = detail::WAVE_CORRECT_HORIZ;
bool save_graph = false;
std::string save_graph_to;
string warp_type = "spherical";
int expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
int expos_comp_nr_feeds = 1;
int expos_comp_nr_filtering = 2;
int expos_comp_block_size = 32;
string seam_find_type = "gc_color";
int blend_type = Blender::MULTI_BAND;
int timelapse_type = Timelapser::AS_IS;
float blend_strength = 5;
string result_name = "result.jpg";
bool timelapse = false;
int range_width = -1;

static int parseCmdArgs(int argc, char** argv)
{
    if (argc == 1)
    {
        printUsage(argv);
        return -1;
    }
    for (int i = 1; i < argc; ++i)
    {
        if (string(argv[i]) == "--help" || string(argv[i]) == "/?")
        {
            printUsage(argv);
            return -1;
        }
        else if (string(argv[i]) == "--preview")
            preview = true;
        else if (string(argv[i]) == "--try_cuda")
        {
            if (string(argv[i + 1]) == "no")
                try_cuda = false;
            else if (string(argv[i + 1]) == "yes")
                try_cuda = true;
            else
            {
                cout << "Bad --try_cuda flag value\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--work_megapix") { work_megapix = atof(argv[i + 1]); i++; }
        else if (string(argv[i]) == "--seam_megapix") { seam_megapix = atof(argv[i + 1]); i++; }
        else if (string(argv[i]) == "--compose_megapix") { compose_megapix = atof(argv[i + 1]); i++; }
        else if (string(argv[i]) == "--result") { result_name = argv[i + 1]; i++; }
        else if (string(argv[i]) == "--features")
        {
            features_type = argv[i + 1];
            if (string(features_type) == "orb")
                match_conf = 0.3f;
            i++;
        }
        else if (string(argv[i]) == "--matcher")
        {
            if (string(argv[i + 1]) == "homography" || string(argv[i + 1]) == "affine")
                matcher_type = argv[i + 1];
            else
            {
                cout << "Bad --matcher flag value\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--estimator")
        {
            if (string(argv[i + 1]) == "homography" || string(argv[i + 1]) == "affine")
                estimator_type = argv[i + 1];
            else
            {
                cout << "Bad --estimator flag value\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--match_conf") { match_conf = static_cast<float>(atof(argv[i + 1])); i++; }
        else if (string(argv[i]) == "--conf_thresh") { conf_thresh = static_cast<float>(atof(argv[i + 1])); i++; }
        else if (string(argv[i]) == "--ba") { ba_cost_func = argv[i + 1]; i++; }
        else if (string(argv[i]) == "--ba_refine_mask")
        {
            ba_refine_mask = argv[i + 1];
            if (ba_refine_mask.size() != 5)
            {
                cout << "Incorrect refinement mask length.\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--wave_correct")
        {
            if (string(argv[i + 1]) == "no")
                do_wave_correct = false;
            else if (string(argv[i + 1]) == "horiz")
            {
                do_wave_correct = true;
                wave_correct = detail::WAVE_CORRECT_HORIZ;
            }
            else if (string(argv[i + 1]) == "vert")
            {
                do_wave_correct = true;
                wave_correct = detail::WAVE_CORRECT_VERT;
            }
            else
            {
                cout << "Bad --wave_correct flag value\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--save_graph")
        {
            save_graph = true;
            save_graph_to = argv[i + 1];
            i++;
        }
        else if (string(argv[i]) == "--warp") { warp_type = string(argv[i + 1]); i++; }
        else if (string(argv[i]) == "--expos_comp")
        {
            if (string(argv[i + 1]) == "no")
                expos_comp_type = ExposureCompensator::NO;
            else if (string(argv[i + 1]) == "gain")
                expos_comp_type = ExposureCompensator::GAIN;
            else if (string(argv[i + 1]) == "gain_blocks")
                expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
            else if (string(argv[i + 1]) == "channels")
                expos_comp_type = ExposureCompensator::CHANNELS;
            else if (string(argv[i + 1]) == "channels_blocks")
                expos_comp_type = ExposureCompensator::CHANNELS_BLOCKS;
            else
            {
                cout << "Bad exposure compensation method\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--expos_comp_nr_feeds") { expos_comp_nr_feeds = atoi(argv[i + 1]); i++; }
        else if (string(argv[i]) == "--expos_comp_nr_filtering") { expos_comp_nr_filtering = atoi(argv[i + 1]); i++; }
        else if (string(argv[i]) == "--expos_comp_block_size") { expos_comp_block_size = atoi(argv[i + 1]); i++; }
        else if (string(argv[i]) == "--seam")
        {
            if (string(argv[i + 1]) == "no" ||
                string(argv[i + 1]) == "voronoi" ||
                string(argv[i + 1]) == "gc_color" ||
                string(argv[i + 1]) == "gc_colorgrad" ||
                string(argv[i + 1]) == "dp_color" ||
                string(argv[i + 1]) == "dp_colorgrad")
                seam_find_type = argv[i + 1];
            else
            {
                cout << "Bad seam finding method\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--blend")
        {
            if (string(argv[i + 1]) == "no")
                blend_type = Blender::NO;
            else if (string(argv[i + 1]) == "feather")
                blend_type = Blender::FEATHER;
            else if (string(argv[i + 1]) == "multiband")
                blend_type = Blender::MULTI_BAND;
            else
            {
                cout << "Bad blending method\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--timelapse")
        {
            timelapse = true;
            if (string(argv[i + 1]) == "as_is")
                timelapse_type = Timelapser::AS_IS;
            else if (string(argv[i + 1]) == "crop")
                timelapse_type = Timelapser::CROP;
            else
            {
                cout << "Bad timelapse method\n";
                return -1;
            }
            i++;
        }
        else if (string(argv[i]) == "--rangewidth") { range_width = atoi(argv[i + 1]); i++; }
        else if (string(argv[i]) == "--blend_strength") { blend_strength = static_cast<float>(atof(argv[i + 1])); i++; }
        else if (string(argv[i]) == "--output") { result_name = argv[i + 1]; i++; }
        else
            img_names.push_back(argv[i]);
    }
    if (preview)
    {
        compose_megapix = 0.6;
    }
    return 0;
}

int main(int argc, char* argv[])
{
#if ENABLE_LOG
    int64 app_start_time = getTickCount();
#endif

#if 0
    cv::setBreakOnError(true);
#endif

    int retval = parseCmdArgs(argc, argv);
    if (retval)
        return retval;

    // Check if have enough images
    int num_images = static_cast<int>(img_names.size());
    if (num_images < 2)
    {
        LOGLN("Need more images");
        return -1;
    }

    double work_scale = 1, seam_scale = 1, compose_scale = 1;
    bool is_work_scale_set = false, is_seam_scale_set = false, is_compose_scale_set = false;

    LOGLN("Finding features...");
#if ENABLE_LOG
    int64 t = getTickCount();
#endif

    Ptr<Feature2D> finder;
    if (features_type == "orb")
    {
        finder = ORB::create();
    }
    else if (features_type == "akaze")
    {
        finder = AKAZE::create();
    }
#ifdef HAVE_OPENCV_XFEATURES2D
    else if (features_type == "surf")
    {
        finder = xfeatures2d::SURF::create();
    }
#endif
    else if (features_type == "sift")
    {
        finder = SIFT::create();
    }
    else
    {
        cout << "Unknown 2D features type: '" << features_type << "'.\n";
        return -1;
    }

    Mat full_img, img;
    vector<ImageFeatures> features(num_images);
    vector<Mat> images(num_images);
    vector<Size> full_img_sizes(num_images);
    double seam_work_aspect = 1;

    for (int i = 0; i < num_images; ++i)
    {
        full_img = imread(samples::findFile(img_names[i]));
        full_img_sizes[i] = full_img.size();

        if (full_img.empty())
        {
            LOGLN("Can't open image " << img_names[i]);
            return -1;
        }
        if (work_megapix < 0)
        {
            img = full_img;
            work_scale = 1;
            is_work_scale_set = true;
        }
        else
        {
            if (!is_work_scale_set)
            {
                work_scale = min(1.0, sqrt(work_megapix * 1e6 / full_img.size().area()));
                is_work_scale_set = true;
            }
            resize(full_img, img, Size(), work_scale, work_scale, INTER_LINEAR_EXACT);
        }
        if (!is_seam_scale_set)
        {
            seam_scale = min(1.0, sqrt(seam_megapix * 1e6 / full_img.size().area()));
            seam_work_aspect = seam_scale / work_scale;
            is_seam_scale_set = true;
        }

        computeImageFeatures(finder, img, features[i]);
        features[i].img_idx = i;
        LOGLN("Features in image #" << i + 1 << ": " << features[i].keypoints.size());

        resize(full_img, img, Size(), seam_scale, seam_scale, INTER_LINEAR_EXACT);
        images[i] = img.clone();
    }

    full_img.release();
    img.release();

    LOGLN("Finding features, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    LOG("Pairwise matching");
#if ENABLE_LOG
    t = getTickCount();
#endif
    vector<MatchesInfo> pairwise_matches;
    Ptr<FeaturesMatcher> matcher;
    if (matcher_type == "affine")
        matcher = makePtr<AffineBestOf2NearestMatcher>(false, try_cuda, match_conf);
    else if (range_width == -1)
        matcher = makePtr<BestOf2NearestMatcher>(try_cuda, match_conf);
    else
        matcher = makePtr<BestOf2NearestRangeMatcher>(range_width, try_cuda, match_conf);
    (*matcher)(features, pairwise_matches);
    matcher->collectGarbage();
    LOGLN("Pairwise matching, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    // Check if we should save matches graph
    if (save_graph)
    {
        LOGLN("Saving matches graph...");
        ofstream f(save_graph_to.c_str());
        f << matchesGraphAsString(img_names, pairwise_matches, conf_thresh);
    }

    // Leave only images we are sure are from the same panorama
    vector<int> indices = leaveBiggestComponent(features, pairwise_matches, conf_thresh);
    vector<Mat> img_subset;
    vector<String> img_names_subset;
    vector<Size> full_img_sizes_subset;
    for (size_t i = 0; i < indices.size(); ++i)
    {
        img_names_subset.push_back(img_names[indices[i]]);
        img_subset.push_back(images[indices[i]]);
        full_img_sizes_subset.push_back(full_img_sizes[indices[i]]);
    }
    images = img_subset;
    img_names = img_names_subset;
    full_img_sizes = full_img_sizes_subset;

    // Check if we still have enough images
    num_images = static_cast<int>(img_names.size());
    if (num_images < 2)
    {
        LOGLN("Need more images");
        return -1;
    }

    Ptr<Estimator> estimator;
    if (estimator_type == "affine")
        estimator = makePtr<AffineBasedEstimator>();
    else
        estimator = makePtr<HomographyBasedEstimator>();

    vector<CameraParams> cameras;
    if (!(*estimator)(features, pairwise_matches, cameras))
    {
        cout << "Homography estimation failed.\n";
        return -1;
    }

    for (size_t i = 0; i < cameras.size(); ++i)
    {
        Mat R;
        cameras[i].R.convertTo(R, CV_32F);
        cameras[i].R = R;
        LOGLN("Initial camera intrinsics #" << indices[i] + 1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R);
    }

    Ptr<detail::BundleAdjusterBase> adjuster;
    if (ba_cost_func == "reproj") adjuster = makePtr<detail::BundleAdjusterReproj>();
    else if (ba_cost_func == "ray") adjuster = makePtr<detail::BundleAdjusterRay>();
    else if (ba_cost_func == "affine") adjuster = makePtr<detail::BundleAdjusterAffinePartial>();
    else if (ba_cost_func == "no") adjuster = makePtr<NoBundleAdjuster>();
    else
    {
        cout << "Unknown bundle adjustment cost function: '" << ba_cost_func << "'.\n";
        return -1;
    }
    adjuster->setConfThresh(conf_thresh);
    Mat_<uchar> refine_mask = Mat::zeros(3, 3, CV_8U);
    if (ba_refine_mask[0] == 'x') refine_mask(0, 0) = 1;
    if (ba_refine_mask[1] == 'x') refine_mask(0, 1) = 1;
    if (ba_refine_mask[2] == 'x') refine_mask(0, 2) = 1;
    if (ba_refine_mask[3] == 'x') refine_mask(1, 1) = 1;
    if (ba_refine_mask[4] == 'x') refine_mask(1, 2) = 1;
    adjuster->setRefinementMask(refine_mask);
    if (!(*adjuster)(features, pairwise_matches, cameras))
    {
        cout << "Camera parameters adjusting failed.\n";
        return -1;
    }

    // Find median focal length
    vector<double> focals;
    for (size_t i = 0; i < cameras.size(); ++i)
    {
        LOGLN("Camera #" << indices[i] + 1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R);
        focals.push_back(cameras[i].focal);
    }
    sort(focals.begin(), focals.end());
    float warped_image_scale;
    if (focals.size() % 2 == 1)
        warped_image_scale = static_cast<float>(focals[focals.size() / 2]);
    else
        warped_image_scale = static_cast<float>(focals[focals.size() / 2 - 1] + focals[focals.size() / 2]) * 0.5f;

    if (do_wave_correct)
    {
        vector<Mat> rmats;
        for (size_t i = 0; i < cameras.size(); ++i)
            rmats.push_back(cameras[i].R.clone());
        waveCorrect(rmats, wave_correct);
        for (size_t i = 0; i < cameras.size(); ++i)
            cameras[i].R = rmats[i];
    }

    LOGLN("Warping images (auxiliary)... ");
#if ENABLE_LOG
    t = getTickCount();
#endif

    vector<Point> corners(num_images);
    vector<UMat> masks_warped(num_images);
    vector<UMat> images_warped(num_images);
    vector<Size> sizes(num_images);
    vector<UMat> masks(num_images);

    // Prepare images masks
    for (int i = 0; i < num_images; ++i)
    {
        masks[i].create(images[i].size(), CV_8U);
        masks[i].setTo(Scalar::all(255));
    }

    // Warp images and their masks
    Ptr<WarperCreator> warper_creator;
#ifdef HAVE_OPENCV_CUDAWARPING
    if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
    {
        if (warp_type == "plane")
            warper_creator = makePtr<cv::PlaneWarperGpu>();
        else if (warp_type == "cylindrical")
            warper_creator = makePtr<cv::CylindricalWarperGpu>();
        else if (warp_type == "spherical")
            warper_creator = makePtr<cv::SphericalWarperGpu>();
    }
    else
#endif
    {
        if (warp_type == "plane")
            warper_creator = makePtr<cv::PlaneWarper>();
        else if (warp_type == "affine")
            warper_creator = makePtr<cv::AffineWarper>();
        else if (warp_type == "cylindrical")
            warper_creator = makePtr<cv::CylindricalWarper>();
        else if (warp_type == "spherical")
            warper_creator = makePtr<cv::SphericalWarper>();
        else if (warp_type == "fisheye")
            warper_creator = makePtr<cv::FisheyeWarper>();
        else if (warp_type == "stereographic")
            warper_creator = makePtr<cv::StereographicWarper>();
        else if (warp_type == "compressedPlaneA2B1")
            warper_creator = makePtr<cv::CompressedRectilinearWarper>(2.0f, 1.0f);
        else if (warp_type == "compressedPlaneA1.5B1")
            warper_creator = makePtr<cv::CompressedRectilinearWarper>(1.5f, 1.0f);
        else if (warp_type == "compressedPlanePortraitA2B1")
            warper_creator = makePtr<cv::CompressedRectilinearPortraitWarper>(2.0f, 1.0f);
        else if (warp_type == "compressedPlanePortraitA1.5B1")
            warper_creator = makePtr<cv::CompressedRectilinearPortraitWarper>(1.5f, 1.0f);
        else if (warp_type == "paniniA2B1")
            warper_creator = makePtr<cv::PaniniWarper>(2.0f, 1.0f);
        else if (warp_type == "paniniA1.5B1")
            warper_creator = makePtr<cv::PaniniWarper>(1.5f, 1.0f);
        else if (warp_type == "paniniPortraitA2B1")
            warper_creator = makePtr<cv::PaniniPortraitWarper>(2.0f, 1.0f);
        else if (warp_type == "paniniPortraitA1.5B1")
            warper_creator = makePtr<cv::PaniniPortraitWarper>(1.5f, 1.0f);
        else if (warp_type == "mercator")
            warper_creator = makePtr<cv::MercatorWarper>();
        else if (warp_type == "transverseMercator")
            warper_creator = makePtr<cv::TransverseMercatorWarper>();
    }

    if (!warper_creator)
    {
        cout << "Can't create the following warper '" << warp_type << "'\n";
        return 1;
    }

    Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>(warped_image_scale * seam_work_aspect));

    for (int i = 0; i < num_images; ++i)
    {
        Mat_<float> K;
        cameras[i].K().convertTo(K, CV_32F);
        float swa = (float)seam_work_aspect;
        K(0, 0) *= swa; K(0, 2) *= swa;
        K(1, 1) *= swa; K(1, 2) *= swa;

        corners[i] = warper->warp(images[i], K, cameras[i].R, INTER_LINEAR, BORDER_REFLECT, images_warped[i]);
        sizes[i] = images_warped[i].size();

        warper->warp(masks[i], K, cameras[i].R, INTER_NEAREST, BORDER_CONSTANT, masks_warped[i]);
    }

    vector<UMat> images_warped_f(num_images);
    for (int i = 0; i < num_images; ++i)
        images_warped[i].convertTo(images_warped_f[i], CV_32F);

    LOGLN("Warping images, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    LOGLN("Compensating exposure...");
#if ENABLE_LOG
    t = getTickCount();
#endif
    Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type);
    if (dynamic_cast<GainCompensator*>(compensator.get()))
    {
        GainCompensator* gcompensator = dynamic_cast<GainCompensator*>(compensator.get());
        gcompensator->setNrFeeds(expos_comp_nr_feeds);
    }
    if (dynamic_cast<ChannelsCompensator*>(compensator.get()))
    {
        ChannelsCompensator* ccompensator = dynamic_cast<ChannelsCompensator*>(compensator.get());
        ccompensator->setNrFeeds(expos_comp_nr_feeds);
    }
    if (dynamic_cast<BlocksCompensator*>(compensator.get()))
    {
        BlocksCompensator* bcompensator = dynamic_cast<BlocksCompensator*>(compensator.get());
        bcompensator->setNrFeeds(expos_comp_nr_feeds);
        bcompensator->setNrGainsFilteringIterations(expos_comp_nr_filtering);
        bcompensator->setBlockSize(expos_comp_block_size, expos_comp_block_size);
    }
    compensator->feed(corners, images_warped, masks_warped);
    LOGLN("Compensating exposure, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    LOGLN("Finding seams...");
#if ENABLE_LOG
    t = getTickCount();
#endif
    Ptr<SeamFinder> seam_finder;
    if (seam_find_type == "no")
        seam_finder = makePtr<detail::NoSeamFinder>();
    else if (seam_find_type == "voronoi")
        seam_finder = makePtr<detail::VoronoiSeamFinder>();
    else if (seam_find_type == "gc_color")
    {
#ifdef HAVE_OPENCV_CUDALEGACY
        if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
            seam_finder = makePtr<detail::GraphCutSeamFinderGpu>(GraphCutSeamFinderBase::COST_COLOR);
        else
#endif
            seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR);
    }
    else if (seam_find_type == "gc_colorgrad")
    {
#ifdef HAVE_OPENCV_CUDALEGACY
        if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
            seam_finder = makePtr<detail::GraphCutSeamFinderGpu>(GraphCutSeamFinderBase::COST_COLOR_GRAD);
        else
#endif
            seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR_GRAD);
    }
    else if (seam_find_type == "dp_color")
        seam_finder = makePtr<detail::DpSeamFinder>(DpSeamFinder::COLOR);
    else if (seam_find_type == "dp_colorgrad")
        seam_finder = makePtr<detail::DpSeamFinder>(DpSeamFinder::COLOR_GRAD);
    if (!seam_finder)
    {
        cout << "Can't create the following seam finder '" << seam_find_type << "'\n";
        return 1;
    }

    seam_finder->find(images_warped_f, corners, masks_warped);

    LOGLN("Finding seams, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

    // Release unused memory
    images.clear();
    images_warped.clear();
    images_warped_f.clear();
    masks.clear();

    LOGLN("Compositing...");
#if ENABLE_LOG
    t = getTickCount();
#endif
    Mat img_warped, img_warped_s;
    Mat dilated_mask, seam_mask, mask, mask_warped;
    Ptr<Blender> blender;
    Ptr<Timelapser> timelapser;
    //double compose_seam_aspect = 1;
    double compose_work_aspect = 1;

    for (int img_idx = 0; img_idx < num_images; ++img_idx)
    {
        LOGLN("Compositing image #" << indices[img_idx] + 1);

        // Read image and resize it if necessary
        full_img = imread(samples::findFile(img_names[img_idx]));
        if (!is_compose_scale_set)
        {
            if (compose_megapix > 0)
                compose_scale = min(1.0, sqrt(compose_megapix * 1e6 / full_img.size().area()));
            is_compose_scale_set = true;

            // Compute relative scales
            //compose_seam_aspect = compose_scale / seam_scale;
            compose_work_aspect = compose_scale / work_scale;

            // Update warped image scale
            warped_image_scale *= static_cast<float>(compose_work_aspect);
            warper = warper_creator->create(warped_image_scale);

            // Update corners and sizes
            for (int i = 0; i < num_images; ++i)
            {
                // Update intrinsics
                cameras[i].focal *= compose_work_aspect;
                cameras[i].ppx *= compose_work_aspect;
                cameras[i].ppy *= compose_work_aspect;

                // Update corner and size
                Size sz = full_img_sizes[i];
                if (std::abs(compose_scale - 1) > 1e-1)
                {
                    sz.width = cvRound(full_img_sizes[i].width * compose_scale);
                    sz.height = cvRound(full_img_sizes[i].height * compose_scale);
                }

                Mat K;
                cameras[i].K().convertTo(K, CV_32F);
                Rect roi = warper->warpRoi(sz, K, cameras[i].R);
                corners[i] = roi.tl();
                sizes[i] = roi.size();
            }
        }
        if (abs(compose_scale - 1) > 1e-1)
            resize(full_img, img, Size(), compose_scale, compose_scale, INTER_LINEAR_EXACT);
        else
            img = full_img;
        full_img.release();
        Size img_size = img.size();

        Mat K;
        cameras[img_idx].K().convertTo(K, CV_32F);

        // Warp the current image
        warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

        // Warp the current image mask
        mask.create(img_size, CV_8U);
        mask.setTo(Scalar::all(255));
        warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

        // Compensate exposure
        compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);

        img_warped.convertTo(img_warped_s, CV_16S);
        img_warped.release();
        img.release();
        mask.release();

        dilate(masks_warped[img_idx], dilated_mask, Mat());
        resize(dilated_mask, seam_mask, mask_warped.size(), 0, 0, INTER_LINEAR_EXACT);
        mask_warped = seam_mask & mask_warped;

        if (!blender && !timelapse)
        {
            blender = Blender::createDefault(blend_type, try_cuda);
            Size dst_sz = resultRoi(corners, sizes).size();
            float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;
            if (blend_width < 1.f)
                blender = Blender::createDefault(Blender::NO, try_cuda);
            else if (blend_type == Blender::MULTI_BAND)
            {
                MultiBandBlender* mb = dynamic_cast<MultiBandBlender*>(blender.get());
                mb->setNumBands(static_cast<int>(ceil(log(blend_width) / log(2.)) - 1.));
                LOGLN("Multi-band blender, number of bands: " << mb->numBands());
            }
            else if (blend_type == Blender::FEATHER)
            {
                FeatherBlender* fb = dynamic_cast<FeatherBlender*>(blender.get());
                fb->setSharpness(1.f / blend_width);
                LOGLN("Feather blender, sharpness: " << fb->sharpness());
            }
            blender->prepare(corners, sizes);
        }
        else if (!timelapser && timelapse)
        {
            timelapser = Timelapser::createDefault(timelapse_type);
            timelapser->initialize(corners, sizes);
        }

        // Blend the current image
        if (timelapse)
        {
            timelapser->process(img_warped_s, Mat::ones(img_warped_s.size(), CV_8UC1), corners[img_idx]);
            String fixedFileName;
            size_t pos_s = String(img_names[img_idx]).find_last_of("/\\");
            if (pos_s == String::npos)
            {
                fixedFileName = "fixed_" + img_names[img_idx];
            }
            else
            {
                fixedFileName = "fixed_" + String(img_names[img_idx]).substr(pos_s + 1, String(img_names[img_idx]).length() - pos_s);
            }
            imwrite(fixedFileName, timelapser->getDst());
        }
        else
        {
            blender->feed(img_warped_s, mask_warped, corners[img_idx]);
        }
    }

    if (!timelapse)
    {
        Mat result, result_mask;
        blender->blend(result, result_mask);

        LOGLN("Compositing, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

        imwrite(result_name, result);
    }

    LOGLN("Finished, total time: " << ((getTickCount() - app_start_time) / getTickFrequency()) << " sec");
    return 0;
}
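
A note on building: the listing above comes from a Visual Studio project (hence the pch.h include), so when compiling elsewhere you will typically remove that line. As an illustrative example (not from the original post), on Linux with OpenCV 4 installed and visible to pkg-config, something like the following should work:

g++ stitching_detailed.cpp -o stitching_detailed $(pkg-config --cflags --libs opencv4)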

How the stitching_detailed program runs

  • The program is invoked from the command line with the source images and its parameters
  • Feature detection: SURF or ORB is chosen; the default is SURF if xfeatures2d is available, otherwise ORB
  • Feature points are matched between images using the nearest-neighbor / second-nearest-neighbor method, and the confidences of the two best matches are recorded
  • The images are ordered and high-confidence images are collected into the same set; matches between low-confidence image pairs are discarded, leaving a sequence of images that genuinely match. In effect, all matches whose confidence exceeds the threshold are merged into one component
  • Camera parameters are roughly estimated for all images, and the rotation matrices are derived
  • Bundle adjustment then refines the rotation matrices more precisely
  • Wave correction, horizontal or vertical
  • Warping/stitching
  • Blending: multi-band blending plus exposure compensation (see the minimal code sketch after this list)
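
To make these steps concrete, below is a minimal sketch of the same pipeline written directly against the cv::detail API. This is an illustrative sketch, not the sample itself: the input file names are placeholders, the work/seam/compose rescaling and exposure compensation of stitching_detailed.cpp are omitted, error handling is skipped, and ORB is used so it builds without the xfeatures2d module.

// Minimal cv::detail pipeline sketch (illustrative only; no error handling).
#include <opencv2/opencv.hpp>
#include "opencv2/stitching/detail/matchers.hpp"
#include "opencv2/stitching/detail/motion_estimators.hpp"
#include "opencv2/stitching/detail/seam_finders.hpp"
#include "opencv2/stitching/detail/blenders.hpp"
#include "opencv2/stitching/warpers.hpp"

int main()
{
    using namespace cv;
    using namespace cv::detail;

    // Placeholder input images -- replace with your own files
    std::vector<Mat> imgs = { imread("img1.jpg"), imread("img2.jpg") };
    int n = static_cast<int>(imgs.size());

    // 1) Feature detection (ORB, always available)
    Ptr<Feature2D> finder = ORB::create();
    std::vector<ImageFeatures> features(n);
    for (int i = 0; i < n; ++i)
    {
        computeImageFeatures(finder, imgs[i], features[i]);
        features[i].img_idx = i;
    }

    // 2) Pairwise matching (best-of-2-nearest with ratio test)
    std::vector<MatchesInfo> pairwise;
    BestOf2NearestMatcher matcher(false, 0.3f);
    matcher(features, pairwise);

    // 3) Rough camera estimation from pairwise homographies
    std::vector<CameraParams> cameras;
    HomographyBasedEstimator estimator;
    estimator(features, pairwise, cameras);
    for (auto& cam : cameras) { Mat R; cam.R.convertTo(R, CV_32F); cam.R = R; }

    // 4) Bundle adjustment (ray cost) + horizontal wave correction
    Ptr<BundleAdjusterBase> adjuster = makePtr<BundleAdjusterRay>();
    (*adjuster)(features, pairwise, cameras);
    std::vector<Mat> rmats;
    for (auto& cam : cameras) rmats.push_back(cam.R.clone());
    waveCorrect(rmats, WAVE_CORRECT_HORIZ);
    for (int i = 0; i < n; ++i) cameras[i].R = rmats[i];

    // 5) Warp images and masks onto a spherical surface
    float scale = static_cast<float>(cameras[0].focal); // median focal is better
    Ptr<RotationWarper> warper = makePtr<cv::SphericalWarper>()->create(scale);
    std::vector<Point> corners(n);
    std::vector<Size> sizes(n);
    std::vector<UMat> imgs_warped(n), masks_warped(n);
    for (int i = 0; i < n; ++i)
    {
        Mat K;
        cameras[i].K().convertTo(K, CV_32F);
        UMat mask(imgs[i].size(), CV_8U, Scalar::all(255));
        corners[i] = warper->warp(imgs[i], K, cameras[i].R, INTER_LINEAR, BORDER_REFLECT, imgs_warped[i]);
        sizes[i] = imgs_warped[i].size();
        warper->warp(mask, K, cameras[i].R, INTER_NEAREST, BORDER_CONSTANT, masks_warped[i]);
    }

    // 6) Graph-cut seam estimation on float images
    std::vector<UMat> imgs_warped_f(n);
    for (int i = 0; i < n; ++i) imgs_warped[i].convertTo(imgs_warped_f[i], CV_32F);
    GraphCutSeamFinder seam_finder(GraphCutSeamFinderBase::COST_COLOR);
    seam_finder.find(imgs_warped_f, corners, masks_warped);

    // 7) Multi-band blending of the warped images
    MultiBandBlender blender;
    blender.prepare(corners, sizes);
    for (int i = 0; i < n; ++i)
    {
        Mat img_s;
        imgs_warped[i].convertTo(img_s, CV_16S); // blender expects CV_16S input
        blender.feed(img_s, masks_warped[i], corners[i]);
    }
    Mat result, result_mask, result8u;
    blender.blend(result, result_mask);
    result.convertTo(result8u, CV_8U);
    imwrite("result.jpg", result8u);
    return 0;
}

Compare each numbered step with the list above: stitching_detailed.cpp is essentially this skeleton plus scale management, exposure compensation, and the many selectable back-ends.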

Command-line options of stitching_detailed

  • img1 img2 img3  Input images
  • --preview  Run in preview mode; faster than the normal mode, but the output image has a lower resolution (the compositing resolution compose_megapix is set to 0.6)
  • --try_cuda (yes|no)  Whether to try CUDA acceleration; the default is no, i.e. CPU mode
  • /* Motion estimation flags */
  • --work_megapix <float>  Resolution (in megapixels) for the image registration step; default 0.6
  • --features (surf|orb|sift|akaze)  Feature type used for image matching; the default is surf if available, otherwise orb
  • --matcher (homography|affine)  Matcher used for pairwise image matching
  • --estimator (homography|affine)  Estimator type used for transformation estimation
  • --match_conf <float>  Matching confidence for the feature matching step, i.e. the ratio between the nearest- and second-nearest-neighbor match distances; default 0.65 for surf, 0.3 for orb
  • --conf_thresh <float>  Confidence that two images belong to the same panorama; default 1.0
  • --ba (no|reproj|ray|affine)  Bundle adjustment cost function; the default is ray
  • --ba_refine_mask (mask)  Refinement mask for bundle adjustment, in the format <fx><skew><ppx><aspect><ppy>; default 'xxxxx'
  • --wave_correct (no|horiz|vert)  Wave correction: horizontal, vertical, or none; the default is horiz (horizontal)
  • --save_graph <file_name>  Save the matches graph, expressed in the DOT graph-description language, to a file; Nm is the number of matches, Ni the number of inliers, and C the confidence
  • /* Compositing flags: */
  • --warp (affine|plane|cylindrical|spherical|fisheye|stereographic|compressedPlaneA2B1|compressedPlaneA1.5B1|compressedPlanePortraitA2B1|compressedPlanePortraitA1.5B1|paniniA2B1|paniniA1.5B1|paniniPortraitA2B1|paniniPortraitA1.5B1|mercator|transverseMercator)  Warp surface type; the default is spherical
  • --seam_megapix <float>  Resolution for the seam estimation step; default 0.1
  • --seam (no|voronoi|gc_color|gc_colorgrad)  Seam estimation method; the default is gc_color
  • --compose_megapix <float>  Compositing resolution; -1 means the original resolution; default -1
  • --expos_comp (no|gain|gain_blocks|channels|channels_blocks)  Exposure compensation method; the default is gain_blocks
  • --blend (no|feather|multiband)  Blending method; the default is multi-band
  • --blend_strength <float>  Blending strength, in the range [0,100]; default 5
  • --output <result_img>  Output image file name; the default is result.jpg

An example command line and the program's runtime output are shown further below.
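
Before the actual run, here is a hypothetical invocation combining several of the flags above (the executable name and image paths are placeholders; every flag shown is accepted by the parser in the listing):

stitching_detailed ./imgs/boat1.jpg ./imgs/boat2.jpg ./imgs/boat3.jpg --features orb --matcher homography --ba ray --wave_correct horiz --warp cylindrical --seam gc_color --expos_comp gain_blocks --blend multiband --output pano.jpg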

To be continued...

There is a Taiwanese teacher on YouTube who gives a simple walkthrough of this sample, showing beginners how to read sample code and analyze its structure (video link below):

https://www.youtube.com/watch?v=1Pp8VIqN_ro

I also have a PDF for reference that goes deeper into the details of stitching (1 CSDN credit):

https://download.csdn.net/download/stq054188/12956421

Running with the default parameters, the detailed output is as follows:

E:\Practice\OpenCV\Algorithm_Summary\Image_Stitching\x64\Debug>05_Image_Stitch_Stitching_Detailed.exe ./imgs/boat1.jpg ./imgs/boat2.jpg ./imgs/boat3.jpg ./imgs/boat4.jpg ./imgs/boat5.jpg ./imgs/boat6.jpg
Finding features...
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\core\src\ocl.cpp (891) cv::ocl::haveOpenCL Initialize OpenCL runtime...
Features in image #1: 500
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\core\src\ocl.cpp (433) cv::ocl::OpenCLBinaryCacheConfigurator::OpenCLBinaryCacheConfigurator Successfully initialized OpenCL cache directory: C:\Users\A4080599\AppData\Local\Temp\opencv\4.4\opencl_cache\
[ INFO:0] global C:\build\master_winpack-build-win64-vc15\opencv\modules\core\src\ocl.cpp (457) cv::ocl::OpenCLBinaryCacheConfigurator::prepareCacheDirectoryForContext Preparing OpenCL cache configuration for context: NVIDIA_Corporation--GeForce_GTX_1070--411_31
Features in image #2: 500
Features in image #3: 500
Features in image #4: 500
Features in image #5: 500
Features in image #6: 500
Finding features, time: 5.46377 sec
Pairwise matchingPairwise matching, time: 3.24159 sec
Initial camera intrinsics #1:
K:
[534.6674906996568, 0, 474.5;0, 534.6674906996568, 316;0, 0, 1]
R:
[0.91843718, -0.09762425, -1.1678253;0.0034433089, 1.0835428, -0.025021957;0.28152198, 0.16100603, 0.91920781]
Initial camera intrinsics #2:
K:
[534.6674906996568, 0, 474.5;0, 534.6674906996568, 316;0, 0, 1]
R:
[1.001171, -0.085758291, -0.64530683;0.010103324, 1.0520245, -0.030576767;0.15743911, 0.12035993, 1]
Initial camera intrinsics #3:
K:
[534.6674906996568, 0, 474.5;0, 534.6674906996568, 316;0, 0, 1]
R:
[1, 0, 0;0, 1, 0;0, 0, 1]
Initial camera intrinsics #4:
K:
[534.6674906996568, 0, 474.5;0, 534.6674906996568, 316;0, 0, 1]
R:
[0.8474561, 0.028589081, 0.75133896;-0.0014587968, 0.92028928, 0.033205934;-0.17483309, 0.018777205, 0.84592116]
Initial camera intrinsics #5:
K:
[534.6674906996568, 0, 474.5;0, 534.6674906996568, 316;0, 0, 1]
R:
[0.60283858, 0.069275051, 1.2121853;-0.014153662, 0.85474133, 0.014057174;-0.29529575, 0.053770453, 0.61932623]
Initial camera intrinsics #6:
K:
[534.6674906996568, 0, 474.5;0, 534.6674906996568, 316;0, 0, 1]
R:
[0.41477469, 0.075901195, 1.4396564;-0.015423983, 0.82344943, 0.0061162044;-0.35168326, 0.055747174, 0.42653102]
Camera #1:
K:
[1068.953598931666, 0, 474.5;0, 1068.953598931666, 316;0, 0, 1]
R:
[0.84266716, -0.010490002, -0.53833258;0.004485324, 0.99991232, -0.01246338;0.53841609, 0.0080878884, 0.84264034]
Camera #2:
K:
[1064.878323247434, 0, 474.5;0, 1064.878323247434, 316;0, 0, 1]
R:
[0.95117813, -0.015436338, -0.3082563;0.01137107, 0.99982315, -0.014980057;0.308433, 0.010743499, 0.95118535]
Camera #3:
K:
[1065.382193682081, 0, 474.5;0, 1065.382193682081, 316;0, 0, 1]
R:
[1, -1.6298145e-09, 0;-1.5716068e-09, 1, 0;0, 0, 1]
Camera #4:
K:
[1067.611537959627, 0, 474.5;0, 1067.611537959627, 316;0, 0, 1]
R:
[0.91316396, -7.9067249e-06, 0.40759254;-0.0075879274, 0.99982637, 0.017019274;-0.4075219, -0.018634165, 0.91300529]
Camera #5:
K:
[1080.708135180496, 0, 474.5;0, 1080.708135180496, 316;0, 0, 1]
R:
[0.70923853, 0.0025724203, 0.70496398;-0.0098195076, 0.99993235, 0.0062302947;-0.70490021, -0.01134116, 0.70921582]
Camera #6:
K:
[1080.90412660159, 0, 474.5;0, 1080.90412660159, 316;0, 0, 1]
R:
[0.49985889, 3.5938341e-05, 0.86610687;-0.00682831, 0.99996907, 0.0038993564;-0.86607999, -0.0078631733, 0.49984369]
Warping images (auxiliary)...
Warping images, time: 0.0791121 sec
Compensating exposure...
Compensating exposure, time: 0.72288 sec
Finding seams...
Finding seams, time: 3.09237 sec
Compositing...
Compositing image #1
Multi-band blender, number of bands: 8
Compositing image #2
Compositing image #3
Compositing image #4
Compositing image #5
Compositing image #6
Compositing, time: 13.7766 sec
Finished, total time: 29.4535 sec

The input images boat1.jpg, boat2.jpg, boat3.jpg, boat4.jpg, boat5.jpg and boat6.jpg are shown below (they can be found under the OpenCV installation directory, e.g. D:\OpenCV4.4\opencv_extra-master\testdata\stitching)

The resulting panorama:

With warp_type set to "plane", the result looks like this:

With warp_type set to "fisheye", the result looks like this (rotated 90°):

The other parameters can be adjusted to suit your needs; if you want to implement this yourself, study the stitching steps in detail first and then optimize from there.

For more OpenCV, Halcon and related learning content, follow the WeChat official account: OpenCV与AI深度学习
