2.1 Literature (not yet read)
2.2 Dataset


Decompress the downloaded dataset with bzip2 -d xxx to obtain the data files we need.
Plan: after downloading and decompressing, we need to read the dataset's contents and visualize them. The data in the dataset are noise-free, so to simulate what would actually be observed in a real environment, noise must be added to the camera extrinsics, the landmarks, and the observations. Also, to keep the BA numerically stable and avoid extreme cases where the problem contains very large values or large offsets, we normalize the raw data with the Normalize() function (normalization here means moving the center of all landmarks to the origin and then rescaling by a suitable scale). We then run the BA optimization, write the optimized result to a file, and visualize it.
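As a minimal sketch of this whole pipeline (assuming the BALProblem class and the SolveBA() function described below; the usage string is just illustrative):

#include <iostream>

int main(int argc, char **argv) {
    if (argc != 2) {
        std::cout << "usage: bundle_adjustment_g2o bal_data.txt" << std::endl;
        return 1;
    }
    BALProblem bal_problem(argv[1]);            // 1> read the BAL file
    bal_problem.Normalize();                    // 2> center and rescale the data
    bal_problem.Perturb(0.1, 0.5, 0.5);         // 3> add noise to rotation / translation / points
    bal_problem.WriteToPLYFile("initial.ply");  // 4> dump the noisy initial state
    SolveBA(bal_problem);                       // 5> run the g2o BA
    bal_problem.WriteToPLYFile("final.ply");    // 6> dump the optimized state
    return 0;
}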
First the BAL dataset has to be read, so we create a class responsible for reading and writing the data, and also for normalizing it and adding noise.
1> Create a BALProblem class; constructing an object reads the dataset contents through its constructor: BALProblem bal_problem(argv[1]);

    explicit BALProblem(const std::string &filename, bool use_quaternions = false);  // "explicit" may only appear on the constructor declaration inside the class. A constructor marked explicit cannot take part in implicit type conversions; the type can only be converted explicitly.

Inside the constructor, fopen() is called to open the file:

FILE *fptr = fopen(filename.c_str(), "r");
if (fptr == NULL) {
    std::cerr << "Error: unable to open file " << filename;
    return;
}

All dataset contents are then fetched through the file pointer fptr, stored, and processed. We first need to understand the storage layout of a BAL data file, so the reading and storing can be done in a targeted way.
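For reference, the layout of a BAL data file, as documented on the BAL project page, is:

<num_cameras> <num_points> <num_observations>
<camera_index_1> <point_index_1> <x_1> <y_1>
...
<camera_index_k> <point_index_k> <x_k> <y_k>
<camera and point parameters, one value per line:
 9 per camera (angle-axis rotation, translation, f, k1, k2),
 followed by 3 per point (x, y, z)>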

Outside the class, a function template FscanfOrDie() (template<typename T>) is defined and called in the BALProblem() constructor to read values from the file. FscanfOrDie(fptr, "%d", &num_cameras_); FscanfOrDie(fptr, "%d", &num_points_); FscanfOrDie(fptr, "%d", &num_observations_); read the three values on the first line of the dataset (the number of cameras, the number of landmarks, and the number of observations).

template<typename T>
void FscanfOrDie(FILE *fptr, const char *format, T *value) {
    int num_scanned = fscanf(fptr, format, value);
    if (num_scanned != 1)
        std::cerr << "Invalid UW data file. ";
}

Based on the numbers of cameras, landmarks, and observations, allocate arrays of the corresponding types to store the values about to be read: point_index_ = new int[num_observations_]; camera_index_ = new int[num_observations_]; observations_ = new double[2 * num_observations_];
Once the numbers of cameras and landmarks are known, we can also allocate the space for the variables to be optimized: each camera has 9 parameters (3 for rotation, 3 for translation, the focal length f, and the distortion coefficients k1, k2), and each landmark has 3 (its spatial position).
So num_parameters_ = 9 * num_cameras_ + 3 * num_points_; values are needed in total, stored in parameters_ = new double[num_parameters_];
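To illustrate the layout of parameters_ (consistent with the mutable_cameras()/mutable_points() accessors shown in step 2 below):

// parameters_ layout, 9 doubles per camera followed by 3 per point:
// [ cam0: r0 r1 r2 t0 t1 t2 f k1 k2 | cam1: ... | ... | pt0: x y z | pt1: ... ]
//   ^ parameters_                                        ^ parameters_ + 9 * num_cameras_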
point_index_, camera_index_, and observations_ hold the base addresses of these three arrays; the data are written at the corresponding pointer offsets:

for (int i = 0; i < num_observations_; ++i) {
    FscanfOrDie(fptr, "%d", camera_index_ + i);
    FscanfOrDie(fptr, "%d", point_index_ + i);
    for (int j = 0; j < 2; ++j) {
        FscanfOrDie(fptr, "%lf", observations_ + 2 * i + j);
    }
}

After that, what remains in the dataset is the block of variables to be optimized, which we keep reading with the same file pointer:

for (int i = 0; i < num_parameters_; ++i) {
    FscanfOrDie(fptr, "%lf", parameters_ + i);
}

Finally, close the file with fclose(fptr). The constructor's second parameter chooses whether quaternions are used; if so, the rotation part of the camera extrinsics in the dataset must be converted to a quaternion via AngleAxisToQuaternion(original_cursor, quaternion_cursor) before being stored:

use_quaternions_ = use_quaternions;
if (use_quaternions) {
    // Switch the angle-axis rotations to quaternions.
    num_parameters_ = 10 * num_cameras_ + 3 * num_points_;
    double *quaternion_parameters = new double[num_parameters_];
    double *original_cursor = parameters_;
    double *quaternion_cursor = quaternion_parameters;
    for (int i = 0; i < num_cameras_; ++i) {
        AngleAxisToQuaternion(original_cursor, quaternion_cursor);
        quaternion_cursor += 4;
        original_cursor += 3;
        for (int j = 4; j < 10; ++j) {
            *quaternion_cursor++ = *original_cursor++;
        }
    }
    // Copy the rest of the points.
    for (int i = 0; i < 3 * num_points_; ++i) {
        *quaternion_cursor++ = *original_cursor++;
    }
    // Swap in the quaternion parameters.
    delete[] parameters_;
    parameters_ = quaternion_parameters;
}
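AngleAxisToQuaternion itself is not listed in this post; a sketch consistent with the Ceres-style rotation.h header that slambook2 ships (quaternion stored as w, x, y, z):

#include <cmath>
#include <limits>

// Convert an angle-axis vector (3 values) to a quaternion (w, x, y, z).
template<typename T>
inline void AngleAxisToQuaternion(const T *angle_axis, T *quaternion) {
    const T &a0 = angle_axis[0], &a1 = angle_axis[1], &a2 = angle_axis[2];
    const T theta_squared = a0 * a0 + a1 * a1 + a2 * a2;
    if (theta_squared > T(std::numeric_limits<double>::epsilon())) {
        const T theta = sqrt(theta_squared);
        const T half_theta = theta * T(0.5);
        const T k = sin(half_theta) / theta;
        quaternion[0] = cos(half_theta);
        quaternion[1] = a0 * k;
        quaternion[2] = a1 * k;
        quaternion[3] = a2 * k;
    } else {
        // Small-angle limit: sin(theta/2)/theta -> 1/2.
        const T k(0.5);
        quaternion[0] = T(1.0);
        quaternion[1] = a0 * k;
        quaternion[2] = a1 * k;
        quaternion[3] = a2 * k;
    }
}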

At this point, reading the file is complete.
2> Second, normalize the data just read: void BALProblem::Normalize(). The start address of the landmark block is obtained with double *points = mutable_points();

/// start address of the camera parameters
double *mutable_cameras() { return parameters_; }
/// start address of the landmark parameters
double *mutable_points() { return parameters_ + camera_block_size() * num_cameras_; }

Then, for each of the three coordinate axes, sort the landmark values in ascending order and take the middle (median) value:

std::vector<double> tmp(num_points_);
Eigen::Vector3d median;
double *points = mutable_points();
for (int i = 0; i < 3; ++i) {
    for (int j = 0; j < num_points_; ++j) {
        tmp[j] = points[3 * j + i];
    }
    median(i) = Median(&tmp);
}
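Median() is not listed in this post; a sketch using std::nth_element (consistent with slambook2's common.cpp), which only partially sorts the vector but is enough to place the middle element:

#include <algorithm>
#include <vector>

// Return the middle element of *data (partially reorders the vector in place).
double Median(std::vector<double> *data) {
    int n = data->size();
    std::vector<double>::iterator mid_point = data->begin() + n / 2;
    std::nth_element(data->begin(), mid_point, data->end());
    return *mid_point;
}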

Next, subtract the median point from every landmark and compute each landmark's L1 distance to it:

for (int i = 0; i < num_points_; ++i) {
    VectorRef point(points + 3 * i, 3);
    tmp[i] = (point - median).lpNorm<1>();
}

After this, all landmark coordinates are shifted and rescaled. The scale factor is 100 divided by the median absolute deviation, where median_absolute_deviation is Median(&tmp), the median of the L1 distances just computed: const double scale = 100.0 / median_absolute_deviation; The rescaled landmarks are then:

// X = scale * (X - median)
for (int i = 0; i < num_points_; ++i) {
    VectorRef point(points + 3 * i, 3);
    point = scale * (point - median);
}

For the cameras, the scaling only affects the translation part. Note that the stored extrinsics are the world-to-camera transform, so the camera center in the world frame is c = -R^T t; this inversion is where the multiplication by -1 in the helper below comes from.

double *cameras = mutable_cameras();
double angle_axis[3];
double center[3];
for (int i = 0; i < num_cameras_; ++i) {
    double *camera = cameras + camera_block_size() * i;
    CameraToAngelAxisAndCenter(camera, angle_axis, center);
    // center = scale * (center - median)
    VectorRef(center, 3) = scale * (VectorRef(center, 3) - median);
    AngleAxisAndCenterToCamera(angle_axis, center, camera);
}
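CameraToAngelAxisAndCenter (the "Angel" spelling is inherited from the original code) extracts the rotation as an angle-axis vector and computes the camera center c = -R^T t in the world frame. A sketch consistent with slambook2's common.cpp, assuming its VectorRef/ConstVectorRef Eigen::Map typedefs and the AngleAxisRotatePoint helper from rotation.h:

void BALProblem::CameraToAngelAxisAndCenter(const double *camera,
                                            double *angle_axis,
                                            double *center) const {
    VectorRef angle_axis_ref(angle_axis, 3);
    if (use_quaternions_) {
        QuaternionToAngleAxis(camera, angle_axis);
    } else {
        angle_axis_ref = ConstVectorRef(camera, 3);
    }
    // c = -R^T t: rotate t by the inverse rotation, then negate.
    // The translation t starts 6 doubles before the end of the camera block.
    Eigen::VectorXd inverse_rotation = -angle_axis_ref;
    AngleAxisRotatePoint(inverse_rotation.data(),
                         camera + camera_block_size() - 6,
                         center);
    VectorRef(center, 3) *= -1.0;
}

AngleAxisAndCenterToCamera is the inverse operation, recomputing t = -Rc from the (rescaled) center.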

This completes the Normalize step.
3> We then add noise to the dataset so that it matches observations in a real-world setting: bal_problem.Perturb(0.1, 0.5, 0.5);

void BALProblem::Perturb(const double rotation_sigma,
                         const double translation_sigma,
                         const double point_sigma) {
    assert(point_sigma >= 0.0);
    assert(rotation_sigma >= 0.0);
    assert(translation_sigma >= 0.0);
    double *points = mutable_points();
    if (point_sigma > 0) {
        for (int i = 0; i < num_points_; ++i) {
            PerturbPoint3(point_sigma, points + 3 * i);
        }
    }
    for (int i = 0; i < num_cameras_; ++i) {
        double *camera = mutable_cameras() + camera_block_size() * i;
        double angle_axis[3];
        double center[3];
        // Perturb the rotation of the camera in the angle-axis representation
        CameraToAngelAxisAndCenter(camera, angle_axis, center);
        if (rotation_sigma > 0.0) {
            PerturbPoint3(rotation_sigma, angle_axis);
        }
        AngleAxisAndCenterToCamera(angle_axis, center, camera);
        if (translation_sigma > 0.0)
            PerturbPoint3(translation_sigma, camera + camera_block_size() - 6);
    }
}

void PerturbPoint3(const double sigma, double *point) {
    for (int i = 0; i < 3; ++i)
        point[i] += RandNormal() * sigma;
}

// Marsaglia polar method: draw a standard-normal sample from two uniforms.
inline double RandNormal() {
    double x1, x2, w;
    do {
        x1 = 2.0 * RandDouble() - 1.0;
        x2 = 2.0 * RandDouble() - 1.0;
        w = x1 * x1 + x2 * x2;
    } while (w >= 1.0 || w == 0.0);
    w = sqrt((-2.0 * log(w)) / w);
    return x1 * w;
}
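RandNormal() relies on RandDouble(); a sketch of the uniform generator, assuming the rand()-based helper from slambook2's random.h:

#include <cstdlib>

// uniform random number in [0, 1]
inline double RandDouble() {
    double r = static_cast<double>(rand());
    return r / RAND_MAX;
}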

4> Write the processed data to a PLY file, following the PLY format requirements: bal_problem.WriteToPLYFile("initial.ply");

// Write the problem to a PLY file for inspection in Meshlab or CloudCompare
void BALProblem::WriteToPLYFile(const std::string &filename) const {
    std::ofstream of(filename.c_str());
    of << "ply"
       << '\n' << "format ascii 1.0"
       << '\n' << "element vertex " << num_cameras_ + num_points_
       << '\n' << "property float x"
       << '\n' << "property float y"
       << '\n' << "property float z"
       << '\n' << "property uchar red"
       << '\n' << "property uchar green"
       << '\n' << "property uchar blue"
       << '\n' << "end_header" << std::endl;
    // Export extrinsic data (i.e. camera centers) as green points.
    double angle_axis[3];
    double center[3];
    for (int i = 0; i < num_cameras(); ++i) {
        const double *camera = cameras() + camera_block_size() * i;
        CameraToAngelAxisAndCenter(camera, angle_axis, center);
        of << center[0] << ' ' << center[1] << ' ' << center[2]
           << " 0 255 0" << '\n';  // separate the coordinates from the RGB color with a space
    }
    // Export the structure (i.e. 3D Points) as white points.
    const double *points = parameters_ + camera_block_size() * num_cameras_;
    for (int i = 0; i < num_points(); ++i) {
        const double *point = points + i * point_block_size();
        for (int j = 0; j < point_block_size(); ++j) {
            of << point[j] << ' ';
        }
        of << "255 255 255\n";
    }
    of.close();
}

5> Run the BA optimization: SolveBA(bal_problem);
First define the block solver: in each error term, the camera variable has 9 dimensions and the landmark has 3:

typedef g2o::BlockSolver<g2o::BlockSolverTraits<9, 3>> BlockSolverType;

Then build the linear solver:

 typedef g2o::LinearSolverCSparse<BlockSolverType::PoseMatrixType> LinearSolverType;

With these two steps done, we choose the method that computes the increment (a descent strategy such as Levenberg-Marquardt or Gauss-Newton):

auto solver = new g2o::OptimizationAlgorithmLevenberg(g2o::make_unique<BlockSolverType>(g2o::make_unique<LinearSolverType>()));

Having chosen and constructed the optimization algorithm, we build the corresponding graph model (i.e. the vertices and edges to be added to the optimizer):

g2o::SparseOptimizer optimizer;  // the graph model
optimizer.setAlgorithm(solver);  // set the solver
optimizer.setVerbose(true);      // enable verbose output

Next, add the camera variables and landmark variables to be optimized as vertices of the graph:

// add the camera vertices (optimization variables) to the graph
for (int i = 0; i < bal_problem.num_cameras(); ++i) {
    VertexPoseAndIntrinsics *v = new VertexPoseAndIntrinsics();
    double *camera = cameras + camera_block_size * i;
    v->setId(i);
    v->setEstimate(PoseAndIntrinsics(camera));  // set the initial estimate of the variable
    optimizer.addVertex(v);
    vertex_pose_intrinsics.push_back(v);
}
// add the landmark vertices to the graph
for (int i = 0; i < bal_problem.num_points(); ++i) {
    VertexPoint *v = new VertexPoint();
    double *point = points + point_block_size * i;
    v->setId(i + bal_problem.num_cameras());
    v->setEstimate(Vector3d(point[0], point[1], point[2]));  // set the initial estimate of the variable
    // In BA, g2o requires manually marking which vertices are to be marginalized,
    // i.e. which part of the sparse BA structure to eliminate.
    v->setMarginalized(true);  // exploit the sparsity of BA: marginalize the landmarks first to solve
                               // for the camera increments, then back-substitute for the landmark
                               // increments, which speeds up the solve
    optimizer.addVertex(v);
    vertex_points.push_back(v);
}

Then add the constraints, i.e. the edges, to the graph; each edge carries a robust kernel (Huber), which reduces the influence of outlier observations on the optimization.
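For reference, the Huber kernel replaces the squared error $e^2$ by

$\rho(e) = \begin{cases} e^2, & |e| \le \delta \\ 2\delta\lvert e\rvert - \delta^2, & |e| > \delta \end{cases}$

so that beyond the threshold $\delta$ the cost grows only linearly rather than quadratically.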

// add the edges to the graph
for (int i = 0; i < bal_problem.num_observations(); ++i) {
    EdgeProjection *edge = new EdgeProjection;
    edge->setVertex(0, vertex_pose_intrinsics[bal_problem.camera_index()[i]]);  // first vertex: the camera
    edge->setVertex(1, vertex_points[bal_problem.point_index()[i]]);            // second vertex: the landmark
    edge->setMeasurement(Vector2d(observations[2 * i + 0], observations[2 * i + 1]));  // the observed value
    edge->setInformation(Matrix2d::Identity());           // information matrix, inverse of the covariance
    edge->setRobustKernel(new g2o::RobustKernelHuber());  // robust kernel
    optimizer.addEdge(edge);
}

Finally we configure and run the optimization, setting the number of iterations:

// run the optimization
optimizer.initializeOptimization();
optimizer.optimize(40);

6> Finally, to visualize the optimized result, write the optimized values back into the corresponding storage so they can be exported to a PLY file: bal_problem.WriteToPLYFile("final.ply");

// set to bal problem
for (int i = 0; i < bal_problem.num_cameras(); ++i) {
    double *camera = cameras + camera_block_size * i;
    auto vertex = vertex_pose_intrinsics[i];
    auto estimate = vertex->estimate();
    estimate.set_to(camera);
}
for (int i = 0; i < bal_problem.num_points(); ++i) {
    double *point = points + point_block_size * i;
    auto vertex = vertex_points[i];
    for (int k = 0; k < 3; ++k) point[k] = vertex->estimate()[k];
}
/// structure holding the pose and intrinsics
struct PoseAndIntrinsics {
    PoseAndIntrinsics() {}

    /// set from a given data address
    explicit PoseAndIntrinsics(double *data_addr) {
        rotation = SO3d::exp(Vector3d(data_addr[0], data_addr[1], data_addr[2]));
        translation = Vector3d(data_addr[3], data_addr[4], data_addr[5]);
        focal = data_addr[6];
        k1 = data_addr[7];
        k2 = data_addr[8];
    }

    /// write the estimate back to memory
    void set_to(double *data_addr) {
        auto r = rotation.log();
        for (int i = 0; i < 3; ++i) data_addr[i] = r[i];
        for (int i = 0; i < 3; ++i) data_addr[i + 3] = translation[i];
        data_addr[6] = focal;
        data_addr[7] = k1;
        data_addr[8] = k2;
    }

    SO3d rotation;
    Vector3d translation = Vector3d::Zero();
    double focal = 0;
    double k1 = 0, k2 = 0;
};

/// vertex for the pose plus camera intrinsics, 9-dimensional: the first 3 are so(3), then t, f, k1, k2
class VertexPoseAndIntrinsics : public g2o::BaseVertex<9, PoseAndIntrinsics> {
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

    VertexPoseAndIntrinsics() {}

    virtual void setToOriginImpl() override {
        _estimate = PoseAndIntrinsics();
    }

    virtual void oplusImpl(const double *update) override {
        _estimate.rotation = SO3d::exp(Vector3d(update[0], update[1], update[2])) * _estimate.rotation;
        _estimate.translation += Vector3d(update[3], update[4], update[5]);
        _estimate.focal += update[6];
        _estimate.k1 += update[7];
        _estimate.k2 += update[8];
    }

    /// project a point using the current estimate
    Vector2d project(const Vector3d &point) {
        Vector3d pc = _estimate.rotation * point + _estimate.translation;
        pc = -pc / pc[2];  // BAL's camera model looks down the negative z-axis, hence the minus sign
        double r2 = pc.squaredNorm();
        double distortion = 1.0 + r2 * (_estimate.k1 + _estimate.k2 * r2);
        return Vector2d(_estimate.focal * distortion * pc[0],
                        _estimate.focal * distortion * pc[1]);
    }

    virtual bool read(istream &in) {}
    virtual bool write(ostream &out) const {}
};

class VertexPoint : public g2o::BaseVertex<3, Vector3d> {
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

    VertexPoint() {}

    virtual void setToOriginImpl() override {
        _estimate = Vector3d(0, 0, 0);  // vertex reset function: just set the estimate to zero
    }

    // vertex update function, the increment step of the optimization: x_{k+1} = x_k + dx
    virtual void oplusImpl(const double *update) override {
        _estimate += Vector3d(update[0], update[1], update[2]);
    }

    virtual bool read(istream &in) {}
    virtual bool write(ostream &out) const {}
};

class EdgeProjection :
    public g2o::BaseBinaryEdge<2, Vector2d, VertexPoseAndIntrinsics, VertexPoint> {
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

    // error computation: take the current estimates of the connected vertices,
    // project, and compare against the measurement
    virtual void computeError() override {
        auto v0 = (VertexPoseAndIntrinsics *) _vertices[0];
        auto v1 = (VertexPoint *) _vertices[1];
        auto proj = v0->project(v1->estimate());
        _error = proj - _measurement;
    }

    // use numeric derivatives
    virtual bool read(istream &in) {}
    virtual bool write(ostream &out) const {}
};

Results before and after the optimization:
The dataset used is problem-16-22106-pre.txt.



3.1
(1) $\sum_{W} \lVert I(p) - I(\pi(KTP)) \rVert^2$
(2) The camera pose has 6 dimensions, the landmark point 3;
(3) The 16 x 6 Jacobian of the error with respect to the camera pose is: the image gradient, i.e. the derivative of the intensity with respect to the pixel coordinates (16 x 2), times the derivative of the pixel coordinates with respect to the transformed 3D point (2 x 3), times the derivative of the transformed 3D point with respect to the pose increment (3 x 6); that is, 16x2 times 2x3 times 3x6 gives 16x6.
The 16 x 3 Jacobian of the error with respect to the landmark is: the image gradient (16 x 2) times the derivative of the pixel coordinates with respect to the landmark increment (2 x 3); strictly, this chain also passes through the rotation R (3 x 3), cf. the commented-out line in the code of 3.2.
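Written out, with $q = TP$ the point in the camera frame, $u$ the pixel coordinates, and $\delta\xi$ the pose increment on $\mathfrak{se}(3)$, and following the sign convention of the code in 3.2 (the error is the reference intensity minus the sampled intensity):

$\frac{\partial e}{\partial \delta\xi} = -\frac{\partial I}{\partial u}\,\frac{\partial u}{\partial q}\,\frac{\partial q}{\partial \delta\xi}$, with dimensions $(16\times 2)(2\times 3)(3\times 6) = 16\times 6$;

$\frac{\partial e}{\partial P} = -\frac{\partial I}{\partial u}\,\frac{\partial u}{\partial q}\,R$, with dimensions $(16\times 2)(2\times 3)(3\times 3) = 16\times 3$.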
3.2
Implementation:

//
// Created by xiang on 1/4/18.
// this program shows how to perform direct bundle adjustment
//
#include <iostream>
#include <g2o/core/base_unary_edge.h>
#include <g2o/core/base_binary_edge.h>
#include <g2o/core/base_vertex.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/solvers/dense/linear_solver_dense.h>
#include <g2o/core/robust_kernel.h>
#include <g2o/core/robust_kernel_impl.h>
#include <g2o/types/sba/types_six_dof_expmap.h>

#include <Eigen/Core>
#include <sophus/se3.hpp>
#include <opencv2/opencv.hpp>
#include <pangolin/pangolin.h>
#include <boost/format.hpp>

using namespace std;
using namespace Eigen;

typedef vector<Sophus::SE3d, Eigen::aligned_allocator<Sophus::SE3d>> VecSE3;          // container for the camera poses
typedef vector<Eigen::Vector3d, Eigen::aligned_allocator<Eigen::Vector3d>> VecVec3d;  // container for the 3D landmarks

// global variables
string pose_file = "../poses.txt";
string points_file = "../points.txt";

// intrinsics
float fx = 277.34;
float fy = 291.402;
float cx = 312.234;
float cy = 239.777;

// bilinear interpolation: read the image intensity at a sub-pixel location
inline float GetPixelValue(const cv::Mat &img, float x, float y) {
    uchar *data = &img.data[int(y) * img.step + int(x)];
    float xx = x - floor(x);
    float yy = y - floor(y);
    return float((1 - xx) * (1 - yy) * data[0] +
                 xx * (1 - yy) * data[1] +
                 (1 - xx) * yy * data[img.step] +
                 xx * yy * data[img.step + 1]);
}
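For reference, with $x_0 = \lfloor x \rfloor$, $y_0 = \lfloor y \rfloor$ and fractional parts $\Delta x = x - x_0$, $\Delta y = y - y_0$, the value computed above is

$I(x, y) \approx (1-\Delta x)(1-\Delta y)\,I(x_0, y_0) + \Delta x(1-\Delta y)\,I(x_0+1, y_0) + (1-\Delta x)\Delta y\,I(x_0, y_0+1) + \Delta x \Delta y\,I(x_0+1, y_0+1)$.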
// g2o vertex that uses Sophus::SE3d as the pose type
class VertexSophus : public g2o::BaseVertex<6, Sophus::SE3d> {
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW

    VertexSophus() {}
    ~VertexSophus() {}

    virtual void setToOriginImpl() {
        _estimate = Sophus::SE3d();
    }

    // project a 3D point to pixel coordinates using the current estimate
    Vector2f project(const Vector3d &point) {
        // KTP: read the SE(3), transform the point, project to pixel coordinates
        Sophus::SE3d Tcw(estimate());
        Vector3d point_cam3d = Tcw * point;
        float u = fx * point_cam3d[0] / point_cam3d[2] + cx;
        float v = fy * point_cam3d[1] / point_cam3d[2] + cy;
        return Vector2f(u, v);
    }

    // update: compute the se(3) increment, then map it back to SE(3)
    virtual void oplusImpl(const double *update_) {
        Eigen::Map<const Eigen::Matrix<double, 6, 1>> update(update_);
        // store the estimate, equivalent to _estimate = Sophus::SE3d::exp(update) * _estimate;
        setEstimate(Sophus::SE3d::exp(update) * estimate());
    }

    virtual bool read(std::istream &is) {}
    virtual bool write(std::ostream &os) const {}
};
// TODO edge of projection error, implement it
// 16x1 error: the photometric differences over a 4x4 patch of 16 pixels
typedef Eigen::Matrix<double, 16, 1> Vector16d;

// one vertex is g2o's built-in SBA point type, the other is our custom pose vertex
class EdgeDirectProjection
    : public g2o::BaseBinaryEdge<16, Vector16d, g2o::VertexSBAPointXYZ, VertexSophus> {
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

    // edge constructor
    EdgeDirectProjection(float *color, cv::Mat &target) {
        this->origColor = color;
        this->targetImg = target;
    }

    ~EdgeDirectProjection() {}

    virtual void computeError() override {
        g2o::VertexSBAPointXYZ *vertexPw = static_cast<g2o::VertexSBAPointXYZ *>(_vertices[0]);
        VertexSophus *vertexTcw = static_cast<VertexSophus *>(_vertices[1]);
        Vector2f proj = vertexTcw->project(vertexPw->estimate());
        if (proj[0] < -2 || proj[0] + 2 > targetImg.cols || proj[1] < -2 || proj[1] + 2 > targetImg.rows) {
            // the projected patch falls outside the target image: disable this edge
            this->setLevel(1);
            for (int i = 0; i < 16; ++i) _error[i] = 0;
        } else {
            for (int i = -2; i < 2; ++i) {
                for (int j = -2; j < 2; ++j) {
                    int num = 4 * i + j + 10;  // maps the 4x4 patch offsets to indices 0..15
                    // _measurement is a 16x1 vector, so _error is 16x1 as well;
                    // proj[0], proj[1] is the projected pixel position
                    _error[num] = origColor[num] - GetPixelValue(targetImg, proj[0] + i, proj[1] + j);
                }
            }
        }
    }

    virtual void linearizeOplus() override {
        if (level() == 1) {
            // the error is a 16x1 patch residual, so the Jacobian w.r.t. the 3-dof
            // landmark is 16x3, and w.r.t. the 6-dof camera pose it is 16x6
            _jacobianOplusXi = Eigen::Matrix<double, 16, 3>::Zero();
            _jacobianOplusXj = Eigen::Matrix<double, 16, 6>::Zero();
            return;
        }
        g2o::VertexSBAPointXYZ *vertexPw = static_cast<g2o::VertexSBAPointXYZ *>(_vertices[0]);
        VertexSophus *vertexTcw = static_cast<VertexSophus *>(_vertices[1]);
        Vector2f proj = vertexTcw->project(vertexPw->estimate());
        double X = vertexPw->estimate()[0], Y = vertexPw->estimate()[1], Z = vertexPw->estimate()[2];
        double inv_z = 1.0 / Z, inv_z2 = inv_z * inv_z;
        float u = proj[0], v = proj[1];
        Eigen::Matrix<double, 1, 2> J_I2u;  // image gradient
        Eigen::Matrix<double, 2, 3> J_uq;   // derivative of the projection w.r.t. the 3D point
        Eigen::Matrix<double, 3, 6> J_qxi = Eigen::Matrix<double, 3, 6>::Zero();  // derivative of the 3D point w.r.t. the Lie algebra
        J_uq(0, 0) = fx * inv_z;
        J_uq(0, 1) = 0;
        J_uq(0, 2) = -(fx * X) * inv_z2;
        J_uq(1, 0) = 0;
        J_uq(1, 1) = fy * inv_z;
        J_uq(1, 2) = -fy * Y * inv_z2;
        J_qxi(0, 0) = 1;
        J_qxi(0, 4) = Z;
        J_qxi(0, 5) = -Y;
        J_qxi(1, 1) = 1;
        J_qxi(1, 3) = -Z;
        J_qxi(1, 5) = X;
        J_qxi(2, 2) = 1;
        J_qxi(2, 3) = -Y;
        J_qxi(2, 4) = -X;
        for (int x = -2; x < 2; ++x)
            for (int y = -2; y < 2; ++y) {
                int num = 4 * x + y + 10;
                // central differences for the image gradient
                J_I2u(0, 0) = (1.0 / 2) * (GetPixelValue(targetImg, u + 1 + x, v + y) - GetPixelValue(targetImg, u - 1 + x, v + y));
                J_I2u(0, 1) = (1.0 / 2) * (GetPixelValue(targetImg, u + x, v + 1 + y) - GetPixelValue(targetImg, u + x, v - 1 + y));
                // _jacobianOplusXi.block<1, 3>(num, 0) = -J_I2u * J_uq * vertexTcw->estimate().rotationMatrix();
                _jacobianOplusXi.block<1, 3>(num, 0) = -J_I2u * J_uq;          // 1x2 * 2x3
                _jacobianOplusXj.block<1, 6>(num, 0) = -J_I2u * J_uq * J_qxi;  // 1x2 * 2x3 * 3x6
            }
    }

    virtual bool read(istream &in) {}
    virtual bool write(ostream &out) const {}

private:
    cv::Mat targetImg;           // the target image
    float *origColor = nullptr;  // 16 floats, the color of this point
};
// plot the poses and points, needs Pangolin
void Draw(const VecSE3 &poses, const VecVec3d &points, string title);

int main(int argc, char **argv) {
    // read poses and points
    VecSE3 poses;
    VecVec3d points;
    ifstream fin(pose_file);  // read the camera poses (7 images, 7 poses)
    while (!fin.eof()) {
        double timestamp = 0;
        fin >> timestamp;
        if (timestamp == 0) break;
        double data[7];
        for (auto &d: data) fin >> d;
        // build the SE3 from a quaternion and a translation vector
        poses.push_back(Sophus::SE3d(
            Eigen::Quaterniond(data[6], data[3], data[4], data[5]),
            Eigen::Vector3d(data[0], data[1], data[2])));
        if (!fin.good()) break;
    }
    fin.close();

    // read the 3D landmarks (XYZ)
    vector<float *> color;
    fin.open(points_file);
    while (!fin.eof()) {
        double xyz[3] = {0};
        for (int i = 0; i < 3; i++) fin >> xyz[i];
        if (xyz[0] == 0) break;
        points.push_back(Eigen::Vector3d(xyz[0], xyz[1], xyz[2]));
        float *c = new float[16];
        for (int i = 0; i < 16; i++) fin >> c[i];
        color.push_back(c);
        if (fin.good() == false) break;
    }
    fin.close();
    cout << "poses: " << poses.size() << ", points: " << points.size() << endl;

    vector<cv::Mat> images;
    boost::format fmt("../%d.png");
    for (int i = 0; i < 7; i++) {
        images.push_back(cv::imread((fmt % i).str(), 0));
    }

    typedef g2o::BlockSolver<g2o::BlockSolverTraits<6, 3>> DirectBlock;  // the pose increment is 6x1
    typedef g2o::LinearSolverDense<DirectBlock::PoseMatrixType> LinearSolverType;
    // DirectBlock::LinearSolverType *linearSolver = new g2o::LinearSolverDense<DirectBlock::PoseMatrixType>();
    // DirectBlock *solver_ptr = new DirectBlock(linearSolver);
    auto solver = new g2o::OptimizationAlgorithmLevenberg(
        g2o::make_unique<DirectBlock>(g2o::make_unique<LinearSolverType>()));
    g2o::SparseOptimizer optimizer;
    optimizer.setAlgorithm(solver);
    optimizer.setVerbose(true);

    // add vertices and edges into the graph optimizer
    vector<g2o::VertexSBAPointXYZ *> vertex_points;  // temporaries for the 3D landmark vertices
    vector<VertexSophus *> vertex_pose;              // temporaries for the pose vertices
    // add the landmark vertices
    for (int i = 0; i < points.size(); ++i) {
        g2o::VertexSBAPointXYZ *v = new g2o::VertexSBAPointXYZ;
        v->setId(i);
        v->setEstimate(points[i]);
        v->setMarginalized(true);  // marginalize the landmarks
        optimizer.addVertex(v);
        vertex_points.push_back(v);
    }
    // add the pose vertices
    for (int i = 0; i < poses.size(); ++i) {
        VertexSophus *v = new VertexSophus();
        v->setId(i + points.size());
        v->setEstimate(poses[i]);
        optimizer.addVertex(v);
        vertex_pose.push_back(v);
    }
    // add the edges
    for (int c = 0; c < poses.size(); ++c)
        for (int p = 0; p < points.size(); ++p) {
            EdgeDirectProjection *edge = new EdgeDirectProjection(color[p], images[c]);
            edge->setVertex(0, dynamic_cast<g2o::VertexSBAPointXYZ *>(optimizer.vertex(p)));
            edge->setVertex(1, dynamic_cast<VertexSophus *>(optimizer.vertex(points.size() + c)));
            // edge->setMeasurement(Vector16d);
            edge->setInformation(Eigen::Matrix<double, 16, 16>::Identity());
            g2o::RobustKernelHuber *rk = new g2o::RobustKernelHuber;
            rk->setDelta(1.0);  // a squared error above delta^2 is considered an outlier
            edge->setRobustKernel(rk);
            optimizer.addEdge(edge);
        }

    Draw(poses, points, string("before"));

    // perform optimization
    optimizer.initializeOptimization(0);
    optimizer.optimize(200);

    // fetch data from the optimizer: write the optimized landmarks and poses
    // back into the original containers so the optimized result can be visualized
    for (int c = 0; c < poses.size(); ++c)
        for (int p = 0; p < points.size(); ++p) {
            points[p] = dynamic_cast<g2o::VertexSBAPointXYZ *>(optimizer.vertex(p))->estimate();
            poses[c] = dynamic_cast<VertexSophus *>(optimizer.vertex(points.size() + c))->estimate();
        }

    // plot the optimized points and poses
    Draw(poses, points, "after");

    // delete color data
    for (auto &c: color) delete[] c;
    return 0;
}

void Draw(const VecSE3 &poses, const VecVec3d &points, string title) {
    if (poses.empty() || points.empty()) {
        cerr << "parameter is empty!" << endl;
        return;
    }

    // create pangolin window and plot the trajectory
    pangolin::CreateWindowAndBind(title, 1024, 768);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    pangolin::OpenGlRenderState s_cam(
        pangolin::ProjectionMatrix(1024, 768, 500, 500, 512, 389, 0.1, 1000),
        pangolin::ModelViewLookAt(0, -0.1, -1.8, 0, 0, 0, 0.0, -1.0, 0.0));
    pangolin::View &d_cam = pangolin::CreateDisplay()
        .SetBounds(0.0, 1.0, pangolin::Attach::Pix(175), 1.0, -1024.0f / 768.0f)
        .SetHandler(new pangolin::Handler3D(s_cam));

    while (pangolin::ShouldQuit() == false) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        d_cam.Activate(s_cam);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        // draw poses as camera frusta
        float sz = 0.1;
        int width = 640, height = 480;
        for (auto &Tcw: poses) {
            glPushMatrix();
            Sophus::Matrix4f m = Tcw.inverse().matrix().cast<float>();
            glMultMatrixf((GLfloat *) m.data());
            glColor3f(1, 0, 0);
            glLineWidth(2);
            glBegin(GL_LINES);
            glVertex3f(0, 0, 0);
            glVertex3f(sz * (0 - cx) / fx, sz * (0 - cy) / fy, sz);
            glVertex3f(0, 0, 0);
            glVertex3f(sz * (0 - cx) / fx, sz * (height - 1 - cy) / fy, sz);
            glVertex3f(0, 0, 0);
            glVertex3f(sz * (width - 1 - cx) / fx, sz * (height - 1 - cy) / fy, sz);
            glVertex3f(0, 0, 0);
            glVertex3f(sz * (width - 1 - cx) / fx, sz * (0 - cy) / fy, sz);
            glVertex3f(sz * (width - 1 - cx) / fx, sz * (0 - cy) / fy, sz);
            glVertex3f(sz * (width - 1 - cx) / fx, sz * (height - 1 - cy) / fy, sz);
            glVertex3f(sz * (width - 1 - cx) / fx, sz * (height - 1 - cy) / fy, sz);
            glVertex3f(sz * (0 - cx) / fx, sz * (height - 1 - cy) / fy, sz);
            glVertex3f(sz * (0 - cx) / fx, sz * (height - 1 - cy) / fy, sz);
            glVertex3f(sz * (0 - cx) / fx, sz * (0 - cy) / fy, sz);
            glVertex3f(sz * (0 - cx) / fx, sz * (0 - cy) / fy, sz);
            glVertex3f(sz * (width - 1 - cx) / fx, sz * (0 - cy) / fy, sz);
            glEnd();
            glPopMatrix();
        }
        // points
        glPointSize(2);
        glBegin(GL_POINTS);
        for (size_t i = 0; i < points.size(); i++) {
            glColor3f(0.0, points[i][2] / 4, 1.0 - points[i][2] / 4);
            glVertex3d(points[i][0], points[i][1], points[i][2]);
        }
        glEnd();
        pangolin::FinishFrame();
        usleep(5000);  // sleep 5 ms
    }
}

CMakeLists.txt:

CMAKE_MINIMUM_REQUIRED(VERSION 3.4)
PROJECT(DirectBA)
SET(CMAKE_BUILD_TYPE "Release")
SET(CMAKE_CXX_STANDARD 14)

LIST(APPEND CMAKE_MODULE_PATH ~/Documents/test__code/g2o/cmake_modules)

#Find_Package(G2o REQUIRED)
Find_Package(Eigen3 REQUIRED)
Find_Package(Ceres REQUIRED)
Find_Package(Sophus REQUIRED)
#Find_Package(CSparse REQUIRED)
find_package(OpenCV REQUIRED)
find_package(Pangolin REQUIRED)

#SET(G2O_LIBS g2o_csparse_extension g2o_stuff g2o_core cxsparse)
SET(G2O_LIBS g2o_cli g2o_ext_freeglut_minimal g2o_simulator g2o_solver_slam2d_linear
    g2o_types_icp g2o_types_slam2d g2o_core g2o_interface g2o_solver_csparse
    g2o_solver_structure_only g2o_types_sba g2o_types_slam3d g2o_csparse_extension
    g2o_opengl_helper g2o_solver_dense g2o_stuff g2o_types_sclam2d g2o_parser
    g2o_solver_pcg g2o_types_data g2o_types_sim3 cxsparse)

SET(G2O_INCLUDE_DIRS /usr/local/include/g2o/)
include_directories(${PROJECT_SOURCE_DIR} ${G2O_INCLUDE_DIRS} ${EIGEN3_INCLUDE_DIR} ${CSPARSE_INCLUDE_DIR})
include_directories(${OpenCV_INCLUDE_DIRS} ${Pangolin_INCLUDE_DIRS})

add_executable(directBA directBA.cpp)
target_link_libraries(directBA ${G2O_LIBS} ${OpenCV_LIBS} ${Pangolin_LIBRARIES} fmt)


After 94 iterations, the camera poses and landmark positions reach their optimum.
