This article contains a lot of code and text; please forgive any layout or typographical slips.

Estimated reading time: 30 minutes.

All images and data in this article were captured with the INDEMIND stereo visual-inertial module.

To help you avoid mistakes while editing, we have uploaded the modified code and files to GitHub. After following the tutorial, you can diff your changes against our code to make sure everything matches. GitHub link:

https://github.com/INDEMINDtech/ORB-SLAM2

I. Environment

System: Ubuntu 16.04 with ROS

ORB-SLAM dependencies: Pangolin, OpenCV, Eigen3, DBoW2, g2o, ROS

II. Download the SDK and the ORB-SLAM source

Download links:

  • SDK:

INDEMIND | SDK download (indemind.cn)

  • ORB-SLAM:

raulmur/ORB_SLAM2 (github.com)

III. Modify the source

1. After downloading the SDK, go to the SDK-Linux/demo_ros/src directory. Place the downloaded ORB-SLAM2 source there and rename the folder to ORB_SLAM.

2. Go to the ORB_SLAM/Examples/Stereo directory and modify the stereo_euroc.cc file.

1) Replace the highlighted region (orange in the original screenshots) with the following code:

#include <iostream>
#include <algorithm>
#include <fstream>
#include <iomanip>
#include <chrono>
#include <queue>
#include <mutex>
#include <thread>
#include <condition_variable>
#include <opencv2/core/core.hpp>
#include <System.h>
#include "ros/ros.h"
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include "std_msgs/String.h"
#include "FileYaml.h"

using namespace std;

ros::Subscriber image_l_sub;
ros::Subscriber image_r_sub;
int image_l_count = 0;
queue<cv::Mat> que_image_l;
queue<cv::Mat> que_image_r;
queue<long double> que_stamp;
std::mutex limage_mutex;
std::mutex rimage_mutex;
std::condition_variable left_data_cond;
std::condition_variable right_data_cond;
cv::Mat imLeft;
cv::Mat imRight;
ros::Time ros_stamp;
long double stamp;


2) After `long double stamp;`, add a time-conversion function:

// Keep the last 13 digits of a nanosecond timestamp and convert them to seconds
long double time_tranform(int64_t time)
{
    int b = time / 1e13;          // everything above the last 13 digits
    int64_t temp = b * 1e13;
    int64_t c = time - temp;      // the last 13 digits (nanosecond part)
    long double d = c / 1e9;      // seconds, with 9 digits after the decimal point
    return d;
}


3) Delete the LoadImages function.

4) Between time_tranform and main, add the left-image callback imagelCallback, the right-image callback imagerCallback, and a few variable definitions.

The code is as follows:

// image_l callback
void imagelCallback(const sensor_msgs::ImageConstPtr &msg)
{
    cv_bridge::CvImagePtr cv_ptr;
    cv_ptr = cv_bridge::toCvCopy(msg, "mono8");
    cv_ptr->image.copyTo(imLeft);
    image_l_count = cv_ptr->header.seq;
    ros_stamp = cv_ptr->header.stamp;
    std::lock_guard<std::mutex> lock_l(limage_mutex);
    stamp = time_tranform(ros_stamp.toNSec());
    que_image_l.push(imLeft);
    que_stamp.push(stamp);
}

// image_r callback
void imagerCallback(const sensor_msgs::ImageConstPtr &msg)
{
    cv_bridge::CvImagePtr cv_ptr;
    cv_ptr = cv_bridge::toCvCopy(msg, "mono8");
    cv_ptr->image.copyTo(imRight);
    std::lock_guard<std::mutex> lock_r(rimage_mutex);
    que_image_r.push(imRight);
}

Camera_Other_Parameter vecCamerasParam;
cv::Mat M1l, M2l, M1r, M2r;
cv::Mat data_left;
cv::Mat data_right;
long double frame_time;
cv::Mat imLeftRect, imRightRect;

5) After `cv::Mat imLeftRect, imRightRect;`, add a show_ORB function that fetches image data and timestamps and starts ORB-SLAM.

The code is as follows:

void show_ORB()
{
    //-----------------ORB_SLAM2 Init---------------------------------------
    const string param1_ORBvoc = "Vocabulary/ORBvoc.txt";
    const string param3_ORByaml = "Examples/Stereo/EuRoC.yaml";
    ORB_SLAM2::System SLAM(param1_ORBvoc, param3_ORByaml, ORB_SLAM2::System::STEREO, true);
    std::this_thread::sleep_for(std::chrono::milliseconds(500));

    while (true) {
        std::this_thread::sleep_for(std::chrono::milliseconds(30));

        std::unique_lock<std::mutex> lock_l(limage_mutex);
        if (que_image_l.empty()) { lock_l.unlock(); continue; }   // guard against an empty queue
        data_left = que_image_l.front();
        frame_time = que_stamp.front();
        que_image_l.pop();
        que_stamp.pop();
        lock_l.unlock();

        std::unique_lock<std::mutex> lock_r(rimage_mutex);
        if (que_image_r.empty()) { lock_r.unlock(); continue; }
        data_right = que_image_r.front();
        que_image_r.pop();
        lock_r.unlock();

        cv::resize(data_left, data_left, cv::Size(vecCamerasParam.cols, vecCamerasParam.rows));
        cv::resize(data_right, data_right, cv::Size(vecCamerasParam.cols, vecCamerasParam.rows));
        cv::remap(data_left, imLeftRect, M1l, M2l, cv::INTER_LINEAR);
        cv::remap(data_right, imRightRect, M1r, M2r, cv::INTER_LINEAR);

#ifdef COMPILEDWITHC11
        std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();
#else
        std::chrono::monotonic_clock::time_point t1 = std::chrono::monotonic_clock::now();
#endif
        SLAM.TrackStereo(imLeftRect, imRightRect, frame_time);
#ifdef COMPILEDWITHC11
        std::chrono::steady_clock::time_point t2 = std::chrono::steady_clock::now();
#else
        std::chrono::monotonic_clock::time_point t2 = std::chrono::monotonic_clock::now();
#endif
        double ttrack = std::chrono::duration_cast<std::chrono::duration<double> >(t2 - t1).count();

        // Wait to load the next frame
        double T = 0;
        if (!que_stamp.empty())
            T = que_stamp.front() - frame_time;
        if (ttrack < T)
            usleep((T - ttrack) * 1e6);
    }
    // Stop all threads
    SLAM.Shutdown();
    // Save camera trajectory
    SLAM.SaveTrajectoryTUM("CameraTrajectory.txt");
}


6) Delete everything inside main, then add the ROS initialization, config-file reading, fisheye undistortion, parameter retrieval, rectification, and the ORB-SLAM startup.

The code is as follows:

int main(int argc, char **argv)
{
    ros::init(argc, argv, "ORB_SLAM2");
    ros::NodeHandle n;
    image_l_sub = n.subscribe("/module/image_left", 100, imagelCallback);
    image_r_sub = n.subscribe("/module/image_right", 100, imagerCallback);
    // adjust this path to wherever your SDK's headset.yaml lives
    const char *param2_SDKyaml = "/home/indemind/u/SDK-Linux-ros/lib/1604/headset.yaml";
    readConfig(param2_SDKyaml, vecCamerasParam);

    //-----------------fisheye rectify--------------------------------------
    cv::Mat Q;
    if (vecCamerasParam.K_l.empty() || vecCamerasParam.K_r.empty() ||
        vecCamerasParam.P_l.empty() || vecCamerasParam.P_r.empty() ||
        vecCamerasParam.R_l.empty() || vecCamerasParam.R_r.empty() ||
        vecCamerasParam.D_l.empty() || vecCamerasParam.D_r.empty() ||
        vecCamerasParam.rows == 0 || vecCamerasParam.cols == 0) {
        cerr << "ERROR: Calibration parameters to rectify stereo are missing!" << endl;
        return -1;
    }
    cv::fisheye::initUndistortRectifyMap(vecCamerasParam.K_l, vecCamerasParam.D_l,
        vecCamerasParam.R_l, vecCamerasParam.P_l.rowRange(0,3).colRange(0,3),
        cv::Size(vecCamerasParam.cols, vecCamerasParam.rows), CV_32FC1, M1l, M2l);
    cv::fisheye::initUndistortRectifyMap(vecCamerasParam.K_r, vecCamerasParam.D_r,
        vecCamerasParam.R_r, vecCamerasParam.P_r.rowRange(0,3).colRange(0,3),
        cv::Size(vecCamerasParam.cols, vecCamerasParam.rows), CV_32FC1, M1r, M2r);

    cout << "the P_l of initUndistortRectifyMap after" << endl;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double *ptr = vecCamerasParam.P_l.ptr<double>(i, j);
            cout << *ptr << endl;
        }
    cout << "the P_r of initUndistortRectifyMap after" << endl;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double *ptr = vecCamerasParam.P_r.ptr<double>(i, j);
            cout << *ptr << endl;
        }
    cv::stereoRectify(vecCamerasParam.K_l, vecCamerasParam.D_l,
        vecCamerasParam.K_r, vecCamerasParam.D_r,
        cv::Size(vecCamerasParam.cols, vecCamerasParam.rows),
        vecCamerasParam.R, vecCamerasParam.t,
        vecCamerasParam.R_l, vecCamerasParam.R_r,
        vecCamerasParam.P_l, vecCamerasParam.P_r, Q,
        cv::CALIB_ZERO_DISPARITY, 0);
    //-----------------fisheye end------------------------------------------

    std::thread show_thread{show_ORB};  // SLAM / visualization thread
    show_thread.detach();
    ros::spin();
    return 0;
}


The changes to stereo_euroc.cc are complete.

3. Go to the SDK-Linux/demo_ros/src/ORB_SLAM/include directory, create a new file FileYaml.h, and fill it with the following code:

#ifndef _FILEYAML_H
#define _FILEYAML_H

#include <iostream>
#include <fstream>
#include <chrono>
#include <opencv/cv.h>
#include <opencv2/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/core/eigen.hpp>
#include <Eigen/Core>
#include <sophus/so3.h>
#include <sophus/se3.h>

using namespace std;
using namespace cv;
using namespace Eigen;
using namespace Sophus;
using cv::FileStorage;
using cv::FileNode;

struct Camera
{
    Eigen::Matrix4d T_SC;
    Eigen::Vector2d imageDimension;
    Eigen::VectorXd distortionCoefficients;
    Eigen::Vector2d focalLength;
    Eigen::Vector2d principalPoint;
    std::string distortionType;

    Camera() :
        T_SC(Eigen::Matrix4d::Identity()),
        imageDimension(Eigen::Vector2d(1280, 800)),
        distortionCoefficients(Eigen::Vector2d(0, 0)),
        focalLength(Eigen::Vector2d(700, 700)),
        principalPoint(Eigen::Vector2d(640, 400)),
        distortionType("equidistant") {}

    // Write serialization for this class
    void write(FileStorage& fs) const
    {
        fs << "{:";
        fs << "T_SC" << "[:";
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                fs << T_SC(i, j);
        fs << "]";
        fs << "image_dimension" << "[:" << imageDimension(0) << imageDimension(1) << "]";
        fs << "distortion_coefficients" << "[:";
        for (int i = 0; i < distortionCoefficients.rows(); i++)
            fs << distortionCoefficients(i);
        fs << "]";
        fs << "distortion_type" << distortionType;
        fs << "focal_length" << "[:" << focalLength(0) << focalLength(1) << "]";
        fs << "principal_point" << "[:" << principalPoint(0) << principalPoint(1) << "]";
        fs << "}";
    }

    // Read serialization for this class
    void read(const FileNode& node)
    {
        cv::FileNode T_SC_node = node["T_SC"];
        cv::FileNode imageDimensionNode = node["image_dimension"];
        cv::FileNode distortionCoefficientNode = node["distortion_coefficients"];
        cv::FileNode focalLengthNode = node["focal_length"];
        cv::FileNode principalPointNode = node["principal_point"];
        // extrinsics
        T_SC << T_SC_node[0],  T_SC_node[1],  T_SC_node[2],  T_SC_node[3],
                T_SC_node[4],  T_SC_node[5],  T_SC_node[6],  T_SC_node[7],
                T_SC_node[8],  T_SC_node[9],  T_SC_node[10], T_SC_node[11],
                T_SC_node[12], T_SC_node[13], T_SC_node[14], T_SC_node[15];
        imageDimension << imageDimensionNode[0], imageDimensionNode[1];
        distortionCoefficients.resize(distortionCoefficientNode.size());
        for (size_t i = 0; i < distortionCoefficientNode.size(); ++i)
            distortionCoefficients[i] = distortionCoefficientNode[i];
        focalLength << focalLengthNode[0], focalLengthNode[1];
        principalPoint << principalPointNode[0], principalPointNode[1];
        distortionType = (std::string)(node["distortion_type"]);
    }
};

struct Camera_Other_Parameter
{
    Mat K_l, K_r;    // intrinsics
    Mat D_l, D_r;    // distortion coefficients
    Mat R_l, R_r;    // rectification rotations
    Mat P_l, P_r;    // projection matrices
    Mat R;           // rotation between the two cameras
    Mat t;           // translation between the two cameras
    int cols;        // image width
    int rows;        // image height
};

void readConfig(const char *yamlPath, Camera_Other_Parameter &vecCamerasOtherParam);

#endif

Note: this header declares the configuration-file reader.
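For reference, the reader above expects an OpenCV FileStorage YAML file with a per-camera `cameras` list (the keys match the `read` method) plus the `Kl`/`Kr`/`Dl`/`Dr`/`Rl`/`Rr`/`Pl`/`Pr` matrices. The real headset.yaml ships with the SDK; a hypothetical, abbreviated sketch with placeholder values might look like this:

```yaml
%YAML:1.0
cameras:
   - { T_SC: [ 1., 0., 0., 0.,   0., 1., 0., 0.,
               0., 0., 1., 0.,   0., 0., 0., 1. ],
       image_dimension: [ 1280, 800 ],
       distortion_coefficients: [ 0.01, -0.01, 0.001, -0.001 ],
       distortion_type: equidistant,
       focal_length: [ 700., 700. ],
       principal_point: [ 640., 400. ] }
Kl: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 700., 0., 640.,   0., 700., 400.,   0., 0., 1. ]
# Kr, Dl, Dr, Rl, Rr, Pl, Pr follow the same !!opencv-matrix form
```

The `!!opencv-matrix` form is what `fin["Kl"] >> Mat` parses; the flat `T_SC` list is read element-by-element into the 4x4 extrinsic matrix.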

4. Go to the SDK-Linux/demo_ros/src/ORB_SLAM/src directory, create a new file FileYaml.cc, and fill it with the following code:

#include "FileYaml.h"

static void write(FileStorage& fs, const std::string&, const Camera& x)
{
    x.write(fs);
}

static void read(const FileNode& node, Camera& x, const Camera& default_value = Camera())
{
    if (node.empty())
        x = default_value;
    else
        x.read(node);
}

void readConfig(const char *yamlPath, Camera_Other_Parameter &vecCamerasOtherParam)
{
    std::vector<Camera> vecCameras;
    cv::FileStorage fin(yamlPath, cv::FileStorage::READ);
    if (!fin.isOpened()) {
        cerr << endl << "Failed to load readConfig yamlPath " << endl;
        return;
    }
    cv::FileNode cameras_node = fin["cameras"];
    fin["Rl"] >> vecCamerasOtherParam.R_l;
    fin["Rr"] >> vecCamerasOtherParam.R_r;
    fin["Pl"] >> vecCamerasOtherParam.P_l;
    fin["Pr"] >> vecCamerasOtherParam.P_r;
    fin["Kl"] >> vecCamerasOtherParam.K_l;
    fin["Kr"] >> vecCamerasOtherParam.K_r;
    fin["Dl"] >> vecCamerasOtherParam.D_l;
    fin["Dr"] >> vecCamerasOtherParam.D_r;
    for (cv::FileNodeIterator it = cameras_node.begin(); it != cameras_node.end(); it++) {
        Camera camera;
        (*it) >> camera;
        vecCameras.push_back(camera);
    }
    // obtain cols & rows
    vecCamerasOtherParam.cols = vecCameras[0].imageDimension(0);
    vecCamerasOtherParam.rows = vecCameras[0].imageDimension(1);

    // obtain R & t between the two cameras
    Eigen::Matrix4d T_SC_left  = vecCameras[0].T_SC;
    Eigen::Matrix4d T_SC_right = vecCameras[1].T_SC;
    SE3 T_SC_l(T_SC_left.topLeftCorner(3,3), T_SC_left.topRightCorner(3,1));
    SE3 T_SC_r(T_SC_right.topLeftCorner(3,3), T_SC_right.topRightCorner(3,1));
    SE3 Tcl_cr = T_SC_l.inverse() * T_SC_r;
    SE3 Tcr_cl = T_SC_r.inverse() * T_SC_l;
    Matrix3d R = Tcr_cl.rotation_matrix();
    Vector3d t = Tcr_cl.translation();

    // copy the Eigen matrices into raw arrays
    double *R_ptr = new double[R.size()];
    double *t_ptr = new double[t.size()];
    Map<MatrixXd>(R_ptr, R.rows(), R.cols()) = R;
    Map<MatrixXd>(t_ptr, t.rows(), t.cols()) = t;

    cout << "R_matrix" << endl;
    double R_matrix[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            R_matrix[j][i] = R_ptr[i + j*3];   // transpose
            cout << R_matrix[j][i] << endl;
        }
    cout << "t_matrix" << endl;
    double t_matrix[3];
    for (int i = 0; i < 3; i++) {
        t_matrix[i] = t_ptr[i];
        cout << t_matrix[i] << endl;
    }
    // clone() so the Mats own their data: R_matrix/t_matrix are stack arrays
    // that go out of scope when this function returns
    vecCamerasOtherParam.R = cv::Mat(3, 3, CV_64FC1, R_matrix).clone();
    vecCamerasOtherParam.t = cv::Mat(3, 1, CV_64FC1, t_matrix).clone();
    delete[] R_ptr;
    delete[] t_ptr;
}

Note: this is the implementation of the configuration-file reader.

5. Modify CMakeLists.txt

Go to the SDK-Linux/demo_ros/src directory and edit the CMakeLists.txt file.

Merge the contents of ORB-SLAM's CMakeLists.txt into the SDK's CMakeLists.txt, as follows:

1)

After add_compile_options(-std=c++11), add:

IF(NOT CMAKE_BUILD_TYPE)
  SET(CMAKE_BUILD_TYPE Release)
ENDIF()

MESSAGE("Build type: " ${CMAKE_BUILD_TYPE})

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -O3 -march=native")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -O3 -march=native")

# Check C++11 or C++0x support
include(CheckCXXCompilerFlag)
CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11)
CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
if(COMPILER_SUPPORTS_CXX11)
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
  add_definitions(-DCOMPILEDWITHC11)
  message(STATUS "Using flag -std=c++11.")
elseif(COMPILER_SUPPORTS_CXX0X)
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x")
  add_definitions(-DCOMPILEDWITHC0X)
  message(STATUS "Using flag -std=c++0x.")
else()
  message(FATAL_ERROR "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.")
endif()

LIST(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules)

find_package(OpenCV 3.0 QUIET)
if(NOT OpenCV_FOUND)
  find_package(OpenCV 2.4.3 QUIET)
  if(NOT OpenCV_FOUND)
    message(FATAL_ERROR "OpenCV > 2.4.3 not found.")
  endif()
endif()

#find_package(Eigen3 3.1.0 REQUIRED)
find_package(Pangolin REQUIRED)
include_directories(${Pangolin_INCLUDE_DIRS})


2)

After the highlighted region, add:

#Eigen
include_directories("/usr/include/eigen3")
#Sophus
find_package(Sophus REQUIRED)
include_directories( ${Sophus_INCLUDE_DIRS})
include_directories(
${PROJECT_SOURCE_DIR}
${PROJECT_SOURCE_DIR}/ORB_SLAM/include
${PROJECT_SOURCE_DIR}/ORB_SLAM
#${EIGEN3_INCLUDE_DIR}
${Pangolin_INCLUDE_DIRS}
)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/lib)

3)

Change the highlighted region to:

add_library(${PROJECT_NAME} SHARED
ORB_SLAM/src/FileYaml.cc
ORB_SLAM/src/System.cc
ORB_SLAM/src/Tracking.cc
ORB_SLAM/src/LocalMapping.cc
ORB_SLAM/src/LoopClosing.cc
ORB_SLAM/src/ORBextractor.cc
ORB_SLAM/src/ORBmatcher.cc
ORB_SLAM/src/FrameDrawer.cc
ORB_SLAM/src/Converter.cc
ORB_SLAM/src/MapPoint.cc
ORB_SLAM/src/KeyFrame.cc
ORB_SLAM/src/Map.cc
ORB_SLAM/src/MapDrawer.cc
ORB_SLAM/src/Optimizer.cc
ORB_SLAM/src/PnPsolver.cc
ORB_SLAM/src/Frame.cc
ORB_SLAM/src/KeyFrameDatabase.cc
ORB_SLAM/src/Sim3Solver.cc
ORB_SLAM/src/Initializer.cc
ORB_SLAM/src/Viewer.cc)

4)

Change the highlighted region to:

target_link_libraries(${PROJECT_NAME}
${OpenCV_LIBS}
#${EIGEN3_LIBS}
${Pangolin_LIBRARIES}
${Sophus_LIBRARIES}
${PROJECT_SOURCE_DIR}/ORB_SLAM/Thirdparty/DBoW2/lib/libDBoW2.so
${PROJECT_SOURCE_DIR}/ORB_SLAM/Thirdparty/g2o/lib/libg2o.so
)

5)

After step 4), continue by adding the following:

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/ORB_SLAM/Examples/RGB-D)
add_executable(rgbd_tum ORB_SLAM/Examples/RGB-D/rgbd_tum.cc)
target_link_libraries(rgbd_tum ${PROJECT_NAME} ${Sophus_LIBRARIES})

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/ORB_SLAM/Examples/Stereo)
add_executable(stereo_kitti ORB_SLAM/Examples/Stereo/stereo_kitti.cc)
target_link_libraries(stereo_kitti ${PROJECT_NAME} ${Sophus_LIBRARIES})

add_executable(stereo_euroc ORB_SLAM/Examples/Stereo/stereo_euroc.cc)
add_dependencies(stereo_euroc ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
target_link_libraries(stereo_euroc ${PROJECT_NAME}
  ${Sophus_LIBRARIES}
  ${catkin_LIBRARIES})

add_executable(module_driver src/camera_driver.cpp)
add_dependencies(module_driver ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
target_link_libraries(module_driver
  ${PROJECT_SOURCE_DIR}/../../lib/1604/libindem.so
  ${catkin_LIBRARIES}
  ${PROJECT_SOURCE_DIR}/../../lib/1604/libboost_filesystem.so.1.58.0
  ${PROJECT_SOURCE_DIR}/../../lib/1604/libboost_system.so.1.58.0
  ${PROJECT_SOURCE_DIR}/../../lib/1604/libg3logger.so.1.3.0-0
  ${PROJECT_SOURCE_DIR}/../../lib/1604/libnanomsg.so.5
  ${PROJECT_SOURCE_DIR}/../../lib/1604/libopencv_core.so.3.4
  ${PROJECT_SOURCE_DIR}/../../lib/1604/libopencv_imgproc.so.3.4
  ${PROJECT_SOURCE_DIR}/../../lib/1604/libopencv_videoio.so.3.4
  pthread
  stdc++fs
  dl
)

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/ORB_SLAM/Examples/Monocular)
add_executable(mono_tum ORB_SLAM/Examples/Monocular/mono_tum.cc)
target_link_libraries(mono_tum ${Sophus_LIBRARIES} ${PROJECT_NAME})

add_executable(mono_kitti ORB_SLAM/Examples/Monocular/mono_kitti.cc)
target_link_libraries(mono_kitti ${Sophus_LIBRARIES} ${PROJECT_NAME})

add_executable(mono_euroc ORB_SLAM/Examples/Monocular/mono_euroc.cc)
target_link_libraries(mono_euroc ${Sophus_LIBRARIES} ${PROJECT_NAME})

The CMakeLists.txt changes are complete.

6. Build the third-party libraries

Go to the …/ORB_SLAM/Thirdparty/DBoW2 directory, open a terminal, and run:

mkdir build
cd build
cmake ..
make

Go to the …/ORB_SLAM/Thirdparty/g2o directory, open a terminal, and run:

mkdir build
cd build
cmake ..
make

In the …/ORB_SLAM/Thirdparty/g2o directory, create a new file config.h and fill it with the following code:

#ifndef G2O_CONFIG_H
#define G2O_CONFIG_H

/* #undef G2O_OPENMP */
/* #undef G2O_SHARED_LIBS */

// give a warning if Eigen defaults to row-major matrices.
// We internally assume column-major matrices throughout the code.
#ifdef EIGEN_DEFAULT_TO_ROW_MAJOR
#  error "g2o requires column major Eigen matrices (see http://eigen.tuxfamily.org/bz/show_bug.cgi?id=422)"
#endif

#endif

7. Build

1) Install the SDK; for instructions see:

开讲啦丨双目惯性模组运行ORB-SLAM算法示例 (mp.weixin.qq.com)

Note: the following SDK dependencies must be installed!

Install cmake:

sudo apt-get install cmake

Install google-glog + gflags:

sudo apt-get install libgoogle-glog-dev

Install BLAS & LAPACK:

sudo apt-get install libatlas-base-dev

Install SuiteSparse and CXSparse:

sudo apt-get install libsuitesparse-dev

2) Go to the SDK-Linux/demo_ros directory and run catkin_make.

8. Run

1) Open a new shell and run roscore.

2) Go to the SDK-Linux/lib/1604 directory and run:

sudo -s
sudo chmod 777 ./run.sh

Then run:

./run.sh

3) Go to the SDK-Linux/demo_ros/src/ORB_SLAM directory and run ./Examples/Stereo/stereo_euroc.

Note: at this point the program prints the post-rectification parameters. Write the following values into the SDK-Linux/demo_ros/src/ORB_SLAM/src/Tracking.cc file:

float fx = 597.935;
float fy = 597.935;
float cx = 476.987;
float cy = 442.158;

Finally, build again and run, and you will have ORB-SLAM running in real time.

This concludes the tutorial. Demo:

https://www.zhihu.com/video/1090996629682991104
