1. Record a rosbag
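A note on where the topic names in step 3 come from: /device_0/sensor_1/Color_0/image/data and /device_0/sensor_0/Depth_0/image/data are the topics librealsense writes when the bag is recorded directly with realsense-viewer (the Record button) or rs-record. If you record through ROS instead, a session would look roughly like the sketch below (assuming the realsense2_camera driver is installed; its default topic names differ, so the topic checks in step 3 would need to be adjusted accordingly):

roslaunch realsense2_camera rs_camera.launch align_depth:=true
rosbag record -O my_dataset.bag /camera/color/image_raw /camera/aligned_depth_to_color/image_raw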

2. Play back the rosbag, inspect the topics in rviz, and note down the names of the rgb and depth stream topics
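Before opening rviz, the standard ROS command-line tools can be used to check what is in the bag and to play it back (the bag path is the one used in the step 3 script):

rosbag info /home/town/data/20220423_054352.bag
rosbag play /home/town/data/20220423_054352.bag
rostopic list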

3. Use the script below (Python 2, not Python 3) to save the rgb and depth images and generate rgb.txt and depth.txt at the same time

import roslib
import rosbag
import rospy
import cv2
import os
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from cv_bridge import CvBridgeError

rgb = '/home/town/data/dataset/rgb/'      # rgb path
depth = '/home/town/data/dataset/depth/'  # depth path

bridge = CvBridge()
file_handle1 = open('/home/town/data/dataset/rgb.txt', 'w')
file_handle2 = open('/home/town/data/dataset/depth.txt', 'w')

with rosbag.Bag('/home/town/data/20220423_054352.bag', 'r') as bag:
    for topic, msg, t in bag.read_messages():
        if topic == "/device_0/sensor_1/Color_0/image/data":    # rgb topic
            cv_image = bridge.imgmsg_to_cv2(msg, "bgr8")
            timestr = "%.6f" % msg.header.stamp.to_sec()         # rgb time stamp
            image_name = timestr + ".png"
            path = "rgb/" + image_name
            file_handle1.write(timestr + " " + path + '\n')
            cv2.imwrite(rgb + image_name, cv_image)
        if topic == "/device_0/sensor_0/Depth_0/image/data":     # depth topic
            cv_image = bridge.imgmsg_to_cv2(msg)
            # cv_image = bridge.imgmsg_to_cv2(msg, '32FC1')
            # cv_image = cv_image * 255
            timestr = "%.6f" % msg.header.stamp.to_sec()         # depth time stamp
            image_name = timestr + ".png"
            path = "depth/" + image_name
            file_handle2.write(timestr + " " + path + '\n')
            cv2.imwrite(depth + image_name, cv_image)
file_handle1.close()
file_handle2.close()
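The D435i's depth stream is 16-bit with values in millimetres, which is what DepthMapFactor: 1000.0 in the yaml further below assumes, so the saved depth PNGs should stay single-channel uint16. A quick sanity check (the file name is only an illustration; use any image from your depth folder):

import cv2

img = cv2.imread('/home/town/data/dataset/depth/1650692632.123456.png', cv2.IMREAD_UNCHANGED)
print(img.dtype, img.shape)   # expect uint16 and (480, 640) for the 640x480 stream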

You can add alias python='/usr/bin/python2.7' to your ~/.bashrc so these scripts run under Python 2; remember to comment it out when you are done.

4. Generate associate.txt with the associate.py script below

import argparse
import sys
import os
import numpy

def read_file_list(filename):
    """
    Reads a trajectory from a text file.

    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitrary data (e.g., a 3D position and 3D orientation) associated to this timestamp.

    Input:
    filename -- File name

    Output:
    dict -- dictionary of (stamp,data) tuples
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(",", " ").replace("\t", " ").split("\n")
    list = [[v.strip() for v in line.split(" ") if v.strip() != ""] for line in lines if len(line) > 0 and line[0] != "#"]
    list = [(float(l[0]), l[1:]) for l in list if len(l) > 1]
    return dict(list)

def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly,
    we aim to find the closest match for every input tuple.

    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    """
    first_keys = first_list.keys()
    second_keys = second_list.keys()
    potential_matches = [(abs(a - (b + offset)), a, b)
                         for a in first_keys
                         for b in second_keys
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    matches.sort()
    return matches

if __name__ == '__main__':
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them   ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)', default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)', default=0.02)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a, b in matches:
            print("%f %s" % (a, " ".join(first_list[a])))
    else:
        for a, b in matches:
            print("%f %s %f %s" % (a, " ".join(first_list[a]), b - float(args.offset), " ".join(second_list[b])))
python2 associate.py rgb.txt depth.txt > associate.txt

rgbd_tum parses each line of the association file as "rgb_timestamp rgb_path depth_timestamp depth_path", so pass rgb.txt as the first argument; this puts the rgb entries first in associate.txt, as required.
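With the paths written by the extraction script in step 3, each line of associate.txt then looks like this (timestamps here are only illustrative):

1650692632.123456 rgb/1650692632.123456.png 1650692632.125678 depth/1650692632.125678.png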

5. Test

# TUM example sequence (the last argument is the associations file for that sequence):
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM1.yaml ./rgbd_dataset_freiburg1_desk/ ./fr1_desk.txt

# Your own D435i dataset from the steps above:
./Examples/RGB-D/rgbd_tum ./Vocabulary/ORBvoc.txt ./Examples/RGB-D/D435i.yaml ./dataset/ ./dataset/associate.txt
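For the second command, the dataset folder produced in steps 3 and 4 is expected to be laid out as below (this is the layout the paths above imply):

dataset/
    rgb/             (color PNGs named by timestamp)
    depth/           (16-bit depth PNGs named by timestamp)
    rgb.txt
    depth.txt
    associate.txt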

If you run into "Track lost soon after initialisation, reseting...", try discarding the first part of the dataset's images and running it again.
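Since rgbd_tum only processes the frames listed in the association file, the simplest way to discard the beginning of the sequence is to drop the first N lines of associate.txt (N = 100 and the output name below are just examples):

tail -n +101 associate.txt > associate_trimmed.txt

Then pass ./dataset/associate_trimmed.txt as the last argument instead.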

# Attached below is a D435i.yaml

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 615.9417724609375
Camera.fy: 616.0935668945312
Camera.cx: 322.3533630371094
Camera.cy: 240.44674682617188

Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.p3: 0.0

Camera.width: 640
Camera.height: 480

# Camera frames per second
Camera.fps: 30.0

# IR projector baseline times fx (approx.)
# bf = baseline (in meters) * fx; the D435i baseline is 50 mm
Camera.bf: 30.797

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 40.0

# Depth map values factor
DepthMapFactor: 1000.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
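The intrinsics above (Camera.fx/fy/cx/cy) are device-specific; if you need to read them off your own camera, one option is to query librealsense directly. A minimal sketch, assuming pyrealsense2 is installed and the camera is connected, with stream settings chosen to match the 640x480 @ 30 fps color stream used here:

import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipe.start(cfg)
# fx, fy and the principal point (ppx, ppy) of the color stream
intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
print(intr.fx, intr.fy, intr.ppx, intr.ppy)
pipe.stop()

Camera.bf is then baseline * fx; with the D435i's 50 mm baseline, 0.050 * 615.94 ≈ 30.797, which is the value used above.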

And a T265.yaml for reference:

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
Camera.type: "PinHole"

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 286.419189453125
Camera.fy: 286.384307861328
Camera.cx: 101.355010986328
Camera.cy: 102.183197021484

Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0

Camera.width: 848
Camera.height: 800

# Camera frames per second
Camera.fps: 30.0

# IR projector baseline times fx (approx.)
Camera.bf: 40.0

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 40.0

# Depth map values factor
DepthMapFactor: 1.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500

https://dgzc.ganahe.top/ganahe/2021/wrjzzdhcgqtgrfbh.html
