Setup: 1. Install iai_kinect2 on Ubuntu 16.04. 2. Run roslaunch kinect2_bridge kinect2_bridge.launch. 3. Run rosrun save_rgbd_from_kinect2 save_rgbd_from_kinect2 to start saving images.

The code for this Kinect v2 capture tool is below; the complete project can be downloaded from my GitHub: https://github.com/Serena2018/save_color_depth_from_kinect2_with_ros/tree/master/save_rgbd_from_kinect2

Problem: I used the first version of this tool to capture RGB and depth images, and then used the associate.py script provided with the TUM dataset (its full contents are at the bottom of this article) to pair each color image with a depth image. (It was only during this work that I realized that the depth and color images produced by a depth camera are not exactly one-to-one in time; if you doubt that, look at the TUM dataset.)

After all that, I thought my dataset was seamless, until today, when I ran it through ORB-SLAM's RGB-D interface and found a serious problem: tracking was not continuous and kept jumping backwards. For example, a person in the scene had already walked past, but a moment later the person stepped back again.

To track down the problem, I started with the raw data: playing back the images showed smooth, continuous motion with no back-and-forth jumps (both the color and the depth images were fine).

Next I checked whether associate.py itself was at fault. I ran the script on a TUM dataset to produce the corresponding association.txt, then tested that data in ORB-SLAM's RGB-D interface. No jumping occurred, so the script was not the problem.

The remaining difference between my dataset and the TUM dataset was the timestamps: in mine, the fractional part was not always 6 digits. The microsecond value was written as a plain integer, so its leading zeros were dropped and any value below 100000 µs produced a fractional part shorter than 6 digits. Since every other variable I could think of was identical, I reasoned that if I could also guarantee a 6-digit fractional part, the problem might go away. So I changed the original code
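The effect of the missing padding is easy to reproduce. This is a small Python sketch (the original tool is C++, but the string behavior is the same): stamp_unpadded mimics the original stream insertion of tv_usec, stamp_padded mimics the zero-padded fix.

```python
# Simulate the original C++ formatting: the integer microsecond value is
# appended after the dot without zero padding, so leading zeros vanish.
def stamp_unpadded(sec, usec):
    return "%d.%d" % (sec, usec)

# Corrected formatting: pad tv_usec to exactly 6 digits,
# equivalent to setfill('0') << setw(6) in the C++ code.
def stamp_padded(sec, usec):
    return "%d.%06d" % (sec, usec)

# A frame captured at 100 s + 1234 us = 100.001234 s:
print(stamp_unpadded(100, 1234))  # 100.1234   -> parsed as 100.1234 s, wrong!
print(stamp_padded(100, 1234))    # 100.001234 -> correct
```

The unpadded string is not merely shorter: once parsed back as a float, it denotes a different instant in time.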

os_rgb << time_val.tv_sec << "." <<time_val.tv_usec;
os_dep << time_val.tv_sec << "."<<time_val.tv_usec;

to

os_rgb << time_val.tv_sec << "." <<setiosflags(ios::fixed)<<setprecision(6)<<std::setfill('0')<<setw(6)<<time_val.tv_usec;
os_dep << time_val.tv_sec << "."<<setiosflags(ios::fixed)<<setprecision(6)<<std::setfill('0')<<setw(6) <<time_val.tv_usec;

With this change, the fractional part of every image timestamp is guaranteed to be 6 digits.

I regenerated association.txt and ran ORB-SLAM2's RGB-D interface again: the jumping-back problem was gone. It seemed incredible, but that is what happened. How can this be explained?
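A plausible explanation (my reconstruction, not verified against ORB-SLAM2's internals): without leading-zero padding, a frame captured at 100 s + 50000 µs is written as "100.5", i.e. 100.5 s instead of 100.05 s. Parsed timestamps are then no longer monotonically increasing, and anything that sorts or matches frames by timestamp, as associate.py does, will reorder them, which on playback looks like the scene jumping backwards. A short demonstration:

```python
def stamp_unpadded(sec, usec):
    # mimic the original << tv_sec << "." << tv_usec stream formatting
    return float("%d.%d" % (sec, usec))

# Three frames captured 0.2 s apart, listed in real capture order:
frames = [(100, 50000), (100, 250000), (100, 450000)]  # 100.05, 100.25, 100.45 s
stamps = [stamp_unpadded(s, u) for s, u in frames]
print(stamps)                    # [100.5, 100.25, 100.45]
print(stamps == sorted(stamps))  # False: sorting by stamp reorders the frames
```

The first frame, really captured at 100.05 s, gets the largest stamp and sorts last, exactly the "person walks back" symptom.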

/*
 * Function: capture the color and depth images published by iai_kinect2
 *           and save them to disk as files.
 * Field separator: comma ','
 * Timestamps are in seconds (s), precise to 6 decimal places (µs).
 * maker: crp
 * 2017-5-13
 */
#include <iostream>
#include <fstream>
#include <sstream>
#include <iomanip>   // setw, setfill, setprecision
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <vector>
#include <sys/time.h> // gettimeofday

#include <ros/ros.h>
#include <ros/spinner.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/Image.h>
#include <std_msgs/String.h>

#include <cv_bridge/cv_bridge.h>         // converts ROS sensor_msgs/Image messages to cv::Mat
#include <sensor_msgs/image_encodings.h> // encoding helpers for the ROS image message type
#include <image_transport/image_transport.h>
#include <image_transport/subscriber_filter.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

Mat rgb, depth;
char successed_flag1 = 0, successed_flag2 = 0;

string topic1_name = "/kinect2/qhd/image_color"; // topic names
string topic2_name = "/kinect2/qhd/image_depth_rect";

string filename_rgbdata = "/home/yunlei/recordData/RGBD/rgbdata.txt";
string filename_depthdata = "/home/yunlei/recordData/RGBD/depthdata.txt";
string save_imagedata = "/home/yunlei/recordData/RGBD/";

void dispDepth(const cv::Mat &in, cv::Mat &out, const float maxValue);
void callback_function_color(const sensor_msgs::Image::ConstPtr image_data);
void callback_function_depth(const sensor_msgs::Image::ConstPtr image_data);

int main(int argc, char **argv) {
  string out_result;
  // namedWindow("image color",CV_WINDOW_AUTOSIZE);
  // namedWindow("image depth",CV_WINDOW_AUTOSIZE);
  ros::init(argc, argv, "kinect2_listen");
  if (!ros::ok())
    return 0;
  ros::NodeHandle n;
  ros::Subscriber sub1 = n.subscribe(topic1_name, 30, callback_function_color);
  ros::Subscriber sub2 = n.subscribe(topic2_name, 30, callback_function_depth);
  ros::AsyncSpinner spinner(3); // use 3 threads
  spinner.start();

  string rgb_str, dep_str;
  struct timeval time_val;
  struct timezone tz;
  double time_stamp;

  ofstream fout_rgb(filename_rgbdata.c_str());
  if (!fout_rgb) {
    cerr << filename_rgbdata << " file not exist" << endl;
  }
  ofstream fout_depth(filename_depthdata.c_str());
  if (!fout_depth) {
    cerr << filename_depthdata << " file not exist" << endl;
  }

  while (ros::ok()) {
    if (successed_flag1) {
      gettimeofday(&time_val, &tz); // µs
      // time_stamp = time_val.tv_sec + time_val.tv_usec / 1000000.0;
      ostringstream os_rgb;
      // os_rgb.setf(std::ios::fixed);
      // os_rgb.precision(6);
      os_rgb << time_val.tv_sec << "." << setiosflags(ios::fixed)
             << setprecision(6) << std::setfill('0') << setw(6)
             << time_val.tv_usec;
      rgb_str = save_imagedata + "rgb/" + os_rgb.str() + ".png";
      imwrite(rgb_str, rgb);
      fout_rgb << os_rgb.str() << ",rgb/" << os_rgb.str() << ".png\n";
      successed_flag1 = 0;
      // imshow("image color",rgb);
      cout << "rgb -- time:  " << time_val.tv_sec << "."
           << setiosflags(ios::fixed) << setprecision(6)
           << std::setfill('0') << setw(6) << time_val.tv_usec << endl;
      // waitKey(1);
    }
    if (successed_flag2) {
      gettimeofday(&time_val, &tz); // µs
      ostringstream os_dep;
      // os_dep.setf(std::ios::fixed);
      // os_dep.precision(6);
      os_dep << time_val.tv_sec << "." << setiosflags(ios::fixed)
             << setprecision(6) << std::setfill('0') << setw(6)
             << time_val.tv_usec;
      dep_str = save_imagedata + "depth/" + os_dep.str() + ".png"; // output image path
      imwrite(dep_str, depth);
      fout_depth << os_dep.str() << ",depth/" << os_dep.str() << ".png\n";
      successed_flag2 = 0;
      // imshow("image depth",depth);
      cout << "depth -- time:" << time_val.tv_sec << "."
           << setiosflags(ios::fixed) << setprecision(6)
           << std::setfill('0') << setw(6) << time_val.tv_usec << endl;
    }
  }
  ros::waitForShutdown();
  ros::shutdown();
  return 0;
}

void callback_function_color(const sensor_msgs::Image::ConstPtr image_data) {
  cv_bridge::CvImageConstPtr pCvImage; // a CvImage pointer
  // extract the image from the ROS message into an OpenCV image and copy it
  pCvImage = cv_bridge::toCvShare(image_data, image_data->encoding);
  pCvImage->image.copyTo(rgb);
  successed_flag1 = 1;
}

void callback_function_depth(const sensor_msgs::Image::ConstPtr image_data) {
  Mat temp;
  cv_bridge::CvImageConstPtr pCvImage; // a CvImage pointer
  // extract the image from the ROS message into an OpenCV image and copy it
  pCvImage = cv_bridge::toCvShare(image_data, image_data->encoding);
  pCvImage->image.copyTo(depth);
  // dispDepth(temp, depth, 12000.0f);
  successed_flag2 = 1;
  // imshow("Mat depth",depth/256);
  // cv::waitKey(1);
}

void dispDepth(const cv::Mat &in, cv::Mat &out, const float maxValue) {
  cv::Mat tmp = cv::Mat(in.rows, in.cols, CV_8U);
  const uint32_t maxInt = 255;
#pragma omp parallel for
  for (int r = 0; r < in.rows; ++r) {
    const uint16_t *itI = in.ptr<uint16_t>(r);
    uint8_t *itO = tmp.ptr<uint8_t>(r);
    for (int c = 0; c < in.cols; ++c, ++itI, ++itO) {
      *itO = (uint8_t)std::min((*itI * maxInt / maxValue), 255.0f);
    }
  }
  cv::applyColorMap(tmp, out, COLORMAP_JET);
}
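Given the bug above, it is worth checking any recording before feeding it to ORB-SLAM. This is a hypothetical helper, not part of the original tool: it verifies that the timestamps in a "stamp,path" file such as rgbdata.txt are strictly increasing.

```python
def timestamps_monotonic(lines):
    """Return True if the comma-separated 'stamp,path' lines
    have strictly increasing timestamps."""
    stamps = [float(line.split(",")[0]) for line in lines if line.strip()]
    return all(a < b for a, b in zip(stamps, stamps[1:]))

# Example with the file format written by the tool above:
good = ["100.001234,rgb/100.001234.png", "100.050000,rgb/100.050000.png"]
bad  = ["100.5,rgb/100.5.png", "100.25,rgb/100.25.png"]  # unpadded stamps, out of order
print(timestamps_monotonic(good))  # True
print(timestamps_monotonic(bad))   # False
```

Running this over the lines of rgbdata.txt and depthdata.txt would have flagged the unpadded-timestamp problem immediately.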

The associate.py script:

#!/usr/bin/python
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of TUM nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Requirements:
# sudo apt-get install python-argparse

"""
The Kinect provides the color and depth images in an un-synchronized way. This means that the set
of time stamps from the color images do not intersect with those of the depth images. Therefore, we
need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt
file and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy

def read_file_list(filename):
    """
    Reads a trajectory from a text file.

    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitrary data (e.g., a 3D position and 3D orientation) associated to
    this timestamp.

    Input:
    filename -- File name

    Output:
    dict -- dictionary of (stamp,data) tuples
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(","," ").replace("\t"," ").split("\n")
    list = [[v.strip() for v in line.split(" ") if v.strip()!=""] for line in lines if len(line)>0 and line[0]!="#"]
    list = [(float(l[0]),l[1:]) for l in list if len(l)>1]
    return dict(list)

def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim
    to find the closest match for every input tuple.

    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    """
    first_keys = first_list.keys()
    second_keys = second_list.keys()
    potential_matches = [(abs(a - (b + offset)), a, b)
                         for a in first_keys
                         for b in second_keys
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    matches.sort()
    return matches

if __name__ == '__main__':
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)', default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)', default=0.015)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a,b in matches:
            print("%f %s"%(a," ".join(first_list[a])))
    else:
        for a,b in matches:
            print("%f %s %f %s"%(a," ".join(first_list[a]),b-float(args.offset)," ".join(second_list[b])))
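The core of the script is a greedy nearest-neighbor match over all stamp pairs within max_difference. This is a minimal Python 3 re-implementation of the same idea (the original script is Python 2; associate_simple is my own name, and it matches bare timestamps rather than (stamp, data) dictionaries):

```python
def associate_simple(first, second, offset=0.0, max_difference=0.02):
    """Greedily pair the closest timestamps; each stamp is used at most once."""
    # All candidate pairs within the search radius, sorted by time difference.
    candidates = sorted(
        (abs(a - (b + offset)), a, b)
        for a in first for b in second
        if abs(a - (b + offset)) < max_difference
    )
    used_a, used_b, matches = set(), set(), []
    for _, a, b in candidates:
        if a not in used_a and b not in used_b:
            used_a.add(a)
            used_b.add(b)
            matches.append((a, b))
    return sorted(matches)

rgb   = [100.001234, 100.034567, 100.067890]
depth = [100.002000, 100.035100, 100.200000]
print(associate_simple(rgb, depth))
# [(100.001234, 100.002), (100.034567, 100.0351)] -- 100.2 has no partner within 0.02 s
```

This also shows why correct timestamps matter so much here: the pairing is driven entirely by the numeric time differences, so a timestamp inflated by missing leading zeros can be matched to the wrong frame or dropped entirely.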
