Table of Contents

  • 1 Purpose
  • 2 Methods
    • 2.1 SDK
    • 2.2 ROS bag
      • 2.2.1 Recording a bag
      • 2.2.2 Extracting data from a bag file
        • 2.2.2.1 Extracting images
        • 2.2.2.2 Extracting IMU data
  • 3 Core Code
    • 3.1 record.cc
    • 3.2 dataset.h
    • 3.3 dataset.cc
  • 4 References

1 Purpose

Recording datasets under different conditions in different scenes makes systematic testing easier. The recordings serve as a benchmark, giving different algorithms the same test environment.

2 Methods

2.1 SDK

First, install the MYNT EYE camera SDK.

Then record a dataset with the sample program provided in the SDK.

cd ~/MYNT-EYE-S-SDK/samples/_output/bin
./record

While recording, the terminal prints status information.

By default the program creates a dataset folder in the current directory. The dataset includes:

  • left: left-camera images, encoded as PNG
  • disparity: disparity maps
  • depth: depth maps
  • motion.txt: IMU data, in the format seq, flag, timestamp, accel_x, accel_y, accel_z, gyro_x, gyro_y, gyro_z, temperature
  • each image folder (e.g. left) also contains a stream.txt with per-frame metadata, in the format seq, frame_id, timestamp, exposure_time

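A motion.txt line can be read back into named fields with a few lines of Python. This is a minimal sketch; the sample line below is illustrative, not taken from a real recording:

```python
# Column order matches the motion.txt format described above.
FIELDS = ["seq", "flag", "timestamp", "accel_x", "accel_y", "accel_z",
          "gyro_x", "gyro_y", "gyro_z", "temperature"]

def parse_motion_line(line):
    """Split a 'seq, flag, timestamp, ...' line into a dict of floats."""
    values = [float(v) for v in line.split(",")]
    return dict(zip(FIELDS, values))

sample = "0, 1, 4.255, 0.01, -0.02, 9.81, 0.001, 0.002, 0.003, 30.5"
imu = parse_motion_line(sample)
print(imu["accel_z"])  # 9.81
```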

To produce datasets in the KITTI, TUM, or EuRoC formats, the code needs some modifications. The changes should be small, mainly to paths and file naming.
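For example, renaming the saved images by timestamp (the TUM/EuRoC convention) can be done outside the SDK by reading stream.txt. This is only a sketch: the `scale` factor that converts the SDK timestamp unit to seconds is a placeholder and must be checked against your SDK version.

```python
import os

def tum_rename_map(stream_txt_lines, scale=1.0):
    """Map each saved image name (zero-padded seq, matching
    IMAGE_FILENAME_WIDTH = 6 in dataset.cc) to a timestamp-based name.
    `scale` converts the SDK timestamp unit to seconds (placeholder)."""
    renames = {}
    for line in stream_txt_lines:
        if line.startswith("seq"):  # skip the header line
            continue
        seq, _frame_id, timestamp, _exposure = [v.strip() for v in line.split(",")]
        old = "%06d.png" % int(seq)
        new = "%.6f.png" % (float(timestamp) * scale)
        renames[old] = new
    return renames

# Applying the renames to a recorded 'left' folder would look like:
# for old, new in tum_rename_map(open("dataset/left/stream.txt")).items():
#     os.rename(os.path.join("dataset/left", old),
#               os.path.join("dataset/left", new))
```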

2.2 ROS bag

2.2.1 Recording a bag

First, start the camera.

source ~/MYNT-EYE-S-SDK/wrappers/ros/devel/setup.bash
roslaunch mynt_eye_ros_wrapper display.launch

Then record a bag containing the left images, right images, and IMU data.

rosbag record /mynteye/left/image_raw /mynteye/right/image_raw /mynteye/imu/data_raw

2.2.2 Extracting data from a bag file

If you only saved a bag file but want the data laid out as in the method above, you can extract the images and IMU data from the bag.

2.2.2.1 Extracting images

Open a terminal and extract the images from the bag:

rosrun image_view extract_images _sec_per_frame:=0.01 image:=/mynteye/left/image_raw

Note: if the number of extracted images does not match the count reported by the rosbag info command, try decreasing the _sec_per_frame value.

Open another terminal and play the bag:

rosbag play data.bag

Images extracted this way are named in ascending order starting from frame0000.jpg. If you want the file names to be timestamps instead, use the following script.

Create extract_images.py:

#!/usr/bin/python
# coding:utf-8
# Extract images from a bag file.

import rosbag
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

# Output paths for the left/right images; create them beforehand
# (or add code to create them automatically).
left_path = '/home/wb/MYNT-EYE-S-SDK/dataset/left/'
right_path = '/home/wb/MYNT-EYE-S-SDK/dataset/right/'


class ImageCreator():

    def __init__(self):
        self.bridge = CvBridge()
        # Open the bag file; make sure the path is correct.
        with rosbag.Bag('/home/wb/MYNT-EYE-S-SDK/2020-08-05-23-05-50.bag', 'r') as bag:
            for topic, msg, t in bag.read_messages():
                if topic == "/mynteye/left/image_raw":  # left-image topic
                    try:
                        cv_image = self.bridge.imgmsg_to_cv2(msg, "bgr8")
                    except CvBridgeError as e:
                        print(e)
                        continue
                    # %.6f keeps six decimal places; adjust to the precision you need.
                    timestr = "%.6f" % msg.header.stamp.to_sec()
                    image_name = timestr + ".png"  # file name: <timestamp>.png
                    cv2.imwrite(left_path + image_name, cv_image)  # save the image
                elif topic == "/mynteye/right/image_raw":  # right-image topic
                    try:
                        cv_image = self.bridge.imgmsg_to_cv2(msg, "bgr8")
                    except CvBridgeError as e:
                        print(e)
                        continue
                    timestr = "%.6f" % msg.header.stamp.to_sec()
                    image_name = timestr + ".png"
                    cv2.imwrite(right_path + image_name, cv_image)


if __name__ == '__main__':
    try:
        image_creator = ImageCreator()
    except rospy.ROSInterruptException:
        pass

After configuring the paths, run:

python extract_images.py

2.2.2.2 Extracting IMU data

rostopic echo -b 2020-08-05-23-05-50.bag -p /mynteye/imu/data_raw > imu_data.csv

Of course, you can also extend the Python script above to customize the IMU output.
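The CSV written by rostopic echo -p can also be post-processed into the motion.txt-style layout. A minimal sketch follows; the column names assume a sensor_msgs/Imu message as flattened by rostopic, so verify them against the header line of your own CSV:

```python
import csv
import io

def imu_csv_to_motion(csv_text):
    """Convert a `rostopic echo -p` IMU CSV into
    'seq, timestamp, accel_xyz, gyro_xyz' rows (a motion.txt-like layout)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = []
    for i, r in enumerate(reader):
        rows.append("%d, %s, %s, %s, %s, %s, %s, %s" % (
            i,
            r["%time"],
            r["field.linear_acceleration.x"],
            r["field.linear_acceleration.y"],
            r["field.linear_acceleration.z"],
            r["field.angular_velocity.x"],
            r["field.angular_velocity.y"],
            r["field.angular_velocity.z"]))
    return rows

# Usage: print("\n".join(imu_csv_to_motion(open("imu_data.csv").read())))
```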

3 Core Code

3.1 record.cc

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include "mynteye/logger.h"
#include "mynteye/device/device.h"
#include "mynteye/device/utils.h"
#include "mynteye/util/times.h"

#include "dataset.h"

MYNTEYE_USE_NAMESPACE

int main(int argc, char *argv[]) {
  glog_init _(argc, argv);

  auto &&api = API::Create(argc, argv);
  if (!api) return 1;

  auto request = api->GetStreamRequest();
  // struct StreamRequest {
  //   /** Stream width in pixels */
  //   std::uint16_t width;
  //   /** Stream height in pixels */
  //   std::uint16_t height;
  //   /** Stream pixel format */
  //   Format format;
  //   /** Stream frames per second */
  //   std::uint16_t fps;
  // }
  // request.fps = 10;
  api->ConfigStreamRequest(request);

  api->EnableMotionDatas();
  api->EnableStreamData(Stream::DEPTH);
  api->Start(Source::ALL);

  const char *outdir;
  if (argc >= 2) {
    outdir = argv[1];  // the dataset output path can be changed here
  } else {
    outdir = "./dataset";
  }
  tools::Dataset dataset(outdir);

  cv::namedWindow("frame");

  std::size_t img_count = 0;
  std::size_t imu_count = 0;
  auto &&time_beg = times::now();
  while (true) {
    api->WaitForStreams();

    auto &&left_datas = api->GetStreamDatas(Stream::LEFT);
    auto &&right_datas = api->GetStreamDatas(Stream::RIGHT);
    auto &&depth_data = api->GetStreamData(Stream::DEPTH);
    auto &&disparity_data = api->GetStreamData(Stream::DISPARITY);
    img_count += left_datas.size();

    auto &&motion_datas = api->GetMotionDatas();
    imu_count += motion_datas.size();

    cv::Mat img;
    if (left_datas.size() > 0 && right_datas.size() > 0) {
      auto &&left_frame = left_datas.back().frame_raw;
      auto &&right_frame = right_datas.back().frame_raw;
      if (right_frame->data() && left_frame->data()) {
        if (left_frame->format() == Format::GREY) {
          cv::Mat left_img(left_frame->height(), left_frame->width(), CV_8UC1,
              left_frame->data());
          cv::Mat right_img(right_frame->height(), right_frame->width(),
              CV_8UC1, right_frame->data());
          cv::hconcat(left_img, right_img, img);
        } else if (left_frame->format() == Format::YUYV) {
          cv::Mat left_img(left_frame->height(), left_frame->width(), CV_8UC2,
              left_frame->data());
          cv::Mat right_img(right_frame->height(), right_frame->width(),
              CV_8UC2, right_frame->data());
          cv::cvtColor(left_img, left_img, cv::COLOR_YUV2BGR_YUY2);
          cv::cvtColor(right_img, right_img, cv::COLOR_YUV2BGR_YUY2);
          cv::hconcat(left_img, right_img, img);
        } else if (left_frame->format() == Format::BGR888) {
          cv::Mat left_img(left_frame->height(), left_frame->width(), CV_8UC3,
              left_frame->data());
          cv::Mat right_img(right_frame->height(), right_frame->width(),
              CV_8UC3, right_frame->data());
          cv::hconcat(left_img, right_img, img);
        } else {
          return -1;
        }
        cv::imshow("frame", img);
      }
    }

    if (img_count > 10 && imu_count > 50) {  // save
      // save Stream::LEFT
      for (auto &&left : left_datas) {
        dataset.SaveStreamData(Stream::LEFT, left);
      }
      // save Stream::RIGHT
      // for (auto &&right : right_datas) {
      //   dataset.SaveStreamData(Stream::RIGHT, right);
      // }

      // save Stream::DEPTH
      if (!depth_data.frame.empty()) {
        dataset.SaveStreamData(Stream::DEPTH, depth_data);
      }
      // save Stream::DISPARITY
      if (!disparity_data.frame.empty()) {
        dataset.SaveStreamData(Stream::DISPARITY, disparity_data);
      }

      for (auto &&motion : motion_datas) {
        dataset.SaveMotionData(motion);
      }
      std::cout << "\rSaved " << img_count << " imgs"
                << ", " << imu_count << " imus" << std::flush;
    }

    char key = static_cast<char>(cv::waitKey(1));
    if (key == 27 || key == 'q' || key == 'Q') {
      // press ESC or Q to stop recording and exit
      break;
    }
  }
  std::cout << " to " << outdir << std::endl;
  auto &&time_end = times::now();

  api->Stop(Source::ALL);

  float elapsed_ms =
      times::count<times::microseconds>(time_end - time_beg) * 0.001f;
  LOG(INFO) << "Time beg: " << times::to_local_string(time_beg)
            << ", end: " << times::to_local_string(time_end)
            << ", cost: " << elapsed_ms << "ms";
  LOG(INFO) << "Img count: " << img_count
            << ", fps: " << (1000.f * img_count / elapsed_ms);
  LOG(INFO) << "Imu count: " << imu_count
            << ", hz: " << (1000.f * imu_count / elapsed_ms);
  return 0;
}

3.2 dataset.h

#ifndef MYNTEYE_TOOLS_DATASET_H_  // NOLINT
#define MYNTEYE_TOOLS_DATASET_H_
#pragma once

#include <fstream>
#include <map>
#include <memory>
#include <string>

#include "mynteye/mynteye.h"
#include "mynteye/api/api.h"
#include "mynteye/device/callbacks.h"

MYNTEYE_BEGIN_NAMESPACE

namespace tools {

class Dataset {
 public:
  struct Writer {
    std::ofstream ofs;
    std::string outdir;
    std::string outfile;
  };

  using writer_t = std::shared_ptr<Writer>;

  explicit Dataset(std::string outdir);
  ~Dataset();

  void SaveStreamData(const Stream &stream, const device::StreamData &data);
  void SaveMotionData(const device::MotionData &data);

  void SaveStreamData(const Stream &stream, const api::StreamData &data);
  void SaveMotionData(const api::MotionData &data);

 private:
  writer_t GetStreamWriter(const Stream &stream);
  writer_t GetMotionWriter();

  std::string outdir_;

  std::map<Stream, writer_t> stream_writers_;
  writer_t motion_writer_;

  std::map<Stream, std::size_t> stream_counts_;
  std::size_t motion_count_;
  std::size_t accel_count_;
  std::size_t gyro_count_;
};

}  // namespace tools

MYNTEYE_END_NAMESPACE

#endif  // MYNTEYE_TOOLS_DATASET_H_ NOLINT

3.3 dataset.cc

#include "dataset.h"#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>#include <iomanip>
#include <limits>
#include <stdexcept>
#include <utility>#include "mynteye/logger.h"
#include "mynteye/util/files.h"#define FULL_PRECISION \std::fixed << std::setprecision(std::numeric_limits<double>::max_digits10)#define IMAGE_FILENAME_WIDTH 6MYNTEYE_BEGIN_NAMESPACEnamespace tools {Dataset::Dataset(std::string outdir) : outdir_(std::move(outdir)) {VLOG(2) << __func__;if (!files::mkdir(outdir_)) {LOG(FATAL) << "Create directory failed: " << outdir_;}
}Dataset::~Dataset() {VLOG(2) << __func__;for (auto &&it = stream_writers_.begin(); it != stream_writers_.end(); it++) {if (it->second) {it->second->ofs.flush();it->second->ofs.close();}}if (motion_writer_) {motion_writer_->ofs.flush();motion_writer_->ofs.close();}
}void Dataset::SaveStreamData(const Stream &stream, const device::StreamData &data) {auto &&writer = GetStreamWriter(stream);auto seq = stream_counts_[stream];writer->ofs << seq << ", " << data.img->frame_id << ", "<< data.img->timestamp << ", " << data.img->exposure_time<< std::endl;if (data.frame) {std::stringstream ss;ss << writer->outdir << MYNTEYE_OS_SEP << std::dec<< std::setw(IMAGE_FILENAME_WIDTH) << std::setfill('0') << seq << ".png";if (data.frame->format() == Format::GREY) {cv::Mat img(data.frame->height(), data.frame->width(), CV_8UC1,data.frame->data());cv::imwrite(ss.str(), img);} else if (data.frame->format() == Format::YUYV) {cv::Mat img(data.frame->height(), data.frame->width(), CV_8UC2,data.frame->data());cv::cvtColor(img, img, cv::COLOR_YUV2BGR_YUY2);cv::imwrite(ss.str(), img);} else if (data.frame->format() == Format::BGR888) {cv::Mat img(data.frame->height(), data.frame->width(), CV_8UC3,data.frame->data());cv::imwrite(ss.str(), img);} else {cv::Mat img(data.frame->height(), data.frame->width(), CV_8UC1,data.frame->data());cv::imwrite(ss.str(), img);}}++stream_counts_[stream];
}void Dataset::SaveMotionData(const device::MotionData &data) {auto &&writer = GetMotionWriter();// auto seq = data.imu->serial_number;auto seq = motion_count_;writer->ofs << seq << ", " << static_cast<int>(data.imu->flag) << ", "<< data.imu->timestamp << ", " << data.imu->accel[0] << ", "<< data.imu->accel[1] << ", " << data.imu->accel[2] << ", "<< data.imu->gyro[0] << ", " << data.imu->gyro[1] << ", "<< data.imu->gyro[2] << ", " << data.imu->temperature<< std::endl;++motion_count_;/*if(motion_count_ != seq) {LOG(INFO) << "motion_count_ != seq !" << " motion_count_: " << motion_count_<< " seq: " << seq;motion_count_ = seq;}*/
}void Dataset::SaveStreamData(const Stream &stream, const api::StreamData &data) {auto &&writer = GetStreamWriter(stream);auto seq = stream_counts_[stream];writer->ofs << seq << ", " << data.img->frame_id << ", "<< data.img->timestamp << ", " << data.img->exposure_time<< std::endl;if (!data.frame.empty()) {std::stringstream ss;ss << writer->outdir << MYNTEYE_OS_SEP << std::dec<< std::setw(IMAGE_FILENAME_WIDTH) << std::setfill('0') << seq << ".png";cv::imwrite(ss.str(), data.frame);}++stream_counts_[stream];
}void Dataset::SaveMotionData(const api::MotionData &data) {auto &&writer = GetMotionWriter();// auto seq = data.imu->serial_number;auto seq = motion_count_;writer->ofs << seq << ", " << static_cast<int>(data.imu->flag) << ", "<< data.imu->timestamp << ", " << data.imu->accel[0] << ", "<< data.imu->accel[1] << ", " << data.imu->accel[2] << ", "<< data.imu->gyro[0] << ", " << data.imu->gyro[1] << ", "<< data.imu->gyro[2] << ", " << data.imu->temperature<< std::endl;motion_count_++;/*if(motion_count_ != seq) {LOG(INFO) << "motion_count_ != seq !" << " motion_count_: " << motion_count_<< " seq: " << seq;motion_count_ = seq;}*/
}Dataset::writer_t Dataset::GetStreamWriter(const Stream &stream) {try {return stream_writers_.at(stream);} catch (const std::out_of_range &e) {writer_t writer = std::make_shared<Writer>();switch (stream) {case Stream::LEFT: {writer->outdir = outdir_ + MYNTEYE_OS_SEP "left";} break;case Stream::RIGHT: {writer->outdir = outdir_ + MYNTEYE_OS_SEP "right";} break;case Stream::DEPTH: {writer->outdir = outdir_ + MYNTEYE_OS_SEP "depth";} break;case Stream::DISPARITY: {writer->outdir = outdir_ + MYNTEYE_OS_SEP "disparity";} break;case Stream::RIGHT_RECTIFIED: {writer->outdir = outdir_ + MYNTEYE_OS_SEP "right_rect";} break;case Stream::LEFT_RECTIFIED: {writer->outdir = outdir_ + MYNTEYE_OS_SEP "left_rect";} break;case Stream::DISPARITY_NORMALIZED: {writer->outdir = outdir_ + MYNTEYE_OS_SEP "disparity_norm";} break;default:LOG(FATAL) << "Unsupported stream: " << stream;}writer->outfile = writer->outdir + MYNTEYE_OS_SEP "stream.txt";files::mkdir(writer->outdir);writer->ofs.open(writer->outfile, std::ofstream::out);writer->ofs << "seq, frame_id, timestamp, exposure_time" << std::endl;writer->ofs << FULL_PRECISION;stream_writers_[stream] = writer;stream_counts_[stream] = 0;return writer;}
}Dataset::writer_t Dataset::GetMotionWriter() {if (motion_writer_ == nullptr) {writer_t writer = std::make_shared<Writer>();writer->outdir = outdir_;writer->outfile = writer->outdir + MYNTEYE_OS_SEP "motion.txt";files::mkdir(writer->outdir);writer->ofs.open(writer->outfile, std::ofstream::out);writer->ofs << "seq, flag, timestamp, accel_x, accel_y, accel_z, ""gyro_x, gyro_y, gyro_z, temperature"<< std::endl;writer->ofs << FULL_PRECISION;motion_writer_ = writer;motion_count_ = 0;accel_count_ = 0;gyro_count_ = 0;}return motion_writer_;
}}  // namespace toolsMYNTEYE_END_NAMESPACE

4 References

  1. https://github.com/slightech/MYNT-EYE-S-SDK
  2. https://mynt-eye-s-sdk-docs-zh-cn.readthedocs.io/zh_CN/latest/
  3. https://www.cnblogs.com/arkenstone/p/6676203.html
  4. https://blog.csdn.net/corfox_liu/article/details/50920332?locationNum=15
  5. https://blog.csdn.net/Draonly/article/details/74642747
