Reference:

M. Stoiber, M. Sundermeyer and R. Triebel, "Iterative Corresponding Geometry: Fusing Region and Depth for Highly Efficient 3D Tracking of Textureless Objects," 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 6845-6855, doi: 10.1109/CVPR52688.2022.00673.

Paper page:

Iterative Corresponding Geometry: Fusing Region and Depth for Highly Efficient 3D Tracking of Textureless Objects | IEEE Conference Publication | IEEE Xplore

Code provided by the authors:

3DObjectTracking/ICG at master · DLR-RM/3DObjectTracking · GitHub

1. Dependencies

The following dependencies are required: Eigen 3, GLEW, GLFW 3, and OpenCV 4. In addition, images from an Azure Kinect or RealSense camera can be streamed using the K4A and realsense2 libraries.

Required:

(1) Eigen 3: Eigen3 + Ubuntu 20.04 installation (CSDN blog)

(2) GLEW: GLEW + Ubuntu 20.04 installation (CSDN blog)

(3) GLFW 3: GLFW + Ubuntu 20.04 installation (CSDN blog)

(4) OpenCV 4: OpenCV + Ubuntu 20.04 installation (CSDN blog)

Optional:

(5) K4A (Azure Kinect) or realsense2 libraries: not installed this time

(6) OpenMP: not installed

(7) Doxygen: Doxygen + Graphviz + Ubuntu 20.04 installation (CSDN blog)

(8) Graphviz: Doxygen + Graphviz + Ubuntu 20.04 installation (CSDN blog)
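On Ubuntu 20.04 the required libraries can typically be installed from the distribution packages. A minimal sketch, assuming the default apt package names (the K4A and realsense2 libraries come from Microsoft's and Intel's own repositories and are omitted here):

```shell
# Required build dependencies (package names assumed for Ubuntu 20.04;
# libopencv-dev provides OpenCV 4.x on this release)
sudo apt-get install -y libeigen3-dev libglew-dev libglfw3-dev libopencv-dev

# Optional: documentation tooling (Doxygen with dot support)
sudo apt-get install -y doxygen graphviz
```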

~/3dTracking$ git clone https://github.com/DLR-RM/3DObjectTracking.git
Cloning into '3DObjectTracking'...
remote: Enumerating objects: 325, done.
remote: Counting objects: 100% (325/325), done.
remote: Compressing objects: 100% (230/230), done.
remote: Total 325 (delta 159), reused 233 (delta 81), pack-reused 0
Receiving objects: 100% (325/325), 4.76 MiB | 1.49 MiB/s, done.
Resolving deltas: 100% (159/159), done.

2. Source code file structure

There are five folders:

  • include/: header files of the ICG library

  • src/: source files of the ICG library

  • third_party/: external header-only libraries

  • examples/: example files for tracking as well as for evaluation on different datasets

  • doc/: files for documentation

3. CMake & Build

Use CMake to build the library from source. The following dependencies are required: Eigen 3, GLEW, GLFW 3, and OpenCV 4. In addition, images from an Azure Kinect or RealSense camera can be streamed using the K4A and realsense2 libraries. Both libraries are optional and can be disabled using the CMake flags USE_AZURE_KINECT, and USE_REALSENSE. If CMake finds OpenMP, the code is compiled using multithreading and vectorization for some functions. Finally, the documentation is built if Doxygen with dot is detected. Note that links to classes that are embedded in this readme only work in the generated documentation.


(1) cmake

First, change into the 3DObjectTracking/ICG directory. Since the Kinect and RealSense libraries are not installed, the USE_AZURE_KINECT and USE_REALSENSE options need to be disabled, so run:

cmake -DUSE_AZURE_KINECT=OFF -DUSE_REALSENSE=OFF

The output is as follows:

$ cmake -DUSE_AZURE_KINECT=OFF -DUSE_REALSENSE=OFF
CMake Warning:
  No source or binary directory provided.  Both will be assumed to be the
  same as the current working directory, but note that this warning will
  become a fatal error in future CMake releases.

CMake Warning (dev) at /usr/local/share/cmake-3.25/Modules/FindOpenGL.cmake:315 (message):
  Policy CMP0072 is not set: FindOpenGL prefers GLVND by default when
  available.  Run "cmake --help-policy CMP0072" for policy details.  Use the
  cmake_policy command to set the policy and suppress this warning.

  FindOpenGL found both a legacy GL library:

    OPENGL_gl_LIBRARY: /usr/lib/x86_64-linux-gnu/libGL.so

  and GLVND libraries for OpenGL and GLX:

    OPENGL_opengl_LIBRARY: /usr/lib/x86_64-linux-gnu/libOpenGL.so
    OPENGL_glx_LIBRARY: /usr/lib/x86_64-linux-gnu/libGLX.so

  OpenGL_GL_PREFERENCE has not been set to "GLVND" or "LEGACY", so for
  compatibility with CMake 3.10 and below the legacy GL library will be used.
Call Stack (most recent call first):
  CMakeLists.txt:22 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found Doxygen: /usr/bin/doxygen (found version "1.8.17") found components: doxygen dot
-- Configuring done
-- Generating done
-- Build files have been written to: /home/r****/3dTracking/3DObjectTracking/ICG

(2) make

make

The last few lines of output:

Patching output file 35/35
lookup cache used 1814/65536 hits=11252 misses=1888
finished...
[100%] Built target doc_doxygen
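The "No source or binary directory provided" warning above comes from invoking cmake without a source-directory argument. A conventional out-of-source build avoids the warning and keeps generated files out of the source tree. A sketch, assuming it is run from the 3DObjectTracking/ICG directory:

```shell
# Configure and build in a separate build/ directory
mkdir -p build && cd build
cmake .. -DUSE_AZURE_KINECT=OFF -DUSE_REALSENSE=OFF
make -j"$(nproc)"
```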

4. Usage

As explained previously, ICG is a library that supports a wide variety of tracking scenarios. As a consequence, to start tracking, one has to first configure the tracker. For this, two options exist:

  • One option is to use C++ programming to set up and configure all objects according to one's scenario. An example that allows running the tracker on a sequence streamed from an Azure Kinect is shown in examples/run_on_camera_sequence.cpp. The executable takes the path to a directory and the names of multiple bodies. The directory has to contain Body and StaticDetector metafiles named <BODY_NAME>.yaml and <BODY_NAME>_detector.yaml. Similarly, examples/run_on_recorded_sequence.cpp allows running the tracker on a sequence that was recorded using record_camera_sequence.cpp. The executable allows the tracking of a single body that is detected using a ManualDetector. It requires the metafiles for a LoaderColorCamera, Body, and ManualDetector, as well as the path to a temporary directory in which generated model files are stored.

  • In addition to being used as a library in combination with C++ programming, the tracker can also be configured using a generator function together with a YAML file that defines the overall configuration. A detailed description of how to set up the YAML file is given in Generator Configfile. An example of how to use a generator is given in examples/run_generated_tracker.cpp. The executable requires a YAML file that is parsed by the GenerateConfiguredTracker() function to generate a Tracker object. The main YAML file defines how individual objects are combined and allows specifying YAML metafiles for individual components that do not use default parameters. An example of a YAML file is given in examples/generator_example/config.yaml.
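As a small illustration of the second option, the generated tracker could be launched with the example configuration. This is a sketch: the executable and config names come from the files mentioned above, but the build-directory layout is an assumption (here, an out-of-source build in ICG/build):

```shell
# Run from the build directory; the generator parses the YAML file
# and constructs the configured Tracker object.
./examples/run_generated_tracker ../examples/generator_example/config.yaml
```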


Since neither a Kinect nor a RealSense camera is used this time, these camera-based examples are not needed for now.

5. Evaluation

The code in examples/evaluate_<DATASET_NAME>_dataset.cpp and examples/parameters_study_<DATASET_NAME>.cpp contains everything for the evaluation on the YCB-Video, OPT, Choi, and RBOT datasets. For the evaluation, please download the YCB-Video, OPT, Choi, or RBOT dataset and adjust the dataset_directory in the source code. Note that model files (e.g. 002_master_chef_can_depth_model.bin, 002_master_chef_can_region_model.bin, ...) will be created automatically and are stored in the specified external_directory. For the evaluation of the YCB-Video dataset, please unzip poses_ycb-video.zip and store its content in the respective external_directory. For the Choi dataset, the Matlab script in examples/dataset_converter/convert_choi_dataset.m has to be executed to convert .pcd files into .png images. Also, using a program such as MeshLab, all model files have to be converted from .ply to .obj files and stored in the folder external_directory/models. Both the OPT and RBOT datasets work without any manual changes.


(1) Datasets

(1) Datasets used: YCB-Video, OPT, Choi, RBOT

(2) For evaluation on the YCB-Video dataset, unzip poses_ycb-video.zip and store its contents in the corresponding external_directory.

(3) For the Choi dataset, the Matlab script examples/dataset_converter/convert_choi_dataset.m must be executed to convert the .pcd files into .png images.

(4) The OPT and RBOT datasets work without any manual changes.

  • YCB:PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes – UW Robotics and State Estimation Lab
  • OPT:OPT Dataset
  • Choi:Changhyun Choi > Research > RGB-D Tracking
  • RBOT:Computer Vision and Mixed Reality Group | RBOT | Computer Vision and Mixed Reality Group
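The YCB-Video preparation step above can be sketched as a single command; the destination path is a placeholder for the external_directory configured in the source code:

```shell
# Unpack the provided ground-truth poses into the external directory
unzip poses_ycb-video.zip -d /path/to/external_directory
```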

(2) Evaluation on the OPT dataset

Modify the directories in the code:

examples/evaluate_opt_dataset.cpp: set dataset_directory, external_directory, and result_path

examples/parameters_study_opt.cpp: set dataset_directory and external_directory

dataset_directory: the directory where the dataset is stored

external_directory: the directory where the generated model files will be stored

result_path: the directory where the results will be stored

Here are the directories I set:

  std::filesystem::path dataset_directory{"/home/r*/3dTracking/datasetP/OPT/Model3D/"};
  std::filesystem::path external_directory{"/home/r*/3dTracking/3DObjectTracking/ICG/examples/externalP/"};
  std::filesystem::path result_path{"/home/r*/3dTracking/3DObjectTracking/ICG/examples/resultP/"};
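Before launching the evaluation it is worth checking that these directories actually exist: the dataset must already be present, while the external and result directories only need to be created. A small shell sketch, using a throwaway temporary root for illustration (substitute the real paths above):

```shell
# Use a throwaway root so the sketch is self-contained; replace with real paths.
root=$(mktemp -d)
dataset_directory="$root/OPT/Model3D"
external_directory="$root/externalP"
result_path="$root/resultP"

mkdir -p "$dataset_directory"                  # stands in for the downloaded dataset
mkdir -p "$external_directory" "$result_path"  # ICG writes model files and results here

# Confirm all three directories are in place before starting the evaluation
for d in "$dataset_directory" "$external_directory" "$result_path"; do
  [ -d "$d" ] && echo "ok: ${d#"$root"/}"
done
# prints:
# ok: OPT/Model3D
# ok: externalP
# ok: resultP
```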
