I've recently been wanting to learn about SLAM for autonomous driving, so these are my study notes.

Dynamic Object SLAM in Autonomous Driving

Conventional SLAM algorithms start from the assumption that everything in the environment is static. SLAM systems that can operate in dynamic environments typically just treat dynamic objects as outliers, remove them, and then run a conventional SLAM pipeline. This seriously limits the applicability of SLAM in autonomous driving.

That said, it really depends on the scenario. Chatting with a friend, he made the point that compared with the static background, dynamic obstacles contribute few features, while the rest of the scene provides plenty of constraints. With a LiDAR, for example, everything scanned necessarily ends up in the point cloud, but when you return to the same place, the constraints from the rest of the scene pull the estimate back to the correct result, so the dynamic obstacles really are just outliers.

Monocular dynamic object SLAM (MonoDOS) extends conventional SLAM in two ways. First, it is object-aware: it not only detects and tracks keypoints, but also detects and tracks objects with semantic meaning. Second, it can handle scenes containing dynamic objects and track the object motion.

A DOS system introduces the concept of an object, which brings several new ingredients. First, objects have to be extracted from individual image frames, the counterpart of the keypoint-extraction stage in a conventional SLAM system (e.g., ORB features in ORB-SLAM); this stage outputs 2D or 3D object detections, and monocular 3D object detection has made great progress recently. Second, data association becomes more complex: static SLAM only cares about image keypoints, so its data association is just feature matching across keyframes, whereas dynamic SLAM must also associate keypoints with objects within a frame and associate objects across frames. Third, as an extension of bundle adjustment in traditional SLAM, object tracklets and dynamic keypoints have to be added to the optimization, and velocity constraints from a motion model can be exploited as well.
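As a toy illustration of the keypoint-to-object association step described above, here is a minimal sketch (my own simplification in Python, not taken from any of the papers below): keypoints that fall inside a detected 2D box are assigned to that object, and everything else is treated as static background.

import numpy as np

def associate_keypoints(keypoints, boxes):
    """Assign each 2D keypoint to a detected object box (-1 = static background).

    keypoints: (N, 2) array of pixel coordinates.
    boxes: list of (x_min, y_min, x_max, y_max) 2D detections.
    """
    labels = np.full(len(keypoints), -1, dtype=int)
    for j, (x0, y0, x1, y1) in enumerate(boxes):
        inside = ((keypoints[:, 0] >= x0) & (keypoints[:, 0] <= x1) &
                  (keypoints[:, 1] >= y0) & (keypoints[:, 1] <= y1))
        labels[inside & (labels == -1)] = j  # first box wins on overlaps
    return labels

kps = np.array([[100.0, 120.0], [420.0, 300.0], [50.0, 40.0]])
print(associate_keypoints(kps, [(380, 250, 500, 360)]))  # -> [-1  0 -1]

Real systems refine this with descriptor matching and temporal consistency, since boxes overlap and objects occlude one another.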

CubeSLAM: Monocular 3D Object SLAM

https://arxiv.org/pdf/1806.00557.pdf

GitHub - shichaoy/cube_slam: CubeSLAM: Monocular 3D Object Detection and SLAM

First, let's look at this TRO 2019 paper. One of CubeSLAM's main contributions is to integrate detected objects into the factor-graph optimization as cuboids, using a motion model to constrain the cuboid's possible motion and optimizing the object's velocity along the way.

The detected dynamic objects and SLAM benefit each other. Unlike traditional SLAM, which treats dynamic obstacles as outliers, here the dynamic objects provide geometric constraints for bundle adjustment and for depth initialization. This also improves generalization, letting ORB-SLAM work in low-texture environments. The mono3D results are refined by BA and constrained by the motion model. Note, though, that the objects are not removed from the map.

The factor graph is illustrated by the figure below (caption from the paper):

Dynamic object SLAM. Blue nodes represent the static SLAM component and red ones represent new dynamic variables. The green squares are the new factors of dynamic map including motion model constraints and observations of objects and points.
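To make the motion-model factor concrete, here is a minimal sketch of a piecewise-constant-velocity residual between consecutive object states (a position-only simplification of my own; CubeSLAM's actual factor is formulated on the full object pose):

import numpy as np

def const_velocity_residual(p_prev, p_curr, v, dt):
    """Residual penalizing deviation from constant-velocity motion.

    p_prev, p_curr: (3,) object positions at frames t-1 and t.
    v: (3,) estimated object velocity (itself an optimization variable).
    dt: time elapsed between the two frames.
    """
    return p_curr - (p_prev + v * dt)  # zero when the motion matches velocity v

In the factor graph this ties the two object-pose variables to a shared velocity variable, next to the usual camera-point reprojection factors.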

Object extraction

This paper uses 2D object detections together with low-level image features to generate and score 3D cuboid proposals. The approach looks simple, yet works remarkably well for detecting both chairs and cars; deep-learning-based methods can, however, produce more accurate results.

Cuboid proposal generation and scoring
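Roughly speaking, proposals are generated from the 2D box together with sampled object orientations, and each candidate cuboid is scored by how well its projected edges line up with edges actually observed in the image. Below is a toy version of such a scoring term (my simplification; the paper's cost also includes angle-alignment and shape components):

import numpy as np

def edge_alignment_score(projected_edge_points, edge_distance_map):
    """Score one cuboid proposal by edge alignment (lower is better).

    projected_edge_points: (N, 2) pixel coordinates sampled along the
        cuboid's projected visible edges.
    edge_distance_map: (H, W) distance transform of the image edge map,
        i.e. distance in pixels to the nearest detected edge.
    """
    pts = np.asarray(projected_edge_points, dtype=int)
    h, w = edge_distance_map.shape
    valid = (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)
    if not valid.any():
        return np.inf  # proposal projects entirely outside the image
    return edge_distance_map[pts[valid, 1], pts[valid, 0]].mean()

The best-scoring cuboid per 2D detection is then handed to the back end as an object landmark.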

Here are the results, compared against ORB-SLAM without loop closure. Honestly though, even methods that simply treat dynamic obstacles as outliers can achieve decent results. My feeling is that using detected dynamic obstacles to constrain the camera pose is sound, but removing the dynamic obstacles from the map does not seem to buy much...

Stereo Vision-based Semantic 3D Object and Ego-motion Tracking for Autonomous Driving

This paper likewise detects and tracks dynamic obstacles, then optimizes the camera pose based on the reprojection errors of those dynamic obstacles.

It is stereo-based, and its contributions are mainly twofold: (1) a lightweight semantic method for extracting 3D objects; (2) bundle adjustment that incorporates the dynamic obstacles.
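To see what BA over dynamic objects means concretely, here is a generic sketch of the reprojection residual for a landmark rigidly attached to a moving object (pinhole model, my own notation rather than the paper's exact formulation):

import numpy as np

def project(K, p_cam):
    """Pinhole projection of a 3D point in the camera frame to pixels."""
    x = K @ p_cam
    return x[:2] / x[2]

def dynamic_reprojection_residual(K, T_cw, T_wo, p_obj, uv_observed):
    """Reprojection error of a landmark attached to a moving object.

    K: (3, 3) camera intrinsics.
    T_cw: (4, 4) world-to-camera pose of this frame.
    T_wo: (4, 4) object-to-world pose of this frame (time-varying).
    p_obj: (3,) landmark coordinates, fixed in the object frame.
    uv_observed: (2,) measured pixel location in this view.
    """
    p_h = np.append(p_obj, 1.0)
    p_cam = (T_cw @ T_wo @ p_h)[:3]
    return project(K, p_cam) - uv_observed

Summing this residual over both stereo views and all tracked object points is what couples the camera pose and the object poses in one optimization.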

ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings

ClusterVO represents a dynamic object as a cluster of tracked keypoints (landmarks, in the paper's terminology) and uses YOLOv3 as the 2D object detector to propose semantic 2D bounding boxes in each frame. It makes no assumptions about object shape, so unlike the two papers above there is no cuboid-generation stage.
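ClusterVO's actual clustering fuses semantic, spatial, and motion cues; as a much-simplified stand-in, the sketch below groups landmarks purely by motion consistency, i.e. points that moved with roughly the same 3D displacement between two frames are assumed to belong to the same rigid cluster:

import numpy as np

def cluster_by_motion(displacements, tol=0.1):
    """Greedily group landmarks by 3D displacement between two frames.

    displacements: (N, 3) per-landmark displacement vectors in the world frame.
    tol: agreement threshold in meters.
    Returns (N,) cluster ids; static points gather in the near-zero cluster.
    """
    labels = np.full(len(displacements), -1, dtype=int)
    centers = []
    for i, d in enumerate(displacements):
        for c, center in enumerate(centers):
            if np.linalg.norm(d - center) < tol:
                labels[i] = c
                break
        else:
            centers.append(d.copy())
            labels[i] = len(centers) - 1
    return labels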

Multi-Object Monocular SLAM for Dynamic Environments

This paper uses a monocular camera with multi-object detection.

The key problems it tackles are much the same, though: detect and track the multiple dynamic obstacles in view, then jointly optimize the camera pose using the poses of the tracked obstacles.

Testing some demos on KITTI

Here I start by testing on the KITTI dataset, which I already introduced in a previous post. This post uses it to reproduce several classic methods.

The KITTI Vision Benchmark Suite: http://www.cvlibs.net/datasets/kitti/eval_odometry.php

VINS-FUSION

VINS-Fusion demo

Let me set up VINS-Mono as well while I'm at it.

GitHub - HKUST-Aerial-Robotics/VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator
https://github.com/HKUST-Aerial-Robotics/VINS-Mono

The main issue is that catkin_make errors out; changing c++11 to c++14 in its CMakeLists lets it compile.

That avoids the build error, but it still seems to crash at runtime.

Tried switching to ceres-solver-1.14.0 and rebuilding. Still no luck... I even considered reinstalling the system.

Tried another machine, still failing; then I also built vins-fusion, and after running vins-fusion once, vins-mono worked??? Ran it again and it kept working... strange...

Finally found the problem: the launch files have to be run in the following order...

Why that matters is still a complete mystery to me.

roslaunch vins_estimator vins_rviz.launch
roslaunch vins_estimator euroc.launch

Download the dataset from the ASL Datasets page (kmavvisualinertialdatasets): https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets

The VINS family is really rich; there is also Co-VINS:

https://github.com/qintonguav/Co-VINS

And there is GNSS-VINS (GVINS):

https://github.com/HKUST-Aerial-Robotics/GVINS

A-LOAM

A-LOAM was already configured in my earlier post "Study Notes: LiDAR SLAM (Reproducing and Learning the LOAM Series)". This post runs it on the KITTI dataset.

GitHub - HKUST-Aerial-Robotics/A-LOAM: Advanced implementation of LOAM
https://github.com/HKUST-Aerial-Robotics/A-LOAM

Note that the downloaded dataset has to be organized in the following structure:

data
|----poses
|    |----00.txt
|----sequences
|    |----00
|    |    |----image_0
|    |    |----image_1
|    |    |----velodyne
|    |    |----time.txt

Then modify the KITTI launch file to:

<launch><node name="kittiHelper" pkg="aloam_velodyne" type="kittiHelper" output="screen"> <!-- <param name="dataset_folder" type="string" value="/data/KITTI/odometry/" /> --><param name="dataset_folder" type="string" value="/home/kwanwaipang/dataset/kitti/data/" /><param name="sequence_number" type="string" value="00" /><!-- <param name="to_bag" type="bool" value="false" /> --><param name="to_bag" type="bool" value="true" /><!-- <param name="output_bag_file" type="string" value="/tmp/kitti.bag" /> replace with your output folder --><param name="output_bag_file" type="string" value="/home/kwanwaipang/dataset/kitti/kitti_00.bag" /> <!-- replace with your output folder --><param name="publish_delay" type="int" value="1" /></node>
</launch>

A real gotcha: the directory layout listed above is not quite right, and the data has to be arranged the way the code actually expects (best to read the kittiHelper source directly; for instance, the timestamp file it looks for appears to be times.txt rather than time.txt).
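As a quick sanity check before launching, a small script like the one below can verify the layout (the expected file names are what I believe kittiHelper reads; do confirm them against the source in your own checkout):

import os

dataset_root = "/home/kwanwaipang/dataset/kitti/data"  # same folder as in the launch file
sequence = "00"

# What A-LOAM's kittiHelper appears to expect -- note times.txt, not time.txt.
expected = [
    "poses/%s.txt" % sequence,
    "sequences/%s/times.txt" % sequence,
    "sequences/%s/image_0" % sequence,
    "sequences/%s/image_1" % sequence,
    "sequences/%s/velodyne" % sequence,
]

for rel in expected:
    path = os.path.join(dataset_root, rel)
    print(("OK   " if os.path.exists(path) else "MISS ") + path)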

The running result is shown below.

Video link:

aloam

LIO-SAM

https://github.com/TixiaoShan/LIO-SAM

You may hit an error about the missing dependency libmetis.so; install it with the command below.

sudo apt-get install -y libmetis-dev

To run the KITTI dataset, modify the parameter file under config as follows:

lio_sam:

  # Topics
  pointCloudTopic: "points_raw"               # Point cloud data
  # imuTopic: "imu_raw"                       # IMU data
  imuTopic: "imu_correct"                     # IMU data
  odomTopic: "odometry/imu"                   # IMU pre-preintegration odometry, same frequency as IMU
  gpsTopic: "odometry/gpsz"                   # GPS odometry topic from navsat, see module_navsat.launch file

  # Frames
  lidarFrame: "base_link"
  baselinkFrame: "base_link"
  odometryFrame: "odom"
  mapFrame: "map"

  # GPS Settings
  useImuHeadingInitialization: true           # if using GPS data, set to "true"
  useGpsElevation: false                      # if GPS elevation is bad, set to "false"
  gpsCovThreshold: 2.0                        # m^2, threshold for using GPS data
  poseCovThreshold: 25.0                      # m^2, threshold for using GPS data

  # Export settings
  savePCD: false                              # https://github.com/TixiaoShan/LIO-SAM/issues/3
  savePCDDirectory: "/Downloads/LOAM/"        # in your home folder, starts and ends with "/". Warning: the code deletes "LOAM" folder then recreates it. See "mapOptimization" for implementation

  # Sensor Settings
  sensor: velodyne                            # lidar sensor type, either 'velodyne' or 'ouster'
  N_SCAN: 64   #16                            # number of lidar channel (i.e., 16, 32, 64, 128)
  Horizon_SCAN: 1800                          # lidar horizontal resolution (Velodyne:1800, Ouster:512,1024,2048)
  downsampleRate: 2  #1                       # default: 1. Downsample your data if too many points. i.e., 16 = 64 / 4, 16 = 16 / 1
  lidarMinRange: 1.0                          # default: 1.0, minimum lidar range to be used
  lidarMaxRange: 1000.0                       # default: 1000.0, maximum lidar range to be used

  # IMU Settings
  imuAccNoise: 3.9939570888238808e-03
  imuGyrNoise: 1.5636343949698187e-03
  imuAccBiasN: 6.4356659353532566e-05
  imuGyrBiasN: 3.5640318696367613e-05
  imuGravity: 9.80511
  imuRPYWeight: 0.01

  # Extrinsics (lidar -> IMU)
  extrinsicTrans: [-8.086759e-01, 3.195559e-01, -7.997231e-01]
  extrinsicRot: [9.999976e-01, 7.553071e-04, -2.035826e-03,
                 -7.854027e-04, 9.998898e-01, -1.482298e-02,
                 2.024406e-03, 1.482454e-02, 9.998881e-01]
  extrinsicRPY: [9.999976e-01, 7.553071e-04, -2.035826e-03,
                 -7.854027e-04, 9.998898e-01, -1.482298e-02,
                 2.024406e-03, 1.482454e-02, 9.998881e-01]
  # extrinsicRot: [1, 0, 0,
  #                0, 1, 0,
  #                0, 0, 1]
  # extrinsicRPY: [1, 0, 0,
  #                0, 1, 0,
  #                0, 0, 1]

  # LOAM feature threshold
  edgeThreshold: 1.0
  surfThreshold: 0.1
  edgeFeatureMinValidNum: 10
  surfFeatureMinValidNum: 100

  # voxel filter paprams
  odometrySurfLeafSize: 0.4                     # default: 0.4 - outdoor, 0.2 - indoor
  mappingCornerLeafSize: 0.2                    # default: 0.2 - outdoor, 0.1 - indoor
  mappingSurfLeafSize: 0.4                      # default: 0.4 - outdoor, 0.2 - indoor

  # robot motion constraint (in case you are using a 2D robot)
  z_tollerance: 1000                            # meters
  rotation_tollerance: 1000                     # radians

  # CPU Params
  numberOfCores: 4                              # number of cores for mapping optimization
  mappingProcessInterval: 0.15                  # seconds, regulate mapping frequency

  # Surrounding map
  surroundingkeyframeAddingDistThreshold: 1.0   # meters, regulate keyframe adding threshold
  surroundingkeyframeAddingAngleThreshold: 0.2  # radians, regulate keyframe adding threshold
  surroundingKeyframeDensity: 2.0               # meters, downsample surrounding keyframe poses
  surroundingKeyframeSearchRadius: 50.0         # meters, within n meters scan-to-map optimization (when loop closure disabled)

  # Loop closure
  loopClosureEnableFlag: true
  loopClosureFrequency: 1.0                     # Hz, regulate loop closure constraint add frequency
  surroundingKeyframeSize: 50                   # submap size (when loop closure enabled)
  historyKeyframeSearchRadius: 15.0             # meters, key frame that is within n meters from current pose will be considerd for loop closure
  historyKeyframeSearchTimeDiff: 30.0           # seconds, key frame that is n seconds older will be considered for loop closure
  historyKeyframeSearchNum: 25                  # number of hostory key frames will be fused into a submap for loop closure
  historyKeyframeFitnessScore: 0.3              # icp threshold, the smaller the better alignment

  # Visualization
  globalMapVisualizationSearchRadius: 1000.0    # meters, global map visualization radius
  globalMapVisualizationPoseDensity: 10.0       # meters, global map visualization keyframe density
  globalMapVisualizationLeafSize: 1.0           # meters, global map visualization cloud density

# Navsat (convert GPS coordinates to Cartesian)
navsat:
  frequency: 50
  wait_for_datum: false
  delay: 0.0
  magnetic_declination_radians: 0
  yaw_offset: 0
  zero_altitude: true
  broadcast_utm_transform: false
  broadcast_utm_transform_as_parent_frame: false
  publish_filtered_gps: false

# EKF for Navsat
ekf_gps:
  publish_tf: false
  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  world_frame: odom

  frequency: 50
  two_d_mode: false
  sensor_timeout: 0.01

  # -------------------------------------
  # External IMU:
  # -------------------------------------
  imu0: imu_correct
  # make sure the input is aligned with ROS REP105. "imu_correct" is manually transformed by myself. EKF can also transform the data using tf between your imu and base_link
  imu0_config: [false, false, false,
                true,  true,  true,
                false, false, false,
                false, false, true,
                true,  true,  true]
  imu0_differential: false
  imu0_queue_size: 50
  imu0_remove_gravitational_acceleration: true

  # -------------------------------------
  # Odometry (From Navsat):
  # -------------------------------------
  odom0: odometry/gps
  odom0_config: [true,  true,  true,
                 false, false, false,
                 false, false, false,
                 false, false, false,
                 false, false, false]
  odom0_differential: false
  odom0_queue_size: 10

  #                           x     y     z     r     p     y   x_dot  y_dot  z_dot  r_dot p_dot y_dot x_ddot y_ddot z_ddot
  process_noise_covariance: [ 1.0,  0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,    0,    0,      0,
                              0,    1.0,  0,    0,    0,    0,    0,     0,     0,     0,    0,    0,    0,    0,      0,
                              0,    0,    10.0, 0,    0,    0,    0,     0,     0,     0,    0,    0,    0,    0,      0,
                              0,    0,    0,    0.03, 0,    0,    0,     0,     0,     0,    0,    0,    0,    0,      0,
                              0,    0,    0,    0,    0.03, 0,    0,     0,     0,     0,    0,    0,    0,    0,      0,
                              0,    0,    0,    0,    0,    0.1,  0,     0,     0,     0,    0,    0,    0,    0,      0,
                              0,    0,    0,    0,    0,    0,    0.25,  0,     0,     0,    0,    0,    0,    0,      0,
                              0,    0,    0,    0,    0,    0,    0,     0.25,  0,     0,    0,    0,    0,    0,      0,
                              0,    0,    0,    0,    0,    0,    0,     0,     0.04,  0,    0,    0,    0,    0,      0,
                              0,    0,    0,    0,    0,    0,    0,     0,     0,     0.01, 0,    0,    0,    0,      0,
                              0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0.01, 0,    0,    0,      0,
                              0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0.5,  0,    0,      0,
                              0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,    0.01, 0,      0,
                              0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,    0,    0.01,   0,
                              0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,    0,    0,      0.015]

As with A-LOAM, the author also provides code for generating rosbags from KITTI:

To generate more bags using other KITTI raw data, you can use the python script provided in "config/doc/kitti2bag".

The video result:

LIO-SAM

Stereo Visual Inertial Pose Estimation Based on Feedforward-Feedback Loops

This is an open-source project from HK PolyU. Paper link: https://arxiv.org/pdf/2007.02250.pdf

Let me configure it following the instructions.

GitHub - HKPolyU-UAV/FLVIS: FLVIS: Feedback Loop Based Visual Initial SLAM
https://github.com/HKPolyU-UAV/FLVIS

If the KITTI build fails with the error

(.text.startup+0x4e8): undefined reference to ‘Sophus::SE3::SE3()’

you can refer to this CSDN post on Sophus build errors (Sophus 编译错误_u010003609的博客-CSDN博客)

and change the corresponding CMakeLists to:

cmake_minimum_required(VERSION 2.8.3)
project(flvis)

add_definitions(-std=c++11)
#set(CMAKE_CXX_FLAGS "-std=c++11)
set(CMAKE_CXX_FLAGS "-std=c++11 ${CMAKE_CXX_FLAGS} -O3 -Wall -pthread") # -Wextra -Werror
set(CMAKE_BUILD_TYPE "RELEASE")

list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/3rdPartLib/g2o/cmake_modules)
set(G2O_ROOT /usr/local/include/g2o)
find_package(G2O REQUIRED)
find_package(OpenCV 3 REQUIRED)
find_package(Eigen3 REQUIRED)
find_package(CSparse REQUIRED)
find_package(Sophus REQUIRED)
find_package(yaml-cpp REQUIRED)
find_package(DBoW3 REQUIRED)
# pcl
find_package(PCL REQUIRED)
include_directories(${PCL_INCLUDE_DIRS})
add_definitions(${PCL_DEFINITIONS})

#FIND_PACKAGE(octomap REQUIRED)
#FIND_PACKAGE(octovis REQUIRED)
#INCLUDE_DIRECTORIES(${OCTOMAP_INCLUDE_DIRS})

find_package(catkin REQUIRED COMPONENTS
    nodelet
    roscpp
    rostime
    std_msgs
    sensor_msgs
    geometry_msgs
    nav_msgs
    pcl_ros
    tf
    visualization_msgs
    image_transport
    cv_bridge
    message_generation
    message_filters
)

add_message_files(
    FILES
    KeyFrame.msg
    CorrectionInf.msg
)

generate_messages(
    DEPENDENCIES
    std_msgs
    sensor_msgs
    geometry_msgs
    nav_msgs
    visualization_msgs
)

## Declare a catkin package
catkin_package(CATKIN_DEPENDS message_runtime)

include_directories(
    ${catkin_INCLUDE_DIRS}
    ${OpenCV_INCLUDE_DIRS}
    ${G2O_INCLUDE_DIRS}
    ${CSPARSE_INCLUDE_DIR}
    ${Sophus_INCLUDE_DIRS}
    ${YAML_CPP_INCLUDE_DIR}
    ${DBoW3_INCLUDE_DIR}
    "${CMAKE_CURRENT_SOURCE_DIR}/src/"
    "${CMAKE_CURRENT_SOURCE_DIR}/src/processing/"
    "${CMAKE_CURRENT_SOURCE_DIR}/src/backend/"
    "${CMAKE_CURRENT_SOURCE_DIR}/src/frontend/"
    "${CMAKE_CURRENT_SOURCE_DIR}/src/utils/"
    "${CMAKE_CURRENT_SOURCE_DIR}/src/visualization/"
    #"${CMAKE_CURRENT_SOURCE_DIR}/src/octofeeder/"
)

SET(G2O_LIBS cholmod cxsparse
    -lg2o_cli -lg2o_core -lg2o_csparse_extension -lg2o_ext_freeglut_minimal
    -lg2o_incremental -lg2o_interactive -lg2o_interface -lg2o_opengl_helper
    -lg2o_parser -lg2o_simulator -lg2o_solver_cholmod -lg2o_solver_csparse
    -lg2o_solver_dense -lg2o_solver_pcg -lg2o_solver_slam2d_linear
    -lg2o_solver_structure_only -lg2o_stuff -lg2o_types_data -lg2o_types_icp
    -lg2o_types_sba -lg2o_types_sclam2d -lg2o_types_sim3 -lg2o_types_slam2d
    -lg2o_types_slam3d)

## Declare a C++ library
add_library(flvis
    #processing
    src/processing/feature_dem.cpp
    src/processing/depth_camera.cpp
    src/processing/landmark.cpp
    src/processing/camera_frame.cpp
    src/processing/triangulation.cpp
    src/processing/lkorb_tracking.cpp
    src/processing/imu_state.cpp
    src/processing/vi_motion.cpp
    src/processing/optimize_in_frame.cpp
    #vis
    src/visualization/rviz_frame.cpp
    src/visualization/rviz_path.cpp
    src/visualization/rviz_pose.cpp
    src/visualization/rviz_odom.cpp
    #msg
    src/utils/keyframe_msg.cpp
    src/utils/correction_inf_msg.cpp
    #node tracking
    src/frontend/vo_tracking.cpp
    src/frontend/f2f_tracking.cpp
    #node localmap
    src/backend/vo_localmap.cpp
    #node loop closing
    src/backend/vo_loopclosing.cpp
    src/backend/poselmbag.cpp
    #src/octofeeder/octomap_feeder.cpp
)

add_dependencies(flvis
    flvis_generate_messages_cpp
    ${catkin_EXPORTED_TARGETS})

target_link_libraries(flvis
    ${catkin_LIBRARIES}
    ${OpenCV_LIBRARIES}
    ${CSPARSE_LIBRARY}
    ${Sophus_LIBRARIES}
    ${YAML_CPP_LIBRARIES}
    ${DBoW3_LIBRARIES}
    ${G2O_LIBS}
    ${PCL_LIBRARIES}
    ${Boost_SYSTEM_LIBRARY}
    #${OCTOMAP_LIBRARIES}
)

#independent modules
#1 euroc_publisher publish path
add_executable(vo_repub_rec
    src/independ_modules/vo_repub_rec.cpp)
target_link_libraries(vo_repub_rec
    ${catkin_LIBRARIES}
    ${Sophus_LIBRARIES})

add_executable(kitti_publisher
    src/independ_modules/kitti_publisher.cpp
    src/visualization/rviz_path.cpp)
set(Sophus_LIBRARIES libSophus.so)
target_link_libraries(kitti_publisher
    ${catkin_LIBRARIES}
    ${Sophus_LIBRARIES})

Note that a botched g2o installation can also cause problems; reinstalling it fixes that.

Modify the corresponding launch file; the dataset is the one organized above.

<?xml version="1.0"?>
<launch><!--Input######################################################################################################--><param name="/dataset_pub_delay"      type="double"  value="5.0" /><param name="/dataset_pub_rate"       type="int"     value="30" /><param name="/publish_gt"             type="bool"    value="true" /><param name="/dataset_folder_path"    type="string"  value="/home/kwanwaipang/dataset/kitti/data/sequences/00/" /><param name="/dataset_gt_file"        type="string"  value="/home/kwanwaipang/dataset/kitti/data/results/00.txt" /><node pkg="flvis" type="kitti_publisher" name="kitti_publisher" output="screen"/><!--FLVIS######################################################################################################--><arg name="node_start_delay"  default="1.0" /><param name="/yamlconfigfile" type="string" value="$(find flvis)/launch/KITTI/KITTI.yaml"/><param name="/voc"            type="string" value="$(find flvis)/voc/voc_orb.dbow3"/><!-- Manager --><node pkg="nodelet" type="nodelet"name="flvis_nodelet_manager" args="manager" output="screen"launch-prefix="bash -c 'sleep $(arg node_start_delay); $0 $@' "><param name="num_worker_threads" value="4" /></node><!-- TrackingNode --><!-- D435i --><node pkg="nodelet" type="nodelet" args="load flvis/TrackingNodeletClass flvis_nodelet_manager"name="TrackingNodeletClass_loader" output="screen"launch-prefix="bash -c 'sleep $(arg node_start_delay); $0 $@' "><remap from="/imu"               to="/camera/imu"/></node><!-- LocalMapNode --><!--window_size: Num of keyframes in sliding window optimizer-->
<!--    <node pkg="nodelet" type="nodelet" args="load flvis/LocalMapNodeletClass flvis_nodelet_manager"name="LocalMapNodeletClass_loader" output="screen"launch-prefix="bash -c 'sleep $(arg node_start_delay); $0 $@' "><param name="/window_size" type="int" value="8" /></node>--><!-- LoopClosingNode --><node pkg="nodelet" type="nodelet" args="load flvis/LoopClosingNodeletClass flvis_nodelet_manager"name="LoopClosingNodeletClass_loader" output="screen"launch-prefix="bash -c 'sleep $(arg node_start_delay); $0 $@' "></node><node pkg="flvis" type="vo_repub_rec" name="lc2file" output="screen"><!--Sub Support Type:--><param name="sub_type" type="string" value="NavPath" /><param name="sub_topic" type="string" value="/vision_path_lc_all" /><!--Support Type:--><!--"0" disable the republish function --><!--"Path"--><!--"PoseStamped"--><param name="repub_type" type="string" value="0" /><param name="repub_topic" type="string" value="/republish_path" /><!--output_file_path = "0" disable the file output function--><param name="output_file_path" type="string" value="$(find flvis)/results/kitti_lc.txt" />
</node></launch>
roslaunch flvis rviz_kitti.launch
roslaunch flvis flvis_kitti.launch

The result:

flvis

DSO: Direct Sparse Odometry

Running DSO on the KITTI dataset (CSDN blog)

DSO installation and debugging (huicanlin, cnblogs)

LDSO

https://github.com/tum-vision/LDSO

./bin/run_dso_kitti \
    preset=0 \
    files=/home/kwanwaipang/dataset/kitti/data/sequences/00/ \
    calib=./examples/Kitti/Kitti00-02.txt

Just follow the instructions in the repo to install and build.

Video:

LDSO

A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping

https://github.com/hku-mars/r2live

Install some dependencies:

    sudo apt-get install ros-melodic-cv-bridge ros-melodic-tf ros-melodic-message-filters ros-melodic-image-transport

Then install the Livox driver: https://github.com/Livox-SDK/livox_ros_driver

catkin_make may fail with the error

error: conflicting declaration ‘typedef struct LZ4_streamDecode_t LZ4_streamDecode_t’

Referring to https://github.com/ethz-asl/lidar_align/issues/16 fixes it.

roslaunch r2live demo.launch
rosbag play YOUR_DOWNLOADED.bag

The running result:

R2LIVE

OverlapNet - Loop Closing for 3D LiDAR-based SLAM

https://github.com/PRBonn/OverlapNet

High-speed Autonomous Drifting with Deep Reinforcement Learning

GitHub - caipeide/drift_drl: High-speed Autonomous Drifting with Deep Reinforcement Learning

https://sites.google.com/view/autonomous-drifting-with-drl

References

https://github.com/PRBonn/semantic_suma

An overview of SLAM research in dynamic scenes – Tencent Cloud community
https://towardsdatascience.com/monocular-dynamic-object-slam-in-autonomous-driving-f12249052bf1
