Table of Contents

  • SFND 3D Object Tracking
    • Dependencies for Running Locally
    • Basic Build Instructions
  • Rubric Points
    • FP.1 : Match 3D Objects
    • FP.2 : Compute Lidar-based TTC
    • FP.3 : Associate Keypoint Correspondences with Bounding Boxes
    • FP.4 : Compute Camera-based TTC
    • FP.5 : Performance Evaluation 1
    • FP.6 : Performance Evaluation 2

SFND 3D Object Tracking

GitHub:
https://github.com/bjtylxl/FND_P4_3D_Object_Tracking

Welcome to the final project of the camera course. By completing all the lessons, you now have a solid understanding of keypoint detectors, descriptors, and methods to match them between successive images. Also, you know how to detect objects in an image using the YOLO deep-learning framework. And finally, you know how to associate regions in a camera image with Lidar points in 3D space. Let’s take a look at our program schematic to see what we already have accomplished and what’s still missing.

In this final project, you will implement the missing parts in the schematic. To do this, you will complete four major tasks:

  1. First, you will develop a way to match 3D objects over time by using keypoint correspondences.
  2. Second, you will compute the TTC based on Lidar measurements.
  3. You will then proceed to do the same using the camera, which requires you to first associate keypoint matches with regions of interest and then compute the TTC based on those matches.
  4. And lastly, you will conduct various tests with the framework. Your goal is to identify the most suitable detector/descriptor combination for TTC estimation and also to search for problems that can lead to faulty measurements by the camera or Lidar sensor. In the last course of this Nanodegree, you will learn about the Kalman filter, which is a great way to combine the two independent TTC measurements into an improved version which is much more reliable than a single sensor alone can be. But before we think about such things, let us focus on your final project in the camera course.

Dependencies for Running Locally

  • cmake >= 2.8

    • All OSes: click here for installation instructions
  • make >= 4.1 (Linux, Mac), 3.81 (Windows)
    • Linux: make is installed by default on most Linux distros
    • Mac: install Xcode command line tools to get make
    • Windows: Click here for installation instructions
  • OpenCV >= 4.1
    • This must be compiled from source using the -D OPENCV_ENABLE_NONFREE=ON cmake flag for testing the SIFT and SURF detectors.
    • The OpenCV 4.1.0 source code can be found here
  • gcc/g++ >= 5.4
    • Linux: gcc / g++ is installed by default on most Linux distros
    • Mac: same deal as make - install Xcode command line tools
    • Windows: recommend using MinGW

Basic Build Instructions

  1. Clone this repo.
  2. Make a build directory in the top level project directory: mkdir build && cd build
  3. Compile: cmake .. && make
  4. Run it: ./3D_object_tracking.

Rubric Points

FP.1 : Match 3D Objects

In this task, please implement the method “matchBoundingBoxes”, which takes as input both the previous and the current data frames and provides as output the IDs of the matched regions of interest; the best matches are the box pairs with the highest number of keypoint correspondences.

void matchBoundingBoxes(std::vector<cv::DMatch> &matches, std::map<int, int> &bbBestMatches, DataFrame &prevFrame, DataFrame &currFrame)
{
    int p = prevFrame.boundingBoxes.size();
    int c = currFrame.boundingBoxes.size();

    // count keypoint correspondences for every (previous box, current box) pair
    std::vector<std::vector<int>> pt_counts(p, std::vector<int>(c, 0));

    for (auto it = matches.begin(); it != matches.end(); ++it)
    {
        // keypoint in the previous frame (query) and in the current frame (train)
        cv::KeyPoint query = prevFrame.keypoints[it->queryIdx];
        auto query_pt = cv::Point(query.pt.x, query.pt.y);
        bool query_found = false;

        cv::KeyPoint train = currFrame.keypoints[it->trainIdx];
        auto train_pt = cv::Point(train.pt.x, train.pt.y);
        bool train_found = false;

        // find all bounding boxes that enclose the two keypoints
        std::vector<int> query_id, train_id;
        for (int i = 0; i < p; i++)
        {
            if (prevFrame.boundingBoxes[i].roi.contains(query_pt))
            {
                query_found = true;
                query_id.push_back(i);
            }
        }
        for (int i = 0; i < c; i++)
        {
            if (currFrame.boundingBoxes[i].roi.contains(train_pt))
            {
                train_found = true;
                train_id.push_back(i);
            }
        }

        // increment the counter for every enclosing box pair
        if (query_found && train_found)
        {
            for (auto id_prev : query_id)
                for (auto id_curr : train_id)
                    pt_counts[id_prev][id_curr] += 1;
        }
    }

    // for each box in the previous frame, pick the current box with the most correspondences
    for (int i = 0; i < p; i++)
    {
        int max_count = 0;
        int id_max = 0;
        for (int j = 0; j < c; j++)
        {
            if (pt_counts[i][j] > max_count)
            {
                max_count = pt_counts[i][j];
                id_max = j;
            }
        }
        bbBestMatches[i] = id_max;
    }

    bool bMsg = true;
    if (bMsg)
        for (int i = 0; i < p; i++)
            cout << "Box " << i << " matches " << bbBestMatches[i] << " box" << endl;
}
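A minimal usage sketch of how the result could be consumed (the variable names matches, prevFrame, and currFrame are assumptions for illustration, not taken from the project's main function):

// hypothetical call site: after descriptor matching between two consecutive frames
std::map<int, int> bbBestMatches;
matchBoundingBoxes(matches, bbBestMatches, prevFrame, currFrame);

// each entry maps a box ID in the previous frame to the current-frame box
// with the highest number of keypoint correspondences
for (const auto &bb : bbBestMatches)
{
    std::cout << "prev box " << bb.first << " -> curr box " << bb.second << std::endl;
}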

FP.2 : Compute Lidar-based TTC

In this part of the final project, your task is to compute the time-to-collision for all matched 3D objects based on Lidar measurements alone. Please take a look at the second lesson of this course to revisit the theory behind TTC estimation. Also, please implement the estimation in a way that makes it robust against outliers which might be way too close and thus lead to faulty estimates of the TTC. Please return your TTC to the main function at the end of computeTTCLidar.

void computeTTCLidar(std::vector<LidarPoint> &lidarPointsPrev, std::vector<LidarPoint> &lidarPointsCurr, double frameRate, double &TTC)
{
    double dT = 1 / frameRate;  // time between two measurements in seconds
    double laneWidth = 4.0;     // assumed width of the ego lane

    // find Lidar points within the ego lane
    vector<double> xPrev, xCurr;
    for (auto it = lidarPointsPrev.begin(); it != lidarPointsPrev.end(); ++it)
    {
        if (abs(it->y) <= laneWidth / 2.0)
        { // 3D point within ego lane?
            xPrev.push_back(it->x);
        }
    }
    for (auto it = lidarPointsCurr.begin(); it != lidarPointsCurr.end(); ++it)
    {
        if (abs(it->y) <= laneWidth / 2.0)
        { // 3D point within ego lane?
            xCurr.push_back(it->x);
        }
    }

    // take the mean x-distance of all lane points instead of the closest point
    // to be robust against single outliers
    double minXPrev = 0;
    double minXCurr = 0;
    if (xPrev.size() > 0)
    {
        for (auto x : xPrev)
            minXPrev += x;
        minXPrev = minXPrev / xPrev.size();
    }
    if (xCurr.size() > 0)
    {
        for (auto x : xCurr)
            minXCurr += x;
        minXCurr = minXCurr / xCurr.size();
    }

    // compute TTC from both measurements (constant-velocity model)
    cout << "minXCurr: " << minXCurr << endl;
    cout << "minXPrev: " << minXPrev << endl;
    TTC = minXCurr * dT / (minXPrev - minXCurr);
}
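For reference, the final line follows from the constant-velocity model used in this project. Writing d0 for the averaged distance in the previous frame (minXPrev), d1 for the current frame (minXCurr), and dT = 1 / frameRate for the time between frames:

v = (d0 - d1) / dT,    TTC = d1 / v = d1 * dT / (d0 - d1)

which is exactly the expression computed above; using the mean x-distance rather than the single closest point makes the estimate less sensitive to outlier returns.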

FP.3 : Associate Keypoint Correspondences with Bounding Boxes

Before a TTC estimate can be computed in the next exercise, you need to find all keypoint matches that belong to each 3D object. You can do this by simply checking whether the corresponding keypoints are within the region of interest in the camera image. All matches which satisfy this condition should be added to a vector. The problem you will find is that there will be outliers among your matches. To eliminate those, I recommend that you compute a robust mean of all the Euclidean distances between keypoint matches and then remove those that are too far away from the mean.

void clusterKptMatchesWithROI(BoundingBox &boundingBox, std::vector<cv::KeyPoint> &kptsPrev, std::vector<cv::KeyPoint> &kptsCurr, std::vector<cv::DMatch> &kptMatches)
{
    // collect all matches whose current keypoint lies inside the bounding box ROI
    double dist_mean = 0;
    std::vector<cv::DMatch> kptMatches_roi;
    for (auto it = kptMatches.begin(); it != kptMatches.end(); ++it)
    {
        cv::KeyPoint kp = kptsCurr.at(it->trainIdx);
        if (boundingBox.roi.contains(cv::Point(kp.pt.x, kp.pt.y)))
            kptMatches_roi.push_back(*it);
    }

    // compute the mean match distance of all matches inside the ROI
    for (auto it = kptMatches_roi.begin(); it != kptMatches_roi.end(); ++it)
        dist_mean += it->distance;
    cout << "Find " << kptMatches_roi.size() << " matches" << endl;
    if (kptMatches_roi.size() > 0)
        dist_mean = dist_mean / kptMatches_roi.size();
    else
        return;

    // keep only matches whose distance is clearly below the mean to remove outliers
    double threshold = dist_mean * 0.7;
    for (auto it = kptMatches_roi.begin(); it != kptMatches_roi.end(); ++it)
    {
        if (it->distance < threshold)
            boundingBox.kptMatches.push_back(*it);
    }
    cout << "Leave " << boundingBox.kptMatches.size() << " matches" << endl;
}
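The version above filters on the descriptor match distance (DMatch::distance). A minimal sketch of the variant described in the task text, filtering on the Euclidean distance between the matched keypoint positions, might look like the following (the function name and the threshold factor of 1.5 are assumptions for illustration, not values tuned in this repository):

#include <numeric> // std::accumulate

void clusterKptMatchesWithROIEuclidean(BoundingBox &boundingBox, std::vector<cv::KeyPoint> &kptsPrev, std::vector<cv::KeyPoint> &kptsCurr, std::vector<cv::DMatch> &kptMatches)
{
    // collect matches inside the ROI together with their keypoint displacement
    std::vector<cv::DMatch> matchesRoi;
    std::vector<double> euclDist;
    for (const auto &m : kptMatches)
    {
        const cv::KeyPoint &kpCurr = kptsCurr.at(m.trainIdx);
        const cv::KeyPoint &kpPrev = kptsPrev.at(m.queryIdx);
        if (boundingBox.roi.contains(cv::Point(kpCurr.pt.x, kpCurr.pt.y)))
        {
            matchesRoi.push_back(m);
            euclDist.push_back(cv::norm(kpCurr.pt - kpPrev.pt));
        }
    }
    if (euclDist.empty())
        return;

    // mean displacement of all ROI matches; drop matches that deviate strongly from it
    double meanDist = std::accumulate(euclDist.begin(), euclDist.end(), 0.0) / euclDist.size();
    for (size_t i = 0; i < matchesRoi.size(); ++i)
    {
        if (euclDist[i] < 1.5 * meanDist)
            boundingBox.kptMatches.push_back(matchesRoi[i]);
    }
}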

FP.4 : Compute Camera-based TTC

Once keypoint matches have been added to the bounding boxes, the next step is to compute the TTC estimate. As with Lidar, we already looked into this in the second lesson of this course, so please revisit the respective section and use the code sample there as a starting point for this task. Once you have your estimate of the TTC, please return it to the main function at the end of computeTTCCamera.

void computeTTCCamera(std::vector<cv::KeyPoint> &kptsPrev, std::vector<cv::KeyPoint> &kptsCurr, std::vector<cv::DMatch> kptMatches, double frameRate, double &TTC, cv::Mat *visImg)
{
    // stores the distance ratios for all keypoints between curr. and prev. frame
    vector<double> distRatios;
    for (auto it1 = kptMatches.begin(); it1 != kptMatches.end() - 1; ++it1)
    {
        cv::KeyPoint kpOuterCurr = kptsCurr.at(it1->trainIdx);
        cv::KeyPoint kpOuterPrev = kptsPrev.at(it1->queryIdx);

        for (auto it2 = kptMatches.begin() + 1; it2 != kptMatches.end(); ++it2)
        {
            double minDist = 100.0; // min. required distance
            cv::KeyPoint kpInnerCurr = kptsCurr.at(it2->trainIdx);
            cv::KeyPoint kpInnerPrev = kptsPrev.at(it2->queryIdx);

            // compute distances and distance ratios
            double distCurr = cv::norm(kpOuterCurr.pt - kpInnerCurr.pt);
            double distPrev = cv::norm(kpOuterPrev.pt - kpInnerPrev.pt);
            if (distPrev > std::numeric_limits<double>::epsilon() && distCurr >= minDist)
            { // avoid division by zero
                double distRatio = distCurr / distPrev;
                distRatios.push_back(distRatio);
            }
        }
    }

    // only continue if list of distance ratios is not empty
    if (distRatios.size() == 0)
    {
        TTC = NAN;
        return;
    }

    // compute median dist. ratio to remove outlier influence
    std::sort(distRatios.begin(), distRatios.end());
    long medIndex = floor(distRatios.size() / 2.0);
    double medDistRatio = distRatios.size() % 2 == 0 ? (distRatios[medIndex - 1] + distRatios[medIndex]) / 2.0 : distRatios[medIndex];

    double dT = 1 / frameRate;
    TTC = -dT / (1 - medDistRatio);
}
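Here the same constant-velocity model is applied to image measurements: under a pinhole projection, the distance between two keypoints on the preceding vehicle scales inversely with the distance d to the vehicle, so the median distance ratio satisfies medDistRatio = d0 / d1. Substituting d1 = d0 - v * dT gives

TTC = d1 / v = dT / (medDistRatio - 1) = -dT / (1 - medDistRatio)

which is the value returned above. Note that the estimate diverges as medDistRatio approaches 1, i.e. when the apparent scale change between the two frames is very small.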

FP.5 : Performance Evaluation 1

This exercise is about conducting tests with the final project code, especially with regard to the Lidar part. Look for several examples where you have the impression that the Lidar-based TTC estimate is way off. Once you have found those, describe your observations and provide a sound argument for why you think this happened.

The Lidar-based TTC is sometimes off because of outliers and unstable points reflected from the preceding vehicle’s front mirrors; these points need to be filtered out. Here we use a larger shrinkFactor of 0.1 to keep only more reliable and stable Lidar points inside the bounding box, which leads to more accurate results.
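For context, the shrink factor shrinks each bounding box before Lidar points are associated with it, so that returns near the box edges (such as those from the mirrors) are discarded. A rough sketch of the idea (helper name and usage are illustrative, not copied from the project code):

#include <opencv2/core.hpp>

// shrink an ROI towards its center by the given factor (0.1 keeps the central
// 90 % of the box in each dimension), so edge points are no longer assigned to the box
cv::Rect shrinkRoi(const cv::Rect &roi, float shrinkFactor)
{
    cv::Rect smaller;
    smaller.x = roi.x + shrinkFactor * roi.width / 2.0;
    smaller.y = roi.y + shrinkFactor * roi.height / 2.0;
    smaller.width = roi.width * (1 - shrinkFactor);
    smaller.height = roi.height * (1 - shrinkFactor);
    return smaller;
}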

FP.6 : Performance Evaluation 2

This last exercise is about running the different detector / descriptor combinations and looking at the differences in TTC estimation. Find out which methods perform best and also include several examples where camera-based TTC estimation is way off. As with Lidar, describe your observations again and also look into potential reasons. This is the last task in the final project.

The task is complete once all detector / descriptor combinations implemented in previous chapters have been compared with regard to the TTC estimate on a frame-by-frame basis. To facilitate the comparison, a spreadsheet and graph should be used to represent the different TTCs.

A robust clusterKptMatchesWithROI yields a stable camera-based TTC. When the result becomes unstable, it is most likely caused by poor keypoint matches.

The results are listed in FP_6_Performance_Evaluation_2.csv.

The top three detector/descriptor combinations for detecting keypoints on vehicles are:

  • SHITOMASI/FREAK
  • AKAZE/BRISK
  • AKAZE/BRIEF
