Udacity self-driving car localization project implementation:

Particle filter algorithm flowchart

Pseudocode of the particle filter:

Step 1: Initialization

In theory, with enough particles the filter accurately represents the Bayesian posterior distribution. With too few particles, the correct position may be missed; with too many, the filter becomes too slow to localize the vehicle in real time.

There are two ways to initialize the particles:

1. Sample uniformly over the state space; this is impractical when the space is large (e.g., the entire globe).

2. Sample around an initial estimate; for a self-driving car, GPS provides such an estimate.

Here we obtain a rough initial coordinate from GPS, then sample from Gaussian distributions around it to initialize the particles.

Code implementation:

#include <iostream>
#include <random>

using std::normal_distribution;

void printSamples(double gps_x, double gps_y, double theta) {
  std::default_random_engine gen;
  double std_x, std_y, std_theta;  // standard deviations for x, y, and theta

  // Set standard deviations for x, y, and theta
  std_x = 2;
  std_y = 2;
  std_theta = 0.5;

  // Create normal (Gaussian) distributions for x, y, and theta,
  // centered on the GPS estimate
  normal_distribution<double> dist_x(gps_x, std_x);
  normal_distribution<double> dist_y(gps_y, std_y);
  normal_distribution<double> dist_theta(theta, std_theta);

  for (int i = 0; i < 3; ++i) {
    double sample_x, sample_y, sample_theta;

    // Sample from these normal distributions, where "gen" is the
    // random engine initialized earlier
    sample_x = dist_x(gen);
    sample_y = dist_y(gen);
    sample_theta = dist_theta(gen);

    // Print the samples to the terminal
    std::cout << "Sample " << i + 1 << " " << sample_x << " "
              << sample_y << " " << sample_theta << std::endl;
  }
}
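
A minimal usage sketch; the GPS values below are illustrative, not from the project:

int main() {
  // Hypothetical GPS estimate: position in meters, heading in radians
  printSamples(4983.0, 5029.7, 1.201);
  return 0;
}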

Step 2: Prediction

Use the bicycle motion model to predict where the vehicle will be at the next time step.

For each particle, update its position from the velocity and yaw-rate measurements; to account for uncertainty in the control input, add Gaussian noise to the velocity and yaw rate.

Bicycle model equations:
$x_f = x_0 + \dfrac{v}{\dot{\theta}}\left[\sin(\theta_0 + \dot{\theta}\,dt) - \sin(\theta_0)\right]$

$y_f = y_0 + \dfrac{v}{\dot{\theta}}\left[\cos(\theta_0) - \cos(\theta_0 + \dot{\theta}\,dt)\right]$

$\theta_f = \theta_0 + \dot{\theta}\,dt$
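
The equations above divide by the yaw rate, so a near-zero yaw rate needs a straight-line special case. Below is a minimal sketch of the prediction step for one particle; the Particle struct and function signature are illustrative, not the project's exact API, and zero-mean Gaussian noise is added to the predicted pose, a common simplification of adding noise to the controls as described above:

#include <cmath>
#include <random>

// Illustrative particle type (not the project's exact definition)
struct Particle {
  double x, y, theta;
};

// Predict one particle's pose after dt seconds with the bicycle model
void predictParticle(Particle &p, double velocity, double yaw_rate,
                     double dt, const double std_pos[3],
                     std::default_random_engine &gen) {
  if (std::fabs(yaw_rate) > 1e-5) {
    p.x += velocity / yaw_rate *
           (std::sin(p.theta + yaw_rate * dt) - std::sin(p.theta));
    p.y += velocity / yaw_rate *
           (std::cos(p.theta) - std::cos(p.theta + yaw_rate * dt));
    p.theta += yaw_rate * dt;
  } else {
    // Straight-line motion when the yaw rate is (near) zero
    p.x += velocity * dt * std::cos(p.theta);
    p.y += velocity * dt * std::sin(p.theta);
  }
  // Zero-mean Gaussian noise models the control uncertainty
  std::normal_distribution<double> noise_x(0, std_pos[0]);
  std::normal_distribution<double> noise_y(0, std_pos[1]);
  std::normal_distribution<double> noise_theta(0, std_pos[2]);
  p.x += noise_x(gen);
  p.y += noise_y(gen);
  p.theta += noise_theta(gen);
}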

Step 3: Measurement Update

Data association

Before using the landmark measurements of surrounding objects to update the position belief, the data association problem must be solved, i.e., deciding which measurement corresponds to which landmark.

Data association means matching each landmark measurement to the real-world object it came from.

Nearest-neighbor method: treat the closest measurement as the correct correspondence (its pros and cons are compared in the figure); a sketch follows.
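
A minimal sketch of nearest-neighbor association, assuming an illustrative LandmarkObs type (not the project's exact one); each observation takes the id of the closest predicted landmark:

#include <cmath>
#include <limits>
#include <vector>

// Illustrative observation/landmark type
struct LandmarkObs {
  int id;
  double x, y;
};

// Assign to each observation the id of the nearest predicted landmark
void dataAssociation(const std::vector<LandmarkObs> &predicted,
                     std::vector<LandmarkObs> &observations) {
  for (auto &obs : observations) {
    double min_dist = std::numeric_limits<double>::max();
    for (const auto &pred : predicted) {
      double dx = obs.x - pred.x;
      double dy = obs.y - pred.y;
      // squared distance is sufficient for comparison
      double dist = dx * dx + dy * dy;
      if (dist < min_dist) {
        min_dist = dist;
        obs.id = pred.id;
      }
    }
  }
}

The double loop is O(mn) for m observations and n landmarks, one of the drawbacks of the naive nearest-neighbor method; it can also pick a wrong landmark when observations are noisy or landmarks are dense.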

Step 4: Accuracy Evaluation

Two approaches:

Method 1: take the weighted average error over all particles.

Method 2: take the particle with the highest weight and compute the root of its squared error against the ground truth.

Formulas:

$Position\_RMSE = \sqrt{(x_p - x_g)^2 + (y_p - y_g)^2}$

$Theta\_RMSE = \sqrt{(\theta_p - \theta_g)^2}$
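
A minimal sketch of the two error measures, assuming illustrative function and variable names (p denotes the best particle, g the ground truth):

#include <cmath>

// Position error between the best particle (xp, yp) and ground truth (xg, yg)
double position_rmse(double xp, double yp, double xg, double yg) {
  return std::sqrt((xp - xg) * (xp - xg) + (yp - yg) * (yp - yg));
}

// Heading error; the root of the squared difference is just the absolute
// difference (in practice, normalize the angle to [-pi, pi] first)
double theta_rmse(double theta_p, double theta_g) {
  return std::sqrt((theta_p - theta_g) * (theta_p - theta_g));
}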

Transformations and Associations

Sensor measurements are expressed in the vehicle coordinate system, with the car at the origin. To perform data association, they must first be transformed into the map coordinate system.

As shown in the figure, OBS1(2,2), OBS2(3,-2), and OBS3(0,-4) are three landmark observations from the car's sensors, and the blue point P(4,5) is the car's position in map coordinates.

Homogeneous Transformation

$\begin{bmatrix} x_m \\ y_m \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & x_p \\ \sin\theta & \cos\theta & y_p \\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix}$

where $(x_c, y_c)$ is the observation in car coordinates, $(x_p, y_p, \theta)$ is the particle's pose in map coordinates, and $(x_m, y_m)$ is the observation in map coordinates.

Matrix multiplication results in:

$x_m = x_p + (\cos\theta \times x_c) - (\sin\theta \times y_c)$

$y_m = y_p + (\sin\theta \times x_c) + (\cos\theta \times y_c)$

Applying these equations (with the particle heading θ = -π/2, as in the code below), the map coordinates of the three observations are:

OBS1(2,2): (6,3)

OBS2(3,-2): (2,2)

OBS3(0,-4): (0,5)

Code implementation:

#include <cmath>
#include <iostream>

int main() {
  // define coordinates and theta
  double x_part, y_part, x_obs, y_obs, theta;
  x_part = 4;
  y_part = 5;
  x_obs = 2;
  y_obs = 2;
  theta = -M_PI / 2;  // -90 degrees

  // transform to map x coordinate
  double x_map;
  x_map = x_part + (cos(theta) * x_obs) - (sin(theta) * y_obs);

  // transform to map y coordinate
  double y_map;
  y_map = y_part + (sin(theta) * x_obs) + (cos(theta) * y_obs);

  // expected output: (6, 3)
  std::cout << int(round(x_map)) << ", " << int(round(y_map)) << std::endl;

  return 0;
}

Association

Given the map coordinates of the three observations, OBS1(6,3), OBS2(2,2), and OBS3(0,5), the nearest landmarks they associate with are L1, L2, and L2, respectively.

Calculating the Particle’s Final Weight

For each particle, iterate over its matched landmarks and evaluate each one with the multivariate Gaussian density:

$P(x,y) = \dfrac{1}{2\pi\sigma_x\sigma_y}\, e^{-\left(\frac{(x-\mu_x)^2}{2\sigma_x^2} + \frac{(y-\mu_y)^2}{2\sigma_y^2}\right)}$

where $(x, y)$ is the observation in map coordinates, $(\mu_x, \mu_y)$ is the associated landmark's position, and $\sigma_x, \sigma_y$ are the landmark measurement uncertainties.

The product of these densities over all matched landmarks gives each particle's likelihood; the larger the probability, the larger the particle's weight.

Code implementation:

#include <iostream>
#include "multiv_gauss.h"

int main() {
  // define inputs
  double sig_x, sig_y, x_obs, y_obs, mu_x, mu_y;
  // define outputs for observations
  double weight1, weight2, weight3;
  // final weight
  double final_weight;

  // OBS1 values
  sig_x = 0.3;
  sig_y = 0.3;
  x_obs = 6;
  y_obs = 3;
  mu_x = 5;
  mu_y = 3;
  // Calculate OBS1 weight
  weight1 = multiv_prob(sig_x, sig_y, x_obs, y_obs, mu_x, mu_y);
  // should be around 0.00683644777551, i.e. 6.84E-3
  std::cout << "Weight1: " << weight1 << std::endl;

  // OBS2 values
  sig_x = 0.3;
  sig_y = 0.3;
  x_obs = 2;
  y_obs = 2;
  mu_x = 2;
  mu_y = 1;
  // Calculate OBS2 weight
  weight2 = multiv_prob(sig_x, sig_y, x_obs, y_obs, mu_x, mu_y);
  // should be around 0.00683644777551, i.e. 6.84E-3
  std::cout << "Weight2: " << weight2 << std::endl;

  // OBS3 values
  sig_x = 0.3;
  sig_y = 0.3;
  x_obs = 0;
  y_obs = 5;
  mu_x = 2;
  mu_y = 1;
  // Calculate OBS3 weight
  weight3 = multiv_prob(sig_x, sig_y, x_obs, y_obs, mu_x, mu_y);
  // should be around 9.83184874151e-49, i.e. 9.83E-49
  std::cout << "Weight3: " << weight3 << std::endl;

  // Output final weight: the product of the individual weights
  final_weight = weight1 * weight2 * weight3;
  // 4.60E-53
  std::cout << "Final weight: " << final_weight << std::endl;

  return 0;
}
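
The helper multiv_prob comes from multiv_gauss.h, which is not shown here; below is a minimal sketch implementing the bivariate Gaussian formula above, assuming the signature matches the header:

#include <cmath>

// Bivariate Gaussian density, as in the formula above
// (assumed to match the declaration in multiv_gauss.h)
double multiv_prob(double sig_x, double sig_y, double x_obs, double y_obs,
                   double mu_x, double mu_y) {
  // normalization term
  double gauss_norm = 1.0 / (2.0 * M_PI * sig_x * sig_y);

  // exponent
  double exponent = (pow(x_obs - mu_x, 2) / (2 * pow(sig_x, 2)))
                  + (pow(y_obs - mu_y, 2) / (2 * pow(sig_y, 2)));

  return gauss_norm * exp(-exponent);
}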

Kidnapped-Vehicle project

Project 6 of Udacity's self-driving car program implements particle filter localization; the code is here:

https://github.com/luteresa/P6-Kidnapped-Vehicle.git

Run results:

Additional Resources on Localization

Simultaneous Localization and Mapping (SLAM)

The papers below cover Simultaneous Localization and Mapping (SLAM), which, as the name suggests, combines localization and mapping into a single algorithm, without a map created beforehand.

Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age by C. Cadena et al.

Abstract: Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. […]

Navigating the Landscape for Real-time Localisation and Mapping for Robotics and Virtual and Augmented Reality by S. Saeedi et al.

Abstract: Visual understanding of 3D environments in real-time, at low power, is a huge computational challenge. Often referred to as SLAM (Simultaneous Localisation and Mapping), it is central to applications spanning domestic and industrial robotics, autonomous vehicles, virtual and augmented reality. This paper describes the results of a major research effort to assemble the algorithms, architectures, tools, and systems software needed to enable delivery of SLAM, by supporting applications specialists in selecting and configuring the appropriate algorithm and the appropriate hardware, and compilation pathway, to meet their performance, accuracy, and energy consumption goals. […]

Other Methods

The paper below, from Udacity founder Sebastian Thrun, dates from 2002 but is still relevant to many of the mapping methods used in robotics today.

Robotic Mapping: A Survey by S. Thrun

Abstract: This article provides a comprehensive introduction into the field of robotic mapping, with a focus on indoor mapping. It describes and compares various probabilistic techniques, as they are presently being applied to a vast array of mobile robot mapping problems. The history of robotic mapping is also described, along with an extensive list of open research problems.
