CARLA Study Notes (3)

Today I studied the source of lidar_to_camera.py from the examples. The first part was fine, but the later part gets into matrix-transformation theory, which I could not follow yet. Its purpose seems to be to output a sequence of images: it draws the lidar returns on top of the camera picture, roughly speaking.

Below is my reading of the source:

#!/usr/bin/env python

# Copyright (c) 2020 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

"""
Lidar projection on RGB camera example
"""

"""
It seems to mainly link the lidar data with the camera. This still feels a bit
deep for me; I'll dig into it later.
Looking more closely, it seems to display the lidar points inside the camera
image, like an overlay.
What I mainly learned here is how to set sensor attributes and how to attach
the sensors to the ego_vehicle.
"""

import glob
import os
import sys

try:
    sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass

import carla

import argparse
from queue import Queue
from queue import Empty

# The cm module: a colormap maps numeric values to colors.
# A value is represented by some color picked from a ramp running from one
# color to another (or from one color through white to another), so color
# itself becomes an extra dimension for visualizing the data.
from matplotlib import cm

try:
    import numpy as np
except ImportError:
    raise RuntimeError('cannot import numpy, make sure numpy package is installed')

try:
    from PIL import Image
    # Python Imaging Library: general image processing (crop, paste, scale,
    # mirror, watermark, filters, format and color-space conversion, rotation,
    # enhancement, histograms, interpolation, and so on).
    # Here it is mainly used to save the images.
except ImportError:
    raise RuntimeError('cannot import PIL, make sure "Pillow" package is installed')

# Looked this up: it seems to be a color ramp.
VIRIDIS = np.array(cm.get_cmap('viridis').colors)
# VIRIDIS.shape[0] evenly spaced values between 0 and 1
VID_RANGE = np.linspace(0.0, 1.0, VIRIDIS.shape[0])


def sensor_callback(data, queue):
    """
    This simple callback just stores the data on a thread safe Python Queue
    to be retrieved from the "main thread".
    """
    queue.put(data)


def tutorial(args):
    """
    This function is intended to be a tutorial on how to retrieve data in a
    synchronous way, and project 3D points from a lidar to a 2D camera.
    (Projecting 3D point-cloud data onto the 2D camera.)
    """
    # Connect to the server
    client = carla.Client(args.host, args.port)
    client.set_timeout(2.0)
    world = client.get_world()
    bp_lib = world.get_blueprint_library()

    traffic_manager = client.get_trafficmanager(8000)
    traffic_manager.set_synchronous_mode(True)

    original_settings = world.get_settings()
    settings = world.get_settings()
    settings.synchronous_mode = True
    settings.fixed_delta_seconds = 3.0
    world.apply_settings(settings)

    vehicle = None
    camera = None
    lidar = None

    try:
        # Search the desired blueprints
        # (pick the vehicle, the camera and the lidar we need)
        vehicle_bp = bp_lib.filter("vehicle.lincoln.mkz2017")[0]
        camera_bp = bp_lib.filter("sensor.camera.rgb")[0]
        lidar_bp = bp_lib.filter("sensor.lidar.ray_cast")[0]

        # Configure the blueprints
        # (camera parameters, mainly the image width and height)
        camera_bp.set_attribute("image_size_x", str(args.width))
        camera_bp.set_attribute("image_size_y", str(args.height))

        # Noise settings
        if args.no_noise:
            lidar_bp.set_attribute('dropoff_general_rate', '0.0')
            lidar_bp.set_attribute('dropoff_intensity_limit', '1.0')
            lidar_bp.set_attribute('dropoff_zero_intensity', '0.0')
        # Lidar parameters
        lidar_bp.set_attribute('upper_fov', str(args.upper_fov))
        lidar_bp.set_attribute('lower_fov', str(args.lower_fov))
        # number of channels
        lidar_bp.set_attribute('channels', str(args.channels))
        # range
        lidar_bp.set_attribute('range', str(args.range))
        # points per second
        lidar_bp.set_attribute('points_per_second', str(args.points_per_second))

        # Spawn the blueprints
        vehicle = world.spawn_actor(
            blueprint=vehicle_bp,
            transform=world.get_map().get_spawn_points()[0])
        vehicle.set_autopilot(True)
        camera = world.spawn_actor(
            blueprint=camera_bp,
            transform=carla.Transform(carla.Location(x=1.6, z=1.6)),
            attach_to=vehicle)
        lidar = world.spawn_actor(
            blueprint=lidar_bp,
            transform=carla.Transform(carla.Location(x=1.0, z=1.8)),
            attach_to=vehicle)

        # The projection matrix K
        # Build the K projection matrix:
        # K = [[Fx,  0, image_w/2],
        #      [ 0, Fy, image_h/2],
        #      [ 0,  0,         1]]
        image_w = camera_bp.get_attribute("image_size_x").as_int()
        image_h = camera_bp.get_attribute("image_size_y").as_int()
        fov = camera_bp.get_attribute("fov").as_float()
        focal = image_w / (2.0 * np.tan(fov * np.pi / 360.0))

        # In this case Fx and Fy are the same since the pixel aspect
        # ratio is 1
        K = np.identity(3)
        K[0, 0] = K[1, 1] = focal
        K[0, 2] = image_w / 2.0
        K[1, 2] = image_h / 2.0

        # The sensor data will be saved in thread-safe Queues
        image_queue = Queue()
        lidar_queue = Queue()

        camera.listen(lambda data: sensor_callback(data, image_queue))
        lidar.listen(lambda data: sensor_callback(data, lidar_queue))

        for frame in range(args.frames):
            world.tick()
            # a snapshot of the current simulation frame
            world_frame = world.get_snapshot().frame

            try:
                # Get the data once it's received.
                image_data = image_queue.get(True, 1.0)
                lidar_data = lidar_queue.get(True, 1.0)
            except Empty:
                print("[Warning] Some sensor data has been missed")
                continue

            assert image_data.frame == lidar_data.frame == world_frame
            # At this point, we have the synchronized information from the 2 sensors.
            # Print some progress data to the console.
            sys.stdout.write("\r(%d/%d) Simulation: %d Camera: %d Lidar: %d" %
                (frame, args.frames, world_frame, image_data.frame, lidar_data.frame) + ' ')
            sys.stdout.flush()

            # Get the raw BGRA buffer and convert it to an array of RGB of
            # shape (image_data.height, image_data.width, 3).
            im_array = np.copy(np.frombuffer(image_data.raw_data, dtype=np.dtype("uint8")))
            im_array = np.reshape(im_array, (image_data.height, image_data.width, 4))
            im_array = im_array[:, :, :3][:, :, ::-1]

            # Get the lidar data and convert it to a numpy array.
            p_cloud_size = len(lidar_data)
            p_cloud = np.copy(np.frombuffer(lidar_data.raw_data, dtype=np.dtype('f4')))
            p_cloud = np.reshape(p_cloud, (p_cloud_size, 4))

            # Lidar intensity array of shape (p_cloud_size,) but, for now, let's
            # focus on the 3D points.
            intensity = np.array(p_cloud[:, 3])

            # Point cloud in lidar sensor space array of shape (3, p_cloud_size).
            local_lidar_points = np.array(p_cloud[:, :3]).T

            # Add an extra 1.0 at the end of each 3d point so it becomes of
            # shape (4, p_cloud_size) and it can be multiplied by a (4, 4) matrix.
            local_lidar_points = np.r_[
                local_lidar_points, [np.ones(local_lidar_points.shape[1])]]

            # This (4, 4) matrix transforms the points from lidar space to world space.
            lidar_2_world = lidar.get_transform().get_matrix()

            # Transform the points from lidar space to world space.
            world_points = np.dot(lidar_2_world, local_lidar_points)

            # This (4, 4) matrix transforms the points from world to sensor coordinates.
            world_2_camera = np.array(camera.get_transform().get_inverse_matrix())

            # Transform the points from world space to camera space.
            sensor_points = np.dot(world_2_camera, world_points)

            # Now we must change from UE4's coordinate system to a "standard"
            # camera coordinate system (the same used by OpenCV):

            # ^ z                       . z
            # |                        /
            # |              to:      +-------> x
            # | . x                   |
            # |/                      |
            # +-------> y             v y

            # This can be achieved by multiplying by the following matrix:
            # [[ 0,  1,  0 ],
            #  [ 0,  0, -1 ],
            #  [ 1,  0,  0 ]]

            # Or, in this case, is the same as swapping:
            # (x, y ,z) -> (y, -z, x)
            point_in_camera_coords = np.array([
                sensor_points[1],
                sensor_points[2] * -1,
                sensor_points[0]])

            # Finally we can use our K matrix to do the actual 3D -> 2D.
            points_2d = np.dot(K, point_in_camera_coords)

            # Remember to normalize the x, y values by the 3rd value.
            points_2d = np.array([
                points_2d[0, :] / points_2d[2, :],
                points_2d[1, :] / points_2d[2, :],
                points_2d[2, :]])

            # At this point, points_2d[0, :] contains all the x and points_2d[1, :]
            # contains all the y values of our points. In order to properly
            # visualize everything on a screen, the points that are out of the screen
            # must be discarded, the same with points behind the camera projection plane.
            points_2d = points_2d.T
            intensity = intensity.T
            points_in_canvas_mask = \
                (points_2d[:, 0] > 0.0) & (points_2d[:, 0] < image_w) & \
                (points_2d[:, 1] > 0.0) & (points_2d[:, 1] < image_h) & \
                (points_2d[:, 2] > 0.0)
            points_2d = points_2d[points_in_canvas_mask]
            intensity = intensity[points_in_canvas_mask]

            # Extract the screen coords (uv) as integers.
            u_coord = points_2d[:, 0].astype(np.int)
            v_coord = points_2d[:, 1].astype(np.int)

            # Since at the time of the creation of this script, the intensity function
            # is returning high values, these are adjusted to be nicely visualized.
            intensity = 4 * intensity - 3
            color_map = np.array([
                np.interp(intensity, VID_RANGE, VIRIDIS[:, 0]) * 255.0,
                np.interp(intensity, VID_RANGE, VIRIDIS[:, 1]) * 255.0,
                np.interp(intensity, VID_RANGE, VIRIDIS[:, 2]) * 255.0]).astype(np.int).T

            if args.dot_extent <= 0:
                # Draw the 2d points on the image as a single pixel using numpy.
                im_array[v_coord, u_coord] = color_map
            else:
                # Draw the 2d points on the image as squares of extent args.dot_extent.
                for i in range(len(points_2d)):
                    # I'm not a NumPy expert and I don't know how to set bigger dots
                    # without using this loop, so if anyone has a better solution,
                    # make sure to update this script. Meanwhile, it's fast enough :)
                    im_array[
                        v_coord[i]-args.dot_extent : v_coord[i]+args.dot_extent,
                        u_coord[i]-args.dot_extent : u_coord[i]+args.dot_extent] = color_map[i]

            # Save the image using Pillow module.
            image = Image.fromarray(im_array)
            image.save("/media/hhh/75c0c2a9-f565-4a05-b2a5-5599a918e2f0/hhh/carlaLearning/PythonAPI/pictures/%08d.png" % image_data.frame)

    finally:
        # Apply the original settings when exiting.
        world.apply_settings(original_settings)

        # Destroy the actors in the scene.
        if camera:
            camera.destroy()
        if lidar:
            lidar.destroy()
        if vehicle:
            vehicle.destroy()


def main():
    """Start function"""
    argparser = argparse.ArgumentParser(
        description='CARLA Sensor sync and projection tutorial')
    argparser.add_argument(
        '--host',
        metavar='H',
        default='127.0.0.1',
        help='IP of the host server (default: 127.0.0.1)')
    argparser.add_argument(
        '-p', '--port',
        metavar='P',
        default=2000,
        type=int,
        help='TCP port to listen to (default: 2000)')
    argparser.add_argument(
        '--res',
        metavar='WIDTHxHEIGHT',
        default='680x420',
        help='window resolution (default: 1280x720)')
    argparser.add_argument(
        '-f', '--frames',
        metavar='N',
        default=500,
        type=int,
        help='number of frames to record (default: 500)')
    argparser.add_argument(
        '-d', '--dot-extent',
        metavar='SIZE',
        default=2,
        type=int,
        help='visualization dot extent in pixels (Recomended [1-4]) (default: 2)')
    argparser.add_argument(
        '--no-noise',
        action='store_true',
        help='remove the drop off and noise from the normal (non-semantic) lidar')
    argparser.add_argument(
        '--upper-fov',
        metavar='F',
        default=30.0,
        type=float,
        help='lidar\'s upper field of view in degrees (default: 15.0)')
    argparser.add_argument(
        '--lower-fov',
        metavar='F',
        default=-25.0,
        type=float,
        help='lidar\'s lower field of view in degrees (default: -25.0)')
    argparser.add_argument(
        '-c', '--channels',
        metavar='C',
        default=64.0,
        type=float,
        help='lidar\'s channel count (default: 64)')
    argparser.add_argument(
        '-r', '--range',
        metavar='R',
        default=100.0,
        type=float,
        help='lidar\'s maximum range in meters (default: 100.0)')
    argparser.add_argument(
        '--points-per-second',
        metavar='N',
        default='100000',
        type=int,
        help='lidar points per second (default: 100000)')
    args = argparser.parse_args()
    args.width, args.height = [int(x) for x in args.res.split('x')]
    args.dot_extent -= 1

    try:
        tutorial(args)
    except KeyboardInterrupt:
        print('\nCancelled by user. Bye!')


if __name__ == '__main__':
    main()
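The matrix part that confused me boils down to three steps: swap UE4 axes into the standard camera frame, multiply by K, then divide by the depth. Here is a minimal NumPy sketch of just that projection step, using made-up numbers (a 680x420 image with a 90-degree FOV, and one invented point 10 m ahead, 2 m to the right, 1 m up; these values are my own assumptions for illustration, not CARLA output):

```python
import numpy as np

# Intrinsics, built exactly as in the script (assumed 680x420 image, 90 deg FOV).
image_w, image_h, fov = 680, 420, 90.0
focal = image_w / (2.0 * np.tan(fov * np.pi / 360.0))  # 340.0 for a 90 deg FOV
K = np.identity(3)
K[0, 0] = K[1, 1] = focal
K[0, 2] = image_w / 2.0
K[1, 2] = image_h / 2.0

# One invented point in UE4 camera coordinates: x forward, y right, z up.
sensor_point = np.array([10.0, 2.0, 1.0])

# UE4 -> standard camera axes (OpenCV style): (x, y, z) -> (y, -z, x).
point_in_camera_coords = np.array([
    sensor_point[1],
    -sensor_point[2],
    sensor_point[0]])

# 3D -> 2D, then normalize by the depth (third) component.
p = K @ point_in_camera_coords
u, v, depth = p[0] / p[2], p[1] / p[2], p[2]
print(u, v, depth)  # close to 408.0, 176.0, 10.0
```

A point to the right of and above the camera lands right of the image centre (u = 408 > 340) and above it (v = 176 < 210), which matches the image convention that y grows downwards.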

I should gradually record what features I learn from each example's source code and keep that as a separate note.
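One pattern from this example worth writing down: each sensor's listen callback runs on a background thread and just drops the data into a thread-safe Queue, while the main loop blocks on queue.get with a timeout. A carla-free sketch of that pattern (FakeSensor is a made-up stand-in for a carla sensor actor, only here to show the threading behavior):

```python
import threading
from queue import Queue, Empty

def sensor_callback(data, queue):
    # Runs on the sensor's thread; just hand the data over to the main thread.
    queue.put(data)

class FakeSensor:
    """Hypothetical stand-in for a carla sensor: fires its callback from a worker thread."""
    def listen(self, callback):
        self._callback = callback
    def emit(self, data):
        threading.Thread(target=self._callback, args=(data,)).start()

image_queue = Queue()
camera = FakeSensor()
camera.listen(lambda data: sensor_callback(data, image_queue))

camera.emit("frame-0042")
try:
    # Block for up to 1 second, like image_queue.get(True, 1.0) in the script.
    image_data = image_queue.get(True, 1.0)
except Empty:
    image_data = None

print(image_data)  # -> frame-0042
```

The Queue does the synchronization: the main thread never touches sensor data until the callback thread has fully delivered it, and the timeout turns a missed frame into a warning instead of a deadlock.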

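The other piece I had to look up is how intensities become colors: np.interp is called three times, once per RGB channel, to look each intensity up in the viridis color table. The same trick with a tiny made-up black-to-white ramp instead of viridis, so it runs without matplotlib (the 5-row table is my own stand-in; the real script uses the 256-row viridis table):

```python
import numpy as np

# A tiny stand-in color table: 5 shades from black to white, shape (5, 3).
# (VIRIDIS in the real script is a (256, 3) table of RGB rows.)
RAMP = np.linspace(0.0, 1.0, 5)[:, None].repeat(3, axis=1)
RAMP_RANGE = np.linspace(0.0, 1.0, RAMP.shape[0])

# Three sample intensities in [0, 1].
intensity = np.array([0.0, 0.25, 1.0])

# One np.interp per channel, then scale to 0-255, exactly as in the script.
color_map = np.array([
    np.interp(intensity, RAMP_RANGE, RAMP[:, 0]) * 255.0,
    np.interp(intensity, RAMP_RANGE, RAMP[:, 1]) * 255.0,
    np.interp(intensity, RAMP_RANGE, RAMP[:, 2]) * 255.0]).astype(int).T

print(color_map)  # rows: black, dark grey, white
```

Each row of color_map is the RGB triple for one lidar point, which is what gets written into im_array at that point's (v, u) pixel.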