CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution

Paper: https://arxiv.org/pdf/2105.05003.pdf

Code: https://github.com/aliyun/conditional-lane-detection

Paper walkthrough:

1. Abstract

For a lane detection task, the novel part of this work is the detection head. Instead of conventional bbox-based object detection, it detects keypoints and builds a mask for each lane, so the output format resembles instance segmentation.

2. Network structure

  • The backbone is a standard CNN such as ResNet;
  • The neck is a TransformerFPN: since lanes are long and need global context, a Transformer self-attention step is applied to the backbone's output feature before the standard FPN builds its pyramid;
  • The head has two parts:
    • The proposal head detects lane instances and generates dynamic convolution-kernel parameters for each instance;
    • The conditional shape head combines those dynamic kernel parameters with conditional convolution to predict each lane's point set; the points are then connected to form the final lanes (a minimal sketch of this mechanism follows below).
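
To make the conditional-convolution mechanism concrete, here is a minimal PyTorch sketch of the idea (my own illustration, not the repo's code): the proposal head predicts one parameter vector per detected instance, and those vectors are reshaped into instance-specific 1x1 kernels that are applied to a shared mask feature map. The function name dynamic_mask and all shapes here are hypothetical.

    import torch
    import torch.nn.functional as F

    def dynamic_mask(mask_feat, params, in_ch, out_ch):
        """Apply instance-specific 1x1 conv kernels (conditional convolution sketch).

        mask_feat: (1, in_ch, H, W) shared feature from the mask branch
        params:    (num_ins, in_ch*out_ch + out_ch) per-instance kernel + bias
        """
        weights = params[:, :in_ch * out_ch].reshape(-1, out_ch, in_ch, 1, 1)
        biases = params[:, in_ch * out_ch:].reshape(-1, out_ch)
        masks = []
        for w, b in zip(weights, biases):
            # every instance runs its own kernel over the same shared feature map
            masks.append(F.conv2d(mask_feat, w, b))
        return torch.cat(masks, dim=0)  # (num_ins, out_ch, H, W)

    # toy usage: 3 lane instances on a 64-channel mask feature
    feat = torch.randn(1, 64, 40, 100)
    params = torch.randn(3, 64 * 1 + 1)  # one output channel per instance
    print(dynamic_mask(feat, params, 64, 1).shape)  # torch.Size([3, 1, 40, 100])

In the head code later in this post, self.mask_head and self.reg_head play this role.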

Code walkthrough:

The code is developed on the mmdetection framework (v2.0.0). Under configs/condlanenet/ there are three folders, one for each of the author's configurations on the CurveLanes, CULane, and TuSimple datasets. The biggest difference between them is the RIM module designed for CurveLanes. Below I focus on the modules they all share:

backbone

A ResNet is used; depending on the desired model size, anything from ResNet-18 up to ResNet-101 may be chosen.

neck

TransConvFPN is used here, found in mmdet/models/necks/trans_fpn.py.

The main difference from a plain FPN is the extra transformer step. The motivation is that lanes are long and thin, so a non-local structure such as self-attention is needed.

In other words, a transformer module is inserted between the ResNet and the FPN.

    ## TransConvFPN (unimportant code omitted)
    def forward(self, src):
        assert len(src) >= len(self.in_channels)
        src = list(src)
        if self.attention:
            trans_feat = self.trans_head(src[self.trans_idx])
        else:
            trans_feat = src[self.trans_idx]
        inputs = src[:-1]
        inputs.append(trans_feat)
        if len(inputs) > len(self.in_channels):
            for _ in range(len(inputs) - len(self.in_channels)):
                del inputs[0]
        ## everything below is the same as FPN
        # build laterals
        laterals = [
            lateral_conv(inputs[i + self.start_level])
            for i, lateral_conv in enumerate(self.lateral_convs)
        ]
        ## omitted
    ## in TransConvFPN's __init__
    if self.attention:
        self.trans_head = TransConvEncoderModule(**trans_cfg)

    class TransConvEncoderModule(nn.Module):
        def __init__(self, in_dim, attn_in_dims, attn_out_dims, strides, ratios,
                     downscale=True, pos_shape=None):
            super(TransConvEncoderModule, self).__init__()
            if downscale:
                stride = 2
            else:
                stride = 1
            # self.first_conv = ConvModule(in_dim, 2*in_dim, kernel_size=3, stride=stride, padding=1)
            # self.final_conv = ConvModule(attn_out_dims[-1], attn_out_dims[-1], kernel_size=3, stride=1, padding=1)
            attn_layers = []
            for dim1, dim2, stride, ratio in zip(attn_in_dims, attn_out_dims, strides, ratios):
                attn_layers.append(AttentionLayer(dim1, dim2, ratio, stride))
            if pos_shape is not None:
                self.attn_layers = nn.ModuleList(attn_layers)
            else:
                self.attn_layers = nn.Sequential(*attn_layers)
            self.pos_shape = pos_shape
            self.pos_embeds = []
            if pos_shape is not None:
                for dim in attn_out_dims:
                    pos_embed = build_position_encoding(dim, pos_shape).cuda()
                    self.pos_embeds.append(pos_embed)

        def forward(self, src):
            # src = self.first_conv(src)
            if self.pos_shape is None:
                src = self.attn_layers(src)
            else:
                for layer, pos in zip(self.attn_layers, self.pos_embeds):
                    src = layer(src, pos.to(src.device))
            # src = self.final_conv(src)
            return src

    class AttentionLayer(nn.Module):
        """Position attention module"""

        def __init__(self, in_dim, out_dim, ratio=4, stride=1):
            super(AttentionLayer, self).__init__()
            self.chanel_in = in_dim
            norm_cfg = dict(type='BN', requires_grad=True)
            act_cfg = dict(type='ReLU')
            self.pre_conv = ConvModule(
                in_dim, out_dim, kernel_size=3, stride=stride, padding=1,
                norm_cfg=norm_cfg, act_cfg=act_cfg, inplace=False)
            self.query_conv = nn.Conv2d(in_channels=out_dim, out_channels=out_dim // ratio, kernel_size=1)
            self.key_conv = nn.Conv2d(in_channels=out_dim, out_channels=out_dim // ratio, kernel_size=1)
            self.value_conv = nn.Conv2d(in_channels=out_dim, out_channels=out_dim, kernel_size=1)
            self.final_conv = ConvModule(
                out_dim, out_dim, kernel_size=3, padding=1,
                norm_cfg=norm_cfg, act_cfg=act_cfg)
            self.softmax = nn.Softmax(dim=-1)
            self.gamma = nn.Parameter(torch.zeros(1))

        def forward(self, x, pos=None):
            """
            inputs :
                x : input feature maps (B x C x H x W)
            returns :
                out : attention value + input feature
                attention : B x (HxW) x (HxW)
            """
            x = self.pre_conv(x)
            m_batchsize, _, height, width = x.size()
            if pos is not None:
                x += pos
            proj_query = self.query_conv(x).view(m_batchsize, -1, width * height).permute(0, 2, 1)
            proj_key = self.key_conv(x).view(m_batchsize, -1, width * height)
            energy = torch.bmm(proj_query, proj_key)
            attention = self.softmax(energy)
            attention = attention.permute(0, 2, 1)
            proj_value = self.value_conv(x).view(m_batchsize, -1, width * height)
            out = torch.bmm(proj_value, attention)
            out = out.view(m_batchsize, -1, height, width)
            proj_value = proj_value.view(m_batchsize, -1, height, width)
            out_feat = self.gamma * out + x
            out_feat = self.final_conv(out_feat)
            return out_feat
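
For reference, TransConvEncoderModule is built from the neck's trans_cfg. The instantiation below is only illustrative; the numbers are placeholders, not the repo's actual values (those live in the dataset configs):

    # Illustrative instantiation; the values are placeholders, not the repo's
    trans_cfg = dict(
        in_dim=512,               # channels of the backbone level fed in
        attn_in_dims=[512, 64],   # input dims of the stacked AttentionLayers
        attn_out_dims=[64, 64],   # output dims of each layer
        strides=[1, 1],
        ratios=[4, 4],            # query/key channel reduction inside each layer
        pos_shape=(1, 10, 25),    # (batch, H, W) grid for the position embeddings
    )
    trans_head = TransConvEncoderModule(**trans_cfg)

Note that each AttentionLayer forms a full (HxW) x (HxW) affinity matrix, which is presumably why the transformer is applied only at the coarse level selected by trans_idx.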

head

CondLaneHead is used, found in mmdet/models/dense_heads/condlanenet_head.py.

This deserves close analysis, since it differs greatly from a typical detection task.

First, the CondLaneHead class's forward method simply calls forward_test, so we have to look at the detector model to see which head functions are actually invoked on the neck's output:

    # mmdet/models/detectors/condlanenet.py
    def forward(self, img, img_metas=None, return_loss=True, **kwargs):
        ...
        if img_metas is None:
            return self.test_inference(img)
        elif return_loss:
            return self.forward_train(img, img_metas, **kwargs)
        else:
            return self.forward_test(img, img_metas, **kwargs)

    def forward_train(self, img, img_metas, **kwargs):
        ...
        if self.head:
            outputs = self.bbox_head.forward_train(output, poses, num_ins)
        ...

    def forward_test(self, img, img_metas, benchmark=False, hack_seeds=None, **kwargs):
        ...
        if self.head:
            seeds, hm = self.bbox_head.forward_test(output, hack_seeds, kwargs['thr'])
        ...

So the head's own forward is never actually used; go straight to the head's forward_train and forward_test.

forward_train

    # mmdet/models/dense_heads/condlanenet_head.py
    def forward_train(self, inputs, pos, num_ins):
        # x_list is the multi-level feature map list output by backbone + neck
        x_list = list(inputs)
        # hm_idx selects which pyramid level is used to generate the heatmap;
        # mask_idx does the same for the mask branch
        f_hm = x_list[self.hm_idx]
        f_mask = x_list[self.mask_idx]
        m_batchsize = f_hm.size()[0]

        # f_mask
        z = self.ctnet_head(f_hm)
        hm, params = z['hm'], z['params']
        h_hm, w_hm = hm.size()[2:]
        h_mask, w_mask = f_mask.size()[2:]
        params = params.view(m_batchsize, self.num_classes, -1, h_hm, w_hm)
        mask_branch = self.mask_branch(f_mask)
        reg_branch = mask_branch
        # reg_branch = self.reg_branch(f_mask)
        params = params.permute(0, 1, 3, 4, 2).contiguous().view(-1, self.num_gen_params)

        pos_tensor = torch.from_numpy(np.array(pos)).long().to(params.device).unsqueeze(1)
        pos_tensor = pos_tensor.expand(-1, self.num_gen_params)
        mask_pos_tensor = pos_tensor[:, :self.num_mask_params]
        reg_pos_tensor = pos_tensor[:, self.num_mask_params:]

        if pos_tensor.size()[0] == 0:
            masks = None
            feat_range = None
        else:
            mask_params = params[:, :self.num_mask_params].gather(0, mask_pos_tensor)
            masks = self.mask_head(mask_branch, mask_params, num_ins)
            if self.regression:
                reg_params = params[:, self.num_mask_params:].gather(0, reg_pos_tensor)
                regs = self.reg_head(reg_branch, reg_params, num_ins)
            else:
                regs = masks
            # regs = regs.view(sum(num_ins), 1, h_mask, w_mask)
            feat_range = masks.permute(0, 1, 3, 2).view(sum(num_ins), w_mask, h_mask)
            feat_range = self.mlp(feat_range)
        return hm, regs, masks, feat_range, [mask_branch, reg_branch]
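
The least obvious step above is the gather: params has been flattened to shape (batch * num_classes * H * W, num_gen_params), and each entry of pos is a flat index of the form label * H * W + row * W + col (exactly what parse_pos computes in forward_test below), picking out the parameter vector generated at that instance's heatmap location. A toy reproduction of the indexing pattern, with made-up shapes:

    import torch

    # toy shapes: batch=1, num_classes=1, 4x5 heatmap, 8 params per location
    h, w, num_gen_params = 4, 5, 8
    params = torch.arange(h * w * num_gen_params, dtype=torch.float32)
    params = params.view(-1, num_gen_params)  # (h*w, num_gen_params)

    # two detected instances at (row, col) = (1, 2) and (3, 0), class label 0
    coords = [(1, 2), (3, 0)]
    pos = torch.tensor([0 * h * w + r * w + c for r, c in coords]).unsqueeze(1)
    pos = pos.expand(-1, num_gen_params)  # the same flat index repeated per column

    inst_params = params.gather(0, pos)  # (2, num_gen_params)
    print(inst_params[:, 0])  # tensor([ 56., 120.]) -> rows 7 and 15 of the flat map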

forward_test

    # mmdet/models/dense_heads/condlanenet_head.py
    def forward_test(self, inputs, hack_seeds=None, hm_thr=0.3):

        def parse_pos(seeds, batchsize, num_classes, h, w, device):
            pos_list = [[p['coord'], p['id_class'] - 1] for p in seeds]
            poses = []
            for p in pos_list:
                [c, r], label = p
                pos = label * h * w + r * w + c
                poses.append(pos)
            poses = torch.from_numpy(np.array(poses, np.long)).long().to(device).unsqueeze(1)
            return poses

        # with Timer("Elapsed time in stage1: %f"):  # ignore
        x_list = list(inputs)
        f_hm = x_list[self.hm_idx]
        f_mask = x_list[self.mask_idx]
        m_batchsize = f_hm.size()[0]
        f_deep = f_mask
        m_batchsize = f_deep.size()[0]
        # with Timer("Elapsed time in ctnet_head: %f"):  # 0.3ms
        z = self.ctnet_head(f_hm)
        h_hm, w_hm = f_hm.size()[2:]
        h_mask, w_mask = f_mask.size()[2:]
        hm, params = z['hm'], z['params']
        hm = torch.clamp(hm.sigmoid(), min=1e-4, max=1 - 1e-4)
        params = params.view(m_batchsize, self.num_classes, -1, h_hm, w_hm)
        # with Timer("Elapsed time in two branch: %f"):  # 0.6ms
        mask_branch = self.mask_branch(f_mask)
        reg_branch = mask_branch
        # reg_branch = self.reg_branch(f_mask)
        params = params.permute(0, 1, 3, 4, 2).contiguous().view(-1, self.num_gen_params)
        batch_size, num_classes, h, w = hm.size()
        # with Timer("Elapsed time in ct decode: %f"):  # 0.2ms
        seeds = self.ctdet_decode(hm, thr=hm_thr)
        if hack_seeds is not None:
            seeds = hack_seeds
        # with Timer("Elapsed time in stage2: %f"):  # 0.08ms
        pos_tensor = parse_pos(seeds, batch_size, num_classes, h, w, hm.device)
        pos_tensor = pos_tensor.expand(-1, self.num_gen_params)
        num_ins = [pos_tensor.size()[0]]
        mask_pos_tensor = pos_tensor[:, :self.num_mask_params]
        if self.regression:
            reg_pos_tensor = pos_tensor[:, self.num_mask_params:]
        # with Timer("Elapsed time in stage3: %f"):  # 0.8ms
        if pos_tensor.size()[0] == 0:
            return [], hm
        else:
            mask_params = params[:, :self.num_mask_params].gather(0, mask_pos_tensor)
            # with Timer("Elapsed time in mask_head: %f"):  # 0.3ms
            masks = self.mask_head(mask_branch, mask_params, num_ins)
            if self.regression:
                reg_params = params[:, self.num_mask_params:].gather(0, reg_pos_tensor)
                # with Timer("Elapsed time in reg_head: %f"):  # 0.25ms
                regs = self.reg_head(reg_branch, reg_params, num_ins)
            else:
                regs = masks
            feat_range = masks.permute(0, 1, 3, 2).view(sum(num_ins), w_mask, h_mask)
            feat_range = self.mlp(feat_range)
            for i in range(len(seeds)):
                seeds[i]['reg'] = regs[0, i:i + 1, :, :]
                m = masks[0, i:i + 1, :, :]
                seeds[i]['mask'] = m
                seeds[i]['range'] = feat_range[i:i + 1]
            return seeds, hm
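
ctdet_decode is not shown in the excerpt above; judging by its name and usage it is CenterNet-style heatmap decoding. A sketch of that standard technique (my assumption, not the repo's exact code): max pooling suppresses non-maxima, and surviving points above the threshold become seeds.

    import torch
    import torch.nn.functional as F

    def decode_heatmap(hm, thr=0.3):
        """CenterNet-style decoding sketch: keep local maxima above a threshold.

        hm: (1, num_classes, H, W) sigmoid heatmap
        returns seeds like [{'coord': [c, r], 'id_class': cls + 1, 'score': s}, ...]
        """
        pooled = F.max_pool2d(hm, kernel_size=3, stride=1, padding=1)
        keep = (pooled == hm) & (hm > thr)  # a point survives iff it is its 3x3 max
        seeds = []
        for cls, r, c in zip(*torch.where(keep[0])):
            seeds.append({'coord': [int(c), int(r)],
                          'id_class': int(cls) + 1,
                          'score': float(hm[0, cls, r, c])})
        return seeds

The 1-based id_class matches what parse_pos above expects (it subtracts 1 to recover the class label).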

As you can see, these operations line up closely with what the paper describes.
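
One piece the head code does not show is how each seed's mask, reg, and range become lane points. Following the paper's row-wise formulation (a paraphrase under my reading of the paper, not the repo's post-processing code): the vertical-range MLP marks which rows contain the lane; within each valid row, a softmax over that row of the mask gives an expected column, which the offset map then refines.

    import torch

    def lane_points_from_seed(mask, reg, row_valid):
        """Row-wise location sketch (per the paper, not the repo's exact code).

        mask:      (H, W) instance mask logits
        reg:       (H, W) horizontal offset map
        row_valid: (H,) bool flags from the vertical-range MLP
        """
        H, W = mask.shape
        cols = torch.arange(W, dtype=torch.float32)
        points = []
        for r in range(H):
            if not row_valid[r]:
                continue
            p = torch.softmax(mask[r], dim=0)  # per-row distribution over columns
            x = (p * cols).sum()               # expected column index
            x = x + reg[r, int(x)]             # refine with the offset at that column
            points.append((float(x), r))
        return points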

(I'll go through the remaining details when I have more time; things have been busy lately.)
