Table of Contents

  • 1. Background
  • 2. Motivation
  • 3. Method
    • 3.1 IACS: IoU-Aware Classification Score
    • 3.2 Varifocal Loss
    • 3.3 Star-Shaped Box Feature Representation
    • 3.4 Bounding-box Refinement
    • 3.5 VarifocalNet
  • 4. Results
  • 5. Code
    • 5.1 Modifying the Dataset Path
    • 5.2 VFNet


The code is open-sourced at: https://github.com/hyz-xmaster/VarifocalNet

1. Background

Most existing object detectors first generate a large number of candidate boxes and then filter them with NMS, and NMS ranks the boxes by their classification scores.

However, the box with the highest classification score is not necessarily the most accurately localized one, so an accurate box with a lower score may get suppressed. For this reason, [11] predicts an additional IoU score and [9] predicts a centerness score as a measure of localization quality, and multiplies it with the classification score to produce the ranking criterion for NMS.
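The fusion idea, as a minimal sketch (hypothetical names, not the code of either paper; torchvision's nms is used only for illustration):

import torch
from torchvision.ops import nms

def rank_and_filter(boxes, cls_scores, loc_scores, iou_thr=0.6):
    # boxes: (N, 4) in (x1, y1, x2, y2); cls_scores, loc_scores: (N,) in [0, 1]
    fused = cls_scores * loc_scores    # e.g., cls * IoU score or cls * centerness
    keep = nms(boxes, fused, iou_thr)  # NMS now ranks boxes by the fused score
    return boxes[keep], fused[keep]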

2. Motivation

Although the two approaches above can bring some gains, multiplying the two scores is not optimal: it may produce an even worse ranking, the performance gain is limited, and predicting the localization score with a separate branch adds computation.

Could the localization score be merged into the classification score branch instead of being predicted by an extra branch?
The authors' answer: a localization-aware / IoU-aware classification score (IACS).

The authors ran a series of experiments on FCOS+ATSS, replacing the predicted classification, localization, and centerness scores with their ground-truth values one at a time, to see which score has the largest impact on the final result:

In the table above, the best result, 74.4 AP, is obtained by replacing the classification score at the ground-truth class with the IoU between the predicted box and the gt box. This indicates that for most objects, an accurately localized bounding box already exists among the large set of candidates, and the key to strong detection performance is to reliably pick out these high-quality detections. The results suggest that replacing the ground-truth class's classification score with the gt IoU is the most promising selection measure.

3. Method

Building on FCOS+ATSS (with the centerness branch removed), the authors propose VarifocalNet. Compared with FCOS+ATSS, it adds three new components:

  • varifocal loss
  • star-shaped box feature representation
  • bounding box refinement

3.1 IACS: IoU-Aware Classification Score

IACS is defined as a scalar element of the classification score vector; a sketch of the target construction follows the list:

  • the score at the ground-truth class: the IoU between the predicted box and the gt box
  • the score at every other class: 0
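A minimal sketch of constructing IACS training targets (a hypothetical helper, not the repo's code):

import torch

def build_iacs_targets(gt_labels, ious, num_classes=80):
    # gt_labels: (N,) gt class index of each positive sample
    # ious: (N,) IoU between each predicted box and its matched gt box
    # Each target vector holds the gt IoU at the gt class and 0 elsewhere.
    targets = torch.zeros(len(gt_labels), num_classes)
    targets[torch.arange(len(gt_labels)), gt_labels] = ious
    return targets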

3.2 Varifocal Loss

To learn the IACS, the authors design a Varifocal Loss, inspired by the Focal Loss.

Focal Loss:
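Written out (binary form, as given in the VarifocalNet paper):

$FL(p, y) = -\alpha (1-p)^{\gamma} \log(p)$ if $y = 1$, and $FL(p, y) = -(1-\alpha)\, p^{\gamma} \log(1-p)$ otherwise.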

  • p: the predicted score
  • y: the ground-truth class
  • it down-weights the contribution of easy foreground/background examples to the loss

Varifocal Loss:
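As defined in the paper:

$VFL(p, q) = -q\,(q \log(p) + (1-q)\log(1-p))$ if $q > 0$, and $VFL(p, q) = -\alpha\, p^{\gamma} \log(1-p)$ if $q = 0$.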

  • p: the predicted IACS
  • q: the target score (for the ground-truth class, q is the IoU between the prediction and the ground truth; for all other classes, q = 0)

The foreground and background contributions to the loss are treated asymmetrically:

  • only the contribution of negative examples to the loss is down-weighted (by the factor $\alpha p^{\gamma}$)
  • the contribution of positive examples is not down-weighted (positives are far rarer than negatives, and thus more precious)
  • instead, each positive is weighted by its target $q$; in the authors' experiments, positives with a high gt IoU thus contribute more to the loss, i.e., concentrating training on high-quality positives raises AP more than emphasizing low-quality ones

3.3 Star-Shaped Box Feature Representation

The authors define a star-shaped bounding-box feature representation, which uses the nine sampling points of a deformable convolution to represent a bounding box.

Why use nine points?

  • The authors argue that existing keypoint-based representations, although effective, lose the features and contextual information inside the box.

First, given a sampling location (x, y), a 3x3 convolution regresses an initial box $(l', t', r', b')$: the distances from (x, y) to the left, top, right, and bottom sides (the red box in Figure 1). From it, the nine sampling points (the yellow circles) are chosen: $(x, y)$, $(x-l', y)$, $(x, y-t')$, $(x+r', y)$, $(x, y+b')$, $(x-l', y-t')$, $(x+r', y-t')$, $(x-l', y+b')$, and $(x+r', y+b')$. Their relative displacements serve as the offsets of a deformable convolution, which then convolves over the nine points to represent the box.
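A minimal sketch of turning the initial box into deformable-convolution offsets (conceptual, not the repo's exact star_dcn_offset): a 3x3 DCN samples a fixed 3x3 grid around (x, y), so each offset is the desired star point minus the default grid position, with box distances divided by the feature stride:

import torch

def star_dcn_offsets(l, t, r, b, stride):
    # l, t, r, b: (N,) initial box offsets (in pixels) at each location
    zeros = torch.zeros_like(l)
    # Target star points in feature-grid units, row-major over the 3x3 taps
    dy = torch.stack([-t, -t, -t, zeros, zeros, zeros, b, b, b], dim=-1) / stride
    dx = torch.stack([-l, zeros, r, -l, zeros, r, -l, zeros, r], dim=-1) / stride
    # Default 3x3 grid positions (dilation 1), same row-major order
    grid_y = torch.tensor([-1., -1., -1., 0., 0., 0., 1., 1., 1.])
    grid_x = torch.tensor([-1., 0., 1., -1., 0., 1., -1., 0., 1.])
    # DCN expects interleaved (dy, dx) offsets relative to the default grid
    offsets = torch.stack([dy - grid_y, dx - grid_x], dim=-1)
    return offsets.flatten(-2)  # (N, 18)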

3.4 Bounding-box Refinement

The authors also try to improve localization accuracy through a bounding-box refinement step.

Initial four offsets: $(l', t', r', b')$
Four learned scaling factors: $(\Delta l, \Delta t, \Delta r, \Delta b)$
Refined offsets: $(l, t, r, b) = (\Delta l \times l', \Delta t \times t', \Delta r \times r', \Delta b \times b')$
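A minimal sketch of this step (names are hypothetical; it mirrors the idea rather than the repo's exact code):

import torch

def refine_bbox(initial_offsets, refine_logits):
    # initial_offsets: (N, 4) = (l', t', r', b') from the initial regression
    # refine_logits:   (N, 4) raw outputs of the refinement branch
    delta = refine_logits.exp()              # positive factors (Δl, Δt, Δr, Δb)
    return delta * initial_offsets.detach()  # refined (l, t, r, b)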

3.5 VarifocalNet


Backbone:

  • FCN (a fully convolutional network)

Heads:

  • bounding-box
  • IoU-aware classification score

4. Results

The Varifocal Loss hyper-parameters are set to $\alpha = 0.75$, $\gamma = 2$.

Visualization results:

5. Code

The implementation is based on mmdetection. The paper's GitHub repository describes the installation; once installed, download the authors' trained model and modify the COCO data path, after which training and testing run successfully.

Training:

./tools/dist_train.sh configs/vfnet/vfnet_r50_fpn_1x_coco.py 8

Testing:

# evaluate detection metrics
./tools/dist_test.sh configs/vfnet/vfnet_r50_fpn_1x_coco.py checkpoints/vfnet_r50_1x_41.6.pth 8 --eval bbox
# visualize results
./tools/dist_test.sh configs/vfnet/vfnet_r50_fpn_1x_coco.py checkpoints/vfnet_r50_1x_41.6.pth 8 --show-dir results/

Demo:

python demo/image_demo.py demo/demo.jpg configs/vfnet/vfnet_r50_fpn_1x_coco.py checkpoints/vfnet_r50_1x_41.6.pth

5.1 Modifying the Dataset Path

Before running the code, be sure to execute the following first; otherwise the code will resolve the wrong paths:

python setup.py develop

Then modify the COCO dataset path in the following two config files:

# 1
./configs/_base_/coco_detection.py

# 2
./configs/vfnet/vfnet_r50_fpn_1x_coco.py
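For reference, the change is the usual mmdetection-style one (variable names as in mmdetection's coco_detection.py; the path below is just an assumed example):

dataset_type = 'CocoDataset'
data_root = '/path/to/your/coco/'  # point this at your local COCO directory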

5.2 VFNet

# vfnet_r50_fpn_1x_coco.py
# model settings
model = dict(
    type='VFNet',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=1,
        add_extra_convs=True,
        extra_convs_on_inputs=False,  # use P5
        num_outs=5,
        relu_before_extra_convs=True),
    bbox_head=dict(
        type='VFNetHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=3,
        feat_channels=256,
        strides=[8, 16, 32, 64, 128],
        center_sampling=False,
        dcn_on_last_conv=False,
        use_atss=True,
        use_vfl=True,
        loss_cls=dict(
            type='VarifocalLoss',
            use_sigmoid=True,
            alpha=0.75,
            gamma=2.0,
            iou_weighted=True,
            loss_weight=1.0),
        loss_bbox=dict(type='GIoULoss', loss_weight=1.5),
        loss_bbox_refine=dict(type='GIoULoss', loss_weight=2.0)),
    # training and testing settings
    train_cfg=dict(
        assigner=dict(type='ATSSAssigner', topk=9),
        allowed_border=-1,
        pos_weight=-1,
        debug=False),
    test_cfg=dict(
        nms_pre=1000,
        min_bbox_size=0,
        score_thr=0.05,
        nms=dict(type='nms', iou_threshold=0.6),
        max_per_img=100))

VFNet Head:

/mmdet/models/dense_heads/vfnet_head.py

(bbox_head): VFNetHead(
  (loss_cls): VarifocalLoss()
  (loss_bbox): GIoULoss()
  (cls_convs): ModuleList(
    (0-2): 3 x ConvModule(
      (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
      (activate): ReLU(inplace=True)
    )
  )
  (reg_convs): ModuleList(
    (0-2): 3 x ConvModule(
      (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
      (activate): ReLU(inplace=True)
    )
  )
  (relu): ReLU(inplace=True)
  (vfnet_reg_conv): ConvModule(
    (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
    (activate): ReLU(inplace=True)
  )
  (vfnet_reg): Conv2d(256, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (scales): ModuleList(
    (0-4): 5 x Scale()
  )
  (vfnet_reg_refine_dconv): DeformConv2d(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dilation=(1, 1), groups=1, deform_groups=1, bias=False)
  (vfnet_reg_refine): Conv2d(256, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (scales_refine): ModuleList(
    (0-4): 5 x Scale()
  )
  (vfnet_cls_dconv): DeformConv2d(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dilation=(1, 1), groups=1, deform_groups=1, bias=False)
  (vfnet_cls): Conv2d(256, 80, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (loss_bbox_refine): GIoULoss()
)
init_cfg={'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01, 'override': {'type': 'Normal', 'name': 'vfnet_cls', 'std': 0.01, 'bias_prob': 0.01}}

VFNet model:

VFNet(
  (backbone): ResNet(
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): ResLayer(
      (0): Bottleneck(
        (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1-2): 2 x Bottleneck(
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
    (layer2): ResLayer(
      (0): Bottleneck(
        (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1-3): 3 x Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
    (layer3): ResLayer(
      (0): Bottleneck(
        (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1-5): 5 x Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
    (layer4): ResLayer(
      (0): Bottleneck(
        (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1-2): 2 x Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
      )
    )
  )
  init_cfg={'type': 'Pretrained', 'checkpoint': 'torchvision://resnet50'}
  (neck): FPN(
    (lateral_convs): ModuleList(
      (0): ConvModule((conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)))
      (1): ConvModule((conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)))
      (2): ConvModule((conv): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1)))
    )
    (fpn_convs): ModuleList(
      (0-2): 3 x ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)))
      (3-4): 2 x ConvModule((conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)))
    )
  )
  init_cfg={'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
  (bbox_head): VFNetHead(...)  # identical to the VFNet Head printout above
  init_cfg={'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01, 'override': {'type': 'Normal', 'name': 'vfnet_cls', 'std': 0.01, 'bias_prob': 0.01}}
)

Varifocal loss:

# imports needed when running this standalone
# (inside the repo the second one is: from .utils import weight_reduce_loss)
import torch.nn.functional as F
from mmdet.models.losses.utils import weight_reduce_loss


def varifocal_loss(pred,
                   target,
                   weight=None,
                   alpha=0.75,
                   gamma=2.0,
                   iou_weighted=True,
                   reduction='mean',
                   avg_factor=None):
    """`Varifocal Loss <https://arxiv.org/abs/2008.13367>`_

    Args:
        pred (torch.Tensor): The prediction with shape (N, C), C is the
            number of classes.
        target (torch.Tensor): The learning target of the iou-aware
            classification score with shape (N, C), C is the number of
            classes.
        weight (torch.Tensor, optional): The weight of loss for each
            prediction. Defaults to None.
        alpha (float, optional): A balance factor for the negative part of
            Varifocal Loss, which is different from the alpha of Focal
            Loss. Defaults to 0.75.
        gamma (float, optional): The gamma for calculating the modulating
            factor. Defaults to 2.0.
        iou_weighted (bool, optional): Whether to weight the loss of the
            positive example with the iou target. Defaults to True.
        reduction (str, optional): The method used to reduce the loss into
            a scalar. Defaults to 'mean'. Options are "none", "mean" and
            "sum".
        avg_factor (int, optional): Average factor that is used to average
            the loss. Defaults to None.
    """
    # pred and target should be of the same size
    assert pred.size() == target.size()
    pred_sigmoid = pred.sigmoid()
    target = target.type_as(pred)
    if iou_weighted:
        # positives (target > 0) are weighted by their target IoU;
        # negatives are down-weighted by alpha * |p - q|^gamma
        focal_weight = target * (target > 0.0).float() + \
            alpha * (pred_sigmoid - target).abs().pow(gamma) * \
            (target <= 0.0).float()
    else:
        focal_weight = (target > 0.0).float() + \
            alpha * (pred_sigmoid - target).abs().pow(gamma) * \
            (target <= 0.0).float()
    loss = F.binary_cross_entropy_with_logits(
        pred, target, reduction='none') * focal_weight
    loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
    return loss
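A quick usage sketch with dummy tensors (assumes the varifocal_loss above is importable):

import torch

logits = torch.randn(4, 80)   # raw classification logits (pre-sigmoid)
targets = torch.zeros(4, 80)  # IACS targets
targets[0, 3] = 0.82          # one positive: gt IoU at its gt class
loss = varifocal_loss(logits, targets)
print(loss)  # scalar, mean-reduced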
