Paper: 《YOLOv4: Optimal Speed and Accuracy of Object Detection》 - Translation and Interpretation

Table of Contents

Evaluation of YOLOv4

1. Four Improvements and One Innovation

YOLOv4: Optimal Speed and Accuracy of Object Detection

Abstract

1. Introduction

2. Related work

2.1. Object detection models

2.2. Bag of freebies

2.3. Bag of specials

3. Methodology

3.1. Selection of architecture

3.2. Selection of BoF and BoS

3.3. Additional improvements

3.4. YOLOv4

4. Experiments

4.1. Experimental setup

4.2. Influence of different features on Classifier training

4.3. Influence of different features on Detector training

4.4. Influence of different backbones and pretrained weightings on Detector training

4.5. Influence of different mini-batch size on Detector training

5. Results

6. Conclusions

7. Acknowledgements


Evaluation of YOLOv4

1. Four Improvements and One Innovation

This paper contributes four improvements plus one innovation, but it combines roughly 20 tricks proposed in recent years across deep learning and object detection. In other words, the paper does contain innovations and improvements, but most of them are incremental. On the other hand, benchmarking such a large number of recently proposed tricks (more than 20) certainly took a great deal of time and effort; the amount of experimental work is enormous.

  • MosaicDA (Mosaic Data Augmentation): four images are stitched into one training image, which is effectively equivalent to enlarging the mini-batch. It builds on CutMix, which mixes only two images (see the Python sketch after this list);
  • CmBN (Cross mini-Batch Normalization): an improvement built on CBN;
  • Modified SAM: from SAM's spatial-wise attention to point-wise attention;
  • Modified PAN: the channel fusion is changed from addition (add) to concatenation (concat), a very small change;
  • SAT (Self-Adversarial Training): on a single image, the network back-propagates to update and perturb the image itself, and then trains on that perturbed image. Letting the network update the image in reverse is also the main technique behind image stylization.
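
As a rough illustration, here is a minimal NumPy sketch of the Mosaic idea, not the darknet implementation; the function name, output size, and gray fill value are assumptions, and ground-truth boxes would have to be shifted and clipped in the same way.

```python
import random
import numpy as np

def mosaic4(images, out_size=608):
    """Combine 4 HxWx3 uint8 images into one out_size x out_size mosaic."""
    assert len(images) == 4
    canvas = np.full((out_size, out_size, 3), 114, dtype=np.uint8)  # gray fill
    # random mosaic center, kept away from the borders
    cx = random.randint(out_size // 4, 3 * out_size // 4)
    cy = random.randint(out_size // 4, 3 * out_size // 4)
    # quadrants: top-left, top-right, bottom-left, bottom-right
    regions = [(0, 0, cx, cy), (cx, 0, out_size, cy),
               (0, cy, cx, out_size), (cx, cy, out_size, out_size)]
    for img, (x1, y1, x2, y2) in zip(images, regions):
        h, w = y2 - y1, x2 - x1
        # nearest-neighbor resize, kept dependency-free for the sketch
        ys = np.linspace(0, img.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, w).astype(int)
        canvas[y1:y2, x1:x2] = img[ys][:, xs]
    return canvas  # bounding boxes must be shifted/clipped the same way
```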

Source (Zhihu): https://zhuanlan.zhihu.com/p/135980432

YOLOv4: Optimal Speed and Accuracy of Object Detection

Paper: https://arxiv.org/abs/2004.10934
Code: https://github.com/AlexeyAB/darknet

Abstract

There are a huge number of features which are said to improve Convolutional Neural Network (CNN) accuracy. Practical testing of combinations of such features on large datasets, and theoretical justification of the result, is required. Some features operate on certain models exclusively and for certain problems exclusively, or only for small-scale datasets; while some features, such as batch-normalization and residual-connections, are applicable to the majority of models, tasks, and datasets. We assume that such universal features include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections (CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT) and Mish-activation. We use new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, DropBlock regularization, and CIoU loss, and combine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50) for the MS COCO dataset at a realtime speed of ~65 FPS on Tesla V100. Source code is at https://github.com/AlexeyAB/darknet.

In short: take whatever tricks are currently state of the art, try them all, and combine them into an even better result!

1. Introduction

The majority of CNN-based object detectors are largely applicable only for recommendation systems. For example, searching for free parking spaces via urban video cameras is executed by slow accurate models, whereas car collision warning is related to fast inaccurate models. Improving the real-time object detector accuracy enables using them not only for hint generating recommendation systems, but also for stand-alone process management and human input reduction. Real-time object detector operation on conventional Graphics Processing Units (GPU) allows their mass usage at an affordable price. The most accurate modern neural networks do not operate in real time and require a large number of GPUs for training with a large mini-batch-size. We address such problems through creating a CNN that operates in real-time on a conventional GPU, and for which training requires only one conventional GPU.

Figure 1: Comparison of the proposed YOLOv4 and other state-of-the-art object detectors. YOLOv4 runs twice as fast as EfficientDet with comparable performance, and improves YOLOv3's AP and FPS by 10% and 12%, respectively.

The main goal of this work is designing a fast operating speed of an object detector in production systems and optimization for parallel computations, rather than the low computation volume theoretical indicator (BFLOP). We hope that the designed object can be easily trained and used. For example, anyone who uses a conventional GPU to train and test can achieve real-time, high quality, and convincing object detection results, as the YOLOv4 results shown in Figure 1. Our contributions are summarized as follows:

1. We develop an efficient and powerful object detection model. It makes everyone able to use a 1080 Ti or 2080 Ti GPU to train a super fast and accurate object detector.
2. We verify the influence of state-of-the-art Bag-of-Freebies and Bag-of-Specials methods of object detection during detector training.
3. We modify state-of-the-art methods and make them more efficient and suitable for single GPU training, including CBN [89], PAN [49], SAM [85], etc.

2. Related work

2.1. Object detection models

A modern detector is usually composed of two parts, a backbone which is pre-trained on ImageNet and a head which is used to predict classes and bounding boxes of objects. For those detectors running on GPU platform, their backbone could be VGG [68], ResNet [26], ResNeXt [86], or DenseNet [30]. For those detectors running on CPU platform, their backbone could be SqueezeNet [31], MobileNet [28, 66, 27, 74], or ShuffleNet [97, 53]. As to the head part, it is usually categorized into two kinds, i.e., one-stage object detector and two-stage object detector. The most representative two-stage object detector is the R-CNN [19] series, including fast R-CNN [18], faster R-CNN [64], R-FCN [9], and Libra R-CNN [58]. It is also possible to make a two-stage object detector an anchor-free object detector, such as RepPoints [87]. As for one-stage object detector, the most representative models are YOLO [61, 62, 63], SSD [50], and RetinaNet [45]. In recent years, anchor-free one-stage object detectors have been developed. The detectors of this sort are CenterNet [13], CornerNet [37, 38], FCOS [78], etc. Object detectors developed in recent years often insert some layers between backbone and head, and these layers are usually used to collect feature maps from different stages. We can call it the neck of an object detector. Usually, a neck is composed of several bottom-up paths and several top-down paths. Networks equipped with this mechanism include Feature Pyramid Network (FPN) [44], Path Aggregation Network (PAN) [49], BiFPN [77], and NAS-FPN [17]. In addition to the above models, some researchers put their emphasis on directly building a new backbone (DetNet [43], DetNAS [7]) or a new whole model (SpineNet [12], HitDetector [20]) for object detection.

To sum up, an ordinary object detector is composed of several parts:

  • Input: Image, Patches, Image Pyramid
  • Backbones: VGG16 [68], ResNet-50 [26], SpineNet [12], EfficientNet-B0/B7 [75], CSPResNeXt50 [81], CSPDarknet53 [81]
  • Neck:
      • Additional blocks: SPP [25], ASPP [5], RFB [47], SAM [85]
      • Path-aggregation blocks: FPN [44], PAN [49], NAS-FPN [17], Fully-connected FPN, BiFPN [77], ASFF [48], SFAM [98]
  • Heads:
      • Dense Prediction (one-stage):
          • RPN [64], SSD [50], YOLO [61], RetinaNet [45] (anchor based)
          • CornerNet [37], CenterNet [13], MatrixNet [60], FCOS [78] (anchor free)
      • Sparse Prediction (two-stage):
          • Faster R-CNN [64], R-FCN [9], Mask R-CNN [23] (anchor based)
          • RepPoints [87] (anchor free)

2.2. Bag of freebies

Usually, a conventional object detector is trained offline. Therefore, researchers always like to take this advantage and develop better training methods which can make the object detector receive better accuracy without increasing the inference cost. We call these methods that only change the training strategy or only increase the training cost "bag of freebies." What is often adopted by object detection methods and meets the definition of bag of freebies is data augmentation. The purpose of data augmentation is to increase the variability of the input images, so that the designed object detection model has higher robustness to the images obtained from different environments. For example, photometric distortions and geometric distortions are two commonly used data augmentation methods, and they definitely benefit the object detection task. In dealing with photometric distortion, we adjust the brightness, contrast, hue, saturation, and noise of an image. For geometric distortion, we add random scaling, cropping, flipping, and rotating.
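
Below is a minimal NumPy sketch of the two distortion families just described; the parameter ranges and function names are illustrative assumptions, and ground-truth boxes would have to be transformed consistently in the geometric case.

```python
import random
import numpy as np

def photometric_distort(img):
    """Randomly jitter contrast, brightness, and noise of an HxWx3 uint8 image."""
    out = img.astype(np.float32)
    out *= random.uniform(0.7, 1.3)                 # contrast-like gain
    out += random.uniform(-30, 30)                  # brightness shift
    out += np.random.normal(0, 3, img.shape)        # additive noise
    return np.clip(out, 0, 255).astype(np.uint8)

def geometric_distort(img):
    """Random horizontal flip plus a random 90% crop (scaling/rotation omitted)."""
    if random.random() < 0.5:
        img = img[:, ::-1]                          # horizontal flip
    h, w = img.shape[:2]
    ch, cw = int(h * 0.9), int(w * 0.9)
    y0 = random.randint(0, h - ch)
    x0 = random.randint(0, w - cw)
    return img[y0:y0 + ch, x0:x0 + cw]
```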

The data augmentation methods mentioned above are all pixel-wise adjustments, and all original pixel information in the adjusted area is retained. In addition, some researchers engaged in data augmentation put their emphasis on simulating object occlusion issues. They have achieved good results in image classification and object detection. For example, random erase [100] and CutOut [11] can randomly select the rectangle region in an image and fill in a random or complementary value of zero. As for hide-and-seek [69] and grid mask [6], they randomly or evenly select multiple rectangle regions in an image and replace them with all zeros. If similar concepts are applied to feature maps, there are DropOut [71], DropConnect [80], and DropBlock [16] methods. In addition, some researchers have proposed methods of using multiple images together to perform data augmentation. For example, MixUp [92] uses two images to multiply and superimpose with different coefficient ratios, and then adjusts the label with these superimposed ratios. As for CutMix [91], it covers a cropped image onto a rectangular region of other images, and adjusts the label according to the size of the mixed area. In addition to the above mentioned methods, style transfer GAN [15] is also used for data augmentation, and such usage can effectively reduce the texture bias learned by CNN.
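
A hedged sketch of MixUp and CutMix for float images with one-hot labels follows; the Beta parameters are the values commonly used in the original papers, not values taken from this one.

```python
import numpy as np

def mixup(xa, ya, xb, yb, alpha=0.2):
    """Blend two images and their one-hot labels with one Beta-sampled ratio."""
    lam = np.random.beta(alpha, alpha)
    return lam * xa + (1 - lam) * xb, lam * ya + (1 - lam) * yb

def cutmix(xa, ya, xb, yb, alpha=1.0):
    """Paste a random rectangle of xb into xa; mix labels by surviving area."""
    lam = np.random.beta(alpha, alpha)
    h, w = xa.shape[:2]
    rh, rw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y0, y1 = max(cy - rh // 2, 0), min(cy + rh // 2, h)
    x0, x1 = max(cx - rw // 2, 0), min(cx + rw // 2, w)
    out = xa.copy()
    out[y0:y1, x0:x1] = xb[y0:y1, x0:x1]
    kept = 1 - (y1 - y0) * (x1 - x0) / (h * w)   # fraction of xa still visible
    return out, kept * ya + (1 - kept) * yb
```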

Different from the various approaches proposed above, some other bag of freebies methods are dedicated to solving the problem that the semantic distribution in the dataset may have bias. In dealing with the problem of semantic distribution bias, a very important issue is that there is a problem of data imbalance between different classes, and this problem is often solved by hard negative example mining [72] or online hard example mining [67] in two-stage object detectors. But the example mining method is not applicable to one-stage object detectors, because this kind of detector belongs to the dense prediction architecture. Therefore, Lin et al. [45] proposed focal loss to deal with the problem of data imbalance existing between various classes. Another very important issue is that it is difficult to express the relationship of the degree of association between different categories with the one-hot hard representation. This representation scheme is often used when executing labeling. The label smoothing proposed in [73] is to convert hard labels into soft labels for training, which can make the model more robust. In order to obtain a better soft label, Islam et al. [33] introduced the concept of knowledge distillation to design the label refinement network.
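
As a concrete illustration of the hard-to-soft conversion, here is a minimal sketch of label smoothing; eps = 0.1 is a common choice, not a value taken from this paper.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Mix a one-hot 'hard' target with a uniform distribution over k classes."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

hard = np.eye(4)[1]           # class 1 of 4 -> [0., 1., 0., 0.]
print(smooth_labels(hard))    # -> [0.025, 0.925, 0.025, 0.025]
```
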
The last bag of freebies is the objective function of Bounding Box (BBox) regression. The traditional object detector usually uses Mean Square Error (MSE) to directly perform regression on the center point coordinates and height and width of the BBox, i.e., {x_center, y_center, w, h}, or the upper left point and the lower right point, i.e., {x_top_left, y_top_left, x_bottom_right, y_bottom_right}. As for the anchor-based method, it is to estimate the corresponding offsets, for example {x_center_offset, y_center_offset, w_offset, h_offset} and {x_top_left_offset, y_top_left_offset, x_bottom_right_offset, y_bottom_right_offset}. However, to directly estimate the coordinate values of each point of the BBox is to treat these points as independent variables, which does not consider the integrity of the object itself. In order to handle this issue better, some researchers recently proposed IoU loss [90], which puts the coverage of the predicted BBox area and the ground truth BBox area into consideration. The IoU loss computation treats the four coordinate points of the BBox as a whole by executing IoU with the ground truth. Because IoU is a scale invariant representation, it can solve the problem that when traditional methods calculate the l1 or l2 loss of {x, y, w, h}, the loss will increase with the scale. Recently, some researchers have continued to improve IoU loss. For example, GIoU loss [65] includes the shape and orientation of the object in addition to the coverage area. They proposed to find the smallest-area BBox that can simultaneously cover the predicted BBox and the ground truth BBox, and use this BBox as the denominator to replace the denominator originally used in IoU loss. As for DIoU loss [99], it additionally considers the distance of the center of an object, while CIoU loss [99] simultaneously considers the overlapping area, the distance between center points, and the aspect ratio. CIoU can achieve better convergence speed and accuracy on the BBox regression problem.
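
The IoU-family losses above can be written in a few lines for boxes in (x1, y1, x2, y2) form; this sketch spells out IoU and GIoU only, while DIoU adds a normalized center-distance penalty and CIoU adds a further aspect-ratio term.

```python
import numpy as np

def iou_giou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # smallest enclosing box supplies the extra GIoU penalty term
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    giou = iou - (cw * ch - union) / (cw * ch)
    return iou, giou

iou, giou = iou_giou((0, 0, 2, 2), (1, 1, 3, 3))
print(1 - iou, 1 - giou)   # the losses: 1 - IoU and 1 - GIoU
```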

2.3. Bag of specials

For those plugin modules and post-processing methods that only increase the inference cost by a small amount but can significantly improve the accuracy of object detection, we call them “bag of specials”. Generally speaking, these plugin modules are for enhancing certain attributes in a model, such as enlarging receptive field, introducing attention mechanism, or strengthening feature integration capability, etc., and post-processing is a method for screening model prediction results.

Common modules that can be used to enhance the receptive field are SPP [25], ASPP [5], and RFB [47]. The SPP module originated from Spatial Pyramid Matching (SPM) [39], and SPM's original method was to split the feature map into several d × d equal blocks, where d can be {1, 2, 3, ...}, thus forming a spatial pyramid, and then extracting bag-of-words features. SPP integrates SPM into CNN and uses a max-pooling operation instead of the bag-of-words operation. Since the SPP module proposed by He et al. [25] outputs a one-dimensional feature vector, it is infeasible to apply it in a Fully Convolutional Network (FCN). Thus in the design of YOLOv3 [63], Redmon and Farhadi improved the SPP module to the concatenation of max-pooling outputs with kernel size k × k, where k = {1, 5, 9, 13}, and stride equal to 1. Under this design, a relatively large k × k max-pooling effectively increases the receptive field of the backbone feature. After adding the improved version of the SPP module, YOLOv3-608 upgrades AP50 by 2.7% on the MS COCO object detection task at the cost of 0.5% extra computation. The difference in operation between the ASPP [5] module and the improved SPP module is mainly that the original k × k kernel size, stride-1 max-pooling is replaced by several 3 × 3 kernel size, dilated-ratio-k, stride-1 dilated convolution operations. The RFB module uses several dilated convolutions of k × k kernels, with dilated ratio equal to k and stride equal to 1, to obtain a more comprehensive spatial coverage than ASPP. RFB [47] only costs 7% extra inference time to increase the AP50 of SSD on MS COCO by 5.7%.
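
The improved SPP block described here is easy to state in code: parallel max-pooling with k = 1, 5, 9, 13 at stride 1 (padding k//2 preserves the spatial size), concatenated along channels. PyTorch is an assumption; the original implementation is in darknet.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, kernels=(1, 5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernels]
        )

    def forward(self, x):
        # each branch preserves HxW, so channels go from C to 4*C here
        return torch.cat([pool(x) for pool in self.pools], dim=1)

x = torch.randn(1, 512, 19, 19)
print(SPP()(x).shape)   # torch.Size([1, 2048, 19, 19])
```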

The attention module that is often used in object detection is mainly divided into channel-wise attention and point-wise attention, and the representatives of these two attention models are Squeeze-and-Excitation (SE) [29] and Spatial Attention Module (SAM) [85], respectively. Although the SE module can improve the top-1 accuracy of ResNet50 on the ImageNet image classification task by 1% at the cost of only increasing the computational effort by 2%, on a GPU it usually increases the inference time by about 10%, so it is more appropriate for mobile devices. But for SAM, it only needs to pay 0.1% extra calculation and it can improve ResNet50-SE's top-1 accuracy by 0.5% on the ImageNet image classification task. Best of all, it does not affect the speed of inference on the GPU at all.
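
A minimal PyTorch sketch of the SE module [29] (PyTorch and the reduction ratio r = 16, which follows the SE paper, are assumptions here): global average pooling squeezes each channel to a scalar, a two-layer bottleneck produces per-channel weights, and the input is re-scaled.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))     # squeeze: (B, C)
        return x * w.view(b, c, 1, 1)       # excite: channel-wise re-weighting
```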

In terms of feature integration, the early practice is to use skip connections [51] or hyper-columns [22] to integrate low-level physical features into high-level semantic features. Since multi-scale prediction methods such as FPN have become popular, many lightweight modules that integrate different feature pyramids have been proposed. The modules of this sort include SFAM [98], ASFF [48], and BiFPN [77]. The main idea of SFAM is to use the SE module to execute channel-wise level re-weighting on multi-scale concatenated feature maps. As for ASFF, it uses softmax as point-wise level re-weighting and then adds feature maps of different scales. In BiFPN, multi-input weighted residual connections are proposed to execute scale-wise level re-weighting, and then add feature maps of different scales.

In the research of deep learning, some people put their focus on searching for good activation functions. A good activation function can make the gradient propagate more efficiently, and at the same time it will not cause too much extra computational cost. In 2010, Nair and Hinton [56] proposed ReLU to substantially solve the gradient vanishing problem which is frequently encountered in the traditional tanh and sigmoid activation functions. Subsequently, LReLU [54], PReLU [24], ReLU6 [28], Scaled Exponential Linear Unit (SELU) [35], Swish [59], hard-Swish [27], and Mish [55], etc., which are also used to solve the gradient vanishing problem, have been proposed. The main purpose of LReLU and PReLU is to solve the problem that the gradient of ReLU is zero when the output is less than zero. As for ReLU6 and hard-Swish, they are specially designed for quantization networks. For self-normalizing a neural network, the SELU activation function was proposed. One thing to be noted is that both Swish and Mish are continuously differentiable activation functions.
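
Most of the activations named here are one-liners; for instance Mish [55] is x · tanh(softplus(x)) and Swish [59] is x · sigmoid(βx), both smooth and continuously differentiable. A NumPy sketch:

```python
import numpy as np

def mish(x):
    softplus = np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)  # stable softplus
    return x * np.tanh(softplus)

def swish(x, beta=1.0):
    return x / (1.0 + np.exp(-beta * x))  # equals x * sigmoid(beta * x)

print(mish(np.array([-2.0, 0.0, 2.0])))   # ~[-0.2525, 0.0, 1.9440]
```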

The post-processing method commonly used in deep-learning-based object detection is NMS, which can be used to filter those BBoxes that badly predict the same object, and only retain the candidate BBoxes with higher response. The way NMS tries to improve is consistent with the method of optimizing an objective function. The original NMS method does not consider the context information, so Girshick et al. [19] added the classification confidence score in R-CNN as a reference, and according to the order of confidence scores, greedy NMS was performed in the order of high score to low score. As for soft NMS [1], it considers the problem that the occlusion of an object may cause the degradation of the confidence score in greedy NMS with IoU score. The DIoU NMS [99] developers' way of thinking is to add the information of the center point distance to the BBox screening process on the basis of soft NMS. It is worth mentioning that, since none of the above post-processing methods directly refers to the captured image features, post-processing is no longer required in the subsequent development of anchor-free methods.
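
Greedy NMS as described here fits in a few lines; the (x1, y1, x2, y2, score) box format is an assumption. DIoU-NMS would replace the plain IoU test with IoU minus the normalized center-distance term.

```python
def iou(a, b):
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, iou_thresh=0.5):
    """boxes: list of (x1, y1, x2, y2, score); greedy suppression by score."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)  # high score first
    keep = []
    while boxes:
        best = boxes.pop(0)
        keep.append(best)
        boxes = [b for b in boxes if iou(best, b) < iou_thresh]
    return keep
```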

3. Methodology

The basic aim is a fast operating speed of the neural network in production systems and optimization for parallel computations, rather than the low computation volume theoretical indicator (BFLOP). We present two options of real-time neural networks:

  • For GPU we use a small number of groups (1 - 8) in convolutional layers: CSPResNeXt50 / CSPDarknet53
  • For VPU - we use grouped-convolution, but we refrain from using Squeeze-and-Excitation (SE) blocks - specifically this includes the following models: EfficientNet-lite / MixNet [76] / GhostNet [21] / MobileNetV3

3.1. Selection of architecture

Our objective is to find the optimal balance among the input network resolution, the convolutional layer number, the parameter number (filter_size² × filters × channels / groups), and the number of layer outputs (filters). For instance, our numerous studies demonstrate that the CSPResNext50 is considerably better compared to CSPDarknet53 in terms of object classification on the ILSVRC2012 (ImageNet) dataset [10]. However, conversely, the CSPDarknet53 is better compared to CSPResNext50 in terms of detecting objects on the MS COCO dataset [46].

The next objective is to select additional blocks for increasing the receptive field and the best method of parameter aggregation from different backbone levels for different detector levels: e.g. FPN, PAN, ASFF, BiFPN.

A reference model which is optimal for classification is not always optimal for a detector. In contrast to the classifier, the detector requires the following:

  • Higher input network size (resolution) – for detecting multiple small-sized objects.
  • More layers – for a higher receptive field to cover the increased size of input network.
  • More parameters – for greater capacity of a model to detect multiple objects of different sizes in a single image.

Hypothetically speaking, we can assume that a model with a larger receptive field size (with a larger number of 3 × 3 convolutional layers) and a larger number of parameters should be selected as the backbone. Table 1 shows the information of CSPResNeXt50, CSPDarknet53, and EfficientNet B3. The CSPResNext50 contains only 16 3 × 3 convolutional layers, a 425 × 425 receptive field and 20.6 M parameters, while CSPDarknet53 contains 29 3 × 3 convolutional layers, a 725 × 725 receptive field and 27.6 M parameters. This theoretical justification, together with our numerous experiments, shows that the CSPDarknet53 neural network is the optimal model of the two as the backbone for a detector. The influence of receptive fields of different sizes is summarized as follows:

  • Up to the object size - allows viewing the entire object
  • Up to network size - allows viewing the context around the object
  • Exceeding the network size - increases the number of connections between the image point and the final activation

We add the SPP block over the CSPDarknet53, since it significantly increases the receptive field, separates out the most significant context features and causes almost no reduction of the network operation speed. We use PANet as the method of parameter aggregation from different backbone levels for different detector levels, instead of the FPN used in YOLOv3.

Finally, we choose CSPDarknet53 backbone, SPP additional module, PANet path-aggregation neck, and YOLOv3 (anchor based) head as the architecture of YOLOv4.

In the future we plan to expand significantly the content of Bag of Freebies (BoF) for the detector, which theoretically can address some problems and increase the detector accuracy, and sequentially check the influence of each feature in an experimental fashion.

We do not use Cross-GPU Batch Normalization (CGBN or SyncBN) or expensive specialized devices. This allows anyone to reproduce our state-of-the-art outcomes on a conventional graphic processor e.g. GTX 1080Ti or RTX 2080Ti.

3.2. Selection of BoF and BoS

For improving the object detection training, a CNN usually uses the following:

  • Activations: ReLU, leaky-ReLU, parametric-ReLU, ReLU6, SELU, Swish, or Mish
  • Bounding box regression loss: MSE, IoU, GIoU, CIoU, DIoU
  • Data augmentation: CutOut, MixUp, CutMix
  • Regularization method: DropOut, DropPath [36], Spatial DropOut [79], or DropBlock
  • Normalization of the network activations by their mean and variance: Batch Normalization (BN) [32], Cross-GPU Batch Normalization (CGBN or SyncBN) [93], Filter Response Normalization (FRN) [70], or Cross-Iteration Batch Normalization (CBN) [89]
  • Skip-connections: Residual connections, Weighted residual connections, Multi-input weighted residual connections, or Cross stage partial connections (CSP)

As for the training activation function, since PReLU and SELU are more difficult to train, and ReLU6 is specifically designed for quantization networks, we therefore remove the above activation functions from the candidate list. In the choice of regularization method, the people who published DropBlock compared their method with other methods in detail, and their regularization method won by a clear margin. Therefore, we did not hesitate to choose DropBlock as our regularization method. As for the selection of normalization method, since we focus on a training strategy that uses only one GPU, syncBN is not considered.
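
For reference, a hedged NumPy sketch of DropBlock [16], the regularizer chosen above: unlike DropOut, it zeroes contiguous block_size × block_size regions around randomly sampled seed points; the seed probability gamma follows the rescaling rule from the DropBlock paper.

```python
import numpy as np

def dropblock(x, drop_prob=0.1, block_size=5):
    """x: (H, W) float feature map; returns the map with square blocks dropped."""
    h, w = x.shape
    # seed probability rescaled so the expected dropped fraction is drop_prob
    gamma = (drop_prob / block_size**2
             * (h * w) / ((h - block_size + 1) * (w - block_size + 1)))
    seeds = np.random.rand(h - block_size + 1, w - block_size + 1) < gamma
    mask = np.ones_like(x)
    for i, j in zip(*np.nonzero(seeds)):
        mask[i:i + block_size, j:j + block_size] = 0
    kept = mask.mean()
    return x * mask / kept if kept > 0 else x * mask  # rescale like dropout
```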

3.3. Additional improvements

In order to make the designed detector more suitable for training on single GPU, we made additional design and improvement as follows:

  • We introduce a new method of data augmentation Mosaic, and Self-Adversarial Training (SAT)
  • We select optimal hyper-parameters while applying genetic algorithms
  • We modify some existing methods to make our design suitable for efficient training and detection - modified SAM, modified PAN, and Cross mini-Batch Normalization (CmBN)

Figure 3: Mosaic represents a new method of data augmentation.

Figure 4: Cross mini-Batch Normalization

Mosaic represents a new data augmentation method that mixes 4 training images. Thus 4 different contexts are mixed, while CutMix mixes only 2 input images. This allows detection of objects outside their normal context. In addition, batch normalization calculates activation statistics from 4 different images on each layer. This significantly reduces the need for a large mini-batch size.

Self-Adversarial Training (SAT) also represents a new data augmentation technique that operates in 2 forward backward stages. In the 1st stage the neural network alters the original image instead of the network weights. In this way the neural network executes an adversarial attack on itself, altering the original image to create the deception that there is no desired object on the image. In the 2nd stage, the neural network is trained to detect an object on this modified image in the normal way.
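
A hedged PyTorch sketch of the two SAT stages; the paper does not specify the exact image-update rule, so the FGSM-style sign step and epsilon below are assumptions.

```python
import torch

def self_adversarial_step(model, loss_fn, images, targets, optimizer, eps=0.01):
    # Stage 1: update the image, not the weights, to hide the objects
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), targets)
    loss.backward()
    with torch.no_grad():
        perturbed = images + eps * images.grad.sign()  # adversarial image
    # Stage 2: train the weights normally on the perturbed image
    optimizer.zero_grad()   # discard weight gradients accumulated in stage 1
    loss = loss_fn(model(perturbed.detach()), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```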

CmBN represents a CBN modified version, as shown in Figure 4, defined as Cross mini-Batch Normalization (CmBN). This collects statistics only between mini-batches within a single batch. We modify SAM from spatial-wise attention to point-wise attention, and replace the shortcut connection of PAN with concatenation, as shown in Figure 5 and Figure 6, respectively.
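
A sketch of the modified SAM in PyTorch; the exact layer shapes of Figure 5 are not reproduced here, so this is an assumption of the idea: the original SAM pools across channels and predicts one spatial map, while the modified version applies a 1 × 1 convolution plus sigmoid directly, giving point-wise (per-location, per-channel) weights.

```python
import torch
import torch.nn as nn

class ModifiedSAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))  # point-wise attention weights
```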

3.4. YOLOv4

In this section, we shall elaborate the details of YOLOv4. YOLOv4 consists of:

  • Backbone: CSPDarknet53 [81]
  • Neck: SPP [25], PAN [49]
  • Head: YOLOv3 [63]

YOLO v4 uses:

  • Bag of Freebies (BoF) for backbone: CutMix and Mosaic data augmentation, DropBlock regularization, Class label smoothing
  • Bag of Specials (BoS) for backbone: Mish activation, Cross-stage partial connections (CSP), Multi-input weighted residual connections (MiWRC)
  • Bag of Freebies (BoF) for detector: CIoU-loss, CmBN, DropBlock regularization, Mosaic data augmentation, Self-Adversarial Training, Eliminate grid sensitivity, Using multiple anchors for a single ground truth, Cosine annealing scheduler [52], Optimal hyperparameters, Random training shapes
  • Bag of Specials (BoS) for detector: Mish activation, SPP-block, SAM-block, PAN path-aggregation block, DIoU-NMS

4. Experiments

We test the influence of different training improvement techniques on the accuracy of the classifier on the ImageNet (ILSVRC 2012 val) dataset, and then on the accuracy of the detector on the MS COCO (test-dev 2017) dataset.

4.1. Experimental setup

In ImageNet image classification experiments, the default hyper-parameters are as follows: the number of training steps is 8,000,000; the batch size and the mini-batch size are 128 and 32, respectively; the polynomial decay learning rate scheduling strategy is adopted with an initial learning rate of 0.1; the number of warm-up steps is 1,000; the momentum and weight decay are respectively set as 0.9 and 0.005. All of our BoS experiments use the same hyper-parameters as the default setting, and in the BoF experiments, we add an additional 50% of training steps. In the BoF experiments, we verify MixUp, CutMix, Mosaic, blurring data augmentation, and label smoothing regularization methods. In the BoS experiments, we compare the effects of the LReLU, Swish, and Mish activation functions. All experiments are trained with a 1080 Ti or 2080 Ti GPU.
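
The classifier schedule can be sketched as follows; the decay power of 4 matches darknet's default "poly" policy and the linear warm-up ramp is an assumption, since the paper states only the initial rate, warm-up length, and step count.

```python
def learning_rate(step, base_lr=0.1, warmup=1000, max_steps=8_000_000, power=4):
    if step < warmup:
        return base_lr * step / warmup                 # warm-up ramp
    return base_lr * (1 - step / max_steps) ** power   # polynomial decay

for s in (500, 1000, 4_000_000, 7_900_000):
    print(s, learning_rate(s))
```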

In MS COCO object detection experiments, the default hyper-parameters are as follows: the number of training steps is 500,500; the step decay learning rate scheduling strategy is adopted with an initial learning rate of 0.01, multiplied by a factor of 0.1 at the 400,000 steps and the 450,000 steps, respectively; the momentum and weight decay are respectively set as 0.9 and 0.0005. All architectures use a single GPU to execute multi-scale training with a batch size of 64, while the mini-batch size is 8 or 4 depending on the architecture and GPU memory limitation. Except for using the genetic algorithm for hyper-parameter search experiments, all other experiments use the default settings. The genetic algorithm used YOLOv3-SPP to train with GIoU loss and searched 300 epochs for min-val 5k sets. We adopt the searched learning rate 0.00261, momentum 0.949, IoU threshold for assigning ground truth 0.213, and loss normalizer 0.07 for the genetic algorithm experiments. We have verified a large number of BoF, including grid sensitivity elimination, mosaic data augmentation, IoU threshold, genetic algorithm, class label smoothing, cross mini-batch normalization, self-adversarial training, cosine annealing scheduler, dynamic mini-batch size, DropBlock, Optimized Anchors, and different kinds of IoU losses. We also conduct experiments on various BoS, including Mish, SPP, SAM, RFB, BiFPN, and Gaussian YOLO [8]. For all experiments, we only use one GPU for training, so techniques such as syncBN that optimize multiple GPUs are not used.
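
The genetic hyper-parameter search can be pictured with the small mutate-and-select loop below; the paper does not give its exact scheme, so the multiplicative Gaussian mutation and the hypothetical evolve/train_and_eval names are assumptions in the spirit of the ultralytics implementation.

```python
import random

def evolve(fitness, init, generations=30, children=10, sigma=0.2):
    """Mutate the best hyper-parameter set; keep any child that scores higher.

    fitness(params) would train the detector and return, e.g., AP on the
    min-val 5k split, so each call is expensive.
    """
    best, best_fit = dict(init), fitness(init)
    for _ in range(generations):
        for _ in range(children):
            cand = {k: max(v * (1.0 + random.gauss(0.0, sigma)), 1e-6)
                    for k, v in best.items()}
            f = fitness(cand)
            if f > best_fit:
                best, best_fit = cand, f
    return best

# usage sketch (hypothetical): evolve(train_and_eval, {"lr": 0.01, "momentum": 0.9})
```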

4.2. Influence of different features on Classifier training

First, we study the influence of different features on classifier training; specifically, the influence of Class label smoothing, the influence of different data augmentation techniques, bilateral blurring, MixUp, CutMix and Mosaic, as shown in Figure 7, and the influence of different activations, such as Leaky-ReLU (by default), Swish, and Mish.

Figure 7: Various methods of data augmentation

In our experiments, as illustrated in Table 2, the classifier's accuracy is improved by introducing features such as CutMix and Mosaic data augmentation, Class label smoothing, and Mish activation. As a result, our BoF-backbone (Bag of Freebies) for classifier training includes the following: CutMix and Mosaic data augmentation and Class label smoothing. In addition we use Mish activation as a complementary option, as shown in Table 2 and Table 3.

4.3. Influence of different features on Detector training

Further study concerns the influence of different Bag-of-Freebies (BoF-detector) on the detector training accuracy, as shown in Table 4. We significantly expand the BoF list through studying different features that increase the detector accuracy without affecting FPS:

  • M: Mosaic data augmentation - using the 4-image mosaic during training instead of single image
  • IT: IoU threshold - using multiple anchors for a single ground truth IoU (truth, anchor) > IoU threshold
  • GA: Genetic algorithms - using genetic algorithms for selecting the optimal hyperparameters during network training on the first 10% of time periods
  • LS: Class label smoothing - using class label smoothing for sigmoid activation
  • CBN: CmBN - using Cross mini-Batch Normalization for collecting statistics inside the entire batch, instead of collecting statistics inside a single mini-batch
  • CA: Cosine annealing scheduler - altering the learning rate during sinusoid training
  • DM: Dynamic mini-batch size - automatic increase of mini-batch size during small resolution training by using Random training shapes
  • OA: Optimized Anchors - using the optimized anchors for training with the 512x512 network resolution
  • GIoU, CIoU, DIoU, MSE - using different loss algorithms for bounding box regression

Further study concerns the influence of different Bagof-Specials (BoS-detector) on the detector training accuracy, including PAN, RFB, SAM, Gaussian YOLO (G), and ASFF, as shown in Table 5. In our experiments, the detector gets best performance when using SPP, PAN, and SAM.

4.4. Influence of different backbones and pretrained weightings on Detector training

Further on we study the influence of different backbone models on the detector accuracy, as shown in Table 6. We notice that the model characterized with the best classification accuracy is not always the best in terms of the detector accuracy.

First, although classification accuracy of CSPResNeXt50 models trained with different features is higher compared to CSPDarknet53 models, the CSPDarknet53 model shows higher accuracy in terms of object detection.

Second, using BoF and Mish for the CSPResNeXt50 classifier training increases its classification accuracy, but further application of these pre-trained weightings for detector training reduces the detector accuracy. However, using BoF and Mish for the CSPDarknet53 classifier training increases the accuracy of both the classifier and the detector which uses this classifier's pre-trained weightings. The net result is that the backbone CSPDarknet53 is more suitable for the detector than CSPResNeXt50.

We observe that the CSPDarknet53 model demonstrates a greater ability to increase the detector accuracy owing to various improvements.

4.5. Influence of different mini-batch size on Detector training

Finally, we analyze the results obtained with models trained with different mini-batch sizes, and the results are shown in Table 7. From the results shown in Table 7, we found that after adding BoF and BoS training strategies, the mini-batch size has almost no effect on the detector’s performance. This result shows that after the introduction of BoF and BoS, it is no longer necessary to use expensive GPUs for training. In other words, anyone can use only a conventional GPU to train an excellent detector.

Figure 8: Comparison of the speed and accuracy of different object detectors. (Some articles stated the FPS of their detectors for only one of the GPUs: Maxwell/Pascal/Volta)

5. Results

Comparison of the results obtained with other state-of-the-art object detectors is shown in Figure 8. Our YOLOv4 is located on the Pareto optimality curve and is superior to the fastest and most accurate detectors in terms of both speed and accuracy.

Since different methods use GPUs of different architectures for inference time verification, we operate YOLOv4 on commonly adopted GPUs of Maxwell, Pascal, and Volta architectures, and compare them with other state-of-the-art methods. Table 8 lists the frame rate comparison results of using Maxwell GPU, and it can be GTX Titan X (Maxwell) or Tesla M40 GPU. Table 9 lists the frame rate comparison results of using Pascal GPU, and it can be Titan X (Pascal), Titan Xp, GTX 1080 Ti, or Tesla P100 GPU. As for Table 10, it lists the frame rate comparison results of using Volta GPU, and it can be Titan Volta or Tesla V100 GPU.

6. Conclusions

We offer a state-of-the-art detector which is faster (FPS) and more accurate (MS COCO AP50:95 and AP50) than all available alternative detectors. The detector described can be trained and used on a conventional GPU with 8-16 GB VRAM, which makes its broad use possible. The original concept of one-stage anchor-based detectors has proven its viability. We have verified a large number of features and selected those that improve the accuracy of both the classifier and the detector. These features can be used as best practice for future studies and developments.

7. Acknowledgements

The authors wish to thank Glenn Jocher for the ideas of Mosaic data augmentation, the selection of hyper-parameters by using genetic algorithms, and solving the grid sensitivity problem: https://github.com/ultralytics/yolov3.
