YOLOv1论文(附论文下载超链接):You Only Look Once: Unified, Real-Time Object Detection

声明:论文翻译仅用来学习,转载请注明出处

You Only Look Once: Unified, Real-Time Object Detection

你只看一次:统一、实时的目标检测

Abstract

摘要

We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance.

Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.

我们提出了YOLO,一种检测目标的新方法。之前关于目标检测的工作重新利用分类器来进行检测。相反,我们把目标检测看作是一个回归问题,即空间分离的边界框和相关的类别概率。一个单一的神经网络在一次评估中直接从完整的图像中预测出边界框和类别概率。由于整个检测管道是一个单一的网络,它可以直接对检测性能进行端到端的优化。

我们的统一架构是非常快的。我们的基本YOLO模型以每秒45帧的速度实时处理图像。该网络的一个较小的版本,即Fast YOLO,处理速度达到了惊人的155帧每秒,同时仍然实现了其他实时检测器的两倍的mAP。与最先进的检测系统相比,YOLO会出现更多的定位错误,但在背景上预测假阳性的可能性更小。最后,YOLO学习了非常普遍的目标表征。当从自然图像泛化到艺术品等其他领域时,它优于其他检测方法,包括DPM和R-CNN。

1. Introduction

1. 引言

Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms for object detection would allow computers to drive cars without specialized sensors, enable assistive devices to convey real-time scene information to human users, and unlock the potential for general purpose, responsive robotic systems.

Current detection systems repurpose classifiers to perform detection. To detect an object, these systems take a classifier for that object and evaluate it at various locations and scales in a test image. Systems like deformable parts models (DPM) use a sliding window approach where the classifier is run at evenly spaced locations over the entire image [10].

人类看一眼图像,就能立即知道图像中的物体是什么,它们在哪里,以及它们如何互动。人类的视觉系统是快速和准确的,使我们能够在几乎没有意识的情况下完成复杂的任务,如驾驶。快速、准确的目标检测算法将使计算机能够在没有专门传感器的情况下驾驶汽车,使辅助设备能够向人类用户传达实时场景信息,并释放出通用的、反应灵敏的机器人系统的潜力。

目前的检测系统重新利用分类器来进行检测。为了检测一个目标,这些系统采用了该目标的分类器,并在测试图像的不同位置和比例上对其进行评估。像可变形部件模型(DPM)这样的系统使用滑动窗口方法,在整个图像上以均匀间隔的位置运行分类器[10]。

More recent approaches like R-CNN use region proposal methods to first generate potential bounding boxes in an image and then run a classifier on these proposed boxes. After classification, post-processing is used to refine the bounding boxes, eliminate duplicate detections, and rescore the boxes based on other objects in the scene [13]. These complex pipelines are slow and hard to optimize because each individual component must be trained separately.

We reframe object detection as a single regression problem, straight from image pixels to bounding box coordinates and class probabilities. Using our system, you only look once (YOLO) at an image to predict what objects are present and where they are.

最近的方法如R-CNN使用区域建议方法,首先在图像中生成潜在的边界框,然后在这些建议的框上运行分类器。在分类之后,后处理被用来细化边界框,消除重复检测,并根据场景中的其他目标对框重新评分[13]。这些复杂的管道很慢,而且很难优化,因为每个单独的组件都必须单独训练。

我们将目标检测重塑为一个单一的回归问题,直接从图像像素到边界框坐标和类别概率。使用我们的系统,你只需看一次(YOLO)图像,就可以预测有哪些目标存在,它们在哪里。

Figure 1: The YOLO Detection System. Processing images with YOLO is simple and straightforward. Our system (1) resizes the input image to 448 × 448, (2) runs a single convolutional network on the image, and (3) thresholds the resulting detections by the model’s confidence.

**图1:YOLO检测系统。**用YOLO处理图像是简单明了的。我们的系统(1)将输入图像的大小调整为448×448,(2)在图像上运行一个单一的卷积网络,(3)根据模型的置信度对产生的检测结果进行阈值处理。

YOLO is refreshingly simple: see Figure 1. A single convolutional network simultaneously predicts multiple bounding boxes and class probabilities for those boxes. YOLO trains on full images and directly optimizes detection performance. This unified model has several benefits over traditional methods of object detection.

First, YOLO is extremely fast. Since we frame detection as a regression problem we don’t need a complex pipeline. We simply run our neural network on a new image at test time to predict detections. Our base network runs at 45 frames per second with no batch processing on a Titan X GPU and a fast version runs at more than 150 fps. This means we can process streaming video in real-time with less than 25 milliseconds of latency. Furthermore, YOLO achieves more than twice the mean average precision of other real-time systems. For a demo of our system running in real-time on a webcam please see our project webpage: http://pjreddie.com/yolo/.

YOLO简单得令人耳目一新:见图1。一个卷积网络同时预测多个边界框和这些框的类别概率。YOLO在完整的图像上进行训练,直接优化检测性能。与传统的目标检测方法相比,这种统一的模型有几个好处。

首先,YOLO的速度非常快。由于我们把检测看作是一个回归问题,所以我们不需要一个复杂的流程。我们只需在测试时在新图像上运行我们的神经网络来预测检测结果。我们的基本网络在Titan X GPU上以每秒45帧的速度运行,没有批量处理,快速版本的运行速度超过150帧。这意味着我们可以实时处理流媒体视频,延迟时间不到25毫秒。此外,YOLO达到了其他实时系统平均精度的两倍以上。关于我们的系统在网络摄像头上实时运行的演示,请看我们的项目网页:http://pjreddie.com/yolo/。

Second, YOLO reasons globally about the image when making predictions. Unlike sliding window and region proposal-based techniques, YOLO sees the entire image during training and test time so it implicitly encodes contextual information about classes as well as their appearance. Fast R-CNN, a top detection method [14], mistakes background patches in an image for objects because it can’t see the larger context. YOLO makes less than half the number of background errors compared to Fast R-CNN.

第二，YOLO在进行预测时会对图像进行全局推理。与基于滑动窗口和区域建议的技术不同，YOLO在训练和测试期间都能看到整张图像，因此它隐式地编码了关于类别及其外观的上下文信息。Fast R-CNN是一种顶级的检测方法[14]，但由于看不到更大范围的上下文，它会把图像中的背景块误认为是目标。与Fast R-CNN相比，YOLO的背景错误数量不到其一半。

Third, YOLO learns generalizable representations of objects. When trained on natural images and tested on artwork, YOLO outperforms top detection methods like DPM and R-CNN by a wide margin. Since YOLO is highly generalizable it is less likely to break down when applied to new domains or unexpected inputs.

YOLO still lags behind state-of-the-art detection systems in accuracy. While it can quickly identify objects in images it struggles to precisely localize some objects, especially small ones. We examine these tradeoffs further in our experiments.

All of our training and testing code is open source. A variety of pretrained models are also available to download.

第三,YOLO学习了目标的可概括性表征。当对自然图像进行训练并对艺术品进行测试时,YOLO的性能远远超过了DPM和R-CNN等顶级检测方法。由于YOLO具有高度的通用性,它在应用于新领域或意外输入时不太可能崩溃。

YOLO在准确性方面仍然落后于最先进的检测系统。虽然它可以快速识别图像中的物体,但它在精确定位一些物体,特别是小物体方面却很困难。我们在实验中进一步研究这些权衡。

我们所有的训练和测试代码都是开源的。各种预训练的模型也可以下载。

2. Unified Detection

2. 统一检测

We unify the separate components of object detection into a single neural network. Our network uses features from the entire image to predict each bounding box. It also predicts all bounding boxes across all classes for an image simultaneously. This means our network reasons globally about the full image and all the objects in the image. The YOLO design enables end-to-end training and real-time speeds while maintaining high average precision.

Our system divides the input image into an S × S grid. If the center of an object falls into a grid cell, that grid cell is responsible for detecting that object.

Each grid cell predicts B bounding boxes and confidence scores for those boxes. These confidence scores reflect how confident the model is that the box contains an object and also how accurate it thinks the box is that it predicts. Formally we define confidence as $\Pr(\text{Object}) * \mathrm{IOU}_{\text{pred}}^{\text{truth}}$. If no object exists in that cell, the confidence scores should be zero. Otherwise we want the confidence score to equal the intersection over union (IOU) between the predicted box and the ground truth.

Each bounding box consists of 5 predictions: x, y, w, h, and confidence. The (x, y) coordinates represent the center of the box relative to the bounds of the grid cell. The width and height are predicted relative to the whole image. Finally the confidence prediction represents the IOU between the predicted box and any ground truth box.

我们将目标检测的独立组件统一到一个神经网络中。我们的网络使用整个图像的特征来预测每个边界框。它还同时预测一个图像的所有类别的所有边界框。这意味着我们的网络对整个图像和图像中的所有目标进行全局推理。YOLO设计实现了端到端的训练和实时速度,同时保持了高平均精度。

我们的系统将输入图像划分为一个S×S网格。如果一个目标的中心落入一个网格单元,该网格单元就负责检测该物体。

每个网格单元预测B个边界框以及这些框的置信度分数。这些置信度分数反映了模型认为框中包含目标的把握，以及它认为自己预测的框有多准确。形式上，我们将置信度定义为 $\Pr(\text{Object}) * \mathrm{IOU}_{\text{pred}}^{\text{truth}}$。如果该单元格中不存在目标，置信度分数应当为零。否则，我们希望置信度分数等于预测框与真实框之间的交并比（IOU）。

每个边界框由5个预测值组成:x、y、w、h和置信度。(x, y)坐标代表框的中心相对于网格单元的边界。宽度和高度是相对于整个图像的预测。最后,置信度预测表示预测的框和任何真实框之间的IOU。

Each grid cell also predicts C conditional class probabilities, $\Pr(\text{Class}_i \mid \text{Object})$. These probabilities are conditioned on the grid cell containing an object. We only predict one set of class probabilities per grid cell, regardless of the number of boxes B.

At test time we multiply the conditional class probabilities and the individual box confidence predictions.

\begin{equation}
\Pr(\text{Class}_i \mid \text{Object}) * \Pr(\text{Object}) * \mathrm{IOU}_{\text{pred}}^{\text{truth}} = \Pr(\text{Class}_i) * \mathrm{IOU}_{\text{pred}}^{\text{truth}}
\end{equation}
which gives us class-specific confidence scores for each box. These scores encode both the probability of that class appearing in the box and how well the predicted box fits the object.

每个网格单元还预测C个条件类别概率 $\Pr(\text{Class}_i \mid \text{Object})$。这些概率以该网格单元包含目标为条件。无论边界框的数量B是多少，我们对每个网格单元只预测一组类别概率。

在测试时,我们将条件类概率和单个框的置信度预测相乘:
\begin{equation}
\Pr(\text{Class}_i \mid \text{Object}) * \Pr(\text{Object}) * \mathrm{IOU}_{\text{pred}}^{\text{truth}} = \Pr(\text{Class}_i) * \mathrm{IOU}_{\text{pred}}^{\text{truth}}
\end{equation}
这为我们提供了每个框的特定类别置信度分数。这些分数同时编码了该类别出现在框中的概率，以及预测框与目标的匹配程度。
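下面给出上述打分公式在测试阶段的一个极简示意（非论文官方实现，假设使用numpy，函数名与数值均为示例）：把每个框的置信度与该单元格的条件类别概率相乘，即得到每个框、每个类别的置信度分数。

```python
import numpy as np

def class_specific_scores(class_probs, box_conf):
    """示意实现：class_probs 为某个网格单元的条件类别概率，形状 (C,)；
    box_conf 为该单元格 B 个框的置信度 Pr(Object)*IOU，形状 (B,)。
    返回 (B, C) 的特定类别置信度分数。"""
    return box_conf[:, None] * class_probs[None, :]

# 用法示例（数值为虚构）
scores = class_specific_scores(np.array([0.1, 0.7, 0.2]), np.array([0.9, 0.3]))
print(scores.shape)  # (2, 3)
```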

Figure 2: The Model. Our system models detection as a regression problem. It divides the image into an S × S grid and for each grid cell predicts B bounding boxes, confidence for those boxes, and C class probabilities. These predictions are encoded as an S × S × (B ∗ 5 + C) tensor.

**图2:模型。**我们的系统将检测建模为一个回归问题。它将图像划分为一个S×S的网格,并为每个网格单元预测B个边界框、这些框的置信度和C类概率。这些预测被编码为一个S×S×(B∗5+C)张量。

For evaluating YOLO on PASCAL VOC, we use S = 7, B = 2. PASCAL VOC has 20 labelled classes so C = 20. Our final prediction is a 7 × 7 × 30 tensor.

为了在PASCAL VOC上评估YOLO，我们使用S=7，B=2。PASCAL VOC有20个标注类别，所以C=20。我们的最终预测是一个7×7×30的张量。
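用一小段示意代码（假设使用numpy）可以直观看出这个7×7×30张量的构成；注意论文只规定了总维度为S×S×(B∗5+C)，下面具体的通道排列方式是假设的。

```python
import numpy as np

S, B, C = 7, 2, 20                              # PASCAL VOC 设置
pred = np.random.rand(S, S, B * 5 + C)          # 示意：网络输出张量 7×7×30

# 假设前 B*5 个通道是每个框的 (x, y, w, h, conf)，其后 C 个通道是条件类别概率
boxes = pred[..., :B * 5].reshape(S, S, B, 5)   # (7, 7, 2, 5)
class_probs = pred[..., B * 5:]                 # (7, 7, 20)

# 每个框的特定类别分数 = 框置信度 × 条件类别概率（对应上文测试时的打分公式）
scores = boxes[..., 4:5] * class_probs[:, :, None, :]   # (7, 7, 2, 20)
print(pred.shape, scores.shape)
```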

2.1. Network Design

2.1. 网络设计

We implement this model as a convolutional neural network and evaluate it on the PASCAL VOC detection dataset [9]. The initial convolutional layers of the network extract features from the image while the fully connected layers predict the output probabilities and coordinates.

Our network architecture is inspired by the GoogLeNet model for image classification [34]. Our network has 24 convolutional layers followed by 2 fully connected layers. Instead of the inception modules used by GoogLeNet, we simply use 1 × 1 reduction layers followed by 3 × 3 convolutional layers, similar to Lin et al [22]. The full network is shown in Figure 3.

我们将这个模型实现为卷积神经网络,并在PASCAL VOC检测数据集[9]上对其进行评估。网络的初始卷积层从图像中提取特征,而全连接层则预测输出概率和坐标。

我们的网络架构受到用于图像分类的GoogLeNet模型的启发[34]。我们的网络有24个卷积层，后接2个全连接层。我们没有使用GoogLeNet的inception模块，而是简单地使用1×1降维层后接3×3卷积层，与Lin等人[22]的做法类似。完整的网络结构如图3所示。

Figure 3: The Architecture. Our detection network has 24 convolutional layers followed by 2 fully connected layers. Alternating 1 × 1 convolutional layers reduce the features space from preceding layers. We pretrain the convolutional layers on the ImageNet classification task at half the resolution (224 × 224 input image) and then double the resolution for detection.

**图3:架构。**我们的检测网络有24个卷积层，后接2个全连接层。交替出现的1×1卷积层降低了前面各层的特征空间。我们在ImageNet分类任务上以一半的分辨率（224×224的输入图像）对卷积层进行预训练，然后将分辨率提高一倍用于检测。

We also train a fast version of YOLO designed to push the boundaries of fast object detection. Fast YOLO uses a neural network with fewer convolutional layers (9 instead of 24) and fewer filters in those layers. Other than the size of the network, all training and testing parameters are the same between YOLO and Fast YOLO.

The final output of our network is the 7 × 7 × 30 tensor of predictions.

我们还训练了一个快速版本的YOLO,旨在推动快速目标检测的边界。快速YOLO使用的神经网络的卷积层较少(9层而不是24层),这些层中的滤波器也较少。除了网络的大小,所有的训练和测试参数在YOLO和Fast YOLO之间都是一样的。

我们网络的最终输出是7×7×30的预测张量。
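下面是一个高度简化的结构示意（假设使用PyTorch；层数、通道数均为示例，并非论文原版的24层网络），只用来演示"1×1降维层接3×3卷积层"的交替模式，以及最终由两个全连接层输出7×7×30张量的头部结构。

```python
import torch
import torch.nn as nn

class TinyYOLO(nn.Module):
    """极简示意网络（非论文原版结构），输入 448×448，输出 (N, 7, 7, 30)。"""
    def __init__(self, S=7, B=2, C=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.LeakyReLU(0.1), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.LeakyReLU(0.1), nn.MaxPool2d(2),
            nn.Conv2d(128, 64, 1), nn.LeakyReLU(0.1),             # 1×1 降维层
            nn.Conv2d(64, 256, 3, padding=1), nn.LeakyReLU(0.1), nn.MaxPool2d(2),
            nn.Conv2d(256, 128, 1), nn.LeakyReLU(0.1),            # 再次 1×1 降维
            nn.Conv2d(128, 512, 3, padding=1), nn.LeakyReLU(0.1), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(7),                              # 压到 7×7 特征图
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 1024), nn.LeakyReLU(0.1),
            nn.Linear(1024, S * S * (B * 5 + C)),                 # 最后一层为线性激活
        )
        self.S, self.B, self.C = S, B, C

    def forward(self, x):                                         # x: (N, 3, 448, 448)
        out = self.head(self.features(x))
        return out.view(-1, self.S, self.S, self.B * 5 + self.C)

# 用法示例
y = TinyYOLO()(torch.randn(1, 3, 448, 448))
print(y.shape)   # torch.Size([1, 7, 7, 30])
```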

2.2. Training

2.2. 训练

We pretrain our convolutional layers on the ImageNet 1000-class competition dataset [30]. For pretraining we use the first 20 convolutional layers from Figure 3 followed by a average-pooling layer and a fully connected layer. We train this network for approximately a week and achieve a single crop top-5 accuracy of 88% on the ImageNet 2012 validation set, comparable to the GoogLeNet models in Caffe’s Model Zoo [24]. We use the Darknet framework for all training and inference [26].

We then convert the model to perform detection. Ren et al. show that adding both convolutional and connected layers to pretrained networks can improve performance [29]. Following their example, we add four convolutional layers and two fully connected layers with randomly initialized weights. Detection often requires fine-grained visual information so we increase the input resolution of the network from 224 × 224 to 448 × 448.

Our final layer predicts both class probabilities and bounding box coordinates. We normalize the bounding box width and height by the image width and height so that they fall between 0 and 1. We parametrize the bounding box x and y coordinates to be offsets of a particular grid cell location so they are also bounded between 0 and 1.

We use a linear activation function for the final layer and all other layers use the following leaky rectified linear activation:
\begin{equation}
\phi(x) =
\begin{cases}
x, & \text{if } x > 0 \\
0.1x, & \text{otherwise}
\end{cases}
\end{equation}
我们在ImageNet 1000类竞赛数据集[30]上预训练卷积层。预训练时，我们使用图3中的前20个卷积层，后接一个平均池化层和一个全连接层。我们对这个网络训练了大约一周，在ImageNet 2012验证集上取得了88%的单裁剪（single crop）top-5准确率，与Caffe的Model Zoo[24]中的GoogLeNet模型相当。我们使用Darknet框架进行所有的训练和推理[26]。

然后我们转换模型来进行检测。Ren等人的研究表明,在预训练的网络中同时添加卷积层和连接层可以提高性能[29]。按照他们的例子,我们添加了四个卷积层和两个全连接层,权重随机初始化。检测通常需要精细的视觉信息,因此我们将网络的输入分辨率从224×224提高到448×448。

我们的最后一层同时预测类别概率和边界框坐标。我们用图像的宽度和高度对边界框的宽度和高度进行归一化，使它们落在0和1之间。我们将边界框的x和y坐标参数化为相对于特定网格单元位置的偏移量，因此它们也介于0和1之间。
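下面是把一个真实框编码成上述参数化形式的小示意（假设实现，函数名和变量名均为示例）：宽高相对整幅图像归一化，中心坐标换算为其所在网格单元内的偏移量。

```python
import numpy as np

def encode_box(cx, cy, w, h, img_w, img_h, S=7):
    """示意（非官方实现）：把以像素为单位的框中心 (cx, cy) 和宽高 (w, h)
    编码为训练目标，所有值都落在 0 到 1 之间。"""
    col = int(cx / img_w * S)          # 中心落入的网格列
    row = int(cy / img_h * S)          # 中心落入的网格行
    x = cx / img_w * S - col           # 单元内的 x 偏移
    y = cy / img_h * S - row           # 单元内的 y 偏移
    return row, col, np.array([x, y, w / img_w, h / img_h])

print(encode_box(224, 100, 60, 120, 448, 448))
```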

我们在最后一层使用线性激活函数，所有其他层都使用下面的leaky修正线性激活（leaky ReLU）：
\begin{equation}
\phi(x) =
\begin{cases}
x, & \text{if } x > 0 \\
0.1x, & \text{otherwise}
\end{cases}
\end{equation}
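上面的leaky激活函数可以直接写成一行numpy代码（仅为示意）：

```python
import numpy as np

def leaky_relu(x):
    # x > 0 时输出 x，否则输出 0.1x
    return np.where(x > 0, x, 0.1 * x)

print(leaky_relu(np.array([-2.0, 0.5])))  # [-0.2  0.5]
```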
We optimize for sum-squared error in the output of our model. We use sum-squared error because it is easy to optimize, however it does not perfectly align with our goal of maximizing average precision. It weights localization error equally with classification error which may not be ideal.

Also, in every image many grid cells do not contain any object. This pushes the “confidence” scores of those cells towards zero, often overpowering the gradient from cells that do contain objects. This can lead to model instability, causing training to diverge early on.

To remedy this, we increase the loss from bounding box coordinate predictions and decrease the loss from confidence predictions for boxes that don’t contain objects. We use two parameters, λcoord and λnoobj to accomplish this. We set λcoord = 5 and λnoobj = .5.

我们对模型输出的平方误差之和进行优化。我们使用平方误差,因为它很容易优化,然而它并不完全符合我们最大化平均精度的目标。它对定位误差和分类误差的权重相同,这可能不是很理想。

另外，在每张图像中，许多网格单元并不包含任何目标。这会把这些单元格的"置信度"分数推向零，常常压过包含目标的单元格所产生的梯度。这可能导致模型不稳定，使训练在早期就发散。

为了解决这个问题,我们增加了边界框坐标预测的损失,减少了不包含物体的框的置信度预测的损失。我们使用两个参数,λcoord和λnoobj来实现这一目标。我们设定λcoord = 5,λnoobj = 0.5。

Sum-squared error also equally weights errors in large boxes and small boxes. Our error metric should reflect that small deviations in large boxes matter less than in small boxes. To partially address this we predict the square root of the bounding box width and height instead of the width and height directly.

YOLO predicts multiple bounding boxes per grid cell. At training time we only want one bounding box predictor to be responsible for each object. We assign one predictor to be “responsible” for predicting an object based on which prediction has the highest current IOU with the ground truth. This leads to specialization between the bounding box predictors. Each predictor gets better at predicting certain sizes, aspect ratios, or classes of object, improving overall recall.

平方和误差还对大框和小框中的误差给予同样的权重。我们的误差度量应当反映出：大框中的小偏差没有小框中的小偏差那么重要。为了部分解决这个问题，我们预测边界框宽度和高度的平方根，而不是直接预测宽度和高度。

YOLO为每个网格单元预测多个边界框。在训练时，我们只希望每个目标由一个边界框预测器负责。我们根据哪个预测器当前的预测与真实框的IOU最高，来指定该预测器"负责"这个目标。这会使边界框预测器之间产生专业化分工。每个预测器会更擅长预测特定尺寸、长宽比或类别的目标，从而提高整体召回率。
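"负责"预测器的指定可以用下面的小示意说明（假设实现；框用[x1, y1, x2, y2]表示，函数名与数值均为示例）：

```python
import numpy as np

def iou(box_a, box_b):
    """示意实现：框格式为 [x1, y1, x2, y2]，返回交并比。"""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def responsible_predictor(pred_boxes, gt_box):
    # 同一网格单元的 B 个预测框中，与真实框 IOU 最高者"负责"该目标
    ious = [iou(p, gt_box) for p in pred_boxes]
    return int(np.argmax(ious)), max(ious)

print(responsible_predictor([[0, 0, 10, 10], [3, 3, 13, 13]], [1, 1, 11, 11]))
```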

During training we optimize the following, multi-part loss function:

在训练过程中,我们优化以下的多部分损失函数:
\begin{equation}
\begin{gathered}
\lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ \left(x_i - \hat{x}_i\right)^2 + \left(y_i - \hat{y}_i\right)^2 \right] \\
+ \lambda_{\text{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
+ \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left(C_i - \hat{C}_i\right)^2 \\
+ \lambda_{\text{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{noobj}} \left(C_i - \hat{C}_i\right)^2 \\
+ \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\text{obj}} \sum_{c \in \text{classes}} \left(p_i(c) - \hat{p}_i(c)\right)^2
\end{gathered}
\end{equation}
where $\mathbb{1}_i^{\text{obj}}$ denotes if object appears in cell $i$ and $\mathbb{1}_{ij}^{\text{obj}}$ denotes that the $j$th bounding box predictor in cell $i$ is "responsible" for that prediction.

其中 $\mathbb{1}_i^{\text{obj}}$ 表示目标是否出现在单元格 $i$ 中，$\mathbb{1}_{ij}^{\text{obj}}$ 表示单元格 $i$ 中的第 $j$ 个边界框预测器对该预测"负责"。

Note that the loss function only penalizes classification error if an object is present in that grid cell (hence the conditional class probability discussed earlier). It also only penalizes bounding box coordinate error if that predictor is “responsible” for the ground truth box (i.e. has the highest IOU of any predictor in that grid cell).

注意，只有当目标出现在该网格单元中时，损失函数才会惩罚分类误差（这也是前面讨论条件类别概率的原因）。同样，只有当该预测器对真实框"负责"（即在该网格单元的所有预测器中与真实框的IOU最高）时，损失函数才会惩罚边界框坐标误差。
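上面的多部分损失函数可以写成如下的数值化示意（非官方代码，假设所有输入都已按前文方式编码；为简洁起见省略了最后的类别项，它就是对含目标单元格的类别概率再求一次平方误差和）：

```python
import numpy as np

LAMBDA_COORD, LAMBDA_NOOBJ = 5.0, 0.5

def yolo_loss(pred, target, obj_mask, resp_mask):
    """示意实现。pred/target: (S, S, B, 5) 数组，最后一维为 (x, y, w, h, conf)；
    obj_mask:  (S, S) 布尔数组，单元格内是否有目标（1_i^obj）；
    resp_mask: (S, S, B) 布尔数组，哪个预测框"负责"该目标（1_ij^obj）。"""
    noobj_mask = ~resp_mask                    # 对应 1_ij^noobj：该预测框不负责任何目标

    xy_err = np.sum(resp_mask[..., None] * (pred[..., :2] - target[..., :2]) ** 2)
    wh_err = np.sum(resp_mask[..., None] *
                    (np.sqrt(pred[..., 2:4]) - np.sqrt(target[..., 2:4])) ** 2)
    conf_obj = np.sum(resp_mask * (pred[..., 4] - target[..., 4]) ** 2)
    conf_noobj = np.sum(noobj_mask * (pred[..., 4] - target[..., 4]) ** 2)

    return LAMBDA_COORD * (xy_err + wh_err) + conf_obj + LAMBDA_NOOBJ * conf_noobj

# 用法示例（数值为虚构）
S, B = 7, 2
pred = np.random.rand(S, S, B, 5)
target = np.zeros((S, S, B, 5))
obj_mask = np.zeros((S, S), bool)
resp_mask = np.zeros((S, S, B), bool)
obj_mask[3, 3] = True
resp_mask[3, 3, 0] = True
target[3, 3, 0] = [0.5, 0.5, 0.2, 0.3, 1.0]
print(yolo_loss(pred, target, obj_mask, resp_mask))
```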

We train the network for about 135 epochs on the training and validation data sets from PASCAL VOC 2007 and 2012. When testing on 2012 we also include the VOC 2007 test data for training. Throughout training we use a batch size of 64, a momentum of 0.9 and a decay of 0.0005.

我们在PASCAL VOC 2007和2012的训练集和验证集上对网络进行了大约135个epoch的训练。在VOC 2012上测试时，我们还把VOC 2007的测试数据纳入训练。在整个训练过程中，我们使用64的batch size、0.9的动量和0.0005的权重衰减。

Our learning rate schedule is as follows: For the first epochs we slowly raise the learning rate from $10^{-3}$ to $10^{-2}$. If we start at a high learning rate our model often diverges due to unstable gradients. We continue training with $10^{-2}$ for 75 epochs, then $10^{-3}$ for 30 epochs, and finally $10^{-4}$ for 30 epochs.

我们的学习率安排如下：在最初的几个epoch中，我们将学习率从 $10^{-3}$ 缓慢提高到 $10^{-2}$。如果一开始就使用很高的学习率，模型常常会因为不稳定的梯度而发散。之后我们以 $10^{-2}$ 的学习率训练75个epoch，再以 $10^{-3}$ 训练30个epoch，最后以 $10^{-4}$ 训练30个epoch。
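按上面的描述，学习率随epoch的变化可以写成如下分段函数（仅为示意；升温阶段的具体epoch数论文未给出，这里假设为5）：

```python
def learning_rate(epoch):
    """示意：从 1e-3 缓升到 1e-2，再依次以 1e-2、1e-3、1e-4 训练。"""
    warmup = 5                                   # 假设的升温阶段长度
    if epoch < warmup:
        return 1e-3 + (1e-2 - 1e-3) * epoch / warmup
    if epoch < warmup + 75:
        return 1e-2
    if epoch < warmup + 75 + 30:
        return 1e-3
    return 1e-4

print([round(learning_rate(e), 4) for e in (0, 3, 50, 90, 120)])
```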

To avoid overfitting we use dropout and extensive data augmentation. A dropout layer with rate = .5 after the first connected layer prevents co-adaptation between layers [18]. For data augmentation we introduce random scaling and translations of up to 20% of the original image size. We also randomly adjust the exposure and saturation of the image by up to a factor of 1.5 in the HSV color space.

为了避免过拟合，我们使用了dropout和大量的数据增强。在第一个全连接层之后，我们使用比例为0.5的dropout层来防止层与层之间的相互适应[18]。在数据增强方面，我们引入了最多为原始图像尺寸20%的随机缩放和平移。我们还在HSV颜色空间中以最高1.5倍的系数随机调整图像的曝光度和饱和度。
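这部分数据增强可以用如下示意代码说明（假设实现；随机参数的采样分布为假设，HSV调整假设图像已转换到取值范围[0,1]的HSV数组）：

```python
import numpy as np

def augment_params(rng=np.random):
    """示意：采样一组随机增强参数。"""
    scale = 1.0 + rng.uniform(-0.2, 0.2)          # 最多 ±20% 的随机缩放
    tx, ty = rng.uniform(-0.2, 0.2, size=2)       # 最多 ±20% 的随机平移（相对图像尺寸）
    exposure = rng.uniform(1 / 1.5, 1.5)          # HSV 空间中最多 1.5 倍的曝光调整
    saturation = rng.uniform(1 / 1.5, 1.5)        # 最多 1.5 倍的饱和度调整
    return scale, tx, ty, exposure, saturation

def adjust_hsv(hsv, exposure, saturation):
    """对取值范围 [0,1] 的 HSV 图像数组调整饱和度（S 通道）和曝光（V 通道）。"""
    out = hsv.copy()
    out[..., 1] = np.clip(out[..., 1] * saturation, 0, 1)
    out[..., 2] = np.clip(out[..., 2] * exposure, 0, 1)
    return out

print(augment_params())
```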

2.3. Inference

2.3 推理

Just like in training, predicting detections for a test image only requires one network evaluation. On PASCAL VOC the network predicts 98 bounding boxes per image and class probabilities for each box. YOLO is extremely fast at test time since it only requires a single network evaluation, unlike classifier-based methods.

与训练时一样，对一张测试图像预测检测结果只需要一次网络评估。在PASCAL VOC上，网络为每张图像预测98个边界框以及每个框的类别概率。与基于分类器的方法不同，YOLO只需要一次网络评估，因此在测试时非常快。

The grid design enforces spatial diversity in the bounding box predictions. Often it is clear which grid cell an object falls in to and the network only predicts one box for each object. However, some large objects or objects near the border of multiple cells can be well localized by multiple cells. Non-maximal suppression can be used to fix these multiple detections. While not critical to performance as it is for R-CNN or DPM, non-maximal suppression adds 2- 3% in mAP.

网格设计强化了边界框预测中的空间多样性。通常一个目标落在哪个网格单元中是很清楚的，网络只为每个目标预测一个框。然而，一些较大的目标或靠近多个单元格边界的目标可能会被多个单元格同时较好地定位。非极大值抑制可以用来修正这些重复检测。虽然它不像对R-CNN或DPM那样对性能至关重要，但非极大值抑制仍能带来2%~3%的mAP提升。
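非极大值抑制的一个常见示意实现如下（假设使用numpy；IOU阈值0.5为示例值，并非论文给出的参数）：

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """示意实现：按分数从高到低保留框，去掉与已保留框 IOU 过高的重复检测。
    boxes: (N, 4) 的 [x1, y1, x2, y2]；scores: (N,)。"""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = boxes[order[1:]]
        # 计算当前框与其余框的 IOU
        x1 = np.maximum(boxes[i, 0], rest[:, 0]); y1 = np.maximum(boxes[i, 1], rest[:, 1])
        x2 = np.minimum(boxes[i, 2], rest[:, 2]); y2 = np.minimum(boxes[i, 3], rest[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        ious = inter / (area_i + area_r - inter + 1e-9)
        order = order[1:][ious < iou_thresh]
    return keep

print(nms(np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float),
          np.array([0.9, 0.8, 0.7])))   # 预期保留第 0 和第 2 个框
```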

2.4. Limitations of YOLO

2.4. YOLO的局限性

YOLO imposes strong spatial constraints on bounding box predictions since each grid cell only predicts two boxes and can only have one class. This spatial constraint limits the number of nearby objects that our model can predict. Our model struggles with small objects that appear in groups, such as flocks of birds.

YOLO对边界框预测施加了很强的空间约束，因为每个网格单元只预测两个框，并且只能有一个类别。这个空间约束限制了我们的模型能够预测的邻近目标数量。我们的模型很难处理成群出现的小目标，例如鸟群。

Since our model learns to predict bounding boxes from data, it struggles to generalize to objects in new or unusual aspect ratios or configurations. Our model also uses relatively coarse features for predicting bounding boxes since our architecture has multiple downsampling layers from the input image.

由于我们的模型从数据中学习并预测边界框,因此它很难泛化到新的、与众不同的纵横比或者不寻常配置的目标上。我们的模型使用相对粗糙的特征来预测边界框,因为我们的模型架构从输入图像开始有多个下采样层。

Finally, while we train on a loss function that approximates detection performance, our loss function treats errors the same in small bounding boxes versus large bounding boxes. A small error in a large box is generally benign but a small error in a small box has a much greater effect on IOU. Our main source of error is incorrect localizations.

最后，虽然我们训练的损失函数近似于检测性能，但它对小边界框和大边界框中的误差一视同仁。大框中的小误差通常影响不大，而小框中的小误差对IOU的影响则大得多。我们的主要误差来源是不正确的定位。

3. Comparison to Other Detection Systems

3. 与其他检测系统的比较

Object detection is a core problem in computer vision. Detection pipelines generally start by extracting a set of robust features from input images (Haar [25], SIFT [23], HOG [4], convolutional features [6]). Then, classifiers [36, 21, 13, 10] or localizers [1, 32] are used to identify objects in the feature space. These classifiers or localizers are run either in sliding window fashion over the whole image or on some subset of regions in the image [35, 15, 39]. We compare the YOLO detection system to several top detection frameworks, highlighting key similarities and differences.

目标检测是计算机视觉的一个核心问题。检测流程通常从提取输入图像的一组鲁棒性特征开始(Haar,SIFT,HOG,卷积特征)。然后分类器或定位器被用来识别特征空间中的目标。这些分类器或定位器以滑动窗口的方式在整幅图像或图像的一些子区域上运行。我们将YOLO检测系统和几个顶级检测框架比较,突出关键的相似性和差异性。

Deformable parts models. Deformable parts models (DPM) use a sliding window approach to object detection [10]. DPM uses a disjoint pipeline to extract static features, classify regions, predict bounding boxes for high scoring regions, etc. Our system replaces all of these disparate parts with a single convolutional neural network. The network performs feature extraction, bounding box prediction, nonmaximal suppression, and contextual reasoning all concurrently. Instead of static features, the network trains the features in-line and optimizes them for the detection task. Our unified architecture leads to a faster, more accurate model than DPM.

**可变形部件模型。**可变形部件模型（DPM）使用滑动窗口的方法进行目标检测[10]。DPM使用彼此分离的流程来提取静态特征、对区域进行分类、为高评分区域预测边界框等。我们的系统用一个单一的卷积神经网络取代了所有这些不同的部分。该网络同时执行特征提取、边界框预测、非极大值抑制和上下文推理。网络在线训练特征并针对检测任务对其进行优化，而不是使用静态特征。我们的统一架构带来了一个比DPM更快、更准确的模型。

R-CNN. R-CNN and its variants use region proposals instead of sliding windows to find objects in images. Selective Search [35] generates potential bounding boxes, a convolutional network extracts features, an SVM scores the boxes, a linear model adjusts the bounding boxes, and non-max suppression eliminates duplicate detections. Each stage of this complex pipeline must be precisely tuned independently and the resulting system is very slow, taking more than 40 seconds per image at test time [14].

R-CNN。R-CNN及其变体使用区域建议而不是滑动窗口在图像中寻找目标。选择性搜索[35]产生潜在的边界框，一个卷积网络提取特征，一个SVM给框打分，一个线性模型调整边界框，非极大值抑制消除重复检测。这个复杂流程的每一个阶段都必须独立地进行精确调整，所得到的系统非常慢，测试时每张图像需要40秒以上[14]。

YOLO shares some similarities with R-CNN. Each grid cell proposes potential bounding boxes and scores those boxes using convolutional features. However, our system puts spatial constraints on the grid cell proposals which helps mitigate multiple detections of the same object. Our system also proposes far fewer bounding boxes, only 98 per image compared to about 2000 from Selective Search. Finally, our system combines these individual components into a single, jointly optimized model.

YOLO与R-CNN有一些相似之处。每个网格单元提出潜在的边界框，并使用卷积特征给这些框打分。但是，我们的系统对网格单元的提议施加了空间限制，这有助于缓解对同一目标的多次检测。我们的系统提出的边界框也少得多，每张图像只有98个，而选择性搜索约有2000个。最后，我们的系统将这些单独的组件组合成一个单一的、联合优化的模型。

Other Fast Detectors. Fast and Faster R-CNN focus on speeding up the R-CNN framework by sharing computation and using neural networks to propose regions instead of Selective Search [14] [28]. While they offer speed and accuracy improvements over R-CNN, both still fall short of real-time performance.

**其他快速检测器。**Fast和Faster R-CNN通过共享计算和神经网络替代选择性搜索来提出区域加速R-CNN框架。虽然它们提供了比R-CNN更快和更准确的性能,但两者仍然不能达到实时性能。

Many research efforts focus on speeding up the DPM pipeline [31] [38] [5]. They speed up HOG computation, use cascades, and push computation to GPUs. However, only 30Hz DPM [31] actually runs in real-time.

许多研究工作集中在加快DPM流程上。它们加速HOG计算、使用级联，并把计算放到GPU上。但是，实际上只有30Hz的DPM[31]可以实时运行。

Instead of trying to optimize individual components of a large detection pipeline, YOLO throws out the pipeline entirely and is fast by design.

YOLO不是去优化大型检测流程中的单个组件，而是完全抛弃了这套流程，因此在设计上就很快。

Detectors for single classes like faces or people can be highly optimized since they have to deal with much less variation [37]. YOLO is a general purpose detector that learns to detect a variety of objects simultaneously.

像人脸或人等单个类别的检测器可以高度优化,因为它们处理的变化很少。YOLO是一种通用的检测器,可以同时学习检测多个目标。

Deep MultiBox. Unlike R-CNN, Szegedy et al. train a convolutional neural network to predict regions of interest [8] instead of using Selective Search. MultiBox can also perform single object detection by replacing the confidence prediction with a single class prediction. However, MultiBox cannot perform general object detection and is still just a piece in a larger detection pipeline, requiring further image patch classification. Both YOLO and MultiBox use a convolutional network to predict bounding boxes in an image but YOLO is a complete detection system.

**Deep MultiBox。**与R-CNN不同，Szegedy等人训练了一个卷积神经网络来预测感兴趣区域[8]，而不是使用选择性搜索。MultiBox也可以通过用单类别预测替换置信度预测来执行单目标检测。然而，MultiBox不能执行通用的目标检测，它仍然只是一个较大检测流程中的一部分，需要进一步对图像块进行分类。YOLO和MultiBox都使用卷积网络来预测图像中的边界框，但YOLO是一个完整的检测系统。

OverFeat. Sermanet et al. train a convolutional neural network to perform localization and adapt that localizer to perform detection [32]. OverFeat efficiently performs sliding window detection but it is still a disjoint system. OverFeat optimizes for localization, not detection performance. Like DPM, the localizer only sees local information when making a prediction. OverFeat cannot reason about global context and thus requires significant post-processing to produce coherent detections.

OverFeat。Sermanet等人训练了一个卷积神经网络来执行定位，并调整该定位器来执行检测[32]。OverFeat高效地执行滑动窗口检测，但它仍然是一个各部分彼此分离的系统。OverFeat针对定位而不是检测性能进行优化。像DPM一样，定位器在进行预测时只能看到局部信息。OverFeat不能推理全局上下文，因此需要大量的后处理来产生连贯的检测结果。

MultiGrasp. Our work is similar in design to work on grasp detection by Redmon et al [27]. Our grid approach to bounding box prediction is based on the MultiGrasp system for regression to grasps. However, grasp detection is a much simpler task than object detection. MultiGrasp only needs to predict a single graspable region for an image containing one object. It doesn’t have to estimate the size, location, or boundaries of the object or predict it’s class, only find a region suitable for grasping. YOLO predicts both bounding boxes and class probabilities for multiple objects of multiple classes in an image.

**多重抓取。**我们的工作在设计上与Redmon等人[27]的抓取检测工作类似。我们的边界框预测的网格方法是基于MultiGrasp系统对抓取的回归。然而,抓取检测是一个比目标检测更简单的任务。MultiGrasp只需要为包含一个目标的图像预测一个可抓取的区域。它不需要估计目标的大小、位置或边界,也不需要预测它的类别,只需要找到一个适合抓取的区域。YOLO对图像中多个类别的多个目标的边界框和类别概率进行预测。

4. Experiments

4. 实验

First we compare YOLO with other real-time detection systems on PASCAL VOC 2007. To understand the differences between YOLO and R-CNN variants we explore the errors on VOC 2007 made by YOLO and Fast R-CNN, one of the highest performing versions of R-CNN [14]. Based on the different error profiles we show that YOLO can be used to rescore Fast R-CNN detections and reduce the errors from background false positives, giving a significant performance boost. We also present VOC 2012 results and compare mAP to current state-of-the-art methods. Finally, we show that YOLO generalizes to new domains better than other detectors on two artwork datasets.

首先我们在PASCAL VOC 2007上将YOLO与其他实时检测系统进行比较。为了了解YOLO和R-CNN变体之间的差异,我们探讨了YOLO和Fast R-CNN(R-CNN的最高性能版本之一)在VOC 2007上的错误[14]。基于不同的错误情况,我们表明YOLO可以用来对Fast R-CNN的检测进行重新评分,并减少来自背景假阳性的错误,使性能得到明显提升。我们还介绍了VOC 2012的结果,并将mAP与目前最先进的方法进行了比较。最后,我们展示了在两个艺术品数据集上,YOLO对新领域的概括比其他检测器更好。

4.1. Comparison to Other Real-Time Systems

4.1. 与其他实时系统的比较

Many research efforts in object detection focus on making standard detection pipelines fast. [5] [38] [31] [14] [17] [28] However, only Sadeghi et al. actually produce a detection system that runs in real-time (30 frames per second or better) [31]. We compare YOLO to their GPU implementation of DPM which runs either at 30Hz or 100Hz. While the other efforts don’t reach the real-time milestone we also compare their relative mAP and speed to examine the accuracy-performance tradeoffs available in object detection systems.

Fast YOLO is the fastest object detection method on PASCAL; as far as we know, it is the fastest extant object detector. With 52.7% mAP , it is more than twice as accurate as prior work on real-time detection. YOLO pushes mAP to 63.4% while still maintaining real-time performance.

许多目标检测的研究工作都集中在使标准检测流程快速化。[5] [38] [31] [14] [17] [28] 然而,只有Sadeghi等人实际产生了一个实时运行的检测系统(每秒30帧或更好)[31]。我们将YOLO与他们的DPM的GPU实现进行了比较,DPM可以在30Hz或100Hz下运行。虽然其他的努力没有达到实时的里程碑,但我们也比较了他们的相对mAP和速度,以检查目标检测系统中可用的准确性-性能权衡。

Fast YOLO是PASCAL上最快的目标检测方法;就我们所知,它是现存最快的目标检测器。它的mAP为52.7%,比之前的实时检测工作的准确度高一倍以上。YOLO将mAP推高到63.4%,同时仍然保持实时性能。

We also train YOLO using VGG-16. This model is more accurate but also significantly slower than YOLO. It is useful for comparison to other detection systems that rely on VGG-16 but since it is slower than real-time the rest of the paper focuses on our faster models.

Fastest DPM effectively speeds up DPM without sacrificing much mAP but it still misses real-time performance by a factor of 2 [38]. It also is limited by DPM’s relatively low accuracy on detection compared to neural network approaches.

R-CNN minus R replaces Selective Search with static bounding box proposals [20]. While it is much faster than R-CNN, it still falls short of real-time and takes a significant accuracy hit from not having good proposals.

我们还使用VGG-16训练YOLO。这个模型更准确,但也明显比YOLO慢。它对于与其他依赖VGG-16的检测系统进行比较是很有用的,但由于它比实时性慢,本文的其余部分侧重于我们更快的模型。

Fastest DPM有效地加快了DPM的速度而没有牺牲太多mAP，但其速度仍与实时性能相差2倍[38]。与神经网络方法相比，它还受限于DPM相对较低的检测精度。

R-CNN minus R用静态边界框建议代替了选择性搜索[20]。虽然它比R-CNN快得多,但它仍然达不到实时性,而且由于没有好的提议,准确性受到很大影响。

Table 1: Real-Time Systems on PASCAL VOC 2007. Comparing the performance and speed of fast detectors. Fast YOLO is the fastest detector on record for PASCAL VOC detection and is still twice as accurate as any other real-time detector. YOLO is 10 mAP more accurate than the fast version while still well above real-time in speed.

**表1: PASCAL VOC 2007上的实时系统。**对比快速检测器的性能和速度。快速YOLO是PASCAL VOC检测记录中最快的检测器,其准确度仍然是其他实时检测器的两倍。YOLO比快速版的准确度高10 mAP,同时在速度上仍远高于实时版。

Fast R-CNN speeds up the classification stage of R-CNN but it still relies on selective search which can take around 2 seconds per image to generate bounding box proposals. Thus it has high mAP but at 0.5 fps it is still far from realtime.

The recent Faster R-CNN replaces selective search with a neural network to propose bounding boxes, similar to Szegedy et al. [8] In our tests, their most accurate model achieves 7 fps while a smaller, less accurate one runs at 18 fps. The VGG-16 version of Faster R-CNN is 10 mAP higher but is also 6 times slower than YOLO. The Zeiler-Fergus Faster R-CNN is only 2.5 times slower than YOLO but is also less accurate.

Fast R-CNN加快了R-CNN的分类阶段,但它仍然依赖于选择性搜索,每张图像产生边界框建议需要2秒左右。因此,它有很高的mAP,但在0.5fps的情况下,它仍然离实时性很远。

最近的Faster R-CNN用神经网络代替选择性搜索来提出边界框，与Szegedy等人的做法类似[8]。在我们的测试中，他们最准确的模型达到了7帧每秒，而一个较小、精度较低的模型则以18帧每秒的速度运行。VGG-16版本的Faster R-CNN的mAP高出10个百分点，但速度也比YOLO慢6倍。Zeiler-Fergus版的Faster R-CNN只比YOLO慢2.5倍，但准确度也更低。

4.2. VOC 2007 Error Analysis

4.2. VOC 2007错误分析

To further examine the differences between YOLO and state-of-the-art detectors, we look at a detailed breakdown of results on VOC 2007. We compare YOLO to Fast R-CNN since Fast R-CNN is one of the highest performing detectors on PASCAL and its detections are publicly available.

We use the methodology and tools of Hoiem et al. [19] For each category at test time we look at the top N predictions for that category. Each prediction is either correct or it is classified based on the type of error:

• Correct: correct class and IOU > .5
• Localization: correct class, .1 < IOU < .5
• Similar: class is similar, IOU > .1
• Other: class is wrong, IOU > .1
• Background: IOU < .1 for any object

为了进一步研究YOLO和最先进的检测器之间的差异,我们看一下VOC 2007上的详细结果。我们将YOLO与Fast RCNN进行比较,因为Fast R-CNN是PASCAL上性能最高的检测器之一,它的检测结果是公开的。

我们使用Hoiem等人[19]的方法和工具。对于测试时的每个类别，我们查看该类别的前N个预测。每个预测要么是正确的，要么按错误类型归类，划分标准如下（本小节末尾给出一个按这些标准划分错误类型的示意代码）：

• 正确（Correct）：类别正确且IOU > 0.5
• 定位（Localization）：类别正确，0.1 < IOU < 0.5
• 相似（Similar）：类别相似，IOU > 0.1
• 其他（Other）：类别错误，IOU > 0.1
• 背景（Background）：与任何目标的IOU < 0.1

Figure 4 shows the breakdown of each error type averaged across all 20 classes.

图4显示了在所有20个类别上平均后，每种错误类型所占的比例。

Figure 4: Error Analysis: Fast R-CNN vs. YOLO These charts show the percentage of localization and background errors in the top N detections for various categories (N = # objects in that category).

**图4:错误分析：Fast R-CNN与YOLO的对比。**这些图表显示了各个类别的前N个检测结果中定位错误和背景错误的百分比（N = 该类别中的目标数量）。

YOLO struggles to localize objects correctly. Localization errors account for more of YOLO's errors than all other sources combined. Fast R-CNN makes much fewer localization errors but far more background errors. 13.6% of its top detections are false positives that don't contain any objects. Fast R-CNN is almost 3x more likely to predict background detections than YOLO.

YOLO很难正确地定位目标。在YOLO的错误中，定位错误比其他所有来源的错误加起来还要多。Fast R-CNN的定位错误少得多，但背景错误多得多。它的置信度最高的检测结果中有13.6%是不包含任何目标的假阳性。Fast R-CNN预测出背景检测的可能性几乎是YOLO的3倍。
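前面列出的错误划分标准可以写成一个简单的判断函数（仅为示意；similar_classes即"相似类别"集合需要按Hoiem等人的定义自行给出）：

```python
def error_type(pred_class, true_class, iou, similar_classes=()):
    """示意：按前文标准把一个检测结果归入某种错误类型。"""
    if pred_class == true_class and iou > 0.5:
        return "Correct"
    if pred_class == true_class and 0.1 < iou < 0.5:
        return "Localization"
    if pred_class in similar_classes and iou > 0.1:
        return "Similar"
    if iou > 0.1:
        return "Other"
    return "Background"

print(error_type("dog", "dog", 0.3))   # Localization
```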

4.3. Combining Fast R-CNN and YOLO

4.3. 结合Fast R-CNN和YOLO

YOLO makes far fewer background mistakes than Fast R-CNN. By using YOLO to eliminate background detections from Fast R-CNN we get a significant boost in performance. For every bounding box that R-CNN predicts we check to see if YOLO predicts a similar box. If it does, we give that prediction a boost based on the probability predicted by YOLO and the overlap between the two boxes.

The best Fast R-CNN model achieves a mAP of 71.8% on the VOC 2007 test set. When combined with YOLO, its mAP increases by 3.2% to 75.0%. We also tried combining the top Fast R-CNN model with several other versions of Fast R-CNN. Those ensembles produced small increases in mAP between .3 and .6%, see Table 2 for details.

YOLO犯的背景错误比Fast R-CNN少得多。通过使用YOLO来消除Fast R-CNN的背景检测,我们在性能上得到了很大的提升。对于R-CNN预测的每一个边界框,我们检查YOLO是否预测了一个类似的框。如果有,我们根据YOLO预测的概率和两个框之间的重叠,给这个预测一个提升。

最好的Fast R-CNN模型在VOC 2007测试集上达到了71.8%的mAP。当与YOLO结合时,其mAP增加了3.2%,达到75.0%。我们还尝试将顶级的Fast R-CNN模型与其他几个版本的Fast R-CNN相结合。这些组合产生了0.3到0.6%之间的小幅mAP增长,详情见表2。

Table 2: Model combination experiments on VOC 2007. We examine the effect of combining various models with the best version of Fast R-CNN. Other versions of Fast R-CNN provide only a small benefit while YOLO provides a significant performance boost.

**表2:关于VOC 2007的模型组合实验。**我们研究了将各种模型与最佳版本的Fast R-CNN相结合的效果。其他版本的Fast R-CNN只提供了很小的好处,而YOLO提供了显著的性能提升。

The boost from YOLO is not simply a byproduct of model ensembling since there is little benefit from combining different versions of Fast R-CNN. Rather, it is precisely because YOLO makes different kinds of mistakes at test time that it is so effective at boosting Fast R-CNN’s performance.

Unfortunately, this combination doesn't benefit from the speed of YOLO since we run each model separately and then combine the results. However, since YOLO is so fast it doesn't add any significant computational time compared to Fast R-CNN.

YOLO的提升并不是简单的模型集合的副产品,因为结合不同版本的Fast R-CNN并没有什么好处。相反,正是因为YOLO在测试时犯了不同种类的错误,所以它在提高Fast R-CNN的性能方面如此有效。

不幸的是,这种组合并没有从YOLO的速度中受益,因为我们单独运行每个模型,然后将结果结合起来。然而,由于YOLO是如此之快,与Fast R-CNN相比,它并没有增加任何重要的计算时间。
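这种组合方式可以用下面的小示意说明（论文没有给出具体的加权公式，这里的提升方式和IOU阈值均为假设）：

```python
def box_iou(a, b):
    # 框格式为 [x1, y1, x2, y2]，返回交并比
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def combine_detections(rcnn_dets, yolo_dets, iou_thresh=0.5):
    """示意：对 Fast R-CNN 的每个检测，若 YOLO 预测了相似的框，
    则按 YOLO 的概率与两框重叠程度提升其分数。检测用 (box, score) 元组表示。"""
    out = []
    for box, score in rcnn_dets:
        ious = [(box_iou(box, yb), ys) for yb, ys in yolo_dets]
        best_iou, best_prob = max(ious, default=(0.0, 0.0))
        if best_iou > iou_thresh:               # YOLO 也预测了相似的框：给予提升
            score += best_prob * best_iou
        out.append((box, score))
    return out

print(combine_detections([([0, 0, 10, 10], 0.6)], [([1, 1, 11, 11], 0.9)]))
```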

4.4. VOC 2012 Results

4.4. VOC 2012的结果

On the VOC 2012 test set, YOLO scores 57.9% mAP . This is lower than the current state of the art, closer to the original R-CNN using VGG-16, see Table 3. Our system struggles with small objects compared to its closest competitors. On categories like bottle, sheep, and tv/monitor YOLO scores 8-10% lower than R-CNN or Feature Edit. However, on other categories like cat and train YOLO achieves higher performance.

在VOC 2012的测试集上,YOLO的mAP得分是57.9%。这比目前的技术水平低,更接近于使用VGG-16的原始R-CNN,见表3。与其最接近的竞争对手相比,我们的系统在处理小目标时很吃力。在瓶子、羊和电视/显示器等类别上,YOLO的得分比R-CNN或Feature Edit低8-10%。然而,在其他类别如猫和火车上,YOLO取得了更高的性能。

Table 3: PASCAL VOC 2012 Leaderboard. YOLO compared with the full comp4 (outside data allowed) public leaderboard as of November 6th, 2015. Mean average precision and per-class average precision are shown for a variety of detection methods. YOLO is the only real-time detector. Fast R-CNN + YOLO is the fourth highest scoring method, with a 2.3% boost over Fast R-CNN.

**表3:PASCAL VOC 2012排行榜。**截至2015年11月6日,YOLO与完整的comp4(允许外部数据)公共排行榜相比。显示了各种检测方法的平均精度和每类平均精度。YOLO是唯一的实时检测器。快速R-CNN+YOLO是得分第四高的方法,比快速R-CNN提高了2.3%。

Our combined Fast R-CNN + YOLO model is one of the highest performing detection methods. Fast R-CNN gets a 2.3% improvement from the combination with YOLO, boosting it 5 spots up on the public leaderboard.

我们的快速R-CNN+YOLO组合模型是性能最高的检测方法之一。快速R-CNN从与YOLO的组合中获得了2.3%的改进,使其在公共排行榜上提升了5位。

4.5. Generalizability: Person Detection in Artwork

4.5. 可推广性:艺术品中的人物检测

Academic datasets for object detection draw the training and testing data from the same distribution. In real-world applications it is hard to predict all possible use cases and the test data can diverge from what the system has seen before [3]. We compare YOLO to other detection systems on the Picasso Dataset [12] and the People-Art Dataset [3], two datasets for testing person detection on artwork.

Figure 5 shows comparative performance between YOLO and other detection methods. For reference, we give VOC 2007 detection AP on person where all models are trained only on VOC 2007 data. On Picasso models are trained on VOC 2012 while on People-Art they are trained on VOC 2010.

用于物体检测的学术数据集从相同的分布中提取训练和测试数据。在现实世界的应用中,很难预测所有可能的用例,而且测试数据可能与系统之前看到的数据有出入[3]。我们在Picasso数据集[12]和People-Art数据集[3]上将YOLO与其他检测系统进行比较,这两个数据集用于测试艺术品上的人员检测。

图5显示了YOLO和其他检测方法之间的比较性能。作为参考,我们给出了VOC 2007对人的检测AP,所有的模型都是在VOC 2007数据上训练的。在毕加索上,模型是在VOC 2012上训练的,而在People-Art上是在VOC 2010上训练的。

Figure 5: Generalization results on Picasso and People-Art datasets.

**图5:Picasso和People-Art数据集上的泛化结果。**

R-CNN has high AP on VOC 2007. However, R-CNN drops off considerably when applied to artwork. R-CNN uses Selective Search for bounding box proposals which is tuned for natural images. The classifier step in R-CNN only sees small regions and needs good proposals.

DPM maintains its AP well when applied to artwork. Prior work theorizes that DPM performs well because it has strong spatial models of the shape and layout of objects. Though DPM doesn’t degrade as much as R-CNN, it starts from a lower AP.

YOLO has good performance on VOC 2007 and its AP degrades less than other methods when applied to artwork. Like DPM, YOLO models the size and shape of objects, as well as relationships between objects and where objects commonly appear. Artwork and natural images are very different on a pixel level but they are similar in terms of the size and shape of objects, thus YOLO can still predict good bounding boxes and detections.

R-CNN对VOC 2007有很高的AP。然而,当R-CNN应用于艺术品时,它的性能大大下降了。R-CNN使用选择性搜索进行边界框建议,这是为自然图像而调整的。R-CNN中的分类器步骤只看到小区域,需要良好的提议。

DPM在应用于艺术品时能很好地保持其AP。先前的工作推测，DPM表现良好是因为它对目标的形状和布局有很强的空间模型。尽管DPM的退化不像R-CNN那么严重，但它的起始AP本来就较低。

YOLO在VOC 2007上有很好的表现,当应用于艺术品时,它的AP退化得比其他方法少。像DPM一样,YOLO对目标的大小和形状以及目标之间的关系和目标通常出现的位置进行建模。艺术品和自然图像在像素层面上有很大的不同,但它们在目标的大小和形状方面是相似的,因此YOLO仍然可以预测良好的边界框和检测。

5. Real-Time Detection In The Wild

5. 真实环境中的实时检测

YOLO is a fast, accurate object detector, making it ideal for computer vision applications. We connect YOLO to a webcam and verify that it maintains real-time performance, including the time to fetch images from the camera and display the detections.

The resulting system is interactive and engaging. While YOLO processes images individually, when attached to a webcam it functions like a tracking system, detecting objects as they move around and change in appearance. A demo of the system and the source code can be found on our project website: http://pjreddie.com/yolo/.

YOLO是一个快速、准确的目标检测器,使其成为计算机视觉应用的理想选择。我们将YOLO连接到一个网络摄像头,并验证它是否保持了实时性能,包括从摄像头获取图像和显示检测结果的时间。

由此产生的系统是互动的、有吸引力的。虽然YOLO单独处理图像,但当它连接到网络摄像头时,它的功能就像一个跟踪系统,在目标移动和外观变化时检测它们。该系统的演示和源代码可以在我们的项目网站上找到:http://pjreddie.com/yolo/。

6. Conclusion

6. 结论

We introduce YOLO, a unified model for object detection. Our model is simple to construct and can be trained directly on full images. Unlike classifier-based approaches, YOLO is trained on a loss function that directly corresponds to detection performance and the entire model is trained jointly.

Fast YOLO is the fastest general-purpose object detector in the literature and YOLO pushes the state-of-the-art in real-time object detection. YOLO also generalizes well to new domains making it ideal for applications that rely on fast, robust object detection.

Acknowledgements: This work is partially supported by ONR N00014-13-1-0720, NSF IIS-1338054, and The Allen Distinguished Investigator Award.

我们介绍了YOLO,一个统一的目标检测模型。我们的模型构造简单,可以直接对完整的图像进行训练。与基于分类器的方法不同,YOLO是在直接对应于检测性能的损失函数上训练的,整个模型是联合训练的。

Fast YOLO是文献中最快的通用目标检测器,YOLO推动了实时目标检测的最先进水平。YOLO还能很好地适用于新的领域,使其成为依赖快速、稳健的目标检测的理想应用。

**鸣谢。**这项工作得到了ONR N00014-13-1-0720、NSF IIS-1338054和艾伦杰出研究者奖的部分支持。

References

[1] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. In Computer Vision– ECCV 2008, pages 2–15. Springer, 2008. 4

[2] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3d human pose annotations. In International Conference on Computer Vision (ICCV), 2009. 8

[3] H. Cai, Q. Wu, T. Corradi, and P. Hall. The cross-depiction problem: Computer vision algorithms for recognising objects in artwork and in photographs. arXiv preprint arXiv:1505.00110, 2015. 7

[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 886–893. IEEE, 2005. 4, 8

[5] T. Dean, M. Ruzon, M. Segal, J. Shlens, S. Vijayanarasimhan, J. Yagnik, et al. Fast, accurate detection of 100,000 object classes on a single machine. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 1814–1821. IEEE, 2013. 5

[6] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013. 4

[7] J. Dong, Q. Chen, S. Yan, and A. Yuille. Towards unified object detection and semantic segmentation. In Computer Vision–ECCV 2014, pages 299–314. Springer, 2014. 7

[8] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2155–2162. IEEE, 2014. 5, 6

[9] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, Jan. 2015. 2

[10] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645, 2010. 1, 4

[11] S. Gidaris and N. Komodakis. Object detection via a multiregion & semantic segmentation-aware CNN model. CoRR, abs/1505.01749, 2015. 7

[12] S. Ginosar, D. Haas, T. Brown, and J. Malik. Detecting people in cubist art. In Computer Vision-ECCV 2014 Workshops, pages 101–116. Springer, 2014. 7

[13] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580–587. IEEE, 2014. 1, 4, 7

[14] R. B. Girshick. Fast R-CNN. CoRR, abs/1504.08083, 2015. 2, 5, 6, 7

[15] S. Gould, T. Gao, and D. Koller. Region-based segmentation and object detection. In Advances in neural information processing systems, pages 655–663, 2009. 4

[16] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Simultaneous detection and segmentation. In Computer Vision–ECCV 2014, pages 297–312. Springer, 2014. 7

[17] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. arXiv preprint arXiv:1406.4729, 2014. 5

[18] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. 4

[19] D. Hoiem, Y. Chodpathumwan, and Q. Dai. Diagnosing error in object detectors. In Computer Vision–ECCV 2012, pages 340–353. Springer, 2012. 6

[20] K. Lenc and A. Vedaldi. R-cnn minus r. arXiv preprint arXiv:1506.06981, 2015. 5, 6

[21] R. Lienhart and J. Maydt. An extended set of haar-like features for rapid object detection. In Image Processing. 2002. Proceedings. 2002 International Conference on, volume 1, pages I–900. IEEE, 2002. 4

[22] M. Lin, Q. Chen, and S. Yan. Network in network. CoRR, abs/1312.4400, 2013. 2

[23] D. G. Lowe. Object recognition from local scale-invariant features. In Computer vision, 1999. The proceedings of the seventh IEEE international conference on, volume 2, pages 1150–1157. IEEE, 1999. 4

[24] D. Mishkin. Models accuracy on imagenet 2012 val. https://github.com/BVLC/caffe/wiki/Models-accuracy-on-ImageNet-2012-val. Accessed: 2015-10-2. 3

[25] C. P. Papageorgiou, M. Oren, and T. Poggio. A general framework for object detection. In Computer vision, 1998. sixth international conference on, pages 555–562. IEEE, 1998. 4

[26] J. Redmon. Darknet: Open source neural networks in c. http://pjreddie.com/darknet/, 2013–2016. 3

[27] J. Redmon and A. Angelova. Real-time grasp detection using convolutional neural networks. CoRR, abs/1412.3128, 2014. 5

[28] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015. 5, 6, 7

[29] S. Ren, K. He, R. B. Girshick, X. Zhang, and J. Sun. Object detection networks on convolutional feature maps. CoRR, abs/1504.06066, 2015. 3, 7

[30] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. 3

[31] M. A. Sadeghi and D. Forsyth. 30hz object detection with dpm v5. In Computer Vision–ECCV 2014, pages 65–79. Springer, 2014. 5, 6

[32] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. CoRR, abs/1312.6229, 2013. 4, 5

[33] Z. Shen and X. Xue. Do more dropouts in pool5 feature maps for better object detection. arXiv preprint arXiv:1409.6911, 2014. 7

[34] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014. 2

[35] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders. Selective search for object recognition. International journal of computer vision, 104(2):154–171, 2013. 4

[36] P. Viola and M. Jones. Robust real-time object detection. International Journal of Computer Vision, 4:34–47, 2001. 4

[37] P. Viola and M. J. Jones. Robust real-time face detection. International journal of computer vision, 57(2):137–154, 2004. 5

[38] J. Yan, Z. Lei, L. Wen, and S. Z. Li. The fastest deformable part model for object detection. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2497–2504. IEEE, 2014. 5, 6

[39] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In Computer Vision–ECCV 2014, pages 391–405. Springer, 2014. 4
