Paper: LINK
Year: 2015
Citations: 52,996 (2020/08/25; I love 996?), 111,428 (2022/03/26)


Contents

  • Deep Residual Learning for Image Recognition
  • Abstract
  • 1. Introduction
  • 2. Related Work
  • 3. Deep Residual Learning
    • 3.1. Residual Learning
    • 3.2. Identity Mapping by Shortcuts
    • 3.3. Network Architectures
    • 3.4. Implementation
  • 4. Experiments
    • 4.1. ImageNet Classification
    • 4.2. CIFAR-10 and Analysis
    • 4.3. Object Detection on PASCAL and MS COCO
  • Appendix
    • A. Object Detection Baselines
      • PASCAL VOC
      • MS COCO
    • B. Object Detection Improvements
      • MS COCO
      • PASCAL VOC
      • ImageNet Detection
    • C. ImageNet Localization

Deep Residual Learning for Image Recognition

Abstract

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.


1. Introduction

Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 50, 40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [41, 44] reveals that network depth is of crucial importance, and the leading results [41, 44, 13, 16] on the challenging ImageNet dataset [36] all exploit “very deep” [41] models, with a depth of sixteen [41] to thirty [16]. Many other nontrivial visual recognition tasks [8, 12, 7, 32, 27] have also greatly benefited from very deep models.

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1, 9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].


When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.



Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer “plain” networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet are presented in Fig. 4.

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $H(x)$, we let the stacked nonlinear layers fit another mapping of $F(x) := H(x) - x$. The original mapping is recast into $F(x) + x$. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

The formulation of $F(x) + x$ can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 34, 49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.

We present comprehensive experiments on ImageNet [36] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.


Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

On the ImageNet classification dataset [36], we obtain excellent results by extremely deep residual nets. Our 152layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.



2. Related Work

Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.


In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45, 46], which relies on variables that represent residual vectors between two scales. It has been shown [3,45,46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.


Shortcut Connections. Practices and theories that lead to shortcut connections [2, 34, 49] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [34, 49]. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [39, 38, 31, 47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an “inception” layer is composed of a shortcut branch and a few deeper branches.


Concurrent with our work, “highway networks” [42, 43] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).



3. Deep Residual Learning

3.1. Residual Learning

Let us consider $H(x)$ as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with $x$ denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., $H(x) - x$ (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate $H(x)$, we explicitly let these layers approximate a residual function $F(x) := H(x) - x$. The original function thus becomes $F(x) + x$. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.


In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.


3.2. Identity Mapping by Shortcuts

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

$$y = F(x, \{W_i\}) + x \tag{1}$$

Here $x$ and $y$ are the input and output vectors of the layers considered. The function $F(x, \{W_i\})$ represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, $F = W_2 \sigma(W_1 x)$ in which $\sigma$ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation $F + x$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., $\sigma(y)$, see Fig. 2).
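As a concrete illustration, the block of Eqn.(1) translates into a few lines of code. The sketch below is a minimal PyTorch rendering (PyTorch is our choice here; the paper's own implementation used Caffe), assuming 3×3 convolutions with batch normalization as later described in Sec. 3.4:

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = sigma(F(x, {Wi}) + x) with a two-layer residual function F."""

    def __init__(self, channels: int):
        super().__init__()
        # F = W2 * sigma(W1 * x); biases omitted since BN follows each conv.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        f = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(f + x)  # identity shortcut; second nonlinearity after addition
```

For example, `BasicResidualBlock(64)(torch.randn(1, 64, 56, 56))` returns a tensor of the same shape; the shortcut itself is a plain tensor addition.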

The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).


The dimensions of $x$ and $F$ must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection $W_s$ by the shortcut connections to match the dimensions:

$$y = F(x, \{W_i\}) + W_s x \tag{2}$$

We can also use a square matrix $W_s$ in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus $W_s$ is only used when matching dimensions.

The form of the residual function $F$ is flexible. Experiments in this paper involve a function $F$ that has two or three layers (Fig. 5), while more layers are possible. But if $F$ has only a single layer, Eqn.(1) is similar to a linear layer: $y = W_1 x + x$, for which we have not observed advantages.

We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function $F(x, \{W_i\})$ can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.
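In the convolutional setting, the projection $W_s$ of Eqn.(2) becomes a 1×1 convolution. A hedged sketch (applying BN after the projection follows common ResNet implementations rather than anything stated in this section):

```python
import torch.nn as nn

def projection_shortcut(in_channels: int, out_channels: int, stride: int = 1):
    """W_s in Eqn.(2): a 1x1 convolution matching channel count (and, with
    stride 2, spatial size). Used only where dimensions change."""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
        nn.BatchNorm2d(out_channels),
    )
```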

3.3. Network Architectures

We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.


Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).


It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).


Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [41] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.


Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig.3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

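Option A keeps the shortcut parameter-free even across a dimension increase. A minimal sketch of such a zero-padding shortcut (subsampling by slicing is one common way to realize the stride-2 identity; the paper does not prescribe an implementation):

```python
import torch
import torch.nn.functional as F

def option_a_shortcut(x: torch.Tensor, out_channels: int) -> torch.Tensor:
    """Identity shortcut across a dimension increase: stride-2 subsampling
    plus extra zero entries padded onto the channel dimension (no parameters)."""
    x = x[:, :, ::2, ::2]                    # stride 2 when crossing map sizes
    extra = out_channels - x.size(1)         # channels to fill with zeros
    return F.pad(x, (0, 0, 0, 0, 0, extra))  # pad (W, H, C): zeros on new channels
```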

3.4. Implementation

Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to $60 \times 10^4$ iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].
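The recipe above maps directly onto a standard SGD setup. A sketch under assumed names (the model here is a stand-in module; the mini-batch size of 256 would be set in the data loader):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 64, 7, stride=2, padding=3)  # stand-in for a plain/residual net

# lr 0.1, divided by 10 when the error plateaus; weight decay 1e-4; momentum 0.9.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", factor=0.1)

for step in range(2):                    # up to 60x10^4 iterations in the paper
    x = torch.randn(8, 3, 224, 224)      # random 224x224 crops, mean-subtracted
    loss = model(x).pow(2).mean()        # placeholder loss for the sketch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())          # in practice, step on validation error
```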

In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).
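For reference, 10-crop testing (four corners plus center, and their horizontal flips) is available off the shelf in torchvision; the transform below is one illustrative way to build it:

```python
import torch
from torchvision import transforms

ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),  # 4 corners + center, each with its horizontal flip
    transforms.Lambda(lambda crops: torch.stack(
        [transforms.ToTensor()(c) for c in crops])),  # (10, 3, 224, 224)
])
# At test time, the model's scores over the 10 views are averaged.
```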

Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Downsampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.


4. Experiments

4.1. ImageNet Classification

We evaluate our method on the ImageNet 2012 classification dataset [36] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.

Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.


The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.


Figure 4. Training on ImageNet. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameter compared to their plain counterparts.



Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.


We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error. The reason for such optimization difficulties will be studied in the future.

Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.

We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.


Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.


Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.


Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.

Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.


Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.
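A sketch of the bottleneck block (PyTorch; for the Fig. 5 example, `Bottleneck(256, 64)` reduces a 256-d input to 64-d, applies the 3×3 at 64-d, and restores 256-d):

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 (reduce) -> 3x3 -> 1x1 (restore), with a parameter-free identity shortcut."""

    def __init__(self, channels: int, bottleneck_channels: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, bottleneck_channels, 1, bias=False),
            nn.BatchNorm2d(bottleneck_channels), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck_channels, bottleneck_channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(bottleneck_channels), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck_channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.f(x) + x)  # identity shortcut keeps the block cheap
```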

The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.


50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.


101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).
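The layer counts follow directly from the per-stage block counts of Table 1 (not reproduced above; the counts below are the standard ones). Each bottleneck block contributes 3 weighted layers, plus the initial 7×7 convolution and the final fc layer:

```python
# depth = 3 * (total blocks) + 2 (initial 7x7 conv + final fc layer)
for name, blocks in {"ResNet-50": (3, 4, 6, 3),
                     "ResNet-101": (3, 4, 23, 3),
                     "ResNet-152": (3, 8, 36, 3)}.items():
    print(name, 3 * sum(blocks) + 2)  # -> 50, 101, 152
```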

The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).


Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.


4.2. CIFAR-10 and Analysis

We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.


The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:

output map size | 32×32 | 16×16 | 8×8
#layers         | 1+2n  | 2n    | 2n
#filters        | 16    | 32    | 64

When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.
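A sketch of the 6n+2 stack (the plain version; wiring the 3n identity shortcuts around each pair of 3×3 layers turns it into the residual version):

```python
import torch.nn as nn

def cifar_plain_net(n: int, num_classes: int = 10) -> nn.Sequential:
    """6n+2 weighted layers: one 3x3 conv, then 2n 3x3 conv layers at each of
    the map sizes {32, 16, 8} with {16, 32, 64} filters, then the fc layer."""
    def conv(cin, cout, stride=1):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1, bias=False),
                             nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
    layers, cin = [conv(3, 16)], 16
    for cout, first_stride in [(16, 1), (32, 2), (64, 2)]:  # stride-2 subsampling
        for i in range(2 * n):
            layers.append(conv(cin, cout, first_stride if i == 0 else 1))
            cin = cout
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)]
    return nn.Sequential(*layers)

# n = 3, 5, 7, 9 give the 20/32/44/56-layer networks compared below.
```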

We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [13] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.
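The augmentation and schedule are simple to reproduce; a sketch (the normalization values are the customary CIFAR-10 channel means, an assumption standing in for the per-pixel mean subtraction used in the paper):

```python
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # pad 4 pixels per side, random 32x32 crop
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (1.0, 1.0, 1.0)),  # mean subtraction
])

def lr_at(iteration: int) -> float:
    """0.1, divided by 10 at 32k and 48k iterations; training stops at 64k."""
    return 0.1 * (0.1 ** sum(iteration >= b for b in (32_000, 48_000)))
```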

We compare n = {3,5,7,9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [42]), suggesting that such an optimization difficulty is a fundamental problem.


Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.


Figure 6. Training on CIFAR-10. Dashed lines denote training error, and bold lines denote testing error. Left: plain networks. The error of plain-110 is higher than 60% and not displayed. Middle: ResNets. Right: ResNets with 110 and 1202 layers.


We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet [35] and Highway [42] (Table 6), yet is among the state-of-the-art results (6.43%, Table 6).
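The warm-up amounts to a one-line change to the schedule sketched earlier; a hedged rendering:

```python
def lr_with_warmup(iteration: int, train_error: float) -> float:
    """ResNet-110: hold lr at 0.01 until training error drops below 80%
    (roughly the first 400 iterations), then resume the usual 0.1 schedule."""
    if train_error > 0.80:
        return 0.01
    return 0.1 * (0.1 ** sum(iteration >= b for b in (32_000, 48_000)))
```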

Figure 7. Standard deviations (std) of layer responses on CIFAR-10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.

Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.

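This kind of analysis can be reproduced with forward hooks that record each 3×3 layer's output after BN and before the nonlinearity; a sketch under assumed names:

```python
import torch
import torch.nn as nn

def response_stds(model: nn.Module, x: torch.Tensor) -> list:
    """Std of each BN output (i.e., after BN, before ReLU/addition), as in Fig. 7."""
    stds, hooks = [], []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(
                lambda mod, inp, out: stds.append(out.std().item())))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return stds
```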

Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n = 200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this $10^3$-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6).

But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout [10] or dropout [14] is applied to obtain the best results ([10, 25, 24, 35]) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.


4.3. Object Detection on PASCAL and MS COCO

Our method has good generalization performance on other recognition tasks. Table 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 [5] and COCO [26]. We adopt Faster R-CNN [32] as the detection method. Here we are interested in the improvements of replacing VGG-16 [41] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO’s standard metric (mAP@[.5, .95]), which is a 28% relative improvement. This gain is solely due to the learned representations.


Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.




Appendix

A. Object Detection Baselines

In this section we introduce our detection method based on the baseline Faster R-CNN [32] system. The models are initialized by the ImageNet classification models, and then fine-tuned on the object detection data. We have experimented with ResNet-50/101 at the time of the ILSVRC & COCO 2015 detection competitions.


Unlike VGG-16 used in [32], our ResNet has no hidden fc layers. We adopt the idea of “Networks on Conv feature maps” (NoC) [33] to address this issue. We compute the full-image shared conv feature maps using those layers whose strides on the image are no greater than 16 pixels (i.e., conv1, conv2_x, conv3_x, and conv4_x, totally 91 conv layers in ResNet-101; Table 1). We consider these layers as analogous to the 13 conv layers in VGG-16, and by doing so, both ResNet and VGG-16 have conv feature maps of the same total stride (16 pixels). These layers are shared by a region proposal network (RPN, generating 300 proposals) [32] and a Fast R-CNN detection network [7]. RoI pooling [7] is performed before conv5_1. On this RoI-pooled feature, all layers of conv5_x and up are adopted for each region, playing the roles of VGG-16’s fc layers. The final classification layer is replaced by two sibling layers (classification and box regression [7]).
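In code, this split is just a partition of the backbone. A sketch using torchvision's ResNet-101 as an assumed stand-in for the authors' model:

```python
import torch.nn as nn
from torchvision.models import resnet101

backbone = resnet101(weights=None)

# Layers with accumulated stride <= 16 (conv1 through conv4_x) form the shared
# full-image feature map, analogous to VGG-16's 13 conv layers:
shared_features = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                                backbone.maxpool, backbone.layer1,
                                backbone.layer2, backbone.layer3)
# conv5_x plus average pooling plays the role of VGG-16's fc layers,
# applied per region after RoI pooling:
per_roi_head = nn.Sequential(backbone.layer4, backbone.avgpool)
```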

For the usage of BN layers, after pre-training, we compute the BN statistics (means and variances) for each layer on the ImageNet training set. Then the BN layers are fixed during fine-tuning for object detection. As such, the BN layers become linear activations with constant offsets and scales, and BN statistics are not updated by fine-tuning. We fix the BN layers mainly for reducing memory consumption in Faster R-CNN training.

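Fixing the BN layers can be sketched as below (freezing both the running statistics and the affine scale/offset, so each BN acts as a constant linear transform):

```python
import torch.nn as nn

def freeze_bn(model: nn.Module) -> None:
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()                      # stop updating running mean/variance
            for p in m.parameters():
                p.requires_grad = False   # freeze scale (gamma) and offset (beta)

# Note: calling model.train() later flips BN back to training mode, so
# freeze_bn must be re-applied (or the BN modules overridden) after that.
```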

PASCAL VOC

Following [7, 32], for the PASCAL VOC 2007 test set, we use the 5k trainval images in VOC 2007 and 16k trainval images in VOC 2012 for training (“07+12”). For the PASCAL VOC 2012 test set, we use the 10k trainval+test images in VOC 2007 and 16k trainval images in VOC 2012 for training (“07++12”). The hyper-parameters for training Faster R-CNN are the same as in [32]. Table 7 shows the results. ResNet-101 improves the mAP by >3% over VGG-16. This gain is solely because of the improved features learned by ResNet.


MS COCO

The MS COCO dataset [26] involves 80 object categories. We evaluate the PASCAL VOC metric (mAP @ IoU = 0.5) and the standard COCO metric (mAP @ IoU = .5:.05:.95). We use the 80k images on the train set for training and the 40k images on the val set for evaluation. Our detection system for COCO is similar to that for PASCAL VOC. We train the COCO models with an 8-GPU implementation, and thus the RPN step has a mini-batch size of 8 images (i.e., 1 per GPU) and the Fast R-CNN step has a mini-batch size of 16 images. The RPN step and Fast R-CNN step are both trained for 240k iterations with a learning rate of 0.001 and then for 80k iterations with 0.0001.

Table 8 shows the results on the MS COCO validation set. ResNet-101 has a 6% increase of mAP@[.5, .95] over VGG-16, which is a 28% relative improvement, solely contributed by the features learned by the better network. Remarkably, the mAP@[.5, .95]’s absolute increase (6.0%) is nearly as big as mAP@.5’s (6.9%). This suggests that a deeper network can improve both recognition and localization.


B. Object Detection Improvements

For completeness, we report the improvements made for the competitions. These improvements are based on deep features and thus should benefit from residual learning.


MS COCO

Box refinement. Our box refinement partially follows the iterative localization in [6]. In Faster R-CNN, the final output is a regressed box that is different from its proposal box. So for inference, we pool a new feature from the regressed box and obtain a new classification score and a new regressed box. We combine these 300 new predictions with the original 300 predictions. Non-maximum suppression (NMS) is applied on the union set of predicted boxes using an IoU threshold of 0.3 [8], followed by box voting [6]. Box refinement improves mAP by about 2 points (Table 9).


Global context. We combine global context in the Fast R-CNN step. Given the full-image conv feature map, we pool a feature by global Spatial Pyramid Pooling [12] (with a “single-level” pyramid) which can be implemented as “RoI” pooling using the entire image’s bounding box as the RoI. This pooled feature is fed into the post-RoI layers to obtain a global context feature. This global feature is concatenated with the original per-region feature, followed by the sibling classification and box regression layers. This new structure is trained end-to-end. Global context improves mAP@.5 by about 1 point (Table 9).


Multi-scale testing. In the above, all results are obtained by single-scale training/testing as in [32], where the image’s shorter side is s = 600 pixels. Multi-scale training/testing has been developed in [12, 7] by selecting a scale from a feature pyramid, and in [33] by using maxout layers. In our current implementation, we have performed multi-scale testing following [33]; we have not performed multi-scale training because of limited time. In addition, we have performed multi-scale testing only for the Fast R-CNN step (but not yet for the RPN step). With a trained model, we compute conv feature maps on an image pyramid, where the image’s shorter sides are s ∈ {200,400,600,800,1000}.


We select two adjacent scales from the pyramid following [33]. RoI pooling and subsequent layers are performed on the feature maps of these two scales [33], which are merged by maxout as in [33]. Multi-scale testing improves the mAP by over 2 points (Table 9).


Using validation data. Next we use the 80k+40k trainval set for training and the 20k test-dev set for evaluation. The test-dev set has no publicly available ground truth and the result is reported by the evaluation server. Under this setting, the results are an mAP@.5 of 55.7% and an mAP@[.5, .95] of 34.9% (Table 9). This is our single-model result.

Ensemble. In Faster R-CNN, the system is designed to learn region proposals and also object classifiers, so an ensemble can be used to boost both tasks. We use an ensemble for proposing regions, and the union set of proposals are processed by an ensemble of per-region classifiers. Table 9 shows our result based on an ensemble of 3 networks. The mAP is 59.0% and 37.4% on the test-dev set. This result won the 1st place in the detection task in COCO 2015.


PASCAL VOC

We revisit the PASCAL VOC dataset based on the above model. With the single model on the COCO dataset (55.7% mAP@.5 in Table 9), we fine-tune this model on the PASCAL VOC sets. The improvements of box refinement, context, and multi-scale testing are also adopted. By doing so we achieve 85.6% mAP on PASCAL VOC 2007 (Table 10) and 83.8% on PASCAL VOC 2012 (Table 11). The result on PASCAL VOC 2012 is 10 points higher than the previous state-of-the-art result [6].

ImageNet Detection

The ImageNet Detection (DET) task involves 200 object categories. The accuracy is evaluated by mAP@.5. Our object detection algorithm for ImageNet DET is the same as that for MS COCO in Table 9. The networks are pretrained on the 1000-class ImageNet classification set, and are fine-tuned on the DET data. We split the validation set into two parts (val1/val2) following [8]. We fine-tune the detection models using the DET training set and the val1 set. The val2 set is used for validation. We do not use other ILSVRC 2015 data. Our single model with ResNet-101 has 58.8% mAP and our ensemble of 3 models has 62.1% mAP on the DET test set (Table 12). This result won the 1st place in the ImageNet detection task in ILSVRC 2015, surpassing the second place by 8.5 points (absolute).


C. ImageNet Localization

The ImageNet Localization (LOC) task [36] requires to classify and localize the objects. Following [40, 41], we assume that the image-level classifiers are first adopted for predicting the class labels of an image, and the localization algorithm only accounts for predicting bounding boxes based on the predicted classes. We adopt the “per-class regression” (PCR) strategy [40, 41], learning a bounding box regressor for each class. We pre-train the networks for ImageNet classification and then fine-tune them for localization. We train networks on the provided 1000-class ImageNet training set.


Our localization algorithm is based on the RPN framework of [32] with a few modifications. Unlike the way in [32] that is category-agnostic, our RPN for localization is designed in a per-class form. This RPN ends with two sibling 1×1 convolutional layers for binary classification (cls) and box regression (reg), as in [32]. The cls and reg layers are both in a per-class form, in contrast to [32]. Specifically, the cls layer has a 1000-d output, and each dimension is binary logistic regression for predicting being or not being an object class; the reg layer has a 1000×4-d output consisting of box regressors for 1000 classes. As in [32], our bounding box regression is with reference to multiple translation-invariant “anchor” boxes at each position.
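The two sibling heads are plain 1×1 convolutions over the shared feature map. A sketch with assumed sizes (the 256 input channels and single anchor per position are illustrative; the paper uses multiple anchors per position):

```python
import torch.nn as nn

num_classes, num_anchors = 1000, 1

# Per-class cls: one binary logistic output per class (and per anchor).
cls_head = nn.Conv2d(256, num_classes * num_anchors, kernel_size=1)
# Per-class reg: 4 box-regression terms per class (and per anchor).
reg_head = nn.Conv2d(256, 4 * num_classes * num_anchors, kernel_size=1)
```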

As in our ImageNet classification training (Sec. 3.4), we randomly sample 224×224 crops for data augmentation. We use a mini-batch size of 256 images for fine-tuning. To avoid negative samples being dominant, 8 anchors are randomly sampled for each image, where the sampled positive and negative anchors have a ratio of 1:1 [32]. For testing, the network is applied on the image fully-convolutionally.

Table 13 compares the localization results. Following [41], we first perform “oracle” testing using the ground-truth class as the classification prediction. VGG’s paper reports a center-crop error of 33.1% (Table 13) using ground truth classes. Under the same setting, our RPN method using ResNet-101 net significantly reduces the center-crop error to 13.3%. This comparison demonstrates the excellent performance of our framework. With dense (fully convolutional) and multi-scale testing, our ResNet-101 has an error of 11.7% using ground truth classes. Using ResNet-101 for predicting classes (4.6% top-5 classification error, Table 4), the top-5 localization error is 14.4%.

The above results are only based on the proposal network (RPN) in Faster R-CNN [32]. One may use the detection network (Fast R-CNN [7]) in Faster R-CNN to improve the results. But we notice that on this dataset, one image usually contains a single dominant object, and the proposal regions highly overlap with each other and thus have very similar RoI-pooled features. As a result, the image-centric training of Fast R-CNN [7] generates samples of small variations, which may not be desired for stochastic training. Motivated by this, in our current experiment we use the original R-CNN [8] that is RoI-centric, in place of Fast R-CNN.

Our R-CNN implementation is as follows. We apply the per-class RPN trained as above on the training images to predict bounding boxes for the ground truth class. These predicted boxes play a role of class-dependent proposals. For each training image, the highest scored 200 proposals are extracted as training samples to train an R-CNN classifier. The image region is cropped from a proposal, warped to 224×224 pixels, and fed into the classification network as in R-CNN [8]. The outputs of this network consist of two sibling fc layers for cls and reg, also in a per-class form. This R-CNN network is fine-tuned on the training set using a mini-batch size of 256 in the RoI-centric fashion. For testing, the RPN generates the highest scored 200 proposals for each predicted class, and the R-CNN network is used to update these proposals’ scores and box positions.


This method reduces the top-5 localization error to 10.6% (Table 13). This is our single-model result on the validation set. Using an ensemble of networks for both classification and localization, we achieve a top-5 localization error of 9.0% on the test set. This number significantly outperforms the ILSVRC 14 results (Table 14), showing a 64% relative reduction of error. This result won the 1st place in the ImageNet localization task in ILSVRC 2015.

