Before we start: the blogger asks everyone to be sure to read the notes — all of the blogger's effort went into them. If you find this helpful, remember to like, bookmark, and follow!

Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation

Abstract
Spatial pyramid pooling modules (Note 1) and encoder-decoder structures (Note 2) are used in deep neural networks for the semantic segmentation task. The former networks (SPP) are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks (encoder-decoder) can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages of both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results, especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution (Note 3) to both the Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on the PASCAL VOC 2012 and Cityscapes datasets, achieving test set performance of 89% and 82.1% without any post-processing. Our paper is accompanied by a publicly available reference implementation of the proposed models in TensorFlow (the framework used by the authors) at https://github.com/tensorflow/models/tree/master/research/deeplab.

Notes on the Abstract
Note 1: In this paper, SPP is generalized into an atrous spatial pyramid pooling (ASPP) module: the output of the DCNN backbone (Xception) is passed in parallel through a 1×1 convolution and several 3×3 convolutions with different atrous rates, and the resulting feature maps are concatenated.
Note 2: The encoder-decoder is a classic deep-learning architecture: an encoder block extracts image features while shrinking the spatial size, and a decoder block then fills pixels back in and restores the resolution. In this paper, the authors rewrite Xception and the SPP module with depthwise separable convolutions to form the encoder, and build the decoder by combining 4× upsampling with convolutions.
Note 3: Anyone who has touched deep learning knows there are many kinds of convolution. Drawing a rough line, we can split them into the standard convolution and non-standard ones such as deformable convolution and atrous convolution. Depthwise separable convolution is another non-standard variant: it factors the convolution into a depthwise part and a pointwise part. A reproduction sketch is given right below.
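
Expanding on Note 3, here is a minimal, hedged sketch of the depthwise + pointwise factorization using tf.keras layers. This is my own illustration, not the authors' released code; the input shape and channel counts are arbitrary placeholders chosen only to compare parameter counts.

```python
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_separable_conv(x, out_channels, kernel_size=3, stride=1):
    """Factorize a standard convolution into a depthwise and a pointwise stage."""
    # Depthwise: one spatial 3x3 filter per input channel, no cross-channel mixing.
    x = layers.DepthwiseConv2D(kernel_size, strides=stride, padding='same', use_bias=False)(x)
    # Pointwise: 1x1 convolution that mixes channels and sets the output depth.
    x = layers.Conv2D(out_channels, 1, padding='same', use_bias=False)(x)
    return x

# Toy comparison of parameter counts (shapes are placeholders).
inputs = tf.keras.Input(shape=(65, 65, 256))
standard = tf.keras.Model(inputs, layers.Conv2D(256, 3, padding='same', use_bias=False)(inputs))
separable = tf.keras.Model(inputs, depthwise_separable_conv(inputs, 256))
print(standard.count_params(), separable.count_params())  # ~590k vs ~68k weights
```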

Keywords: Semantic image segmentation, spatial pyramid pooling, encoder-decoder, and depthwise separable convolution.

1 Introduction

Semantic segmentation, whose goal is to assign a semantic label to every pixel in an image, is one of the fundamental topics in computer vision. Deep convolutional neural networks based on the fully convolutional network show striking improvements over systems relying on hand-crafted features on benchmark tasks. In this work, we consider two types of neural networks that use a spatial pyramid pooling module or an encoder-decoder structure for semantic segmentation, where the former captures rich contextual information by pooling features at different resolutions while the latter is able to obtain sharp object boundaries. (Note 1)

In order to capture contextual information at multiple scales, DeepLabv3 (the authors' previous work) applies several parallel atrous convolutions with different rates (Note 2) (called Atrous Spatial Pyramid Pooling, or ASPP), while PSPNet performs pooling operations at different grid scales. Even though rich semantic information is encoded in the last feature map, detailed information related to object boundaries is missing due to the pooling or strided convolutions within the network backbone. This could be alleviated by applying atrous convolution to extract denser feature maps. However, given the design of state-of-the-art neural networks and limited GPU memory, it is computationally prohibitive to extract output feature maps that are 8, or even 4, times smaller than the input resolution. Taking ResNet-101 for example, when applying atrous convolution to extract output features that are 16 times smaller than the input resolution, the features within the last 3 residual blocks (9 layers) have to be dilated. Even worse, 26 residual blocks (78 layers!) will be affected if output features that are 8 times smaller than the input are desired. Thus, extracting denser output features is computationally intensive for this type of model. On the other hand, encoder-decoder models lend themselves to faster computation in the encoder path (since no features are dilated) and gradually recover sharp object boundaries in the decoder path. Attempting to combine the advantages of both methods, we propose to enrich the encoder module in encoder-decoder networks by incorporating multi-scale contextual information. (Note 3)

In particular, our proposed model, called DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to recover the object boundaries, as illustrated in Fig. 1 (Note 4). The rich semantic information is encoded in the output of DeepLabv3, with atrous convolution allowing one to control the density of the encoder features depending on the budget of computation resources. Furthermore, the decoder module allows detailed object boundary recovery.

Motivated by the recent success of depthwise separable convolution, we also explore this operation and show improvements in terms of both speed and accuracy by adapting the Xception model (Note 5), similar to [31], for the task of semantic segmentation, and by applying the atrous separable convolution to both the ASPP and decoder modules. Finally, we demonstrate the effectiveness of the proposed model on the PASCAL VOC 2012 and Cityscapes datasets and attain test set performance of 89.0% and 82.1% without any post-processing, setting a new state-of-the-art.

To summarize, our contributions (and these really can be called contributions — seriously impressive work):
We propose a novel encoder-decoder structure which employs DeepLabv3 as a powerful encoder module together with a simple yet effective decoder module.
In our structure, one can arbitrarily control the resolution of the extracted encoder features via atrous convolution to trade off accuracy and runtime, which is not possible with existing encoder-decoder models.
We adapt the Xception model for the segmentation task and apply depthwise separable convolution to both the ASPP module and the decoder module, resulting in a faster and stronger encoder-decoder network.
Our proposed model attains new state-of-the-art performance on the PASCAL VOC 2012 and Cityscapes datasets. We also provide a detailed analysis of design choices and model variants.
We make our TensorFlow-based implementation of the proposed models publicly available at https://github.com/tensorflow/models/tree/master/research/deeplab.

Notes on the Introduction
Note 1: A brief sketch of the current research landscape, leading into the method proposed in this paper.
Note 2: Atrous (dilated) convolution effectively inserts "holes" between the taps of the convolution kernel, so that the filter samples the input at spaced-out, discrete positions rather than contiguous ones before the usual convolution is applied; a minimal sketch follows right after these notes.
Note 3: Introduces ASPP and the encoder-decoder structure.
Note 4: A reading of Fig. 1. As shown in the figure, the authors present three network structures: (a) the parallel spatial pyramid pooling structure, which in essence introduces a dedicated "pooling" module into the network so that as many image features as possible are preserved during encoding; (b) the classic symmetric encoder-decoder structure, in which every downsampling step is matched by an upsampling step and skip connections between encoder and decoder fuse low-level appearance information with high-level semantic information; and (c) the authors' design, which fuses (a) and (b) by placing the ASPP inside the encoder module of an encoder-decoder network.
Note 5: Readers who want to learn more about Xception can click here.
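
As a quick illustration of Note 2 (a minimal sketch of my own with placeholder shapes, not code from the paper): in tf.keras, an atrous convolution is just an ordinary convolution whose kernel taps are spread apart by the rate, so the field-of-view grows while the output size and parameter count stay the same.

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal([1, 33, 33, 64])  # placeholder feature map

standard = layers.Conv2D(64, 3, padding='same', dilation_rate=1)  # rate 1 = ordinary convolution
atrous   = layers.Conv2D(64, 3, padding='same', dilation_rate=2)  # rate 2 inserts one "hole" between taps

print(standard(x).shape, atrous(x).shape)                 # same output resolution: (1, 33, 33, 64)
print(standard.count_params() == atrous.count_params())   # True: dilation adds no parameters
```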

2 Related Work

Models based on fully convolutional networks (FCNs) have demonstrated significant improvements on several segmentation benchmarks. Several model variants have been proposed to exploit contextual information for segmentation, including those that employ multi-scale inputs (i.e., an image pyramid) and those that adopt probabilistic graphical models (such as DenseCRF with an efficient inference algorithm). In this work, we mainly discuss the models that use spatial pyramid pooling and encoder-decoder structures. (Note 1)

Spatial pyramid pooling: Models such as PSPNet [24] or DeepLab [39,23] perform spatial pyramid pooling [18,19] at several grid scales (including image-level pooling [52]) or apply several parallel atrous convolutions with different rates (called Atrous Spatial Pyramid Pooling, or ASPP). These models have shown promising results on several segmentation benchmarks by exploiting multi-scale information. (Note 2)

Encoder-decoder: Encoder-decoder networks have been successfully applied to many computer vision tasks, including human pose estimation, object detection, and semantic segmentation. Typically, an encoder-decoder network contains (1) an encoder module that gradually reduces the feature maps and captures higher-level semantic information, and (2) a decoder module that gradually recovers the spatial information. Building on top of this idea, we propose to use DeepLabv3 as the encoder module and add a simple yet effective decoder module to obtain sharper segmentations. (Note 3)

Depthwise separable convolution: Depthwise separable convolution, or group convolution, is a powerful operation that reduces the computation cost and the number of parameters while maintaining similar (or slightly better) performance. This operation has been adopted in many recent neural network designs. In particular, we explore the Xception model, similar to [31] for their COCO 2017 detection challenge submission, and show improvements in terms of both accuracy and speed for the task of semantic segmentation. (Note 4)

Notes on Related Work
Note 1: A brief introduction to FCN; in effect the authors are criticizing FCN for its overly coarse upsampling.
Note 2: Introduces ASPP.
Note 3: Introduces the encoder-decoder structure.
Note 4: Introduces separable convolution.

3 Methods

In this section, we briefly introduce atrous convolution and depthwise separable convolution. We then review DeepLabv3, which is used as our encoder module, before discussing the proposed decoder module appended to the encoder output. We also present a modified Xception model (Note 1), which further improves the performance with faster computation.

3.1 Encoder-Decoder with Atrous Convolution

Atrous convolution: Atrous convolution, a powerful tool that allows us to explicitly control the resolution of features computed by deep convolutional neural networks and to adjust the filter's field-of-view in order to capture multi-scale information, generalizes the standard convolution operation. In the case of two-dimensional signals, for each location $i$ on the output feature map $y$ and a convolution filter $w$, atrous convolution is applied over the input feature map $x$ as follows:

$$y[i] = \sum_{k} x[i + r \cdot k]\, w[k]$$

where the atrous rate $r$ determines the stride with which we sample the input signal (we refer interested readers to [39] for more details). Note that standard convolution is the special case with rate $r = 1$. The filter's field-of-view is adaptively modified by changing the rate value.
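
As a concrete instance of the equation above (illustrative numbers of my own, not from the paper), take a one-dimensional filter of size 3 and rate $r = 2$:

$$y[i] = x[i]\,w[0] + x[i+2]\,w[1] + x[i+4]\,w[2]$$

The three weights now span an effective field-of-view of 5 input samples; setting $r = 1$ recovers the standard contiguous convolution.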

Depthwise separable convolution: Depthwise separable convolution, which factorizes a standard convolution into a depthwise convolution followed by a pointwise convolution (i.e., a 1×1 convolution), drastically reduces the computation complexity. Specifically, the depthwise convolution performs a spatial convolution independently for each input channel, while the pointwise convolution is employed to combine the outputs of the depthwise convolution. In the TensorFlow implementation of depthwise separable convolution, atrous convolution is supported in the depthwise convolution (i.e., the spatial convolution), as illustrated in Fig. 3. In this work, we refer to the resulting convolution as atrous separable convolution, and we found that atrous separable convolution significantly reduces the computation complexity of the proposed model while maintaining similar (or better) performance.
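
A minimal sketch of the atrous separable convolution described above: a depthwise convolution that itself uses a dilation rate, followed by a 1×1 pointwise convolution. This assumes a recent TF 2.x where DepthwiseConv2D accepts dilation_rate; the shapes and the rate value are placeholders of mine, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def atrous_separable_conv(x, out_channels, rate):
    """Depthwise 3x3 convolution with dilation (the atrous part), then a 1x1 pointwise conv."""
    x = layers.DepthwiseConv2D(3, padding='same', dilation_rate=rate, use_bias=False)(x)
    x = layers.Conv2D(out_channels, 1, padding='same', use_bias=False)(x)
    return x

features = tf.keras.Input(shape=(33, 33, 256))         # placeholder encoder feature map
out = atrous_separable_conv(features, 256, rate=12)    # rate chosen only for illustration
print(tf.keras.Model(features, out).count_params())    # 2,304 depthwise + 65,536 pointwise weights
```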

DeepLabv3 as encoder: DeepLabv3 employs atrous convolution to extract the features computed by deep convolutional neural networks at an arbitrary resolution. Here, we denote the output stride as the ratio of the input image spatial resolution to the final output resolution (before global pooling or the fully connected layer). For the task of image classification, the spatial resolution of the final feature maps is usually 32 times smaller than the input image resolution, and thus output stride = 32. For the task of semantic segmentation, one can adopt output stride = 16 (or 8) for denser feature extraction by removing the striding in the last one (or two) block(s) and applying the atrous convolution correspondingly (e.g., we apply rate = 2 and rate = 4 to the last two blocks, respectively, for output stride = 8).
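
The paragraph above maps each output stride choice to per-block strides and atrous rates. A small helper of my own (the block names are hypothetical; only the "remove the last one or two strides, apply rates 2 and 4" rule comes from the text) could look like this:

```python
def backbone_config(output_stride):
    """Return (stride, atrous rate) for the last two backbone blocks, following the text:
    keep all striding for 32, dilate the last block for 16, the last two blocks for 8."""
    if output_stride == 32:
        return {'block3': (2, 1), 'block4': (2, 1)}
    if output_stride == 16:
        return {'block3': (2, 1), 'block4': (1, 2)}   # remove last stride, rate = 2
    if output_stride == 8:
        return {'block3': (1, 2), 'block4': (1, 4)}   # remove last two strides, rates 2 and 4
    raise ValueError('output_stride must be 8, 16 or 32')

print(backbone_config(16))
```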

Additionally, DeepLabv3 augments the Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales by applying atrous convolution with different rates, with image-level features. We use the last feature map before the logits in the original DeepLabv3 as the encoder output in our proposed encoder-decoder structure. Note that the encoder output feature map contains 256 channels and rich semantic information. Besides, one can extract features at an arbitrary resolution by applying atrous convolution, depending on the computation budget.
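
A hedged Keras sketch of the ASPP head described above: parallel 1×1 and 3×3 atrous branches plus image-level pooling, concatenated and projected to a 256-channel encoder output. The rates (6, 12, 18) are the values DeepLabv3 commonly uses at output stride 16, and the shapes are placeholders; treat both as assumptions rather than this paper's exact configuration (assumes a recent TF 2.x).

```python
import tensorflow as tf
from tensorflow.keras import layers

def aspp(x, rates=(6, 12, 18), depth=256):
    """Parallel 1x1 and 3x3 atrous branches plus image-level pooling, concatenated and fused."""
    height, width = int(x.shape[1]), int(x.shape[2])
    branches = [layers.Conv2D(depth, 1, padding='same', activation='relu')(x)]   # 1x1 branch
    for r in rates:                                                              # 3x3 atrous branches
        branches.append(layers.Conv2D(depth, 3, padding='same',
                                      dilation_rate=r, activation='relu')(x))
    # Image-level features: global average pool, 1x1 conv, resize back to the feature size.
    pooled = layers.GlobalAveragePooling2D(keepdims=True)(x)
    pooled = layers.Conv2D(depth, 1, activation='relu')(pooled)
    pooled = tf.image.resize(pooled, (height, width), method='bilinear')
    branches.append(pooled)
    fused = layers.Concatenate()(branches)
    return layers.Conv2D(depth, 1, padding='same', activation='relu')(fused)     # 256-channel output

backbone_features = tf.keras.Input(shape=(33, 33, 2048))   # e.g. backbone output at output stride 16
encoder_output = aspp(backbone_features)
print(encoder_output.shape)   # (None, 33, 33, 256)
```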

Proposed decoder: The encoder features from DeepLabv3 are usually computed with output stride = 16. In the work of [23], the features are bilinearly upsampled by a factor of 16, which can be considered a naive decoder module. However, this naive decoder module may not successfully recover object segmentation details. We therefore propose a simple yet effective decoder module, as illustrated in Fig. 2. The encoder features are first bilinearly upsampled by a factor of 4 and then concatenated with the corresponding low-level features [73] from the network backbone that have the same spatial resolution (e.g., Conv2 before striding in ResNet-101 [25]). We apply another 1×1 convolution on the low-level features to reduce the number of channels, since the corresponding low-level features usually contain a large number of channels (e.g., 256 or 512), which may outweigh the importance of the rich encoder features (only 256 channels in our model) and make training harder. After the concatenation, we apply a few 3×3 convolutions to refine the features, followed by another simple bilinear upsampling by a factor of 4. We show in Sec. 4 that using output stride = 16 for the encoder module strikes the best trade-off between speed and accuracy. The performance is marginally improved when using output stride = 8 for the encoder module, at the cost of extra computation complexity.
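
A hedged sketch of the decoder path just described. The 48-channel reduction of the low-level features, the use of exactly two 3×3 refinement convolutions, and the 21 classes (PASCAL VOC) are not spelled out in this excerpt, so treat them as assumptions of mine rather than the paper's exact values.

```python
import tensorflow as tf
from tensorflow.keras import layers

def decoder(encoder_out, low_level, num_classes=21):
    """Upsample encoder output 4x, fuse with reduced low-level features, refine, upsample 4x."""
    # Reduce the low-level channels so they do not outweigh the 256 encoder channels.
    low = layers.Conv2D(48, 1, padding='same', activation='relu')(low_level)
    x = layers.UpSampling2D(size=4, interpolation='bilinear')(encoder_out)
    x = layers.Concatenate()([x, low])
    x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)   # a few 3x3 convs to refine
    x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
    x = layers.Conv2D(num_classes, 1, padding='same')(x)              # per-pixel logits
    return layers.UpSampling2D(size=4, interpolation='bilinear')(x)

encoder_out = tf.keras.Input(shape=(33, 33, 256))     # ASPP output at output stride 16
low_level   = tf.keras.Input(shape=(132, 132, 256))   # e.g. Conv2 features at output stride 4
logits = decoder(encoder_out, low_level)
print(logits.shape)
```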

3.2 Modified Aligned Xception

The Xception model [26] has shown promising image classification results on ImageNet [74] with fast computation. More recently, the MSRA team [31] modified the Xception model (calling it Aligned Xception) and further pushed the performance on the task of object detection. Motivated by these findings, we work in the same direction to adapt the Xception model for the task of semantic image segmentation. In particular, we make a few more changes on top of MSRA's modifications, namely (1) a deeper Xception, the same as in [31], except that we do not modify the entry flow network structure, for fast computation and memory efficiency; (2) all max pooling operations are replaced by depthwise separable convolutions with striding, which enables us to apply atrous separable convolution to extract feature maps at an arbitrary resolution (another option is to extend the atrous algorithm to max pooling operations); and (3) extra batch normalization [75] and ReLU activation are added after each 3×3 depthwise convolution, similar to the MobileNet design [29]. See Fig. 4 for details.
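
To illustrate points (2) and (3) above, here is a hedged sketch of one building block in the spirit of the modified Xception: a strided separable convolution takes the place of max pooling, and batch normalization plus ReLU follow each 3×3 depthwise convolution. The channel counts, the three-conv unit, and the residual shortcut are placeholders of mine, not the full entry/middle/exit flow from Fig. 4.

```python
import tensorflow as tf
from tensorflow.keras import layers

def sep_conv_bn_relu(x, out_channels, stride=1, rate=1):
    """3x3 depthwise conv -> BN -> ReLU -> 1x1 pointwise conv -> BN -> ReLU."""
    x = layers.DepthwiseConv2D(3, strides=stride, padding='same',
                               dilation_rate=rate, use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(out_channels, 1, padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def xception_unit(x, out_channels, stride=2):
    """Three separable convs; the last one is strided instead of ending with max pooling."""
    residual = layers.Conv2D(out_channels, 1, strides=stride, use_bias=False)(x)
    residual = layers.BatchNormalization()(residual)
    x = sep_conv_bn_relu(x, out_channels)
    x = sep_conv_bn_relu(x, out_channels)
    x = sep_conv_bn_relu(x, out_channels, stride=stride)  # replaces the max-pooling layer
    return layers.Add()([x, residual])

inputs = tf.keras.Input(shape=(129, 129, 128))
print(xception_unit(inputs, 256).shape)   # spatial size halved by the strided separable conv
```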

Notes on the Methods
Note 1: For a reproduction of the modified Xception model, click here.

4 Experimental Evaluation (omitted)

5 Conclusion

Our proposed model, DeepLabv3+, employs an encoder-decoder structure in which DeepLabv3 is used to encode the rich contextual information and a simple yet effective decoder module is adopted to recover the object boundaries. One can also apply atrous convolution to extract the encoder features at an arbitrary resolution, depending on the available computation resources. We also explore the Xception model and atrous separable convolution to make the proposed model faster and stronger. Finally, our experimental results show that the proposed model sets a new state-of-the-art performance on the PASCAL VOC 2012 and Cityscapes datasets.
