Xception: Meet The Xtreme Inception

Intro to Xception

Xception - the Extreme Inception! Sounds cool and Xtreme! But "why such a name?", one might wonder. One obvious thing is that the author, Francois Chollet (creator of Keras), was inspired by the Inception architecture. He describes how he views the Inception architecture in his abstract, which I've quoted below.


We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation


-Francois Chollet in the Xception paper


Another new deep learning term for today: "depthwise separable convolution". Now, I know that most of you have already guessed that it's some kind of layer. But for those who have never heard of it, fret not; I've got you covered in this post. We'll go deeper into what a depthwise separable convolution is, how it's used to build the Xception model, and how it builds upon the Inception hypothesis. As always, I'll try to throw in more illustrations to make the details clear. Let's dive in!


The Inception Hypothesis

The Inception-style architecture was introduced in the "Going Deeper with Convolutions" paper. The authors called the model introduced in the paper GoogLeNet, and it used Inception blocks. It was a novel and innovative architecture, and it still is. It also got a lot of attention, since many architectures at the time were simply stacking more and more layers to increase network capacity. Inception, on the other hand, was more creative and slick!


Rather than just going deeper by adding more layers, it also went wide. We'll see what I mean by "wide" shortly. The Inception blocks take in an input tensor and perform a combination of convolution and pooling operations in parallel.


Image by author

If you've seen or read the Inception papers, you might notice that this is not exactly an Inception block. Yeah, you're right! I've just illustrated it this way so that everybody gets a rough idea of what it does. You can call it "the naive version of Inception," as the authors did.


Now, the actual Inception block is a little bit different in terms of the number of convolutions, their sizes, and how they're layered. But this naive illustration conveys what I meant by "wide" before. The block performs convolutions with different filter sizes in parallel, and the output tensors are concatenated along the channel dimension, i.e., stacked one behind the other.

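If it helps to see this in code, here's a minimal Keras sketch of such a naive parallel block. This is my own illustration, not the GoogLeNet code; the filter counts and the input shape are arbitrary placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers

def naive_inception_block(x):
    # Parallel convolutions with different filter sizes, plus a pooling branch.
    b1 = layers.Conv2D(64, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(64, 5, padding="same", activation="relu")(x)
    bp = layers.MaxPooling2D(pool_size=3, strides=1, padding="same")(x)
    # Outputs are stacked one behind the other along the channel dimension.
    return layers.Concatenate(axis=-1)([b1, b3, b5, bp])

inputs = tf.keras.Input(shape=(32, 32, 128))
outputs = naive_inception_block(inputs)  # spatial size preserved; channels = 64 + 64 + 64 + 128
```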

Going a Bit Deeper

Now that you've seen the parallel convolution block, I shall go a bit deeper into the block above. The input tensor processed by the different convolutions is of shape (BatchSize, Height, Width, Channels). In the Inception block, the input tensor's channel dimension is reduced using a 1x1 convolution before the 3x3 or 5x5 convolutions are applied. This reduction in channel depth is done to cut down the computation when feeding the tensor to the subsequent layers. You can find this concept explained in great detail in the 1x1 convolution article.


Image by author

Again, this is just another depiction to aid understanding and is not drawn to match the original Inception block. You can add or remove some layers, maybe make it even wider, and create your own version. The input tensor is processed individually by the three convolution towers, and the three separate output tensors are concatenated along the channel dimension. GoogLeNet uses multiple Inception blocks and a few other tricks and tweaks to achieve its performance. I believe you now get the idea of what an Inception block does.

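A rough sketch of those dimension-reduced towers, again with made-up filter counts rather than the ones GoogLeNet actually uses:

```python
from tensorflow.keras import layers

def inception_block(x):
    t1 = layers.Conv2D(64, 1, padding="same", activation="relu")(x)

    t3 = layers.Conv2D(32, 1, padding="same", activation="relu")(x)   # 1x1 channel reduction
    t3 = layers.Conv2D(64, 3, padding="same", activation="relu")(t3)

    t5 = layers.Conv2D(16, 1, padding="same", activation="relu")(x)   # 1x1 channel reduction
    t5 = layers.Conv2D(32, 5, padding="same", activation="relu")(t5)

    # The three towers are concatenated along the channel dimension.
    return layers.Concatenate(axis=-1)([t1, t3, t5])
```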

Next, we shall remodel the above block to make it “Xtreme”!


Making it Xtreme

We'll replace the three separate 1x1 layers, one in each of the parallel towers, with a single shared layer. It'll look something like this.


Inception block with a common 1x1 layer - Image by author

Rather than just passing the output of the 1x1 layer on to the following 3x3 layers, we'll slice it and pass each channel separately. Let me illustrate with an example. Say the output of the 1x1 layer has shape (1x5x5x5). Let's ignore the batch dimension and just see it as a (5x5x5) tensor. This is sliced along the channel dimension, as shown below, and fed separately to the following layers.


Slicing of the output tensor of 1x1 convolution along the channel dimension - Image by author

Now, each of the slices is passed on to a separate 3x3 layer. This means that each 3x3 block will have to process a tensor of shape (5x5x1). Therefore, there’ll be 5 separate convolution blocks, one for each slice. And each of the convolution blocks will just have a single filter.


Image by author

This same process scales up to bigger input tensors. If the output of the pointwise convolution is (5x5x100), there'd be 100 convolution blocks, each with one filter. And all of their outputs would be concatenated at the end. This way of doing convolution is why it's called EXTREME: each channel of the input tensor is processed separately.

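To make the slicing concrete, here's a small sketch of this "Xtreme" step on the (1x5x5x5) example. It isn't code from the paper; it just splits the tensor per channel and applies a separate single-filter 3x3 convolution to each slice.

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 5, 5, 5))                    # output of the shared 1x1 layer

slices = tf.split(x, num_or_size_splits=5, axis=-1)   # five (1, 5, 5, 1) slices
# One separate 3x3 convolution block per slice, each with a single filter.
outputs = [layers.Conv2D(1, 3, padding="same")(s) for s in slices]
y = tf.concat(outputs, axis=-1)                       # concatenated back along channels: (1, 5, 5, 5)
```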

We have seen enough about the idea of Inception and an Xtreme version of it too. Now, let’s venture further to see what powers the Xception architecture.


Depthwise Separable Convolution: That Which Powers Xception

The depthwise separable convolution layer is what powers Xception, and the architecture uses it heavily. This type of convolution is similar to the extreme version of the Inception block we saw above, but differs slightly in how it works. Let's see how!


Consider a typical convolution layer with ten 5x5 filters operating on a (1x10x10x100) tensor. Each of the ten filters is of shape (5x5x100) and slides over the input tensor to produce the output. Each of the 5x5 filters covers the whole channel dimension (the entire 100) as it slides over the input. This means a typical convolution operation encompasses both the spatial (height, width) and the channel dimensions.


Image by author
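As a quick sanity check of those numbers, here's the same layer sketched in Keras (the shapes are the ones from the paragraph above):

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 10, 10, 100))            # (batch, H, W, C)
conv = layers.Conv2D(filters=10, kernel_size=5)   # each filter spans all 100 channels: (5, 5, 100)
y = conv(x)
print(y.shape)                                    # (1, 6, 6, 10) with the default 'valid' padding
print(conv.count_params())                        # 5*5*100*10 weights + 10 biases = 25,010
```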

If you’re not familiar with convolutions, I suggest you go through How Convolution Works?


A depthwise separable layer has two functional parts that split the job of a conventional convolution layer. The two parts are depthwise convolution and pointwise convolution. We’ll go through them one by one.


Depthwise Convolution

Let's take the example of a depthwise convolution layer with 3x3 filters operating on an input tensor of shape (1x5x5x5). Again, let's drop the batch dimension for simplicity, as it doesn't change anything, and consider it a (5x5x5) tensor. Our depthwise convolution will have five 3x3 filters, one for each channel of the input tensor. And each filter will slide spatially over a single channel and generate the output feature map for that channel.


As the number of filters is equal to the number of channels of the input, the output tensor will also have the same number of channels. Let’s not have any zero paddings in the convolution operation and keep the stride as 1.


Image by author

Going by the formula for the output size after convolution, (Input - Filter + 2 * Padding) / Stride + 1 = (5 - 3 + 0) / 1 + 1 = 3, our (5x5x5) input will become a (3x3x5) tensor. The illustration below will make the idea clear!


Illustration of Depthwise Convolution Operation - Image by author

That's Depthwise Convolution for you! You can see that it's almost the same as the way we did the Xtreme convolution in the Inception block.

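Here's the same (5x5x5) example as a tiny Keras sketch, using the built-in DepthwiseConv2D layer (one 3x3 filter per input channel, stride 1, no padding):

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 5, 5, 5))                                       # (batch, H, W, C)
depthwise = layers.DepthwiseConv2D(kernel_size=3, strides=1, padding="valid")
y = depthwise(x)
print(y.shape)                                                            # (1, 3, 3, 5): channel count unchanged
```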

Next up, we have to feed this output tensor to a pointwise convolution which performs cross-channel correlation. It simply means that it operates across all the channels of the tensor.


Pointwise Convolution

The pointwise convolution is just another name for a 1x1 convolution. If we ever want to increase or decrease the depth (channel dimension) of a tensor, we can use a pointwise convolution. That’s why it was used in the Inception block to reduce the depth before the 3x3 or 5x5 layers. Here, we’re gonna use it to increase the depth. But how?


The pointwise convolution is just a normal convolution layer with a filter size of one (1x1 filters). Therefore, it doesn't change the spatial size of the output. In our example, the output tensor of the depthwise convolution has a size of (3x3x5). If we apply 50 1x1 filters, we'll get an output of (3x3x50). A ReLU activation is applied in the pointwise convolution layer.

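Continuing the same example in Keras, a pointwise convolution is just a Conv2D with kernel size 1; here 50 filters grow the depth while leaving the spatial size alone:

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 3, 3, 5))                              # the depthwise output from above
pointwise = layers.Conv2D(50, kernel_size=1, activation="relu")  # 50 1x1 filters with ReLU
y = pointwise(x)
print(y.shape)                                                   # (1, 3, 3, 50)
```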

See Pointwise Convolution for more detailed illustrations and its advantages.


Combining the depthwise convolution and pointwise convolution, we get the Depthwise Separable Convolution. Let’s just call it DSC from here.

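Put together, the two steps look like this in Keras. The library also ships a fused SeparableConv2D layer, which is the DSC as a single layer; the 50-filter count below is just carried over from the example above.

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(5, 5, 5))
# Depthwise step: spatial correlations, one 3x3 filter per channel.
x = layers.DepthwiseConv2D(kernel_size=3, padding="valid")(inputs)   # -> (3, 3, 5)
# Pointwise step: cross-channel correlations via 1x1 filters.
x = layers.Conv2D(50, kernel_size=1, activation="relu")(x)           # -> (3, 3, 50)
dsc = tf.keras.Model(inputs, x)

# Equivalent single layer provided by Keras:
fused = layers.SeparableConv2D(50, kernel_size=3, padding="valid", activation="relu")
```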

Differences between DSC and the Xtreme Inception

In the Inception block, the pointwise convolution comes first, followed by the 3x3 or 5x5 layer; in the DSC, the depthwise convolution comes first. But since we'd be stacking DSC blocks one above the other, the order doesn't matter much. The Inception block applies an activation function after both the pointwise and the following convolution layers, whereas in the DSC it's applied just once, after the pointwise convolution.


Image by author

The Xception author discusses the effect of having an activation on both the depthwise and pointwise steps in the DSC, and observes that learning is faster when there's no intermediate activation.


Illustration of DSC with and without an intermediate activation - Image by author
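A toy helper to show where that intermediate activation would sit; the flag name is mine, and this is only a sketch of the two variants being compared, not the paper's code:

```python
from tensorflow.keras import layers

def dsc_block(x, filters, intermediate_activation=False):
    x = layers.DepthwiseConv2D(3, padding="same")(x)
    if intermediate_activation:
        x = layers.Activation("relu")(x)   # the variant observed to learn more slowly
    x = layers.Conv2D(filters, kernel_size=1)(x)
    return layers.Activation("relu")(x)
```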

Xception Architecture

The author splits the entire Xception architecture into 14 modules, where each module is just a bunch of DSC and pooling layers. The 14 modules are grouped into three flows: the entry flow, the middle flow, and the exit flow, which contain four, eight, and two modules respectively. The final group, i.e., the exit flow, can optionally have fully connected layers at the end.


Note: All the DSC layers in the architecture use a filter size of 3x3, stride 1, and “same” padding. And all the MaxPooling layers use a 3x3 kernel and a stride of 2.


Entry Flow of Xception

Image by author

The above illustration is a detailed version of the one given in the Xception paper. It might seem intimidating at first, but look again; it's very simple.


The very first module contains conventional convolution layers and no DSC ones. It takes input tensors of size (-1, 299, 299, 3). The -1 in the first dimension represents the batch size; the -1 just denotes that the batch size can be anything.


Every convolution layer, both conventional and DSC, is followed by a Batch Normalization layer. The convolutions with a stride of 2 reduce the spatial size by almost half. The output shape is shown alongside each layer, calculated using the convolution formula we saw before.


Image by author

Excluding the first module, all the others in the entry flow have residual skip connections. Each parallel skip connection has a pointwise convolution layer whose output gets added to the output of the main path.

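Here's roughly what one of those residual entry-flow modules looks like in Keras. This is my reading of the diagram, not the reference implementation, and the filter count is a placeholder.

```python
from tensorflow.keras import layers

def entry_flow_module(x, filters):
    # Skip branch: strided pointwise convolution so the shapes match for the addition.
    skip = layers.Conv2D(filters, 1, strides=2, padding="same")(x)
    skip = layers.BatchNormalization()(skip)

    # Main path: two DSC layers followed by max pooling with stride 2.
    x = layers.SeparableConv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(pool_size=3, strides=2, padding="same")(x)

    return layers.Add()([x, skip])
```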

Middle Flow of Xception

Illustration of middle flow - Image by author

The above module is repeated eight times, one after the other, to form the middle flow. All eight modules in the middle flow use a stride of 1 and don't have any pooling layers, so the spatial size of the tensor passed in from the entry flow remains the same. The channel depth remains the same too, as all the middle flow modules have 728 filters, which matches the input's depth.

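One middle-flow module, sketched under the same caveats; the plain identity skip works because the 728-filter DSC layers keep both the spatial size and the depth unchanged:

```python
from tensorflow.keras import layers

def middle_flow_module(x):
    skip = x                                   # identity skip connection
    for _ in range(3):
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(728, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
    return layers.Add()([x, skip])
```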

Exit Flow of Xception

Image by author

The exit flow has just two convolution modules, and the second one doesn't have a skip connection. The second module uses Global Average Pooling, unlike the earlier modules, which used MaxPooling. The output vector of the average pooling layer can be fed directly to a logistic regression layer, but we can optionally insert fully connected layers in between.

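A sketch of that classification head for a 1000-class problem. The 10x10x2048 feature-map shape is an assumption for illustration, and the commented-out Dense layer stands in for the optional fully connected part:

```python
import tensorflow as tf
from tensorflow.keras import layers

features = tf.keras.Input(shape=(10, 10, 2048))          # assumed exit-flow feature map
x = layers.GlobalAveragePooling2D()(features)             # -> a (batch, 2048) vector
# x = layers.Dense(2048, activation="relu")(x)            # optional fully connected layer(s)
outputs = layers.Dense(1000, activation="softmax")(x)     # the logistic-regression layer
head = tf.keras.Model(features, outputs)
```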

To Sum Up

The Xception model contains almost the same number of parameters as Inception V3, but it outperforms Inception V3 by a small margin on the ImageNet dataset and by a larger margin on the JFT image classification dataset (Google's internal dataset). Performing better with almost the same number of parameters can be attributed to its architectural engineering.

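If you just want to use the architecture, Keras bundles an Xception implementation; loading it is a one-liner (it downloads the ImageNet weights on first use):

```python
import tensorflow as tf

model = tf.keras.applications.Xception(weights="imagenet")
model.summary()
print(f"{model.count_params():,} parameters")   # roughly 23 million, in the same ballpark as Inception V3
```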

Translated from: https://towardsdatascience.com/xception-meet-the-xtreme-inception-db569755f4d6
