AI: IPPR's Mathematical Representation - Analysis of Basic CNN Structure (Conv Layer, Pooling Layer, FCN Layer / Softmax Layer)

Pinned | 2017-07-17 13:50:57 | wishchin | Reads: 958

Categories: ANN/DNN/Fiber Bundles | TuringMachine | AI/ML
Like SVM, DNN methods such as CNN see their boundary parameters grow inevitably as the demands of multi-class, high-precision tasks rise. Take the support vector machine with a Gaussian kernel, which can map into infinitely many dimensions: even for two-class classification, holding both accuracy figures high on a large dataset means the number of support vectors grows with the dataset, and the three-layer SVM network becomes very wide. The multi-layer structure of a CNN can keep the number of boundary mappings while effectively reducing the number of "support vectors"; it obtains this through function composition, a kind of factorization. As for how many layers a network should use, how many neurons each layer should have, and how two layers should be linked, in theory there ought to be general guiding rules for these as well.

References: 人工机器:作为归纳系统的深度学习 (http://blog.csdn.net/wishchin/article/details/71195098) and 卷积神经网络.卷积层和池化层 (Convolutional neural networks: convolution and pooling layers; http://www.cnblogs.com/zf-blog/p/6075286.html), plus the original CNN papers. The excerpted passages have been heavily revised; if anything is in doubt, please consult the originals.

Feature Learning and Structure Learning

Deep learning overturned the "hand-crafted features" paradigm with its "data-driven" one and accomplished "feature learning", a major step forward. Yet at the same time it fell into a "hand-crafted structure" rut of its own. Hinton's original 2006 paper in Science, on multi-layer compressive mappings, proposed obtaining the network structure by unsupervised learning and then optimizing the parameters by supervised learning; the spark that set off DNNs was precisely structure learning. Using large amounts of unlabeled data to learn the network structure was deep learning's original vision.

But whether it is the AlexNet first designed in Hinton's group or the later VGG, GoogLeNet, ResNet and so on, all were hand-designed by experienced experts. Given a new problem, what network structure is optimal (for instance, how many convolution layers) is simply not known, and this to some extent hinders the spread of deep learning to more intelligent tasks. Learning the network structure and the network parameters at the same time is therefore a research direction deserving close attention. (The hypothesis-space problem is hard to break through, and there is no unified, simple criterion even for judging over- and under-fitting, so the job is hard to hand over to a machine; compared with using data to optimize the parameters of a given structure, optimizing the structure itself is a more complex, higher-level problem. Years on, everyone has found that AlexNet is not an optimal structure, and it was itself the product of many experts enumerating structures over a long time.)

Yet what Hinton and colleagues advocated in 2006 was exactly unsupervised pre-training of deep neural networks. Learning a network structure from a superabundance of data is an even more time- and compute-hungry affair. Since then, and especially after DCNNs took off, unsupervised pre-training seems to have been abandoned by many researchers (above all in computer vision, where it drags on so long that expert experience is simply the cheaper route).

Learning a model directly from large amounts of unsupervised data really is very hard; even for the human "machine", the "feral child" cases warn us that learning with no teacher at all looks unrealistic. But the pattern of "a little supervised data + a lot of unsupervised data" may well be the one that repays serious study.

Hinton's proposal for fixing the structure by unsupervised training (use encoders/decoders, decompose and reconstruct at every encode/decode step, optimize the encoder layer by layer on the residual, and keep each layer's compression of the unlabeled samples tight) is feasible in theory but extremely hard to realize in practice. The method needs far more unlabeled samples than labeled ones to expose the patterns hidden in the mass of data, higher by one or several orders of magnitude, perhaps even an exhaustive sample. Before such structure-adjusting pre-training has finished, expert knowledge may already have arrived at a near-optimal structure.

This is a contest between human branch-and-bound and machine enumeration, and at present the machine-enumeration route is still unrealistic.

The CNN setups popular in industry generally start from an ImageNet classification model and fine-tune it to the user's own problem. This ImageNet model that everyone shares amounts to a pre-trained model whose network structure has, in effect, already been learned, much as unsupervised structure learning was meant to do.

CNN Structure Analysis: the Convolution Layer

The two-dimensional structure of a CNN naturally suits the process of image feature extraction and recognition; the computation and the function of the convolution operation amount to inverse template matching. Through its convolution layers, a CNN trains different kernels to extract the patterns hidden in an image. What CNN training yields are filters, that is, convolution kernels: in essence each responds to one specific pattern and stays silent otherwise, so convolving all the way through to the last layer keeps the feature maps with the strongest responses.

As its name says, a CNN method necessarily contains a layer of convolution operators. Taking LeNet as the example again, its structure diagram:

[Figure: the LeNet architecture diagram]

LeNet uses two convolution layers, C1 and C3. For C1: input image 32*32; kernel size 5*5; number of kernels: 6; output feature-map size 28*28 (32-5+1); number of neurons: 28*28*6; trainable parameters: (5*5+1)*6 (each filter has 5*5 = 25 unit weights and one bias, and there are 6 filters); connections: (5*5+1)*6*28*28.

C3 is also a convolution layer. Input: combinations of all 6 of S2's feature maps, or of subsets of them; kernel size: 5*5; number of kernels: 16; output feature-map size: 10*10. Each feature map of C3 connects to all, or several, of S2's 6 feature maps, meaning each map of this layer is a different combination of the maps the previous layer extracted. One scheme in use: C3's first 6 feature maps take subsets of 3 adjacent S2 feature maps as input; the next 6 take subsets of 4 adjacent maps; the following 3 take non-adjacent subsets of 4 maps; and the last takes all of S2's maps as input. Then: trainable parameters: 6*(3*25+1) + 6*(4*25+1) + 3*(4*25+1) + (25*6+1) = 1516; connections: 10*10*1516 = 151600. (The sketch after the prototxt below re-checks this arithmetic.)

The role of the convolution layer: each convolution layer in a convolutional neural network consists of a number of convolution units, whose parameters are all optimized by the back-propagation algorithm (反向传播算法, http://zh.wikipedia.org/wiki/%E5%8F%8D%E5%90%91%E4%BC%A0%E6%92%AD%E7%AE%97%E6%B3%95). The purpose of the convolution operation is to extract different features of the input. Combinations of different kernels can implement gradient computation, scale computation (together with a max-pooling layer), and so on, and can select for saliency. The first convolution layer may only extract low-level features on the order of edges, lines and corners; networks with more layers can iteratively extract more complex features from the low-level ones (this is what the depth is for).

[Figures: illustrations of the convolution and pooling operations]

Images from: 卷积神经网络.卷积层和池化层 (http://www.cnblogs.com/zf-blog/p/6075286.html); cropped from the original CNN papers.

The update model of the convolution layer: a convolution layer is something like a two-dimensional rendering of a BP network. It still computes its output by forward propagation and, during training, adjusts its weights and biases by back-propagation to update the network; see the back-propagation algorithm for convolution kernels. In the forward pass one generally converts the convolution into a multiplication between an image matrix and a kernel matrix and computes that directly.

The Caffe representation of the convolution layer: Caffe exposes the convolution layer's parameters directly; there is no need to design the convolution neurons yourself.

From the AlexNet model configuration file (https://github.com/BVLC/caffe/blob/master/models/bvlc_reference_caffenet/train_val.prototxt): output 256 convolution maps using 5*5 kernels, with Gaussian weight initialization of standard deviation 0.01. A ReLU activation layer is attached to this layer. The input from the layer above is 96 feature maps. The conv2 + relu2 definition:
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "norm1"
  top: "conv2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 1
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
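As a quick sanity check on the LeNet arithmetic above, here is a minimal Python sketch (an addition, not part of the original post; it assumes valid convolution with stride 1 and no padding):

# Recompute the C1/C3 figures quoted in the text.
def conv_output_size(input_size, kernel_size, stride=1, pad=0):
    """Square-convolution output side: (W - K + 2P) // S + 1."""
    return (input_size - kernel_size + 2 * pad) // stride + 1

# C1: 32x32 input, six 5x5 kernels.
out_c1 = conv_output_size(32, 5)                  # 28
params_c1 = (5 * 5 + 1) * 6                       # 156 (weights + biases)
connections_c1 = params_c1 * out_c1 * out_c1      # (5*5+1)*6*28*28 = 122304

# C3: 16 maps over subsets of S2's six 14x14 maps,
# grouped 6/6/3/1 as described in the text.
params_c3 = 6 * (3 * 25 + 1) + 6 * (4 * 25 + 1) \
          + 3 * (4 * 25 + 1) + (25 * 6 + 1)       # 1516
out_c3 = conv_output_size(14, 5)                  # 10
connections_c3 = out_c3 * out_c3 * params_c3      # 151600

print(out_c1, params_c1, connections_c1)          # 28 156 122304
print(params_c3, out_c3, connections_c3)          # 1516 10 151600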

CNN Structure Analysis: the Pooling Layer

The mass of features extracted by the convolution kernels is of very high dimensionality and poses a representation problem, and directly stacking convolution layers would produce an even more enormous set of convolution features. The pooling layer generally exists to select salient responses and to reduce dimensionality.

Pooling markedly reduces the dimensionality of the feature maps.

Mean pooling amounts to an averaging property: it simplifies the function model but gives up some specificity and accuracy in exchange for generalization; that is, it turns a complex model into an averaged one, with both the gain and the loss plain to see. Max pooling, in contrast, is best described as extracting the saliency of the features themselves, while also compressing the data.

The data compression that mean pooling performs can be replaced by making the network deeper (one mean-pooling layer is worth roughly a doubling of depth), while the model-simplifying character of mean pooling costs representational accuracy; it has therefore gradually been displaced and is generally no longer used. In the illustration above, a 2*2 filter is again used with stride = 2: max pooling finds the maximum within each region, extracting the main features of the original map to produce the map on the right. In a probabilistic sense, after max pooling the features are smaller and relatively more expressive.
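To make this concrete, a minimal NumPy sketch of 2*2, stride-2 max and mean pooling (an addition; it assumes the map's height and width divide evenly by the stride):

import numpy as np

def pool2d(x, size=2, stride=2, mode="max"):
    # x: (H, W) feature map; H and W are assumed multiples of `stride`.
    h, w = x.shape
    out = np.empty((h // stride, w // stride))
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            window = x[i:i + size, j:j + size]
            out[i // stride, j // stride] = window.max() if mode == "max" else window.mean()
    return out

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 5, 7],
                 [8, 9, 1, 2],
                 [3, 1, 0, 4]], dtype=float)
print(pool2d(fmap, mode="max"))   # [[6. 7.] [9. 4.]]
print(pool2d(fmap, mode="mean"))  # [[3.5  3.5 ] [5.25 1.75]]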

Reference: http://ufldl.stanford.edu/wiki/index.php/池化 (the UFLDL page on pooling)

Translation invariance of pooling: if one chooses contiguous regions of the image as pooling regions, and pools only features produced by the same (replicated) hidden units, then the pooling units are translation invariant. This means that even after the image undergoes a small translation, the same (pooled) features are still produced. In many tasks (object detection, audio recognition, and so on) we prefer features with translation invariance, because even if the image is translated, the sample's (image's) label remains unchanged. If you are processing a digit from the MNIST dataset, for example, and shift it to the left or the right, you would want your classifier to still classify it accurately as the same digit no matter where it ends up. (MNIST is a handwritten digit recognition database: http://yann.lecun.com/exdb/mnist/)

Pooling schemes: pooling can be done over a disjoint partition or with overlapping windows. Beyond that, pyramid pooling can be used: each level uses different pooling units, forming a pyramid of features; this also serves scale invariance and can absorb a certain amount of deformation.

Pyramid pooling can be used to handle a certain amount of affine deformation, as the sketch below illustrates.
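The sketch below is an addition; it follows the spatial pyramid pooling scheme of SPP-net rather than anything spelled out in the original post. Each pyramid level max-pools a map of any size into a fixed n x n grid of bins, so every map yields a feature vector of the same length:

import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    # fmap: (H, W). Each level pools the map into an n x n grid of bins,
    # so the output length is sum(n*n for n in levels), whatever H and W are.
    h, w = fmap.shape
    feats = []
    for n in levels:
        hs = np.linspace(0, h, n + 1, dtype=int)  # bin edges along H
        ws = np.linspace(0, w, n + 1, dtype=int)  # bin edges along W
        for i in range(n):
            for j in range(n):
                feats.append(fmap[hs[i]:hs[i + 1], ws[j]:ws[j + 1]].max())
    return np.array(feats)

print(spatial_pyramid_pool(np.random.rand(13, 9)).shape)   # (21,)
print(spatial_pyramid_pool(np.random.rand(32, 32)).shape)  # (21,)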

CNN Structure Analysis: the Fully Connected Layer

The kernels extract a mass of features of very high dimensionality, while max-pooling layers compress that dimensionality and keep the distinct features. A CNN typically stacks Conv + MaxPooling over and over, becoming deeper, so that it can extract ever more global, higher-level features without the feature dimensionality growing too large; one input picture yields one feature set.

The fully connected layer connects all the features, compressing the multiple maps into a single X-dimensional vector, and hands the output values to a classifier (such as a softmax classifier).
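In code this is nothing more than a flatten followed by an affine map; a minimal NumPy sketch (an addition; the shapes are illustrative only):

import numpy as np

# Suppose the conv stack left us 256 feature maps of size 6x6.
maps = np.random.rand(256, 6, 6)
x = maps.reshape(-1)                  # flatten: one 9216-dim vector

# Fully connected layer y = W x + b, here down to 4096 dimensions,
# followed by the ReLU that the prototxt below attaches as relu7.
W = np.random.randn(4096, x.size) * 0.005
b = np.ones(4096)
y = np.maximum(W @ x + b, 0.0)
print(x.shape, y.shape)               # (9216,) (4096,)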

The configuration text of the AlexNet model is as follows:

layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 4096
    weight_filler {
      type: "gaussian"
      std: 0.005
    }
    bias_filler {
      type: "constant"
      value: 1
    }
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1000
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

The seventh, fully connected layer (fc7) has num_output 4096, meaning it outputs a 4096-dimensional vector by default. In addition, the dropout ratio is set to 0.5: during training each activation is dropped with probability 0.5, and the redundancy this forces into the connections strengthens generalization.
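A minimal sketch of (inverted) dropout at training time (an addition; Caffe's own implementation differs in detail, but the effect is the same):

import numpy as np

def dropout(x, ratio=0.5, train=True):
    # Inverted dropout: zero each unit with probability `ratio` during
    # training and rescale the survivors, so test time needs no change.
    if not train:
        return x
    mask = (np.random.rand(*x.shape) >= ratio) / (1.0 - ratio)
    return x * mask

act = np.random.rand(4096)
print(act.mean(), dropout(act).mean())  # roughly equal in expectation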

CNN Structure Analysis: the Softmax Classifier

Most CNN classification models end up choosing an MLP + softmax classifier, using the MLP (fully connected layers) for feature dimensionality reduction and the softmax function for classification. Is it because of the softmax classifier's unbiasedness and convenience on multi-class problems? Its parameters also update faster during training.

Why must the final classifier be a softmax classifier operating on a vector space, rather than a direct convolution of the form x*x -> 1*1 or x*1 -> 1*1?

The softmax regression model is used: it is the generalization of the logistic regression model to multi-class problems, in which the class label can take more than two values. Softmax regression is useful for problems such as MNIST handwritten-digit classification, where the goal is to distinguish ten different single digits. Softmax regression is supervised.
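For reference, the standard formulation (an addition, following the UFLDL notes linked below): for $k$ classes, softmax regression models the class posterior as

$$ P(y = j \mid x;\, \theta) \;=\; \frac{e^{\theta_j^{\top} x}}{\sum_{l=1}^{k} e^{\theta_l^{\top} x}}, \qquad j = 1, \dots, k. $$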

Softmax classifier: http://ufldl.stanford.edu/wiki/index.php/Softmax…

The computation performed by the softmax layer:
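Numerically, the layer exponentiates and normalizes its inputs; a minimal NumPy sketch (an addition), using the usual max-subtraction for stability:

import numpy as np

def softmax(z):
    # Subtracting the max leaves the result unchanged (softmax is
    # invariant to adding a constant) but avoids overflow in exp.
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))        # [0.659 0.242 0.099]
print(softmax(logits).sum())  # 1.0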

Caffe configuration file:

layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 1000
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "fc8"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "loss"
}

The eighth, fully connected layer (fc8) has num_output 1000: the AlexNet model outputs 1000 classes by default.

CNN Structure Summary

A CNN method convolves and pools the input image over and over, extracting more and more feature maps; a fully connected layer maps them into a feature-vector space of a chosen dimension; and an MLP or a softmax classifier then yields the image's class.

Detection can be viewed as bounding-box selection combined with classification, and the later DarkNet goes further, producing a regression model directly.

The figure below shows a typical DeepID model. [Figure: DeepID model diagram, not preserved]

A picture of a car passes through the CNN's layer-by-layer feature extraction and pooling; the final maps are compressed into an m-dimensional vector, which the softmax function turns into n floating-point values, and a Max() function (taking the largest) then gives the classification result.


卷积神经网络(CNN)入门讲解​

个人公众号:follow_bobo

时隔一个月,我又回来了

好了,好了

(做出停止鼓掌的手势

今天我们的主题是CNN最后最后那…

来自: bobo_jiang的博客

     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/lovelyaiq/article/details/79460243,BlogCommendFromBaidu_34"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/lovelyaiq/article/details/79460243,BlogCommendFromBaidu_34"}'><div class="content"><a href="https://blog.csdn.net/lovelyaiq/article/details/79460243" target="_blank" title="深度学习--softmax函数推导 - tiran_yang(假如你不逼你自己,你永远不知道自己有多优秀。)"><h4 class="text-truncate oneline">深度学习--<em>softmax</em>函数推导 - tiran_yang(假如你不逼你自己,你永远不知道自己有多优秀。)                </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/D/8/A/3_lovelyaiq.jpg" alt="lovelyaiq" class="avatar-pic"><span class="namebox"><span class="name">lovelyaiq</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">03-06</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>531</span></p></div></a><p class="content"><a href="https://blog.csdn.net/lovelyaiq/article/details/79460243" target="_blank" title="深度学习--softmax函数推导 - tiran_yang(假如你不逼你自己,你永远不知道自己有多优秀。)"><span class="desc oneline">  softmax函数在神经网络中使用是比较频繁,我们刚刚学习的时候,只是直到网络的最后一层经过softmax层,得到最后的输出,但不知道它的具体公式推导,因此本篇,以一个简单的网络来说明神经网络的前...</span></a><span class="blog_title_box oneline"><a target="_blank" href="https://blog.csdn.net/lovelyaiq">来自:   <span class="blog_title"> TiRan_Yang</span></a></span></p></div></div><div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/Jaster_wisdom/article/details/78379697,BlogCommendFromBaidu_35"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/Jaster_wisdom/article/details/78379697,BlogCommendFromBaidu_35"}'><div class="content"><a href="https://blog.csdn.net/Jaster_wisdom/article/details/78379697" target="_blank" title="BP神经网络(输出层采用Softmax激活函数、交叉熵损失函数)公式推导 - jaster_wisdom的专栏(待到山花烂漫时,她在丛中笑)"><h4 class="text-truncate oneline">BP神经网络(输出层采用<em>Softmax</em>激活函数、交叉熵损失函数)公式推导 - jaster_wisdom的专栏(待到山花烂漫时,她在丛中笑)             </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/8/D/B/3_jaster_wisdom.jpg" alt="Jaster_wisdom" class="avatar-pic"><span class="namebox"><span class="name">Jaster_wisdom</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">10-28</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>3928</span></p></div></a><p class="content"><a href="https://blog.csdn.net/Jaster_wisdom/article/details/78379697" target="_blank" title="BP神经网络(输出层采用Softmax激活函数、交叉熵损失函数)公式推导 - jaster_wisdom的专栏(待到山花烂漫时,她在丛中笑)"><span class="desc oneline">本篇博客主要介绍经典的三层BP神经网络的基本结构及反向传播算法的公式推导。我们首先假设有四类样本,每个样本有三类特征,并且我们在输出层与隐藏层加上一个偏置单元。这样的话,我们可以得到以下经典的三层BP...</span></a><span class="blog_title_box oneline"><a target="_blank" href="https://blog.csdn.net/Jaster_wisdom">来自:   <span class="blog_title"> Jaster_wisdom的专栏</span></a></span></p></div></div><div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/u013010889/article/details/76343758,BlogCommendFromBaidu_36"}' 
data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/u013010889/article/details/76343758,BlogCommendFromBaidu_36"}'><div class="content"><a href="https://blog.csdn.net/u013010889/article/details/76343758" target="_blank" title="Softmax与SoftmaxWithLoss原理及代码详解 - sundrops的专栏(deep learning)"><h4 class="text-truncate oneline"><em>Softmax</em>与<em>Softmax</em>WithLoss原理及代码详解 - sundrops的专栏(deep learning)               </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/2/9/4/3_u013010889.jpg" alt="u013010889" class="avatar-pic"><span class="namebox"><span class="name">u013010889</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">07-29</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>3646</span></p></div></a><p class="content"><a href="https://blog.csdn.net/u013010889/article/details/76343758" target="_blank" title="Softmax与SoftmaxWithLoss原理及代码详解 - sundrops的专栏(deep learning)"><span class="desc oneline">

一直对softmax的反向传播的caffe代码看不懂,最近在朱神的数学理论支撑下给我详解了它的数学公式,才豁然开朗

SoftmaxWithLoss的由来

SoftmaxWithLoss…

来自: Sundrops的专栏

     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/App_12062011/article/details/54374893,BlogCommendFromBaidu_37"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/App_12062011/article/details/54374893,BlogCommendFromBaidu_37"}'><div class="content"><a href="https://blog.csdn.net/App_12062011/article/details/54374893" target="_blank" title="系统学习深度学习(四) --CNN原理,推导及实现源码分析 - 工作笔记(从科学家手里,接取火种,然后燎原大地。。。)"><h4 class="text-truncate oneline">系统学习深度学习(四) --<em>CNN</em>原理,推导及实现源码<em>分析</em> - 工作笔记(从科学家手里,接取火种,然后燎原大地。。。)                </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/9/0/8/3_app_12062011.jpg" alt="App_12062011" class="avatar-pic"><span class="namebox"><span class="name">App_12062011</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">01-12</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>2.8万</span></p></div></a><p class="content"><a href="https://blog.csdn.net/App_12062011/article/details/54374893" target="_blank" title="系统学习深度学习(四) --CNN原理,推导及实现源码分析 - 工作笔记(从科学家手里,接取火种,然后燎原大地。。。)"><span class="desc oneline">之前看机器学习中,多层感知器部分,提到可以在设计多层感知器时,对NN的结构设计优化,例如结构化设计和权重共享,当时还没了解深度学习,现在看到CNN,原来CNN就是这方面的一个代表。CNN由纽约大学的Y...</span></a><span class="blog_title_box oneline"><a target="_blank" href="https://blog.csdn.net/App_12062011">来自:   <span class="blog_title"> 工作笔记</span></a></span></p></div></div><div class="recommend-item-box recommend-ad-box"><div id="kp_box_87" data-pid="66" data-track-view='{"mod":"kp_popu_87","con":",,"}'><div id="three_ad38" class="mediav_ad" ></div>
     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/u014422406/article/details/52805924,BlogCommendFromBaidu_38"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/u014422406/article/details/52805924,BlogCommendFromBaidu_38"}'><div class="content"><a href="https://blog.csdn.net/u014422406/article/details/52805924" target="_blank" title="sigmoid和softmax总结 - 老哥的专栏"><h4 class="text-truncate oneline">sigmoid和<em>softmax</em>总结 - 老哥的专栏             </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/9/D/2/3_u014422406.jpg" alt="u014422406" class="avatar-pic"><span class="namebox"><span class="name">u014422406</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">10-13</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>9.3万</span></p></div></a><p class="content"><a href="https://blog.csdn.net/u014422406/article/details/52805924" target="_blank" title="sigmoid和softmax总结 - 老哥的专栏"><span class="desc oneline">sigmoid函数(也叫逻辑斯谛函数):

引用wiki百科的定义:  A logistic function or logistic curve is a common “S” shape (si…

来自: 老哥的专栏

     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/u013088062/article/details/50890263,BlogCommendFromQuerySearch_39"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/u013088062/article/details/50890263,BlogCommendFromQuerySearch_39"}'><div class="content"><a href="https://blog.csdn.net/u013088062/article/details/50890263" target="_blank" title="C++卷积神经网络实例:tiny_cnn代码详解(6)——average_pooling_layer层结构类分析 - 陈俊岭的程序员之路(公众号求关注,方便交流)(烦请关注一下下方公众号,方便交流,私信和评论可能没办法及时回复,非常抱歉)"><h4 class="text-truncate oneline">C++卷积神经网络实例:tiny_<em>cnn</em>代码详解(6)——average_<em>pooling</em>_layer层<em>结构</em>类<em>分析</em> - 陈俊岭的程序员之路(公众号求关注,方便交流)(烦请关注一下下方公众号,方便交流,私信和评论可能没办法及时回复,非常抱歉)                </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/5/1/7/3_u013088062.jpg" alt="u013088062" class="avatar-pic"><span class="namebox"><span class="name">u013088062</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">03-14</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>9074</span></p></div></a><p class="content"><a href="https://blog.csdn.net/u013088062/article/details/50890263" target="_blank" title="C++卷积神经网络实例:tiny_cnn代码详解(6)——average_pooling_layer层结构类分析 - 陈俊岭的程序员之路(公众号求关注,方便交流)(烦请关注一下下方公众号,方便交流,私信和评论可能没办法及时回复,非常抱歉)"><span class="desc oneline">  在之前的博文中我们着重分析了convolutional_layer类的代码结构,在这篇博文中分析对应的下采样层average_pooling_layer类:  一、下采样层的作用  下采样层的作用...</span></a><span class="blog_title_box oneline"><a target="_blank" href="https://blog.csdn.net/u013088062">来自:  <span class="blog_title"> 陈俊岭的程序员之路(公众号求关注,方便交流)</span></a></span></p></div></div><div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/a200800170331/article/details/80007421,BlogCommendFromQuerySearch_40"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/a200800170331/article/details/80007421,BlogCommendFromQuerySearch_40"}'><div class="content"><a href="https://blog.csdn.net/a200800170331/article/details/80007421" target="_blank" title="【caffe】计算pooling层和convolution层输出图像大小 - a200800170331的专栏"><h4 class="text-truncate oneline">【caffe】计算<em>pooling</em>层和<em>conv</em>olution层输出图像大小 - a200800170331的专栏               </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/4/A/0/3_a200800170331.jpg" alt="a200800170331" class="avatar-pic"><span class="namebox"><span class="name">a200800170331</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">04-19</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>411</span></p></div></a><p class="content"><a href="https://blog.csdn.net/a200800170331/article/details/80007421" target="_blank" title="【caffe】计算pooling层和convolution层输出图像大小 - a200800170331的专栏"><span class="desc oneline">参考:https://blog.csdn.net/qq_27009517/article/details/79440262pooling层和convolution层的计算方法一样。1. 
没有pad,计...</span></a><span class="blog_title_box oneline"><a target="_blank" href="https://blog.csdn.net/a200800170331">来自: <span class="blog_title"> a200800170331的专栏</span></a></span></p></div></div><div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/u012702874/article/details/43247983,BlogCommendFromQuerySearch_41"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/u012702874/article/details/43247983,BlogCommendFromQuerySearch_41"}'><div class="content"><a href="https://blog.csdn.net/u012702874/article/details/43247983" target="_blank" title="在训练CNN的时候,各层back propagation的递推公式 - u012702874的专栏"><h4 class="text-truncate oneline">在训练<em>CNN</em>的时候,各层back propagation的递推公式 - u012702874的专栏              </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/6/4/6/3_u012702874.jpg" alt="u012702874" class="avatar-pic"><span class="namebox"><span class="name">u012702874</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">01-28</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>4682</span></p></div></a><p class="content"><a href="https://blog.csdn.net/u012702874/article/details/43247983" target="_blank" title="在训练CNN的时候,各层back propagation的递推公式 - u012702874的专栏"><span class="desc oneline">由于下学期毕设要做CNN的东西,最近开始接触CNN。看了一些资料,发现这些资料里面讲的BP+SGD的训练策略都是针对conv layer 的,而像caffe这种ConvNet库,里面包含了很多非卷积层...</span></a><span class="blog_title_box oneline"><a target="_blank" href="https://blog.csdn.net/u012702874">来自:   <span class="blog_title"> u012702874的专栏</span></a></span></p></div></div><div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/jiachen0212/article/details/78548667,BlogCommendFromQuerySearch_42"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/jiachen0212/article/details/78548667,BlogCommendFromQuerySearch_42"}'><div class="content"><a href="https://blog.csdn.net/jiachen0212/article/details/78548667" target="_blank" title="dilated conv带孔卷积、pooling层提高感受野 反卷积 的理解 - jiachen0212的博客"><h4 class="text-truncate oneline">dilated <em>conv</em>带孔卷积、<em>pooling</em>层提高感受野 反卷积 的理解 - jiachen0212的博客               </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/E/7/8/3_jiachen0212.jpg" alt="jiachen0212" class="avatar-pic"><span class="namebox"><span class="name">jiachen0212</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">11-16</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>7285</span></p></div></a><p class="content"><a href="https://blog.csdn.net/jiachen0212/article/details/78548667" target="_blank" title="dilated conv带孔卷积、pooling层提高感受野 反卷积 的理解 - jiachen0212的博客"><span class="desc oneline">首先放链接:https://www.zhihu.com/question/54149221

首先,初次接触这个问题是在做图像分割遇到的。
pooling为什么可以提高感受野?
得这样理解:首先它第一个…

来自: jiachen0212的博客

<div class="recommend-item-box recommend-ad-box"><div id="kp_box_558" data-pid="67" data-track-view='{"mod":"kp_popu_558","con":",,"}'><script
async="async"
charset="utf-8"
src="https://shared.ydstatic.com/js/yatdk/3.0.1/stream.js"
data-id="8935aa488dd58452b9e5ee3b44f1212f"
data-udid="24C56021-A1CB-4A07-993A-2D2A7F00FDCD"
data-div-Style="width:900px;height:76px;"
data-img-Style="float:left;margin-right:15px;width:90px;height:60px;"
data-tit-Style="font-size:16px;color:#f13d3d;"

data-des-Style=“font-size:12px;color:#333;”

广告
     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/weiyongle1996/article/details/78088654,BlogCommendFromQuerySearch_43"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/weiyongle1996/article/details/78088654,BlogCommendFromQuerySearch_43"}'><div class="content"><a href="https://blog.csdn.net/weiyongle1996/article/details/78088654" target="_blank" title="CNN卷积神经网络层级结构 - 一路前行(生命不息,奋斗不止)"><h4 class="text-truncate oneline"><em>CNN</em>卷积神经网络层级<em>结构</em> - 一路前行(生命不息,奋斗不止)             </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/F/4/5/3_weiyongle1996.jpg" alt="weiyongle1996" class="avatar-pic"><span class="namebox"><span class="name">weiyongle1996</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">09-25</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>1430</span></p></div></a><p class="content"><a href="https://blog.csdn.net/weiyongle1996/article/details/78088654" target="_blank" title="CNN卷积神经网络层级结构 - 一路前行(生命不息,奋斗不止)"><span class="desc oneline">一、卷积神经网络层级结构

卷积神经网络层次结构包括:
数据输入层/ Input layer
卷积计算层/ CONV layer
激励层 / ReLU layer
池化层 / Pooling la…

来自: 一路前行

     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/u010555688/article/details/26353333,BlogCommendFromBaidu_44"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/u010555688/article/details/26353333,BlogCommendFromBaidu_44"}'><div class="content"><a href="https://blog.csdn.net/u010555688/article/details/26353333" target="_blank" title="Deep Learning模型之:CNN卷积神经网络(三)CNN常见问题总结 - u010555688的专栏"><h4 class="text-truncate oneline">Deep Learning模型之:<em>CNN</em>卷积神经网络(三)<em>CNN</em>常见问题总结 - u010555688的专栏                </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/4/B/D/3_u010555688.jpg" alt="u010555688" class="avatar-pic"><span class="namebox"><span class="name">u010555688</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">05-20</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>1.6万</span></p></div></a><p class="content"><a href="https://blog.csdn.net/u010555688/article/details/26353333" target="_blank" title="Deep Learning模型之:CNN卷积神经网络(三)CNN常见问题总结 - u010555688的专栏"><span class="desc oneline">遇到的问题

梯度消失

我在实现过程中犯的第一个错误是没有循序渐进。仗着自己写过一些神经网络的代码以为手到擒来,直接按照LeNet-5的结构写,过于复杂的结构给测试和调试都带来了很大…

来自: u010555688的专栏

     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/zlsjsj/article/details/81209497,BlogCommendFromBaidu_45"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/zlsjsj/article/details/81209497,BlogCommendFromBaidu_45"}'><div class="content"><a href="https://blog.csdn.net/zlsjsj/article/details/81209497" target="_blank" title="CNN反向传播中卷积层和池化层是如何反向传播的 - zlsjsj的博客"><h4 class="text-truncate oneline"><em>CNN</em>反向传播中卷积层和池化层是如何反向传播的 - zlsjsj的博客             </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/5/B/9/3_zlsjsj.jpg" alt="zlsjsj" class="avatar-pic"><span class="namebox"><span class="name">zlsjsj</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">07-25</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>95</span></p></div></a><p class="content"><a href="https://blog.csdn.net/zlsjsj/article/details/81209497" target="_blank" title="CNN反向传播中卷积层和池化层是如何反向传播的 - zlsjsj的博客"><span class="desc oneline"></span></a><span class="blog_title_box oneline"><a target="_blank" href="https://blog.csdn.net/zlsjsj">来自:  <span class="blog_title"> zlsjsj的博客</span></a></span></p></div></div><div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/foreyang00/article/details/72868567,BlogCommendFromBaidu_46"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/foreyang00/article/details/72868567,BlogCommendFromBaidu_46"}'><div class="content"><a href="https://blog.csdn.net/foreyang00/article/details/72868567" target="_blank" title="CNN中各层图像大小的计算 - foreyang00(bless b !)"><h4 class="text-truncate oneline"><em>CNN</em>中各层图像大小的计算 - foreyang00(bless b !)             </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/1/D/E/3_foreyang00.jpg" alt="foreyang00" class="avatar-pic"><span class="namebox"><span class="name">foreyang00</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">06-05</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>2653</span></p></div></a><p class="content"><a href="https://blog.csdn.net/foreyang00/article/details/72868567" target="_blank" title="CNN中各层图像大小的计算 - foreyang00(bless b !)"><span class="desc oneline">转自:http://blog.csdn.net/gavin__zhou/article/details/50609325

CNN刚刚入门,一直不是很明白通过卷积或者pooling之后图像的大…

来自: foreyang00

     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/yepeng_xinxian/article/details/82380707,BlogCommendFromBaidu_47"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/yepeng_xinxian/article/details/82380707,BlogCommendFromBaidu_47"}'><div class="content"><a href="https://blog.csdn.net/yepeng_xinxian/article/details/82380707" target="_blank" title="深度学习中卷积层和pooling层的输出计算公式 - yepeng_xinxian的博客"><h4 class="text-truncate oneline">深度学习中卷积层和<em>pooling</em>层的输出计算公式 - yepeng_xinxian的博客               </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/C/6/0/3_yepeng_xinxian.jpg" alt="yepeng_xinxian" class="avatar-pic"><span class="namebox"><span class="name">yepeng_xinxian</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">09-04</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>327</span></p></div></a><p class="content"><a href="https://blog.csdn.net/yepeng_xinxian/article/details/82380707" target="_blank" title="深度学习中卷积层和pooling层的输出计算公式 - yepeng_xinxian的博客"><span class="desc oneline">1.卷积层的输出计算公式

class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dil…

来自: yepeng_xinxian的博客

<div class="recommend-item-box recommend-ad-box"><div id="kp_box_558" data-pid="68" data-track-view='{"mod":"kp_popu_558","con":",,"}'><script
async="async"
charset="utf-8"
src="https://shared.ydstatic.com/js/yatdk/3.0.1/stream.js"
data-id="8935aa488dd58452b9e5ee3b44f1212f"
data-udid="24C56021-A1CB-4A07-993A-2D2A7F00FDCD"
data-div-Style="width:900px;height:76px;"
data-img-Style="float:left;margin-right:15px;width:90px;height:60px;"
data-tit-Style="font-size:16px;color:#f13d3d;"

data-des-Style=“font-size:12px;color:#333;”

广告
     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/samylee/article/details/73555701,BlogCommendFromBaidu_48"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/samylee/article/details/73555701,BlogCommendFromBaidu_48"}'><div class="content"><a href="https://blog.csdn.net/samylee/article/details/73555701" target="_blank" title="神经网络测试之softmax输出 - samylee的博客"><h4 class="text-truncate oneline">神经网络测试之<em>softmax</em>输出 - samylee的博客              </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/2/E/E/3_samylee.jpg" alt="samylee" class="avatar-pic"><span class="namebox"><span class="name">samylee</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">06-21</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>743</span></p></div></a><p class="content"><a href="https://blog.csdn.net/samylee/article/details/73555701" target="_blank" title="神经网络测试之softmax输出 - samylee的博客"><span class="desc oneline">因博主非计算机专业,代码大神可忽略此文。

代码经过博主测试,准确无误!…

来自: samylee的博客

     <div class="recommend-item-box recommend-box-ident type_blog clearfix" data-track-view='{"mod":"popu_387","con":",https://blog.csdn.net/qq_26222859/article/details/73225242,BlogCommendFromBaidu_49"}' data-track-click='{"mod":"popu_387","con":",https://blog.csdn.net/qq_26222859/article/details/73225242,BlogCommendFromBaidu_49"}'><div class="content"><a href="https://blog.csdn.net/qq_26222859/article/details/73225242" target="_blank" title="Softmax 函数及其作用(含推导) - qq_26222859的博客"><h4 class="text-truncate oneline"><em>Softmax</em> 函数及其作用(含推导) - qq_26222859的博客                </h4><div class="info-box d-flex align-content-center"><!-- <p class="avatar"><img src="https://avatar.csdn.net/7/D/6/3_qq_26222859.jpg" alt="qq_26222859" class="avatar-pic"><span class="namebox"><span class="name">qq_26222859</span><span class="triangle"></span></span></p> --><p class="date-and-readNum"><span class="date hover-show">06-14</span><span class="read-num hover-hide"><svg class="icon csdnc-yuedushu" aria-hidden="true"><use xlink:href="#csdnc-m-passwords-visible"></use></svg>872</span></p></div></a><p class="content"><a href="https://blog.csdn.net/qq_26222859/article/details/73225242" target="_blank" title="Softmax 函数及其作用(含推导) - qq_26222859的博客"><span class="desc oneline">Softmax函数的定义及作用

Softmax是一种形如下式的函数:

P(i)=exp(θTix)∑Kk=1exp(θTkx)
其中θi和x是列向量,θTix可能被换成函数关于x…

来自: qq_26222859的博客

        <div class="recommend-loading-box"><img src='https://csdnimg.cn/release/phoenix/images/feedLoading.gif'></div><div class="recommend-end-box"><p class="text-center">没有更多推荐了,<a href="https://blog.csdn.net/" class="c-blue c-blue-hover c-blue-focus">返回首页</a></p></div></div>
</main><aside><div id="asideProfile" class="aside-box">
<!-- <h3 class="aside-title">个人资料</h3> -->
<div class="profile-intro d-flex"><div class="avatar-box d-flex justify-content-center flex-column"><a href="https://blog.csdn.net/wishchin"><img src="https://avatar.csdn.net/9/9/E/3_wishchin.jpg" class="avatar_pic"></a></div><div class="user-info d-flex justify-content-center flex-column"><p class="name csdn-tracking-statistics tracking-click" data-mod="popu_379"><a href="https://blog.csdn.net/wishchin" target="_blank" class="" id="uid">wishchin</a></p></div><div class="opt-box d-flex justify-content-center flex-column"><span  class="csdn-tracking-statistics tracking-click" data-mod="popu_379"><a class="btn btn-sm btn-red-hollow attention" id="btnAttent">关注</a></span></div></div>
<div class="data-info d-flex item-tiling"><dl class="text-center" title="344"><dt><a href="https://blog.csdn.net/wishchin?t=1">原创</a></dt><dd><a href="https://blog.csdn.net/wishchin?t=1"><span class="count">344</span></a></dd></dl><dl class="text-center" id="fanBox" title="632"><dt>粉丝</dt><dd><span class="count" id="fan">632</span></dd></dl><dl class="text-center" title="149"><dt>喜欢</dt><dd><span class="count">149</span></dd></dl><dl class="text-center" title="276"><dt>评论</dt><dd><span class="count">276</span></dd></dl>
</div>
<div class="grade-box clearfix"><dl><dt>等级:</dt><dd><a href="https://blog.csdn.net/home/help.html#level" title="7级,点击查看等级说明" target="_blank"><svg class="icon icon-level" aria-hidden="true"><use xlink:href="#csdnc-bloglevel-7"></use></svg></a></dd></dl><dl><dt>访问:</dt><dd title="1504101">150万+            </dd></dl><dl><dt>积分:</dt><dd title="18743">1万+            </dd></dl><dl title="711"><dt>排名:</dt><dd>711</dd></dl>
</div><div class="badge-box d-flex"><span>勋章:</span><div class="icon-badge" title="持之以恒"><div class="mouse-box"><svg class="icon" aria-hidden="true"><use xlink:href="#csdnc-m-lasting"></use></svg><div class="icon-arrow"></div></div><div class="grade-detail-box"><div class="pos-box"><div class="left-box d-flex justify-content-center align-items-center flex-column"><svg class="icon" aria-hidden="true"><use xlink:href="#csdnc-m-lasting"></use></svg><p>持之以恒</p></div><div class="right-box d-flex justify-content-center align-items-center">授予每个自然月内发布4篇或4篇以上原创或翻译IT博文的用户。不积跬步无以至千里,不积小流无以成江海,程序人生的精彩需要坚持不懈地积累!</div></div></div></div><script>(function ($) {setTimeout(function(){$('div.icon-badge.show-moment').removeClass('show-moment');}, 5000);})(window.jQuery)</script>
</div>
</div><div class="csdn-tracking-statistics mb8 box-shadow" data-pid="blog" data-mod="popu_4" style="height:250px;">
<div class="aside-content text-center" id="cpro_u2734133"><div id="kp_box_76" data-pid="56" data-track-view='{"mod":"kp_popu_76","con":",,"}'><script type="text/javascript" src="//rabc1.iteye.com/source/openjs/api/mbxf4.js?b=wocoltly"></script></div><script>$(function(){csdn.track.viewCheck($("#kp_box_76"));});</script>    </div>

最新文章

  • C++:error C2558 没有可用的复制构造函数或复制构造函数声明为“explicit”
  • C++:int 与string相互转换
  • OpenCV:判定曲线为弧线的简单方法
  • OpenCV:简单计算曲线弧度-弓形弧度
  • OpenCV:findContours的曲线断开-离散点问题

个人分类

  • 心理学/职业 38篇
  • 场景处理/RgbD累积 104篇
  • C++编程 66篇
  • MFC编程 26篇
  • 目标追踪 9篇
  • 人脸识别 14篇
  • MLandPy 46篇
  • Matlab编程 17篇
  • BigDataMini 19篇
  • PythonLG 34篇
  • 图像检索 34篇
  • C+/代码迁移 32篇
  • BOOST/FlANN/Eigen/C+0X 25篇
  • 艺术/图像评价 35篇
  • QT./Linux 57篇
  • AI/ES 65篇
  • 开源标准 10篇
  • 毕业论文 5篇
  • 数学/工具 33篇
  • 工程/设计师 29篇
  • CUDA 23篇
  • 图像特征 50篇
  • 计算机视觉 44篇
  • 语言(学) 7篇
  • Django 3篇
  • PS 4篇
  • Linux开发 10篇
  • STL/算法 44篇
  • AI:A Modern Approach 4thEdtion 4篇
  • IAAS 9篇
  • Spark 6篇
  • PaaS 9篇
  • ANN/DNN/纤维丛 126篇
  • 聚类分析 11篇
  • 判别分析 30篇
  • AR/VR_3D 41篇
  • 人形机器人 57篇
  • 三维重建/SLAM 79篇
  • ML日报收集 1篇
  • OpenCV 63篇
  • 生成式模型 12篇
  • ReinforceLearning 30篇
  • ROS 25篇
  • 时序/变长分析 35篇
  • 资源整理 44篇
  • 飞行机器人 3篇
  • GazeTracker 4篇
  • Humanoid 27篇
  • TuringMachine 26篇
  • AI/ML 42篇
  • 推荐/Rank系统 22篇
  • StyleAI 37篇
  • 总体哲学AIPRIPCV 23篇

展开

归档

  • 2018年10月 7篇
  • 2018年9月 6篇
  • 2018年8月 2篇
  • 2018年7月 17篇
  • 2018年6月 11篇
  • 2018年5月 5篇
  • 2018年4月 8篇
  • 2018年3月 23篇
  • 2018年2月 5篇
  • 2018年1月 8篇
  • 2017年12月 13篇
  • 2017年11月 18篇
  • 2017年10月 7篇
  • 2017年9月 9篇
  • 2017年8月 9篇
  • 2017年7月 15篇
  • 2017年6月 11篇
  • 2017年5月 8篇
  • 2017年4月 4篇
  • 2017年3月 18篇
  • 2017年2月 4篇
  • 2017年1月 1篇
  • 2016年12月 2篇
  • 2016年11月 3篇
  • 2016年10月 3篇
  • 2016年9月 2篇
  • 2016年8月 8篇
  • 2016年7月 11篇
  • 2016年6月 13篇
  • 2016年5月 40篇
  • 2016年4月 4篇
  • 2016年3月 4篇
  • 2016年2月 3篇
  • 2016年1月 3篇
  • 2015年12月 21篇
  • 2015年11月 5篇
  • 2015年10月 12篇
  • 2015年9月 13篇
  • 2015年8月 16篇
  • 2015年7月 14篇
  • 2015年6月 8篇
  • 2015年5月 4篇
  • 2015年4月 24篇
  • 2015年3月 8篇
  • 2015年1月 10篇
  • 2014年12月 23篇
  • 2014年11月 3篇
  • 2014年10月 7篇
  • 2014年9月 1篇
  • 2014年8月 16篇
  • 2014年7月 21篇
  • 2014年6月 11篇
  • 2014年5月 3篇
  • 2014年4月 5篇
  • 2014年3月 10篇
  • 2014年2月 21篇
  • 2014年1月 11篇
  • 2013年12月 17篇
  • 2013年11月 38篇
  • 2013年10月 28篇
  • 2013年9月 28篇
  • 2013年8月 12篇
  • 2013年7月 24篇

展开

   <div class="aside-box"><div id="kp_box_77" data-pid="57" data-track-view='{"mod":"kp_popu_77","con":",,"}'><script type="text/javascript" src="//rabc1.iteye.com/common/web/site/9i6gu.js?av=neunkwb"></script></div><script>$(function(){csdn.track.viewCheck($("#kp_box_77"));});</script>     </div><div class="aside-box"><div class="persion_article"></div></div>
</div>
  • 点赞 取消点赞

    0

  • 评论
  • 目录
  • 收藏
  • 手机看
  • 上一篇
  • 下一篇
  • 更多
           </a><ul class="widescreen-more-box"><li class="widescreen-more"><a class="btn-comments low-height hover-box" href="https://blog.csdn.net/wishchin/article/details/75008329" title="AI:IPPR的数学表示-CNN结构/参数分析"><svg class="icon hover-hide" aria-hidden="true"><use xlink:href="#csdnc-chevronleft"></use></svg><span class="hover-show text text3">上一篇</span></a></li><li class="widescreen-more"><a class="btn-comments hover-box low-height" href="https://blog.csdn.net/wishchin/article/details/75330755" title="三维重建面试13X:一些算法试题-今日头条AI-Lab"><svg class="icon hover-hide" aria-hidden="true"><use xlink:href="#csdnc-chevronright"></use></svg><span class="hover-show text text3">下一篇</span></a></li></ul></li>
    </ul>
    
<link rel="stylesheet" href="https://csdnimg.cn/release/blog_editor_html/release1.3.1/ckeditor/plugins/chart/chart.css" />
<script type="text/javascript" src="https://csdnimg.cn/release/blog_editor_html/release1.3.1/ckeditor/plugins/chart/lib/chart.min.js"></script>
<script type="text/javascript" src="https://csdnimg.cn/release/blog_editor_html/release1.3.1/ckeditor/plugins/chart/widget2chart.js"></script>
<link rel="stylesheet" href="https://csdnimg.cn/release/blog_editor_html/release1.3.1/ckeditor/plugins/codesnippet/lib/highlight/styles/atelier-sulphurpool-light.css">
<script type="text/javascript" src="https://csdnimg.cn/release/phoenix/production/pc_wap_common-9e177e0136.js" /></script><script type="text/javascript">
$(function(){var allEscRegex = /&(lt|gt|amp|quot|nbsp|shy|#\d{1,5});/g,namedEntities = {lt: '<',gt: '>',amp: '&',quot: '"',nbsp: '\u00a0',shy: '\u00ad'}var allEscDecode = function( match, code ) {return namedEntities[ code ];};htmlDecodeAttr = function( text ) {return text.replace( allEscRegex, allEscDecode );}hljs.initHighlightingOnLoad();hljs.initCopyButtonOnLoad();hljs.initLineNumbersOnLoad();if($('pre .language-plain').length>0){$('pre .language-plain').each(function(i,e){var highlightRe = hljs.highlightAuto(htmlDecodeAttr(e.innerHTML))e.innerHTML = highlightRe.value;e.className = 'language-'+highlightRe.language;});}
})
</script>

11111111111相关推荐

  1. 30 个 php 操作 redis 常用方法代码例子

    这篇文章主要介绍了 30 个 php 操作 redis 常用方法代码例子 , 本文其实不止 30 个方法 , 可以操作 string 类 型. list 类型和 set 类型的数据 , 需要的朋友可以 ...

  2. jsp ul设置滚动条_jquery实现Li滚动时滚动条自动添加样式的方法

    本文实例讲述了jquery实现Li滚动时滚动条自动添加样式的方法.分享给大家供大家参考.具体如下: 这里使用jquery实现当拖动滚动条的时候,Li滚动列表中的内容会自动随滚动条变化而下移,并自动添加 ...

  3. dom刷新局部元素_JavaScript中DOM和BOM基础

    BOM部分基础内容 BOM(Broswer Object Model)浏览器对象模型 ,主要用来获取或设置浏览器的属性.行为 ; 使JavaScript可以和浏览器进行交互 ; window 是 BO ...

  4. django admin组件

    admin实例 from django.contrib import admin from app01 import models from django.utils.safestring impor ...

  5. Codeforces Round #630 (Div. 2) A~D【思维,数论,字符串,位运算】

    A. Exercising Walk 水题一道:在指定空间内你一定要向各个方向走a,b,c,d步问你能否在规定空间内走完这题的坑点样例都给出来了qwq #include <iostream> ...

  6. 服务器响应的生成:HTTP响应报头——HttpServletResponse接口的应用

    一,响应报头 响应报头允许服务器传递不能放在状态行中的附加响应信息,以及关于服务器的信息和对Request-URI所标识的资源进行下一步访问的信息 常用的响应报头 Location Content-T ...

  7. Java基础篇:面向对象

    文章目录 学习面向对象内容的三条主线 面向过程(POP)与面向对象(OOP) 面向对象的思想概述 Java类和对象 创建Java自定义类 对象的创建和使用 对象的创建和使用:匿名对象 类的成员之一:属 ...

  8. xml笔记整理_基础概括

    为什么80%的码农都做不了架构师?>>>    1.表单提交方式     * 使用submit提交         <form>             .....   ...

  9. python之路之面向对象3

    一.知识点拾遗 1.多继承的易错点 二.设计模式 1.设计模式介绍 Gof设计模式 大话设计模式 2.单例模式 当所有实例中封装的数据相同时,使用单例模式 静态方法+静态字段 单例就是只有一个实例 a ...

最新文章

  1. Maven就是这么简单
  2. php和python对比-python学习笔记一和PHP的一些对比
  3. 精密空调机组及零部件相关专业术语
  4. [云炬创业基础笔记]第二章创业者测试1
  5. 0917变量类型注意点
  6. Hive列合并与元素搜集
  7. Cookie编码解码
  8. C# DIRECTX INPUT 模拟 (鼠标玩FBA街机)
  9. 冲突域和广播域的理解
  10. c语言字母排列组合的实现,c语言中一种典型的排列组合算法
  11. 09-01 面向对象编程
  12. 2007noip提高组初赛总结
  13. 我认为还是得学会自己焊接贴片元件,有专门的贴片元件焊接练习板,虽然有SMT,就像无人机要练习飞自稳一样。我们不能什么都靠SMT
  14. (二)安全计算-Threat Modelling威胁建模
  15. 数据版“吐槽大会”: 国产综艺节目年终盘点
  16. get_transform is not allowed to be called from a MonoBehaviour constructor (or instance field initia
  17. docker删除镜像时报错解决办法
  18. Linux 系统Apache配置SSL证书
  19. 【渝粤教育】国家开放大学2018年秋季 0359-21T会计学原理 参考试题
  20. 多维时序 | MATLAB实现基于VMD-SSA-LSSVM、SSA-LSSVM、VMD-LSSVM、LSSVM的多变量时间序列预测对比

热门文章

  1. Gartner:超级应用成为战略技术趋势,小程序是否能够脱引而出?
  2. Tokyo Cabinet及Tokyo Tyrant tcb tch比较分析
  3. UVa 12627 Erratic Expansion - 分治
  4. 计算机网络中m的含义,宽带中的“M(兆)”是什么意思?
  5. Eclipse安装插件的方法
  6. 项目:招聘网站信息(获取数据+数据分析+数据可视化)
  7. android动态图制作,Android 教程:如何在手机上制作高质量的 GIF 图片
  8. 腾讯汤道生:产业互联网真正目的是降本增效
  9. UIColor延伸:判断两个颜色是否相等
  10. Android 时间转换