MNIST handwritten digit recognition is the "Hello World" of AI. By dissecting this program, we can deepen our understanding of deep learning.

The code in this post comes from the blog post "《PaddlePaddle从入门到炼丹》四——卷积神经网络" (PaddlePaddle from Beginner to Alchemist, Part 4: Convolutional Neural Networks), linked below.

https://blog.csdn.net/qq_33200967/article/details/83506694

That post uses both deep feed-forward neural networks and convolutional neural networks. The code runs on Baidu AI Studio with GPU execution. This article records the digit recognition accuracy of neural networks at different depths, as a way to understand their behavior.

I. Deep Neural Networks

In this part, we vary the neural network between one, two, three, and four layers; a network with three or more layers is generally called a deep neural network.

Modify the code to select the multilayer perceptron as the classifier:

# Get the classifier
model = multilayer_perceptron(image)

1. Single-layer neural network

The network structure is input layer -> output layer. The modified code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation (disabled here)
    #hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation (disabled here)
    #hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=input, size=10, act='softmax')
    return fc

Below are the accuracy figures on the training and test data:

Pass:0, Batch:0, Cost:3.12611, Accuracy:0.14844
Pass:0, Batch:100, Cost:0.57369, Accuracy:0.84375
Pass:0, Batch:200, Cost:0.34888, Accuracy:0.92188
Pass:0, Batch:300, Cost:0.35908, Accuracy:0.89844
Pass:0, Batch:400, Cost:0.46956, Accuracy:0.85938
Test:0, Cost:0.35950, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.31775, Accuracy:0.93750
Pass:1, Batch:100, Cost:0.27652, Accuracy:0.92188
Pass:1, Batch:200, Cost:0.27065, Accuracy:0.92969
Pass:1, Batch:300, Cost:0.28560, Accuracy:0.89844
Pass:1, Batch:400, Cost:0.41429, Accuracy:0.86719
Test:1, Cost:0.31951, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.26233, Accuracy:0.94531
Pass:2, Batch:100, Cost:0.24122, Accuracy:0.92969
Pass:2, Batch:200, Cost:0.25647, Accuracy:0.92188
Pass:2, Batch:300, Cost:0.27161, Accuracy:0.91406
Pass:2, Batch:400, Cost:0.38802, Accuracy:0.85938
Test:2, Cost:0.30584, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.23583, Accuracy:0.93750
Pass:3, Batch:100, Cost:0.22913, Accuracy:0.92969
Pass:3, Batch:200, Cost:0.24992, Accuracy:0.91406
Pass:3, Batch:300, Cost:0.26481, Accuracy:0.91406
Pass:3, Batch:400, Cost:0.36972, Accuracy:0.86719
Test:3, Cost:0.29924, Accuracy:0.93750
Pass:4, Batch:0, Cost:0.22000, Accuracy:0.93750
Pass:4, Batch:100, Cost:0.22292, Accuracy:0.93750
Pass:4, Batch:200, Cost:0.24542, Accuracy:0.91406
Pass:4, Batch:300, Cost:0.26016, Accuracy:0.91406
Pass:4, Batch:400, Cost:0.35566, Accuracy:0.85938
Test:4, Cost:0.29542, Accuracy:0.93750

Surprisingly, even a single-layer network reaches about 93% accuracy.
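The single-layer model above is nothing more than softmax regression: one linear map from the 784 input pixels to 10 class scores. A minimal NumPy sketch (my own illustration, not the PaddlePaddle code) shows how small this model really is:

```python
import numpy as np

# Softmax over class scores, one row per sample.
def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# The whole "single-layer network": y = softmax(x @ W + b).
def single_layer_forward(x, W, b):
    return softmax(x @ W + b)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(784, 10))
b = np.zeros(10)
x = rng.random((16, 784))            # a batch of 16 flattened 28x28 images
probs = single_layer_forward(x, W, b)
print(probs.shape)                    # (16, 10)
print(W.size + b.size)                # 7850 trainable parameters in total
```

With only 7,850 parameters and no nonlinearity, 93% on MNIST is about what linear classifiers are known to achieve on this dataset.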

2. Two-layer neural network

The network structure is input layer -> hidden layer -> output layer.

The code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation (disabled here)
    #hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=hidden1, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:3.01273, Accuracy:0.13281
Pass:0, Batch:100, Cost:0.47009, Accuracy:0.85156
Pass:0, Batch:200, Cost:0.25585, Accuracy:0.92969
Pass:0, Batch:300, Cost:0.28107, Accuracy:0.92188
Pass:0, Batch:400, Cost:0.42777, Accuracy:0.86719
Test:0, Cost:0.26124, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.16402, Accuracy:0.96094
Pass:1, Batch:100, Cost:0.19026, Accuracy:0.93750
Pass:1, Batch:200, Cost:0.18443, Accuracy:0.94531
Pass:1, Batch:300, Cost:0.20063, Accuracy:0.96094
Pass:1, Batch:400, Cost:0.29557, Accuracy:0.90625
Test:1, Cost:0.19170, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.13216, Accuracy:0.97656
Pass:2, Batch:100, Cost:0.12720, Accuracy:0.96875
Pass:2, Batch:200, Cost:0.16064, Accuracy:0.96094
Pass:2, Batch:300, Cost:0.14157, Accuracy:0.97656
Pass:2, Batch:400, Cost:0.22292, Accuracy:0.92969
Test:2, Cost:0.16187, Accuracy:1.00000
Pass:3, Batch:0, Cost:0.10297, Accuracy:0.96875
Pass:3, Batch:100, Cost:0.10456, Accuracy:0.95312
Pass:3, Batch:200, Cost:0.15482, Accuracy:0.95312
Pass:3, Batch:300, Cost:0.10015, Accuracy:0.99219
Pass:3, Batch:400, Cost:0.18723, Accuracy:0.92969
Test:3, Cost:0.14261, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.08154, Accuracy:0.98438
Pass:4, Batch:100, Cost:0.09554, Accuracy:0.94531
Pass:4, Batch:200, Cost:0.14740, Accuracy:0.93750
Pass:4, Batch:300, Cost:0.08526, Accuracy:0.99219
Pass:4, Batch:400, Cost:0.15813, Accuracy:0.95312
Test:4, Cost:0.13024, Accuracy:1.00000

After three passes of training, the reported test accuracy reaches 100%, and the training accuracy also reaches about 99%.
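To see what the hidden layer adds, it helps to count trainable parameters: a fully connected layer from n_in to n_out units has n_in*n_out weights plus n_out biases. A quick calculation for the layer sizes used in the code above (the helper name is my own):

```python
# Parameters of one fully connected layer: weight matrix plus bias vector.
def fc_params(n_in, n_out):
    return n_in * n_out + n_out

single = fc_params(784, 10)                        # single-layer net
two_layer = fc_params(784, 100) + fc_params(100, 10)  # 784 -> 100 -> 10
print(single)      # 7850
print(two_layer)   # 79510
```

The hidden layer multiplies the parameter count by roughly ten and, more importantly, introduces the ReLU nonlinearity, which is what lifts the accuracy well past the linear model.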

3. Three-layer neural network

The network structure is input layer -> hidden layer -> hidden layer -> output layer.

The code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation
    hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=hidden2, size=10, act='softmax')
    return fc

Accuracy on the training and test data:

Pass:0, Batch:0, Cost:2.39492, Accuracy:0.11719
Pass:0, Batch:100, Cost:0.38125, Accuracy:0.85156
Pass:0, Batch:200, Cost:0.23671, Accuracy:0.93750
Pass:0, Batch:300, Cost:0.30749, Accuracy:0.91406
Pass:0, Batch:400, Cost:0.40188, Accuracy:0.87500
Test:0, Cost:0.21345, Accuracy:1.00000
Pass:1, Batch:0, Cost:0.12216, Accuracy:0.98438
Pass:1, Batch:100, Cost:0.16529, Accuracy:0.94531
Pass:1, Batch:200, Cost:0.17492, Accuracy:0.95312
Pass:1, Batch:300, Cost:0.12466, Accuracy:0.96875
Pass:1, Batch:400, Cost:0.31326, Accuracy:0.90625
Test:1, Cost:0.15984, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.08932, Accuracy:0.98438
Pass:2, Batch:100, Cost:0.10728, Accuracy:0.96094
Pass:2, Batch:200, Cost:0.13919, Accuracy:0.97656
Pass:2, Batch:300, Cost:0.07744, Accuracy:0.98438
Pass:2, Batch:400, Cost:0.24006, Accuracy:0.94531
Test:2, Cost:0.13407, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.06845, Accuracy:0.98438
Pass:3, Batch:100, Cost:0.09714, Accuracy:0.95312
Pass:3, Batch:200, Cost:0.11451, Accuracy:0.96875
Pass:3, Batch:300, Cost:0.06234, Accuracy:0.98438
Pass:3, Batch:400, Cost:0.16804, Accuracy:0.96094
Test:3, Cost:0.11795, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.05074, Accuracy:0.99219
Pass:4, Batch:100, Cost:0.09888, Accuracy:0.96094
Pass:4, Batch:200, Cost:0.09145, Accuracy:0.96875
Pass:4, Batch:300, Cost:0.04734, Accuracy:0.98438
Pass:4, Batch:400, Cost:0.12088, Accuracy:0.96875
Test:4, Cost:0.10956, Accuracy:1.00000

Compared with the two-layer network, the three-layer network reaches high accuracy faster: after the first pass, the reported test accuracy already hits 100%. However, the test accuracy later drops and then returns to 100%, and I do not understand why.
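One hedged observation about those fluctuations: every "Test" accuracy printed in this post is either 0.93750 or 1.00000, which are exactly 15/16 and 16/16. That pattern suggests the printed test figure covers only a single 16-sample batch rather than the full test set, in which case one misclassified digit moves the number by 6.25 points and the dips are ordinary sampling noise:

```python
# The only two "Test" accuracies ever printed are 15/16 and 16/16,
# consistent with a 16-sample evaluation batch.
print(15 / 16)   # 0.9375
print(1 / 16)    # 0.0625 -- the step size of a 16-sample accuracy

# If the true accuracy were about 97%, a 16-sample batch would score a
# "perfect" 16/16 only ~61% of the time, so occasional 0.9375 readings
# are expected even from a good model.
p_perfect = 0.97 ** 16
print(round(p_perfect, 2))   # 0.61
```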

4. Four-layer neural network

The four-layer network's structure is input layer -> hidden layer -> hidden layer -> hidden layer -> output layer.

The code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation
    hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Third fully connected layer, ReLU activation
    # (the original code passed hidden1 here, which would skip hidden2;
    # hidden2 matches the structure described above)
    hidden3 = fluid.layers.fc(input=hidden2, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=hidden3, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:2.47239, Accuracy:0.11719
Pass:0, Batch:100, Cost:0.45294, Accuracy:0.85156
Pass:0, Batch:200, Cost:0.26287, Accuracy:0.92188
Pass:0, Batch:300, Cost:0.29945, Accuracy:0.89062
Pass:0, Batch:400, Cost:0.45551, Accuracy:0.85938
Test:0, Cost:0.23784, Accuracy:1.00000
Pass:1, Batch:0, Cost:0.16192, Accuracy:0.96094
Pass:1, Batch:100, Cost:0.19751, Accuracy:0.92969
Pass:1, Batch:200, Cost:0.16019, Accuracy:0.95312
Pass:1, Batch:300, Cost:0.16501, Accuracy:0.95312
Pass:1, Batch:400, Cost:0.29107, Accuracy:0.91406
Test:1, Cost:0.15401, Accuracy:1.00000
Pass:2, Batch:0, Cost:0.10620, Accuracy:0.97656
Pass:2, Batch:100, Cost:0.10332, Accuracy:0.95312
Pass:2, Batch:200, Cost:0.14469, Accuracy:0.96094
Pass:2, Batch:300, Cost:0.11048, Accuracy:0.96875
Pass:2, Batch:400, Cost:0.22040, Accuracy:0.94531
Test:2, Cost:0.12461, Accuracy:1.00000
Pass:3, Batch:0, Cost:0.07755, Accuracy:0.97656
Pass:3, Batch:100, Cost:0.07912, Accuracy:0.97656
Pass:3, Batch:200, Cost:0.12591, Accuracy:0.96094
Pass:3, Batch:300, Cost:0.09253, Accuracy:0.97656
Pass:3, Batch:400, Cost:0.17517, Accuracy:0.95312
Test:3, Cost:0.11185, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.06638, Accuracy:0.98438
Pass:4, Batch:100, Cost:0.07160, Accuracy:0.98438
Pass:4, Batch:200, Cost:0.11147, Accuracy:0.96094
Pass:4, Batch:300, Cost:0.08665, Accuracy:0.97656
Pass:4, Batch:400, Cost:0.15449, Accuracy:0.95312
Test:4, Cost:0.10896, Accuracy:1.00000

The four-layer network converges quickly and stably, with slightly higher accuracy than the three-layer network. But are more layers always better? I did not test further.
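The four MLP variants above differ only in how many ReLU layers sit between the input and the softmax output. Here is a self-contained NumPy sketch of the same family, parameterized by depth (my own illustration, assuming the 100-unit hidden layers used in the code above):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Build weights for an MLP with n_hidden hidden layers of `width` units.
# n_hidden=0 reproduces the single-layer net, n_hidden=3 the four-layer one.
def init_mlp(n_hidden, width=100, n_in=784, n_out=10, seed=0):
    rng = np.random.default_rng(seed)
    sizes = [n_in] + [width] * n_hidden + [n_out]
    return [(rng.normal(scale=0.01, size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0)   # ReLU on every hidden layer
    W, b = params[-1]
    return softmax(x @ W + b)          # softmax output layer

probs = forward(init_mlp(n_hidden=3), np.random.default_rng(1).random((4, 784)))
print(probs.shape)   # (4, 10)
```

Training (the gradient updates) is omitted; the point is that depth is just one more entry in `sizes`, which is exactly the knob this section has been turning.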

II. Convolutional Neural Networks

A convolutional neural network is composed of one or more convolutional layers, pooling layers, and fully connected layers. Here, we compare networks with one to three conv+pool blocks.

First, modify the code to set the classifier to a CNN (Convolutional Neural Network):

# Get the classifier
#model = multilayer_perceptron(image)
model = convolutional_neural_network(image)

1. One conv+pool block

The network structure is input layer -> conv layer -> pooling layer -> fully connected layer.

The code is as follows:

# Convolutional neural network
def convolutional_neural_network(input):
    # First convolutional layer: 32 filters of size 3x3
    conv1 = fluid.layers.conv2d(input=input, num_filters=32, filter_size=3, stride=1)
    # First pooling layer: 2x2 max pooling with stride 1
    pool1 = fluid.layers.pool2d(input=conv1, pool_size=2, pool_stride=1, pool_type='max')
    # Second convolutional layer: 64 filters of size 3x3 (disabled here)
    #conv2 = fluid.layers.conv2d(input=pool1, num_filters=64, filter_size=3, stride=1)
    # Second pooling layer: 2x2 max pooling with stride 1 (disabled here)
    #pool2 = fluid.layers.pool2d(input=conv2, pool_size=2, pool_stride=1, pool_type='max')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=pool1, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:3.51505, Accuracy:0.10156
Pass:0, Batch:100, Cost:0.34445, Accuracy:0.89844
Pass:0, Batch:200, Cost:0.21319, Accuracy:0.95312
Pass:0, Batch:300, Cost:0.23849, Accuracy:0.94531
Pass:0, Batch:400, Cost:0.47284, Accuracy:0.87500
Test:0, Cost:0.19661, Accuracy:1.00000
Pass:1, Batch:0, Cost:0.18348, Accuracy:0.97656
Pass:1, Batch:100, Cost:0.08346, Accuracy:0.96875
Pass:1, Batch:200, Cost:0.09432, Accuracy:0.96875
Pass:1, Batch:300, Cost:0.14066, Accuracy:0.96875
Pass:1, Batch:400, Cost:0.12486, Accuracy:0.96094
Test:1, Cost:0.12581, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.09072, Accuracy:0.97656
Pass:2, Batch:100, Cost:0.09942, Accuracy:0.96875
Pass:2, Batch:200, Cost:0.06352, Accuracy:0.98438
Pass:2, Batch:300, Cost:0.11163, Accuracy:0.97656
Pass:2, Batch:400, Cost:0.10208, Accuracy:0.96875
Test:2, Cost:0.13513, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.09934, Accuracy:0.97656
Pass:3, Batch:100, Cost:0.04291, Accuracy:0.97656
Pass:3, Batch:200, Cost:0.04950, Accuracy:0.98438
Pass:3, Batch:300, Cost:0.09500, Accuracy:0.96875
Pass:3, Batch:400, Cost:0.06909, Accuracy:0.98438
Test:3, Cost:0.12898, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.08588, Accuracy:0.97656
Pass:4, Batch:100, Cost:0.03593, Accuracy:0.98438
Pass:4, Batch:200, Cost:0.04931, Accuracy:0.96875
Pass:4, Batch:300, Cost:0.08851, Accuracy:0.97656
Pass:4, Batch:400, Cost:0.09598, Accuracy:0.97656
Test:4, Cost:0.11659, Accuracy:1.00000

As you can see, even with a single convolutional layer the training accuracy already exceeds that of the four-layer MLP; the network performs remarkably well.
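It is worth tracking feature-map sizes and parameter counts through this network. Assuming zero padding (Paddle's default for conv2d and pool2d; the code above does not set it explicitly), a 3x3 stride-1 convolution shrinks a 28x28 image to 26x26, and the 2x2 stride-1 max pool shrinks that to 25x25:

```python
# Output size of a convolution or pooling window (standard formula).
def conv_out(size, k, stride=1, pad=0):
    return (size + 2 * pad - k) // stride + 1

h = conv_out(28, 3)   # 3x3 conv, stride 1: 28 -> 26
h = conv_out(h, 2)    # 2x2 max-pool, stride 1: 26 -> 25
print(h)              # 25

conv_params = 32 * (3 * 3 * 1) + 32   # 32 filters over 1 input channel, plus biases
fc_params = 32 * h * h * 10 + 10      # flattened 32x25x25 feature maps into softmax
print(conv_params, fc_params)         # 320 200010
```

The convolution itself uses only 320 parameters; the weight sharing of the 3x3 filters is what lets such a small layer beat the much larger fully connected networks.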

2. Two conv+pool blocks

The network structure is input layer -> conv layer -> pooling layer -> conv layer -> pooling layer -> fully connected layer.

The code is as follows:

# Convolutional neural network
def convolutional_neural_network(input):
    # First convolutional layer: 32 filters of size 3x3
    conv1 = fluid.layers.conv2d(input=input, num_filters=32, filter_size=3, stride=1)
    # First pooling layer: 2x2 max pooling with stride 1
    pool1 = fluid.layers.pool2d(input=conv1, pool_size=2, pool_stride=1, pool_type='max')
    # Second convolutional layer: 64 filters of size 3x3
    conv2 = fluid.layers.conv2d(input=pool1, num_filters=64, filter_size=3, stride=1)
    # Second pooling layer: 2x2 max pooling with stride 1
    pool2 = fluid.layers.pool2d(input=conv2, pool_size=2, pool_stride=1, pool_type='max')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=pool2, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:4.55156, Accuracy:0.06250
Pass:0, Batch:100, Cost:0.21274, Accuracy:0.93750
Pass:0, Batch:200, Cost:0.13221, Accuracy:0.95312
Pass:0, Batch:300, Cost:0.14602, Accuracy:0.97656
Pass:0, Batch:400, Cost:0.21743, Accuracy:0.94531
Test:0, Cost:0.10561, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.13267, Accuracy:0.96875
Pass:1, Batch:100, Cost:0.07436, Accuracy:0.96875
Pass:1, Batch:200, Cost:0.05657, Accuracy:0.98438
Pass:1, Batch:300, Cost:0.17919, Accuracy:0.96875
Pass:1, Batch:400, Cost:0.16327, Accuracy:0.97656
Test:1, Cost:0.09448, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.09776, Accuracy:0.98438
Pass:2, Batch:100, Cost:0.03945, Accuracy:0.98438
Pass:2, Batch:200, Cost:0.05310, Accuracy:0.98438
Pass:2, Batch:300, Cost:0.14646, Accuracy:0.97656
Pass:2, Batch:400, Cost:0.06727, Accuracy:0.96875
Test:2, Cost:0.09720, Accuracy:1.00000
Pass:3, Batch:0, Cost:0.06443, Accuracy:0.98438
Pass:3, Batch:100, Cost:0.09163, Accuracy:0.96875
Pass:3, Batch:200, Cost:0.01216, Accuracy:1.00000
Pass:3, Batch:300, Cost:0.10314, Accuracy:0.98438
Pass:3, Batch:400, Cost:0.08002, Accuracy:0.97656
Test:3, Cost:0.11338, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.02246, Accuracy:0.98438
Pass:4, Batch:100, Cost:0.01949, Accuracy:0.99219
Pass:4, Batch:200, Cost:0.06579, Accuracy:0.97656
Pass:4, Batch:300, Cost:0.15396, Accuracy:0.98438
Pass:4, Batch:400, Cost:0.03079, Accuracy:0.99219
Test:4, Cost:0.12499, Accuracy:1.00000

With two convolutional layers, the training accuracy stays very stably close to 100%, which is quite impressive. However, the test accuracy only reaches 93% in the first two passes, and I am not sure why.

3. Three conv+pool blocks

The network structure is input layer -> conv layer -> pooling layer -> conv layer -> pooling layer -> conv layer -> pooling layer -> fully connected layer.

The code is as follows. The third conv block is my own addition, and I am not sure whether it should use 128 filters. I chose 128 because the first layer uses 32 filters and the second uses 64, so I guessed the third should double again to 128, but I have no mathematical justification for this.

# Convolutional neural network
def convolutional_neural_network(input):
    # First convolutional layer: 32 filters of size 3x3
    conv1 = fluid.layers.conv2d(input=input, num_filters=32, filter_size=3, stride=1)
    # First pooling layer: 2x2 max pooling with stride 1
    pool1 = fluid.layers.pool2d(input=conv1, pool_size=2, pool_stride=1, pool_type='max')
    # Second convolutional layer: 64 filters of size 3x3
    conv2 = fluid.layers.conv2d(input=pool1, num_filters=64, filter_size=3, stride=1)
    # Second pooling layer: 2x2 max pooling with stride 1
    pool2 = fluid.layers.pool2d(input=conv2, pool_size=2, pool_stride=1, pool_type='max')
    # Third convolutional layer: 128 filters of size 3x3
    conv3 = fluid.layers.conv2d(input=pool2, num_filters=128, filter_size=3, stride=1)
    # Third pooling layer: 2x2 max pooling with stride 1
    pool3 = fluid.layers.pool2d(input=conv3, pool_size=2, pool_stride=1, pool_type='max')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=pool3, size=10, act='softmax')
    return fc

Accuracy on the training and test data:

Pass:0, Batch:0, Cost:6.76483, Accuracy:0.16406
Pass:0, Batch:100, Cost:0.13075, Accuracy:0.95312
Pass:0, Batch:200, Cost:0.18448, Accuracy:0.96875
Pass:0, Batch:300, Cost:0.21740, Accuracy:0.97656
Pass:0, Batch:400, Cost:0.40639, Accuracy:0.92969
Test:0, Cost:0.21052, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.22828, Accuracy:0.96094
Pass:1, Batch:100, Cost:0.06976, Accuracy:0.97656
Pass:1, Batch:200, Cost:0.15817, Accuracy:0.96875
Pass:1, Batch:300, Cost:0.16659, Accuracy:0.98438
Pass:1, Batch:400, Cost:0.16523, Accuracy:0.96875
Test:1, Cost:0.14129, Accuracy:1.00000
Pass:2, Batch:0, Cost:0.15643, Accuracy:0.96875
Pass:2, Batch:100, Cost:0.04042, Accuracy:0.98438
Pass:2, Batch:200, Cost:0.09001, Accuracy:0.98438
Pass:2, Batch:300, Cost:0.19979, Accuracy:0.96094
Pass:2, Batch:400, Cost:0.26533, Accuracy:0.96094
Test:2, Cost:0.34692, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.32040, Accuracy:0.97656
Pass:3, Batch:100, Cost:0.23548, Accuracy:0.96094
Pass:3, Batch:200, Cost:0.14403, Accuracy:0.96875
Pass:3, Batch:300, Cost:0.10629, Accuracy:0.97656
Pass:3, Batch:400, Cost:0.36311, Accuracy:0.94531
Test:3, Cost:0.34852, Accuracy:0.93750
Pass:4, Batch:0, Cost:0.12174, Accuracy:0.99219
Pass:4, Batch:100, Cost:0.15106, Accuracy:0.96875
Pass:4, Batch:200, Cost:0.17723, Accuracy:0.96875
Pass:4, Batch:300, Cost:0.18383, Accuracy:0.96875
Pass:4, Batch:400, Cost:0.17384, Accuracy:0.96094
Test:4, Cost:0.16233, Accuracy:1.00000

Strangely, the three-conv network performs no better than the two-conv one. Could something be wrong with the configuration?
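One possible factor, offered as a guess: if I read the fluid API correctly, fluid.layers.conv2d defaults to no activation (act=None), so these stacked convolutions add depth without any extra nonlinearity beyond max pooling, while the parameter count balloons with each channel doubling. The bookkeeping below (again assuming zero padding, Paddle's default) makes that growth concrete:

```python
# Track feature-map size and parameter count through the three conv+pool blocks.
def conv_out(size, k, stride=1, pad=0):
    return (size + 2 * pad - k) // stride + 1

h, channels, total_params = 28, 1, 0
for n_filters in (32, 64, 128):
    h = conv_out(h, 3)                                   # 3x3 conv, stride 1
    total_params += n_filters * (3 * 3 * channels) + n_filters
    channels = n_filters
    h = conv_out(h, 2)                                   # 2x2 max-pool, stride 1
print(h, channels)                                       # 19 128

total_params += channels * h * h * 10 + 10               # final softmax fc layer
print(total_params)                                      # 554762
```

Over half a million parameters, most of them in the final fully connected layer over the 128x19x19 feature maps. More capacity with little added nonlinearity is one plausible reason the third block does not help; stride-2 pooling or ReLU activations on the conv layers would be natural things to try.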
