Summary: Using Paddle, the most basic BP network and the LeNet architecture were tested on the Cifar10 dataset. The accuracy on the test set, however, never exceeded 0.3, and the cause still had to be tracked down.
As later testing revealed, the problem was in the data loading: the experiments below actually trained on only 1000 training samples, which kept the model's prediction accuracy from improving.

Keywords: Cifar10, LeNet


CONTENTS

Assignment Requirements
Dataset Preparation
Network Training
BP Network
LeNet Network
Improved LeNet Network
Summary
Addendum
Paddle Model Implementation
Training AlexNet on Cifar10
Retraining LeNet
Post Program Listing

§01 Assignment Requirements


  According to Question 4 of the 2021 Artificial Neural Network Assignment 4 requirements, this is a verification experiment based on The CIFAR-10 dataset.

▲ Figure 1.1 Sample images from the CIFAR-10 dataset

1.1 Assignment Requirements

  ① Build a relatively small convolutional neural network for image recognition, completing the CIFAR-10 recognition task.

  • Build a well-structured network, train it, and evaluate it;
  • Provide a template for building larger and more complex models.

  CIFAR-10 was chosen because it is complex enough to exercise most of the functionality in TensorFlow, and the model can be extended to a larger one. At the same time, because the model is small, training is fast, making it well suited to testing new ideas and trying out new techniques.

  ② You may refer to the online "CIFAR-10 tutorial" to discuss how the following aspects of a convolutional neural network affect training:

  • The core mathematical operations: convolution, rectified linear activation, max pooling, and local response normalization;
  • Visualization of network behavior during training, including the input images, the loss, the distribution of activations, and the gradients;
  • A function that computes moving averages of the learned parameters, and the use of these averages during evaluation to improve prediction performance;
  • A mechanism that decays the learning rate over time (a sketch follows this list);
  • If resources allow, discuss training the network in parallel on multiple GPUs, with parameter sharing and variable updates across them.
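  As a hedged illustration of the learning-rate decay point above, Paddle 2.x provides scheduler objects that can be handed to an optimizer; the gamma value here is an assumption for illustration, not something specified by the assignment:

import paddle

# A minimal sketch of a time-decaying learning rate (assumed gamma=0.9):
scheduler = paddle.optimizer.lr.ExponentialDecay(learning_rate=0.001, gamma=0.9)
optimizer = paddle.optimizer.Adam(learning_rate=scheduler,
                                  parameters=model.parameters())  # model: any paddle.nn.Layer
# Inside the training loop, call scheduler.step() once per epoch
# so the effective learning rate shrinks as training progresses.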

  ③ Discuss how data augmentation improves network performance; a sketch of possible transforms follows.
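  As a hedged sketch of what data augmentation could look like here, paddle.vision.transforms can be composed in front of the Normalize step used throughout this post; the particular transforms and padding value are assumptions for illustration:

import paddle.vision.transforms as T
from paddle.vision.datasets import Cifar10

# Random flips and shifted crops enlarge the effective training set:
augment = T.Compose([
    T.RandomHorizontalFlip(),          # mirror half of the images
    T.RandomCrop(32, padding=4),       # shift content by up to 4 pixels
    T.Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5], data_format='HWC'),
])
# cifar10_train = Cifar10(mode='train', transform=augment)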

  Discuss the effect of the DROP-OUT mechanism on network training and generalization ability.

  Sample images from the CIFAR-10 dataset are shown in Figure 1.1 above.

1.2 Dataset Preparation

  The dataset needed for the experiment can be downloaded directly from the CIFAR-10 Dataset site, or read straight from the Cifar10 dataset built into Paddle.

1.2.1 Downloading the Dataset

(1) Code in Paddle

import sys,os,math,time
import matplotlib.pyplot as plt
from numpy import *

from paddle.vision.datasets import Cifar10
from paddle.vision.transforms import Normalize

normalize = Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5], data_format='HWC')
cifar10 = Cifar10(mode='train', transform=normalize)

  Running the program triggers the Paddle notebook environment to download the Cifar dataset automatically:

▲ Figure 1.2.1 Running the program triggers the Paddle notebook environment to download the Cifar dataset automatically

  The downloaded dataset is stored in /home/aistudio/.cache/paddle/dataset/cifar:

cifar-10-python.tar.gz
aistudio@jupyter-262579-3253281:~/.cache/paddle/dataset/cifar$

(2) Structure of the Cifar10 Object

  Inspecting the internal structure of cifar10:

print(type(cifar10), dir(cifar10))

<class 'paddle.vision.datasets.cifar.Cifar10'>
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_init_url_md5_flag', '_load_data', 'backend', 'data', 'data_file', 'data_md5', 'data_url', 'dtype', 'flag', 'mode', 'transform']

  The loaded Cifar10 object contains many internal attributes:

print(cifar10.backend, type(cifar10.data))
print(shape(cifar10.data))
print(cifar10.mode, cifar10.transform)

pil
<class 'list'>
(50000, 2)
train
<paddle.vision.transforms.transforms.Normalize object at 0x7fef3a1ec250>

  其中 "cifar10.data包含有数据库中的图片和标签。

Structure of Cifar10.data:
Type: list
Shape: (50000, 2)

  Each element Cifar10.data[n] is a tuple:

image0 = cifar10.data[0][0]
label0 = cifar10.data[0][1]
print(type(cifar10.data[0]), type(image0), shape(image0), label0)

<class 'tuple'>
<class 'numpy.ndarray'>
(3072,)
6

  It contains an image of length 3072 = 32×32×3 together with its label, here "6".

cifarname = {0:'airplane', 1:'automobile', 2:'bird', 3:'cat',
             4:'deer', 5:'dog', 6:'frog', 7:'horse', 8:'ship', 9:'truck'}

plt.figure(figsize=(5,5))
plt.imshow(image0.reshape(3,32,32).T.swapaxes(0,1))
plt.title(cifarname[label0], fontsize=16, color='blue')

  Cifar10 data must be transformed as follows before matplotlib can display it in the (32, 32, 3) layout:

data.reshape(3,32,32).T.swapaxes(0,1)
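  For what it's worth, the reshape-transpose-swap chain above is equivalent to a single transpose(1,2,0), which states the intent (channels last) more directly; a quick check under that assumption:

import numpy as np

img = np.arange(3*32*32)
a = img.reshape(3,32,32).T.swapaxes(0,1)     # the chain used above -> (32,32,3)
b = img.reshape(3,32,32).transpose(1,2,0)    # one equivalent transpose
assert (a == b).all()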

  Cifar10 image #0: frog (label 6):

▲ Figure 1.2.2 Cifar10 image #0: frog (label 6)

index = list(range(len(cifar10.data)))
random.shuffle(index)

PIC_ROW = 6
PIC_COL = 12
plt.figure(figsize=(15,10))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i + j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        rid = index[id]
        img = cifar10.data[rid][0].reshape(3,32,32).T.swapaxes(0,1)
        lbl = cifar10.data[rid][1]
        plt.imshow(img, cmap=plt.cm.gray)
        plt.title(cifarname[lbl], fontsize=12, color='darkgreen')

  Images from Cifar10:

▲ Figure 1.2.3 Images from Cifar10

§02 Network Training


2.1 BP Network

2.1.1 Data Loading and Preprocessing

import sys,os,math,time
import matplotlib.pyplot as plt
from numpy import *

import paddle
from paddle.vision.transforms import Normalize
from paddle.vision.datasets import Cifar10

normalize = Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5], data_format='HWC')
cifar10_train = Cifar10(mode='train', transform=normalize)
cifar10_test = Cifar10(mode='test', transform=normalize)

2.1.2 Building the BP Network

import paddle

bpnet = paddle.nn.Sequential(
    paddle.nn.Flatten(),
    paddle.nn.Linear(32*32*3, 512),
    paddle.nn.ReLU(),
    paddle.nn.Dropout(0.2),
    paddle.nn.Linear(512, 10))

model = paddle.Model(bpnet)
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
              paddle.nn.CrossEntropyLoss(),
              paddle.metric.Accuracy())

2.1.3 Training the Network Model

model.fit(cifar10_train, epochs=5, verbose=1)

▲ Figure 2.1.1 Training process

  While training on the Cifar10 data with the BP network, the network's accuracy stayed at around 0.1 throughout.

  During this process:

  • The hidden-layer ReLU activation was replaced with Sigmoid;
  • The learning rate Lr was set to 0.1;

  but the training accuracy still did not improve.
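  For reference, with the high-level API used above, the test-set accuracy can also be checked directly; a minimal sketch, assuming the model and cifar10_test objects defined earlier:

# Evaluate the prepared model on the test split:
result = model.evaluate(cifar10_test, batch_size=100, verbose=1)
print(result)   # reports loss and the Accuracy metric configured in prepare()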

2.2 LeNet Network

2.2.1 Data Loading and Preprocessing

  Since a CNN is used, the input data must be arranged into the (3, 32, 32) format expected by the LeNet network's input.

import sys,os,math,time
import matplotlib.pyplot as plt
from numpy import *

import paddle
from paddle.vision.transforms import Normalize
from paddle.vision.datasets import Cifar10

normalize = Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5], data_format='HWC')
cifar10_train = Cifar10(mode='train', transform=normalize)
cifar10_test = Cifar10(mode='test', transform=normalize)

#------------------------------------------------------------
train_dataset = [cifar10_train.data[id][0].reshape(3,32,32) for id in range(len(cifar10_train.data))]
train_labels = [cifar10_train.data[id][1] for id in range(len(cifar10_train.data))]

2.2.2 Building the Training DataLoader

  Package the Cifar10 data into batches for training LeNet.

class Dataset(paddle.io.Dataset):
    def __init__(self, num_samples):
        super(Dataset, self).__init__()
        self.num_samples = num_samples

    def __getitem__(self, index):
        data = train_dataset[index]
        label = train_labels[index]
        return paddle.to_tensor(data, dtype='float32'), paddle.to_tensor(label, dtype='int64')

    def __len__(self):
        return self.num_samples

# NOTE: this 1000 is the data-loading bug described in the addendum;
# it silently limits training to 1000 samples instead of all 50000.
_dataset = Dataset(1000)
train_loader = paddle.io.DataLoader(_dataset, batch_size=100, shuffle=True)

  Data-loading configuration:

Data loading:
Batch size: 100
Shuffle: Yes

2.2.3 Building the LeNet Network

imgwidth = 32
imgheight = 32
inputchannel = 3
kernelsize = 5
targetsize = 10

# Feature-map side length after two conv (valid padding) + 2x2 max-pool stages:
ftwidth = ((imgwidth-kernelsize+1)//2-kernelsize+1)//2
ftheight = ((imgheight-kernelsize+1)//2-kernelsize+1)//2

class lenet(paddle.nn.Layer):
    def __init__(self, ):
        super(lenet, self).__init__()
        self.conv1 = paddle.nn.Conv2D(in_channels=inputchannel, out_channels=6, kernel_size=kernelsize, stride=1, padding=0)
        self.conv2 = paddle.nn.Conv2D(in_channels=6, out_channels=16, kernel_size=kernelsize, stride=1, padding=0)
        self.mp1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.mp2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.L1 = paddle.nn.Linear(in_features=ftwidth*ftheight*16, out_features=120)
        self.L2 = paddle.nn.Linear(in_features=120, out_features=86)
        self.L3 = paddle.nn.Linear(in_features=86, out_features=targetsize)

    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = self.L3(x)
        return x

model = lenet()
paddle.summary(model, (100, 3, 32, 32))
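  As a quick sanity check on the flatten size: conv1 gives 32-5+1 = 28, pooling halves it to 14, conv2 gives 14-5+1 = 10, pooling halves it to 5, and 5×5×16 = 400 matches the in_features of the first Linear layer in the summary below:

ft = ((32 - 5 + 1)//2 - 5 + 1)//2   # 28 -> 14 -> 10 -> 5
print(ft, ft*ft*16)                 # 5 400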

  The model structure is:

---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
   Conv2D-17     [[100, 3, 32, 32]]    [100, 6, 28, 28]         456
  MaxPool2D-17   [[100, 6, 28, 28]]    [100, 6, 14, 14]          0
   Conv2D-18     [[100, 6, 14, 14]]   [100, 16, 10, 10]        2,416
  MaxPool2D-18  [[100, 16, 10, 10]]    [100, 16, 5, 5]           0
   Linear-25       [[100, 400]]          [100, 120]          48,120
   Linear-26       [[100, 120]]          [100, 86]           10,406
   Linear-27       [[100, 86]]           [100, 10]             870
===========================================================================
Total params: 62,268
Trainable params: 62,268
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 1.17
Forward/backward pass size (MB): 6.18
Params size (MB): 0.24
Estimated Total Size (MB): 7.59
---------------------------------------------------------------------------

{'total_params': 62268, 'trainable_params': 62268}

2.2.4 Defining the Training Loop

  During training, display the changes in training Loss and Accuracy, and finally plot the training curves.

optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())

def train(model):
    model.train()
    epochs = 100
    accdim = []
    lossdim = []
    for epoch in range(epochs):
        for batch, data in enumerate(train_loader()):
            out = model(data[0])
            loss = paddle.nn.functional.cross_entropy(out, data[1])
            acc = paddle.metric.accuracy(out, data[1])
            loss.backward()
            optimizer.step()
            optimizer.clear_grad()
            accdim.append(acc.numpy())
            lossdim.append(loss.numpy())
        print('Epoch:{}, Loss:{}, Accuracy:{}'.format(epoch, loss.numpy(), acc.numpy()))

    plt.figure(figsize=(10, 6))
    plt.subplot(211)
    plt.plot(accdim, label='Accuracy')
    plt.xlabel('Step')
    plt.ylabel('Acc')
    plt.grid(True)
    plt.legend(loc='upper left')
    plt.tight_layout()
    plt.subplot(212)
    plt.plot(lossdim, label='Loss')
    plt.xlabel('Step')
    plt.ylabel('Loss')
    plt.grid(True)
    plt.legend(loc='upper left')
    plt.tight_layout()
    plt.show()

train(model)

2.2.5 Training Results

  Training was run in the AI Studio premium environment.

Training parameters:
Epochs: 100
Learning rate: 0.001
Batch size: 100
Training samples: 50000

  Training time: 12.661 seconds.

  The changes in Loss and Accuracy during training:

▲ Figure 2.2.1 Changes in Loss and Accuracy during training

2.2.6 Validation on the Test Set

test_dataset = [cifar10_test.data[id][0].reshape(3,32,32) for id in range(len(cifar10_test.data))]
test_label = [cifar10_test.data[id][1] for id in range(len(cifar10_test.data))]

predict = model(paddle.to_tensor(test_dataset, dtype='float32'))
test_target = paddle.fluid.layers.argmax(predict, axis=1).numpy()
print(test_target)

origin_label = array(test_label)
errorid = where(test_target != origin_label)[0]
print(errorid, len(errorid))

[1 1 1 ... 5 5 2]
[   0    1    2 ... 9996 9998 9999]
7655

  Of the 10,000 test samples, as many as 7,655 were mislabeled. This indicates that the preceding model had already entered an "overfitted state" during training.
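  As an aside, paddle.fluid.layers.argmax above is the legacy 1.x API; in Paddle 2.x the same prediction labels can be obtained with paddle.argmax. A minimal sketch, assuming the predict tensor above:

# Paddle 2.x equivalent of the fluid argmax call:
test_target = paddle.argmax(predict, axis=1).numpy()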

cifarname = {0:'airplane', 1:'automobile', 2:'bird', 3:'cat',
             4:'deer', 5:'dog', 6:'frog', 7:'horse', 8:'ship', 9:'truck'}

PIC_ROW = 4
PIC_COL = 6
plt.figure(figsize=(12,8))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i + j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        eid = errorid[id]
        tid = test_target[eid]
        plt.imshow(test_dataset[eid].swapaxes(1,2).T, cmap=plt.cm.gray)
        plt.title(cifarname[tid], fontsize=12, color='blue')

▲ Figure 2.2.2 Some of the misclassified samples

▲ Figure 2.2.3 Training accuracy and test accuracy curves during training

2.3 Improving the LeNet Network

  The training results above show that LeNet overfits on the Cifar10 dataset, so improvements are needed in both the network structure and the training parameters.

2.3.1 Adding Dropout to LeNet

    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = paddle.fluid.layers.dropout(x, 0.2)
        x = self.L3(x)
        return x

(1) Coefficient: 0.2

  The Dropout coefficient is set to 0.2.
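  As a side note, paddle.fluid.layers.dropout in the forward pass above is the legacy 1.x API. In Paddle 2.x the idiomatic form is a Dropout layer registered in __init__, which model.eval() then switches off automatically; a minimal sketch under that assumption:

self.drop = paddle.nn.Dropout(p=0.2)   # in __init__

x = self.drop(x)                       # in forward, replacing the fluid call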

  After adding Dropout, the network was retrained for 500 epochs; the resulting changes in training accuracy and test accuracy:

▲ Figure 2.3.1 Changes in training accuracy and test accuracy after retraining for 500 epochs with Dropout added

  The test accuracy shows no obvious improvement.

  The training curves with the Dropout coefficient set to 0.5:

▲ Figure 2.3.2 Training curves with the Dropout coefficient set to 0.5

Training parameters:
Dropout: 0.2
Batch size: 100
Lr: 0.001

  Final test-set accuracy: 75%.

  The training curves with the Dropout coefficient set to 0.2:

▲ Figure 2.3.3 Training curves with the Dropout coefficient set to 0.2

2.3.2 Changing the Convolution Kernel Size

  The convolution kernel was made smaller, to a side length of 3.

  Training results with 3×3 kernels and Dropout = 0.2:

▲ Figure 2.3.4 Training results with 3×3 kernels and Dropout = 0.2

  Below are the training results with the Dropout layer removed and Lr = 0.001.

▲ Figure 2.3.5 Training results with 3×3 kernels and Dropout removed

  With Lr reduced to 0.0005, the test accuracy after 500 epochs of training was 0.283.

2.3.3 Adding a Convolution Layer

imgwidth = 32
imgheight = 32
inputchannel = 3
kernelsize = 3
targetsize = 10

# Three conv (valid padding) + 2x2 max-pool stages:
ftwidth = (imgwidth-kernelsize+1)//2
ftheight = (imgheight-kernelsize+1)//2
ftwidth = (ftwidth-kernelsize+1)//2
ftheight = (ftheight-kernelsize+1)//2
ftwidth = (ftwidth-kernelsize+1)//2
ftheight = (ftheight-kernelsize+1)//2

class lenet(paddle.nn.Layer):
    def __init__(self, ):
        super(lenet, self).__init__()
        self.conv1 = paddle.nn.Conv2D(in_channels=inputchannel, out_channels=6, kernel_size=kernelsize, stride=1, padding=0)
        self.conv2 = paddle.nn.Conv2D(in_channels=6, out_channels=16, kernel_size=kernelsize, stride=1, padding=0)
        self.conv3 = paddle.nn.Conv2D(in_channels=16, out_channels=32, kernel_size=kernelsize, stride=1, padding=0)
        self.mp1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.mp2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.mp3 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.L1 = paddle.nn.Linear(in_features=ftwidth*ftheight*32, out_features=120)
        self.L2 = paddle.nn.Linear(in_features=120, out_features=86)
        self.L3 = paddle.nn.Linear(in_features=86, out_features=targetsize)

    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = paddle.nn.functional.relu(x)   # (ReLU placed before conv3, so conv3's output feeds the pool un-activated)
        x = self.conv3(x)
        x = self.mp3(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = paddle.fluid.layers.dropout(x, 0.1)
        x = self.L3(x)
        return x

model = lenet()
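  A quick check of the flatten size for this variant: 32-3+1 = 30 pools to 15, 15-3+1 = 13 pools to 6, 6-3+1 = 4 pools to 2, so the flattened feature is 2×2×32 = 128, matching the first Linear layer in the summary below:

ft = 32
for _ in range(3):
    ft = (ft - 3 + 1)//2   # 30->15, 13->6, 4->2
print(ft, ft*ft*32)        # 2 128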

  Results for the three-layer convolutional network:

▲ Figure 2.3.6 Results for the three-layer convolutional network

---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
   Conv2D-38     [[100, 3, 32, 32]]    [100, 6, 30, 30]         168
  MaxPool2D-36   [[100, 6, 30, 30]]    [100, 6, 15, 15]          0
   Conv2D-39     [[100, 6, 15, 15]]   [100, 16, 13, 13]         880
  MaxPool2D-37  [[100, 16, 13, 13]]    [100, 16, 6, 6]           0
   Conv2D-40     [[100, 16, 6, 6]]     [100, 32, 4, 4]         4,640
  MaxPool2D-38   [[100, 32, 4, 4]]     [100, 32, 2, 2]           0
   Linear-50       [[100, 128]]          [100, 120]          15,480
   Linear-51       [[100, 120]]          [100, 86]           10,406
   Linear-52       [[100, 86]]           [100, 10]             870
===========================================================================
Total params: 32,444
Trainable params: 32,444
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 1.17
Forward/backward pass size (MB): 8.31
Params size (MB): 0.12
Estimated Total Size (MB): 9.60
---------------------------------------------------------------------------

{'total_params': 32444, 'trainable_params': 32444}

※ Summary ※


  Using Paddle, the most basic BP network and the LeNet architecture were tested on the Cifar10 dataset. The accuracy on the test set, however, never exceeded 0.3, and the cause still had to be tracked down.

◎ Addendum

  The problem above was only discovered later, while training the AlexNet network: when the data loading was written early on, the training data length was limited to 1000 rather than 50000! This explains why the prediction accuracy stayed so low when training LeNet on the CiFar10 dataset.

1.3 Paddle Model Implementation

  Build Alexnet using the neural-network modules in Paddle.

1.3.1 Building the Alexnet Network

(1) Network Code

import paddle

class alexnet(paddle.nn.Layer):
    def __init__(self, ):
        super(alexnet, self).__init__()
        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=96, kernel_size=7, stride=2, padding=2)
        self.conv2 = paddle.nn.Conv2D(in_channels=96, out_channels=256, kernel_size=5, stride=1, padding=2)
        self.conv3 = paddle.nn.Conv2D(in_channels=256, out_channels=384, kernel_size=3, stride=1, padding=1)
        self.conv4 = paddle.nn.Conv2D(in_channels=384, out_channels=384, kernel_size=3, stride=1, padding=1)
        self.conv5 = paddle.nn.Conv2D(in_channels=384, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.mp1 = paddle.nn.MaxPool2D(kernel_size=3, stride=2)
        self.mp2 = paddle.nn.MaxPool2D(kernel_size=3, stride=2)
        self.L1 = paddle.nn.Linear(in_features=256*3*3, out_features=1024)
        self.L2 = paddle.nn.Linear(in_features=1024, out_features=512)
        self.L3 = paddle.nn.Linear(in_features=512, out_features=10)

    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = self.conv3(x)
        x = paddle.nn.functional.relu(x)
        x = self.conv4(x)
        x = paddle.nn.functional.relu(x)
        x = self.conv5(x)
        x = paddle.nn.functional.relu(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = self.L3(x)
        return x

(2) Network Structure

  Use paddle.summary to check that the network structure is correct.

model = alexnet()
paddle.summary(model, (100,3,32,32))

---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
   Conv2D-16     [[100, 3, 32, 32]]   [100, 96, 15, 15]       14,208
  MaxPool2D-7   [[100, 96, 15, 15]]    [100, 96, 7, 7]           0
   Conv2D-17     [[100, 96, 7, 7]]     [100, 256, 7, 7]       614,656
  MaxPool2D-8    [[100, 256, 7, 7]]    [100, 256, 3, 3]          0
   Conv2D-18     [[100, 256, 3, 3]]    [100, 384, 3, 3]       885,120
   Conv2D-19     [[100, 384, 3, 3]]    [100, 384, 3, 3]      1,327,488
   Conv2D-20     [[100, 384, 3, 3]]    [100, 256, 3, 3]       884,992
   Linear-10       [[100, 2304]]         [100, 1024]         2,360,320
   Linear-11       [[100, 1024]]          [100, 512]          524,800
   Linear-12        [[100, 512]]          [100, 10]            5,130
===========================================================================
Total params: 6,616,714
Trainable params: 6,616,714
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 1.17
Forward/backward pass size (MB): 39.61
Params size (MB): 25.24
Estimated Total Size (MB): 66.02
---------------------------------------------------------------------------

{'total_params': 6616714, 'trainable_params': 6616714}

  In network design, structural errors tend to appear at the junction between the convolutional layers and the fully connected layers: after Flatten, the data dimensions fail to match. A practical approach is to first leave out the fully connected layers after Flatten, confirm from the paddle.summary output that the convolutional stack produces 256×3×3, and only then attach the fully connected layers. If an error appears, each layer can be checked individually.
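  The same check can be done without paddle.summary by pushing a dummy tensor through the convolutional stack; a minimal sketch, assuming the alexnet layers defined above:

# Probe the conv stack's output shape before wiring up the Linear layers:
m = alexnet()
x = paddle.randn([1, 3, 32, 32])
x = m.mp1(paddle.nn.functional.relu(m.conv1(x)))
x = m.mp2(paddle.nn.functional.relu(m.conv2(x)))
x = m.conv5(m.conv4(m.conv3(x)))   # ReLUs omitted here; they do not change shape
print(x.shape)                     # expect [1, 256, 3, 3] -> in_features 2304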

1.4 Training AlexNet on Cifar10

1.4.1 Loading the Data

import sys,os,math,time
import matplotlib.pyplot as plt
from numpy import *

import paddle
from paddle.vision.transforms import Normalize
from paddle.vision.datasets import Cifar10

normalize = Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5], data_format='HWC')
cifar10_train = Cifar10(mode='train', transform=normalize)
cifar10_test = Cifar10(mode='test', transform=normalize)

train_dataset = [cifar10_train.data[id][0].reshape(3,32,32) for id in range(len(cifar10_train.data))]
train_labels = [cifar10_train.data[id][1] for id in range(len(cifar10_train.data))]

class Dataset(paddle.io.Dataset):
    def __init__(self, num_samples):
        super(Dataset, self).__init__()
        self.num_samples = num_samples

    def __getitem__(self, index):
        data = train_dataset[index]
        label = train_labels[index]
        return paddle.to_tensor(data, dtype='float32'), paddle.to_tensor(label, dtype='int64')

    def __len__(self):
        return self.num_samples

# Here the full dataset length is used -- the corrected loader.
_dataset = Dataset(len(cifar10_train.data))
train_loader = paddle.io.DataLoader(_dataset, batch_size=100, shuffle=True)

1.4.2 Building the Network

class alexnet(paddle.nn.Layer):
    def __init__(self, ):
        super(alexnet, self).__init__()
        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=96, kernel_size=7, stride=2, padding=2)
        self.conv2 = paddle.nn.Conv2D(in_channels=96, out_channels=256, kernel_size=5, stride=1, padding=2)
        self.conv3 = paddle.nn.Conv2D(in_channels=256, out_channels=384, kernel_size=3, stride=1, padding=1)
        self.conv4 = paddle.nn.Conv2D(in_channels=384, out_channels=384, kernel_size=3, stride=1, padding=1)
        self.conv5 = paddle.nn.Conv2D(in_channels=384, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.mp1 = paddle.nn.MaxPool2D(kernel_size=3, stride=2)
        self.mp2 = paddle.nn.MaxPool2D(kernel_size=3, stride=2)
        self.L1 = paddle.nn.Linear(in_features=256*3*3, out_features=1024)
        self.L2 = paddle.nn.Linear(in_features=1024, out_features=512)
        self.L3 = paddle.nn.Linear(in_features=512, out_features=10)

    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = self.conv3(x)
        x = paddle.nn.functional.relu(x)
        x = self.conv4(x)
        x = paddle.nn.functional.relu(x)
        x = self.conv5(x)
        x = paddle.nn.functional.relu(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = self.L3(x)
        return x

model = alexnet()

1.4.3 Training the Network

test_dataset = [cifar10_test.data[id][0].reshape(3,32,32) for id in range(len(cifar10_test.data))]
test_label = [cifar10_test.data[id][1] for id in range(len(cifar10_test.data))]

test_input = paddle.to_tensor(test_dataset, dtype='float32')
test_l = paddle.to_tensor(array(test_label)[:,newaxis])

optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())

def train(model):
    model.train()
    epochs = 2
    accdim = []
    lossdim = []
    testaccdim = []
    for epoch in range(epochs):
        for batch, data in enumerate(train_loader()):
            out = model(data[0])
            loss = paddle.nn.functional.cross_entropy(out, data[1])
            acc = paddle.metric.accuracy(out, data[1])
            loss.backward()
            optimizer.step()
            optimizer.clear_grad()
            accdim.append(acc.numpy())
            lossdim.append(loss.numpy())
            predict = model(test_input)
            testacc = paddle.metric.accuracy(predict, test_l)
            testaccdim.append(testacc.numpy())
            if batch%10 == 0 and batch > 0:
                print('Epoch:{}, Batch:{}, Loss:{}, Accuracy:{}, Test:{}'.format(
                      epoch, batch, loss.numpy(), acc.numpy(), testacc.numpy()))

    plt.figure(figsize=(10, 6))
    plt.plot(accdim, label='Accuracy')
    plt.plot(testaccdim, label='Test')
    plt.xlabel('Step')
    plt.ylabel('Acc')
    plt.grid(True)
    plt.legend(loc='upper left')
    plt.tight_layout()

train(model)

1.4.4 Training Results

Training parameters:
BatchSize: 100
LearningRate: 0.001

  If the BatchSize is too small, training becomes slow.

▲ Figure 3.3.1 Training accuracy and test accuracy curves

Training parameters:
BatchSize: 5000
LearningRate: 0.0005

▲ Figure 3.3.2 Changes in training accuracy and test accuracy

▲ Figure 3.3.3 Changes in training accuracy and test accuracy

Retraining LeNet

Based on the preceding analysis, the cause was that in the earlier experiments the Dataset's length was mistakenly defined as 1000 instead of len(train_dataset), i.e. 50000, which is why the test accuracy never exceeded 30%. The one-line fix is sketched below.
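A minimal sketch of the fix, assuming the Dataset class defined earlier:

# The corrected loader: train on all 50000 samples rather than 1000.
_dataset = Dataset(len(train_dataset))
train_loader = paddle.io.DataLoader(_dataset, batch_size=100, shuffle=True)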

First Training Run

Training parameters:
BatchSize: 100
Lr: 0.001

▲ Figure 3.4.1 Accuracy curves during training

Training parameters:
BatchSize: 5000
Lr: 0.001

▲ Figure 3.4.2 Compute resource usage during training

▲ Figure 3.4.3 Training accuracy and test accuracy during training

◎ Post Program Listing

#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# TESTLENET.PY                 -- by Dr. ZhuoQing 2021-12-20
#
# Note:
#============================================================

from headm import *                 # =

import paddle
from paddle.vision.transforms import Normalize
from paddle.vision.datasets import Cifar10

normalize = Normalize(mean=[0.5,0.5,0.5], std=[0.5,0.5,0.5], data_format='HWC')
cifar10_train = Cifar10(mode='train', transform=normalize)
cifar10_test = Cifar10(mode='test', transform=normalize)

#------------------------------------------------------------
train_dataset = [cifar10_train.data[id][0].reshape(3,32,32) for id in range(len(cifar10_train.data))]
train_labels = [cifar10_train.data[id][1] for id in range(len(cifar10_train.data))]

#------------------------------------------------------------
class Dataset(paddle.io.Dataset):
    def __init__(self, num_samples):
        super(Dataset, self).__init__()
        self.num_samples = num_samples

    def __getitem__(self, index):
        data = train_dataset[index]
        label = train_labels[index]
        return paddle.to_tensor(data, dtype='float32'), paddle.to_tensor(label, dtype='int64')

    def __len__(self):
        return self.num_samples

# NOTE: Dataset(1000) is the data-loading bug discussed in the addendum;
# Dataset(len(train_dataset)) uses the full 50000-sample training set.
_dataset = Dataset(1000)
train_loader = paddle.io.DataLoader(_dataset, batch_size=100, shuffle=True)

#------------------------------------------------------------
imgwidth = 32
imgheight = 32
inputchannel = 3
kernelsize = 3
targetsize = 10
ftwidth = (imgwidth-kernelsize+1)//2
ftheight = (imgheight-kernelsize+1)//2
ftwidth = (ftwidth-kernelsize+1)//2
ftheight = (ftheight-kernelsize+1)//2
ftwidth = (ftwidth-kernelsize+1)//2
ftheight = (ftheight-kernelsize+1)//2

class lenet(paddle.nn.Layer):
    def __init__(self, ):
        super(lenet, self).__init__()
        self.conv1 = paddle.nn.Conv2D(in_channels=inputchannel, out_channels=6, kernel_size=kernelsize, stride=1, padding=0)
        self.conv2 = paddle.nn.Conv2D(in_channels=6, out_channels=16, kernel_size=kernelsize, stride=1, padding=0)
        self.conv3 = paddle.nn.Conv2D(in_channels=16, out_channels=32, kernel_size=kernelsize, stride=1, padding=0)
        self.mp1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.mp2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.mp3 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.L1 = paddle.nn.Linear(in_features=ftwidth*ftheight*32, out_features=120)
        self.L2 = paddle.nn.Linear(in_features=120, out_features=86)
        self.L3 = paddle.nn.Linear(in_features=86, out_features=targetsize)

    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = paddle.nn.functional.relu(x)
        x = self.conv3(x)
        x = self.mp3(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = paddle.fluid.layers.dropout(x, 0.1)
        x = self.L3(x)
        return x

model = lenet()

#------------------------------------------------------------
test_dataset = [cifar10_test.data[id][0].reshape(3,32,32) for id in range(len(cifar10_test.data))]
test_label = [cifar10_test.data[id][1] for id in range(len(cifar10_test.data))]

test_input = paddle.to_tensor(test_dataset, dtype='float32')
test_l = paddle.to_tensor(array(test_label)[:,newaxis])

#------------------------------------------------------------
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())

def train(model):
    model.train()
    epochs = 500
    accdim = []
    lossdim = []
    testaccdim = []
    for epoch in range(epochs):
        for batch, data in enumerate(train_loader()):
            out = model(data[0])
            loss = paddle.nn.functional.cross_entropy(out, data[1])
            acc = paddle.metric.accuracy(out, data[1])
            loss.backward()
            optimizer.step()
            optimizer.clear_grad()
            accdim.append(acc.numpy())
            lossdim.append(loss.numpy())
            predict = model(test_input)
            testacc = paddle.metric.accuracy(predict, test_l)
            testaccdim.append(testacc.numpy())
        if epoch%20 == 0 and epoch > 0:
            print('Epoch:{}, Loss:{}, Accuracy:{}, Test:{}'.format(
                  epoch, loss.numpy(), acc.numpy(), testacc.numpy()))

    plt.figure(figsize=(10, 6))
#    plt.subplot(211)
    plt.plot(accdim, label='Accuracy')
    plt.plot(testaccdim, label='Test')
    plt.xlabel('Step')
    plt.ylabel('Acc')
    plt.grid(True)
    plt.legend(loc='upper left')
    plt.tight_layout()
#    plt.subplot(212)
#    plt.plot(lossdim, label='Loss')
#    plt.xlabel('Step')
#    plt.ylabel('Loss')
#    plt.grid(True)
#    plt.legend(loc='upper left')
#    plt.tight_layout()
    plt.show()

train(model)

#------------------------------------------------------------
paddle.summary(model, (100, 3, 32, 32))

#------------------------------------------------------------
paddle.save(model.state_dict(), './work/cifar10_lenet.pdparams')

#------------------------------------------------------------
predict = model(paddle.to_tensor(test_dataset, dtype='float32'))

#------------------------------------------------------------
test_target = paddle.fluid.layers.argmax(predict, axis=1).numpy()
printt(test_target)

origin_label = array(test_label)

#------------------------------------------------------------
errorid = where(test_target != origin_label)[0]
printt(errorid, len(errorid))

#------------------------------------------------------------
cifarname = {0:'airplane', 1:'automobile', 2:'bird', 3:'cat',
             4:'deer', 5:'dog', 6:'frog', 7:'horse', 8:'ship', 9:'truck'}

PIC_ROW = 4
PIC_COL = 6
plt.figure(figsize=(12,8))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i + j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        eid = errorid[id]
        tid = test_target[eid]
        plt.imshow(test_dataset[eid].swapaxes(1,2).T, cmap=plt.cm.gray)
        plt.title(cifarname[tid], fontsize=12, color='blue')

#------------------------------------------------------------
#        END OF FILE : TESTLENET.PY
#============================================================
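For completeness, the parameters saved above with paddle.save can later be restored for inference; a minimal sketch, assuming the lenet class from the listing:

# Restore the saved LeNet parameters:
state = paddle.load('./work/cifar10_lenet.pdparams')
model = lenet()
model.set_state_dict(state)
model.eval()   # switches dropout off for prediction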

■ Related links:

  • 2021 Artificial Neural Network Assignment 4 requirements
  • The CIFAR-10 dataset
  • The Cifar10 dataset built into Paddle
  • Testing the CIFAR10 dataset with AlexNet in Paddle

