Contents

5.4 Handwritten Digit Recognition with a Residual Network

5.4.1 Building the Model

5.4.1.1 The Residual Unit

5.4.1.2 Overall Structure of the Residual Network

5.4.2 ResNet18 without Residual Connections

5.4.2.1 Model Training

5.4.2.2 Model Evaluation

5.4.3 ResNet18 with Residual Connections

5.4.3.1 Model Training

5.4.3.2 Model Evaluation

5.4.4 Comparison with the High-Level-API Implementation

Summary

References


5.4 Handwritten Digit Recognition with a Residual Network

A residual network (Residual Network, ResNet) adds shortcut (identity) connections around the nonlinear layers of a neural network to alleviate the vanishing-gradient problem, which makes deep networks considerably easier to train.

The basic building block of a residual network is the residual unit.

5.4.1 Building the Model

We first build the residual unit used by ResNet18 and then assemble the complete network from it.

5.4.1.1 The Residual Unit

Here we implement an operator ResBlock that builds a residual unit. It takes a use_residual argument, which lets later experiments control whether the residual connection is used.

The input and output of the nonlinear layers wrapped by a residual unit must have the same shape. If a convolutional layer changes the number of channels, its output cannot be added directly to its input. To resolve this, a 1×1 convolution can be used to project the input feature map to the same number of channels as the output of the stacked convolutions.

1×1 convolution: identical to a standard convolution except that the kernel size is 1×1, so it ignores spatial relationships within the input and focuses on interactions across channels. A 1×1 convolution can be used to:

  • Exchange and integrate information across channels. Since the input and output of a convolution are three-dimensional (width, height, channels), a 1×1 convolution is effectively a per-pixel linear combination across channels, integrating information from different channels;
  • Increase or reduce the number of channels, cutting the parameter count. The output of a 1×1 convolution preserves the spatial layout of the input, so adjusting the number of output channels performs dimensionality expansion or reduction;
  • Add nonlinearity. Following a 1×1 convolution with a nonlinear activation substantially increases the model's nonlinearity without changing the spatial size of the feature map.
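To make the shape constraint concrete, here is a minimal standalone sketch (shapes chosen arbitrarily for illustration) in which a strided 1×1 convolution projects the shortcut branch so it can be added to the output of a strided 3×3 convolution:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 8, 8)  # input feature map: 64 channels, 8×8
# Main branch: 3×3 convolution that doubles the channels and halves the spatial size
main = nn.Conv2d(64, 128, 3, padding=1, stride=2, bias=False)
# Shortcut branch: 1×1 projection with the same stride and output channels
proj = nn.Conv2d(64, 128, 1, stride=2, bias=False)

y = main(x)      # shape: [1, 128, 4, 4]
s = proj(x)      # shape: [1, 128, 4, 4] — now addable to y
out = y + s
print(out.shape)  # torch.Size([1, 128, 4, 4])
```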
class ResBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, use_residual=True):
        super(ResBlock, self).__init__()
        self.stride = stride
        self.use_residual = use_residual
        # First 3×3 convolution; output channels and stride are configurable
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, padding=1, stride=self.stride, bias=False)
        # Second 3×3 convolution with stride 1; keeps the feature-map shape unchanged
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False)
        # If the output of conv2 and the block input have different shapes, use_1x1conv = True:
        # a 1×1 convolution is then applied to the input so its shape matches the output of conv2
        if in_channels != out_channels or stride != 1:
            self.use_1x1conv = True
        else:
            self.use_1x1conv = False
        # When the wrapped layers change the channel count, a 1×1 convolution
        # adjusts the input channels before the addition
        if self.use_1x1conv:
            self.shortcut = nn.Conv2d(in_channels, out_channels, 1, stride=self.stride, bias=False)
        # Each convolution is followed by batch normalization (covered in detail in Section 7.5.1)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        if self.use_1x1conv:
            self.bn3 = nn.BatchNorm2d(out_channels)

    def forward(self, inputs):
        y = F.relu(self.bn1(self.conv1(inputs)))
        y = self.bn2(self.conv2(y))
        if self.use_residual:
            if self.use_1x1conv:
                # Apply the 1×1 convolution to inputs so its shape matches the output y of conv2
                shortcut = self.shortcut(inputs)
                shortcut = self.bn3(shortcut)
            else:
                # Otherwise add inputs to the output y of conv2 directly
                shortcut = inputs
            y = torch.add(shortcut, y)
        out = F.relu(y)
        return out

5.4.1.2 Overall Structure of the Residual Network

A residual network is a very deep network formed by chaining many residual units. The structure of ResNet18 is shown in the figure below.

For ease of understanding, ResNet18 can be divided into six modules:

  • Module 1: a 7×7 convolution with stride 2 and 64 output channels, whose output passes through batch normalization and a ReLU activation, followed by a 3×3 max-pooling layer with stride 2;
  • Module 2: two residual units; the output has 64 channels and the feature-map size is unchanged;
  • Module 3: two residual units; the output has 128 channels and the feature-map size is halved;
  • Module 4: two residual units; the output has 256 channels and the feature-map size is halved;
  • Module 5: two residual units; the output has 512 channels and the feature-map size is halved;
  • Module 6: a global average-pooling layer that reduces the feature map to 1×1, followed by a fully connected layer that produces the final output.
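The spatial sizes quoted above can be verified with the standard convolution output-size formula, ⌊(n + 2p − k)/s⌋ + 1. Starting from a 32×32 input (the size used in the experiments below):

```python
def conv_out(n, k, s, p):
    # Output size of a convolution/pooling layer: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

n = conv_out(32, 7, 2, 3)  # module 1, 7×7 conv, stride 2     -> 16
n = conv_out(n, 3, 2, 1)   # module 1, 3×3 max pool, stride 2 -> 8
for k, s in [(3, 1), (3, 2), (3, 2), (3, 2)]:  # first conv of modules 2-5
    n = conv_out(n, k, s, 1)
print(n)  # 1 — the 1×1 maps that enter the global average pooling
```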

The ResNet18 model is implemented as follows.

Define module 1:

def make_first_module(in_channels):
    # Module 1: 7×7 convolution, batch normalization, pooling
    m1 = nn.Sequential(nn.Conv2d(in_channels, 64, 7, stride=2, padding=3),
                       nn.BatchNorm2d(64), nn.ReLU(),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    return m1

Define modules 2 through 5:

def resnet_module(input_channels, out_channels, num_res_blocks, stride=1, use_residual=True):
    blk = []
    # Generate num_res_blocks residual units
    for i in range(num_res_blocks):
        if i == 0:  # the first residual unit of the module
            blk.append(ResBlock(input_channels, out_channels,
                                stride=stride, use_residual=use_residual))
        else:       # the remaining residual units of the module
            blk.append(ResBlock(out_channels, out_channels, use_residual=use_residual))
    return blk

Wrap modules 2 through 5:

def make_modules(use_residual):
    # Module 2: two residual units, 64 -> 64 channels, stride 1, feature-map size unchanged
    m2 = nn.Sequential(*resnet_module(64, 64, 2, stride=1, use_residual=use_residual))
    # Module 3: two residual units, 64 -> 128 channels, stride 2, feature-map size halved
    m3 = nn.Sequential(*resnet_module(64, 128, 2, stride=2, use_residual=use_residual))
    # Module 4: two residual units, 128 -> 256 channels, stride 2, feature-map size halved
    m4 = nn.Sequential(*resnet_module(128, 256, 2, stride=2, use_residual=use_residual))
    # Module 5: two residual units, 256 -> 512 channels, stride 2, feature-map size halved
    m5 = nn.Sequential(*resnet_module(256, 512, 2, stride=2, use_residual=use_residual))
    return m2, m3, m4, m5

Define the complete network:

class Model_ResNet18(nn.Module):
    def __init__(self, in_channels=3, num_classes=10, use_residual=True):
        super(Model_ResNet18, self).__init__()
        m1 = make_first_module(in_channels)
        m2, m3, m4, m5 = make_modules(use_residual)
        # Wrap modules 1 through 6
        self.net = nn.Sequential(m1, m2, m3, m4, m5,
                                 # Module 6: pooling layer and fully connected layer
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_classes))

    def forward(self, x):
        return self.net(x)

Use torchsummary.summary to count the model's parameters. (Note that the output shapes below correspond to a 32×32 input.)

from torchsummary import summary

model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=True)
params_info = summary(model, input_size=(1, 32, 32))
print(params_info)

Output:

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 64, 16, 16]           3,200
       BatchNorm2d-2           [-1, 64, 16, 16]             128
              ReLU-3           [-1, 64, 16, 16]               0
         MaxPool2d-4             [-1, 64, 8, 8]               0
            Conv2d-5             [-1, 64, 8, 8]          36,864
       BatchNorm2d-6             [-1, 64, 8, 8]             128
            Conv2d-7             [-1, 64, 8, 8]          36,864
       BatchNorm2d-8             [-1, 64, 8, 8]             128
          ResBlock-9             [-1, 64, 8, 8]               0
           Conv2d-10             [-1, 64, 8, 8]          36,864
      BatchNorm2d-11             [-1, 64, 8, 8]             128
           Conv2d-12             [-1, 64, 8, 8]          36,864
      BatchNorm2d-13             [-1, 64, 8, 8]             128
         ResBlock-14             [-1, 64, 8, 8]               0
           Conv2d-15            [-1, 128, 4, 4]          73,728
      BatchNorm2d-16            [-1, 128, 4, 4]             256
           Conv2d-17            [-1, 128, 4, 4]         147,456
      BatchNorm2d-18            [-1, 128, 4, 4]             256
           Conv2d-19            [-1, 128, 4, 4]           8,192
      BatchNorm2d-20            [-1, 128, 4, 4]             256
         ResBlock-21            [-1, 128, 4, 4]               0
           Conv2d-22            [-1, 128, 4, 4]         147,456
      BatchNorm2d-23            [-1, 128, 4, 4]             256
           Conv2d-24            [-1, 128, 4, 4]         147,456
      BatchNorm2d-25            [-1, 128, 4, 4]             256
         ResBlock-26            [-1, 128, 4, 4]               0
           Conv2d-27            [-1, 256, 2, 2]         294,912
      BatchNorm2d-28            [-1, 256, 2, 2]             512
           Conv2d-29            [-1, 256, 2, 2]         589,824
      BatchNorm2d-30            [-1, 256, 2, 2]             512
           Conv2d-31            [-1, 256, 2, 2]          32,768
      BatchNorm2d-32            [-1, 256, 2, 2]             512
         ResBlock-33            [-1, 256, 2, 2]               0
           Conv2d-34            [-1, 256, 2, 2]         589,824
      BatchNorm2d-35            [-1, 256, 2, 2]             512
           Conv2d-36            [-1, 256, 2, 2]         589,824
      BatchNorm2d-37            [-1, 256, 2, 2]             512
         ResBlock-38            [-1, 256, 2, 2]               0
           Conv2d-39            [-1, 512, 1, 1]       1,179,648
      BatchNorm2d-40            [-1, 512, 1, 1]           1,024
           Conv2d-41            [-1, 512, 1, 1]       2,359,296
      BatchNorm2d-42            [-1, 512, 1, 1]           1,024
           Conv2d-43            [-1, 512, 1, 1]         131,072
      BatchNorm2d-44            [-1, 512, 1, 1]           1,024
         ResBlock-45            [-1, 512, 1, 1]               0
           Conv2d-46            [-1, 512, 1, 1]       2,359,296
      BatchNorm2d-47            [-1, 512, 1, 1]           1,024
           Conv2d-48            [-1, 512, 1, 1]       2,359,296
      BatchNorm2d-49            [-1, 512, 1, 1]           1,024
         ResBlock-50            [-1, 512, 1, 1]               0
AdaptiveAvgPool2d-51            [-1, 512, 1, 1]               0
          Flatten-52                  [-1, 512]               0
           Linear-53                   [-1, 10]           5,130
================================================================
Total params: 11,175,434
Trainable params: 11,175,434
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 1.05
Params size (MB): 42.63
Estimated Total Size (MB): 43.69
----------------------------------------------------------------
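The Param # column can be reproduced by hand: a convolution holds out_channels × in_channels × k² weights, plus one bias per output channel when bias is enabled, and a BatchNorm2d layer holds one scale and one shift per channel. A small check against the first rows (the helper function is our own):

```python
def conv2d_params(in_ch, out_ch, k, bias=True):
    # Weight tensor has out_ch * in_ch * k * k entries, plus one bias per output channel
    return out_ch * in_ch * k * k + (out_ch if bias else 0)

print(conv2d_params(1, 64, 7))               # 3200  — Conv2d-1 (the 7×7 stem keeps its bias)
print(conv2d_params(64, 64, 3, bias=False))  # 36864 — Conv2d-5 (ResBlock convolutions use bias=False)
print(2 * 64)                                # 128   — BatchNorm2d-2: one scale and one shift per channel
```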

Use torchstat to count the model's computation cost.

from torchstat import stat
model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=True)
stat(model, (1, 32, 32))

Output:

[MAdd]: AdaptiveAvgPool2d is not supported!
[Flops]: AdaptiveAvgPool2d is not supported!
[Memory]: AdaptiveAvgPool2d is not supported!
[MAdd]: Flatten is not supported!
[Flops]: Flatten is not supported!
[Memory]: Flatten is not supported!
    module name  input shape output shape      params memory(MB)          MAdd         Flops  MemRead(B)  MemWrite(B) duration[%]   MemR+W(B)
0               net.0.0    1  32  32   64  16  16      3200.0       0.06   1,605,632.0     819,200.0     16896.0      65536.0      11.11%     82432.0
1               net.0.1   64  16  16   64  16  16       128.0       0.06      65,536.0      32,768.0     66048.0      65536.0      11.11%    131584.0
2               net.0.2   64  16  16   64  16  16         0.0       0.06      16,384.0      16,384.0     65536.0      65536.0       0.00%    131072.0
3               net.0.3   64  16  16   64   8   8         0.0       0.02      32,768.0      16,384.0     65536.0      16384.0       0.00%     81920.0
4         net.1.0.conv1   64   8   8   64   8   8     36864.0       0.02   4,714,496.0   2,359,296.0    163840.0      16384.0      11.11%    180224.0
5         net.1.0.conv2   64   8   8   64   8   8     36864.0       0.02   4,714,496.0   2,359,296.0    163840.0      16384.0       0.00%    180224.0
6           net.1.0.bn1   64   8   8   64   8   8       128.0       0.02      16,384.0       8,192.0     16896.0      16384.0       0.00%     33280.0
7           net.1.0.bn2   64   8   8   64   8   8       128.0       0.02      16,384.0       8,192.0     16896.0      16384.0       0.00%     33280.0
8         net.1.1.conv1   64   8   8   64   8   8     36864.0       0.02   4,714,496.0   2,359,296.0    163840.0      16384.0      11.11%    180224.0
9         net.1.1.conv2   64   8   8   64   8   8     36864.0       0.02   4,714,496.0   2,359,296.0    163840.0      16384.0       0.00%    180224.0
10          net.1.1.bn1   64   8   8   64   8   8       128.0       0.02      16,384.0       8,192.0     16896.0      16384.0       0.00%     33280.0
11          net.1.1.bn2   64   8   8   64   8   8       128.0       0.02      16,384.0       8,192.0     16896.0      16384.0       0.00%     33280.0
12        net.2.0.conv1   64   8   8  128   4   4     73728.0       0.01   2,357,248.0   1,179,648.0    311296.0       8192.0       0.00%    319488.0
13        net.2.0.conv2  128   4   4  128   4   4    147456.0       0.01   4,716,544.0   2,359,296.0    598016.0       8192.0       0.00%    606208.0
14     net.2.0.shortcut   64   8   8  128   4   4      8192.0       0.01     260,096.0     131,072.0     49152.0       8192.0       0.00%     57344.0
15          net.2.0.bn1  128   4   4  128   4   4       256.0       0.01       8,192.0       4,096.0      9216.0       8192.0      11.11%     17408.0
16          net.2.0.bn2  128   4   4  128   4   4       256.0       0.01       8,192.0       4,096.0      9216.0       8192.0       0.00%     17408.0
17          net.2.0.bn3  128   4   4  128   4   4       256.0       0.01       8,192.0       4,096.0      9216.0       8192.0       0.00%     17408.0
18        net.2.1.conv1  128   4   4  128   4   4    147456.0       0.01   4,716,544.0   2,359,296.0    598016.0       8192.0      11.11%    606208.0
19        net.2.1.conv2  128   4   4  128   4   4    147456.0       0.01   4,716,544.0   2,359,296.0    598016.0       8192.0       0.00%    606208.0
20          net.2.1.bn1  128   4   4  128   4   4       256.0       0.01       8,192.0       4,096.0      9216.0       8192.0       0.00%     17408.0
21          net.2.1.bn2  128   4   4  128   4   4       256.0       0.01       8,192.0       4,096.0      9216.0       8192.0       0.00%     17408.0
22        net.3.0.conv1  128   4   4  256   2   2    294912.0       0.00   2,358,272.0   1,179,648.0   1187840.0       4096.0       0.00%   1191936.0
23        net.3.0.conv2  256   2   2  256   2   2    589824.0       0.00   4,717,568.0   2,359,296.0   2363392.0       4096.0       0.00%   2367488.0
24     net.3.0.shortcut  128   4   4  256   2   2     32768.0       0.00     261,120.0     131,072.0    139264.0       4096.0       0.00%    143360.0
25          net.3.0.bn1  256   2   2  256   2   2       512.0       0.00       4,096.0       2,048.0      6144.0       4096.0       0.00%     10240.0
26          net.3.0.bn2  256   2   2  256   2   2       512.0       0.00       4,096.0       2,048.0      6144.0       4096.0       0.00%     10240.0
27          net.3.0.bn3  256   2   2  256   2   2       512.0       0.00       4,096.0       2,048.0      6144.0       4096.0       0.00%     10240.0
28        net.3.1.conv1  256   2   2  256   2   2    589824.0       0.00   4,717,568.0   2,359,296.0   2363392.0       4096.0       0.00%   2367488.0
29        net.3.1.conv2  256   2   2  256   2   2    589824.0       0.00   4,717,568.0   2,359,296.0   2363392.0       4096.0      11.11%   2367488.0
30          net.3.1.bn1  256   2   2  256   2   2       512.0       0.00       4,096.0       2,048.0      6144.0       4096.0       0.00%     10240.0
31          net.3.1.bn2  256   2   2  256   2   2       512.0       0.00       4,096.0       2,048.0      6144.0       4096.0       0.00%     10240.0
32        net.4.0.conv1  256   2   2  512   1   1   1179648.0       0.00   2,358,784.0   1,179,648.0   4722688.0       2048.0      11.11%   4724736.0
33        net.4.0.conv2  512   1   1  512   1   1   2359296.0       0.00   4,718,080.0   2,359,296.0   9439232.0       2048.0       0.00%   9441280.0
34     net.4.0.shortcut  256   2   2  512   1   1    131072.0       0.00     261,632.0     131,072.0    528384.0       2048.0       0.00%    530432.0
35          net.4.0.bn1  512   1   1  512   1   1      1024.0       0.00       2,048.0       1,024.0      6144.0       2048.0       0.00%      8192.0
36          net.4.0.bn2  512   1   1  512   1   1      1024.0       0.00       2,048.0       1,024.0      6144.0       2048.0       0.00%      8192.0
37          net.4.0.bn3  512   1   1  512   1   1      1024.0       0.00       2,048.0       1,024.0      6144.0       2048.0       0.00%      8192.0
38        net.4.1.conv1  512   1   1  512   1   1   2359296.0       0.00   4,718,080.0   2,359,296.0   9439232.0       2048.0       0.00%   9441280.0
39        net.4.1.conv2  512   1   1  512   1   1   2359296.0       0.00   4,718,080.0   2,359,296.0   9439232.0       2048.0       0.00%   9441280.0
40          net.4.1.bn1  512   1   1  512   1   1      1024.0       0.00       2,048.0       1,024.0      6144.0       2048.0       0.00%      8192.0
41          net.4.1.bn2  512   1   1  512   1   1      1024.0       0.00       2,048.0       1,024.0      6144.0       2048.0       0.00%      8192.0
42                net.5  512   1   1  512   1   1         0.0       0.00           0.0           0.0         0.0          0.0      11.11%         0.0
43                net.6  512   1   1          512         0.0       0.00           0.0           0.0         0.0          0.0       0.00%         0.0
44                net.7          512           10      5130.0       0.00      10,230.0       5,120.0     22568.0         40.0       0.00%     22608.0
total                                              11175434.0       0.47  71,039,478.0  35,561,472.0     22568.0         40.0     100.00%  45695056.0
=====================================================================================================================================================
Total params: 11,175,434
-----------------------------------------------------------------------------------------------------------------------------------------------------
Total memory: 0.47MB
Total MAdd: 71.04MMAdd
Total Flops: 35.56MFlops
Total MemR+W: 43.58MB
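The MAdd and Flops columns for the convolutions are consistent with a common counting convention: each output element costs k²·C_in multiplies plus as many adds (MAdd), and torchstat appears to count k²·C_in fused operations (+1 for a bias) as Flops. Checking against the net.0.0 row above (helper names are our own):

```python
def conv_madd(out_h, out_w, out_ch, in_ch, k):
    # One output element costs k*k*in_ch multiplies and as many adds
    return out_h * out_w * out_ch * (2 * k * k * in_ch)

def conv_flops(out_h, out_w, out_ch, in_ch, k, bias=True):
    # k*k*in_ch fused multiply-accumulates (+1 for the bias) per output element
    return out_h * out_w * out_ch * (k * k * in_ch + (1 if bias else 0))

print(conv_madd(16, 16, 64, 1, 7))   # 1605632 — matches the MAdd of net.0.0 above
print(conv_flops(16, 16, 64, 1, 7))  # 819200  — matches the Flops of net.0.0 above
```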

To verify that residual connections help the training of deep convolutional networks, we first run the handwritten-digit experiment with ResNet18 with use_residual set to False, then add the residual connections (use_residual set to True) and compare the results.

5.4.2 ResNet18 without Residual Connections

To verify the effect of residual connections, we first experiment with a ResNet18 that has none.

5.4.2.1 Model Training

Train the model on the training and validation sets for 5 epochs, saving the model with the highest validation accuracy as the best model. The code is as follows:

torch.manual_seed(100)
# Learning rate
lr = 0.005
# Batch size
batch_size = 64
# Load the data
train_loader = data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dev_loader = data.DataLoader(dev_dataset, batch_size=batch_size)
test_loader = data.DataLoader(test_dataset, batch_size=batch_size)
# Define the network: a deep network without residual connections
model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=False)
# Define the optimizer
optimizer = opt.SGD(lr=lr, params=model.parameters())
# Define the loss function
loss_fn = F.cross_entropy
# Define the evaluation metric
metric = Accuracy(is_logist=True)
# Instantiate RunnerV3
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Start training
log_steps = 15
eval_steps = 15
runner.train(train_loader, dev_loader, num_epochs=5, log_steps=log_steps,
             eval_steps=eval_steps, save_path="best_model.pdparams")
# Plot the training- and validation-loss curves
plot(runner, 'cnn-loss2.pdf')

The plot function used above:

# 可视化
def plot(runner, fig_name):plt.figure(figsize=(10, 5))plt.subplot(1, 2, 1)train_items = runner.train_step_losses[::30]train_steps = [x[0] for x in train_items]train_losses = [x[1] for x in train_items]plt.plot(train_steps, train_losses, color='#8E004D', label="Train loss")if runner.dev_losses[0][0] != -1:dev_steps = [x[0] for x in runner.dev_losses]dev_losses = [x[1] for x in runner.dev_losses]plt.plot(dev_steps, dev_losses, color='#E20079', linestyle='--', label="Dev loss")# 绘制坐标轴和图例plt.ylabel("loss", fontsize='x-large')plt.xlabel("step", fontsize='x-large')plt.legend(loc='upper right', fontsize='x-large')plt.subplot(1, 2, 2)# 绘制评价准确率变化曲线if runner.dev_losses[0][0] != -1:plt.plot(dev_steps, runner.dev_scores,color='#E20079', linestyle="--", label="Dev accuracy")else:plt.plot(list(range(len(runner.dev_scores))), runner.dev_scores,color='#E20079', linestyle="--", label="Dev accuracy")# 绘制坐标轴和图例plt.ylabel("score", fontsize='x-large')plt.xlabel("step", fontsize='x-large')plt.legend(loc='lower right', fontsize='x-large')plt.savefig(fig_name)plt.show()

Accuracy:

class Accuracy():
    def __init__(self, is_logist=True):
        # Number of correctly predicted samples
        self.num_correct = 0
        # Total number of samples
        self.num_count = 0
        self.is_logist = is_logist

    def update(self, outputs, labels):
        # shape[1] == 1 means binary classification; shape[1] > 1 means multi-class
        if outputs.shape[1] == 1:  # binary
            outputs = torch.squeeze(outputs, dim=-1)
            if self.is_logist:
                # For logits, predict class 1 when the logit is >= 0
                preds = (outputs >= 0).float()
            else:
                # For probabilities, predict class 1 when the probability exceeds 0.5
                preds = (outputs >= 0.5).float()
        else:
            # Multi-class: the predicted class is the index of the largest output
            preds = torch.argmax(outputs, dim=1)
        # Count the correct predictions in this batch
        labels = torch.squeeze(labels, dim=-1)
        batch_correct = (preds == labels).float().sum().item()
        batch_count = len(labels)
        # Update num_correct and num_count
        self.num_correct += batch_correct
        self.num_count += batch_count

    def accumulate(self):
        # Compute the overall metric from the accumulated counts
        if self.num_count == 0:
            return 0
        return self.num_correct / self.num_count

    def reset(self):
        # Reset the counters
        self.num_correct = 0
        self.num_count = 0

    def name(self):
        return "Accuracy"
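For the multi-class case, update reduces to an argmax comparison against the labels; a minimal standalone sketch with toy values:

```python
import torch

logits = torch.tensor([[2.0, 0.1, -1.0],
                       [0.2, 1.5, 0.3],
                       [0.1, 0.5, 3.0]])
labels = torch.tensor([0, 1, 1])
preds = torch.argmax(logits, dim=1)            # predicted classes: [0, 1, 2]
acc = (preds == labels).float().mean().item()
print(acc)  # two of the three toy predictions are correct, so ~0.667
```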

RunnerV3:

class RunnerV3(object):
    def __init__(self, model, optimizer, loss_fn, metric, **kwargs):
        self.model = model
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.metric = metric  # only used to compute the evaluation metric
        # Evaluation metric over the course of training
        self.dev_scores = []
        # Loss over the course of training
        self.train_epoch_losses = []  # one record per epoch
        self.train_step_losses = []   # one record per step
        self.dev_losses = []
        # Best metric seen so far
        self.best_score = 0

    def train(self, train_loader, dev_loader=None, **kwargs):
        # Switch the model to training mode
        self.model.train()
        # Number of training epochs; defaults to 0 if not given
        num_epochs = kwargs.get("num_epochs", 0)
        # Logging frequency; defaults to 100 if not given
        log_steps = kwargs.get("log_steps", 100)
        # Evaluation frequency
        eval_steps = kwargs.get("eval_steps", 0)
        # Model save path; defaults to "best_model.pdparams" if not given
        save_path = kwargs.get("save_path", "best_model.pdparams")
        custom_print_log = kwargs.get("custom_print_log", None)
        # Total number of training steps
        num_training_steps = num_epochs * len(train_loader)
        if eval_steps:
            if self.metric is None:
                raise RuntimeError('Error: Metric can not be None!')
            if dev_loader is None:
                raise RuntimeError('Error: dev_loader can not be None!')
        # Number of steps run so far
        global_step = 0
        # Train for num_epochs epochs
        for epoch in range(num_epochs):
            # Accumulated training loss
            total_loss = 0
            for step, data in enumerate(train_loader):
                X, y = data
                # Model prediction
                logits = self.model(X)
                y = y.to(dtype=torch.int64)  # cross_entropy expects integer class labels
                loss = self.loss_fn(logits, y)  # mean reduction by default
                total_loss += loss
                # Record the loss of every step
                self.train_step_losses.append((global_step, loss.item()))
                if log_steps and global_step % log_steps == 0:
                    print(f"[Train] epoch: {epoch}/{num_epochs}, step: {global_step}/{num_training_steps}, loss: {loss.item():.5f}")
                # Backpropagate to compute the gradient of every parameter
                loss.backward()
                if custom_print_log:
                    custom_print_log(self)
                # Mini-batch gradient descent update
                self.optimizer.step()
                # Zero the gradients
                self.optimizer.zero_grad()
                # Decide whether to evaluate
                if eval_steps > 0 and global_step > 0 and \
                        (global_step % eval_steps == 0 or global_step == (num_training_steps - 1)):
                    dev_score, dev_loss = self.evaluate(dev_loader, global_step=global_step)
                    print(f"[Evaluate]  dev score: {dev_score:.5f}, dev loss: {dev_loss:.5f}")
                    # Switch the model back to training mode
                    self.model.train()
                    # If the current metric is the best so far, save the model
                    if dev_score > self.best_score:
                        self.save_model(save_path)
                        print(f"[Evaluate] best accuracy performence has been updated: {self.best_score:.5f} --> {dev_score:.5f}")
                        self.best_score = dev_score
                global_step += 1
            # Accumulated training loss of the current epoch
            trn_loss = (total_loss / len(train_loader)).item()
            # Record the epoch-level training loss
            self.train_epoch_losses.append(trn_loss)
        print("[Train] Training done!")

    # Evaluation: torch.no_grad() disables gradient computation and storage
    @torch.no_grad()
    def evaluate(self, dev_loader, **kwargs):
        assert self.metric is not None
        # Switch the model to evaluation mode
        self.model.eval()
        global_step = kwargs.get("global_step", -1)
        # Accumulated validation loss
        total_loss = 0
        # Reset the metric
        self.metric.reset()
        # Iterate over the validation batches
        for batch_id, data in enumerate(dev_loader):
            X, y = data
            # Model output
            logits = self.model(X)
            # Loss
            y = y.to(dtype=torch.int64)
            loss = self.loss_fn(logits, y).item()
            # Accumulate the loss
            total_loss += loss
            # Accumulate the metric
            self.metric.update(logits, y)
        dev_loss = (total_loss / len(dev_loader))
        dev_score = self.metric.accumulate()
        # Record the validation loss
        if global_step != -1:
            self.dev_losses.append((global_step, dev_loss))
            self.dev_scores.append(dev_score)
        return dev_score, dev_loss

    # Prediction: torch.no_grad() disables gradient computation and storage
    @torch.no_grad()
    def predict(self, x, **kwargs):
        # Switch the model to evaluation mode
        self.model.eval()
        # Forward pass to obtain the predictions
        logits = self.model(x)
        return logits

    def save_model(self, save_path):
        torch.save(self.model.state_dict(), save_path)

    def load_model(self, model_path):
        state_dict = torch.load(model_path)
        self.model.load_state_dict(state_dict)

Output:

[Train] epoch: 0/5, step: 0/80, loss: 2.31209
[Train] epoch: 0/5, step: 15/80, loss: 0.86413
[Evaluate]  dev score: 0.11000, dev loss: 2.30072
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.11000
[Train] epoch: 1/5, step: 30/80, loss: 0.45704
[Evaluate]  dev score: 0.11000, dev loss: 2.29350
[Train] epoch: 2/5, step: 45/80, loss: 0.18045
[Evaluate]  dev score: 0.72000, dev loss: 1.29890
[Evaluate] best accuracy performence has been updated: 0.11000 --> 0.72000
[Train] epoch: 3/5, step: 60/80, loss: 0.08861
[Evaluate]  dev score: 0.91000, dev loss: 0.41233
[Evaluate] best accuracy performence has been updated: 0.72000 --> 0.91000
[Train] epoch: 4/5, step: 75/80, loss: 0.07691
[Evaluate]  dev score: 0.93500, dev loss: 0.29393
[Evaluate] best accuracy performence has been updated: 0.91000 --> 0.93500
[Evaluate]  dev score: 0.92500, dev loss: 0.24343
[Train] Training done!


5.4.2.2 Model Evaluation

Use the test set to evaluate the best model saved during training, and observe the model's accuracy and loss on the test set. The code is as follows:

# Load the best model
runner.load_model('best_model.pdparams')
# Evaluate the model
score, loss = runner.evaluate(test_loader)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))
[Test] accuracy/loss: 0.9620/0.2346

The output shows that, compared with the LeNet-5 evaluation results, making the network deeper did not improve training performance; it actually got worse.

5.4.3 ResNet18 with Residual Connections

5.4.3.1 Model Training

Repeat the experiment above with a ResNet18 that uses residual connections (use_residual=True). The code is as follows:

# Learning rate
lr = 0.01
# Batch size
batch_size = 64
# Load the data
train_loader = data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dev_loader = data.DataLoader(dev_dataset, batch_size=batch_size)
test_loader = data.DataLoader(test_dataset, batch_size=batch_size)
# Define the network; use_residual=True enables the residual connections
model = Model_ResNet18(in_channels=1, num_classes=10, use_residual=True)
# Define the optimizer
optimizer = opt.SGD(lr=lr, params=model.parameters())
# Instantiate RunnerV3
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Start training
log_steps = 15
eval_steps = 15
runner.train(train_loader, dev_loader, num_epochs=5, log_steps=log_steps,
             eval_steps=eval_steps, save_path="best_model.pdparams")
# Plot the training- and validation-loss curves
plot(runner, 'cnn-loss3.pdf')
[Train] epoch: 0/5, step: 0/160, loss: 2.46978
[Train] epoch: 0/5, step: 15/160, loss: 0.52145
[Evaluate]  dev score: 0.19000, dev loss: 2.29718
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.19000
[Train] epoch: 0/5, step: 30/160, loss: 0.22503
[Evaluate]  dev score: 0.39500, dev loss: 1.75715
[Evaluate] best accuracy performence has been updated: 0.19000 --> 0.39500
[Train] epoch: 1/5, step: 45/160, loss: 0.13266
[Evaluate]  dev score: 0.90000, dev loss: 0.37835
[Evaluate] best accuracy performence has been updated: 0.39500 --> 0.90000
[Train] epoch: 1/5, step: 60/160, loss: 0.07993
[Evaluate]  dev score: 0.90500, dev loss: 0.23769
[Evaluate] best accuracy performence has been updated: 0.90000 --> 0.90500
[Train] epoch: 2/5, step: 75/160, loss: 0.03920
[Evaluate]  dev score: 0.94500, dev loss: 0.13020
[Evaluate] best accuracy performence has been updated: 0.90500 --> 0.94500
[Train] epoch: 2/5, step: 90/160, loss: 0.04129
[Evaluate]  dev score: 0.95500, dev loss: 0.11184
[Evaluate] best accuracy performence has been updated: 0.94500 --> 0.95500
[Train] epoch: 3/5, step: 105/160, loss: 0.01144
[Evaluate]  dev score: 0.95500, dev loss: 0.10348
[Train] epoch: 3/5, step: 120/160, loss: 0.00599
[Evaluate]  dev score: 0.96500, dev loss: 0.09905
[Evaluate] best accuracy performence has been updated: 0.95500 --> 0.96500
[Train] epoch: 4/5, step: 135/160, loss: 0.00453
[Evaluate]  dev score: 0.95500, dev loss: 0.09267
[Train] epoch: 4/5, step: 150/160, loss: 0.00763
[Evaluate]  dev score: 0.95500, dev loss: 0.08276
[Evaluate]  dev score: 0.95500, dev loss: 0.07131
[Train] Training done!

5.4.3.2 Model Evaluation

Use the test set to evaluate the best model saved during training, and observe the model's accuracy and loss on the test set.

# Load the best model
runner.load_model('best_model.pdparams')
# Evaluate the model
score, loss = runner.evaluate(test_loader)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))
[Test] accuracy/loss: 0.9650/0.1460

The output shows that, compared with the ResNet18 without residual connections, adding residual connections improves the model's performance noticeably.

5.4.4 Comparison with the High-Level-API Implementation

The PaddlePaddle high-level API is a further layer of encapsulation over the base Paddle API. It offers a more concise, easier-to-use interface and makes the framework easier to learn.

Among other things, the high-level API wraps:

  1. a Model class that can train a model in just a few lines of code;
  2. an image-preprocessing module containing dozens of data-processing functions, covering most common data-processing and data-augmentation methods;
  3. common computer-vision and natural-language-processing models, including but not limited to mobilenet, resnet, yolov3, cyclegan, bert, transformer, and seq2seq, together with pretrained weights, so these models can be used directly or serve as a starting point for further development.

The high-level API mainly lives under the paddle.vision and paddle.text namespaces.

For a classic image-classification network such as ResNet18, the high-level API ships a ready-made implementation, so there is no need to build it from scratch. Since this chapter's code is written in PyTorch, the experiment below uses torchvision's resnet18 in the same role.

Here we give the high-level-API resnet18 and our custom resnet18 identical weights and identical inputs, and check whether their outputs match.
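The key-remapping loop below works because a PyTorch state_dict is simply an ordered mapping from dotted module paths to tensors, so renaming the keys is enough to transplant weights between structurally equivalent models. A toy sketch (module layout chosen purely for illustration):

```python
import torch.nn as nn

# Source model: parameter keys are "0.weight" and "0.bias"
src = nn.Sequential(nn.Linear(4, 3))
# Target model with a different layout: keys are "1.weight" and "1.bias"
dst = nn.Sequential(nn.Identity(), nn.Linear(4, 3))

# Rename the keys, then load the renamed weights into the target model
remapped = {k.replace('0.', '1.', 1): v for k, v in src.state_dict().items()}
dst.load_state_dict(remapped)

print((dst[1].weight == src[0].weight).all().item())  # True
```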

import warnings
warnings.filterwarnings("ignore")
import numpy as np
import torch
from torchvision.models import resnet18

# resnet18 from the high-level API (torchvision); it defaults to 3 input channels and 1000 classes
hapi_model = resnet18(pretrained=True)
# Custom resnet18 model
model = Model_ResNet18(in_channels=3, num_classes=1000, use_residual=True)

# Get the reference network's weights
params = hapi_model.state_dict()
# Weights with the parameter names remapped to the custom model
new_params = {}
# Remap the parameter names
for key in params:
    if 'layer' in key:
        if 'downsample.0' in key:    # 1×1 shortcut convolution
            new_params['net.' + key[5:8] + '.shortcut' + key[-7:]] = params[key]
        elif 'downsample.1' in key:  # batch normalization on the shortcut branch
            new_params['net.' + key[5:8] + '.bn3' + key[21:]] = params[key]
        else:
            new_params['net.' + key[5:]] = params[key]
    elif 'conv1.weight' == key:
        new_params['net.0.0.weight'] = params[key]
    elif 'bn1' in key:
        new_params['net.0.1' + key[3:]] = params[key]
    elif 'fc' in key:
        new_params['net.7' + key[2:]] = params[key]
# torchvision's first convolution has no bias while ours does; zero it so the two match
new_params['net.0.0.bias'] = torch.zeros(64)

# Load the remapped weights into the custom model so the two models are identical
model.load_state_dict(new_params)

# Use np.random to create a random array as test data
inputs = np.random.randn(*[3, 3, 32, 32])
inputs = inputs.astype('float32')
x = torch.tensor(inputs)

# Evaluation mode so both models use their (identical) running statistics
model.eval()
hapi_model.eval()
output = model(x)
hapi_out = hapi_model(x)
# Difference between the two models' outputs
diff = output - hapi_out
# Largest absolute difference
max_diff = torch.max(torch.abs(diff))
print(max_diff)
tensor(0., grad_fn=<MaxBackward1>)

The high-level-API resnet18 and the custom resnet18 produce identical outputs, which shows that the two implementations are equivalent.
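The same consistency check can be reduced to a toy sketch (the modules here are illustrative stand-ins, not the models from this experiment): copy one module's weights into an identically shaped module, then confirm the two produce exactly the same output.

```python
import torch
import torch.nn as nn

# Two independently initialized modules with the same architecture.
a = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
b = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Parameter names already match here, so no key remapping is needed;
# in the experiment above the names differ and must be remapped first.
b.load_state_dict(a.state_dict())

x = torch.randn(3, 4)
# Identical weights and identical input give identical outputs.
assert torch.equal(a(x), b(x))
```

When the parameter names do not line up, a remapped state dict has to be built first, exactly as the mapping loop in the experiment does, before calling load_state_dict.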

Summary

This experiment implemented MNIST handwritten-digit recognition with ResNet18 and built up a working understanding of residual networks. It also verified experimentally that the high-level-API resnet18 and the custom resnet18 produce identical outputs, which helped clarify how the pieces of the network fit together.

References

NNDL 实验5(上), HBU_DAVID, 博客园 (cnblogs.com)

NNDL 实验5(下), HBU_DAVID, 博客园 (cnblogs.com)

Chapter 6, Convolutional Neural Networks, Dive into Deep Learning 2.0.0-beta1 documentation (d2l.ai)

Chapter 7, Modern Convolutional Neural Networks, Dive into Deep Learning 2.0.0-beta1 documentation (d2l.ai)
