Classic Convolutional Models (IV): GoogLeNet Inception-V3 Code Walkthrough
Inception-V3
The backbone is still built from Inception modules plus an auxiliary classifier; the Inception modules come in several variants.
BasicConv2d: the basic convolution module
BasicConv2d is a convolution followed by batch normalization and ReLU.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, **kwargs):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_channels, eps=0.001)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        return F.relu(x, inplace=True)
```
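As a quick sanity check (a hypothetical snippet, not part of the original source), a 1×1 `BasicConv2d` changes only the channel count:

```python
x = torch.randn(1, 192, 35, 35)             # one (35, 35, 192) feature map
conv = BasicConv2d(192, 64, kernel_size=1)  # 64 filters of size 1x1
print(conv(x).shape)                        # torch.Size([1, 64, 35, 35])
```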
The Inception modules
The torchvision implementation provides five basic Inception modules, InceptionA through InceptionE, plus the InceptionAux auxiliary classifier.
InceptionA
InceptionA keeps the input spatial size and outputs 224 + pool_features channels. Suppose the input is (35, 35, 192):
- Branch 1: `branch1x1` applies 64 1×1 convolutions, producing the first feature map (35, 35, 64).
- Branch 2: first `branch5x5_1` applies 48 1×1 convolutions, giving (35, 35, 48); then `branch5x5_2` applies 64 5×5 convolutions with padding 2, which preserves the spatial size, so the second feature map is (35, 35, 64).
- Branch 3: first `branch3x3dbl_1` applies 64 1×1 convolutions, giving (35, 35, 64); then `branch3x3dbl_2` applies 96 3×3 convolutions with padding 1, keeping the size, giving (35, 35, 96); finally `branch3x3dbl_3` applies another 96 3×3 convolutions with padding 1, leaving size and channels unchanged, so the third feature map is (35, 35, 96).
- Branch 4: first `avg_pool2d` with a 3×3 kernel, stride 1, and padding 1 keeps both size and channels, giving (35, 35, 192); then `branch_pool` applies pool_features 1×1 convolutions, so the fourth feature map is (35, 35, pool_features).
- Finally the four feature maps are concatenated along the channel dimension, yielding (35, 35, 64+64+96+pool_features).
```python
class InceptionA(nn.Module):
    def __init__(self, in_channels, pool_features):
        super(InceptionA, self).__init__()
        self.branch1x1 = BasicConv2d(in_channels, 64, kernel_size=1)

        self.branch5x5_1 = BasicConv2d(in_channels, 48, kernel_size=1)
        self.branch5x5_2 = BasicConv2d(48, 64, kernel_size=5, padding=2)

        self.branch3x3dbl_1 = BasicConv2d(in_channels, 64, kernel_size=1)
        self.branch3x3dbl_2 = BasicConv2d(64, 96, kernel_size=3, padding=1)
        self.branch3x3dbl_3 = BasicConv2d(96, 96, kernel_size=3, padding=1)

        self.branch_pool = BasicConv2d(in_channels, pool_features, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool]
        return torch.cat(outputs, 1)
```
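A small shape check (hypothetical usage) confirms the arithmetic above; with pool_features=32, as in Mixed_5b, the output has 64+64+96+32 = 256 channels:

```python
x = torch.randn(1, 192, 35, 35)
block = InceptionA(192, pool_features=32)
print(block(x).shape)   # torch.Size([1, 256, 35, 35])
```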
InceptionB
InceptionB halves the input spatial size and adds 480 channels. Suppose the input is (35, 35, 288):
- Branch 1: `branch3x3` applies 384 3×3 convolutions with stride 2; since (35 - 3 + 2*0)/2 + 1 = 17 (see the output-size helper after this list), the first feature map is (17, 17, 384).
- Branch 2: first `branch3x3dbl_1` applies 64 1×1 convolutions, keeping the size, giving (35, 35, 64); then `branch3x3dbl_2` applies 96 3×3 convolutions with padding 1, keeping the size, giving (35, 35, 96); then `branch3x3dbl_3` applies 96 3×3 convolutions with stride 2, and (35 - 3 + 2*0)/2 + 1 = 17, so the second feature map is (17, 17, 96).
- Branch 3: `max_pool2d` with a 3×3 kernel and stride 2 downsamples by 2 while keeping the channel count, so the third feature map is (17, 17, 288).
- Finally the three feature maps are concatenated, yielding (17 (= Hin/2), 17 (= Win/2), 384 + 96 + 288 (= Cin) = 768).
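The spatial arithmetic used throughout these walkthroughs is the standard convolution/pooling output-size formula; a small helper (hypothetical, for illustration only) makes it explicit:

```python
def conv_out_size(h, k, s=1, p=0):
    # floor((h - k + 2p) / s) + 1, the standard conv/pool output-size formula
    return (h - k + 2 * p) // s + 1

print(conv_out_size(35, 3, s=2))   # 17: the stride-2 branches in InceptionB
print(conv_out_size(35, 5, p=2))   # 35: the padded 5x5 branch in InceptionA
```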
```python
class InceptionB(nn.Module):
    def __init__(self, in_channels):
        super(InceptionB, self).__init__()
        self.branch3x3 = BasicConv2d(in_channels, 384, kernel_size=3, stride=2)

        self.branch3x3dbl_1 = BasicConv2d(in_channels, 64, kernel_size=1)
        self.branch3x3dbl_2 = BasicConv2d(64, 96, kernel_size=3, padding=1)
        self.branch3x3dbl_3 = BasicConv2d(96, 96, kernel_size=3, stride=2)

    def forward(self, x):
        branch3x3 = self.branch3x3(x)

        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)

        branch_pool = F.max_pool2d(x, kernel_size=3, stride=2)

        outputs = [branch3x3, branch3x3dbl, branch_pool]
        return torch.cat(outputs, 1)
```
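Again a hypothetical shape check, matching the Mixed_6a configuration:

```python
x = torch.randn(1, 288, 35, 35)
block = InceptionB(288)
print(block(x).shape)   # torch.Size([1, 768, 17, 17]); 384 + 96 + 288 = 768
```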
InceptionC
InceptionC keeps the input spatial size and outputs 768 channels. Suppose the input is (17, 17, 768):
- Branch 1: `branch1x1` applies 192 1×1 convolutions, producing the first feature map (17, 17, 192).
- Branch 2: first `branch7x7_1` applies c7 1×1 convolutions, giving (17, 17, c7); then `branch7x7_2` applies c7 1×7 convolutions with padding (0, 3), keeping the size, giving (17, 17, c7); then `branch7x7_3` applies 192 7×1 convolutions with padding (3, 0), keeping the size, so the second feature map is (17, 17, 192).
- Branch 3: first `branch7x7dbl_1` applies c7 1×1 convolutions, giving (17, 17, c7); then `branch7x7dbl_2` (c7 7×1 convolutions, padding (3, 0)), `branch7x7dbl_3` (c7 1×7 convolutions, padding (0, 3)), and `branch7x7dbl_4` (c7 7×1 convolutions, padding (3, 0)) each keep the size at (17, 17, c7); finally `branch7x7dbl_5` applies 192 1×7 convolutions with padding (0, 3), so the third feature map is (17, 17, 192).
- Branch 4: first `avg_pool2d` with a 3×3 kernel, stride 1, and padding 1 keeps both size and channels, giving (17, 17, 768); then `branch_pool` applies 192 1×1 convolutions, so the fourth feature map is (17, 17, 192).
- Finally the four feature maps are concatenated, yielding (17, 17, 192+192+192+192 = 768).
```python
class InceptionC(nn.Module):
    def __init__(self, in_channels, channels_7x7):
        super(InceptionC, self).__init__()
        self.branch1x1 = BasicConv2d(in_channels, 192, kernel_size=1)

        c7 = channels_7x7
        self.branch7x7_1 = BasicConv2d(in_channels, c7, kernel_size=1)
        self.branch7x7_2 = BasicConv2d(c7, c7, kernel_size=(1, 7), padding=(0, 3))
        self.branch7x7_3 = BasicConv2d(c7, 192, kernel_size=(7, 1), padding=(3, 0))

        self.branch7x7dbl_1 = BasicConv2d(in_channels, c7, kernel_size=1)
        self.branch7x7dbl_2 = BasicConv2d(c7, c7, kernel_size=(7, 1), padding=(3, 0))
        self.branch7x7dbl_3 = BasicConv2d(c7, c7, kernel_size=(1, 7), padding=(0, 3))
        self.branch7x7dbl_4 = BasicConv2d(c7, c7, kernel_size=(7, 1), padding=(3, 0))
        self.branch7x7dbl_5 = BasicConv2d(c7, 192, kernel_size=(1, 7), padding=(0, 3))

        self.branch_pool = BasicConv2d(in_channels, 192, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch7x7 = self.branch7x7_1(x)
        branch7x7 = self.branch7x7_2(branch7x7)
        branch7x7 = self.branch7x7_3(branch7x7)

        branch7x7dbl = self.branch7x7dbl_1(x)
        branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl)
        branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl)
        branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl)
        branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1x1, branch7x7, branch7x7dbl, branch_pool]
        return torch.cat(outputs, 1)
```
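InceptionC factorizes a 7×7 convolution into stacked 1×7 and 7×1 convolutions, which saves parameters while keeping the receptive field. A hypothetical shape check with the Mixed_6b configuration:

```python
x = torch.randn(1, 768, 17, 17)
block = InceptionC(768, channels_7x7=128)
print(block(x).shape)   # torch.Size([1, 768, 17, 17]); 4 * 192 = 768
```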
InceptionD
InceptionD halves the input spatial size and adds 512 channels. Suppose the input is (17, 17, 768):
- Branch 1: first `branch3x3_1` applies 192 1×1 convolutions, giving (17, 17, 192); then `branch3x3_2` applies 320 3×3 convolutions with stride 2, and (17 - 3 + 2*0)/2 + 1 = 8, so the first feature map is (8, 8, 320).
- Branch 2: first `branch7x7x3_1` applies 192 1×1 convolutions, keeping the size, giving (17, 17, 192); then `branch7x7x3_2` applies 192 1×7 convolutions with padding (0, 3) and `branch7x7x3_3` applies 192 7×1 convolutions with padding (3, 0), both keeping (17, 17, 192); finally `branch7x7x3_4` applies 192 3×3 convolutions with stride 2, so the second feature map is (8, 8, 192).
- Branch 3: `max_pool2d` with a 3×3 kernel and stride 2 downsamples by 2 while keeping the channel count, so the third feature map is (8, 8, 768).
- Finally the three feature maps are concatenated, yielding (8 (= Hin/2), 8 (= Win/2), 320 + 192 + 768 (= Cin) = 1280).
```python
class InceptionD(nn.Module):
    def __init__(self, in_channels):
        super(InceptionD, self).__init__()
        self.branch3x3_1 = BasicConv2d(in_channels, 192, kernel_size=1)
        self.branch3x3_2 = BasicConv2d(192, 320, kernel_size=3, stride=2)

        self.branch7x7x3_1 = BasicConv2d(in_channels, 192, kernel_size=1)
        self.branch7x7x3_2 = BasicConv2d(192, 192, kernel_size=(1, 7), padding=(0, 3))
        self.branch7x7x3_3 = BasicConv2d(192, 192, kernel_size=(7, 1), padding=(3, 0))
        self.branch7x7x3_4 = BasicConv2d(192, 192, kernel_size=3, stride=2)

    def forward(self, x):
        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)

        branch7x7x3 = self.branch7x7x3_1(x)
        branch7x7x3 = self.branch7x7x3_2(branch7x7x3)
        branch7x7x3 = self.branch7x7x3_3(branch7x7x3)
        branch7x7x3 = self.branch7x7x3_4(branch7x7x3)

        branch_pool = F.max_pool2d(x, kernel_size=3, stride=2)

        outputs = [branch3x3, branch7x7x3, branch_pool]
        return torch.cat(outputs, 1)
```
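A hypothetical shape check for this downsampling block:

```python
x = torch.randn(1, 768, 17, 17)
block = InceptionD(768)
print(block(x).shape)   # torch.Size([1, 1280, 8, 8]); 320 + 192 + 768 = 1280
```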
InceptionE
InceptionE keeps the input spatial size and outputs 2048 channels. Suppose the input is (8, 8, 1280):
- Branch 1: `branch1x1` applies 320 1×1 convolutions, producing the first feature map (8, 8, 320).
- Branch 2: first `branch3x3_1` applies 384 1×1 convolutions, giving (8, 8, 384); it then splits into two parallel sub-branches: `branch3x3_2a` applies 384 1×3 convolutions with padding (0, 1) and `branch3x3_2b` applies 384 3×1 convolutions with padding (1, 0), each keeping (8, 8, 384); the two sub-branches are concatenated, so the second feature map is (8, 8, 384+384 = 768).
- Branch 3: first `branch3x3dbl_1` applies 448 1×1 convolutions, giving (8, 8, 448); then `branch3x3dbl_2` applies 384 3×3 convolutions with padding 1, keeping the size, giving (8, 8, 384); it then splits into `branch3x3dbl_3a` (384 1×3 convolutions, padding (0, 1)) and `branch3x3dbl_3b` (384 3×1 convolutions, padding (1, 0)), each giving (8, 8, 384); the two sub-branches are concatenated, so the third feature map is (8, 8, 384+384 = 768).
- Branch 4: first `avg_pool2d` with a 3×3 kernel, stride 1, and padding 1 keeps both size and channels, giving (8, 8, 1280); then `branch_pool` applies 192 1×1 convolutions, so the fourth feature map is (8, 8, 192).
- Finally the four feature maps are concatenated, yielding (8, 8, 320+768+768+192 = 2048).
```python
class InceptionE(nn.Module):
    def __init__(self, in_channels):
        super(InceptionE, self).__init__()
        self.branch1x1 = BasicConv2d(in_channels, 320, kernel_size=1)

        self.branch3x3_1 = BasicConv2d(in_channels, 384, kernel_size=1)
        self.branch3x3_2a = BasicConv2d(384, 384, kernel_size=(1, 3), padding=(0, 1))
        self.branch3x3_2b = BasicConv2d(384, 384, kernel_size=(3, 1), padding=(1, 0))

        self.branch3x3dbl_1 = BasicConv2d(in_channels, 448, kernel_size=1)
        self.branch3x3dbl_2 = BasicConv2d(448, 384, kernel_size=3, padding=1)
        self.branch3x3dbl_3a = BasicConv2d(384, 384, kernel_size=(1, 3), padding=(0, 1))
        self.branch3x3dbl_3b = BasicConv2d(384, 384, kernel_size=(3, 1), padding=(1, 0))

        self.branch_pool = BasicConv2d(in_channels, 192, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch3x3 = self.branch3x3_1(x)
        branch3x3 = [
            self.branch3x3_2a(branch3x3),
            self.branch3x3_2b(branch3x3),
        ]
        branch3x3 = torch.cat(branch3x3, 1)

        branch3x3dbl = self.branch3x3dbl_1(x)
        branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
        branch3x3dbl = [
            self.branch3x3dbl_3a(branch3x3dbl),
            self.branch3x3dbl_3b(branch3x3dbl),
        ]
        branch3x3dbl = torch.cat(branch3x3dbl, 1)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool]
        return torch.cat(outputs, 1)
```
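A hypothetical shape check with the Mixed_7b configuration:

```python
x = torch.randn(1, 1280, 8, 8)
block = InceptionE(1280)
print(block(x).shape)   # torch.Size([1, 2048, 8, 8]); 320 + 768 + 768 + 192 = 2048
```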
InceptionAux: the auxiliary classifier
An auxiliary classifier is attached to an intermediate layer so that an extra loss term, acting like a regularizer, is added to the final loss; this helps optimize the parameters and improves classification performance.
Structure: Pool → 1×1 Conv → 5×5 Conv → FC
```python
class InceptionAux(nn.Module):
    def __init__(self, in_channels, num_classes):
        super(InceptionAux, self).__init__()
        self.conv0 = BasicConv2d(in_channels, 128, kernel_size=1)
        self.conv1 = BasicConv2d(128, 768, kernel_size=5)
        self.conv1.stddev = 0.01
        self.fc = nn.Linear(768, num_classes)
        self.fc.stddev = 0.001

    def forward(self, x):
        # 17 x 17 x 768
        x = F.avg_pool2d(x, kernel_size=5, stride=3)
        # 5 x 5 x 768
        x = self.conv0(x)
        # 5 x 5 x 128
        x = self.conv1(x)
        # 1 x 1 x 768
        x = x.view(x.size(0), -1)
        # 768
        x = self.fc(x)
        # 1000
        return x
```
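A hypothetical shape check (batch size 2 is used because the 1×1 feature map after conv1 would otherwise break BatchNorm in training mode):

```python
x = torch.randn(2, 768, 17, 17)
aux = InceptionAux(768, num_classes=1000)
print(aux(x).shape)   # torch.Size([2, 1000])
```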
InceptionV3: the main network
- The input is a (299, 299, 3) image. It is first (optionally) renormalized, then passed through five convolutions and two max-pooling layers.
```python
if self.transform_input:
    # remap ImageNet-normalized input to the (x - 0.5) / 0.5 range
    x = x.clone()
    x[:, 0] = x[:, 0] * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
    x[:, 1] = x[:, 1] * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
    x[:, 2] = x[:, 2] * (0.225 / 0.5) + (0.406 - 0.5) / 0.5
# 299 x 299 x 3
x = self.Conv2d_1a_3x3(x)                     # BasicConv2d(3, 32, kernel_size=3, stride=2)
# 149 x 149 x 32
x = self.Conv2d_2a_3x3(x)                     # BasicConv2d(32, 32, kernel_size=3)
# 147 x 147 x 32
x = self.Conv2d_2b_3x3(x)                     # BasicConv2d(32, 64, kernel_size=3, padding=1)
# 147 x 147 x 64
x = F.max_pool2d(x, kernel_size=3, stride=2)
# 73 x 73 x 64
x = self.Conv2d_3b_1x1(x)                     # BasicConv2d(64, 80, kernel_size=1)
# 73 x 73 x 80
x = self.Conv2d_4a_3x3(x)                     # BasicConv2d(80, 192, kernel_size=3)
# 71 x 71 x 192
x = F.max_pool2d(x, kernel_size=3, stride=2)
# 35 x 35 x 192
```
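The transform_input branch remaps an input normalized with the ImageNet statistics (mean 0.485/0.456/0.406, std 0.229/0.224/0.225) to the (x - 0.5)/0.5 normalization the network expects. A quick numerical check (hypothetical, for illustration):

```python
img = torch.rand(1, 3, 299, 299)                               # raw image in [0, 1]
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
x = (img - mean) / std                                         # ImageNet normalization
x_remapped = x * (std / 0.5) + (mean - 0.5) / 0.5              # what transform_input computes
print(torch.allclose(x_remapped, (img - 0.5) / 0.5, atol=1e-6))  # True
```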
- Next come three InceptionA blocks, one InceptionB, four InceptionC blocks, one InceptionD, and two InceptionE blocks. The auxiliary classifier AuxLogits takes the output of the last InceptionC as its input.
- InceptionA: keeps the spatial size; output channels are 224 + pool_features.
- InceptionB: halves the spatial size; adds 480 channels.
- InceptionC: keeps the spatial size; output channels are 768.
- InceptionD: halves the spatial size; adds 512 channels.
- InceptionE: keeps the spatial size; output channels are 2048.
```python
# 35 x 35 x 192
x = self.Mixed_5b(x)   # InceptionA(192, pool_features=32)
# 35 x 35 x 256
x = self.Mixed_5c(x)   # InceptionA(256, pool_features=64)
# 35 x 35 x 288
x = self.Mixed_5d(x)   # InceptionA(288, pool_features=64)
# 35 x 35 x 288
x = self.Mixed_6a(x)   # InceptionB(288)
# 17 x 17 x 768
x = self.Mixed_6b(x)   # InceptionC(768, channels_7x7=128)
# 17 x 17 x 768
x = self.Mixed_6c(x)   # InceptionC(768, channels_7x7=160)
# 17 x 17 x 768
x = self.Mixed_6d(x)   # InceptionC(768, channels_7x7=160)
# 17 x 17 x 768
x = self.Mixed_6e(x)   # InceptionC(768, channels_7x7=192)
# 17 x 17 x 768
if self.training and self.aux_logits:
    aux = self.AuxLogits(x)   # InceptionAux(768, num_classes)
# 17 x 17 x 768
x = self.Mixed_7a(x)   # InceptionD(768)
# 8 x 8 x 1280
x = self.Mixed_7b(x)   # InceptionE(1280)
# 8 x 8 x 2048
x = self.Mixed_7c(x)   # InceptionE(2048)
# 8 x 8 x 2048
```
- Finally the classification head: average pooling, dropout, flatten, and a fully connected output layer.
```python
# 8 x 8 x 2048
x = F.avg_pool2d(x, kernel_size=8)
# 1 x 1 x 2048
x = F.dropout(x, training=self.training)
# 1 x 1 x 2048
x = x.view(x.size(0), -1)
# 2048
x = self.fc(x)
# 1000 (num_classes)
if self.training and self.aux_logits:
    return x, aux
return x
```
Full code:
```python
class InceptionV3(nn.Module):
    def __init__(self, num_classes=1000, aux_logits=True, transform_input=False):
        super(InceptionV3, self).__init__()
        self.aux_logits = aux_logits
        self.transform_input = transform_input
        self.Conv2d_1a_3x3 = BasicConv2d(3, 32, kernel_size=3, stride=2)
        self.Conv2d_2a_3x3 = BasicConv2d(32, 32, kernel_size=3)
        self.Conv2d_2b_3x3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        self.Conv2d_3b_1x1 = BasicConv2d(64, 80, kernel_size=1)
        self.Conv2d_4a_3x3 = BasicConv2d(80, 192, kernel_size=3)
        self.Mixed_5b = InceptionA(192, pool_features=32)
        self.Mixed_5c = InceptionA(256, pool_features=64)
        self.Mixed_5d = InceptionA(288, pool_features=64)
        self.Mixed_6a = InceptionB(288)
        self.Mixed_6b = InceptionC(768, channels_7x7=128)
        self.Mixed_6c = InceptionC(768, channels_7x7=160)
        self.Mixed_6d = InceptionC(768, channels_7x7=160)
        self.Mixed_6e = InceptionC(768, channels_7x7=192)
        if aux_logits:
            self.AuxLogits = InceptionAux(768, num_classes)
        self.Mixed_7a = InceptionD(768)
        self.Mixed_7b = InceptionE(1280)
        self.Mixed_7c = InceptionE(2048)
        self.fc = nn.Linear(2048, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
                import scipy.stats as stats
                stddev = m.stddev if hasattr(m, 'stddev') else 0.1
                X = stats.truncnorm(-2, 2, scale=stddev)
                values = torch.Tensor(X.rvs(m.weight.data.numel()))
                m.weight.data.copy_(values)
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def forward(self, x):
        if self.transform_input:
            x = x.clone()
            x[:, 0] = x[:, 0] * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
            x[:, 1] = x[:, 1] * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
            x[:, 2] = x[:, 2] * (0.225 / 0.5) + (0.406 - 0.5) / 0.5
        # 299 x 299 x 3
        x = self.Conv2d_1a_3x3(x)
        # 149 x 149 x 32
        x = self.Conv2d_2a_3x3(x)
        # 147 x 147 x 32
        x = self.Conv2d_2b_3x3(x)
        # 147 x 147 x 64
        x = F.max_pool2d(x, kernel_size=3, stride=2)
        # 73 x 73 x 64
        x = self.Conv2d_3b_1x1(x)
        # 73 x 73 x 80
        x = self.Conv2d_4a_3x3(x)
        # 71 x 71 x 192
        x = F.max_pool2d(x, kernel_size=3, stride=2)
        # 35 x 35 x 192
        x = self.Mixed_5b(x)
        # 35 x 35 x 256
        x = self.Mixed_5c(x)
        # 35 x 35 x 288
        x = self.Mixed_5d(x)
        # 35 x 35 x 288
        x = self.Mixed_6a(x)
        # 17 x 17 x 768
        x = self.Mixed_6b(x)
        # 17 x 17 x 768
        x = self.Mixed_6c(x)
        # 17 x 17 x 768
        x = self.Mixed_6d(x)
        # 17 x 17 x 768
        x = self.Mixed_6e(x)
        # 17 x 17 x 768
        if self.training and self.aux_logits:
            aux = self.AuxLogits(x)
        # 17 x 17 x 768
        x = self.Mixed_7a(x)
        # 8 x 8 x 1280
        x = self.Mixed_7b(x)
        # 8 x 8 x 2048
        x = self.Mixed_7c(x)
        # 8 x 8 x 2048
        x = F.avg_pool2d(x, kernel_size=8)
        # 1 x 1 x 2048
        x = F.dropout(x, training=self.training)
        # 1 x 1 x 2048
        x = x.view(x.size(0), -1)
        # 2048
        x = self.fc(x)
        # 1000 (num_classes)
        if self.training and self.aux_logits:
            return x, aux
        return x
```
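A minimal end-to-end check (hypothetical usage; the truncated-normal initialization requires scipy):

```python
model = InceptionV3(num_classes=1000, aux_logits=True)

model.eval()   # in eval mode only the main logits are returned
with torch.no_grad():
    logits = model(torch.randn(1, 3, 299, 299))
print(logits.shape)          # torch.Size([1, 1000])

model.train()  # in training mode forward returns (logits, aux_logits)
out, aux = model(torch.randn(2, 3, 299, 299))
print(out.shape, aux.shape)  # torch.Size([2, 1000]) torch.Size([2, 1000])
```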