Deep Learning Network Models: Vision Transformer (ViT) Explained

Notes from tuning this general-purpose network and training it on an in-house dataset; the implementation and training steps are recorded below:

The code implements every stage of the Vision Transformer network, broken down into the following modules:
1. Patch Embedding module
(1) Split the image into patches: a 16x16 convolution with stride 16 extracts the initial features, turning a 224x224x3 image into a 14x14x768 feature map.
(2) Flatten each feature map into a token sequence: [B, C, H, W] -> [B, C, HW], then transpose to [B, HW, C] (a short shape sketch follows this step).
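A minimal sketch of this patch-embedding step (assuming the 224x224 input and 768-dim embedding used throughout this post; the variable names are illustrative only):

import torch
import torch.nn as nn

# a 16x16 conv with stride 16 turns every 16x16 patch into one 768-dim token
proj = nn.Conv2d(in_channels=3, out_channels=768, kernel_size=16, stride=16)

imgs = torch.randn(2, 3, 224, 224)        # [B, 3, 224, 224]
feat = proj(imgs)                         # [B, 768, 14, 14]
tokens = feat.flatten(2).transpose(1, 2)  # [B, 768, 196] -> [B, 196, 768]
print(tokens.shape)                       # torch.Size([2, 196, 768])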

2. Class Token module
(1) Prepend a class token (a learnable parameter) to each token sequence in the batch, keeping its embedding dimension consistent with the patch tokens.
(2) Concatenate the class token with the patch tokens produced in step 1 (torch.cat along the token dimension).
(3) After concatenation, the shape goes from [B, 196, 768] to [B, 197, 768].
3. Positional encoding (a learnable 1D position embedding is used here)
(1) Define a learnable parameter; note its shape, [1, 197, 768], so it broadcasts over the batch.
(2) Add this parameter to the token sequence from step 2; this completes the positional encoding.
4. Transformer Encoder module:
(1) First implement the Encoder Block sub-module:
a. Multi-Head Attention module
(a) Define the fully connected layer that produces Q, K and V
(b) Implement the attention computation
b. MLP Block module
(a) Its layers are: Linear -> activation (nn.GELU) -> Dropout -> Linear -> Dropout
(2) Stack multiple Encoder Blocks (depth of them) to form the complete Transformer Encoder
5. Take the output at the class token position from the encoder output (see the sketch after this list).
6. Implement the MLP Head module, which takes the class token output from step 5 as its input.
7. Map the class token features to the number of classes to obtain the classification result.
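As referenced above, here is a minimal shape walk-through of steps 2, 3 and 5 to 7 (a sketch only: nn.Identity stands in for the stacked encoder blocks, and names such as tokens and num_classes are illustrative):

import torch
import torch.nn as nn

B, num_patches, embed_dim, num_classes = 2, 196, 768, 5
tokens = torch.randn(B, num_patches, embed_dim)                        # patch-embedding output, [B, 196, 768]

cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))                 # learnable class token
pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))   # learnable 1D position embedding

x = torch.cat((cls_token.expand(B, -1, -1), tokens), dim=1)            # step 2: [B, 197, 768]
x = x + pos_embed                                                      # step 3: add the position embedding
x = nn.Identity()(x)                                                   # stand-in for the Transformer encoder
cls_out = x[:, 0]                                                      # step 5: class-token output, [B, 768]
logits = nn.Linear(embed_dim, num_classes)(cls_out)                    # steps 6-7: MLP head -> [B, num_classes]
print(logits.shape)                                                    # torch.Size([2, 5])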



Trained on a classification dataset with the full project, the model reaches 98% accuracy on the test set.
Model code implementation:

"""
original code from rwightman:
https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
https://github.com/rwightman/pytorch-image-models/releases/tag/v0.1-vitjx
"""
from functools import partial
from collections import OrderedDict

import torch
import torch.nn as nn


def drop_path(x, drop_prob: float = 0., training: bool = False):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
    the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
    changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
    'survival rate' as the argument.
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output


class DropPath(nn.Module):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    """
    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)


class PatchEmbed(nn.Module):
    """
    2D Image to Patch Embedding
    """
    def __init__(self, img_size=224, patch_size=16, in_c=3, embed_dim=768, norm_layer=None):
        super().__init__()
        img_size = (img_size, img_size)
        patch_size = (patch_size, patch_size)
        self.img_size = img_size
        self.patch_size = patch_size
        self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])  # size of the output patch grid
        self.num_patches = self.grid_size[0] * self.grid_size[1]  # total number of patches
        self.proj = nn.Conv2d(in_c, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()  # apply norm_layer if given, otherwise do nothing

    def forward(self, x):
        B, C, H, W = x.shape
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."

        # flatten: [B, C, H, W] -> [B, C, HW]
        # transpose: [B, C, HW] -> [B, HW, C]
        x = self.proj(x).flatten(2).transpose(1, 2)
        x = self.norm(x)
        return x


class Attention(nn.Module):
    def __init__(self,
                 dim,              # dim of the input tokens
                 num_heads=8,
                 qkv_bias=False,   # whether to use a bias when producing Q, K and V
                 qk_scale=None,
                 attn_drop_ratio=0.,
                 proj_drop_ratio=0.):
        super(Attention, self).__init__()
        self.num_heads = num_heads
        head_dim = dim // num_heads                # dim of each individual head
        self.scale = qk_scale or head_dim ** -0.5  # i.e. 1 / sqrt(head_dim)
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)  # Q, K, V are usually three Linear layers; a single layer here makes parallelization easier
        self.attn_drop = nn.Dropout(attn_drop_ratio)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop_ratio)

    def forward(self, x):
        # [batch_size, num_patches + 1, total_embed_dim]  (the +1 is the class token added later)
        B, N, C = x.shape

        # qkv(): -> [batch_size, num_patches + 1, 3 * total_embed_dim]
        # reshape: -> [batch_size, num_patches + 1, 3, num_heads, embed_dim_per_head]
        # permute: -> [3, batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        # [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        # transpose: -> [batch_size, num_heads, embed_dim_per_head, num_patches + 1]
        # @: multiply -> [batch_size, num_heads, num_patches + 1, num_patches + 1]
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        attn = self.attn_drop(attn)

        # @: multiply -> [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        # transpose: -> [batch_size, num_patches + 1, num_heads, embed_dim_per_head]
        # reshape: -> [batch_size, num_patches + 1, total_embed_dim]
        x = (attn @ v).transpose(1, 2).reshape(B, N, C)  # the reshape here acts as the concat of the heads
        x = self.proj(x)
        x = self.proj_drop(x)
        return x


class Mlp(nn.Module):
    """
    MLP as used in Vision Transformer, MLP-Mixer and related networks
    """
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


# Encoder Block
class Block(nn.Module):
    def __init__(self,
                 dim,
                 num_heads,
                 mlp_ratio=4.,        # the first fully connected layer has 4x as many nodes as the input
                 qkv_bias=False,
                 qk_scale=None,
                 drop_ratio=0.,
                 attn_drop_ratio=0.,
                 drop_path_ratio=0.,
                 act_layer=nn.GELU,
                 norm_layer=nn.LayerNorm):
        super(Block, self).__init__()
        self.norm1 = norm_layer(dim)
        self.attn = Attention(dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
                              attn_drop_ratio=attn_drop_ratio, proj_drop_ratio=drop_ratio)
        # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
        self.drop_path = DropPath(drop_path_ratio) if drop_path_ratio > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)  # number of nodes in the first fully connected layer of the MLP block
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop_ratio)

    def forward(self, x):
        x = x + self.drop_path(self.attn(self.norm1(x)))
        x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x


class VisionTransformer(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_c=3, num_classes=1000,
                 embed_dim=768, depth=12, num_heads=12, mlp_ratio=4.0, qkv_bias=True,
                 qk_scale=None, representation_size=None, distilled=False, drop_ratio=0.,
                 attn_drop_ratio=0., drop_path_ratio=0., embed_layer=PatchEmbed, norm_layer=None,
                 act_layer=None):
        """
        Args:
            img_size (int, tuple): input image size
            patch_size (int, tuple): patch size
            in_c (int): number of input channels
            num_classes (int): number of classes for classification head
            embed_dim (int): embedding dimension
            depth (int): depth of transformer (number of stacked encoder blocks)
            num_heads (int): number of attention heads
            mlp_ratio (int): ratio of mlp hidden dim to embedding dim
            qkv_bias (bool): enable bias for qkv if True
            qk_scale (float): override default qk scale of head_dim ** -0.5 if set
            representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
            distilled (bool): model includes a distillation token and head as in DeiT models
            drop_ratio (float): dropout rate
            attn_drop_ratio (float): attention dropout rate
            drop_path_ratio (float): stochastic depth rate
            embed_layer (nn.Module): patch embedding layer
            norm_layer: (nn.Module): normalization layer
        """
        super(VisionTransformer, self).__init__()
        self.num_classes = num_classes
        self.num_features = self.embed_dim = embed_dim  # num_features for consistency with other models
        self.num_tokens = 2 if distilled else 1  # 1 here; 2 only for the distilled DeiT variant
        norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)  # partial() pre-binds the default argument eps=1e-6
        act_layer = act_layer or nn.GELU

        # step 1: build the patch-embedding module
        self.patch_embed = embed_layer(img_size=img_size, patch_size=patch_size, in_c=in_c, embed_dim=embed_dim)
        num_patches = self.patch_embed.num_patches  # number of patches

        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))  # learnable class token, shape (1, 1, 768)
        self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None  # None by default, unused here
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))  # (1, 196 + 1, 768)
        self.pos_drop = nn.Dropout(p=drop_ratio)

        dpr = [x.item() for x in torch.linspace(0, drop_path_ratio, depth)]  # stochastic depth decay rule
        # build the stacked encoder blocks from a list comprehension
        self.blocks = nn.Sequential(*[
            Block(dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
                  drop_ratio=drop_ratio, attn_drop_ratio=attn_drop_ratio, drop_path_ratio=dpr[i],
                  norm_layer=norm_layer, act_layer=act_layer)
            for i in range(depth)
        ])
        self.norm = norm_layer(embed_dim)

        # step 4: build the MLP-Head module
        # Representation layer, i.e. the final MLP Head
        if representation_size and not distilled:  # "not distilled" is True by default
            self.has_logits = True
            self.num_features = representation_size
            # built with nn.Sequential plus an OrderedDict
            self.pre_logits = nn.Sequential(OrderedDict([
                ("fc", nn.Linear(embed_dim, representation_size)),
                ("act", nn.Tanh())
            ]))
        else:
            self.has_logits = False
            self.pre_logits = nn.Identity()  # equivalent to having no pre-logits layer

        # Classifier head(s)
        self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()  # output size equals the number of classes
        self.head_dist = None
        if distilled:
            self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()

        # Weight init
        nn.init.trunc_normal_(self.pos_embed, std=0.02)
        if self.dist_token is not None:
            nn.init.trunc_normal_(self.dist_token, std=0.02)

        nn.init.trunc_normal_(self.cls_token, std=0.02)
        self.apply(_init_vit_weights)

    def forward_features(self, x):
        # [B, C, H, W] -> [B, num_patches, embed_dim]
        x = self.patch_embed(x)  # [B, 196, 768]
        # [1, 1, 768] -> [B, 1, 768]
        cls_token = self.cls_token.expand(x.shape[0], -1, -1)
        if self.dist_token is None:
            x = torch.cat((cls_token, x), dim=1)  # [B, 197, 768], concatenated along the token dimension
        else:
            x = torch.cat((cls_token, self.dist_token.expand(x.shape[0], -1, -1), x), dim=1)

        x = self.pos_drop(x + self.pos_embed)
        x = self.blocks(x)
        x = self.norm(x)
        if self.dist_token is None:
            return self.pre_logits(x[:, 0])  # take the output at the class-token position and pass it through self.pre_logits()
        else:
            return x[:, 0], x[:, 1]

    def forward(self, x):
        x = self.forward_features(x)
        if self.head_dist is not None:
            x, x_dist = self.head(x[0]), self.head_dist(x[1])
            if self.training and not torch.jit.is_scripting():
                # during inference, return the average of both classifier predictions
                return x, x_dist
            else:
                return (x + x_dist) / 2
        else:
            x = self.head(x)
        return x


def _init_vit_weights(m):
    """
    ViT weight initialization
    :param m: module
    """
    if isinstance(m, nn.Linear):
        nn.init.trunc_normal_(m.weight, std=.01)
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, mode="fan_out")
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.LayerNorm):
        nn.init.zeros_(m.bias)
        nn.init.ones_(m.weight)


def vit_base_patch16_224(num_classes: int = 1000, has_logits: bool = False):
    """
    ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    link: https://pan.baidu.com/s/1zqb08naP0RPqqfSXfkB2EA  password: eu9f
    """
    model = VisionTransformer(img_size=224, patch_size=16, embed_dim=768, depth=12, num_heads=12,
                              representation_size=None, num_classes=num_classes)
    return model


def vit_base_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch16_224_in21k-e5005f0a.pth
    """
    model = VisionTransformer(img_size=224, patch_size=16, embed_dim=768, depth=12, num_heads=12,
                              representation_size=768 if has_logits else None, num_classes=num_classes)
    return model


def vit_base_patch32_224(num_classes: int = 1000):
    """
    ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    link: https://pan.baidu.com/s/1hCv0U8pQomwAtHBYc4hmZg  password: s5hl
    """
    model = VisionTransformer(img_size=224, patch_size=32, embed_dim=768, depth=12, num_heads=12,
                              representation_size=None, num_classes=num_classes)
    return model


def vit_base_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch32_224_in21k-8db57226.pth
    """
    model = VisionTransformer(img_size=224, patch_size=32, embed_dim=768, depth=12, num_heads=12,
                              representation_size=768 if has_logits else None, num_classes=num_classes)
    return model


def vit_large_patch16_224(num_classes: int = 1000):
    """
    ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    link: https://pan.baidu.com/s/1cxBgZJJ6qUWPSBNcE4TdRQ  password: qqt8
    """
    model = VisionTransformer(img_size=224, patch_size=16, embed_dim=1024, depth=24, num_heads=16,
                              representation_size=None, num_classes=num_classes)
    return model


def vit_large_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch16_224_in21k-606da67d.pth
    """
    model = VisionTransformer(img_size=224, patch_size=16, embed_dim=1024, depth=24, num_heads=16,
                              representation_size=1024 if has_logits else None, num_classes=num_classes)
    return model


def vit_large_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch32_224_in21k-9046d2e7.pth
    """
    model = VisionTransformer(img_size=224, patch_size=32, embed_dim=1024, depth=24, num_heads=16,
                              representation_size=1024 if has_logits else None, num_classes=num_classes)
    return model


def vit_huge_patch14_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Huge model (ViT-H/14) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    NOTE: converted weights not currently available, too large for github release hosting.
    """
    model = VisionTransformer(img_size=224, patch_size=14, embed_dim=1280, depth=32, num_heads=16,
                              representation_size=1280 if has_logits else None, num_classes=num_classes)
    return model
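
A quick sanity check of the model code above (a minimal sketch: the 5-class head and the random input are illustrative only; to use the pretrained weights linked in the docstrings you would additionally load the downloaded checkpoint with model.load_state_dict):

if __name__ == "__main__":
    # build ViT-B/16 with a small classification head and run a dummy forward pass
    net = vit_base_patch16_224(num_classes=5)
    dummy = torch.randn(1, 3, 224, 224)
    logits = net(dummy)
    print(logits.shape)  # expected: torch.Size([1, 5])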
