nn.TransformerEncoderLayer

This class is one building block of the Transformer encoder: it represents a single encoder layer, and the full encoder is simply this TransformerEncoderLayer stacked several times.

Args:
d_model: the number of expected features in the input (required).
nhead: the number of heads in the multiheadattention models (required).
dim_feedforward: the dimension of the feedforward network model (default=2048).
dropout: the dropout value (default=0.1).
activation: the activation function of intermediate layer, relu or gelu (default=relu).

Examples::
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
src = torch.rand(10, 32, 512)
out = encoder_layer(src)

Note that the transformer only accepts input of shape seq_length x batch x dim.
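For example, if your data happens to be batch-first (batch x seq_length x dim), you can permute it before feeding it to the layer. A minimal sketch with random data, just for illustration:

import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
batch_first_src = torch.rand(32, 10, 512)   # (batch, seq_length, d_model)
src = batch_first_src.permute(1, 0, 2)      # (seq_length, batch, d_model)
out = encoder_layer(src)                    # (seq_length, batch, d_model)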

nn.TransformerEncoder

This is the encoder part of the Transformer: pass the encoder layer described above as an argument when initializing it, and you get a TransformerEncoder.

Args:
encoder_layer: an instance of the TransformerEncoderLayer() class (required).
num_layers: the number of sub-encoder-layers in the encoder (required).
norm: the layer normalization component (optional).

Examples::
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
src = torch.rand(10, 32, 512)
out = transformer_encoder(src)

PositionalEncoding

I won't go into the mathematical details here (I haven't fully worked them out myself); the point is simply that it produces position information, which is then added to the embedding.

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout=0.1, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0).transpose(0, 1)
        self.register_buffer('pe', pe)

    def forward(self, x):
        x = x + self.pe[:x.size(0), :]
        return self.dropout(x)
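A minimal usage sketch (assuming the usual import math, import torch, import torch.nn as nn, and the class above):

pos_encoder = PositionalEncoding(d_model=512, dropout=0.1)
emb = torch.rand(10, 32, 512)     # (seq_length, batch, d_model) embeddings
emb_with_pos = pos_encoder(emb)   # same shape, position information added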

TransformerModel

This part follows the PyTorch tutorial.

class First_TransformerModel(nn.Module):
    def __init__(self, ninp=300, nhead=4, nhid=128, nlayers=6, dropout=0.5):
        super(First_TransformerModel, self).__init__()
        from torch.nn import TransformerEncoder, TransformerEncoderLayer
        self.model_type = 'Transformer'
        self.src_mask = None
        self.pos_encoder = PositionalEncoding(ninp, dropout)
        encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
        self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
        self.ninp = ninp

    def _generate_square_subsequent_mask(self, src, lenths):
        '''
        padding mask
        src: max_lenth x num x 300
        lenths: [lenth1, lenth2, ...]
        '''
        # mask: num_of_sens x max_lenth
        mask = torch.ones(src.size(1), src.size(0)) == 1
        for i in range(len(lenths)):
            lenth = lenths[i]
            for j in range(lenth):
                mask[i][j] = False
        return mask

    def forward(self, src, mask):
        '''
        src: max_lenth x num_of_all_sens x 300
        '''
        self.src_mask = mask
        src = src * math.sqrt(self.ninp)
        src = self.pos_encoder(src)
        output = self.transformer_encoder(src, src_key_padding_mask=self.src_mask)
        output = output[0, :, :]
        return output

(PositionalEncoding is the same class as defined above.)

Here we only need to apply the following operations to the input src (seq_length x batch x ninp): multiply it by sqrt(ninp), pass it through the positional encoder, and then through the encoder.

    src = src * math.sqrt(self.ninp)
    src = self.pos_encoder(src)
    output = self.transformer_encoder(src, src_key_padding_mask=self.src_mask)

The mask also deserves a mention here.

What is the mask?
There are mainly two kinds of masks: src_mask and src_key_padding_mask. Here we focus on src_key_padding_mask.

The nn.Transformer documentation states that src_key_padding_mask must have size N x S, i.e. batch x seq_length. With this mask, the padded positions are ignored, so the attention mechanism no longer involves them in its computation.

Note that src_key_padding_mask is a boolean tensor: positions that should be ignored must be True, and positions whose values should be kept must be False.
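A minimal sketch of building such a padding mask from the sequence lengths (the variable names here are just for illustration):

import torch

lengths = [3, 5, 2]   # actual length of each sequence in the batch
max_len = max(lengths)
# (batch, seq_length); True marks padding positions the attention should ignore
src_key_padding_mask = torch.arange(max_len).unsqueeze(0) >= torch.tensor(lengths).unsqueeze(1)
# tensor([[False, False, False,  True,  True],
#         [False, False, False, False, False],
#         [False, False,  True,  True,  True]])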

Here is the two-level transformer code I defined.
First level

class First_TransformerModel(nn.Module):
    def __init__(self, ninp=300, nhead=4, nhid=128, nlayers=6, dropout=0.5):
        super(First_TransformerModel, self).__init__()
        from torch.nn import TransformerEncoder, TransformerEncoderLayer
        self.model_type = 'Transformer'
        self.src_mask = None
        self.pos_encoder = PositionalEncoding(ninp, dropout)
        encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
        self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
        # self.encoder = nn.Embedding(ntoken, ninp)
        self.ninp = ninp
        # self.decoder = nn.Linear(ninp, ntoken)

    def _generate_square_subsequent_mask(self, src, lenths):
        '''
        padding mask
        src: max_lenth x num x 300
        lenths: [lenth1, lenth2, ...]
        '''
        # mask: num_of_sens x max_lenth
        mask = torch.ones(src.size(1), src.size(0)) == 1
        for i in range(len(lenths)):
            lenth = lenths[i]
            for j in range(lenth):
                mask[i][j] = False
        # mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
        # mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
        return mask

    def forward(self, src, mask):
        '''
        src: max_lenth x num_of_all_sens x 300
        '''
        self.src_mask = mask
        src = src * math.sqrt(self.ninp)
        src = self.pos_encoder(src)
        output = self.transformer_encoder(src, src_key_padding_mask=self.src_mask)
        output = output[0, :, :]
        # output = self.decoder(output)
        return output

(PositionalEncoding is the same class as defined above.)

Second level

# second level
class Second_TransformerModel(nn.Module):
    def __init__(self, ninp=300, nhead=4, nhid=128, nlayers=6, dropout=0.5):
        super(Second_TransformerModel, self).__init__()
        from torch.nn import TransformerEncoder, TransformerEncoderLayer
        self.src_mask = None
        self.pos_encoder = PositionalEncoding(ninp, dropout)
        encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
        self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
        self.ninp = ninp

    def _generate_square_subsequent_mask(self, src, lenths):
        '''
        padding mask
        src: num_of_sentence x batch (number of documents) x 300
        lenths: [lenth1, lenth2, ...]
        '''
        # mask: batch x max_sentence_num
        mask = torch.ones(src.size(1), src.size(0)) == 1
        for i in range(len(lenths)):
            lenth = lenths[i]
            for j in range(lenth):
                mask[i][j] = False
        return mask

    def forward(self, src, mask):
        '''
        src: max_sentence_num x batch (number of documents) x 300
        '''
        self.src_mask = mask
        src = src * math.sqrt(self.ninp)
        src = self.pos_encoder(src)
        output = self.transformer_encoder(src, src_key_padding_mask=self.src_mask)
        # output = self.decoder(output)
        return output

Final code

import math

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class segmentmodel(nn.Module):
    def __init__(self, ninp=300, nhead=4, nhid=128, nlayers=6, dropout=0.5):
        super(segmentmodel, self).__init__()
        self.first_layer = First_TransformerModel(ninp, nhead, nhid, nlayers, dropout)
        self.second_layer = Second_TransformerModel(ninp, nhead, nhid, nlayers, dropout)
        self.linear = nn.Linear(ninp, 2)

    def pad(self, s, max_length):
        s_length = s.size()[0]
        v = s.unsqueeze(0).unsqueeze(0)
        padded = F.pad(v, (0, 0, 0, max_length - s_length))  # (1, 1, max_length, 300)
        shape = padded.size()
        return padded.view(shape[2], 1, shape[3])  # (max_length, 1, 300)

    def pad_document(self, d, max_document_length):
        d_length = d.size()[0]
        v = d.unsqueeze(0).unsqueeze(0)
        padded = F.pad(v, (0, 0, 0, max_document_length - d_length))  # (1, 1, max_document_length, 300)
        shape = padded.size()
        return padded.view(shape[2], 1, shape[3])  # (max_document_length, 1, 300)

    def forward(self, batch):
        batch_size = len(batch)
        sentences_per_doc = []
        all_batch_sentences = []
        for document in batch:
            all_batch_sentences.extend(document)
            sentences_per_doc.append(len(document))

        lengths = [s.size()[0] for s in all_batch_sentences]
        max_length = max(lengths)
        # logger.debug('Num sentences: %s, max sentence length: %s',
        #              sum(sentences_per_doc), max_length)

        padded_sentences = [self.pad(s, max_length) for s in all_batch_sentences]
        big_tensor = torch.cat(padded_sentences, 1)  # (max_length, batch size, 300)
        mask = self.first_layer._generate_square_subsequent_mask(big_tensor, lengths).cuda()
        firstlayer_out = self.first_layer(src=big_tensor, mask=mask)  # num_of_sentences x 300

        # pull the sentences of each document back out of the flat batch
        encoded_documents = []
        index = 0
        for sentences_count in sentences_per_doc:
            end_index = index + sentences_count
            encoded_documents.append(firstlayer_out[index:end_index, :])
            index = end_index

        # document padding
        doc_sizes = [doc.size()[0] for doc in encoded_documents]
        max_doc_size = np.max(doc_sizes)
        padded_docs = [self.pad_document(d, max_doc_size) for d in encoded_documents]
        docs_tensor = torch.cat(padded_docs, 1)  # max_doc_size x batch x 300

        mask = self.second_layer._generate_square_subsequent_mask(docs_tensor, doc_sizes).cuda()
        second_layer_out = self.second_layer(src=docs_tensor, mask=mask)

        # drop the last sentence of each document
        doc_outputs = []
        for i, doc_len in enumerate(doc_sizes):
            doc_outputs.append(second_layer_out[0:doc_len - 1, i, :])  # -1 to remove the last prediction
        sentence_outputs = torch.cat(doc_outputs, 0)  # num_of_sentences x 300

        out = self.linear(sentence_outputs)
        return out
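A rough usage sketch, assuming each sentence has already been embedded as a (sentence_length, 300) tensor and that CUDA is available (the masks are moved to .cuda() inside forward):

model = segmentmodel().cuda()
# a batch of two documents; each document is a list of sentence tensors of shape (sentence_length, 300)
doc1 = [torch.rand(7, 300).cuda(), torch.rand(5, 300).cuda(), torch.rand(9, 300).cuda()]
doc2 = [torch.rand(4, 300).cuda(), torch.rand(6, 300).cuda()]
out = model([doc1, doc2])
# out: (total_sentences - num_docs, 2) = (3, 2), since the last sentence of each document is dropped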

Note that the sentence information extracted by the first level is represented by a single vector of that level's output: from the seq_length x N x 300 output, the first position along the seq_length dimension is taken as the sentence representation, giving an N x 300 tensor.
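A minimal illustration of that indexing (with random data just to show the shapes):

import torch

encoder_out = torch.rand(12, 5, 300)   # (seq_length, N, 300) output of the first-level encoder
sentence_repr = encoder_out[0, :, :]   # (N, 300): first position used as the sentence representation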
Source: https://blog.csdn.net/qq_43645301/article/details/109279616
