Source link: http://blog.csdn.net/u014595019/article/details/52759104

Computing the loss

# softmax_w, shape=[hidden_size, vocab_size]: projects the distributed (hidden) representation onto the vocabulary
softmax_w = tf.get_variable("softmax_w", [size, vocab_size], dtype=data_type())
softmax_b = tf.get_variable("softmax_b", [vocab_size], dtype=data_type())
# [batch*num_steps, vocab_size]: logits over the full vocabulary
logits = tf.matmul(output, softmax_w) + softmax_b
# weighted cross-entropy; loss has shape [batch*num_steps]
loss = tf.nn.seq2seq.sequence_loss_by_example(
    [logits],                                                # output, [batch*num_steps, vocab_size]
    [tf.reshape(self._targets, [-1])],                       # targets, [batch_size, num_steps] flattened to 1-D
    [tf.ones([batch_size * num_steps], dtype=data_type())])  # weights
self._cost = cost = tf.reduce_sum(loss) / batch_size  # average cost per sequence in the batch
self._final_state = state
self._lr = tf.Variable(0.0, trainable=False)  # lr is the learning rate
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
                                  config.max_grad_norm)
# gradient descent with the given learning rate
optimizer = tf.train.GradientDescentOptimizer(self._lr)
# optimizer = tf.train.AdamOptimizer()
# optimizer = tf.train.GradientDescentOptimizer(0.5)
self._train_op = optimizer.apply_gradients(zip(grads, tvars))  # apply the gradients to the variables
# self._train_op = optimizer.minimize(grads)
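The clipping step above follows the rule t_list[i] * clip_norm / max(global_norm, clip_norm), with global_norm = sqrt(sum([l2norm(t)**2 for t in t_list])). A minimal NumPy sketch of that rule, just to check the arithmetic (the helper below is my own stand-in, not TensorFlow's implementation):

import numpy as np

def clip_by_global_norm(t_list, clip_norm):
    # global_norm = L2 norm of all gradients concatenated together
    global_norm = np.sqrt(sum(np.sum(t ** 2) for t in t_list))
    # if global_norm <= clip_norm the scale is 1 and nothing changes
    scale = clip_norm / max(global_norm, clip_norm)
    return [t * scale for t in t_list], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # global_norm = sqrt(25 + 144) = 13
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)
print(norm)      # 13.0
print(clipped)   # every tensor scaled by 5/13, so the clipped global norm is exactly 5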

Full source code:

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Example / benchmark for building a PTB LSTM model.
Trains the model described in:
(Zaremba, et. al.) Recurrent Neural Network Regularization
http://arxiv.org/abs/1409.2329
There are 3 supported model configurations:
===========================================
| config | epochs | train | valid  | test
===========================================
| small  | 13     | 37.99 | 121.39 | 115.91
| medium | 39     | 48.45 |  86.16 |  82.07
| large  | 55     | 37.87 |  82.62 |  78.29
The exact results may vary depending on the random initialization.
The hyperparameters used in the model:
- init_scale - the initial scale of the weights
- learning_rate - the initial value of the learning rate
- max_grad_norm - the maximum permissible norm of the gradient
- num_layers - the number of LSTM layers
- num_steps - the number of unrolled steps of LSTM
- hidden_size - the number of LSTM units
- max_epoch - the number of epochs trained with the initial learning rate
- max_max_epoch - the total number of epochs for training
- keep_prob - the probability of keeping weights in the dropout layer
- lr_decay - the decay of the learning rate for each epoch after "max_epoch"
- batch_size - the batch size
The data required for this example is in the data/ dir of the
PTB dataset from Tomas Mikolov's webpage:
$ wget http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz
$ tar xvf simple-examples.tgz
To run:
$ python ptb_word_lm.py --data_path=simple-examples/data/
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import time

import numpy as np
import tensorflow as tf

from tensorflow.models.rnn.ptb import reader

flags = tf.flags
logging = tf.logging

flags.DEFINE_string("model", "small",
                    "A type of model. Possible options are: small, medium, large.")
flags.DEFINE_string("data_path", '/home/multiangle/download/simple-examples/data/', "data_path")
flags.DEFINE_bool("use_fp16", False,
                  "Train using 16-bit floats instead of 32bit floats")

FLAGS = flags.FLAGS


def data_type():
    return tf.float16 if FLAGS.use_fp16 else tf.float32


class PTBModel(object):
    """The PTB model."""

    def __init__(self, is_training, config):
        """
        :param is_training: whether to train. If is_training=False, the parameters are not updated.
        """
        self.batch_size = batch_size = config.batch_size
        self.num_steps = num_steps = config.num_steps
        size = config.hidden_size
        vocab_size = config.vocab_size

        self._input_data = tf.placeholder(tf.int32, [batch_size, num_steps])  # input
        self._targets = tf.placeholder(tf.int32, [batch_size, num_steps])     # expected output; both are index sequences of length num_steps

        # Slightly better results can be obtained with forget gate biases
        # initialized to 1 but the hyperparameters of the model would need to be
        # different than reported in the paper.
        lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(size, forget_bias=0.0, state_is_tuple=True)
        if is_training and config.keep_prob < 1:  # wrap a dropout layer around the cell
            lstm_cell = tf.nn.rnn_cell.DropoutWrapper(
                lstm_cell, output_keep_prob=config.keep_prob)
        cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * config.num_layers, state_is_tuple=True)  # stack multiple LSTM cells

        self._initial_state = cell.zero_state(batch_size, data_type())  # state initialization, rnn_cell.RNNCell.zero_state

        with tf.device("/cpu:0"):
            embedding = tf.get_variable(
                "embedding", [vocab_size, size], dtype=data_type())  # vocab_size * hidden_size, maps word indices to embeddings
            # represent the input sequence with embeddings, shape=[batch, steps, hidden_size]
            inputs = tf.nn.embedding_lookup(embedding, self._input_data)

        if is_training and config.keep_prob < 1:
            inputs = tf.nn.dropout(inputs, config.keep_prob)

        # Simplified version of tensorflow.models.rnn.rnn.py's rnn().
        # This builds an unrolled LSTM for tutorial purposes only.
        # In general, use the rnn() or state_saving_rnn() from rnn.py.
        #
        # The alternative version of the code below is:
        #
        # inputs = [tf.squeeze(input_, [1])
        #           for input_ in tf.split(1, num_steps, inputs)]
        # outputs, state = tf.nn.rnn(cell, inputs, initial_state=self._initial_state)
        outputs = []
        state = self._initial_state  # state holds the per-batch RNN state
        with tf.variable_scope("RNN"):
            for time_step in range(num_steps):
                if time_step > 0: tf.get_variable_scope().reuse_variables()
                # cell_output: [batch, hidden_size]
                (cell_output, state) = cell(inputs[:, time_step, :], state)
                outputs.append(cell_output)  # outputs: list of length num_steps, each [batch, hidden_size]

        # concatenate the list into [batch, hidden_size*num_steps], then reshape to [batch*num_steps, hidden_size]
        output = tf.reshape(tf.concat(1, outputs), [-1, size])

        # softmax_w, shape=[hidden_size, vocab_size]: projects the distributed (hidden) representation onto the vocabulary
        softmax_w = tf.get_variable("softmax_w", [size, vocab_size], dtype=data_type())
        softmax_b = tf.get_variable("softmax_b", [vocab_size], dtype=data_type())
        # [batch*num_steps, vocab_size]: logits over the full vocabulary
        logits = tf.matmul(output, softmax_w) + softmax_b
        # weighted cross-entropy; loss has shape [batch*num_steps]
        loss = tf.nn.seq2seq.sequence_loss_by_example(
            [logits],                                                # output, [batch*num_steps, vocab_size]
            [tf.reshape(self._targets, [-1])],                       # targets, [batch_size, num_steps] flattened to 1-D
            [tf.ones([batch_size * num_steps], dtype=data_type())])  # weights
        self._cost = cost = tf.reduce_sum(loss) / batch_size  # average cost per sequence in the batch
        self._final_state = state

        if not is_training:  # when not training, no optimization ops are needed
            return

        self._lr = tf.Variable(0.0, trainable=False)
        tvars = tf.trainable_variables()
        # tf.gradients differentiates ys with respect to xs (both are tensors) and
        # returns a list of length len(xs), one gradient per variable.
        # clip_by_global_norm is used to control gradient explosion; given t_list and
        # clip_norm it rescales each tensor as
        #     t_list[i] * clip_norm / max(global_norm, clip_norm)
        # where global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))
        grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
                                          config.max_grad_norm)
        # gradient descent with the given learning rate
        optimizer = tf.train.GradientDescentOptimizer(self._lr)
        # optimizer = tf.train.AdamOptimizer()
        # optimizer = tf.train.GradientDescentOptimizer(0.5)
        self._train_op = optimizer.apply_gradients(zip(grads, tvars))  # apply the gradients to the variables

        self._new_lr = tf.placeholder(tf.float32, shape=[], name="new_learning_rate")  # feeds a new lr value into the graph
        self._lr_update = tf.assign(self._lr, self._new_lr)  # assign new_lr to lr

    def assign_lr(self, session, lr_value):
        # run the lr_update op through the session
        session.run(self._lr_update, feed_dict={self._new_lr: lr_value})

    @property
    def input_data(self):
        return self._input_data

    @property
    def targets(self):
        return self._targets

    @property
    def initial_state(self):
        return self._initial_state

    @property
    def cost(self):
        return self._cost

    @property
    def final_state(self):
        return self._final_state

    @property
    def lr(self):
        return self._lr

    @property
    def train_op(self):
        return self._train_op


class SmallConfig(object):
    """Small config."""
    init_scale = 0.1
    learning_rate = 1.0     # learning rate
    max_grad_norm = 5       # gradient clipping threshold
    num_layers = 2          # number of LSTM layers
    num_steps = 20          # sequence length of a single sample
    hidden_size = 200       # hidden layer size
    max_epoch = 4           # while epoch <= max_epoch the decay factor is 1; afterwards it shrinks
    max_max_epoch = 13      # the whole corpus is iterated 13 times
    keep_prob = 1.0
    lr_decay = 0.5          # learning-rate decay
    batch_size = 20         # batch size, 20 samples per batch
    vocab_size = 10000      # vocabulary size, 10K words in total


class MediumConfig(object):
    """Medium config."""
    init_scale = 0.05
    learning_rate = 1.0
    max_grad_norm = 5
    num_layers = 2
    num_steps = 35
    hidden_size = 650
    max_epoch = 6
    max_max_epoch = 39
    keep_prob = 0.5
    lr_decay = 0.8
    batch_size = 20
    vocab_size = 10000


class LargeConfig(object):
    """Large config."""
    init_scale = 0.04
    learning_rate = 1.0
    max_grad_norm = 10
    num_layers = 2
    num_steps = 35
    hidden_size = 1500
    max_epoch = 14
    max_max_epoch = 55
    keep_prob = 0.35
    lr_decay = 1 / 1.15
    batch_size = 20
    vocab_size = 10000


class TestConfig(object):
    """Tiny config, for testing."""
    init_scale = 0.1
    learning_rate = 1.0
    max_grad_norm = 1
    num_layers = 1
    num_steps = 2
    hidden_size = 2
    max_epoch = 1
    max_max_epoch = 1
    keep_prob = 1.0
    lr_decay = 0.5
    batch_size = 20
    vocab_size = 10000


def run_epoch(session, model, data, eval_op, verbose=False):
    """Runs the model on the given data."""
    # epoch_size is the total number of batches, i.e. how many times data is fed to the session
    epoch_size = ((len(data) // model.batch_size) - 1) // model.num_steps  # // is integer division
    start_time = time.time()
    costs = 0.0
    iters = 0
    state = session.run(model.initial_state)
    for step, (x, y) in enumerate(reader.ptb_iterator(data, model.batch_size,
                                                      model.num_steps)):
        fetches = [model.cost, model.final_state, eval_op]  # ops to run; note that eval_op differs between training and evaluation
        feed_dict = {}      # set the input and target values
        feed_dict[model.input_data] = x
        feed_dict[model.targets] = y
        for i, (c, h) in enumerate(model.initial_state):
            feed_dict[c] = state[i].c   # feed the final state of the previous batch
            feed_dict[h] = state[i].h   # as this batch's initial state
        cost, state, _ = session.run(fetches, feed_dict)  # run the session and fetch cost and state
        costs += cost   # accumulate the cost
        iters += model.num_steps

        if verbose and step % (epoch_size // 10) == 10:  # i.e. about 10 perplexity values are printed per epoch
            print("%.3f perplexity: %.3f speed: %.0f wps" %
                  (step * 1.0 / epoch_size, np.exp(costs / iters),
                   iters * model.batch_size / (time.time() - start_time)))

    return np.exp(costs / iters)


def get_config():
    if FLAGS.model == "small":
        return SmallConfig()
    elif FLAGS.model == "medium":
        return MediumConfig()
    elif FLAGS.model == "large":
        return LargeConfig()
    elif FLAGS.model == "test":
        return TestConfig()
    else:
        raise ValueError("Invalid model: %s", FLAGS.model)


# def main(_):
if __name__ == '__main__':
    if not FLAGS.data_path:
        raise ValueError("Must set --data_path to PTB data directory")
    print(FLAGS.data_path)

    raw_data = reader.ptb_raw_data(FLAGS.data_path)  # load the raw data
    train_data, valid_data, test_data, _ = raw_data

    config = get_config()
    eval_config = get_config()
    eval_config.batch_size = 1
    eval_config.num_steps = 1

    with tf.Graph().as_default(), tf.Session() as session:
        initializer = tf.random_uniform_initializer(-config.init_scale,  # how parameter variables are initialized
                                                    config.init_scale)
        with tf.variable_scope("model", reuse=None, initializer=initializer):
            m = PTBModel(is_training=True, config=config)   # training model, is_training=True
        with tf.variable_scope("model", reuse=True, initializer=initializer):
            mvalid = PTBModel(is_training=False, config=config)  # validation and test models, is_training=False
            mtest = PTBModel(is_training=False, config=eval_config)

        summary_writer = tf.train.SummaryWriter('/tmp/lstm_logs', session.graph)

        tf.initialize_all_variables().run()  # initialize all parameter variables

        for i in range(config.max_max_epoch):   # the whole corpus passes through the model multiple times
            # learning-rate decay:
            # for i <= max_epoch the factor is 1; afterwards it is lr_decay^(i - max_epoch)
            lr_decay = config.lr_decay ** max(i - config.max_epoch, 0.0)
            m.assign_lr(session, config.learning_rate * lr_decay)  # set the learning rate

            print("Epoch: %d Learning rate: %.3f" % (i + 1, session.run(m.lr)))
            train_perplexity = run_epoch(session, m, train_data, m.train_op,
                                         verbose=True)  # training perplexity
            print("Epoch: %d Train Perplexity: %.3f" % (i + 1, train_perplexity))
            valid_perplexity = run_epoch(session, mvalid, valid_data, tf.no_op())  # validation perplexity
            print("Epoch: %d Valid Perplexity: %.3f" % (i + 1, valid_perplexity))

        test_perplexity = run_epoch(session, mtest, test_data, tf.no_op())  # test perplexity
        print("Test Perplexity: %.3f" % test_perplexity)

# if __name__ == "__main__":
#     tf.app.run()
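Two of the quantities printed during training are easy to check by hand. The perplexity that run_epoch reports is simply np.exp(costs / iters), the exponential of the average per-word cross-entropy, and the learning-rate schedule is learning_rate * lr_decay ** max(i - max_epoch, 0). A small sketch of the schedule in plain Python, using the SmallConfig values from the listing above:

learning_rate, lr_decay, max_epoch, max_max_epoch = 1.0, 0.5, 4, 13

for i in range(max_max_epoch):
    lr = learning_rate * lr_decay ** max(i - max_epoch, 0)
    print("Epoch: %d Learning rate: %.3f" % (i + 1, lr))

# Epochs 1-5 run at 1.0 (the exponent is clamped at 0); from then on the rate
# halves every epoch: 0.5, 0.25, ..., ending near 0.004 at epoch 13.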

