Recurrent Neural Networks in Deep Learning (11-b): GRU Sentiment Classification Code

  • 1. Cell-based approach
    • Code
    • Output
  • 2. Layer-based approach
    • Code
    • Output

1. Cell-based approach

Code

import os
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, losses, optimizers, Sequential
from tensorflow.python.keras.datasets import imdb

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')

batchsz = 128  # batch size
total_words = 10000  # vocabulary size N_vocab
max_review_len = 80  # maximum sentence length s; longer sentences are truncated, shorter ones padded
embedding_len = 100  # word-vector feature length f

# Load the IMDB dataset; the data is integer-encoded, one integer per word
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=total_words)
# (x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=total_words)
print(x_train.shape, len(x_train[0]), y_train.shape)
print(x_test.shape, len(x_test[0]), y_test.shape)
#%%
x_train[0]
#%%
# Integer-to-word index
word_index = keras.datasets.imdb.get_word_index()
# for k, v in word_index.items():
#     print(k, v)
#%%
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3
# Reverse the index
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

decode_review(x_train[8])
#%%
# x_train: [b, 80]
# x_test:  [b, 80]
# Truncate and pad the sentences to equal length; long sentences keep the tail,
# short sentences are padded at the front
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)
# Build the datasets: shuffle, batch, and drop the last incomplete batch
db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
db_train = db_train.shuffle(1000).batch(batchsz, drop_remainder=True)
db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_test = db_test.batch(batchsz, drop_remainder=True)
print('x_train shape:', x_train.shape, tf.reduce_max(y_train), tf.reduce_min(y_train))
print('x_test shape:', x_test.shape)
#%%
class MyRNN(keras.Model):
    # Build a multi-layer network out of Cells
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # [b, 64], initial state vectors for the Cells, reused across calls
        self.state0 = [tf.zeros([batchsz, units])]
        self.state1 = [tf.zeros([batchsz, units])]
        # Word embedding: [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(total_words, embedding_len,
                                          input_length=max_review_len)
        # Build 2 Cells
        self.rnn_cell0 = layers.GRUCell(units, dropout=0.5)
        self.rnn_cell1 = layers.GRUCell(units, dropout=0.5)
        # Build the classification head that maps the Cell output features
        # to 2 classes: [b, 80, 100] => [b, 64] => [b, 1]
        self.outlayer = Sequential([layers.Dense(units),
                                    layers.Dropout(rate=0.5),
                                    layers.ReLU(),
                                    layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # RNN cell compute: [b, 80, 100] => [b, 64]
        state0 = self.state0
        state1 = self.state1
        for word in tf.unstack(x, axis=1):  # word: [b, 100]
            out0, state0 = self.rnn_cell0(word, state0, training)
            out1, state1 = self.rnn_cell1(out0, state1, training)
        # The last output of the last layer feeds the classifier: [b, 64] => [b, 1]
        x = self.outlayer(out1, training)
        # p(y is pos | x)
        prob = tf.sigmoid(x)
        return prob

def main():
    units = 64  # RNN state-vector length
    epochs = 50  # number of training epochs

    model = MyRNN(units)
    # Compile
    model.compile(optimizer=optimizers.RMSprop(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])
    # Train and validate
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    # Test
    model.evaluate(db_test)

if __name__ == '__main__':
    main()
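
In the Cell-based approach, the time dimension is unrolled by hand: tf.unstack splits the [b, 80, 100] embedding tensor into 80 timestep tensors of shape [b, 100], and each GRUCell consumes one timestep together with its previous state. The minimal sketch below (standalone, with small hypothetical shapes, not part of the original script) illustrates this stepping contract of GRUCell:

import tensorflow as tf
from tensorflow.keras import layers

cell = layers.GRUCell(4)            # state vector of length 4
state = [tf.zeros([2, 4])]          # batch of 2, zero initial state
seq = tf.random.normal([2, 3, 5])   # [b=2, timesteps=3, features=5]
for t in tf.unstack(seq, axis=1):   # t: [2, 5], one timestep
    out, state = cell(t, state)     # out: [2, 4]; state: list with one [2, 4] tensor
print(out.shape)                    # (2, 4), the last timestep's output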

Output

The output is shown below:

(25000,) 218 (25000,)
(25000,) 68 (25000,)
x_train shape: (25000, 80) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(0, shape=(), dtype=int64)
x_test shape: (25000, 80)
Epoch 1/50
195/195 [==============================] - 27s 75ms/step - loss: 0.5313 - accuracy: 0.7230 - val_loss: 0.3653 - val_accuracy: 0.8379
Epoch 2/50
195/195 [==============================] - 11s 58ms/step - loss: 0.3624 - accuracy: 0.8492 - val_loss: 0.4092 - val_accuracy: 0.8215
Epoch 3/50
195/195 [==============================] - 12s 63ms/step - loss: 0.3125 - accuracy: 0.8747 - val_loss: 0.3466 - val_accuracy: 0.8469
Epoch 4/50
195/195 [==============================] - 12s 60ms/step - loss: 0.2882 - accuracy: 0.8863 - val_loss: 0.3449 - val_accuracy: 0.8473
Epoch 5/50
195/195 [==============================] - 11s 58ms/step - loss: 0.2620 - accuracy: 0.8993 - val_loss: 0.3564 - val_accuracy: 0.8441
Epoch 6/50
195/195 [==============================] - 12s 60ms/step - loss: 0.2433 - accuracy: 0.9050 - val_loss: 0.3797 - val_accuracy: 0.8390
Epoch 7/50
195/195 [==============================] - 13s 65ms/step - loss: 0.2284 - accuracy: 0.9136 - val_loss: 0.3808 - val_accuracy: 0.8442
Epoch 8/50
195/195 [==============================] - 12s 60ms/step - loss: 0.2148 - accuracy: 0.9191 - val_loss: 0.4447 - val_accuracy: 0.8404
Epoch 9/50
195/195 [==============================] - 12s 62ms/step - loss: 0.2026 - accuracy: 0.9249 - val_loss: 0.4039 - val_accuracy: 0.8409
Epoch 10/50
195/195 [==============================] - 12s 60ms/step - loss: 0.1858 - accuracy: 0.9325 - val_loss: 0.4054 - val_accuracy: 0.8361
Epoch 11/50
195/195 [==============================] - 12s 62ms/step - loss: 0.1795 - accuracy: 0.9350 - val_loss: 0.4211 - val_accuracy: 0.8390
Epoch 12/50
195/195 [==============================] - 13s 66ms/step - loss: 0.1629 - accuracy: 0.9408 - val_loss: 0.4978 - val_accuracy: 0.8359
Epoch 13/50
195/195 [==============================] - 12s 62ms/step - loss: 0.1563 - accuracy: 0.9448 - val_loss: 0.4397 - val_accuracy: 0.8361
Epoch 14/50
195/195 [==============================] - 11s 58ms/step - loss: 0.1453 - accuracy: 0.9478 - val_loss: 0.5085 - val_accuracy: 0.8353
Epoch 15/50
195/195 [==============================] - 11s 59ms/step - loss: 0.1368 - accuracy: 0.9522 - val_loss: 0.5143 - val_accuracy: 0.8325
Epoch 16/50
195/195 [==============================] - 11s 59ms/step - loss: 0.1288 - accuracy: 0.9538 - val_loss: 0.6158 - val_accuracy: 0.8255
Epoch 17/50
195/195 [==============================] - 12s 60ms/step - loss: 0.1201 - accuracy: 0.9573 - val_loss: 0.5548 - val_accuracy: 0.8282
Epoch 18/50
195/195 [==============================] - 12s 61ms/step - loss: 0.1124 - accuracy: 0.9611 - val_loss: 0.6440 - val_accuracy: 0.8269
Epoch 19/50
195/195 [==============================] - 12s 62ms/step - loss: 0.1068 - accuracy: 0.9637 - val_loss: 0.6014 - val_accuracy: 0.8256
Epoch 20/50
195/195 [==============================] - 12s 64ms/step - loss: 0.1016 - accuracy: 0.9645 - val_loss: 0.6732 - val_accuracy: 0.8175
Epoch 21/50
195/195 [==============================] - 12s 63ms/step - loss: 0.0927 - accuracy: 0.9682 - val_loss: 0.6812 - val_accuracy: 0.8219
Epoch 22/50
195/195 [==============================] - 12s 63ms/step - loss: 0.0887 - accuracy: 0.9702 - val_loss: 0.6998 - val_accuracy: 0.8215
Epoch 23/50
195/195 [==============================] - 14s 70ms/step - loss: 0.0795 - accuracy: 0.9733 - val_loss: 0.6457 - val_accuracy: 0.8169
Epoch 24/50
195/195 [==============================] - 13s 67ms/step - loss: 0.0761 - accuracy: 0.9749 - val_loss: 0.8002 - val_accuracy: 0.8152
Epoch 25/50
195/195 [==============================] - 11s 58ms/step - loss: 0.0690 - accuracy: 0.9764 - val_loss: 0.8147 - val_accuracy: 0.8177
Epoch 26/50
195/195 [==============================] - 12s 61ms/step - loss: 0.0663 - accuracy: 0.9782 - val_loss: 0.8104 - val_accuracy: 0.8183
Epoch 27/50
195/195 [==============================] - 12s 64ms/step - loss: 0.0616 - accuracy: 0.9791 - val_loss: 0.7919 - val_accuracy: 0.8157
Epoch 28/50
195/195 [==============================] - 12s 64ms/step - loss: 0.0554 - accuracy: 0.9805 - val_loss: 0.9586 - val_accuracy: 0.8163
Epoch 29/50
195/195 [==============================] - 12s 64ms/step - loss: 0.0564 - accuracy: 0.9816 - val_loss: 0.8694 - val_accuracy: 0.8107
Epoch 30/50
195/195 [==============================] - 13s 65ms/step - loss: 0.0459 - accuracy: 0.9853 - val_loss: 1.0061 - val_accuracy: 0.8098
Epoch 31/50
195/195 [==============================] - 12s 63ms/step - loss: 0.0417 - accuracy: 0.9863 - val_loss: 1.1465 - val_accuracy: 0.8091
Epoch 32/50
195/195 [==============================] - 13s 65ms/step - loss: 0.0390 - accuracy: 0.9870 - val_loss: 1.1344 - val_accuracy: 0.8098
Epoch 33/50
195/195 [==============================] - 12s 62ms/step - loss: 0.0386 - accuracy: 0.9871 - val_loss: 1.2853 - val_accuracy: 0.8065
Epoch 34/50
195/195 [==============================] - 13s 67ms/step - loss: 0.0359 - accuracy: 0.9881 - val_loss: 1.1043 - val_accuracy: 0.8101
Epoch 35/50
195/195 [==============================] - 12s 64ms/step - loss: 0.0340 - accuracy: 0.9894 - val_loss: 1.2222 - val_accuracy: 0.8111
Epoch 36/50
195/195 [==============================] - 12s 62ms/step - loss: 0.0315 - accuracy: 0.9893 - val_loss: 1.2501 - val_accuracy: 0.8058
Epoch 37/50
195/195 [==============================] - 13s 67ms/step - loss: 0.0266 - accuracy: 0.9913 - val_loss: 1.2969 - val_accuracy: 0.8077
Epoch 38/50
195/195 [==============================] - 12s 62ms/step - loss: 0.0247 - accuracy: 0.9922 - val_loss: 1.2353 - val_accuracy: 0.8058
Epoch 39/50
195/195 [==============================] - 13s 66ms/step - loss: 0.0217 - accuracy: 0.9931 - val_loss: 1.2809 - val_accuracy: 0.8091
Epoch 40/50
195/195 [==============================] - 12s 60ms/step - loss: 0.0224 - accuracy: 0.9929 - val_loss: 1.2100 - val_accuracy: 0.8097
Epoch 41/50
195/195 [==============================] - 12s 60ms/step - loss: 0.0188 - accuracy: 0.9942 - val_loss: 1.2793 - val_accuracy: 0.8109
Epoch 42/50
195/195 [==============================] - 12s 61ms/step - loss: 0.0169 - accuracy: 0.9948 - val_loss: 1.4286 - val_accuracy: 0.8087
Epoch 43/50
195/195 [==============================] - 11s 58ms/step - loss: 0.0160 - accuracy: 0.9951 - val_loss: 1.4288 - val_accuracy: 0.8069
Epoch 44/50
195/195 [==============================] - 12s 60ms/step - loss: 0.0178 - accuracy: 0.9948 - val_loss: 1.5837 - val_accuracy: 0.8103
Epoch 45/50
195/195 [==============================] - 12s 59ms/step - loss: 0.0117 - accuracy: 0.9957 - val_loss: 1.8518 - val_accuracy: 0.8079
Epoch 46/50
195/195 [==============================] - 11s 59ms/step - loss: 0.0112 - accuracy: 0.9968 - val_loss: 1.9529 - val_accuracy: 0.8082
Epoch 47/50
195/195 [==============================] - 12s 61ms/step - loss: 0.0127 - accuracy: 0.9961 - val_loss: 1.6620 - val_accuracy: 0.8096
Epoch 48/50
195/195 [==============================] - 11s 58ms/step - loss: 0.0105 - accuracy: 0.9968 - val_loss: 1.9470 - val_accuracy: 0.8075
Epoch 49/50
195/195 [==============================] - 12s 61ms/step - loss: 0.0139 - accuracy: 0.9959 - val_loss: 1.8875 - val_accuracy: 0.8094
Epoch 50/50
195/195 [==============================] - 12s 59ms/step - loss: 0.0114 - accuracy: 0.9972 - val_loss: 1.9203 - val_accuracy: 0.8064
195/195 [==============================] - 3s 16ms/step - loss: 1.9203 - accuracy: 0.8064
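
The log makes one thing clear: training loss keeps falling while validation loss climbs from about 0.34 around epoch 4 to 1.92 by epoch 50, so the model overfits long before the 50 epochs are up. A minimal sketch, assuming the model and datasets defined above: keras.callbacks.EarlyStopping can halt training once val_loss stops improving and restore the best weights.

# A sketch, not part of the original run; this would replace the
# model.fit(...) call inside main() above.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss',
                                           patience=3,
                                           restore_best_weights=True)
model.fit(db_train, epochs=epochs, validation_data=db_test,
          callbacks=[early_stop])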

2. Layer-based approach

Code

import os
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, losses, optimizers, Sequential
from tensorflow.python.keras.datasets import imdb

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')

batchsz = 128  # batch size
total_words = 10000  # vocabulary size N_vocab
max_review_len = 80  # maximum sentence length s; longer sentences are truncated, shorter ones padded
embedding_len = 100  # word-vector feature length f

# Load the IMDB dataset; the data is integer-encoded, one integer per word
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=total_words)
# (x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=total_words)
print(x_train.shape, len(x_train[0]), y_train.shape)
print(x_test.shape, len(x_test[0]), y_test.shape)
#%%
x_train[0]
#%%
# Integer-to-word index
word_index = keras.datasets.imdb.get_word_index()
# for k, v in word_index.items():
#     print(k, v)
#%%
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3
# Reverse the index
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])

decode_review(x_train[8])
#%%
# x_train: [b, 80]
# x_test:  [b, 80]
# Truncate and pad the sentences to equal length; long sentences keep the tail,
# short sentences are padded at the front
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_review_len)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_review_len)
# Build the datasets: shuffle, batch, and drop the last incomplete batch
db_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
db_train = db_train.shuffle(1000).batch(batchsz, drop_remainder=True)
db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_test = db_test.batch(batchsz, drop_remainder=True)
print('x_train shape:', x_train.shape, tf.reduce_max(y_train), tf.reduce_min(y_train))
print('x_test shape:', x_test.shape)
#%%
class MyRNN(keras.Model):
    # Build a multi-layer network out of GRU layers
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # Word embedding: [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(total_words, embedding_len,
                                          input_length=max_review_len)
        # Build the RNN
        self.rnn = keras.Sequential([
            layers.GRU(units, dropout=0.5, return_sequences=True),
            layers.GRU(units, dropout=0.5)
        ])
        # Build the classification head that maps the RNN output features
        # to 2 classes: [b, 80, 100] => [b, units] => [b, 1]
        self.outlayer = Sequential([layers.Dense(32),
                                    layers.Dropout(rate=0.5),
                                    layers.ReLU(),
                                    layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # RNN compute: [b, 80, 100] => [b, units]
        x = self.rnn(x)
        # The last output of the last layer feeds the classifier: [b, units] => [b, 1]
        x = self.outlayer(x, training)
        # p(y is pos | x)
        prob = tf.sigmoid(x)
        return prob

def main():
    units = 32  # RNN state-vector length
    epochs = 50  # number of training epochs

    model = MyRNN(units)
    # Compile
    model.compile(optimizer=optimizers.Adam(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])
    # Train and validate
    model.fit(db_train, epochs=epochs, validation_data=db_test)
    # Test
    model.evaluate(db_test)

if __name__ == '__main__':
    main()
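
Because the layer-based model carries no fixed-size initial state (the Cell version's state0/state1 are hard-wired to batchsz = 128), it can score inputs of any batch size, including a single review. Below is a sketch, assuming the trained model, word_index table, and constants from the script above; encode_review is a hypothetical helper, not part of the original script.

def encode_review(text):
    # Map words to indices; out-of-vocabulary words become <UNK> (index 2)
    tokens = [word_index.get(w, 2) for w in text.lower().split()]
    tokens = [i if i < total_words else 2 for i in tokens]  # clip to the 10000-word vocab
    seq = [1] + tokens  # 1 is <START>
    return keras.preprocessing.sequence.pad_sequences([seq], maxlen=max_review_len)

x = encode_review("this movie was wonderful and the acting was great")
# model(x) returns p(y is pos | x) in [0, 1]
print(float(model(x, training=False)[0, 0]))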

Output

The output is shown below:

(25000,) 218 (25000,)
(25000,) 68 (25000,)
x_train shape: (25000, 80) tf.Tensor(1, shape=(), dtype=int64) tf.Tensor(0, shape=(), dtype=int64)
x_test shape: (25000, 80)
Epoch 1/50
195/195 [==============================] - 15s 63ms/step - loss: 0.5453 - accuracy: 0.7086 - val_loss: 0.3727 - val_accuracy: 0.8329
Epoch 2/50
195/195 [==============================] - 12s 59ms/step - loss: 0.3404 - accuracy: 0.8619 - val_loss: 0.3770 - val_accuracy: 0.8387
Epoch 3/50
195/195 [==============================] - 12s 62ms/step - loss: 0.2796 - accuracy: 0.8921 - val_loss: 0.3811 - val_accuracy: 0.8334
Epoch 4/50
195/195 [==============================] - 12s 61ms/step - loss: 0.2419 - accuracy: 0.9076 - val_loss: 0.4437 - val_accuracy: 0.8317
Epoch 5/50
195/195 [==============================] - 12s 60ms/step - loss: 0.2083 - accuracy: 0.9243 - val_loss: 0.5327 - val_accuracy: 0.8237
Epoch 6/50
195/195 [==============================] - 12s 59ms/step - loss: 0.1819 - accuracy: 0.9345 - val_loss: 0.5159 - val_accuracy: 0.8251
Epoch 7/50
195/195 [==============================] - 12s 63ms/step - loss: 0.1492 - accuracy: 0.9479 - val_loss: 0.6070 - val_accuracy: 0.8212
Epoch 8/50
195/195 [==============================] - 12s 63ms/step - loss: 0.1356 - accuracy: 0.9528 - val_loss: 0.6642 - val_accuracy: 0.8227
Epoch 9/50
195/195 [==============================] - 13s 65ms/step - loss: 0.1080 - accuracy: 0.9640 - val_loss: 0.6305 - val_accuracy: 0.8207
Epoch 10/50
195/195 [==============================] - 13s 65ms/step - loss: 0.0973 - accuracy: 0.9663 - val_loss: 0.8183 - val_accuracy: 0.8166
Epoch 11/50
195/195 [==============================] - 12s 64ms/step - loss: 0.0859 - accuracy: 0.9712 - val_loss: 0.8450 - val_accuracy: 0.8155
Epoch 12/50
195/195 [==============================] - 14s 70ms/step - loss: 0.0783 - accuracy: 0.9736 - val_loss: 0.7626 - val_accuracy: 0.8115
Epoch 13/50
195/195 [==============================] - 12s 61ms/step - loss: 0.0757 - accuracy: 0.9752 - val_loss: 0.9203 - val_accuracy: 0.8110
Epoch 14/50
195/195 [==============================] - 12s 61ms/step - loss: 0.0600 - accuracy: 0.9802 - val_loss: 1.0984 - val_accuracy: 0.8108
Epoch 15/50
195/195 [==============================] - 12s 60ms/step - loss: 0.0559 - accuracy: 0.9810 - val_loss: 1.0869 - val_accuracy: 0.8143
Epoch 16/50
195/195 [==============================] - 12s 60ms/step - loss: 0.0509 - accuracy: 0.9838 - val_loss: 1.1889 - val_accuracy: 0.8106
Epoch 17/50
195/195 [==============================] - 12s 61ms/step - loss: 0.0498 - accuracy: 0.9832 - val_loss: 1.1193 - val_accuracy: 0.8130
Epoch 18/50
195/195 [==============================] - 12s 63ms/step - loss: 0.0450 - accuracy: 0.9854 - val_loss: 1.1024 - val_accuracy: 0.8119
Epoch 19/50
195/195 [==============================] - 13s 65ms/step - loss: 0.0420 - accuracy: 0.9860 - val_loss: 1.2353 - val_accuracy: 0.8086
Epoch 20/50
195/195 [==============================] - 12s 63ms/step - loss: 0.0384 - accuracy: 0.9875 - val_loss: 1.2411 - val_accuracy: 0.8073
Epoch 21/50
195/195 [==============================] - 12s 62ms/step - loss: 0.0382 - accuracy: 0.9876 - val_loss: 1.2832 - val_accuracy: 0.8076
Epoch 22/50
195/195 [==============================] - 13s 66ms/step - loss: 0.0376 - accuracy: 0.9879 - val_loss: 1.3139 - val_accuracy: 0.8052
Epoch 23/50
195/195 [==============================] - 12s 62ms/step - loss: 0.0314 - accuracy: 0.9897 - val_loss: 1.2738 - val_accuracy: 0.8087
Epoch 24/50
195/195 [==============================] - 12s 64ms/step - loss: 0.0324 - accuracy: 0.9898 - val_loss: 1.3274 - val_accuracy: 0.8111
Epoch 25/50
195/195 [==============================] - 12s 62ms/step - loss: 0.0303 - accuracy: 0.9903 - val_loss: 1.3256 - val_accuracy: 0.8083
Epoch 26/50
195/195 [==============================] - 12s 60ms/step - loss: 0.0290 - accuracy: 0.9903 - val_loss: 1.3456 - val_accuracy: 0.8090
Epoch 27/50
195/195 [==============================] - 13s 64ms/step - loss: 0.0262 - accuracy: 0.9910 - val_loss: 1.3736 - val_accuracy: 0.8090
Epoch 28/50
195/195 [==============================] - 12s 62ms/step - loss: 0.0264 - accuracy: 0.9919 - val_loss: 1.5663 - val_accuracy: 0.8065
Epoch 29/50
195/195 [==============================] - 12s 63ms/step - loss: 0.0279 - accuracy: 0.9915 - val_loss: 1.2952 - val_accuracy: 0.8026
Epoch 30/50
195/195 [==============================] - 12s 62ms/step - loss: 0.0251 - accuracy: 0.9919 - val_loss: 1.3855 - val_accuracy: 0.8067
Epoch 31/50
195/195 [==============================] - 13s 69ms/step - loss: 0.0233 - accuracy: 0.9925 - val_loss: 1.5669 - val_accuracy: 0.8052
Epoch 32/50
195/195 [==============================] - 13s 68ms/step - loss: 0.0217 - accuracy: 0.9932 - val_loss: 1.5572 - val_accuracy: 0.8039
Epoch 33/50
195/195 [==============================] - 12s 63ms/step - loss: 0.0228 - accuracy: 0.9928 - val_loss: 1.4645 - val_accuracy: 0.8022
Epoch 34/50
195/195 [==============================] - 13s 67ms/step - loss: 0.0238 - accuracy: 0.9923 - val_loss: 1.5204 - val_accuracy: 0.8077
Epoch 35/50
195/195 [==============================] - 13s 65ms/step - loss: 0.0179 - accuracy: 0.9937 - val_loss: 1.3944 - val_accuracy: 0.8080
Epoch 36/50
195/195 [==============================] - 13s 68ms/step - loss: 0.0178 - accuracy: 0.9946 - val_loss: 1.6660 - val_accuracy: 0.8063
Epoch 37/50
195/195 [==============================] - 13s 66ms/step - loss: 0.0163 - accuracy: 0.9954 - val_loss: 1.9218 - val_accuracy: 0.8047
Epoch 38/50
195/195 [==============================] - 12s 63ms/step - loss: 0.0190 - accuracy: 0.9940 - val_loss: 1.5856 - val_accuracy: 0.8075
Epoch 39/50
195/195 [==============================] - 13s 67ms/step - loss: 0.0155 - accuracy: 0.9952 - val_loss: 1.5744 - val_accuracy: 0.8077
Epoch 40/50
195/195 [==============================] - 13s 66ms/step - loss: 0.0167 - accuracy: 0.9947 - val_loss: 1.7135 - val_accuracy: 0.8053
Epoch 41/50
195/195 [==============================] - 13s 66ms/step - loss: 0.0155 - accuracy: 0.9951 - val_loss: 1.4940 - val_accuracy: 0.8037
Epoch 42/50
195/195 [==============================] - 13s 68ms/step - loss: 0.0167 - accuracy: 0.9954 - val_loss: 1.6509 - val_accuracy: 0.8040
Epoch 43/50
195/195 [==============================] - 13s 65ms/step - loss: 0.0151 - accuracy: 0.9953 - val_loss: 1.6721 - val_accuracy: 0.8045
Epoch 44/50
195/195 [==============================] - 14s 69ms/step - loss: 0.0166 - accuracy: 0.9945 - val_loss: 1.6668 - val_accuracy: 0.8053
Epoch 45/50
195/195 [==============================] - 13s 66ms/step - loss: 0.0201 - accuracy: 0.9942 - val_loss: 1.4363 - val_accuracy: 0.8076
Epoch 46/50
195/195 [==============================] - 12s 62ms/step - loss: 0.0140 - accuracy: 0.9956 - val_loss: 1.5191 - val_accuracy: 0.8050
Epoch 47/50
195/195 [==============================] - 12s 59ms/step - loss: 0.0129 - accuracy: 0.9960 - val_loss: 1.7104 - val_accuracy: 0.8064
Epoch 48/50
195/195 [==============================] - 12s 59ms/step - loss: 0.0143 - accuracy: 0.9954 - val_loss: 1.6236 - val_accuracy: 0.8054
Epoch 49/50
195/195 [==============================] - 12s 59ms/step - loss: 0.0151 - accuracy: 0.9961 - val_loss: 1.6736 - val_accuracy: 0.8058
Epoch 50/50
195/195 [==============================] - 12s 63ms/step - loss: 0.0120 - accuracy: 0.9962 - val_loss: 1.7336 - val_accuracy: 0.8043
195/195 [==============================] - 3s 15ms/step - loss: 1.7336 - accuracy: 0.8043
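
For reference, the same layer-based architecture can also be written as a plain keras.Sequential model instead of subclassing keras.Model. This is only a sketch under the same hyperparameters (it folds the final sigmoid into the last Dense layer), not the original author's code:

model = keras.Sequential([
    layers.Embedding(total_words, embedding_len, input_length=max_review_len),
    layers.GRU(32, dropout=0.5, return_sequences=True),
    layers.GRU(32, dropout=0.5),
    layers.Dense(32),
    layers.Dropout(0.5),
    layers.ReLU(),
    layers.Dense(1, activation='sigmoid'),  # outputs p(y is pos | x) directly
])
model.compile(optimizer=optimizers.Adam(0.001),
              loss=losses.BinaryCrossentropy(),
              metrics=['accuracy'])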
