DL | LSTM: Predicting single characters on the wonderland (Alice's Adventures in Wonderland) novel dataset with an LSTM (deepened layers, Keras-based)

Contents

Predicting single characters on the wonderland (Alice's Adventures in Wonderland) novel dataset with an LSTM (deepened layers, Keras-based)

Design Approach

Output

Core Code


Predicting single characters on the wonderland (Alice's Adventures in Wonderland) novel dataset with an LSTM (deepened layers, Keras-based)

Design Approach

Dataset download: https://download.csdn.net/download/qq_41185868/13767751

Output

Using TensorFlow backend.
F:\Program Files\Python\Python36\lib\site-packages\tensorflow\python\framework\dtypes.py:523-532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  (repeated for _np_qint8, _np_quint8, _np_qint16, _np_quint16, _np_qint32, np_resource)
[nltk_data] Error loading punkt: <urlopen error [Errno 11004]
[nltk_data]     getaddrinfo failed>
raw_text[:10] : alice's ad
Total Characters: 144413
chars ['\n', ' ', '!', '"', "'", '(', ')', '*', ',', '-', '.', '0', '3', ':', ';', '?', '[', ']', '_', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
Total Vocab: 45
sentences 1625 ["alice's adventures in wonderland\n\nlewis carroll\n\nthe millennium fulcrum edition 3.0\n\nchapter i. down the rabbit-hole\n\nalice was beginning to get very tired of sitting by her sister on the\nbank, and of having nothing to do: once or twice she had peeped into the\nbook her sister was reading, but it had no pictures or conversations in\nit, 'and what is the use of a book,' thought alice 'without pictures or\nconversations?'", 'so she was considering in her own mind (as well as she could, for the\nhot day made her feel very sleepy and stupid), whether the pleasure\nof making a daisy-chain would be worth the trouble of getting up and\npicking the daisies, when suddenly a white rabbit with pink eyes ran\nclose by her.', "there was nothing so very remarkable in that; nor did alice think it so\nvery much out of the way to hear the rabbit say to itself, 'oh dear!", 'oh dear!', "i shall be late!'"]
lengths (1625,) [420 289 140 ... 636 553   7]
CharMapInt_dict 45 {'\n': 0, ' ': 1, '!': 2, '"': 3, "'": 4, '(': 5, ')': 6, '*': 7, ',': 8, '-': 9, '.': 10, '0': 11, '3': 12, ':': 13, ';': 14, '?': 15, '[': 16, ']': 17, '_': 18, 'a': 19, 'b': 20, 'c': 21, 'd': 22, 'e': 23, 'f': 24, 'g': 25, 'h': 26, 'i': 27, 'j': 28, 'k': 29, 'l': 30, 'm': 31, 'n': 32, 'o': 33, 'p': 34, 'q': 35, 'r': 36, 's': 37, 't': 38, 'u': 39, 'v': 40, 'w': 41, 'x': 42, 'y': 43, 'z': 44}
IntMapChar_dict 45 {0: '\n', 1: ' ', 2: '!', 3: '"', 4: "'", 5: '(', 6: ')', 7: '*', 8: ',', 9: '-', 10: '.', 11: '0', 12: '3', 13: ':', 14: ';', 15: '?', 16: '[', 17: ']', 18: '_', 19: 'a', 20: 'b', 21: 'c', 22: 'd', 23: 'e', 24: 'f', 25: 'g', 26: 'h', 27: 'i', 28: 'j', 29: 'k', 30: 'l', 31: 'm', 32: 'n', 33: 'o', 34: 'p', 35: 'q', 36: 'r', 37: 's', 38: 't', 39: 'u', 40: 'v', 41: 'w', 42: 'x', 43: 'y', 44: 'z'}
dataX: 144313 100 [[19, 30, 27, 21, 23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32], [30, 27, 21, 23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32, 1], [27, 21, 23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32, 1, 38], [21, 23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32, 1, 38, 26], [23, 4, 37, 1, 19, 22, 40, 23, 32, 38, 39, 36, 23, 37, 1, 27, 32, 1, 41, 33, 32, 22, 23, 36, 30, 19, 32, 22, 0, 0, 30, 23, 41, 27, 37, 1, 21, 19, 36, 36, 33, 30, 30, 0, 0, 38, 26, 23, 1, 31, 27, 30, 30, 23, 32, 32, 27, 39, 31, 1, 24, 39, 30, 21, 36, 39, 31, 1, 23, 22, 27, 38, 27, 33, 32, 1, 12, 10, 11, 0, 0, 21, 26, 19, 34, 38, 23, 36, 1, 27, 10, 1, 22, 33, 41, 32, 1, 38, 26, 23]]
dataY: 144313 [1, 38, 26, 23, 1]
Total patterns: 144313
X_train.shape (144313, 100, 1)
Y_train.shape (144313, 45)
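
The dictionaries and shapes above come from a standard sliding-window preprocessing (window length 100, stride 1, which is why 144413 characters yield 144313 patterns). The post does not reproduce this part of the script, so the following is a minimal sketch that would produce the same output; the filename wonderland.txt and the divide-by-vocabulary normalization are assumptions.

import numpy as np
from keras.utils import np_utils

raw_text = open('wonderland.txt', encoding='utf-8').read().lower()  # hypothetical filename

chars = sorted(list(set(raw_text)))                      # 45 distinct characters
CharMapInt_dict = dict((c, i) for i, c in enumerate(chars))
IntMapChar_dict = dict((i, c) for i, c in enumerate(chars))

seq_length = 100                                         # window size, per the shapes above
dataX, dataY = [], []
for i in range(0, len(raw_text) - seq_length):           # 144313 patterns
    dataX.append([CharMapInt_dict[ch] for ch in raw_text[i:i + seq_length]])
    dataY.append(CharMapInt_dict[raw_text[i + seq_length]])

# reshape to [samples, time steps, features]; scaling by the vocabulary size
# is an assumption carried over from the usual version of this tutorial
X_train = np.reshape(dataX, (len(dataX), seq_length, 1)) / float(len(chars))
Y_train = np_utils.to_categorical(dataY)                 # one-hot targets, (144313, 45)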
Init data,after read_out, chars: 144313 alice's adventures in wonderlandlewis carrolltge millennium fulcrum edition 3.0cgapter i. down
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 256)               264192
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 45)                11565
=================================================================
Total params: 275,757
Trainable params: 275,757
Non-trainable params: 0
_________________________________________________________________
LSTM_Model None
F:\File_Jupyter\实用代码\NeuralNetwork(神经网络)\CharacterLanguageLSTM.py:135: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
  LSTM_Model.fit(X_train[:train_index], Y_train[:train_index], nb_epoch=10, batch_size=64, callbacks=callbacks_list)
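
A quick sanity check on the Param # column: a Keras LSTM layer stores four gate matrices over the concatenated input and recurrent state, plus biases.

# LSTM params = 4 * ((input_dim + units + 1) * units):
# four gates, each with input weights, recurrent weights, and a bias
units, input_dim, n_vocab = 256, 1, 45
assert 4 * ((input_dim + units + 1) * units) == 264192   # lstm_1
assert (units + 1) * n_vocab == 11565                     # dense_1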
(per-batch progress frames omitted; final line of each epoch shown)
Epoch 1/10
2020-12-23 23:42:07.919094: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
1000/1000 [==============================] - 7s 7ms/step - loss: 3.3925
Epoch 00001: loss improved from inf to 3.39249, saving model to hdf5/weights-improvement-01-3.3925.hdf5
Epoch 2/10
1000/1000 [==============================] - 5s 5ms/step - loss: 3.0371
Epoch 00002: loss improved from 3.39249 to 3.03705, saving model to hdf5/weights-improvement-02-3.0371.hdf5
Epoch 3/10
1000/1000 [==============================] - 6s 6ms/step - loss: 3.0225
Epoch 00003: loss improved from 3.03705 to 3.02249, saving model to hdf5/weights-improvement-03-3.0225.hdf5
Epoch 4/10
1000/1000 [==============================] - 6s 6ms/step - loss: 3.0352
Epoch 00004: loss did not improve from 3.02249
Epoch 5/10
1000/1000 [==============================] - 5s 5ms/step - loss: 3.0120
Epoch 00005: loss improved from 3.02249 to 3.01205, saving model to hdf5/weights-improvement-05-3.0120.hdf5
Epoch 6/10
1000/1000 [==============================] - 5s 5ms/step - loss: 3.0070
Epoch 00006: loss improved from 3.01205 to 3.00701, saving model to hdf5/weights-improvement-06-3.0070.hdf5
Epoch 7/10
1000/1000 [==============================] - 6s 6ms/step - loss: 2.9903
Epoch 00007: loss improved from 3.00701 to 2.99027, saving model to hdf5/weights-improvement-07-2.9903.hdf5
Epoch 8/10
1000/1000 [==============================] - 6s 6ms/step - loss: 3.0064
Epoch 00008: loss did not improve from 2.99027
Epoch 9/10
1000/1000 [==============================] - 5s 5ms/step - loss: 2.9944
Epoch 00009: loss did not improve from 2.99027
Epoch 10/10
1000/1000 [==============================] - 5s 5ms/step - loss: 2.9963
Epoch 00010: loss did not improve from 2.99027
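
The weights-improvement-* filenames and the "loss improved / did not improve" messages point to a ModelCheckpoint callback with save_best_only=True. The callback code is not shown in the post; a plausible reconstruction, with train_index = 1000 inferred from the 1000/1000 progress bars and epochs= replacing the deprecated nb_epoch=, would be:

from keras.callbacks import ModelCheckpoint

# filepath pattern inferred from the saved-weights names in the log above
filepath = "hdf5/weights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1,
                             save_best_only=True, mode='min')
callbacks_list = [checkpoint]

train_index = 1000   # assumption: the log trains on only the first 1000 patterns
LSTM_Model.fit(X_train[:train_index], Y_train[:train_index],
               epochs=10, batch_size=64, callbacks=callbacks_list)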
LSTM_Pre_word.shape: (3, 45)
after LSTM read_out, chars: 3 ["\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", "\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", '\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n "\n\n!!\n\n \n !\n\n! \' \n\n\n\n\n']
LSTM_Model,Seed:
" ent down its head to hide a smile: some of the other birds
tittered audibly.'what i was going to s "
199 100
Generated Sequence:
Done.
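
The Seed / Generated Sequence output implies a greedy sampling loop: pick a random 100-character window from dataX, predict the next character, emit it, and slide the window. A minimal sketch using the dictionaries printed earlier (the 100-character generation length is an assumption):

import numpy as np

start = np.random.randint(0, len(dataX) - 1)
pattern = list(dataX[start])                             # 100-integer seed window
print('Seed:\n"', ''.join(IntMapChar_dict[v] for v in pattern), '"')

for _ in range(100):                                     # assumed generation length
    x = np.reshape(pattern, (1, len(pattern), 1)) / float(len(chars))
    prediction = LSTM_Model.predict(x, verbose=0)        # shape (1, 45)
    index = int(np.argmax(prediction))                   # greedy: most likely next char
    print(IntMapChar_dict[index], end='')
    pattern.append(index)
    pattern = pattern[1:]                                # slide the window forward
print('\nDone.')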
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_2 (LSTM)                (None, 100, 256)          264192
_________________________________________________________________
dropout_2 (Dropout)          (None, 100, 256)          0
_________________________________________________________________
lstm_3 (LSTM)                (None, 64)                82176
_________________________________________________________________
dropout_3 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 45)                2925
=================================================================
Total params: 349,293
Trainable params: 349,293
Non-trainable params: 0
_________________________________________________________________
DeepLSTM_Model None
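
The same parameter arithmetic holds for the deepened stack: lstm_2 matches lstm_1 above, and lstm_3 consumes the 256-wide sequence that lstm_2 returns.

# lstm_3's input_dim is the 256-dim per-step output of lstm_2
assert 4 * ((256 + 64 + 1) * 64) == 82176   # lstm_3
assert (64 + 1) * 45 == 2925                # dense_2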
F:\File_Jupyter\实用代码\NeuralNetwork(神经网络)\CharacterLanguageLSTM.py:246: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
  DeepLSTM_Model.fit(X_train[:train_index], Y_train[:train_index], nb_epoch=2, batch_size=256, callbacks=callbacks_list)
Epoch 1/2
1000/1000 [==============================] - 10s 10ms/step - loss: 3.7883
Epoch 00001: loss improved from inf to 3.78827, saving model to hdf5/weights-improvement-01-3.7883.hdf5
Epoch 2/2
1000/1000 [==============================] - 8s 8ms/step - loss: 3.6151
Epoch 00002: loss improved from 3.78827 to 3.61512, saving model to hdf5/weights-improvement-02-3.6151.hdf5
DeepLSTM_Pre_word.shape: (3, 45)
after DeepLSTM read_out, chars: 3 ["\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", "\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n '\n\n!!\n\n\n\n !\n\n! ' \n\n\n\n\n", '\n,\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n!\n\n "\n\n!!\n\n \n !\n\n! \' \n\n\n\n\n']

Core Code

from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Embedding

# chars_len = 45 and seq_length = 100, per the preprocessing output above

# 1) Single-layer LSTM over inputs of shape (seq_length, 1)
LSTM_Model = Sequential()
LSTM_Model.add(LSTM(256, input_shape=(X_train.shape[1], X_train.shape[2])))
LSTM_Model.add(Dropout(0.2))
LSTM_Model.add(Dense(Y_train.shape[1], activation='softmax'))  # 45-way character softmax
LSTM_Model.compile(loss='categorical_crossentropy', optimizer='adam')
print('LSTM_Model \n', LSTM_Model.summary())

# 2) Variant with a learned character embedding in front of the LSTM
embedding_vector_length = 32
LSTMWithE_Model = Sequential()
LSTMWithE_Model.add(Embedding(chars_len, embedding_vector_length, input_length=seq_length))
LSTMWithE_Model.add(LSTM(256))
LSTMWithE_Model.add(Dropout(0.2))
LSTMWithE_Model.add(Dense(Y_train.shape[1], activation='softmax'))
LSTMWithE_Model.compile(loss='categorical_crossentropy', optimizer='adam')
print(LSTMWithE_Model.summary())

# 3) Deepened (stacked) LSTM: the first layer returns the full sequence so the
#    second LSTM can consume it
DeepLSTM_Model = Sequential()
DeepLSTM_Model.add(LSTM(256, input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True))
DeepLSTM_Model.add(Dropout(0.2))
DeepLSTM_Model.add(LSTM(64))
DeepLSTM_Model.add(Dropout(0.2))
DeepLSTM_Model.add(Dense(Y_train.shape[1], activation='softmax'))
DeepLSTM_Model.compile(loss='categorical_crossentropy', optimizer='adam')
print('DeepLSTM_Model \n', DeepLSTM_Model.summary())
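
One caveat: LSTMWithE_Model starts with an Embedding layer, so unlike the other two models it expects raw integer indices of shape (samples, seq_length), not the reshaped, normalized X_train. A hedged usage sketch (slice size and epoch count are illustrative; callbacks_list is the checkpoint list reconstructed above):

import numpy as np

X_emb = np.array(dataX)                      # integer-coded windows, shape (144313, 100)
LSTMWithE_Model.fit(X_emb[:1000], Y_train[:1000],
                    epochs=2, batch_size=256, callbacks=callbacks_list)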
