Calling model.save(filepath) saves a Keras model and its weights in a single HDF5 file, which contains:

the architecture of the model, so the model can be re-created
the weights of the model
the training configuration (loss function, optimizer, etc.)
the state of the optimizer, so training can be resumed exactly where it left off

Use keras.models.load_model(filepath) to re-instantiate the model. If the file stores the training configuration, this function also compiles the model.
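
A minimal sketch of this workflow, assuming an already compiled model named model and training data X and y (the file name and variable names are placeholders):

from keras.models import load_model

model.save('my_model.h5')   # creates an HDF5 file 'my_model.h5'
del model                   # delete the existing model to show the reload works

# returns a compiled model identical to the saved one,
# including the optimizer state, so training can resume where it stopped
model = load_model('my_model.h5')
model.fit(X, y, epochs=10)  # continue training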


Saving only the model architecture, without its weights or training configuration


# save as JSON
json_string = model.to_json()
open('my_model_architecture.json', 'w').write(json_string)
from keras.models import model_from_json
model = model_from_json(open('my_model_architecture.json').read())

# save as YAML
yaml_string = model.to_yaml()
open('my_model_architecture.yaml', 'w').write(yaml_string)
from keras.models import model_from_yaml
model = model_from_yaml(open('my_model_architecture.yaml').read())

These operations serialize the model to a JSON or YAML file. The files are human-readable, and you can open and edit them by hand if needed. You can also rebuild the model directly from a saved JSON or YAML string:

# model reconstruction from JSON
from keras.models import model_from_json
model = model_from_json(json_string)

# model reconstruction from YAML
from keras.models import model_from_yaml
model = model_from_yaml(yaml_string)

Saving only the model weights


# save the weights only
model.save_weights('my_model_weights.h5')

# to load them back, first build an identical model in code
model.load_weights('my_model_weights.h5')

# to load weights into a different architecture (with some layers in common),
# e.g. for fine-tuning or transfer learning, load weights by layer name
model.load_weights('my_model_weights.h5', by_name=True)

# a typical workflow: save the architecture and weights separately, then restore both
from keras.models import model_from_json
open('my_model_architecture.json', 'w').write(json_string)
model.save_weights('my_model_weights.h5')
model = model_from_json(open('my_model_architecture.json').read())
model.load_weights('my_model_weights.h5')

Saving the model architecture, trained weights, and optimizer state during training, and loading them back


The callbacks argument in Keras lets code be invoked at the right moments during training. With the ModelCheckpoint callback we can save the model and its training state as training progresses.

keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)

1. filepath: string, path where the model file is saved
2. monitor: the quantity to monitor
3. verbose: verbosity mode, 0 or 1
4. save_best_only: if True, only the model that performs best on the monitored validation quantity is kept
5. mode: one of 'auto', 'min', 'max'; when save_best_only=True this decides how the "best" model is judged. For example, when monitoring val_acc the mode should be 'max'; when monitoring val_loss it should be 'min'. In 'auto' mode the direction is inferred from the name of the monitored quantity.
6. save_weights_only: if True, only the model weights are saved; otherwise the whole model is saved (including the architecture and configuration)
7. period: number of epochs between checkpoints

A minimal usage sketch is shown below; full runnable examples follow in the next section.
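
To make these arguments concrete, here is a minimal, hypothetical sketch. The file path pattern, monitored value, and the model/X/Y variables are placeholders, not part of the original examples:

from keras.callbacks import ModelCheckpoint

# keep only the weights of the best model seen so far, checked every 2 epochs
checkpoint = ModelCheckpoint('weights.{epoch:02d}-{val_loss:.2f}.h5',
                             monitor='val_loss',
                             verbose=1,
                             save_best_only=True,
                             save_weights_only=True,
                             mode='min',
                             period=2)

# the callback is passed to fit() via the callbacks argument
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10,
          callbacks=[checkpoint])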

Example


"""
假如原模型为:model = Sequential()model.add(Dense(2, input_dim=3, name="dense_1"))model.add(Dense(3, name="dense_2"))...model.save_weights(fname)
"""
# new model
model = Sequential()
model.add(Dense(2, input_dim=3, name="dense_1"))  # will be loaded
model.add(Dense(10, name="new_dense"))  # will not be loaded

# load weights from the first model; this only affects the first layer, dense_1
model.load_weights(fname, by_name=True)

How to Check-Point Deep Learning Models in Keras


Checkpoint Neural Network Model Improvements

# Checkpoint the weights when validation accuracy improves
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# checkpoint
filepath="weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, callbacks=callbacks_list, verbose=0)

Running the example produces the following output (truncated for brevity):

...
Epoch 00134: val_acc did not improve
Epoch 00135: val_acc did not improve
Epoch 00136: val_acc did not improve
Epoch 00137: val_acc did not improve
Epoch 00138: val_acc did not improve
Epoch 00139: val_acc did not improve
Epoch 00140: val_acc improved from 0.83465 to 0.83858, saving model to weights-improvement-140-0.84.hdf5
Epoch 00141: val_acc did not improve
Epoch 00142: val_acc did not improve
Epoch 00143: val_acc did not improve
Epoch 00144: val_acc did not improve
Epoch 00145: val_acc did not improve
Epoch 00146: val_acc improved from 0.83858 to 0.84252, saving model to weights-improvement-146-0.84.hdf5
Epoch 00147: val_acc did not improve
Epoch 00148: val_acc improved from 0.84252 to 0.84252, saving model to weights-improvement-148-0.84.hdf5
Epoch 00149: val_acc did not improve

You will see a number of files in your working directory containing the network weights in HDF5 format. For example:

...
weights-improvement-53-0.76.hdf5
weights-improvement-71-0.76.hdf5
weights-improvement-77-0.78.hdf5
weights-improvement-99-0.78.hdf5

Checkpoint Best Neural Network Model Only

# Checkpoint the weights for best model on validation accuracy
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# checkpoint
filepath="weights.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, callbacks=callbacks_list, verbose=0)

Running this example provides the following output (truncated for brevity):

...
Epoch 00139: val_acc improved from 0.79134 to 0.79134, saving model to weights.best.hdf5
Epoch 00140: val_acc did not improve
Epoch 00141: val_acc did not improve
Epoch 00142: val_acc did not improve
Epoch 00143: val_acc did not improve
Epoch 00144: val_acc improved from 0.79134 to 0.79528, saving model to weights.best.hdf5
Epoch 00145: val_acc improved from 0.79528 to 0.79528, saving model to weights.best.hdf5
Epoch 00146: val_acc did not improve
Epoch 00147: val_acc did not improve
Epoch 00148: val_acc did not improve
Epoch 00149: val_acc did not improve

You should see the weight file in your local directory.


weights.best.hdf5

Loading a Check-Pointed Neural Network Model

# How to load and use weights from a checkpoint
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# load weights
model.load_weights("weights.best.hdf5")
# Compile model (required to make predictions)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print("Created model and loaded weights from file")
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# estimate accuracy on whole dataset using loaded weights
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

Running the example produces the following output:

Created model and loaded weights from file
acc: 77.73%

References


How to Check-Point Deep Learning Models in Keras: http://blog.csdn.net/u010159842/article/details/54602217
用Keras搞一个阅读理解机器人
Keras中文文档
如何保存Keras模型
人工神经网络(三) – keras模型的保存和使用
