Model training code

(Note: this is only the training script; to actually train, you also need load_and_process.py, cnn.py, and the other project files.)

from keras.callbacks import CSVLogger, ModelCheckpoint, EarlyStopping
from keras.callbacks import ReduceLROnPlateau
from keras.preprocessing.image import ImageDataGenerator
from load_and_process import load_fer2013
from load_and_process import preprocess_input
from models.cnn import XCEPTION
from sklearn.model_selection import train_test_split

Defining the parameters

# Parameters
batch_size = 32
num_epochs = 10000
input_shape = (48, 48, 1)
validation_split = .2
verbose = 1
num_classes = 7
patience = 50
base_path = 'models/'

# Build, configure, and print the model
# 1. Use the XCEPTION model defined in models/cnn.py (the summary below shows
#    the full Xception architecture, not the mini_XCEPTION variant).
# 2. compile: the Model-class method that configures the model for training.
#    Arg 1: optimizer, here Adam.
#    Arg 2: loss, the multi-class log loss; categorical_crossentropy (a Keras
#           backend function) computes the categorical cross-entropy between
#           the output tensor and the target tensor.
#    Arg 3: metrics, the evaluation metrics used during training and testing.
# 3. model.summary(): prints the model overview; internally it calls
#    keras.utils.print_summary.

model = XCEPTION(input_shape, num_classes)
model.compile(optimizer='adam',                # Adam optimizer
              loss='categorical_crossentropy', # multi-class log loss
              metrics=['accuracy'])
model.summary() 

Output: the printed model summary is shown below.

WARNING:tensorflow:From C:\Users\asus\Anaconda3\envs\Face\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 48, 48, 1)    0
__________________________________________________________________________________________________
block1_conv1 (Conv2D)           (None, 23, 23, 32)   288         input_1[0][0]
__________________________________________________________________________________________________
block1_conv1_bn (BatchNormaliza (None, 23, 23, 32)   128         block1_conv1[0][0]
__________________________________________________________________________________________________
block1_conv1_act (Activation)   (None, 23, 23, 32)   0           block1_conv1_bn[0][0]
__________________________________________________________________________________________________
block1_conv2 (Conv2D)           (None, 21, 21, 64)   18432       block1_conv1_act[0][0]
__________________________________________________________________________________________________
block1_conv2_bn (BatchNormaliza (None, 21, 21, 64)   256         block1_conv2[0][0]
__________________________________________________________________________________________________
block1_conv2_act (Activation)   (None, 21, 21, 64)   0           block1_conv2_bn[0][0]
__________________________________________________________________________________________________
block2_sepconv1 (SeparableConv2 (None, 21, 21, 128)  8768        block1_conv2_act[0][0]
__________________________________________________________________________________________________
block2_sepconv1_bn (BatchNormal (None, 21, 21, 128)  512         block2_sepconv1[0][0]
__________________________________________________________________________________________________
block2_sepconv2_act (Activation (None, 21, 21, 128)  0           block2_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block2_sepconv2 (SeparableConv2 (None, 21, 21, 128)  17536       block2_sepconv2_act[0][0]
__________________________________________________________________________________________________
block2_sepconv2_bn (BatchNormal (None, 21, 21, 128)  512         block2_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 11, 11, 128)  8192        block1_conv2_act[0][0]
__________________________________________________________________________________________________
block2_pool (MaxPooling2D)      (None, 11, 11, 128)  0           block2_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 11, 11, 128)  512         conv2d_1[0][0]
__________________________________________________________________________________________________
add_1 (Add)                     (None, 11, 11, 128)  0           block2_pool[0][0]                batch_normalization_1[0][0]
__________________________________________________________________________________________________
block3_sepconv1_act (Activation (None, 11, 11, 128)  0           add_1[0][0]
__________________________________________________________________________________________________
block3_sepconv1 (SeparableConv2 (None, 11, 11, 256)  33920       block3_sepconv1_act[0][0]
__________________________________________________________________________________________________
block3_sepconv1_bn (BatchNormal (None, 11, 11, 256)  1024        block3_sepconv1[0][0]
__________________________________________________________________________________________________
block3_sepconv2_act (Activation (None, 11, 11, 256)  0           block3_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block3_sepconv2 (SeparableConv2 (None, 11, 11, 256)  67840       block3_sepconv2_act[0][0]
__________________________________________________________________________________________________
block3_sepconv2_bn (BatchNormal (None, 11, 11, 256)  1024        block3_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 6, 6, 256)    32768       add_1[0][0]
__________________________________________________________________________________________________
block3_pool (MaxPooling2D)      (None, 6, 6, 256)    0           block3_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 6, 6, 256)    1024        conv2d_2[0][0]
__________________________________________________________________________________________________
add_2 (Add)                     (None, 6, 6, 256)    0           block3_pool[0][0]                batch_normalization_2[0][0]
__________________________________________________________________________________________________
block4_sepconv1_act (Activation (None, 6, 6, 256)    0           add_2[0][0]
__________________________________________________________________________________________________
block4_sepconv1 (SeparableConv2 (None, 6, 6, 728)    188672      block4_sepconv1_act[0][0]
__________________________________________________________________________________________________
block4_sepconv1_bn (BatchNormal (None, 6, 6, 728)    2912        block4_sepconv1[0][0]
__________________________________________________________________________________________________
block4_sepconv2_act (Activation (None, 6, 6, 728)    0           block4_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block4_sepconv2 (SeparableConv2 (None, 6, 6, 728)    536536      block4_sepconv2_act[0][0]
__________________________________________________________________________________________________
block4_sepconv2_bn (BatchNormal (None, 6, 6, 728)    2912        block4_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 3, 3, 728)    186368      add_2[0][0]
__________________________________________________________________________________________________
block4_pool (MaxPooling2D)      (None, 3, 3, 728)    0           block4_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 3, 3, 728)    2912        conv2d_3[0][0]
__________________________________________________________________________________________________
add_3 (Add)                     (None, 3, 3, 728)    0           block4_pool[0][0]                batch_normalization_3[0][0]
__________________________________________________________________________________________________
block5_sepconv1_act (Activation (None, 3, 3, 728)    0           add_3[0][0]
__________________________________________________________________________________________________
block5_sepconv1 (SeparableConv2 (None, 3, 3, 728)    536536      block5_sepconv1_act[0][0]
__________________________________________________________________________________________________
block5_sepconv1_bn (BatchNormal (None, 3, 3, 728)    2912        block5_sepconv1[0][0]
__________________________________________________________________________________________________
block5_sepconv2_act (Activation (None, 3, 3, 728)    0           block5_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block5_sepconv2 (SeparableConv2 (None, 3, 3, 728)    536536      block5_sepconv2_act[0][0]
__________________________________________________________________________________________________
block5_sepconv2_bn (BatchNormal (None, 3, 3, 728)    2912        block5_sepconv2[0][0]
__________________________________________________________________________________________________
block5_sepconv3_act (Activation (None, 3, 3, 728)    0           block5_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block5_sepconv3 (SeparableConv2 (None, 3, 3, 728)    536536      block5_sepconv3_act[0][0]
__________________________________________________________________________________________________
block5_sepconv3_bn (BatchNormal (None, 3, 3, 728)    2912        block5_sepconv3[0][0]
__________________________________________________________________________________________________
add_4 (Add)                     (None, 3, 3, 728)    0           block5_sepconv3_bn[0][0]         add_3[0][0]
__________________________________________________________________________________________________
block6_sepconv1_act (Activation (None, 3, 3, 728)    0           add_4[0][0]
__________________________________________________________________________________________________
block6_sepconv1 (SeparableConv2 (None, 3, 3, 728)    536536      block6_sepconv1_act[0][0]
__________________________________________________________________________________________________
block6_sepconv1_bn (BatchNormal (None, 3, 3, 728)    2912        block6_sepconv1[0][0]
__________________________________________________________________________________________________
block6_sepconv2_act (Activation (None, 3, 3, 728)    0           block6_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block6_sepconv2 (SeparableConv2 (None, 3, 3, 728)    536536      block6_sepconv2_act[0][0]
__________________________________________________________________________________________________
block6_sepconv2_bn (BatchNormal (None, 3, 3, 728)    2912        block6_sepconv2[0][0]
__________________________________________________________________________________________________
block6_sepconv3_act (Activation (None, 3, 3, 728)    0           block6_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block6_sepconv3 (SeparableConv2 (None, 3, 3, 728)    536536      block6_sepconv3_act[0][0]
__________________________________________________________________________________________________
block6_sepconv3_bn (BatchNormal (None, 3, 3, 728)    2912        block6_sepconv3[0][0]
__________________________________________________________________________________________________
add_5 (Add)                     (None, 3, 3, 728)    0           block6_sepconv3_bn[0][0]         add_4[0][0]
__________________________________________________________________________________________________
block7_sepconv1_act (Activation (None, 3, 3, 728)    0           add_5[0][0]
__________________________________________________________________________________________________
block7_sepconv1 (SeparableConv2 (None, 3, 3, 728)    536536      block7_sepconv1_act[0][0]
__________________________________________________________________________________________________
block7_sepconv1_bn (BatchNormal (None, 3, 3, 728)    2912        block7_sepconv1[0][0]
__________________________________________________________________________________________________
block7_sepconv2_act (Activation (None, 3, 3, 728)    0           block7_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block7_sepconv2 (SeparableConv2 (None, 3, 3, 728)    536536      block7_sepconv2_act[0][0]
__________________________________________________________________________________________________
block7_sepconv2_bn (BatchNormal (None, 3, 3, 728)    2912        block7_sepconv2[0][0]
__________________________________________________________________________________________________
block7_sepconv3_act (Activation (None, 3, 3, 728)    0           block7_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block7_sepconv3 (SeparableConv2 (None, 3, 3, 728)    536536      block7_sepconv3_act[0][0]
__________________________________________________________________________________________________
block7_sepconv3_bn (BatchNormal (None, 3, 3, 728)    2912        block7_sepconv3[0][0]
__________________________________________________________________________________________________
add_6 (Add)                     (None, 3, 3, 728)    0           block7_sepconv3_bn[0][0]         add_5[0][0]
__________________________________________________________________________________________________
block8_sepconv1_act (Activation (None, 3, 3, 728)    0           add_6[0][0]
__________________________________________________________________________________________________
block8_sepconv1 (SeparableConv2 (None, 3, 3, 728)    536536      block8_sepconv1_act[0][0]
__________________________________________________________________________________________________
block8_sepconv1_bn (BatchNormal (None, 3, 3, 728)    2912        block8_sepconv1[0][0]
__________________________________________________________________________________________________
block8_sepconv2_act (Activation (None, 3, 3, 728)    0           block8_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block8_sepconv2 (SeparableConv2 (None, 3, 3, 728)    536536      block8_sepconv2_act[0][0]
__________________________________________________________________________________________________
block8_sepconv2_bn (BatchNormal (None, 3, 3, 728)    2912        block8_sepconv2[0][0]
__________________________________________________________________________________________________
block8_sepconv3_act (Activation (None, 3, 3, 728)    0           block8_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block8_sepconv3 (SeparableConv2 (None, 3, 3, 728)    536536      block8_sepconv3_act[0][0]
__________________________________________________________________________________________________
block8_sepconv3_bn (BatchNormal (None, 3, 3, 728)    2912        block8_sepconv3[0][0]
__________________________________________________________________________________________________
add_7 (Add)                     (None, 3, 3, 728)    0           block8_sepconv3_bn[0][0]         add_6[0][0]
__________________________________________________________________________________________________
block9_sepconv1_act (Activation (None, 3, 3, 728)    0           add_7[0][0]
__________________________________________________________________________________________________
block9_sepconv1 (SeparableConv2 (None, 3, 3, 728)    536536      block9_sepconv1_act[0][0]
__________________________________________________________________________________________________
block9_sepconv1_bn (BatchNormal (None, 3, 3, 728)    2912        block9_sepconv1[0][0]
__________________________________________________________________________________________________
block9_sepconv2_act (Activation (None, 3, 3, 728)    0           block9_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block9_sepconv2 (SeparableConv2 (None, 3, 3, 728)    536536      block9_sepconv2_act[0][0]
__________________________________________________________________________________________________
block9_sepconv2_bn (BatchNormal (None, 3, 3, 728)    2912        block9_sepconv2[0][0]
__________________________________________________________________________________________________
block9_sepconv3_act (Activation (None, 3, 3, 728)    0           block9_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block9_sepconv3 (SeparableConv2 (None, 3, 3, 728)    536536      block9_sepconv3_act[0][0]
__________________________________________________________________________________________________
block9_sepconv3_bn (BatchNormal (None, 3, 3, 728)    2912        block9_sepconv3[0][0]
__________________________________________________________________________________________________
add_8 (Add)                     (None, 3, 3, 728)    0           block9_sepconv3_bn[0][0]         add_7[0][0]
__________________________________________________________________________________________________
block10_sepconv1_act (Activatio (None, 3, 3, 728)    0           add_8[0][0]
__________________________________________________________________________________________________
block10_sepconv1 (SeparableConv (None, 3, 3, 728)    536536      block10_sepconv1_act[0][0]
__________________________________________________________________________________________________
block10_sepconv1_bn (BatchNorma (None, 3, 3, 728)    2912        block10_sepconv1[0][0]
__________________________________________________________________________________________________
block10_sepconv2_act (Activatio (None, 3, 3, 728)    0           block10_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block10_sepconv2 (SeparableConv (None, 3, 3, 728)    536536      block10_sepconv2_act[0][0]
__________________________________________________________________________________________________
block10_sepconv2_bn (BatchNorma (None, 3, 3, 728)    2912        block10_sepconv2[0][0]
__________________________________________________________________________________________________
block10_sepconv3_act (Activatio (None, 3, 3, 728)    0           block10_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block10_sepconv3 (SeparableConv (None, 3, 3, 728)    536536      block10_sepconv3_act[0][0]
__________________________________________________________________________________________________
block10_sepconv3_bn (BatchNorma (None, 3, 3, 728)    2912        block10_sepconv3[0][0]
__________________________________________________________________________________________________
add_9 (Add)                     (None, 3, 3, 728)    0           block10_sepconv3_bn[0][0]        add_8[0][0]
__________________________________________________________________________________________________
block11_sepconv1_act (Activatio (None, 3, 3, 728)    0           add_9[0][0]
__________________________________________________________________________________________________
block11_sepconv1 (SeparableConv (None, 3, 3, 728)    536536      block11_sepconv1_act[0][0]
__________________________________________________________________________________________________
block11_sepconv1_bn (BatchNorma (None, 3, 3, 728)    2912        block11_sepconv1[0][0]
__________________________________________________________________________________________________
block11_sepconv2_act (Activatio (None, 3, 3, 728)    0           block11_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block11_sepconv2 (SeparableConv (None, 3, 3, 728)    536536      block11_sepconv2_act[0][0]
__________________________________________________________________________________________________
block11_sepconv2_bn (BatchNorma (None, 3, 3, 728)    2912        block11_sepconv2[0][0]
__________________________________________________________________________________________________
block11_sepconv3_act (Activatio (None, 3, 3, 728)    0           block11_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block11_sepconv3 (SeparableConv (None, 3, 3, 728)    536536      block11_sepconv3_act[0][0]
__________________________________________________________________________________________________
block11_sepconv3_bn (BatchNorma (None, 3, 3, 728)    2912        block11_sepconv3[0][0]
__________________________________________________________________________________________________
add_10 (Add)                    (None, 3, 3, 728)    0           block11_sepconv3_bn[0][0]        add_9[0][0]
__________________________________________________________________________________________________
block12_sepconv1_act (Activatio (None, 3, 3, 728)    0           add_10[0][0]
__________________________________________________________________________________________________
block12_sepconv1 (SeparableConv (None, 3, 3, 728)    536536      block12_sepconv1_act[0][0]
__________________________________________________________________________________________________
block12_sepconv1_bn (BatchNorma (None, 3, 3, 728)    2912        block12_sepconv1[0][0]
__________________________________________________________________________________________________
block12_sepconv2_act (Activatio (None, 3, 3, 728)    0           block12_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block12_sepconv2 (SeparableConv (None, 3, 3, 728)    536536      block12_sepconv2_act[0][0]
__________________________________________________________________________________________________
block12_sepconv2_bn (BatchNorma (None, 3, 3, 728)    2912        block12_sepconv2[0][0]
__________________________________________________________________________________________________
block12_sepconv3_act (Activatio (None, 3, 3, 728)    0           block12_sepconv2_bn[0][0]
__________________________________________________________________________________________________
block12_sepconv3 (SeparableConv (None, 3, 3, 728)    536536      block12_sepconv3_act[0][0]
__________________________________________________________________________________________________
block12_sepconv3_bn (BatchNorma (None, 3, 3, 728)    2912        block12_sepconv3[0][0]
__________________________________________________________________________________________________
add_11 (Add)                    (None, 3, 3, 728)    0           block12_sepconv3_bn[0][0]        add_10[0][0]
__________________________________________________________________________________________________
block13_sepconv1_act (Activatio (None, 3, 3, 728)    0           add_11[0][0]
__________________________________________________________________________________________________
block13_sepconv1 (SeparableConv (None, 3, 3, 728)    536536      block13_sepconv1_act[0][0]
__________________________________________________________________________________________________
block13_sepconv1_bn (BatchNorma (None, 3, 3, 728)    2912        block13_sepconv1[0][0]
__________________________________________________________________________________________________
block13_sepconv2_act (Activatio (None, 3, 3, 728)    0           block13_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block13_sepconv2 (SeparableConv (None, 3, 3, 1024)   752024      block13_sepconv2_act[0][0]
__________________________________________________________________________________________________
block13_sepconv2_bn (BatchNorma (None, 3, 3, 1024)   4096        block13_sepconv2[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 2, 2, 1024)   745472      add_11[0][0]
__________________________________________________________________________________________________
block13_pool (MaxPooling2D)     (None, 2, 2, 1024)   0           block13_sepconv2_bn[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 2, 2, 1024)   4096        conv2d_4[0][0]
__________________________________________________________________________________________________
add_12 (Add)                    (None, 2, 2, 1024)   0           block13_pool[0][0]               batch_normalization_4[0][0]
__________________________________________________________________________________________________
block14_sepconv1 (SeparableConv (None, 2, 2, 1536)   1582080     add_12[0][0]
__________________________________________________________________________________________________
block14_sepconv1_bn (BatchNorma (None, 2, 2, 1536)   6144        block14_sepconv1[0][0]
__________________________________________________________________________________________________
block14_sepconv1_act (Activatio (None, 2, 2, 1536)   0           block14_sepconv1_bn[0][0]
__________________________________________________________________________________________________
block14_sepconv2 (SeparableConv (None, 2, 2, 2048)   3159552     block14_sepconv1_act[0][0]
__________________________________________________________________________________________________
block14_sepconv2_bn (BatchNorma (None, 2, 2, 2048)   8192        block14_sepconv2[0][0]
__________________________________________________________________________________________________
block14_sepconv2_act (Activatio (None, 2, 2, 2048)   0           block14_sepconv2_bn[0][0]
__________________________________________________________________________________________________
avg_pool (GlobalAveragePooling2 (None, 2048)         0           block14_sepconv2_act[0][0]
__________________________________________________________________________________________________
predictions (Dense)             (None, 7)            14343       avg_pool[0][0]
==================================================================================================
Total params: 20,875,247
Trainable params: 20,820,719
Non-trainable params: 54,528
__________________________________________________________________________________________________

Some definitions used in my project

# Callback definitions: the pieces that go into the callbacks list below.
# Callbacks let you observe the model's internal state and statistics during training.
# base_path = 'models/'. Note: the _emotion_training.log file can sit at 0 KB
# mid-run, most likely because CSVLogger only appends a row when an epoch ends,
# so nothing shows up until the first epoch completes.

# 1. Define the log file path.

# 2. CSVLogger: streams each epoch's results to a CSV file.
# Arg 1: filename=log_file_path, the CSV file name, e.g. 'run/log.csv'.
# Arg 2: append=False, overwrite the file if it already exists.

# 3. EarlyStopping: stop training once the monitored quantity stops improving.
# Arg 1: monitor='val_loss', the quantity to monitor.
# Arg 2: patience, the number of epochs with no improvement after which
#        training is stopped.

# 4. ReduceLROnPlateau: reduce the learning rate when the monitored metric plateaus.
# Arg 1: monitor='val_loss', the quantity to monitor.
# Arg 2: factor=0.1, the factor by which the learning rate is reduced
#        (new learning rate = learning rate * factor).
# Arg 3: patience=int(patience/4), epochs with no improvement before the rate
#        is reduced (50/4 truncates to 12, which matches the reductions at
#        epochs 44, 57, 69, ... in the log below).
# Arg 4: verbose: integer; 0 = quiet, 1 = print update messages.

log_file_path = base_path + '_emotion_training.log'
csv_logger = CSVLogger(log_file_path, append=False)
early_stop = EarlyStopping('val_loss', patience=patience)
reduce_lr = ReduceLROnPlateau('val_loss', factor=0.1, patience=int(patience/4), verbose=1)
# Checkpoint location and naming, used by ModelCheckpoint below
trained_models_path = base_path + 'XCEPTION'
model_names = trained_models_path + '.{epoch:02d}-{val_acc:.2f}.hdf5'

# 1. ModelCheckpoint: saves the model after every epoch.
# Arg 1: filepath=model_names, the path template for saved models
#        (e.g. models/XCEPTION.45-0.67.hdf5, as seen in the log below).
# Arg 2: monitor='val_loss', the quantity to monitor.
# Arg 3: verbose (0/1), verbosity mode.
# Arg 4: save_best_only=True, so the best model on the monitored quantity
#        is never overwritten by a worse one.

# 2. callbacks: the list of callback functions applied during training,
#    passed to fit_generator through its callbacks parameter.

model_checkpoint = ModelCheckpoint(model_names, 'val_loss', verbose=1, save_best_only=True)
callbacks = [model_checkpoint, csv_logger, early_stop, reduce_lr]
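
As an aside, the CSV that CSVLogger streams to disk gives a second way to inspect the training curves after the fact, independent of the in-memory history object. A minimal sketch, assuming the column names (epoch, acc, loss, val_acc, val_loss) that this Keras version writes:

import pandas as pd

# Read back the per-epoch metrics streamed to models/_emotion_training.log
log = pd.read_csv(log_file_path)
print(log[['epoch', 'acc', 'val_acc', 'loss', 'val_loss']].tail())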
# Load the dataset using load_fer2013() and preprocess_input(faces),
# both already defined in load_and_process.py
faces, emotions = load_fer2013()
faces = preprocess_input(faces)
num_samples, num_classes = emotions.shape  # data dimensions

# Split into training and test sets; (xtest, ytest) is later passed to
# fit_generator as validation_data
xtrain, xtest, ytrain, ytest = train_test_split(faces, emotions, test_size=0.2, shuffle=True)
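
load_and_process.py itself is not shown in this post. For reference, here is a minimal sketch of what its two functions typically look like for FER2013; the CSV path, column names, and the [-1, 1] scaling convention are assumptions about that file, not the author's exact code:

import numpy as np
import pandas as pd

def load_fer2013(path='fer2013/fer2013.csv'):
    # FER2013 ships as a CSV with an 'emotion' column (labels 0-6) and a
    # 'pixels' column of 48*48 space-separated grayscale values
    data = pd.read_csv(path)
    faces = np.array([np.array(row.split(), dtype='float32').reshape(48, 48)
                      for row in data['pixels']])
    faces = np.expand_dims(faces, -1)                  # shape (N, 48, 48, 1)
    emotions = pd.get_dummies(data['emotion']).values  # one-hot, shape (N, 7)
    return faces, emotions

def preprocess_input(x):
    # Scale pixel values from [0, 255] to [-1, 1]
    x = x / 255.0
    return 2.0 * (x - 0.5)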

# Data augmentation: an image generator that augments each batch on the fly,
# effectively enlarging the dataset.
# ImageDataGenerator generates batches of tensor image data with real-time
# augmentation, looping over the data indefinitely (batch by batch).
# Arg 1: featurewise_center: boolean; set the input mean to 0, feature-wise.
# Arg 2: featurewise_std_normalization: boolean; divide inputs by the dataset
#        standard deviation, feature-wise.
# Arg 3: rotation_range: integer; degree range for random rotations.
# Arg 4: width_shift_range: float, 1-D array, or integer;
#        a float < 1 is a fraction of the total width, >= 1 a pixel count.
# Arg 5: height_shift_range: float, 1-D array, or integer (same convention,
#        for height).
# Arg 6: zoom_range: float or [lower, upper]; random zoom range. A float z
#        means [lower, upper] = [1 - z, 1 + z].
# Arg 7: horizontal_flip: boolean; randomly flip inputs horizontally.

data_generator = ImageDataGenerator(featurewise_center=False,
                                    featurewise_std_normalization=False,
                                    rotation_range=10,
                                    width_shift_range=0.1,
                                    height_shift_range=0.1,
                                    zoom_range=.1,
                                    horizontal_flip=True)
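
Before committing to a long run, it can be worth eyeballing what these augmentation settings actually do. A small sketch that pulls one augmented batch and displays nine faces (the 3x3 grid and gray colormap are arbitrary choices, not part of the original script):

import matplotlib.pyplot as plt

# Draw one augmented batch from the generator and show the first nine faces
xbatch, ybatch = next(data_generator.flow(xtrain, ytrain, batch_size=9))
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(xbatch[i].reshape(48, 48), cmap='gray')
    plt.axis('off')
plt.show()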

# Note: to plot the accuracy and loss curves later, remember to assign the
# return value of training to the history variable.

# Training with data augmentation, batch_size = 32.
# fit_generator trains the model batch by batch like fit, but the generator
# runs in parallel with the model for efficiency.
# Arg 1: the generator; data_generator.flow(xtrain, ytrain, batch_size)
#        yields (x_batch, y_batch) tuples.
# Arg 2: steps_per_epoch, the total number of steps (batches of samples) to
#        draw from the generator before declaring one epoch finished and
#        starting the next; it should normally equal the number of samples
#        divided by the batch size (len(xtrain) / batch_size).
# Arg 3: epochs: integer, the total number of training epochs; one epoch is
#        one pass over the entire dataset. epochs=num_epochs (num_epochs = 10000).
# Arg 4: verbose: 0, 1, or 2, the logging mode (0 = silent, 1 = progress bar,
#        2 = one line per epoch); verbose=1 shows the progress bar.
# Arg 5: callbacks: a list of keras.callbacks.Callback instances invoked
#        during training.
# Arg 6: validation_data=(xtest, ytest): data on which the loss and model
#        metrics are evaluated at the end of each epoch; the model is not
#        trained on this data.
history = model.fit_generator(data_generator.flow(xtrain, ytrain, batch_size),
                              # a float here is why the log below shows the odd
                              # "898/897" step count; int(len(xtrain) / batch_size)
                              # would avoid it
                              steps_per_epoch=len(xtrain) / batch_size,
                              epochs=num_epochs, verbose=1,
                              callbacks=callbacks,
                              validation_data=(xtest, ytest))
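
Note that fit_generator is deprecated in TensorFlow 2.x, where model.fit accepts generators directly; on a newer stack the equivalent call would look roughly like this (an untested sketch, not the code this log was produced with):

history = model.fit(data_generator.flow(xtrain, ytrain, batch_size),
                    steps_per_epoch=len(xtrain) // batch_size,
                    epochs=num_epochs, verbose=1,
                    callbacks=callbacks,
                    validation_data=(xtest, ytest))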

Output: training results

WARNING:tensorflow:From C:\Users\asus\Anaconda3\envs\Face\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/10000
898/897 [==============================] - 253s 282ms/step - loss: 1.6659 - acc: 0.3397 - val_loss: 1.9674 - val_acc: 0.3526
Epoch 00001: val_loss improved from inf to 1.96741, saving model to models/XCEPTION.01-0.35.hdf5
Epoch 2/10000
898/897 [==============================] - 232s 258ms/step - loss: 1.4373 - acc: 0.4454 - val_loss: 3.1111 - val_acc: 0.3649
Epoch 00002: val_loss did not improve from 1.96741
Epoch 3/10000
898/897 [==============================] - 232s 258ms/step - loss: 1.3367 - acc: 0.4948 - val_loss: 1.3311 - val_acc: 0.5010
Epoch 00003: val_loss improved from 1.96741 to 1.33113, saving model to models/XCEPTION.03-0.50.hdf5
Epoch 4/10000
898/897 [==============================] - 234s 260ms/step - loss: 1.2640 - acc: 0.5266 - val_loss: 1.4253 - val_acc: 0.5006
Epoch 00004: val_loss did not improve from 1.33113
Epoch 5/10000
898/897 [==============================] - 234s 261ms/step - loss: 1.2270 - acc: 0.5398 - val_loss: 1.1800 - val_acc: 0.5614
Epoch 00005: val_loss improved from 1.33113 to 1.17996, saving model to models/XCEPTION.05-0.56.hdf5
Epoch 6/10000
898/897 [==============================] - 232s 258ms/step - loss: 1.1928 - acc: 0.5538 - val_loss: 1.4678 - val_acc: 0.5587
Epoch 00006: val_loss did not improve from 1.17996
Epoch 7/10000
898/897 [==============================] - 233s 259ms/step - loss: 1.2171 - acc: 0.5520 - val_loss: 1.2607 - val_acc: 0.5194
Epoch 00007: val_loss did not improve from 1.17996
Epoch 8/10000
898/897 [==============================] - 235s 262ms/step - loss: 1.2409 - acc: 0.5437 - val_loss: 2.4143 - val_acc: 0.4160
Epoch 00008: val_loss did not improve from 1.17996
Epoch 9/10000
898/897 [==============================] - 235s 262ms/step - loss: 1.1859 - acc: 0.5568 - val_loss: 1.1573 - val_acc: 0.5697
Epoch 00009: val_loss improved from 1.17996 to 1.15731, saving model to models/XCEPTION.09-0.57.hdf5
Epoch 10/10000
898/897 [==============================] - 237s 263ms/step - loss: 1.1067 - acc: 0.5838 - val_loss: 4.0526 - val_acc: 0.4299
Epoch 00010: val_loss did not improve from 1.15731
Epoch 11/10000
898/897 [==============================] - 237s 264ms/step - loss: 1.0921 - acc: 0.5913 - val_loss: 1.1778 - val_acc: 0.5471
Epoch 00011: val_loss did not improve from 1.15731
Epoch 12/10000
898/897 [==============================] - 237s 264ms/step - loss: 1.0740 - acc: 0.5936 - val_loss: 1.1783 - val_acc: 0.5600
Epoch 00012: val_loss did not improve from 1.15731
Epoch 13/10000
898/897 [==============================] - 236s 262ms/step - loss: 1.0592 - acc: 0.6000 - val_loss: 1.0890 - val_acc: 0.5977
Epoch 00013: val_loss improved from 1.15731 to 1.08896, saving model to models/XCEPTION.13-0.60.hdf5
Epoch 14/10000
898/897 [==============================] - 234s 261ms/step - loss: 1.0431 - acc: 0.6058 - val_loss: 1.0631 - val_acc: 0.5954
Epoch 00014: val_loss improved from 1.08896 to 1.06306, saving model to models/XCEPTION.14-0.60.hdf5
Epoch 15/10000
898/897 [==============================] - 234s 261ms/step - loss: 1.0290 - acc: 0.6111 - val_loss: 1.0599 - val_acc: 0.6004
Epoch 00015: val_loss improved from 1.06306 to 1.05987, saving model to models/XCEPTION.15-0.60.hdf5
Epoch 16/10000
898/897 [==============================] - 234s 261ms/step - loss: 1.0108 - acc: 0.6183 - val_loss: 1.0576 - val_acc: 0.6030
Epoch 00016: val_loss improved from 1.05987 to 1.05763, saving model to models/XCEPTION.16-0.60.hdf5
Epoch 17/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.9956 - acc: 0.6226 - val_loss: 1.0592 - val_acc: 0.6127
Epoch 00017: val_loss did not improve from 1.05763
Epoch 18/10000
898/897 [==============================] - 235s 262ms/step - loss: 0.9908 - acc: 0.6273 - val_loss: 1.1557 - val_acc: 0.5561
Epoch 00018: val_loss did not improve from 1.05763
Epoch 19/10000
898/897 [==============================] - 235s 262ms/step - loss: 0.9916 - acc: 0.6267 - val_loss: 1.0356 - val_acc: 0.6177
Epoch 00019: val_loss improved from 1.05763 to 1.03557, saving model to models/XCEPTION.19-0.62.hdf5
Epoch 20/10000
898/897 [==============================] - 232s 259ms/step - loss: 0.9775 - acc: 0.6308 - val_loss: 1.0222 - val_acc: 0.6134
Epoch 00020: val_loss improved from 1.03557 to 1.02222, saving model to models/XCEPTION.20-0.61.hdf5
Epoch 21/10000
898/897 [==============================] - 233s 259ms/step - loss: 0.9570 - acc: 0.6389 - val_loss: 1.0285 - val_acc: 0.6223
Epoch 00021: val_loss did not improve from 1.02222
Epoch 22/10000
898/897 [==============================] - 234s 260ms/step - loss: 0.9470 - acc: 0.6449 - val_loss: 1.1531 - val_acc: 0.5963
Epoch 00022: val_loss did not improve from 1.02222
Epoch 23/10000
898/897 [==============================] - 234s 260ms/step - loss: 0.9268 - acc: 0.6508 - val_loss: 1.0245 - val_acc: 0.6297
Epoch 00023: val_loss did not improve from 1.02222
Epoch 24/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.9150 - acc: 0.6570 - val_loss: 0.9913 - val_acc: 0.6385
Epoch 00024: val_loss improved from 1.02222 to 0.99130, saving model to models/XCEPTION.24-0.64.hdf5
Epoch 25/10000
898/897 [==============================] - 232s 258ms/step - loss: 0.9081 - acc: 0.6595 - val_loss: 0.9774 - val_acc: 0.6383
Epoch 00025: val_loss improved from 0.99130 to 0.97737, saving model to models/XCEPTION.25-0.64.hdf5
Epoch 26/10000
898/897 [==============================] - 233s 259ms/step - loss: 0.8984 - acc: 0.6598 - val_loss: 1.0867 - val_acc: 0.6198
Epoch 00026: val_loss did not improve from 0.97737
Epoch 27/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.8808 - acc: 0.6669 - val_loss: 1.0477 - val_acc: 0.6183
Epoch 00027: val_loss did not improve from 0.97737
Epoch 28/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.8728 - acc: 0.6712 - val_loss: 0.9732 - val_acc: 0.6396
Epoch 00028: val_loss improved from 0.97737 to 0.97319, saving model to models/XCEPTION.28-0.64.hdf5
Epoch 29/10000
898/897 [==============================] - 233s 259ms/step - loss: 0.8654 - acc: 0.6722 - val_loss: 1.1426 - val_acc: 0.6031
Epoch 00029: val_loss did not improve from 0.97319
Epoch 30/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.8746 - acc: 0.6709 - val_loss: 0.9778 - val_acc: 0.6411
Epoch 00030: val_loss did not improve from 0.97319
Epoch 31/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.8348 - acc: 0.6852 - val_loss: 0.9817 - val_acc: 0.6461
Epoch 00031: val_loss did not improve from 0.97319
Epoch 32/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.8343 - acc: 0.6871 - val_loss: 0.9618 - val_acc: 0.6467
Epoch 00032: val_loss improved from 0.97319 to 0.96177, saving model to models/XCEPTION.32-0.65.hdf5
Epoch 33/10000
898/897 [==============================] - 233s 259ms/step - loss: 0.8274 - acc: 0.6890 - val_loss: 0.9642 - val_acc: 0.6402
Epoch 00033: val_loss did not improve from 0.96177
Epoch 34/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.8166 - acc: 0.6934 - val_loss: 0.9959 - val_acc: 0.6365
Epoch 00034: val_loss did not improve from 0.96177
Epoch 35/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.8075 - acc: 0.6965 - val_loss: 1.5719 - val_acc: 0.5956
Epoch 00035: val_loss did not improve from 0.96177
Epoch 36/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.8008 - acc: 0.6971 - val_loss: 1.0286 - val_acc: 0.6360
Epoch 00036: val_loss did not improve from 0.96177
Epoch 37/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.7892 - acc: 0.7050 - val_loss: 0.9681 - val_acc: 0.6471
Epoch 00037: val_loss did not improve from 0.96177
Epoch 38/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.7820 - acc: 0.7065 - val_loss: 0.9752 - val_acc: 0.6442
Epoch 00038: val_loss did not improve from 0.96177
Epoch 39/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.7679 - acc: 0.7153 - val_loss: 0.9893 - val_acc: 0.6402
Epoch 00039: val_loss did not improve from 0.96177
Epoch 40/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.7551 - acc: 0.7129 - val_loss: 1.1149 - val_acc: 0.6375
Epoch 00040: val_loss did not improve from 0.96177
Epoch 41/10000
898/897 [==============================] - 234s 260ms/step - loss: 0.7500 - acc: 0.7208 - val_loss: 0.9677 - val_acc: 0.6566
Epoch 00041: val_loss did not improve from 0.96177
Epoch 42/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.7416 - acc: 0.7189 - val_loss: 0.9999 - val_acc: 0.6480
Epoch 00042: val_loss did not improve from 0.96177
Epoch 43/10000
898/897 [==============================] - 234s 260ms/step - loss: 0.7380 - acc: 0.7241 - val_loss: 1.0890 - val_acc: 0.6470
Epoch 00043: val_loss did not improve from 0.96177
Epoch 44/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.7204 - acc: 0.7316 - val_loss: 0.9864 - val_acc: 0.6489
Epoch 00044: val_loss did not improve from 0.96177
Epoch 00044: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 45/10000
898/897 [==============================] - 233s 259ms/step - loss: 0.6523 - acc: 0.7562 - val_loss: 0.9394 - val_acc: 0.6734
Epoch 00045: val_loss improved from 0.96177 to 0.93938, saving model to models/XCEPTION.45-0.67.hdf5
Epoch 46/10000
898/897 [==============================] - 233s 259ms/step - loss: 0.6199 - acc: 0.7698 - val_loss: 0.9427 - val_acc: 0.6764
Epoch 00046: val_loss did not improve from 0.93938
Epoch 47/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.6029 - acc: 0.7766 - val_loss: 0.9580 - val_acc: 0.6746
Epoch 00047: val_loss did not improve from 0.93938
Epoch 48/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5912 - acc: 0.7782 - val_loss: 0.9617 - val_acc: 0.6764
Epoch 00048: val_loss did not improve from 0.93938
Epoch 49/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5886 - acc: 0.7799 - val_loss: 0.9505 - val_acc: 0.6794
Epoch 00049: val_loss did not improve from 0.93938
Epoch 50/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5864 - acc: 0.7809 - val_loss: 0.9720 - val_acc: 0.6751
Epoch 00050: val_loss did not improve from 0.93938
Epoch 51/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5719 - acc: 0.7861 - val_loss: 0.9792 - val_acc: 0.6732
Epoch 00051: val_loss did not improve from 0.93938
Epoch 52/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5659 - acc: 0.7892 - val_loss: 0.9878 - val_acc: 0.6726
Epoch 00052: val_loss did not improve from 0.93938
Epoch 53/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5587 - acc: 0.7914 - val_loss: 1.0260 - val_acc: 0.6702
Epoch 00053: val_loss did not improve from 0.93938
Epoch 54/10000
898/897 [==============================] - 234s 260ms/step - loss: 0.5618 - acc: 0.7908 - val_loss: 0.9862 - val_acc: 0.6729
Epoch 00054: val_loss did not improve from 0.93938
Epoch 55/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5492 - acc: 0.7957 - val_loss: 0.9880 - val_acc: 0.6760
Epoch 00055: val_loss did not improve from 0.93938
Epoch 56/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5530 - acc: 0.7917 - val_loss: 0.9726 - val_acc: 0.6783
Epoch 00056: val_loss did not improve from 0.93938
Epoch 57/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5460 - acc: 0.7957 - val_loss: 1.0289 - val_acc: 0.6711
Epoch 00057: val_loss did not improve from 0.93938
Epoch 00057: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 58/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5254 - acc: 0.8039 - val_loss: 1.0095 - val_acc: 0.6753
Epoch 00058: val_loss did not improve from 0.93938
Epoch 59/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5281 - acc: 0.8030 - val_loss: 1.0123 - val_acc: 0.6755
Epoch 00059: val_loss did not improve from 0.93938
Epoch 60/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5275 - acc: 0.8037 - val_loss: 1.0019 - val_acc: 0.6771
Epoch 00060: val_loss did not improve from 0.93938
Epoch 61/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5301 - acc: 0.8027 - val_loss: 1.0042 - val_acc: 0.6767
Epoch 00061: val_loss did not improve from 0.93938
Epoch 62/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5250 - acc: 0.8060 - val_loss: 1.0349 - val_acc: 0.6716
Epoch 00062: val_loss did not improve from 0.93938
Epoch 63/10000
898/897 [==============================] - 235s 261ms/step - loss: 0.5296 - acc: 0.8023 - val_loss: 1.0028 - val_acc: 0.6757
Epoch 00063: val_loss did not improve from 0.93938
Epoch 64/10000
898/897 [==============================] - 235s 261ms/step - loss: 0.5255 - acc: 0.8042 - val_loss: 1.0151 - val_acc: 0.6751
Epoch 00064: val_loss did not improve from 0.93938
Epoch 65/10000
898/897 [==============================] - 235s 261ms/step - loss: 0.5244 - acc: 0.8045 - val_loss: 1.0187 - val_acc: 0.6730
Epoch 00065: val_loss did not improve from 0.93938
Epoch 66/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5204 - acc: 0.8068 - val_loss: 1.0198 - val_acc: 0.6721
Epoch 00066: val_loss did not improve from 0.93938
Epoch 67/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5224 - acc: 0.8070 - val_loss: 1.0318 - val_acc: 0.6707
Epoch 00067: val_loss did not improve from 0.93938
Epoch 68/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5231 - acc: 0.8062 - val_loss: 1.0202 - val_acc: 0.6722
Epoch 00068: val_loss did not improve from 0.93938
Epoch 69/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5227 - acc: 0.8071 - val_loss: 1.0177 - val_acc: 0.6723
Epoch 00069: val_loss did not improve from 0.93938
Epoch 00069: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
Epoch 70/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5209 - acc: 0.8068 - val_loss: 1.0175 - val_acc: 0.6734
Epoch 00070: val_loss did not improve from 0.93938
Epoch 71/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5187 - acc: 0.8069 - val_loss: 1.0146 - val_acc: 0.6737
Epoch 00071: val_loss did not improve from 0.93938
Epoch 72/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5187 - acc: 0.8070 - val_loss: 1.0090 - val_acc: 0.6743
Epoch 00072: val_loss did not improve from 0.93938
Epoch 73/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5169 - acc: 0.8086 - val_loss: 1.0193 - val_acc: 0.6725
Epoch 00073: val_loss did not improve from 0.93938
Epoch 74/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5190 - acc: 0.8039 - val_loss: 1.0111 - val_acc: 0.6746
Epoch 00074: val_loss did not improve from 0.93938
Epoch 75/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5277 - acc: 0.8041 - val_loss: 1.0150 - val_acc: 0.6747
Epoch 00075: val_loss did not improve from 0.93938
Epoch 76/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5196 - acc: 0.8078 - val_loss: 1.0149 - val_acc: 0.6737
Epoch 00076: val_loss did not improve from 0.93938
Epoch 77/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5190 - acc: 0.8070 - val_loss: 1.0311 - val_acc: 0.6708
Epoch 00077: val_loss did not improve from 0.93938
Epoch 78/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5117 - acc: 0.8096 - val_loss: 1.0127 - val_acc: 0.6744
Epoch 00078: val_loss did not improve from 0.93938
Epoch 79/10000
898/897 [==============================] - 234s 260ms/step - loss: 0.5156 - acc: 0.8066 - val_loss: 1.0194 - val_acc: 0.6722
Epoch 00079: val_loss did not improve from 0.93938
Epoch 80/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5245 - acc: 0.8001 - val_loss: 1.0184 - val_acc: 0.6712
Epoch 00080: val_loss did not improve from 0.93938
Epoch 81/10000
898/897 [==============================] - 234s 260ms/step - loss: 0.5150 - acc: 0.8093 - val_loss: 1.0117 - val_acc: 0.6744
Epoch 00081: val_loss did not improve from 0.93938
Epoch 00081: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07.
Epoch 82/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5241 - acc: 0.8075 - val_loss: 1.0180 - val_acc: 0.6733
Epoch 00082: val_loss did not improve from 0.93938
Epoch 83/10000
898/897 [==============================] - 234s 260ms/step - loss: 0.5098 - acc: 0.8085 - val_loss: 1.0125 - val_acc: 0.6746
Epoch 00083: val_loss did not improve from 0.93938
Epoch 84/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5175 - acc: 0.8051 - val_loss: 1.0328 - val_acc: 0.6700
Epoch 00084: val_loss did not improve from 0.93938
Epoch 85/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5203 - acc: 0.8064 - val_loss: 1.0094 - val_acc: 0.6755
Epoch 00085: val_loss did not improve from 0.93938
Epoch 86/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5158 - acc: 0.8060 - val_loss: 1.0068 - val_acc: 0.6744
Epoch 00086: val_loss did not improve from 0.93938
Epoch 87/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5209 - acc: 0.8076 - val_loss: 1.0155 - val_acc: 0.6736
Epoch 00087: val_loss did not improve from 0.93938
Epoch 88/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5183 - acc: 0.8058 - val_loss: 1.0295 - val_acc: 0.6707
Epoch 00088: val_loss did not improve from 0.93938
Epoch 89/10000
898/897 [==============================] - 234s 260ms/step - loss: 0.5206 - acc: 0.8086 - val_loss: 1.0127 - val_acc: 0.6744
Epoch 00089: val_loss did not improve from 0.93938
Epoch 90/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5220 - acc: 0.8059 - val_loss: 1.0149 - val_acc: 0.6739
Epoch 00090: val_loss did not improve from 0.93938
Epoch 91/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5212 - acc: 0.8060 - val_loss: 1.0130 - val_acc: 0.6743
Epoch 00091: val_loss did not improve from 0.93938
Epoch 92/10000
898/897 [==============================] - 233s 260ms/step - loss: 0.5156 - acc: 0.8068 - val_loss: 1.0169 - val_acc: 0.6732
Epoch 00092: val_loss did not improve from 0.93938
Epoch 93/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5128 - acc: 0.8084 - val_loss: 1.0116 - val_acc: 0.6740
Epoch 00093: val_loss did not improve from 0.93938
Epoch 00093: ReduceLROnPlateau reducing learning rate to 1.000000082740371e-08.
Epoch 94/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5259 - acc: 0.8028 - val_loss: 1.0192 - val_acc: 0.6730
Epoch 00094: val_loss did not improve from 0.93938
Epoch 95/10000
898/897 [==============================] - 234s 261ms/step - loss: 0.5207 - acc: 0.8085 - val_loss: 1.0285 - val_acc: 0.6709
Epoch 00095: val_loss did not improve from 0.93938

Plotting the curves

import matplotlib.pyplot as plt  # important

history_dict = history.history

# Accuracy curves
acc = history_dict['acc']
val_acc = history_dict['val_acc']
epochs = range(1, len(val_acc) + 1)
plt.plot(epochs, acc, color='orange', label='Tra_acc')
plt.plot(epochs, val_acc, 'b', label='Val_acc')
plt.legend()
plt.savefig('E:/dujuan_papers/result/X1000_acc.png', dpi=1000)
plt.show()

# Loss curves
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, color='orange', label='Tra_loss')
plt.plot(epochs, val_loss, 'b', label='Val_loss')
plt.legend()
plt.savefig('E:/dujuan_papers/result/X1000_loss.png', dpi=1000)
plt.show()
#### End
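
After training stops, the best weights live in the checkpoint files named above. A minimal sketch of loading one for inference; the file name is taken from the training log, and the emotion label order is the standard FER2013 one (an assumption about load_fer2013's one-hot encoding):

import numpy as np
from keras.models import load_model

# Load the best checkpoint recorded in the log above (lowest val_loss)
model = load_model('models/XCEPTION.45-0.67.hdf5')

# Standard FER2013 label order for classes 0-6 (assumed)
emotion_labels = ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral']

face = xtest[0:1]               # one preprocessed 48x48x1 face from the split above
probs = model.predict(face)[0]  # seven class probabilities
print(emotion_labels[int(np.argmax(probs))])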

Results: the accuracy and loss curves saved by the plotting code above (X1000_acc.png and X1000_loss.png).
