if mAP > best_mAP:
    best_mAP = mAP
    saver_best.save(sess, args.save_dir + 'best_model_Epoch_{}_step_{}_mAP_{:.4f}_loss_{:.4f}_lr_{:.7g}'.format(
        epoch, int(__global_step), best_mAP, val_loss_total.average, __lr))

As the code above shows, whenever the validation result beats the best mAP so far, the current TensorFlow model is saved; the epoch, the global_step, the new best mAP, the average validation loss (val_loss_total.average), and the learning rate are all encoded in the checkpoint filename.

tf.train.Saver().save(sess, 'ckpts/') mainly produces four files under the ckpts/ path:

model.ckpt.data-00000-of-00001: the data file of a checkpoint. It stores the value of every variable, that is, the network's weights, biases, and so on.

model.ckpt.index: the index file of a checkpoint, a binary (or otherwise non-human-readable) file. It is an immutable string table in which each key is a tensor name and each value is a serialized BundleEntryProto. Each BundleEntryProto describes a tensor's metadata: which of the "data" files contains the tensor's contents, the offset into that file, a checksum, some auxiliary data, and so on.

model.ckpt.meta: the meta file of a checkpoint, also binary and not directly readable. It stores the structure of the TensorFlow computation graph; a file such as model.ckpt-200.meta holds the graph structure, in plain terms the architecture of the neural network. The architecture normally does not change during training, so a single copy is enough, and we can save the meta file only on the first save (see the sketch after the checkpoint example below).

checkpoint: a text file that records the paths of recently saved checkpoints; here, the several best results during training.

The contents of my checkpoint file after training are as follows (Epochs 26, 30, 34, 94, 98):

model_checkpoint_path: "best_model_Epoch_98_step_50291_mAP_0.7358_loss_3.6063_lr_1e-05"
all_model_checkpoint_paths: "best_model_Epoch_26_step_13715_mAP_0.7163_loss_3.3344_lr_0.0001"
all_model_checkpoint_paths: "best_model_Epoch_30_step_15747_mAP_0.7211_loss_3.3941_lr_0.0001"
all_model_checkpoint_paths: "best_model_Epoch_34_step_17779_mAP_0.7317_loss_3.3543_lr_3e-05"
all_model_checkpoint_paths: "best_model_Epoch_94_step_48259_mAP_0.7328_loss_3.5932_lr_1e-05"
all_model_checkpoint_paths: "best_model_Epoch_98_step_50291_mAP_0.7358_loss_3.6063_lr_1e-05"

Loading a model uses the Saver.restore method. You can restore a fixed subset of the parameters or all of them:

    saver = tf.train.Saver()
    saver.restore(sess, model_path)
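
To restore only a fixed subset, build the Saver with an explicit var_list. A minimal sketch, assuming the graph has already been constructed; the scope name and checkpoint path below are hypothetical:

import tensorflow as tf

# collect only the variables under a given scope (partial restore);
# 'yolov3/darknet53_body' is a hypothetical scope name for illustration
vars_to_restore = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                                    scope='yolov3/darknet53_body')
saver_part = tf.train.Saver(var_list=vars_to_restore)  # restore a fixed subset
saver_all = tf.train.Saver()                           # restore everything

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver_part.restore(sess, './ckpts/model.ckpt')     # path is a placeholder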

The training process saved a large number of TensorFlow checkpoints:

sh-4.2$ ls
best_model_Epoch_10_step_5664_mAP_0.0729_loss_8.1562_lr_0.0001.data-00000-of-00001
best_model_Epoch_10_step_5664_mAP_0.0729_loss_8.1562_lr_0.0001.index
best_model_Epoch_10_step_5664_mAP_0.0729_loss_8.1562_lr_0.0001.meta
best_model_Epoch_12_step_6694_mAP_0.0748_loss_7.4896_lr_0.0001.data-00000-of-00001
best_model_Epoch_12_step_6694_mAP_0.0748_loss_7.4896_lr_0.0001.index
best_model_Epoch_12_step_6694_mAP_0.0748_loss_7.4896_lr_0.0001.meta
best_model_Epoch_16_step_8754_mAP_0.0775_loss_7.6105_lr_0.0001.data-00000-of-00001
best_model_Epoch_16_step_8754_mAP_0.0775_loss_7.6105_lr_0.0001.index
best_model_Epoch_16_step_8754_mAP_0.0775_loss_7.6105_lr_0.0001.meta
best_model_Epoch_20_step_10814_mAP_0.0715_loss_8.2555_lr_0.0001.data-00000-of-00001
best_model_Epoch_20_step_10814_mAP_0.0715_loss_8.2555_lr_0.0001.index
best_model_Epoch_20_step_10814_mAP_0.0715_loss_8.2555_lr_0.0001.meta
best_model_Epoch_24_step_12874_mAP_0.0717_loss_8.5067_lr_0.0001.data-00000-of-00001
best_model_Epoch_24_step_12874_mAP_0.0717_loss_8.5067_lr_0.0001.index
best_model_Epoch_24_step_12874_mAP_0.0717_loss_8.5067_lr_0.0001.meta
best_model_Epoch_26_step_13715_mAP_0.7163_loss_3.3344_lr_0.0001.data-00000-of-00001
best_model_Epoch_26_step_13715_mAP_0.7163_loss_3.3344_lr_0.0001.index
best_model_Epoch_26_step_13715_mAP_0.7163_loss_3.3344_lr_0.0001.meta
best_model_Epoch_26_step_13904_mAP_0.0726_loss_8.5469_lr_0.0001.data-00000-of-00001
best_model_Epoch_26_step_13904_mAP_0.0726_loss_8.5469_lr_0.0001.index
best_model_Epoch_26_step_13904_mAP_0.0726_loss_8.5469_lr_0.0001.meta
best_model_Epoch_26_step_13904_mAP_0.0788_loss_7.9227_lr_0.0001.data-00000-of-00001
best_model_Epoch_26_step_13904_mAP_0.0788_loss_7.9227_lr_0.0001.index
best_model_Epoch_26_step_13904_mAP_0.0788_loss_7.9227_lr_0.0001.meta
best_model_Epoch_28_step_14731_mAP_0.7189_loss_3.2645_lr_0.0001.data-00000-of-00001
best_model_Epoch_28_step_14731_mAP_0.7189_loss_3.2645_lr_0.0001.index
best_model_Epoch_28_step_14731_mAP_0.7189_loss_3.2645_lr_0.0001.meta
best_model_Epoch_30_step_15747_mAP_0.7211_loss_3.3941_lr_0.0001.data-00000-of-00001
best_model_Epoch_30_step_15747_mAP_0.7211_loss_3.3941_lr_0.0001.index
best_model_Epoch_30_step_15747_mAP_0.7211_loss_3.3941_lr_0.0001.meta
best_model_Epoch_30_step_15747_mAP_0.7310_loss_3.4142_lr_0.0001.data-00000-of-00001
best_model_Epoch_30_step_15747_mAP_0.7310_loss_3.4142_lr_0.0001.index
best_model_Epoch_30_step_15747_mAP_0.7310_loss_3.4142_lr_0.0001.meta
best_model_Epoch_30_step_15964_mAP_0.0797_loss_8.0055_lr_0.0001.data-00000-of-00001
best_model_Epoch_30_step_15964_mAP_0.0797_loss_8.0055_lr_0.0001.index
best_model_Epoch_30_step_15964_mAP_0.0797_loss_8.0055_lr_0.0001.meta
best_model_Epoch_32_step_16763_mAP_0.7343_loss_3.3788_lr_0.0001.data-00000-of-00001
best_model_Epoch_32_step_16763_mAP_0.7343_loss_3.3788_lr_0.0001.index
best_model_Epoch_32_step_16763_mAP_0.7343_loss_3.3788_lr_0.0001.meta
best_model_Epoch_34_step_17779_mAP_0.7317_loss_3.3543_lr_3e-05.data-00000-of-00001
best_model_Epoch_34_step_17779_mAP_0.7317_loss_3.3543_lr_3e-05.index
best_model_Epoch_34_step_17779_mAP_0.7317_loss_3.3543_lr_3e-05.meta
best_model_Epoch_34_step_17779_mAP_0.7419_loss_3.2561_lr_3e-05.data-00000-of-00001
best_model_Epoch_34_step_17779_mAP_0.7419_loss_3.2561_lr_3e-05.index
best_model_Epoch_34_step_17779_mAP_0.7419_loss_3.2561_lr_3e-05.meta
best_model_Epoch_36_step_18795_mAP_0.7390_loss_3.3124_lr_3e-05.data-00000-of-00001
best_model_Epoch_36_step_18795_mAP_0.7390_loss_3.3124_lr_3e-05.index
best_model_Epoch_36_step_18795_mAP_0.7390_loss_3.3124_lr_3e-05.meta
best_model_Epoch_38_step_19811_mAP_0.7427_loss_3.3455_lr_3e-05.data-00000-of-00001
best_model_Epoch_38_step_19811_mAP_0.7427_loss_3.3455_lr_3e-05.index
best_model_Epoch_38_step_19811_mAP_0.7427_loss_3.3455_lr_3e-05.meta
best_model_Epoch_38_step_19811_mAP_0.7428_loss_3.3389_lr_3e-05.data-00000-of-00001
best_model_Epoch_38_step_19811_mAP_0.7428_loss_3.3389_lr_3e-05.index
best_model_Epoch_38_step_19811_mAP_0.7428_loss_3.3389_lr_3e-05.meta
best_model_Epoch_48_step_24891_mAP_0.7435_loss_3.4464_lr_3e-05.data-00000-of-00001
best_model_Epoch_48_step_24891_mAP_0.7435_loss_3.4464_lr_3e-05.index
best_model_Epoch_48_step_24891_mAP_0.7435_loss_3.4464_lr_3e-05.meta
best_model_Epoch_4_step_2574_mAP_0.0042_loss_9.6720_lr_0.0001.data-00000-of-00001
best_model_Epoch_4_step_2574_mAP_0.0042_loss_9.6720_lr_0.0001.index
best_model_Epoch_4_step_2574_mAP_0.0042_loss_9.6720_lr_0.0001.meta
best_model_Epoch_4_step_2574_mAP_0.0428_loss_7.8114_lr_0.0001.data-00000-of-00001
best_model_Epoch_4_step_2574_mAP_0.0428_loss_7.8114_lr_0.0001.index
best_model_Epoch_4_step_2574_mAP_0.0428_loss_7.8114_lr_0.0001.meta
best_model_Epoch_4_step_2574_mAP_0.0484_loss_7.5300_lr_0.0001.data-00000-of-00001
best_model_Epoch_4_step_2574_mAP_0.0484_loss_7.5300_lr_0.0001.index
best_model_Epoch_4_step_2574_mAP_0.0484_loss_7.5300_lr_0.0001.meta
best_model_Epoch_4_step_2574_mAP_0.0492_loss_7.8293_lr_0.0001.data-00000-of-00001
best_model_Epoch_4_step_2574_mAP_0.0492_loss_7.8293_lr_0.0001.index
best_model_Epoch_4_step_2574_mAP_0.0492_loss_7.8293_lr_0.0001.meta
best_model_Epoch_52_step_26923_mAP_0.7448_loss_3.5138_lr_3e-05.data-00000-of-00001
best_model_Epoch_52_step_26923_mAP_0.7448_loss_3.5138_lr_3e-05.index
best_model_Epoch_52_step_26923_mAP_0.7448_loss_3.5138_lr_3e-05.meta
best_model_Epoch_58_step_30384_mAP_0.0728_loss_9.0220_lr_1e-05.data-00000-of-00001
best_model_Epoch_58_step_30384_mAP_0.0728_loss_9.0220_lr_1e-05.index
best_model_Epoch_58_step_30384_mAP_0.0728_loss_9.0220_lr_1e-05.meta
best_model_Epoch_6_step_3604_mAP_0.0531_loss_8.0416_lr_0.0001.data-00000-of-00001
best_model_Epoch_6_step_3604_mAP_0.0531_loss_8.0416_lr_0.0001.index
best_model_Epoch_6_step_3604_mAP_0.0531_loss_8.0416_lr_0.0001.meta
best_model_Epoch_6_step_3604_mAP_0.0633_loss_7.4694_lr_0.0001.data-00000-of-00001
best_model_Epoch_6_step_3604_mAP_0.0633_loss_7.4694_lr_0.0001.index
best_model_Epoch_6_step_3604_mAP_0.0633_loss_7.4694_lr_0.0001.meta
best_model_Epoch_74_step_38624_mAP_0.0731_loss_9.1056_lr_1e-05.data-00000-of-00001
best_model_Epoch_74_step_38624_mAP_0.0731_loss_9.1056_lr_1e-05.index
best_model_Epoch_74_step_38624_mAP_0.0731_loss_9.1056_lr_1e-05.meta
best_model_Epoch_80_step_41147_mAP_0.7455_loss_3.5324_lr_1e-05.data-00000-of-00001
best_model_Epoch_80_step_41147_mAP_0.7455_loss_3.5324_lr_1e-05.index
best_model_Epoch_80_step_41147_mAP_0.7455_loss_3.5324_lr_1e-05.meta
best_model_Epoch_8_step_4634_mAP_0.0639_loss_7.5254_lr_0.0001.data-00000-of-00001
best_model_Epoch_8_step_4634_mAP_0.0639_loss_7.5254_lr_0.0001.index
best_model_Epoch_8_step_4634_mAP_0.0639_loss_7.5254_lr_0.0001.meta
best_model_Epoch_8_step_4634_mAP_0.0710_loss_7.8850_lr_0.0001.data-00000-of-00001
best_model_Epoch_8_step_4634_mAP_0.0710_loss_7.8850_lr_0.0001.index
best_model_Epoch_8_step_4634_mAP_0.0710_loss_7.8850_lr_0.0001.meta
best_model_Epoch_8_step_4634_mAP_0.0726_loss_7.0773_lr_0.0001.data-00000-of-00001
best_model_Epoch_8_step_4634_mAP_0.0726_loss_7.0773_lr_0.0001.index
best_model_Epoch_8_step_4634_mAP_0.0726_loss_7.0773_lr_0.0001.meta
best_model_Epoch_94_step_48259_mAP_0.7328_loss_3.5932_lr_1e-05.data-00000-of-00001
best_model_Epoch_94_step_48259_mAP_0.7328_loss_3.5932_lr_1e-05.index
best_model_Epoch_94_step_48259_mAP_0.7328_loss_3.5932_lr_1e-05.meta
best_model_Epoch_98_step_50291_mAP_0.7358_loss_3.6063_lr_1e-05.data-00000-of-00001
best_model_Epoch_98_step_50291_mAP_0.7358_loss_3.6063_lr_1e-05.index
best_model_Epoch_98_step_50291_mAP_0.7358_loss_3.6063_lr_1e-05.meta
checkpoint
model-epoch_10_step_5664_loss_1.1807_lr_0.0001.data-00000-of-00001
model-epoch_10_step_5664_loss_1.1807_lr_0.0001.index
model-epoch_10_step_5664_loss_1.1807_lr_0.0001.meta
model-epoch_20_step_10667_loss_0.8032_lr_0.0001.data-00000-of-00001
model-epoch_20_step_10667_loss_0.8032_lr_0.0001.index
model-epoch_20_step_10667_loss_0.8032_lr_0.0001.meta
model-epoch_30_step_15747_loss_0.5327_lr_0.0001.data-00000-of-00001
model-epoch_30_step_15747_loss_0.5327_lr_0.0001.index
model-epoch_30_step_15747_loss_0.5327_lr_0.0001.meta
model-epoch_40_step_20827_loss_0.3800_lr_3e-05.data-00000-of-00001
model-epoch_40_step_20827_loss_0.3800_lr_3e-05.index
model-epoch_40_step_20827_loss_0.3800_lr_3e-05.meta
model-epoch_50_step_25907_loss_0.3512_lr_3e-05.data-00000-of-00001
model-epoch_50_step_25907_loss_0.3512_lr_3e-05.index
model-epoch_50_step_25907_loss_0.3512_lr_3e-05.meta
model-epoch_50_step_25907_loss_0.3513_lr_3e-05.data-00000-of-00001
model-epoch_50_step_25907_loss_0.3513_lr_3e-05.index
model-epoch_50_step_25907_loss_0.3513_lr_3e-05.meta
model-epoch_50_step_25907_loss_0.3590_lr_3e-05.data-00000-of-00001
model-epoch_50_step_25907_loss_0.3590_lr_3e-05.index
model-epoch_50_step_25907_loss_0.3590_lr_3e-05.meta
model-epoch_50_step_26264_loss_0.3299_lr_3e-05.data-00000-of-00001
model-epoch_50_step_26264_loss_0.3299_lr_3e-05.index
model-epoch_50_step_26264_loss_0.3299_lr_3e-05.meta
model-epoch_50_step_26264_loss_0.3480_lr_3e-05.data-00000-of-00001
model-epoch_50_step_26264_loss_0.3480_lr_3e-05.index
model-epoch_50_step_26264_loss_0.3480_lr_3e-05.meta
model-epoch_60_step_30987_loss_0.3373_lr_1e-05.data-00000-of-00001
model-epoch_60_step_30987_loss_0.3373_lr_1e-05.index
model-epoch_60_step_30987_loss_0.3373_lr_1e-05.meta
model-epoch_60_step_30987_loss_0.3422_lr_1e-05.data-00000-of-00001
model-epoch_60_step_30987_loss_0.3422_lr_1e-05.index
model-epoch_60_step_30987_loss_0.3422_lr_1e-05.meta
model-epoch_60_step_30987_loss_0.3430_lr_1e-05.data-00000-of-00001
model-epoch_60_step_30987_loss_0.3430_lr_1e-05.index
model-epoch_60_step_30987_loss_0.3430_lr_1e-05.meta
model-epoch_60_step_31414_loss_0.3199_lr_1e-05.data-00000-of-00001
model-epoch_60_step_31414_loss_0.3199_lr_1e-05.index
model-epoch_60_step_31414_loss_0.3199_lr_1e-05.meta
model-epoch_60_step_31414_loss_0.3371_lr_1e-05.data-00000-of-00001
model-epoch_60_step_31414_loss_0.3371_lr_1e-05.index
model-epoch_60_step_31414_loss_0.3371_lr_1e-05.meta
model-epoch_70_step_36067_loss_0.3288_lr_1e-05.data-00000-of-00001
model-epoch_70_step_36067_loss_0.3288_lr_1e-05.index
model-epoch_70_step_36067_loss_0.3288_lr_1e-05.meta
model-epoch_70_step_36067_loss_0.3350_lr_1e-05.data-00000-of-00001
model-epoch_70_step_36067_loss_0.3350_lr_1e-05.index
model-epoch_70_step_36067_loss_0.3350_lr_1e-05.meta
model-epoch_70_step_36564_loss_0.3164_lr_1e-05.data-00000-of-00001
model-epoch_70_step_36564_loss_0.3164_lr_1e-05.index
model-epoch_70_step_36564_loss_0.3164_lr_1e-05.meta
model-epoch_70_step_36564_loss_0.3291_lr_1e-05.data-00000-of-00001
model-epoch_70_step_36564_loss_0.3291_lr_1e-05.index
model-epoch_70_step_36564_loss_0.3291_lr_1e-05.meta
model-epoch_80_step_41147_loss_0.3309_lr_1e-05.data-00000-of-00001
model-epoch_80_step_41147_loss_0.3309_lr_1e-05.index
model-epoch_80_step_41147_loss_0.3309_lr_1e-05.meta
model-epoch_80_step_41147_loss_0.3321_lr_1e-05.data-00000-of-00001
model-epoch_80_step_41147_loss_0.3321_lr_1e-05.index
model-epoch_80_step_41147_loss_0.3321_lr_1e-05.meta
model-epoch_80_step_41714_loss_0.3133_lr_1e-05.data-00000-of-00001
model-epoch_80_step_41714_loss_0.3133_lr_1e-05.index
model-epoch_80_step_41714_loss_0.3133_lr_1e-05.meta
model-epoch_80_step_41714_loss_0.3244_lr_1e-05.data-00000-of-00001
model-epoch_80_step_41714_loss_0.3244_lr_1e-05.index
model-epoch_80_step_41714_loss_0.3244_lr_1e-05.meta
model-epoch_90_step_46227_loss_0.3235_lr_1e-05.data-00000-of-00001
model-epoch_90_step_46227_loss_0.3235_lr_1e-05.index
model-epoch_90_step_46227_loss_0.3235_lr_1e-05.meta
model-epoch_90_step_46227_loss_0.3270_lr_1e-05.data-00000-of-00001
model-epoch_90_step_46227_loss_0.3270_lr_1e-05.index
model-epoch_90_step_46227_loss_0.3270_lr_1e-05.meta
model-epoch_90_step_46864_loss_0.3098_lr_1e-05.data-00000-of-00001
model-epoch_90_step_46864_loss_0.3098_lr_1e-05.index
model-epoch_90_step_46864_loss_0.3098_lr_1e-05.meta
model-epoch_90_step_46864_loss_0.3239_lr_1e-05.data-00000-of-00001
model-epoch_90_step_46864_loss_0.3239_lr_1e-05.index
model-epoch_90_step_46864_loss_0.3239_lr_1e-05.meta
sh-4.2$
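
Note that a tf.train.Saver instance keeps at most max_to_keep checkpoints (5 by default), so a listing this large has likely accumulated over several runs, as the duplicated step numbers with different losses suggest. To keep fewer files around, a minimal sketch:

import tensorflow as tf

# keep only the 3 most recent checkpoints; older files are deleted
# automatically and the `checkpoint` file is updated to match
saver = tf.train.Saver(max_to_keep=3)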

Below is the YOLOv3 train.py:

# coding: utf-8

from __future__ import division, print_function

import tensorflow as tf
import numpy as np
import logging
from tqdm import trange

import args

from utils.data_utils import get_batch_data
from utils.misc_utils import shuffle_and_overwrite, make_summary, config_learning_rate, config_optimizer, AverageMeter
from utils.eval_utils import evaluate_on_cpu, evaluate_on_gpu, get_preds_gpu, voc_eval, parse_gt_rec
from utils.nms_utils import gpu_nms

from model import yolov3

# setting loggers
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S', filename=args.progress_log_path, filemode='w')

# setting placeholders
is_training = tf.placeholder(tf.bool, name="phase_train")
handle_flag = tf.placeholder(tf.string, [], name='iterator_handle_flag')
# register the gpu nms operation here for the following evaluation scheme
pred_boxes_flag = tf.placeholder(tf.float32, [1, None, None])
pred_scores_flag = tf.placeholder(tf.float32, [1, None, None])
gpu_nms_op = gpu_nms(pred_boxes_flag, pred_scores_flag, args.class_num, args.nms_topk, args.score_threshold, args.nms_threshold)

##################
# tf.data pipeline
##################
train_dataset = tf.data.TextLineDataset(args.train_file)
train_dataset = train_dataset.shuffle(args.train_img_cnt)
train_dataset = train_dataset.batch(args.batch_size)
train_dataset = train_dataset.map(
    lambda x: tf.py_func(get_batch_data,
                         inp=[x, args.class_num, args.img_size, args.anchors, 'train', args.multi_scale_train, args.use_mix_up, args.letterbox_resize],
                         Tout=[tf.int64, tf.float32, tf.float32, tf.float32, tf.float32]),
    num_parallel_calls=args.num_threads
)
train_dataset = train_dataset.prefetch(args.prefetech_buffer)

val_dataset = tf.data.TextLineDataset(args.val_file)
val_dataset = val_dataset.batch(1)
val_dataset = val_dataset.map(
    lambda x: tf.py_func(get_batch_data,
                         inp=[x, args.class_num, args.img_size, args.anchors, 'val', False, False, args.letterbox_resize],
                         Tout=[tf.int64, tf.float32, tf.float32, tf.float32, tf.float32]),
    num_parallel_calls=args.num_threads
)
val_dataset = val_dataset.prefetch(args.prefetech_buffer)

iterator = tf.data.Iterator.from_structure(train_dataset.output_types, train_dataset.output_shapes)
train_init_op = iterator.make_initializer(train_dataset)
val_init_op = iterator.make_initializer(val_dataset)

# get an element from the chosen dataset iterator
image_ids, image, y_true_13, y_true_26, y_true_52 = iterator.get_next()
y_true = [y_true_13, y_true_26, y_true_52]

# tf.data pipeline will lose the data `static` shape, so we need to set it manually
image_ids.set_shape([None])
image.set_shape([None, None, None, 3])
for y in y_true:
    y.set_shape([None, None, None, None, None])

##################
# Model definition
##################
yolo_model = yolov3(args.class_num, args.anchors, args.use_label_smooth, args.use_focal_loss, args.batch_norm_decay, args.weight_decay, use_static_shape=False)

with tf.variable_scope('yolov3'):
    pred_feature_maps = yolo_model.forward(image, is_training=is_training)

loss = yolo_model.compute_loss(pred_feature_maps, y_true)
y_pred = yolo_model.predict(pred_feature_maps)

l2_loss = tf.losses.get_regularization_loss()

# setting restore parts and vars to update
saver_to_restore = tf.train.Saver(var_list=tf.contrib.framework.get_variables_to_restore(include=args.restore_include, exclude=args.restore_exclude))
update_vars = tf.contrib.framework.get_variables_to_restore(include=args.update_part)

tf.summary.scalar('train_batch_statistics/total_loss', loss[0])
tf.summary.scalar('train_batch_statistics/loss_xy', loss[1])
tf.summary.scalar('train_batch_statistics/loss_wh', loss[2])
tf.summary.scalar('train_batch_statistics/loss_conf', loss[3])
tf.summary.scalar('train_batch_statistics/loss_class', loss[4])
tf.summary.scalar('train_batch_statistics/loss_l2', l2_loss)
tf.summary.scalar('train_batch_statistics/loss_ratio', l2_loss / loss[0])

global_step = tf.Variable(float(args.global_step), trainable=False, collections=[tf.GraphKeys.LOCAL_VARIABLES])
if args.use_warm_up:
    learning_rate = tf.cond(tf.less(global_step, args.train_batch_num * args.warm_up_epoch),
                            lambda: args.learning_rate_init * global_step / (args.train_batch_num * args.warm_up_epoch),
                            lambda: config_learning_rate(args, global_step - args.train_batch_num * args.warm_up_epoch))
else:
    learning_rate = config_learning_rate(args, global_step)
tf.summary.scalar('learning_rate', learning_rate)

if not args.save_optimizer:
    saver_to_save = tf.train.Saver()
    saver_best = tf.train.Saver()

optimizer = config_optimizer(args.optimizer_name, learning_rate)

# set dependencies for BN ops
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    # train_op = optimizer.minimize(loss[0] + l2_loss, var_list=update_vars, global_step=global_step)
    # apply gradient clip to avoid gradient exploding
    gvs = optimizer.compute_gradients(loss[0] + l2_loss, var_list=update_vars)
    clip_grad_var = [gv if gv[0] is None else [tf.clip_by_norm(gv[0], 100.), gv[1]] for gv in gvs]
    train_op = optimizer.apply_gradients(clip_grad_var, global_step=global_step)

if args.save_optimizer:
    print('Saving optimizer parameters to checkpoint! Remember to restore the global_step in the fine-tuning afterwards.')
    saver_to_save = tf.train.Saver()
    saver_best = tf.train.Saver()

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
    saver_to_restore.restore(sess, args.restore_path)
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter(args.log_dir, sess.graph)

    print('\n----------- start to train -----------\n')

    best_mAP = -np.Inf

    for epoch in range(args.total_epoches):
        sess.run(train_init_op)
        loss_total, loss_xy, loss_wh, loss_conf, loss_class = AverageMeter(), AverageMeter(), AverageMeter(), AverageMeter(), AverageMeter()

        for i in trange(args.train_batch_num):
            _, summary, __y_pred, __y_true, __loss, __global_step, __lr = sess.run(
                [train_op, merged, y_pred, y_true, loss, global_step, learning_rate],
                feed_dict={is_training: True})

            writer.add_summary(summary, global_step=__global_step)

            loss_total.update(__loss[0], len(__y_pred[0]))
            loss_xy.update(__loss[1], len(__y_pred[0]))
            loss_wh.update(__loss[2], len(__y_pred[0]))
            loss_conf.update(__loss[3], len(__y_pred[0]))
            loss_class.update(__loss[4], len(__y_pred[0]))

            if __global_step % args.train_evaluation_step == 0 and __global_step > 0:
                # recall, precision = evaluate_on_cpu(__y_pred, __y_true, args.class_num, args.nms_topk, args.score_threshold, args.nms_threshold)
                recall, precision = evaluate_on_gpu(sess, gpu_nms_op, pred_boxes_flag, pred_scores_flag, __y_pred, __y_true, args.class_num, args.nms_threshold)

                info = "Epoch: {}, global_step: {} | loss: total: {:.2f}, xy: {:.2f}, wh: {:.2f}, conf: {:.2f}, class: {:.2f} | ".format(
                    epoch, int(__global_step), loss_total.average, loss_xy.average, loss_wh.average, loss_conf.average, loss_class.average)
                info += 'Last batch: rec: {:.3f}, prec: {:.3f} | lr: {:.5g}'.format(recall, precision, __lr)
                print(info)
                logging.info(info)

                writer.add_summary(make_summary('evaluation/train_batch_recall', recall), global_step=__global_step)
                writer.add_summary(make_summary('evaluation/train_batch_precision', precision), global_step=__global_step)

                if np.isnan(loss_total.average):
                    print('****' * 10)
                    raise ArithmeticError('Gradient exploded! Please train again and you may need modify some parameters.')

        # NOTE: this is just demo. You can set the conditions when to save the weights.
        if epoch % args.save_epoch == 0 and epoch > 0:
            if loss_total.average <= 2.:
                saver_to_save.save(sess, args.save_dir + 'model-epoch_{}_step_{}_loss_{:.4f}_lr_{:.5g}'.format(epoch, int(__global_step), loss_total.average, __lr))

        # switch to validation dataset for evaluation
        if epoch % args.val_evaluation_epoch == 0 and epoch >= args.warm_up_epoch:
            sess.run(val_init_op)

            val_loss_total, val_loss_xy, val_loss_wh, val_loss_conf, val_loss_class = \
                AverageMeter(), AverageMeter(), AverageMeter(), AverageMeter(), AverageMeter()

            val_preds = []

            for j in trange(args.val_img_cnt):
                __image_ids, __y_pred, __loss = sess.run([image_ids, y_pred, loss],
                                                         feed_dict={is_training: False})
                pred_content = get_preds_gpu(sess, gpu_nms_op, pred_boxes_flag, pred_scores_flag, __image_ids, __y_pred)
                val_preds.extend(pred_content)
                val_loss_total.update(__loss[0])
                val_loss_xy.update(__loss[1])
                val_loss_wh.update(__loss[2])
                val_loss_conf.update(__loss[3])
                val_loss_class.update(__loss[4])

            # calc mAP
            rec_total, prec_total, ap_total = AverageMeter(), AverageMeter(), AverageMeter()
            gt_dict = parse_gt_rec(args.val_file, args.img_size, args.letterbox_resize)

            info = '======> Epoch: {}, global_step: {}, lr: {:.6g} <======\n'.format(epoch, __global_step, __lr)

            for ii in range(args.class_num):
                npos, nd, rec, prec, ap = voc_eval(gt_dict, val_preds, ii, iou_thres=args.eval_threshold, use_07_metric=args.use_voc_07_metric)
                info += 'EVAL: Class {}: Recall: {:.4f}, Precision: {:.4f}, AP: {:.4f}\n'.format(ii, rec, prec, ap)
                rec_total.update(rec, npos)
                prec_total.update(prec, nd)
                ap_total.update(ap, 1)

            mAP = ap_total.average
            info += 'EVAL: Recall: {:.4f}, Precision: {:.4f}, mAP: {:.4f}\n'.format(rec_total.average, prec_total.average, mAP)
            info += 'EVAL: loss: total: {:.2f}, xy: {:.2f}, wh: {:.2f}, conf: {:.2f}, class: {:.2f}\n'.format(
                val_loss_total.average, val_loss_xy.average, val_loss_wh.average, val_loss_conf.average, val_loss_class.average)
            print(info)
            logging.info(info)

            if mAP > best_mAP:
                best_mAP = mAP
                saver_best.save(sess, args.save_dir + 'best_model_Epoch_{}_step_{}_mAP_{:.4f}_loss_{:.4f}_lr_{:.7g}'.format(
                    epoch, int(__global_step), best_mAP, val_loss_total.average, __lr))

            writer.add_summary(make_summary('evaluation/val_mAP', mAP), global_step=epoch)
            writer.add_summary(make_summary('evaluation/val_recall', rec_total.average), global_step=epoch)
            writer.add_summary(make_summary('evaluation/val_precision', prec_total.average), global_step=epoch)
            writer.add_summary(make_summary('validation_statistics/total_loss', val_loss_total.average), global_step=epoch)
            writer.add_summary(make_summary('validation_statistics/loss_xy', val_loss_xy.average), global_step=epoch)
            writer.add_summary(make_summary('validation_statistics/loss_wh', val_loss_wh.average), global_step=epoch)
            writer.add_summary(make_summary('validation_statistics/loss_conf', val_loss_conf.average), global_step=epoch)
            writer.add_summary(make_summary('validation_statistics/loss_class', val_loss_class.average), global_step=epoch)
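
To load one of the saved best models afterwards, for evaluation or fine-tuning, the .meta file can rebuild the graph before the weights are restored. A minimal sketch, where ./checkpoints/ is an assumption standing in for args.save_dir:

import tensorflow as tf

# prefix of one of the checkpoints listed above (the prefix, not the .data file)
ckpt_prefix = './checkpoints/best_model_Epoch_98_step_50291_mAP_0.7358_loss_3.6063_lr_1e-05'

with tf.Session() as sess:
    # rebuild the graph structure from the .meta file, then load the weights
    saver = tf.train.import_meta_graph(ckpt_prefix + '.meta')
    saver.restore(sess, ckpt_prefix)
    # tensors can now be fetched by name, e.g. the training-phase placeholder
    is_training = tf.get_default_graph().get_tensor_by_name('phase_train:0')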
