DL — VGG16: Transfer learning of the VGG16 network architecture (Keras) on the Knifey-Spoony dataset

Contents

Dataset

Output

Design approach

1. Base model

2. Flow diagram

Core code

More output


Dataset

Dataset — Knifey-Spoony: a detailed guide to the Knifey-Spoony dataset (introduction, download, and usage)

Output

 

Design approach

1. Base model

2. Flow diagram

Core code


from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten

# model_VGG16, num_classes, optimizer, generator_train, generator_test,
# class_weight, steps_test and plot_training_history are defined earlier in the notebook.
model_VGG16.summary()
transfer_layer = model_VGG16.get_layer('block5_pool')
print('transfer_layer.output:', transfer_layer.output)
# Take everything up to block5_pool as the convolutional feature extractor.
conv_model = Model(inputs=model_VGG16.input, outputs=transfer_layer.output)

VGG16_TL_model = Sequential()                        # Start a new Keras Sequential model.
VGG16_TL_model.add(conv_model)                       # Add the convolutional part of the VGG16 model from above.
VGG16_TL_model.add(Flatten())                        # Flatten the output of the VGG16 model because it is from a convolutional layer.
VGG16_TL_model.add(Dense(1024, activation='relu'))   # Add a dense (fully-connected) layer, combining the features VGG16 has recognized in the image.
VGG16_TL_model.add(Dropout(0.5))                     # Add a dropout layer to reduce overfitting and improve generalization to unseen data, e.g. the test-set.
VGG16_TL_model.add(Dense(num_classes, activation='softmax'))  # Add the final layer for the actual classification.

print_layer_trainable()
# Freeze the convolutional layers so only the new classifier head is trained.
conv_model.trainable = False
for layer in conv_model.layers:
    layer.trainable = False
print_layer_trainable()

loss = 'categorical_crossentropy'
metrics = ['categorical_accuracy']
VGG16_TL_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)

epochs = 20
steps_per_epoch = 100
history = VGG16_TL_model.fit_generator(generator=generator_train,
                                       epochs=epochs,
                                       steps_per_epoch=steps_per_epoch,
                                       class_weight=class_weight,
                                       validation_data=generator_test,
                                       validation_steps=steps_test)
plot_training_history(history)

VGG16_TL_model_result = VGG16_TL_model.evaluate_generator(generator_test, steps=steps_test)
print("Test-set classification accuracy: {0:.2%}".format(VGG16_TL_model_result[1]))
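The code above calls a helper `print_layer_trainable()` that is not shown in the excerpt. Judging from the True/False listing in the output section below, it simply walks the layers of `conv_model`; a minimal sketch (the exact formatting is an assumption):

```python
def print_layer_trainable(model):
    # Print each layer's trainable flag followed by its name,
    # one per line, e.g. "True:   input_1".
    for layer in model.layers:
        print("{0}:\t{1}".format(layer.trainable, layer.name))
```

In the original notebook it is presumably written as a zero-argument function closing over `conv_model`, which is why it is called with no arguments above.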

More output

TensorFlow version: 1.10.0
Data has apparently already been downloaded and unpacked.
maybe_download_and_extract() finished!
load() data_dir: data/knifey-spoony/
Creating dataset from the files in: data/knifey-spoony/
- Data loaded from cache-file: data/knifey-spoony/knifey-spoony.pkl
load() finished!
get_paths() self.in_dir: data/knifey-spoony
- Copied training-set to: data/knifey-spoony/train/
get_paths() self.in_dir: data/knifey-spoony
- Copied test-set to: data/knifey-spoony/test/
data/knifey-spoony/train/ data/knifey-spoony/test/
……
553467904/553467096 [==============================] - 386s 1us/step
2019-08-14 11:44:51.782638: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-08-14 11:44:53.212742: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 411041792 exceeds 10% of system memory.
2019-08-14 11:44:54.302588: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 411041792 exceeds 10% of system memory.
2019-08-14 11:44:54.310978: W T:\src\github\tensorflow\tensorflow\core\framework\allocator.cc:108] Allocation of 411041792 exceeds 10% of system memory.
(224, 224)
Found 4170 images belonging to 5 classes.
Found 530 images belonging to 5 classes.
26.5
['forky', 'knifey', 'spoony', 'test', 'train']
5
class_weight: [1.39839034 1.14876033 0.70701933]
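The class_weight values printed above follow the standard "balanced" heuristic, n_samples / (n_classes * n_i). A minimal sketch reproducing them (the per-class counts are back-solved from the printed weights and the 4170-image training set, so treat them as an assumption, not a figure from the source):

```python
def balanced_class_weights(counts):
    # scikit-learn-style "balanced" weights: n_samples / (n_classes * n_i).
    n, k = sum(counts), len(counts)
    return [n / (k * c) for c in counts]

# Hypothetical per-class counts (forky, knifey, spoony):
weights = balanced_class_weights([994, 1210, 1966])
print(weights)  # ~[1.39839034, 1.14876033, 0.70701933]
```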
['forky', 'knifey', 'spoony', 'test', 'train']
Downloading data from https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
 8192/35363 [=====>........................] - ETA: 0s
16384/35363 [============>.................] - ETA: 0s
40960/35363 [==================================] - 0s 6us/step
79.02% : macaw
6.61% : bubble
3.64% : vine_snake
1.90% : pinwheel
1.22% : knot

50.31% : shower_curtain
17.08% : handkerchief
12.75% : mosquito_net
2.87% : window_shade
1.32% : toilet_tissue

45.08% : shower_curtain
21.84% : mosquito_net
11.55% : handkerchief
2.02% : window_shade
0.91% : Windsor_tie

26.75% : spoonbill
7.06% : black_stork
7.04% : wooden_spoon
4.21% : limpkin
3.72% : paddle
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
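As a sanity check, the parameter counts in the summary above can be reproduced by hand. VGG16 uses 3×3 convolutions throughout, and only the Conv2D and Dense layers carry parameters (a minimal sketch):

```python
def conv_params(c_in, c_out, k=3):
    # k x k kernel weights per (input, output) channel pair, plus one bias per output channel.
    return c_in * c_out * k * k + c_out

def dense_params(n_in, n_out):
    # Fully-connected layer: weight matrix plus biases.
    return n_in * n_out + n_out

# (input channels, output channels) of VGG16's 13 convolutional layers:
conv_cfg = [(3, 64), (64, 64),
            (64, 128), (128, 128),
            (128, 256), (256, 256), (256, 256),
            (256, 512), (512, 512), (512, 512),
            (512, 512), (512, 512), (512, 512)]

total = sum(conv_params(a, b) for a, b in conv_cfg)
total += dense_params(7 * 7 * 512, 4096)   # fc1: 102,764,544
total += dense_params(4096, 4096)          # fc2: 16,781,312
total += dense_params(4096, 1000)          # predictions: 4,097,000
print(total)  # 138357544, matching "Total params" in the summary
```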
Tensor("block5_pool/MaxPool:0", shape=(?, 7, 7, 512), dtype=float32)
True:   input_1
True:   block1_conv1
True:   block1_conv2
True:   block1_pool
True:   block2_conv1
True:   block2_conv2
True:   block2_pool
True:   block3_conv1
True:   block3_conv2
True:   block3_conv3
True:   block3_pool
True:   block4_conv1
True:   block4_conv2
True:   block4_conv3
True:   block4_pool
True:   block5_conv1
True:   block5_conv2
True:   block5_conv3
True:   block5_pool
False:  input_1
False:  block1_conv1
False:  block1_conv2
False:  block1_pool
False:  block2_conv1
False:  block2_conv2
False:  block2_pool
False:  block3_conv1
False:  block3_conv2
False:  block3_conv3
False:  block3_pool
False:  block4_conv1
False:  block4_conv2
False:  block4_conv3
False:  block4_pool
False:  block5_conv1
False:  block5_conv2
False:  block5_conv3
False:  block5_pool
Epoch 1/20
1/100 [..............................] - ETA: 24:24 - loss: 2.0064 - categorical_accuracy: 0.2500
……
100/100 [==============================] - 4064s 41s/step - loss: 1.1529 - categorical_accuracy: 0.4490 - val_loss: 0.8731 - val_categorical_accuracy: 0.6189
……
100/100 [==============================] - 2850s 29s/step - loss: 0.9524 - categorical_accuracy: 0.5480 - val_loss: 0.8089 - val_categorical_accuracy: 0.6377
Epoch 3/20
1/100 [..............................] - ETA: 22:19 - loss: 0.6235 - categorical_accuracy: 0.8000
……
99/100 [============================>.] - ETA: 18s - loss: 0.8497 - categorical_accuracy: 0.6056
100/100 [==============================] - 2404s 24s/step - loss: 0.8499 - categorical_accuracy: 0.6060 - val_loss: 0.7322 - val_categorical_accuracy: 0.7283
……
99/100 [============================>.] - ETA: 11s - loss: 0.6253 - categorical_accuracy: 0.7389
100/100 [==============================] - 1519s 15s/step - loss: 0.6248 - categorical_accuracy: 0.7390 - val_loss: 0.5702 - val_categorical_accuracy: 0.7811
Epoch 10/20
1/100 [..............................] - ETA: 21:14 - loss: 0.4481 - categorical_accuracy: 0.8000
……
99/100 [============================>.] - ETA: 12s - loss: 0.6033 - categorical_accuracy: 0.7490
100/100 [==============================] - 1570s 16s/step - loss: 0.6045 - categorical_accuracy: 0.7475 - val_loss: 0.5199 - val_categorical_accuracy: 0.8075
Epoch 11/20
1/100 [..............................] - ETA: 19:40 - loss: 0.5531 - categorical_accuracy: 0.7500
……
99/100 [============================>.] - ETA: 12s - loss: 0.5403 - categorical_accuracy: 0.7813
100/100 [==============================] - 1559s 16s/step - loss: 0.5401 - categorical_accuracy: 0.7810 - val_loss: 0.5147 - val_categorical_accuracy: 0.8132
Epoch 15/20
1/100 [..............................] - ETA: 20:10 - loss: 0.5337 - categorical_accuracy: 0.7000
2/100 [..............................] - ETA: 19:46 - loss: 0.4598 - categorical_accuracy: 0.8250
……
99/100 [============================>.] - ETA: 12s - loss: 0.5495 - categorical_accuracy: 0.7601
100/100 [==============================] - 1578s 16s/step - loss: 0.5482 - categorical_accuracy: 0.7610 - val_loss: 0.5832 - val_categorical_accuracy: 0.7491
Epoch 16/20
1/100 [..............................] - ETA: 20:07 - loss: 0.2315 - categorical_accuracy: 1.0000
……
19/100 [====>.........................] - ETA: 16:29 - loss: 0.5293 - categorical_accuracy: 0.7816
