Lately I keep running into articles about Paddle. To be honest, I used Paddle back in grad school but haven't touched it much since. After several years of polishing it has become a solid deep learning framework, so it's time to pick it up and learn it again.

The PaddlePaddle project page is here; a screenshot of the homepage is shown below:

Every project in the screenshot above is currently popular, but they are not today's focus. I have done model conversion between frameworks before, and that is today's topic as well: PaddlePaddle provides its own open-source tool, X2Paddle, which, as the name suggests, converts models from other frameworks into PaddlePaddle's model format.

Without further ado, let's look at X2Paddle. The project page is here:

If you are interested, it is worth reading through in detail. Today we will learn the tool hands-on through a concrete exercise. VGG_16 is a classic model in computer vision, so I will use it as the example and show how to convert a trained TensorFlow model into a PaddlePaddle model.

First, download the pretrained model from:

http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz

Then extract the archive; a screenshot is shown below:

Next we need to reload the parameters and save the network structure together with the parameters as a checkpoint model. The code is as follows:

with tf.Session() as sess:
    inputs = tf.placeholder(dtype=tf.float32, shape=[None, 224, 224, 3], name="inputs")
    logits, endpoint = vgg.vgg_16(inputs, num_classes=1000, is_training=False)
    load_model = slim.assign_from_checkpoint_fn("vgg_16.ckpt", slim.get_model_variables("vgg_16"))
    load_model(sess)
    numpy.random.seed(13)  # same seed as the Paddle inference later, so both models see identical inputs
    data = numpy.random.rand(5, 224, 224, 3)
    input_tensor = sess.graph.get_tensor_by_name("inputs:0")
    output_tensor = sess.graph.get_tensor_by_name("vgg_16/fc8/squeezed:0")
    result = sess.run([output_tensor], {input_tensor: data})
    numpy.save("tensorflow.npy", numpy.array(result))
    saver = tf.train.Saver()
    saver.save(sess, "checkpoint/model")

With that done, we can proceed to convert the model into a PaddlePaddle model. The code is as follows:

parser = convert._get_parser()
parser.meta_file = "checkpoint/model.meta"
parser.ckpt_dir = "checkpoint"
parser.in_nodes = ["inputs"]
parser.input_shape = ["None,224,224,3"]
parser.output_nodes = ["vgg_16/fc8/squeezed"]
parser.use_cuda = "True"
parser.input_format = "NHWC"
parser.save_dir = "paddle_model"
convert.run(parser)

Since this is my first time using the tool, I am attaching the full log output below:

Use tf.compat.v1.graph_util.extract_sub_graph
INFO:root:Tensorflow model loaded!
INFO:root:TotalNum:86,TraslatedNum:1,CurrentNode:inputs
INFO:root:TotalNum:86,TraslatedNum:2,CurrentNode:vgg_16/conv1/conv1_1/weights
INFO:root:TotalNum:86,TraslatedNum:3,CurrentNode:vgg_16/conv1/conv1_1/biases
INFO:root:TotalNum:86,TraslatedNum:4,CurrentNode:vgg_16/conv1/conv1_2/weights
INFO:root:TotalNum:86,TraslatedNum:5,CurrentNode:vgg_16/conv1/conv1_2/biases
INFO:root:TotalNum:86,TraslatedNum:6,CurrentNode:vgg_16/conv2/conv2_1/weights
INFO:root:TotalNum:86,TraslatedNum:7,CurrentNode:vgg_16/conv2/conv2_1/biases
INFO:root:TotalNum:86,TraslatedNum:8,CurrentNode:vgg_16/conv2/conv2_2/weights
INFO:root:TotalNum:86,TraslatedNum:9,CurrentNode:vgg_16/conv2/conv2_2/biases
INFO:root:TotalNum:86,TraslatedNum:10,CurrentNode:vgg_16/conv3/conv3_1/weights
INFO:root:TotalNum:86,TraslatedNum:11,CurrentNode:vgg_16/conv3/conv3_1/biases
INFO:root:TotalNum:86,TraslatedNum:12,CurrentNode:vgg_16/conv3/conv3_2/weights
INFO:root:TotalNum:86,TraslatedNum:13,CurrentNode:vgg_16/conv3/conv3_2/biases
INFO:root:TotalNum:86,TraslatedNum:14,CurrentNode:vgg_16/conv3/conv3_3/weights
INFO:root:TotalNum:86,TraslatedNum:15,CurrentNode:vgg_16/conv3/conv3_3/biases
INFO:root:TotalNum:86,TraslatedNum:16,CurrentNode:vgg_16/conv4/conv4_1/weights
INFO:root:TotalNum:86,TraslatedNum:17,CurrentNode:vgg_16/conv4/conv4_1/biases
INFO:root:TotalNum:86,TraslatedNum:18,CurrentNode:vgg_16/conv4/conv4_2/weights
INFO:root:TotalNum:86,TraslatedNum:19,CurrentNode:vgg_16/conv4/conv4_2/biases
INFO:root:TotalNum:86,TraslatedNum:20,CurrentNode:vgg_16/conv4/conv4_3/weights
INFO:root:TotalNum:86,TraslatedNum:21,CurrentNode:vgg_16/conv4/conv4_3/biases
INFO:root:TotalNum:86,TraslatedNum:22,CurrentNode:vgg_16/conv5/conv5_1/weights
INFO:root:TotalNum:86,TraslatedNum:23,CurrentNode:vgg_16/conv5/conv5_1/biases
INFO:root:TotalNum:86,TraslatedNum:24,CurrentNode:vgg_16/conv5/conv5_2/weights
INFO:root:TotalNum:86,TraslatedNum:25,CurrentNode:vgg_16/conv5/conv5_2/biases
INFO:root:TotalNum:86,TraslatedNum:26,CurrentNode:vgg_16/conv5/conv5_3/weights
INFO:root:TotalNum:86,TraslatedNum:27,CurrentNode:vgg_16/conv5/conv5_3/biases
INFO:root:TotalNum:86,TraslatedNum:28,CurrentNode:vgg_16/fc6/weights
INFO:root:TotalNum:86,TraslatedNum:29,CurrentNode:vgg_16/fc6/biases
INFO:root:TotalNum:86,TraslatedNum:30,CurrentNode:vgg_16/fc7/weights
INFO:root:TotalNum:86,TraslatedNum:31,CurrentNode:vgg_16/fc7/biases
INFO:root:TotalNum:86,TraslatedNum:32,CurrentNode:vgg_16/fc8/weights
INFO:root:TotalNum:86,TraslatedNum:33,CurrentNode:vgg_16/fc8/biases
INFO:root:TotalNum:86,TraslatedNum:34,CurrentNode:vgg_16/conv1/conv1_1/Conv2D
INFO:root:TotalNum:86,TraslatedNum:35,CurrentNode:vgg_16/conv1/conv1_1/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:36,CurrentNode:vgg_16/conv1/conv1_1/Relu
INFO:root:TotalNum:86,TraslatedNum:37,CurrentNode:vgg_16/conv1/conv1_2/Conv2D
INFO:root:TotalNum:86,TraslatedNum:38,CurrentNode:vgg_16/conv1/conv1_2/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:39,CurrentNode:vgg_16/conv1/conv1_2/Relu
INFO:root:TotalNum:86,TraslatedNum:40,CurrentNode:vgg_16/pool1/MaxPool
INFO:root:TotalNum:86,TraslatedNum:41,CurrentNode:vgg_16/conv2/conv2_1/Conv2D
INFO:root:TotalNum:86,TraslatedNum:42,CurrentNode:vgg_16/conv2/conv2_1/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:43,CurrentNode:vgg_16/conv2/conv2_1/Relu
INFO:root:TotalNum:86,TraslatedNum:44,CurrentNode:vgg_16/conv2/conv2_2/Conv2D
INFO:root:TotalNum:86,TraslatedNum:45,CurrentNode:vgg_16/conv2/conv2_2/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:46,CurrentNode:vgg_16/conv2/conv2_2/Relu
INFO:root:TotalNum:86,TraslatedNum:47,CurrentNode:vgg_16/pool2/MaxPool
INFO:root:TotalNum:86,TraslatedNum:48,CurrentNode:vgg_16/conv3/conv3_1/Conv2D
INFO:root:TotalNum:86,TraslatedNum:49,CurrentNode:vgg_16/conv3/conv3_1/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:50,CurrentNode:vgg_16/conv3/conv3_1/Relu
INFO:root:TotalNum:86,TraslatedNum:51,CurrentNode:vgg_16/conv3/conv3_2/Conv2D
INFO:root:TotalNum:86,TraslatedNum:52,CurrentNode:vgg_16/conv3/conv3_2/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:53,CurrentNode:vgg_16/conv3/conv3_2/Relu
INFO:root:TotalNum:86,TraslatedNum:54,CurrentNode:vgg_16/conv3/conv3_3/Conv2D
INFO:root:TotalNum:86,TraslatedNum:55,CurrentNode:vgg_16/conv3/conv3_3/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:56,CurrentNode:vgg_16/conv3/conv3_3/Relu
INFO:root:TotalNum:86,TraslatedNum:57,CurrentNode:vgg_16/pool3/MaxPool
INFO:root:TotalNum:86,TraslatedNum:58,CurrentNode:vgg_16/conv4/conv4_1/Conv2D
INFO:root:TotalNum:86,TraslatedNum:59,CurrentNode:vgg_16/conv4/conv4_1/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:60,CurrentNode:vgg_16/conv4/conv4_1/Relu
INFO:root:TotalNum:86,TraslatedNum:61,CurrentNode:vgg_16/conv4/conv4_2/Conv2D
INFO:root:TotalNum:86,TraslatedNum:62,CurrentNode:vgg_16/conv4/conv4_2/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:63,CurrentNode:vgg_16/conv4/conv4_2/Relu
INFO:root:TotalNum:86,TraslatedNum:64,CurrentNode:vgg_16/conv4/conv4_3/Conv2D
INFO:root:TotalNum:86,TraslatedNum:65,CurrentNode:vgg_16/conv4/conv4_3/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:66,CurrentNode:vgg_16/conv4/conv4_3/Relu
INFO:root:TotalNum:86,TraslatedNum:67,CurrentNode:vgg_16/pool4/MaxPool
INFO:root:TotalNum:86,TraslatedNum:68,CurrentNode:vgg_16/conv5/conv5_1/Conv2D
INFO:root:TotalNum:86,TraslatedNum:69,CurrentNode:vgg_16/conv5/conv5_1/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:70,CurrentNode:vgg_16/conv5/conv5_1/Relu
INFO:root:TotalNum:86,TraslatedNum:71,CurrentNode:vgg_16/conv5/conv5_2/Conv2D
INFO:root:TotalNum:86,TraslatedNum:72,CurrentNode:vgg_16/conv5/conv5_2/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:73,CurrentNode:vgg_16/conv5/conv5_2/Relu
INFO:root:TotalNum:86,TraslatedNum:74,CurrentNode:vgg_16/conv5/conv5_3/Conv2D
INFO:root:TotalNum:86,TraslatedNum:75,CurrentNode:vgg_16/conv5/conv5_3/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:76,CurrentNode:vgg_16/conv5/conv5_3/Relu
INFO:root:TotalNum:86,TraslatedNum:77,CurrentNode:vgg_16/pool5/MaxPool
INFO:root:TotalNum:86,TraslatedNum:78,CurrentNode:vgg_16/fc6/Conv2D
INFO:root:TotalNum:86,TraslatedNum:79,CurrentNode:vgg_16/fc6/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:80,CurrentNode:vgg_16/fc6/Relu
INFO:root:TotalNum:86,TraslatedNum:81,CurrentNode:vgg_16/fc7/Conv2D
INFO:root:TotalNum:86,TraslatedNum:82,CurrentNode:vgg_16/fc7/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:83,CurrentNode:vgg_16/fc7/Relu
INFO:root:TotalNum:86,TraslatedNum:84,CurrentNode:vgg_16/fc8/Conv2D
INFO:root:TotalNum:86,TraslatedNum:85,CurrentNode:vgg_16/fc8/BiasAdd
INFO:root:TotalNum:86,TraslatedNum:86,CurrentNode:vgg_16/fc8/squeezed
INFO:root:Model translated!

A screenshot of the successful run is shown below:

After a successful conversion, a paddle_model directory is created; a screenshot is shown below:

The conversion itself is finished at this point, but conversions rarely escape the question of precision loss, so we should verify the results numerically. The code is as follows:

model = ml.ModelLoader("paddle_model", use_cuda=False)
numpy.random.seed(13)
data = numpy.random.rand(5, 224, 224, 3).astype("float32")
# NHWC -> NCHW
data = numpy.transpose(data, (0, 3, 1, 2))
results = model.inference(feed_dict={model.inputs[0]:data})
numpy.save("paddle.npy", numpy.array(results))

Running it raised an error with the following message:

AssertionError: In PaddlePaddle 2.x, we turn on dynamic graph mode by default, and 'data()' is only supported in static graph mode. So if you want to use this api, please call 'paddle.enable_static()' before this api to enter static graph mode.

The corrected code is below; the root cause is simply that PaddlePaddle 2.x defaults to dynamic graph mode, while this API requires static graph mode.

paddle.enable_static()
model = ml.ModelLoader("paddle_model", use_cuda=False)
numpy.random.seed(13)
data = numpy.random.rand(5, 224, 224, 3).astype("float32")
# NHWC -> NCHW
data = numpy.transpose(data, (0, 3, 1, 2))
results = model.inference(feed_dict={model.inputs[0]:data})
numpy.save("paddle.npy", numpy.array(results))
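As a side note, the NHWC-to-NCHW transpose above is easy to get wrong, so here is a minimal sanity check, independent of the model code and using a tiny synthetic array of my own, that confirms the axis order:

```python
import numpy

# A tiny NHWC batch: 2 images, 4x4 pixels, 3 channels.
nhwc = numpy.arange(2 * 4 * 4 * 3, dtype="float32").reshape(2, 4, 4, 3)

# Move the channel axis (index 3) to position 1: NHWC -> NCHW.
nchw = numpy.transpose(nhwc, (0, 3, 1, 2))

print(nhwc.shape)  # (2, 4, 4, 3)
print(nchw.shape)  # (2, 3, 4, 4)

# Pixel (n, h, w, c) in NHWC must equal pixel (n, c, h, w) in NCHW.
assert nhwc[1, 2, 3, 0] == nchw[1, 0, 2, 3]
```

numpy.transpose returns a view with the axes permuted, so this costs no copy; the assertion ties each element in the new layout back to its original position.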

Next, measure the difference between the two models: load both saved result files and take the element-wise absolute difference with numpy.fabs, as follows:

paddle_result = numpy.load("paddle.npy")
tensorflow_result = numpy.load("tensorflow.npy")
diff = numpy.fabs(paddle_result - tensorflow_result)
print(numpy.max(diff))

The result is shown below:

The error is quite small and well within an acceptable range.
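Beyond eyeballing the maximum absolute difference, numpy.allclose gives an explicit pass/fail check against tolerances. A small self-contained sketch, with synthetic arrays standing in for the two saved .npy files, shows the idea:

```python
import numpy

# Synthetic stand-ins for tensorflow.npy / paddle.npy.
numpy.random.seed(13)
tensorflow_result = numpy.random.rand(5, 1000).astype("float32")
# Simulate a converted model whose outputs differ by tiny float noise.
paddle_result = tensorflow_result + numpy.float32(1e-6)

diff = numpy.fabs(paddle_result - tensorflow_result)
print(numpy.max(diff))  # on the order of 1e-6

# True only if every element agrees within the given tolerances.
assert numpy.allclose(paddle_result, tensorflow_result, rtol=1e-4, atol=1e-5)
```

Pinning the tolerances down like this makes the "acceptable range" judgment reproducible rather than a visual impression of the printed maximum.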

Finally, here is the complete code:

#!/usr/bin/env python
# encoding: utf-8
from __future__ import division
'''
__Author__: 沂水寒城
Purpose: hands-on conversion of a TensorFlow VGG model to PaddlePaddle
'''
import os
import numpy
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.contrib.slim.nets import vgg

'''
Save the model in checkpoint format
'''
with tf.Session() as sess:
    inputs = tf.placeholder(dtype=tf.float32, shape=[None, 224, 224, 3], name="inputs")
    logits, endpoint = vgg.vgg_16(inputs, num_classes=1000, is_training=False)
    load_model = slim.assign_from_checkpoint_fn("vgg_16.ckpt", slim.get_model_variables("vgg_16"))
    load_model(sess)
    numpy.random.seed(13)  # same seed as the Paddle inference below, so both models see identical inputs
    data = numpy.random.rand(5, 224, 224, 3)
    input_tensor = sess.graph.get_tensor_by_name("inputs:0")
    output_tensor = sess.graph.get_tensor_by_name("vgg_16/fc8/squeezed:0")
    result = sess.run([output_tensor], {input_tensor: data})
    numpy.save("tensorflow.npy", numpy.array(result))
    saver = tf.train.Saver()
    saver.save(sess, "checkpoint/model")

'''
Convert the model to a PaddlePaddle model
'''
import tf2fluid.convert as convert

parser = convert._get_parser()
parser.meta_file = "checkpoint/model.meta"
parser.ckpt_dir = "checkpoint"
parser.in_nodes = ["inputs"]
parser.input_shape = ["None,224,224,3"]
parser.output_nodes = ["vgg_16/fc8/squeezed"]
parser.use_cuda = "True"
parser.input_format = "NHWC"
parser.save_dir = "paddle_model"
convert.run(parser)

'''
Compare prediction results
'''
import paddle
import tf2fluid.model_loader as ml

paddle.enable_static()
model = ml.ModelLoader("paddle_model", use_cuda=False)
numpy.random.seed(13)
data = numpy.random.rand(5, 224, 224, 3).astype("float32")
# NHWC -> NCHW
data = numpy.transpose(data, (0, 3, 1, 2))
results = model.inference(feed_dict={model.inputs[0]: data})
numpy.save("paddle.npy", numpy.array(results))

'''
Compare the two models' outputs
'''
paddle_result = numpy.load("paddle.npy")
tensorflow_result = numpy.load("tensorflow.npy")
diff = numpy.fabs(paddle_result - tensorflow_result)
print(numpy.max(diff))

That wraps up this first hands-on session; I will dig deeper when I have time.
