Benchmarking the speed of Keras and MXNet

Environment: Windows 10 64-bit, CUDA 8.0, cuDNN 5.1, GTX 1060
Task: a small CNN on MNIST
```python
import numpy
import os
import gzip
import struct

def read_data(label_name, image_name):
    # read the raw MNIST idx files from $DATA\MNIST
    with gzip.open(os.getenv('DATA') + '\\MNIST\\' + label_name) as flbl:
        magic, num = struct.unpack(">II", flbl.read(8))
        label = numpy.fromstring(flbl.read(), dtype=numpy.int8)
    with gzip.open(os.getenv('DATA') + '\\MNIST\\' + image_name, 'rb') as fimg:
        magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
        image = numpy.fromstring(fimg.read(), dtype=numpy.uint8).reshape(len(label), rows, cols)
    return (label, image)

(train_lbl, train_img) = read_data('train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz')
(val_lbl, val_img) = read_data('t10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz')

def to4d(img):
    # reshape to NCHW and scale pixel values to [0, 1]
    return img.reshape(img.shape[0], 1, 28, 28).astype(numpy.float32) / 255

def repack_data(d):
    # one-hot encode the labels for Keras's categorical_crossentropy
    t = numpy.zeros((d.size, 10))
    for i in range(d.size):
        t[i][d[i]] = 1
    return t

train_img = to4d(train_img)
val_img = to4d(val_img)

batch_size = 100
num_epoch = 5
# backend = 'mxnet'
backend = 'keras'

if backend == 'keras':
    from keras.models import *
    from keras.layers import *
    from keras.optimizers import *
    model = Sequential()
    model.add(Convolution2D(64, 5, 5, input_shape=(1, 28, 28), init='uniform', activation='relu'))
    model.add(MaxPooling2D())
    model.add(Convolution2D(128, 5, 5, init='uniform', activation='relu'))
    model.add(MaxPooling2D())
    model.add(Flatten())
    model.add(Dense(1024, init='uniform', activation='relu'))
    model.add(Dense(1024, init='uniform', activation='relu'))
    model.add(Dense(10, init='uniform', activation='softmax'))
    model.summary()
    model.compile(loss='categorical_crossentropy', optimizer=Adadelta(), metrics=['accuracy'])
    model.fit(train_img, repack_data(train_lbl), batch_size=batch_size,
              nb_epoch=num_epoch, validation_data=(val_img, repack_data(val_lbl)))
else:
    import mxnet
    train_iter = mxnet.io.NDArrayIter(train_img, train_lbl, batch_size, shuffle=True)
    val_iter = mxnet.io.NDArrayIter(val_img, val_lbl, batch_size)
    # same architecture expressed as an MXNet symbol graph
    data = mxnet.symbol.Variable('data')
    conv1 = mxnet.sym.Convolution(data=data, kernel=(5, 5), num_filter=64)
    relu1 = mxnet.sym.Activation(data=conv1, act_type="relu")
    pool1 = mxnet.sym.Pooling(data=relu1, pool_type="max", kernel=(2, 2), stride=(2, 2))
    conv2 = mxnet.sym.Convolution(data=pool1, kernel=(5, 5), num_filter=128)
    relu2 = mxnet.sym.Activation(data=conv2, act_type="relu")
    pool2 = mxnet.sym.Pooling(data=relu2, pool_type="max", kernel=(2, 2), stride=(2, 2))
    flatten = mxnet.sym.Flatten(data=pool2)
    fc1 = mxnet.symbol.FullyConnected(data=flatten, num_hidden=1024)
    relu3 = mxnet.sym.Activation(data=fc1, act_type="relu")
    fc2 = mxnet.symbol.FullyConnected(data=relu3, num_hidden=1024)
    relu4 = mxnet.sym.Activation(data=fc2, act_type="relu")
    fc3 = mxnet.sym.FullyConnected(data=relu4, num_hidden=10)
    net = mxnet.sym.SoftmaxOutput(data=fc3, name='softmax')
    mxnet.viz.plot_network(symbol=net, shape={"data": (batch_size, 1, 28, 28)}).render('mxnet')
    model = mxnet.model.FeedForward(
        ctx=mxnet.gpu(0),  # train on GPU 0
        symbol=net,
        num_epoch=num_epoch,
        learning_rate=0.1,
        optimizer='AdaDelta',
        initializer=mxnet.initializer.Uniform())
    import logging
    logging.getLogger().setLevel(logging.DEBUG)
    model.fit(
        X=train_iter,
        eval_data=val_iter,
        batch_end_callback=mxnet.callback.Speedometer(batch_size, 200))
```
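As an aside, the one-hot loop in `repack_data` can be replaced by a single indexing operation on an identity matrix. A minimal sketch (`repack_data_vectorized` is a hypothetical name, not part of the script above):

```python
import numpy

def repack_data_vectorized(d):
    # row i of eye(10) is the one-hot vector for class i,
    # so indexing with the label array one-hot encodes every label at once
    return numpy.eye(10)[d]

labels = numpy.array([3, 0, 7])
onehot = repack_data_vectorized(labels)  # shape (3, 10), one 1 per row
```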
`model.summary()` output:

```
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
convolution2d_1 (Convolution2D)  (None, 64, 24, 24)    1664        convolution2d_input_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)    (None, 64, 12, 12)    0           convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)  (None, 128, 8, 8)     204928      maxpooling2d_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)    (None, 128, 4, 4)     0           convolution2d_2[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten)              (None, 2048)          0           maxpooling2d_2[0][0]
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 1024)          2098176     flatten_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 1024)          1049600     dense_1[0][0]
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 10)            10250       dense_2[0][0]
====================================================================================================
Total params: 3364618
____________________________________________________________________________________________________
```
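As a sanity check, the parameter counts in the summary can be reproduced by hand: a conv layer has filters × (kernel height × kernel width × input channels) weights plus one bias per filter, and a dense layer has inputs × outputs weights plus one bias per output.

```python
def conv_params(n_filters, kh, kw, in_channels):
    # one kh*kw*in_channels kernel per filter, plus one bias per filter
    return n_filters * (kh * kw * in_channels) + n_filters

def dense_params(n_in, n_out):
    # full weight matrix plus one bias per output unit
    return n_in * n_out + n_out

counts = [
    conv_params(64, 5, 5, 1),         # convolution2d_1 -> 1664
    conv_params(128, 5, 5, 64),       # convolution2d_2 -> 204928
    dense_params(128 * 4 * 4, 1024),  # dense_1 (after flatten) -> 2098176
    dense_params(1024, 1024),         # dense_2 -> 1049600
    dense_params(1024, 10),           # dense_3 -> 10250
]
print(sum(counts))  # 3364618, matching "Total params" above
```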
Keras + Theano:

```
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 7s - loss: 0.1975 - acc: 0.9379 - val_loss: 0.0450 - val_acc: 0.9856
Epoch 2/5
60000/60000 [==============================] - 7s - loss: 0.0449 - acc: 0.9857 - val_loss: 0.0351 - val_acc: 0.9891
Epoch 3/5
60000/60000 [==============================] - 7s - loss: 0.0303 - acc: 0.9907 - val_loss: 0.0248 - val_acc: 0.9921
Epoch 4/5
60000/60000 [==============================] - 7s - loss: 0.0207 - acc: 0.9932 - val_loss: 0.0257 - val_acc: 0.9920
Epoch 5/5
60000/60000 [==============================] - 7s - loss: 0.0151 - acc: 0.9954 - val_loss: 0.0232 - val_acc: 0.9929
```
MXNet:

```
INFO:root:Start training with [gpu(0)]
INFO:root:Epoch[0] Batch [200]  Speed: 2960.54 samples/sec  Train-accuracy=0.845600
INFO:root:Epoch[0] Batch [400]  Speed: 2878.78 samples/sec  Train-accuracy=0.975150
INFO:root:Epoch[0] Batch [600]  Speed: 2875.59 samples/sec  Train-accuracy=0.980750
INFO:root:Epoch[0] Resetting Data Iterator
INFO:root:Epoch[0] Time cost=21.459
INFO:root:Epoch[0] Validation-accuracy=0.986700
INFO:root:Epoch[1] Batch [200]  Speed: 2888.17 samples/sec  Train-accuracy=0.985850
INFO:root:Epoch[1] Batch [400]  Speed: 2867.33 samples/sec  Train-accuracy=0.988150
INFO:root:Epoch[1] Batch [600]  Speed: 2867.63 samples/sec  Train-accuracy=0.990200
INFO:root:Epoch[1] Resetting Data Iterator
INFO:root:Epoch[1] Time cost=20.874
INFO:root:Epoch[1] Validation-accuracy=0.980700
INFO:root:Epoch[2] Batch [200]  Speed: 2894.78 samples/sec  Train-accuracy=0.992200
INFO:root:Epoch[2] Batch [400]  Speed: 2876.13 samples/sec  Train-accuracy=0.993150
INFO:root:Epoch[2] Batch [600]  Speed: 2858.85 samples/sec  Train-accuracy=0.994650
INFO:root:Epoch[2] Resetting Data Iterator
INFO:root:Epoch[2] Time cost=20.875
INFO:root:Epoch[2] Validation-accuracy=0.990300
INFO:root:Epoch[3] Batch [200]  Speed: 2879.48 samples/sec  Train-accuracy=0.994600
INFO:root:Epoch[3] Batch [400]  Speed: 2859.86 samples/sec  Train-accuracy=0.995800
INFO:root:Epoch[3] Batch [600]  Speed: 2860.25 samples/sec  Train-accuracy=0.995800
INFO:root:Epoch[3] Resetting Data Iterator
INFO:root:Epoch[3] Time cost=20.951
INFO:root:Epoch[3] Validation-accuracy=0.990300
INFO:root:Epoch[4] Batch [200]  Speed: 2887.86 samples/sec  Train-accuracy=0.995750
INFO:root:Epoch[4] Batch [400]  Speed: 2865.84 samples/sec  Train-accuracy=0.997100
INFO:root:Epoch[4] Batch [600]  Speed: 2868.30 samples/sec  Train-accuracy=0.997700
INFO:root:Epoch[4] Resetting Data Iterator
INFO:root:Epoch[4] Time cost=20.915
INFO:root:Epoch[4] Validation-accuracy=0.988300
```
I'm quite happy with Keras's speed: it basically delivers what a card of this class should, and GPU utilization frequently sits at 100%. The Theano backend's compilation step, however, is painfully slow.
MXNet is slow here — roughly three times the epoch time! Even the official example runs about half as fast as on a GTX 980, so something in my setup is probably misconfigured.
I did notice that one CPU core stays pinned at 100% while MXNet trains, which may be the cause.
A sad story.
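The "three times" figure follows directly from the logs above: Keras finishes an epoch of 60000 samples in about 7 s, while MXNet's Speedometer reports roughly 2870 samples/sec (about 21 s per epoch). A quick back-of-the-envelope check:

```python
# Keras/Theano: 60000 training samples in ~7 s per epoch
keras_throughput = 60000 / 7   # ~8571 samples/sec
# MXNet Speedometer averages ~2870 samples/sec in the log above
mxnet_throughput = 2870

print(round(keras_throughput / mxnet_throughput, 1))  # ~3.0x
```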