Python version: Python 2.7
Dataset: MNIST
Blog source: http://blog.csdn.net/c406495762/article/details/70306550

Building Caffe's Python interface (pycaffe) is not covered here.
The code below generates, in a single run, the train.prototxt and test.prototxt needed to train a LeNet network, as well as the solver.prototxt.

Code:

# -*- coding: UTF-8 -*-
import caffe                                  # import the caffe package


def create_net(lmdb, mean_file, batch_size, include_acc=False):
    # network specification
    net = caffe.NetSpec()
    # Data layer
    net.data, net.label = caffe.layers.Data(source=lmdb, backend=caffe.params.Data.LMDB,
                                            batch_size=batch_size, ntop=2,
                                            transform_param=dict(mean_file=mean_file, scale=0.00390625))
    # vision layers (convolution + pooling)
    net.conv1 = caffe.layers.Convolution(net.data, num_output=20, kernel_size=5,
                                         weight_filler={"type": "xavier"},
                                         bias_filler={"type": "constant"})
    net.pool1 = caffe.layers.Pooling(net.conv1, pool=caffe.params.Pooling.MAX,
                                     kernel_size=2, stride=2)
    net.conv2 = caffe.layers.Convolution(net.pool1, kernel_size=5, stride=1,
                                         num_output=50, pad=0,
                                         weight_filler=dict(type='xavier'),
                                         bias_filler={"type": "constant"})
    net.pool2 = caffe.layers.Pooling(net.conv2, pool=caffe.params.Pooling.MAX,
                                     kernel_size=2, stride=2)
    # fully connected layers
    net.fc1 = caffe.layers.InnerProduct(net.pool2, num_output=500,
                                        weight_filler=dict(type='xavier'),
                                        bias_filler={"type": "constant"})
    net.fc_add1 = caffe.layers.InnerProduct(net.fc1, num_output=500,
                                            weight_filler=dict(type='xavier'),
                                            bias_filler={"type": "constant"})    # not really needed, an extra layer just to try
    net.fc_add2 = caffe.layers.InnerProduct(net.fc_add1, num_output=500,
                                            weight_filler=dict(type='xavier'),
                                            bias_filler={"type": "constant"})    # also not needed, one more layer to try
    # activation layer
    net.relu1 = caffe.layers.ReLU(net.fc_add2, in_place=True)
    # dropout layer
    net.drop3 = caffe.layers.Dropout(net.fc_add2, in_place=True)
    net.fc2 = caffe.layers.InnerProduct(net.fc_add2, num_output=10,
                                        weight_filler=dict(type='xavier'))
    # softmax loss layer
    net.loss = caffe.layers.SoftmaxWithLoss(net.fc2, net.label)
    # the training prototxt does not include the Accuracy layer; the test prototxt needs it
    if include_acc:
        net.acc = caffe.layers.Accuracy(net.fc2, net.label)
        return str(net.to_proto())
    return str(net.to_proto())


def write_net(mean_file, train_proto, train_lmdb, test_proto, val_lmdb):
    # write train.prototxt
    with open(train_proto, 'w') as f:
        f.write(str(create_net(train_lmdb, mean_file, batch_size=64)))
    # write test.prototxt
    with open(test_proto, 'w') as f:
        f.write(str(create_net(val_lmdb, mean_file, batch_size=100, include_acc=True)))


def write_solver(my_project_root, solver_proto, train_proto, test_proto):
    solver_string = caffe.proto.caffe_pb2.SolverParameter()                # solver parameter message
    solver_string.train_net = train_proto                                  # path of train.prototxt
    solver_string.test_net.append(test_proto)                              # path of test.prototxt
    solver_string.test_iter.append(100)                                    # 10000/100, number of test iterations
    solver_string.test_interval = 938                                      # 60000/64, test once every test_interval training iterations
    solver_string.base_lr = 0.01                                           # base learning rate
    solver_string.momentum = 0.9                                           # momentum
    solver_string.weight_decay = 5e-4                                      # weight decay
    solver_string.lr_policy = 'step'                                       # learning rate policy
    solver_string.stepsize = 3000                                          # how often the learning rate changes
    solver_string.gamma = 0.1                                              # learning rate decay factor
    solver_string.display = 20                                             # display output every `display` iterations
    solver_string.max_iter = 9380                                          # 10 epochs, 938*10, maximum number of iterations
    solver_string.snapshot = 938                                           # snapshot interval (iterations)
    solver_string.snapshot_prefix = my_project_root + 'mnist/model/mnist'  # snapshot prefix
    solver_string.solver_mode = caffe.proto.caffe_pb2.SolverParameter.GPU  # solver mode
    with open(solver_proto, 'w') as f:
        f.write(str(solver_string))


#def train(solver_proto):
#    caffe.set_device(0)
#    caffe.set_mode_gpu()
#    solver = caffe.SGDSolver(solver_proto)
#    solver.solve()


if __name__ == '__main__':
    my_project_root = "F:/python/make_prototxt/"                         # project root directory
    train_lmdb = my_project_root + "mnist/data/mnist_train_lmdb"         # path of the training LMDB
    val_lmdb = my_project_root + "mnist/data/mnist_test_lmdb"            # path of the validation LMDB
    train_proto = my_project_root + "mnist/train.prototxt"               # where to save train.prototxt
    test_proto = my_project_root + "mnist/test.prototxt"                 # where to save test.prototxt
    solver_proto = my_project_root + "mnist/solver.prototxt"             # where to save solver.prototxt
    mean_file = my_project_root + "mnist/data/trainMean.binaryproto"     # path of the mean file
    write_net(mean_file, train_proto, train_lmdb, test_proto, val_lmdb)
    print "train.prototxt and test.prototxt generated successfully"
    write_solver(my_project_root, solver_proto, train_proto, test_proto)
    print "solver.prototxt generated successfully"
    # train(solver_proto)
    # print "training finished"
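
Before moving on, it is worth verifying that the generated files are valid protobuf text by parsing them back into Caffe's message types. The snippet below is an illustrative sanity check, not part of the original script; the check_prototxt helper and the commented paths are assumptions matching the script above.

# Illustrative sanity check: parse the generated prototxt files back into
# Caffe's protobuf messages; text_format.Merge raises on syntax errors.
import caffe
from google.protobuf import text_format

def check_prototxt(net_path, solver_path):
    # parse a network definition
    net_param = caffe.proto.caffe_pb2.NetParameter()
    with open(net_path) as f:
        text_format.Merge(f.read(), net_param)
    print "layers in %s: %d" % (net_path, len(net_param.layer))
    # parse the solver definition
    solver_param = caffe.proto.caffe_pb2.SolverParameter()
    with open(solver_path) as f:
        text_format.Merge(f.read(), solver_param)
    print "solver max_iter:", solver_param.max_iter

# example (paths as in the script above):
# check_prototxt("F:/python/make_prototxt/mnist/train.prototxt",
#                "F:/python/make_prototxt/mnist/solver.prototxt")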

Output:
Generated train.prototxt

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  transform_param {
    scale: 0.00390625
    mean_file: "F:/python/make_prototxt/mnist/data/trainMean.binaryproto"
  }
  data_param {
    source: "F:/python/make_prototxt/mnist/data/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 20
    kernel_size: 5
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  convolution_param {
    num_output: 50
    pad: 0
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "fc1"
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "fc_add1"
  type: "InnerProduct"
  bottom: "fc1"
  top: "fc_add1"
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "fc_add2"
  type: "InnerProduct"
  bottom: "fc_add1"
  top: "fc_add2"
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "fc_add2"
  top: "fc_add2"
}
layer {
  name: "drop3"
  type: "Dropout"
  bottom: "fc_add2"
  top: "fc_add2"
}
layer {
  name: "fc2"
  type: "InnerProduct"
  bottom: "fc_add2"
  top: "fc2"
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc2"
  bottom: "label"
  top: "loss"
}

Generated test.prototxt

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  transform_param {
    scale: 0.00390625
    mean_file: "F:/python/make_prototxt/mnist/data/trainMean.binaryproto"
  }
  data_param {
    source: "F:/python/make_prototxt/mnist/data/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 20
    kernel_size: 5
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  convolution_param {
    num_output: 50
    pad: 0
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "fc1"
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "fc_add1"
  type: "InnerProduct"
  bottom: "fc1"
  top: "fc_add1"
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "fc_add2"
  type: "InnerProduct"
  bottom: "fc_add1"
  top: "fc_add2"
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "fc_add2"
  top: "fc_add2"
}
layer {
  name: "drop3"
  type: "Dropout"
  bottom: "fc_add2"
  top: "fc_add2"
}
layer {
  name: "fc2"
  type: "InnerProduct"
  bottom: "fc_add2"
  top: "fc2"
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc2"
  bottom: "label"
  top: "loss"
}
layer {
  name: "acc"
  type: "Accuracy"
  bottom: "fc2"
  bottom: "label"
  top: "acc"
}
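
Besides reading the prototxt by hand, the generated network can be rendered as an image with pycaffe's drawing helper. This is an optional, illustrative sketch; it assumes pydot and Graphviz are installed and uses the train.prototxt path from the script above, with an arbitrary output file name.

# Illustrative: render the generated network as a PNG (requires pydot + Graphviz).
import caffe
import caffe.draw
from google.protobuf import text_format

net_param = caffe.proto.caffe_pb2.NetParameter()
with open("F:/python/make_prototxt/mnist/train.prototxt") as f:
    text_format.Merge(f.read(), net_param)

# draw left-to-right ('LR'); the output format is taken from the file extension
caffe.draw.draw_net_to_file(net_param, "F:/python/make_prototxt/mnist/lenet_train.png", "LR")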

Generated solver.prototxt

train_net: "F:/python/make_prototxt/mnist/train.prototxt"
test_net: "F:/python/make_prototxt/mnist/test.prototxt"
test_iter: 100
test_interval: 938
base_lr: 0.01
display: 20
max_iter: 9380
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
stepsize: 3000
snapshot: 938
snapshot_prefix: "F:/python/make_prototxt/mnist/model/mnist"
solver_mode: GPU
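
The iteration counts in this solver come from the MNIST set sizes and the batch sizes used above: 60,000 training images at batch size 64, 10,000 test images at batch size 100, and 10 epochs. The small helper below is illustrative only (not part of the original script) and just makes that arithmetic explicit.

# Illustrative helper: derive the solver's iteration counts from dataset and
# batch sizes instead of hard-coding them (values match the script above).
def solver_iters(num_train=60000, num_test=10000,
                 train_batch=64, test_batch=100, epochs=10):
    iters_per_epoch = (num_train + train_batch - 1) // train_batch  # ceil(60000/64) = 938
    test_iter = num_test // test_batch                              # 10000/100 = 100
    test_interval = iters_per_epoch                                 # test once per epoch -> 938
    max_iter = epochs * iters_per_epoch                             # 10 epochs -> 9380
    return test_iter, test_interval, max_iter

print solver_iters()    # (100, 938, 9380)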

Now you're ready to train.
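
For instance, training can be started from Python exactly as in the commented-out train() function above; a minimal sketch, assuming a GPU build of Caffe and the solver path from the script:

# Minimal training sketch (assumes the generated solver.prototxt and a GPU build
# of Caffe; use caffe.set_mode_cpu() instead for a CPU-only build).
import caffe

caffe.set_device(0)
caffe.set_mode_gpu()
solver = caffe.SGDSolver("F:/python/make_prototxt/mnist/solver.prototxt")
solver.solve()    # runs until max_iter, snapshotting every 938 iterations

Equivalently, the stock command-line tool accepts the generated file: caffe train --solver=F:/python/make_prototxt/mnist/solver.prototxt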
