1. First, set the Caffe installation directory:

CAFFE_ROOT='/home/lxc/caffe/'

2. Run the provided script to download the MNIST dataset:

cd $CAFFE_ROOT
./data/mnist/get_mnist.sh

3. Convert the dataset into LMDB, a format Caffe can read:

./examples/mnist/create_mnist.sh

4. Train on the dataset to obtain a trained model (the script simply invokes `caffe train` with examples/mnist/lenet_solver.prototxt):

./examples/mnist/train_lenet.sh

Training output:

I0314 09:58:05.226999 25029 sgd_solver.cpp:106] Iteration 7400, lr = 0.00660067
I0314 09:58:06.644390 25029 solver.cpp:337] Iteration 7500, Testing net (#0)
I0314 09:58:07.555140 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9909
I0314 09:58:07.555181 25029 solver.cpp:404]     Test net output #1: loss = 0.0300942 (* 1 = 0.0300942 loss)
I0314 09:58:07.563323 25029 solver.cpp:228] Iteration 7500, loss = 0.00139727
I0314 09:58:07.563347 25029 solver.cpp:244]     Train net output #0: loss = 0.00139717 (* 1 = 0.00139717 loss)
I0314 09:58:07.563361 25029 sgd_solver.cpp:106] Iteration 7500, lr = 0.00657236
I0314 09:58:09.002777 25029 solver.cpp:228] Iteration 7600, loss = 0.00737701
I0314 09:58:09.002853 25029 solver.cpp:244]     Train net output #0: loss = 0.0073769 (* 1 = 0.0073769 loss)
I0314 09:58:09.002867 25029 sgd_solver.cpp:106] Iteration 7600, lr = 0.00654433
I0314 09:58:10.444634 25029 solver.cpp:228] Iteration 7700, loss = 0.0348312
I0314 09:58:10.444681 25029 solver.cpp:244]     Train net output #0: loss = 0.0348311 (* 1 = 0.0348311 loss)
I0314 09:58:10.444694 25029 sgd_solver.cpp:106] Iteration 7700, lr = 0.00651658
I0314 09:58:11.927762 25029 solver.cpp:228] Iteration 7800, loss = 0.00242758
I0314 09:58:11.927819 25029 solver.cpp:244]     Train net output #0: loss = 0.00242747 (* 1 = 0.00242747 loss)
I0314 09:58:11.927831 25029 sgd_solver.cpp:106] Iteration 7800, lr = 0.00648911
I0314 09:58:13.402936 25029 solver.cpp:228] Iteration 7900, loss = 0.00951368
I0314 09:58:13.402987 25029 solver.cpp:244]     Train net output #0: loss = 0.00951358 (* 1 = 0.00951358 loss)
I0314 09:58:13.403002 25029 sgd_solver.cpp:106] Iteration 7900, lr = 0.0064619
I0314 09:58:14.867146 25029 solver.cpp:337] Iteration 8000, Testing net (#0)
I0314 09:58:15.800400 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9904
I0314 09:58:15.800444 25029 solver.cpp:404]     Test net output #1: loss = 0.0276247 (* 1 = 0.0276247 loss)
I0314 09:58:15.808650 25029 solver.cpp:228] Iteration 8000, loss = 0.00521269
I0314 09:58:15.808675 25029 solver.cpp:244]     Train net output #0: loss = 0.00521257 (* 1 = 0.00521257 loss)
I0314 09:58:15.808689 25029 sgd_solver.cpp:106] Iteration 8000, lr = 0.00643496
I0314 09:58:17.274581 25029 solver.cpp:228] Iteration 8100, loss = 0.00926444
I0314 09:58:17.274636 25029 solver.cpp:244]     Train net output #0: loss = 0.00926433 (* 1 = 0.00926433 loss)
I0314 09:58:17.274651 25029 sgd_solver.cpp:106] Iteration 8100, lr = 0.00640827
I0314 09:58:18.732739 25029 solver.cpp:228] Iteration 8200, loss = 0.00703852
I0314 09:58:18.732786 25029 solver.cpp:244]     Train net output #0: loss = 0.00703842 (* 1 = 0.00703842 loss)
I0314 09:58:18.732800 25029 sgd_solver.cpp:106] Iteration 8200, lr = 0.00638185
I0314 09:58:20.189698 25029 solver.cpp:228] Iteration 8300, loss = 0.0678537
I0314 09:58:20.189746 25029 solver.cpp:244]     Train net output #0: loss = 0.0678536 (* 1 = 0.0678536 loss)
I0314 09:58:20.189759 25029 sgd_solver.cpp:106] Iteration 8300, lr = 0.00635567
I0314 09:58:21.628165 25029 solver.cpp:228] Iteration 8400, loss = 0.00610364
I0314 09:58:21.628206 25029 solver.cpp:244]     Train net output #0: loss = 0.00610354 (* 1 = 0.00610354 loss)
I0314 09:58:21.628218 25029 sgd_solver.cpp:106] Iteration 8400, lr = 0.00632975
I0314 09:58:23.054644 25029 solver.cpp:337] Iteration 8500, Testing net (#0)
I0314 09:58:23.975236 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9913
I0314 09:58:23.975289 25029 solver.cpp:404]     Test net output #1: loss = 0.0276445 (* 1 = 0.0276445 loss)
I0314 09:58:23.983639 25029 solver.cpp:228] Iteration 8500, loss = 0.00722562
I0314 09:58:23.983686 25029 solver.cpp:244]     Train net output #0: loss = 0.00722551 (* 1 = 0.00722551 loss)
I0314 09:58:23.983702 25029 sgd_solver.cpp:106] Iteration 8500, lr = 0.00630407
I0314 09:58:25.423691 25029 solver.cpp:228] Iteration 8600, loss = 0.000844742
I0314 09:58:25.423733 25029 solver.cpp:244]     Train net output #0: loss = 0.000844626 (* 1 = 0.000844626 loss)
I0314 09:58:25.423746 25029 sgd_solver.cpp:106] Iteration 8600, lr = 0.00627864
I0314 09:58:26.858505 25029 solver.cpp:228] Iteration 8700, loss = 0.00262191
I0314 09:58:26.858546 25029 solver.cpp:244]     Train net output #0: loss = 0.00262179 (* 1 = 0.00262179 loss)
I0314 09:58:26.858559 25029 sgd_solver.cpp:106] Iteration 8700, lr = 0.00625344
I0314 09:58:28.296435 25029 solver.cpp:228] Iteration 8800, loss = 0.00161585
I0314 09:58:28.296476 25029 solver.cpp:244]     Train net output #0: loss = 0.00161573 (* 1 = 0.00161573 loss)
I0314 09:58:28.296489 25029 sgd_solver.cpp:106] Iteration 8800, lr = 0.00622847
I0314 09:58:29.741562 25029 solver.cpp:228] Iteration 8900, loss = 0.000348777
I0314 09:58:29.741605 25029 solver.cpp:244]     Train net output #0: loss = 0.000348661 (* 1 = 0.000348661 loss)
I0314 09:58:29.741631 25029 sgd_solver.cpp:106] Iteration 8900, lr = 0.00620374
I0314 09:58:31.165171 25029 solver.cpp:337] Iteration 9000, Testing net (#0)
I0314 09:58:32.077903 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9909
I0314 09:58:32.077941 25029 solver.cpp:404]     Test net output #1: loss = 0.0273136 (* 1 = 0.0273136 loss)
I0314 09:58:32.086107 25029 solver.cpp:228] Iteration 9000, loss = 0.0154975
I0314 09:58:32.086132 25029 solver.cpp:244]     Train net output #0: loss = 0.0154974 (* 1 = 0.0154974 loss)
I0314 09:58:32.086145 25029 sgd_solver.cpp:106] Iteration 9000, lr = 0.00617924
I0314 09:58:33.524173 25029 solver.cpp:228] Iteration 9100, loss = 0.00757405
I0314 09:58:33.524216 25029 solver.cpp:244]     Train net output #0: loss = 0.00757394 (* 1 = 0.00757394 loss)
I0314 09:58:33.524230 25029 sgd_solver.cpp:106] Iteration 9100, lr = 0.00615496
I0314 09:58:34.966588 25029 solver.cpp:228] Iteration 9200, loss = 0.00248411
I0314 09:58:34.966630 25029 solver.cpp:244]     Train net output #0: loss = 0.00248399 (* 1 = 0.00248399 loss)
I0314 09:58:34.966644 25029 sgd_solver.cpp:106] Iteration 9200, lr = 0.0061309
I0314 09:58:36.405937 25029 solver.cpp:228] Iteration 9300, loss = 0.00742113
I0314 09:58:36.405982 25029 solver.cpp:244]     Train net output #0: loss = 0.007421 (* 1 = 0.007421 loss)
I0314 09:58:36.405995 25029 sgd_solver.cpp:106] Iteration 9300, lr = 0.00610706
I0314 09:58:37.850505 25029 solver.cpp:228] Iteration 9400, loss = 0.0306143
I0314 09:58:37.850548 25029 solver.cpp:244]     Train net output #0: loss = 0.0306142 (* 1 = 0.0306142 loss)
I0314 09:58:37.850560 25029 sgd_solver.cpp:106] Iteration 9400, lr = 0.00608343
I0314 09:58:39.285089 25029 solver.cpp:337] Iteration 9500, Testing net (#0)
I0314 09:58:40.197036 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9902
I0314 09:58:40.197090 25029 solver.cpp:404]     Test net output #1: loss = 0.0315135 (* 1 = 0.0315135 loss)
I0314 09:58:40.205340 25029 solver.cpp:228] Iteration 9500, loss = 0.0047698
I0314 09:58:40.205364 25029 solver.cpp:244]     Train net output #0: loss = 0.00476967 (* 1 = 0.00476967 loss)
I0314 09:58:40.205379 25029 sgd_solver.cpp:106] Iteration 9500, lr = 0.00606002
I0314 09:58:41.670626 25029 solver.cpp:228] Iteration 9600, loss = 0.00133942
I0314 09:58:41.670670 25029 solver.cpp:244]     Train net output #0: loss = 0.00133929 (* 1 = 0.00133929 loss)
I0314 09:58:41.670685 25029 sgd_solver.cpp:106] Iteration 9600, lr = 0.00603682
I0314 09:58:43.134160 25029 solver.cpp:228] Iteration 9700, loss = 0.0014465
I0314 09:58:43.134203 25029 solver.cpp:244]     Train net output #0: loss = 0.00144637 (* 1 = 0.00144637 loss)
I0314 09:58:43.134217 25029 sgd_solver.cpp:106] Iteration 9700, lr = 0.00601382
I0314 09:58:44.587978 25029 solver.cpp:228] Iteration 9800, loss = 0.0134798
I0314 09:58:44.588019 25029 solver.cpp:244]     Train net output #0: loss = 0.0134797 (* 1 = 0.0134797 loss)
I0314 09:58:44.588033 25029 sgd_solver.cpp:106] Iteration 9800, lr = 0.00599102
I0314 09:58:46.046571 25029 solver.cpp:228] Iteration 9900, loss = 0.00367283
I0314 09:58:46.046613 25029 solver.cpp:244]     Train net output #0: loss = 0.0036727 (* 1 = 0.0036727 loss)
I0314 09:58:46.046627 25029 sgd_solver.cpp:106] Iteration 9900, lr = 0.00596843
I0314 09:58:47.491849 25029 solver.cpp:454] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I0314 09:58:47.502293 25029 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate
I0314 09:58:47.510488 25029 solver.cpp:317] Iteration 10000, loss = 0.00222019
I0314 09:58:47.510517 25029 solver.cpp:337] Iteration 10000, Testing net (#0)
I0314 09:58:48.453008 25029 solver.cpp:404]     Test net output #0: accuracy = 0.9917
I0314 09:58:48.453052 25029 solver.cpp:404]     Test net output #1: loss = 0.0266635 (* 1 = 0.0266635 loss)
I0314 09:58:48.453065 25029 solver.cpp:322] Optimization Done.
I0314 09:58:48.453073 25029 caffe.cpp:222] Optimization Done.
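The lr values printed by sgd_solver.cpp follow the `inv` learning-rate policy configured in examples/mnist/lenet_solver.prototxt: lr = base_lr * (1 + gamma * iter)^(-power). A quick sketch to check this against the log, using the stock solver parameters shipped with the example (base_lr: 0.01, gamma: 0.0001, power: 0.75):

```python
# Reproduce Caffe's "inv" learning-rate policy:
#   lr = base_lr * (1 + gamma * iter) ** (-power)
# Parameters are the defaults from examples/mnist/lenet_solver.prototxt.
base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(iteration):
    return base_lr * (1 + gamma * iteration) ** (-power)

print(f"{inv_lr(7500):.8f}")   # log above shows: lr = 0.00657236
print(f"{inv_lr(9900):.8f}")   # log above shows: lr = 0.00596843
```

This is why the learning rate decays smoothly from ~0.0066 to ~0.0060 over the iterations shown.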

As the snapshot lines in the log show, the run ends by writing two files under examples/mnist/:

lenet_iter_10000.caffemodel — the trained weights
lenet_iter_10000.solverstate — the solver state, used to resume an interrupted run
