Reposted from:

Caffe Code Walkthrough (5): Testing on a Dataset - 卜居 - Blog - CSDN.NET

http://blog.csdn.net/kkk584520/article/details/41694301

The previous post covered how to prepare the dataset. With that in place, let's first see how to run Testing with a trained model.

We start with the handwritten digit recognition example: the dataset is MNIST (containing both training and test data), and the deep learning model is LeNet, proposed by Prof. Yann LeCun (see http://yann.lecun.com/exdb/lenet/ for details).

If you have already built Caffe, run the following command under CAFFE_ROOT:

$ ./build/tools/caffe.bin test -model=examples/mnist/lenet_train_test.prototxt -weights=examples/mnist/lenet_iter_10000.caffemodel -gpu=0

This runs Testing. The arguments are explained below:

test: tells the tool to run Testing on a trained model rather than training. The other sub-commands are train, time, and device_query.

-model=XXX: specifies the model prototxt file, a text file that fully describes the network structure and the dataset. The prototxt I used is shown below:

name: "LeNet"
layers {
  name: "mnist"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    backend: LMDB
    batch_size: 64
  }
  transform_param {
    scale: 0.00390625
  }
  include: { phase: TRAIN }
}
layers {
  name: "mnist"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    backend: LMDB
    batch_size: 100
  }
  transform_param {
    scale: 0.00390625
  }
  include: { phase: TEST }
}
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "pool1"
  top: "conv2"
  blobs_lr: 1
  blobs_lr: 2
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layers {
  name: "ip1"
  type: INNER_PRODUCT
  bottom: "pool2"
  top: "ip1"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "relu1"
  type: RELU
  bottom: "ip1"
  top: "ip1"
}
layers {
  name: "ip2"
  type: INNER_PRODUCT
  bottom: "ip1"
  top: "ip2"
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "accuracy"
  type: ACCURACY
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include: { phase: TEST }
}
layers {
  name: "loss"
  type: SOFTMAX_LOSS
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
name: "LeNet"
layers {name: "mnist"type: DATAtop: "data"top: "label"data_param {source: "examples/mnist/mnist_train_lmdb"backend: LMDBbatch_size: 64}transform_param {scale: 0.00390625}include: { phase: TRAIN }
}
layers {name: "mnist"type: DATAtop: "data"top: "label"data_param {source: "examples/mnist/mnist_test_lmdb"backend: LMDBbatch_size: 100}transform_param {scale: 0.00390625}include: { phase: TEST }
}layers {name: "conv1"type: CONVOLUTIONbottom: "data"top: "conv1"blobs_lr: 1blobs_lr: 2convolution_param {num_output: 20kernel_size: 5stride: 1weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layers {name: "pool1"type: POOLINGbottom: "conv1"top: "pool1"pooling_param {pool: MAXkernel_size: 2stride: 2}
}
layers {name: "conv2"type: CONVOLUTIONbottom: "pool1"top: "conv2"blobs_lr: 1blobs_lr: 2convolution_param {num_output: 50kernel_size: 5stride: 1weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layers {name: "pool2"type: POOLINGbottom: "conv2"top: "pool2"pooling_param {pool: MAXkernel_size: 2stride: 2}
}
layers {name: "ip1"type: INNER_PRODUCTbottom: "pool2"top: "ip1"blobs_lr: 1blobs_lr: 2inner_product_param {num_output: 500weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layers {name: "relu1"type: RELUbottom: "ip1"top: "ip1"
}
layers {name: "ip2"type: INNER_PRODUCTbottom: "ip1"top: "ip2"blobs_lr: 1blobs_lr: 2inner_product_param {num_output: 10weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layers {name: "accuracy"type: ACCURACYbottom: "ip2"bottom: "label"top: "accuracy"include: { phase: TEST }
}
layers {name: "loss"type: SOFTMAX_LOSSbottom: "ip2"bottom: "label"top: "loss"
}
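One detail in this file worth calling out: transform_param { scale: 0.00390625 } multiplies each raw pixel byte by exactly 1/256, so inputs land in [0, 1) instead of [0, 255]. A tiny NumPy sketch of that normalization (the random array is just a stand-in for one MNIST image, not Caffe's real data path):

import numpy as np

# Stand-in for one raw MNIST image: 28x28 bytes in [0, 255].
raw = np.random.randint(0, 256, size=(28, 28)).astype(np.float32)

# transform_param's scale: 0.00390625 == 1.0 / 256.
scaled = raw * 0.00390625

print(scaled.min(), scaled.max())  # all values now lie in [0, 1)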

The network structure defined in this file is shown as a diagram in the original post (not reproduced here): data → conv1 → pool1 → conv2 → pool2 → ip1 → relu1 → ip2, topped by the accuracy and loss layers.
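As a sanity check on that structure, the spatial sizes reported later in the log's "Top shape" lines can be reproduced by hand: with no padding, a convolution or pooling layer maps spatial size W to (W - K)/S + 1 for kernel size K and stride S. A short plain-Python sketch of the arithmetic, using the kernel and stride values from the prototxt above:

# Output spatial size of a conv/pool layer with no padding:
# W -> (W - K) / S + 1 for kernel size K and stride S.
def out_size(w, k, s):
    return (w - k) // s + 1

w = 28                   # MNIST input: 1 x 28 x 28
w = out_size(w, 5, 1)    # conv1 -> 24   (log: Top shape 100 20 24 24)
w = out_size(w, 2, 2)    # pool1 -> 12   (log: Top shape 100 20 12 12)
w = out_size(w, 5, 1)    # conv2 -> 8    (log: Top shape 100 50 8 8)
w = out_size(w, 2, 2)    # pool2 -> 4    (log: Top shape 100 50 4 4)
print(w)                 # 4; ip1 then maps the flattened 50*4*4 = 800 values to 500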

-weights=XXX: specifies the trained caffemodel binary file. If you don't have one yet, you can download this one (http://download.csdn.net/detail/kkk584520/8219443).

-gpu=0: runs on the GPU with device ID 0. If you have no GPU, drop this argument and Caffe will run on the CPU by default.
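Incidentally, the same Testing pass can also be driven from Python instead of the command-line tool. The sketch below is a minimal pycaffe equivalent; it follows the later pycaffe interface (e.g. the three-argument caffe.Net constructor), and older Caffe versions expose the same idea with slightly different calls, so treat it as illustrative rather than exact:

import caffe

caffe.set_device(0)      # GPU 0, matching -gpu=0
caffe.set_mode_gpu()     # use caffe.set_mode_cpu() if no GPU is available

# Load the TEST-phase net together with the trained weights.
net = caffe.Net('examples/mnist/lenet_train_test.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)

# 50 forward passes of batch_size 100 = the whole 10000-image test set.
n_batches = 50
acc = 0.0
for _ in range(n_batches):
    out = net.forward()           # fetches the next batch from mnist_test_lmdb
    acc += float(out['accuracy'])

print('accuracy =', acc / n_batches)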

The output of the run is as follows:

I1203 18:47:00.073052  4610 caffe.cpp:134] Use GPU with device ID 0
I1203 18:47:00.367065  4610 net.cpp:275] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I1203 18:47:00.367269  4610 net.cpp:39] Initializing net from parameters:
name: "LeNet"
layers {top: "data"top: "label"name: "mnist"type: DATAdata_param {source: "examples/mnist/mnist_test_lmdb"batch_size: 100backend: LMDB}include {phase: TEST}transform_param {scale: 0.00390625}
}
layers {bottom: "data"top: "conv1"name: "conv1"type: CONVOLUTIONblobs_lr: 1blobs_lr: 2convolution_param {num_output: 20kernel_size: 5stride: 1weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layers {bottom: "conv1"top: "pool1"name: "pool1"type: POOLINGpooling_param {pool: MAXkernel_size: 2stride: 2}
}
layers {bottom: "pool1"top: "conv2"name: "conv2"type: CONVOLUTIONblobs_lr: 1blobs_lr: 2convolution_param {num_output: 50kernel_size: 5stride: 1weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layers {bottom: "conv2"top: "pool2"name: "pool2"type: POOLINGpooling_param {pool: MAXkernel_size: 2stride: 2}
}
layers {bottom: "pool2"top: "ip1"name: "ip1"type: INNER_PRODUCTblobs_lr: 1blobs_lr: 2inner_product_param {num_output: 500weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layers {bottom: "ip1"top: "ip1"name: "relu1"type: RELU
}
layers {bottom: "ip1"top: "ip2"name: "ip2"type: INNER_PRODUCTblobs_lr: 1blobs_lr: 2inner_product_param {num_output: 10weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layers {bottom: "ip2"bottom: "label"top: "accuracy"name: "accuracy"type: ACCURACYinclude {phase: TEST}
}
layers {bottom: "ip2"bottom: "label"top: "loss"name: "loss"type: SOFTMAX_LOSS
}
I1203 18:47:00.367391  4610 net.cpp:67] Creating Layer mnist
I1203 18:47:00.367409  4610 net.cpp:356] mnist -> data
I1203 18:47:00.367435  4610 net.cpp:356] mnist -> label
I1203 18:47:00.367451  4610 net.cpp:96] Setting up mnist
I1203 18:47:00.367571  4610 data_layer.cpp:68] Opening lmdb examples/mnist/mnist_test_lmdb
I1203 18:47:00.367609  4610 data_layer.cpp:128] output data size: 100,1,28,28
I1203 18:47:00.367832  4610 net.cpp:103] Top shape: 100 1 28 28 (78400)
I1203 18:47:00.367849  4610 net.cpp:103] Top shape: 100 1 1 1 (100)
I1203 18:47:00.367863  4610 net.cpp:67] Creating Layer label_mnist_1_split
I1203 18:47:00.367873  4610 net.cpp:394] label_mnist_1_split <- label
I1203 18:47:00.367892  4610 net.cpp:356] label_mnist_1_split -> label_mnist_1_split_0
I1203 18:47:00.367908  4610 net.cpp:356] label_mnist_1_split -> label_mnist_1_split_1
I1203 18:47:00.367919  4610 net.cpp:96] Setting up label_mnist_1_split
I1203 18:47:00.367929  4610 net.cpp:103] Top shape: 100 1 1 1 (100)
I1203 18:47:00.367938  4610 net.cpp:103] Top shape: 100 1 1 1 (100)
I1203 18:47:00.367950  4610 net.cpp:67] Creating Layer conv1
I1203 18:47:00.367959  4610 net.cpp:394] conv1 <- data
I1203 18:47:00.367969  4610 net.cpp:356] conv1 -> conv1
I1203 18:47:00.367982  4610 net.cpp:96] Setting up conv1
I1203 18:47:00.392133  4610 net.cpp:103] Top shape: 100 20 24 24 (1152000)
I1203 18:47:00.392204  4610 net.cpp:67] Creating Layer pool1
I1203 18:47:00.392217  4610 net.cpp:394] pool1 <- conv1
I1203 18:47:00.392231  4610 net.cpp:356] pool1 -> pool1
I1203 18:47:00.392247  4610 net.cpp:96] Setting up pool1
I1203 18:47:00.392273  4610 net.cpp:103] Top shape: 100 20 12 12 (288000)
I1203 18:47:00.392297  4610 net.cpp:67] Creating Layer conv2
I1203 18:47:00.392307  4610 net.cpp:394] conv2 <- pool1
I1203 18:47:00.392318  4610 net.cpp:356] conv2 -> conv2
I1203 18:47:00.392330  4610 net.cpp:96] Setting up conv2
I1203 18:47:00.392669  4610 net.cpp:103] Top shape: 100 50 8 8 (320000)
I1203 18:47:00.392729  4610 net.cpp:67] Creating Layer pool2
I1203 18:47:00.392756  4610 net.cpp:394] pool2 <- conv2
I1203 18:47:00.392768  4610 net.cpp:356] pool2 -> pool2
I1203 18:47:00.392781  4610 net.cpp:96] Setting up pool2
I1203 18:47:00.392793  4610 net.cpp:103] Top shape: 100 50 4 4 (80000)
I1203 18:47:00.392810  4610 net.cpp:67] Creating Layer ip1
I1203 18:47:00.392819  4610 net.cpp:394] ip1 <- pool2
I1203 18:47:00.392832  4610 net.cpp:356] ip1 -> ip1
I1203 18:47:00.392844  4610 net.cpp:96] Setting up ip1
I1203 18:47:00.397348  4610 net.cpp:103] Top shape: 100 500 1 1 (50000)
I1203 18:47:00.397372  4610 net.cpp:67] Creating Layer relu1
I1203 18:47:00.397382  4610 net.cpp:394] relu1 <- ip1
I1203 18:47:00.397394  4610 net.cpp:345] relu1 -> ip1 (in-place)
I1203 18:47:00.397407  4610 net.cpp:96] Setting up relu1
I1203 18:47:00.397420  4610 net.cpp:103] Top shape: 100 500 1 1 (50000)
I1203 18:47:00.397434  4610 net.cpp:67] Creating Layer ip2
I1203 18:47:00.397442  4610 net.cpp:394] ip2 <- ip1
I1203 18:47:00.397456  4610 net.cpp:356] ip2 -> ip2
I1203 18:47:00.397469  4610 net.cpp:96] Setting up ip2
I1203 18:47:00.397532  4610 net.cpp:103] Top shape: 100 10 1 1 (1000)
I1203 18:47:00.397547  4610 net.cpp:67] Creating Layer ip2_ip2_0_split
I1203 18:47:00.397557  4610 net.cpp:394] ip2_ip2_0_split <- ip2
I1203 18:47:00.397565  4610 net.cpp:356] ip2_ip2_0_split -> ip2_ip2_0_split_0
I1203 18:47:00.397583  4610 net.cpp:356] ip2_ip2_0_split -> ip2_ip2_0_split_1
I1203 18:47:00.397593  4610 net.cpp:96] Setting up ip2_ip2_0_split
I1203 18:47:00.397603  4610 net.cpp:103] Top shape: 100 10 1 1 (1000)
I1203 18:47:00.397611  4610 net.cpp:103] Top shape: 100 10 1 1 (1000)
I1203 18:47:00.397622  4610 net.cpp:67] Creating Layer accuracy
I1203 18:47:00.397631  4610 net.cpp:394] accuracy <- ip2_ip2_0_split_0
I1203 18:47:00.397640  4610 net.cpp:394] accuracy <- label_mnist_1_split_0
I1203 18:47:00.397650  4610 net.cpp:356] accuracy -> accuracy
I1203 18:47:00.397661  4610 net.cpp:96] Setting up accuracy
I1203 18:47:00.397673  4610 net.cpp:103] Top shape: 1 1 1 1 (1)
I1203 18:47:00.397687  4610 net.cpp:67] Creating Layer loss
I1203 18:47:00.397696  4610 net.cpp:394] loss <- ip2_ip2_0_split_1
I1203 18:47:00.397706  4610 net.cpp:394] loss <- label_mnist_1_split_1
I1203 18:47:00.397714  4610 net.cpp:356] loss -> loss
I1203 18:47:00.397725  4610 net.cpp:96] Setting up loss
I1203 18:47:00.397737  4610 net.cpp:103] Top shape: 1 1 1 1 (1)
I1203 18:47:00.397745  4610 net.cpp:109]     with loss weight 1
I1203 18:47:00.397776  4610 net.cpp:170] loss needs backward computation.
I1203 18:47:00.397785  4610 net.cpp:172] accuracy does not need backward computation.
I1203 18:47:00.397794  4610 net.cpp:170] ip2_ip2_0_split needs backward computation.
I1203 18:47:00.397801  4610 net.cpp:170] ip2 needs backward computation.
I1203 18:47:00.397809  4610 net.cpp:170] relu1 needs backward computation.
I1203 18:47:00.397816  4610 net.cpp:170] ip1 needs backward computation.
I1203 18:47:00.397825  4610 net.cpp:170] pool2 needs backward computation.
I1203 18:47:00.397832  4610 net.cpp:170] conv2 needs backward computation.
I1203 18:47:00.397843  4610 net.cpp:170] pool1 needs backward computation.
I1203 18:47:00.397851  4610 net.cpp:170] conv1 needs backward computation.
I1203 18:47:00.397860  4610 net.cpp:172] label_mnist_1_split does not need backward computation.
I1203 18:47:00.397867  4610 net.cpp:172] mnist does not need backward computation.
I1203 18:47:00.397874  4610 net.cpp:208] This network produces output accuracy
I1203 18:47:00.397884  4610 net.cpp:208] This network produces output loss
I1203 18:47:00.397905  4610 net.cpp:467] Collecting Learning Rate and Weight Decay.
I1203 18:47:00.397915  4610 net.cpp:219] Network initialization done.
I1203 18:47:00.397923  4610 net.cpp:220] Memory required for data: 8086808
I1203 18:47:00.432165  4610 caffe.cpp:145] Running for 50 iterations.
I1203 18:47:00.435849  4610 caffe.cpp:169] Batch 0, accuracy = 0.99
I1203 18:47:00.435879  4610 caffe.cpp:169] Batch 0, loss = 0.018971
I1203 18:47:00.437434  4610 caffe.cpp:169] Batch 1, accuracy = 0.99
I1203 18:47:00.437471  4610 caffe.cpp:169] Batch 1, loss = 0.0117609
I1203 18:47:00.439000  4610 caffe.cpp:169] Batch 2, accuracy = 1
I1203 18:47:00.439020  4610 caffe.cpp:169] Batch 2, loss = 0.00555977
I1203 18:47:00.440551  4610 caffe.cpp:169] Batch 3, accuracy = 0.99
I1203 18:47:00.440575  4610 caffe.cpp:169] Batch 3, loss = 0.0412139
I1203 18:47:00.442105  4610 caffe.cpp:169] Batch 4, accuracy = 0.99
I1203 18:47:00.442126  4610 caffe.cpp:169] Batch 4, loss = 0.0579313
I1203 18:47:00.443619  4610 caffe.cpp:169] Batch 5, accuracy = 0.99
I1203 18:47:00.443639  4610 caffe.cpp:169] Batch 5, loss = 0.0479742
I1203 18:47:00.445159  4610 caffe.cpp:169] Batch 6, accuracy = 0.98
I1203 18:47:00.445179  4610 caffe.cpp:169] Batch 6, loss = 0.0570176
I1203 18:47:00.446712  4610 caffe.cpp:169] Batch 7, accuracy = 0.99
I1203 18:47:00.446732  4610 caffe.cpp:169] Batch 7, loss = 0.0272363
I1203 18:47:00.448249  4610 caffe.cpp:169] Batch 8, accuracy = 1
I1203 18:47:00.448269  4610 caffe.cpp:169] Batch 8, loss = 0.00680142
I1203 18:47:00.449801  4610 caffe.cpp:169] Batch 9, accuracy = 0.98
I1203 18:47:00.449821  4610 caffe.cpp:169] Batch 9, loss = 0.0288398
I1203 18:47:00.451352  4610 caffe.cpp:169] Batch 10, accuracy = 0.98
I1203 18:47:00.451372  4610 caffe.cpp:169] Batch 10, loss = 0.0603264
I1203 18:47:00.452883  4610 caffe.cpp:169] Batch 11, accuracy = 0.98
I1203 18:47:00.452903  4610 caffe.cpp:169] Batch 11, loss = 0.0524943
I1203 18:47:00.454407  4610 caffe.cpp:169] Batch 12, accuracy = 0.95
I1203 18:47:00.454427  4610 caffe.cpp:169] Batch 12, loss = 0.106648
I1203 18:47:00.455955  4610 caffe.cpp:169] Batch 13, accuracy = 0.98
I1203 18:47:00.455976  4610 caffe.cpp:169] Batch 13, loss = 0.0450225
I1203 18:47:00.457484  4610 caffe.cpp:169] Batch 14, accuracy = 1
I1203 18:47:00.457504  4610 caffe.cpp:169] Batch 14, loss = 0.00531614
I1203 18:47:00.459038  4610 caffe.cpp:169] Batch 15, accuracy = 0.98
I1203 18:47:00.459056  4610 caffe.cpp:169] Batch 15, loss = 0.065209
I1203 18:47:00.460577  4610 caffe.cpp:169] Batch 16, accuracy = 0.98
I1203 18:47:00.460597  4610 caffe.cpp:169] Batch 16, loss = 0.0520317
I1203 18:47:00.462123  4610 caffe.cpp:169] Batch 17, accuracy = 0.99
I1203 18:47:00.462143  4610 caffe.cpp:169] Batch 17, loss = 0.0328681
I1203 18:47:00.463656  4610 caffe.cpp:169] Batch 18, accuracy = 0.99
I1203 18:47:00.463676  4610 caffe.cpp:169] Batch 18, loss = 0.0175973
I1203 18:47:00.465188  4610 caffe.cpp:169] Batch 19, accuracy = 0.97
I1203 18:47:00.465208  4610 caffe.cpp:169] Batch 19, loss = 0.0576884
I1203 18:47:00.466749  4610 caffe.cpp:169] Batch 20, accuracy = 0.97
I1203 18:47:00.466769  4610 caffe.cpp:169] Batch 20, loss = 0.0850501
I1203 18:47:00.468278  4610 caffe.cpp:169] Batch 21, accuracy = 0.98
I1203 18:47:00.468298  4610 caffe.cpp:169] Batch 21, loss = 0.0676049
I1203 18:47:00.469805  4610 caffe.cpp:169] Batch 22, accuracy = 0.99
I1203 18:47:00.469825  4610 caffe.cpp:169] Batch 22, loss = 0.0448538
I1203 18:47:00.471328  4610 caffe.cpp:169] Batch 23, accuracy = 0.97
I1203 18:47:00.471349  4610 caffe.cpp:169] Batch 23, loss = 0.0333992
I1203 18:47:00.487124  4610 caffe.cpp:169] Batch 24, accuracy = 1
I1203 18:47:00.487180  4610 caffe.cpp:169] Batch 24, loss = 0.0281527
I1203 18:47:00.489002  4610 caffe.cpp:169] Batch 25, accuracy = 0.99
I1203 18:47:00.489048  4610 caffe.cpp:169] Batch 25, loss = 0.0545881
I1203 18:47:00.490890  4610 caffe.cpp:169] Batch 26, accuracy = 0.98
I1203 18:47:00.490932  4610 caffe.cpp:169] Batch 26, loss = 0.115576
I1203 18:47:00.492620  4610 caffe.cpp:169] Batch 27, accuracy = 1
I1203 18:47:00.492640  4610 caffe.cpp:169] Batch 27, loss = 0.0149555
I1203 18:47:00.494161  4610 caffe.cpp:169] Batch 28, accuracy = 0.98
I1203 18:47:00.494181  4610 caffe.cpp:169] Batch 28, loss = 0.0398991
I1203 18:47:00.495693  4610 caffe.cpp:169] Batch 29, accuracy = 0.96
I1203 18:47:00.495713  4610 caffe.cpp:169] Batch 29, loss = 0.115862
I1203 18:47:00.497226  4610 caffe.cpp:169] Batch 30, accuracy = 1
I1203 18:47:00.497246  4610 caffe.cpp:169] Batch 30, loss = 0.0116793
I1203 18:47:00.498785  4610 caffe.cpp:169] Batch 31, accuracy = 1
I1203 18:47:00.498817  4610 caffe.cpp:169] Batch 31, loss = 0.00451814
I1203 18:47:00.500329  4610 caffe.cpp:169] Batch 32, accuracy = 0.98
I1203 18:47:00.500349  4610 caffe.cpp:169] Batch 32, loss = 0.0244668
I1203 18:47:00.501878  4610 caffe.cpp:169] Batch 33, accuracy = 1
I1203 18:47:00.501899  4610 caffe.cpp:169] Batch 33, loss = 0.00285445
I1203 18:47:00.503411  4610 caffe.cpp:169] Batch 34, accuracy = 0.98
I1203 18:47:00.503429  4610 caffe.cpp:169] Batch 34, loss = 0.0566256
I1203 18:47:00.504940  4610 caffe.cpp:169] Batch 35, accuracy = 0.95
I1203 18:47:00.504961  4610 caffe.cpp:169] Batch 35, loss = 0.154924
I1203 18:47:00.506500  4610 caffe.cpp:169] Batch 36, accuracy = 1
I1203 18:47:00.506520  4610 caffe.cpp:169] Batch 36, loss = 0.00451233
I1203 18:47:00.508111  4610 caffe.cpp:169] Batch 37, accuracy = 0.97
I1203 18:47:00.508131  4610 caffe.cpp:169] Batch 37, loss = 0.0572309
I1203 18:47:00.509635  4610 caffe.cpp:169] Batch 38, accuracy = 0.99
I1203 18:47:00.509655  4610 caffe.cpp:169] Batch 38, loss = 0.0192229
I1203 18:47:00.511181  4610 caffe.cpp:169] Batch 39, accuracy = 0.99
I1203 18:47:00.511200  4610 caffe.cpp:169] Batch 39, loss = 0.029272
I1203 18:47:00.512725  4610 caffe.cpp:169] Batch 40, accuracy = 0.99
I1203 18:47:00.512745  4610 caffe.cpp:169] Batch 40, loss = 0.0258552
I1203 18:47:00.514317  4610 caffe.cpp:169] Batch 41, accuracy = 0.99
I1203 18:47:00.514338  4610 caffe.cpp:169] Batch 41, loss = 0.0752082
I1203 18:47:00.515854  4610 caffe.cpp:169] Batch 42, accuracy = 1
I1203 18:47:00.515873  4610 caffe.cpp:169] Batch 42, loss = 0.0283319
I1203 18:47:00.517379  4610 caffe.cpp:169] Batch 43, accuracy = 0.99
I1203 18:47:00.517398  4610 caffe.cpp:169] Batch 43, loss = 0.0112394
I1203 18:47:00.518925  4610 caffe.cpp:169] Batch 44, accuracy = 0.98
I1203 18:47:00.518946  4610 caffe.cpp:169] Batch 44, loss = 0.0413653
I1203 18:47:00.520457  4610 caffe.cpp:169] Batch 45, accuracy = 0.98
I1203 18:47:00.520478  4610 caffe.cpp:169] Batch 45, loss = 0.0501227
I1203 18:47:00.521989  4610 caffe.cpp:169] Batch 46, accuracy = 1
I1203 18:47:00.522009  4610 caffe.cpp:169] Batch 46, loss = 0.0114459
I1203 18:47:00.523540  4610 caffe.cpp:169] Batch 47, accuracy = 1
I1203 18:47:00.523561  4610 caffe.cpp:169] Batch 47, loss = 0.0163504
I1203 18:47:00.525075  4610 caffe.cpp:169] Batch 48, accuracy = 0.97
I1203 18:47:00.525095  4610 caffe.cpp:169] Batch 48, loss = 0.0450363
I1203 18:47:00.526633  4610 caffe.cpp:169] Batch 49, accuracy = 1
I1203 18:47:00.526651  4610 caffe.cpp:169] Batch 49, loss = 0.0046898
I1203 18:47:00.526662  4610 caffe.cpp:174] Loss: 0.041468
I1203 18:47:00.526674  4610 caffe.cpp:186] accuracy = 0.9856
I1203 18:47:00.526687  4610 caffe.cpp:186] loss = 0.041468 (* 1 = 0.041468 loss)
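
The closing lines aggregate the per-batch results: 50 batches of 100 images cover the full 10000-image MNIST test set, and the reported accuracy = 0.9856 is simply the mean of the 50 batch accuracies (the loss is averaged the same way). A quick check, with the batch accuracies copied from the log above:

# Per-batch accuracies copied from the log above (Batch 0 .. Batch 49).
batch_acc = [
    0.99, 0.99, 1.00, 0.99, 0.99, 0.99, 0.98, 0.99, 1.00, 0.98,
    0.98, 0.98, 0.95, 0.98, 1.00, 0.98, 0.98, 0.99, 0.99, 0.97,
    0.97, 0.98, 0.99, 0.97, 1.00, 0.99, 0.98, 1.00, 0.98, 0.96,
    1.00, 1.00, 0.98, 1.00, 0.98, 0.95, 1.00, 0.97, 0.99, 0.99,
    0.99, 0.99, 1.00, 0.99, 0.98, 0.98, 1.00, 1.00, 0.97, 1.00,
]

# Each batch holds 100 images, so 50 batches = the full 10000-image test set,
# and the overall accuracy is the plain mean of the batch accuracies.
print(sum(batch_acc) / len(batch_acc))  # 0.9856 (up to float rounding)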
