Abstract: All of the mechanically rotated digit characters were merged into one large training set (3415 samples). 80% of the samples were used as the training set, and a LeNet network was trained on them. The final recognition rate on the test set is about 95%; samples misclassified by more than 1 account for only 0.7%.

Keywords: LeNet, rotated digits


Table of Contents
  Merging the Digit Datasets
    Rotated Mechanical Digits
    Merged Digit Dataset (Grayscale)
  Training the LeNet Network
    Building the Network

§01 Merging the Digit Datasets


1.1 Rotated Mechanical Digits

  In 2021 Artificial Neural Networks, Assignment 4, Problem 4: Rotating Digits, experiments were run on a set of digit images captured from the display dial of a mechanical electric energy meter. The trained LeNet network turned out to generalize poorly.

1.1.1 Training digit set

(1) Training set

Dataset parameters:
  Count: 1000
  Size: 38×56
  Color: color

  All of the digits were converted to grayscale images before training.

▲ Figure 1.1.1 The 1000 color digit samples used for training

(2) Test set

Dataset parameters:
  Count: 180
  Size: 39×56
  Color: black and white

▲ Figure 1.1.2 Test dataset

1.1.2 Training results

  Below are the results of training on the 1000-character set and recognizing the 180 test samples. A Dropout layer was added on top of the traditional LeNet; the recognition rate on the test set was roughly 50% to 60%.

▲ Figure 1.1.3 Training results
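The post does not show where the Dropout layer was inserted into LeNet. What such a layer does during training can be sketched in plain numpy (this mirrors the inverted-dropout behavior of `paddle.nn.Dropout`; the activation array here is hypothetical):

```python
import numpy as np

def dropout(x, p=0.5, rng=np.random.default_rng(0)):
    # inverted dropout: zero each activation with probability p,
    # scale survivors by 1/(1-p) so the expected value is unchanged
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

x = np.ones((4, 120))                     # pretend activations of a 120-unit FC layer
y = dropout(x, p=0.5)
print(sorted(np.unique(y).tolist()))      # [0.0, 2.0]
```

At inference time the layer is a no-op, which is why `model.train()` / `model.eval()` mode switching matters when Dropout is present.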

1.2 Merged Digit Dataset (Grayscale)

  In Processing image data: stretching, rotating, shearing, etc., six digit datasets were merged into one.

  Below is the merged digit dataset.

▲ Figure 1.2.1 The merged digit dataset

  This set is used below to train LeNet and to measure the resulting accuracy.

1.2.1 Converting the merged set to grayscale

  Before training, the whole dataset is converted to grayscale images.

(1) Conversion code

```python
grayimg = [mean(img, axis=2).T[:, :, newaxis].T for img in imgall]
print(type(grayimg))
print(shape(grayimg))
```

```
<class 'list'>
(3415, 1, 45, 35)
```

Merged dataset parameters:
  Count: 3415
  Color: grayscale
  Shape: (3415, 1, 45, 35)
  Type: list, each item an array
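The one-liner above packs several transposes together. A small numpy sketch (with a dummy image standing in for a real sample) shows it is just "average the color channels, then move to a leading channel axis":

```python
import numpy as np

img = np.random.rand(45, 35, 3)            # dummy HWC color image: height 45, width 35
gray = np.mean(img, axis=2)                # average the 3 channels -> (45, 35)
chw = gray.T[:, :, np.newaxis].T           # the same trick as above -> (1, 45, 35)
print(chw.shape)                           # (1, 45, 35)

# an equivalent, arguably clearer form:
chw2 = np.mean(img, axis=2)[np.newaxis, :, :]
assert np.allclose(chw, chw2)
```

Both forms yield the (channel, height, width) layout that `paddle.nn.Conv2D` expects.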

(2) Checking the result

```python
PIC_ROW         = 3
PIC_COL         = 5
plt.figure(figsize=(10,9))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        img = grayimg[index[id]][0]
        plt.imshow(img, cmap=plt.cm.gray)
        plt.title(str(imglabel[index[id]]), fontsize=20, color='r')
```

▲ Figure 1.2.2 The dataset after grayscale conversion

(3) Saving the dataset

```python
outdir = '/home/aistudio/work/rotatedigit/alldigit30_45'
outfile = 'alldigit_gray'
savez(os.path.join(outdir, outfile), imgdata=grayimg, imglabel=imglabel)
```

  The data is stored at:

/home/aistudio/work/rotatedigit/alldigit30_45/alldigit_gray.npz
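As a sanity check, an `.npz` archive saved this way can be read back with `numpy.load`. A minimal sketch, using throw-away arrays and a temporary directory rather than the real path above:

```python
import os, tempfile
import numpy as np

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'alldigit_gray')      # hypothetical stand-in path
np.savez(path, imgdata=np.zeros((4, 1, 45, 35)), imglabel=np.array([0, 1, 2, 3]))

data = np.load(path + '.npz')                     # savez appends the .npz suffix
print(data.files)                                 # the two saved keys
print(data['imgdata'].shape)                      # (4, 1, 45, 35)
```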

§02 Training the LeNet Network


2.1 Building the Network

2.1.1 Loading the data

```python
import sys,os,math,time
import matplotlib.pyplot as plt
from numpy import *
import paddle
#------------------------------------------------------------
datafile = '/home/aistudio/work/rotatedigit/alldigit30_45/alldigit_gray.npz'
data = load(datafile)
print(data.files)
imgdata = data['imgdata'].tolist()
imglabel = data['imglabel'].tolist()
#------------------------------------------------------------
index = list(range(len(imglabel)))
random.shuffle(index)
TRAIN_RATIO     = 0.8
train_number = int(len(imglabel)*TRAIN_RATIO)
traindata = [imgdata[id] for id in index[:train_number]]
trainlabel = [int(imglabel[id]) for id in index[:train_number]]
testdata = [imgdata[id] for id in index[train_number:]]
testlabel = [int(imglabel[id]) for id in index[train_number:]]
```

(1) Inspecting the data

```python
print(type(traindata), "\n", shape(traindata))
print(type(trainlabel), "\n", shape(trainlabel))
```

```
['imgdata', 'imglabel']
<class 'list'> (2732, 1, 45, 35)
<class 'list'> (2732,)
```

```python
PIC_ROW         = 3
PIC_COL         = 5
plt.figure(figsize=(10,8))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        plt.imshow(traindata[id][0], cmap=plt.cm.gray)
        plt.title(str(trainlabel[id]), fontsize=12, color='blue')
```

▲ Figure 2.1.1 Sample digits from the dataset

(2) Building the training data loader

```python
class Dataset(paddle.io.Dataset):
    def __init__(self, num_samples):
        super(Dataset, self).__init__()
        self.num_samples = num_samples
    def __getitem__(self, index):
        data = traindata[index]
        label = trainlabel[index]
        return paddle.to_tensor(data, dtype='float32'), paddle.to_tensor(label, dtype='int64')
    def __len__(self):
        return self.num_samples

_dataset = Dataset(len(trainlabel))
train_loader = paddle.io.DataLoader(_dataset, batch_size=100, shuffle=True)
```

Ⅰ. Testing the loader

```python
tdata = train_loader().next()
traind = tdata[0].numpy()
trainl = tdata[1].numpy()
#------------------------------------------------------------
PIC_ROW         = 3
PIC_COL         = 5
plt.figure(figsize=(10,8))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        plt.imshow(traind[id][0], cmap=plt.cm.gray)
        plt.title(str(trainl[id]), fontsize=12, color='blue')
```

▲ Figure 2.1.2 Images obtained from the training data loader

2.1.2 Building LeNet

```python
imgwidth = 35
imgheight = 45
inputchannel = 1
kernelsize   = 5
targetsize = 10
ftwidth = ((imgwidth-kernelsize+1)//2-kernelsize+1)//2
ftheight = ((imgheight-kernelsize+1)//2-kernelsize+1)//2

class lenet(paddle.nn.Layer):
    def __init__(self):
        super(lenet, self).__init__()
        self.conv1 = paddle.nn.Conv2D(in_channels=inputchannel, out_channels=6,
                                      kernel_size=kernelsize, stride=1, padding=0)
        self.conv2 = paddle.nn.Conv2D(in_channels=6, out_channels=16,
                                      kernel_size=kernelsize, stride=1, padding=0)
        self.mp1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.mp2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.L1  = paddle.nn.Linear(in_features=ftwidth*ftheight*16, out_features=120)
        self.L2  = paddle.nn.Linear(in_features=120, out_features=86)
        self.L3  = paddle.nn.Linear(in_features=86, out_features=targetsize)
    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = self.L3(x)
        return x

model = lenet()
```
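The `ftwidth`/`ftheight` formulas above track two valid (unpadded) 5×5 convolutions, each followed by 2×2 max pooling. Spelling the arithmetic out confirms the input size of the first fully connected layer:

```python
imgwidth, imgheight, kernelsize = 35, 45, 5

# width:  35 -conv-> 31 -pool-> 15 -conv-> 11 -pool-> 5
ftwidth  = ((imgwidth  - kernelsize + 1)//2 - kernelsize + 1)//2
# height: 45 -conv-> 41 -pool-> 20 -conv-> 16 -pool-> 8
ftheight = ((imgheight - kernelsize + 1)//2 - kernelsize + 1)//2

print(ftwidth, ftheight)              # 5 8
print(ftwidth * ftheight * 16)        # 640 input features for L1 (16 channels)
```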

2.1.3 Building the training function

```python
#------------------------------------------------------------
# model: the lenet defined in section 2.1.2 (imgwidth=35, imgheight=45)
model = lenet()
optimizer = paddle.optimizer.Adam(learning_rate=0.01, parameters=model.parameters())

def train(model):
    model.train()
    epochs = 10
    accdim = []
    lossdim = []
    for epoch in range(epochs):
        for batch, data in enumerate(train_loader()):
            out = model(data[0])
            loss = paddle.nn.functional.cross_entropy(out, data[1])
            acc = paddle.metric.accuracy(out, data[1])
            loss.backward()
            optimizer.step()
            optimizer.clear_grad()
            accdim.append(acc.numpy())
            lossdim.append(loss.numpy())
        print('Epoch:{}, Loss:{}, Accuracys:{}'.format(epoch, loss.numpy(), acc.numpy()))
    plt.figure(figsize=(10, 6))
    plt.plot(accdim, label='Accuracy')
    plt.plot(lossdim, label='Loss')
    plt.xlabel('Step')
    plt.ylabel('Acc,Loss')
    plt.grid(True)
    plt.legend(loc='upper left')
    plt.tight_layout()
    plt.show()

train(model)
paddle.save(model.state_dict(), './work/model.pdparams')
```
paddle.save(model.state_dict(), './work/model.pdparams')

▲ Figure 2.1.7 Width=35, Height=45, LR=0.02

2.2 Testing the Network

  The model is tested with the held-out test data (testdata / testlabel).

2.2.1 Recording test-set accuracy

```python
#------------------------------------------------------------
# model: the same lenet as in section 2.1.2, re-initialized
model = lenet()
optimizer = paddle.optimizer.Adam(learning_rate=0.01, parameters=model.parameters())
testinput = paddle.to_tensor(testdata, dtype='float32')
testl     = paddle.to_tensor(array(testlabel)[:, newaxis], dtype='int64')

def train(model):
    model.train()
    epochs = 10
    accdim = []
    lossdim = []
    testaccdim = []
    for epoch in range(epochs):
        for batch, data in enumerate(train_loader()):
            out = model(data[0])
            loss = paddle.nn.functional.cross_entropy(out, data[1])
            acc = paddle.metric.accuracy(out, data[1])
            loss.backward()
            optimizer.step()
            optimizer.clear_grad()
            testout = model(testinput)
            testacc = paddle.metric.accuracy(testout, testl)
            testaccdim.append(testacc.numpy())
            accdim.append(acc.numpy())
            lossdim.append(loss.numpy())
        print('Epoch:{}, Loss:{}, Accuracys:{},{}'.format(epoch, loss.numpy(), acc.numpy(), testacc.numpy()))
    plt.figure(figsize=(10, 6))
    plt.plot(accdim, label='Accuracy')
    plt.plot(lossdim, label='Loss')
    plt.plot(testaccdim, label='Test')
    plt.xlabel('Step')
    plt.ylabel('Acc,Loss')
    plt.grid(True)
    plt.legend(loc='upper right')
    plt.tight_layout()
    plt.show()

train(model)
paddle.save(model.state_dict(), './work/model.pdparams')
```

▲ Figure 2.2.1 Training Loss, Acc and Test accuracy: LR=0.01

2.2.2 Examining misclassified samples

```python
testout = model(testinput)
testtarget = paddle.fluid.layers.argmax(testout, axis=1).numpy()
errorid = where(testtarget != array(testlabel))[0]
print(errorid)
print(len(errorid), "\n", len(testlabel))
```

```
[  3  10  71  96 101 109 118 120 130 150 173 240 249 251 252 276 297 298
 316 321 365 375 420 475 490 512 513 516 561 571 582 593 609 677]
34 683
```

  Among all 683 test samples there are 34 errors, an error rate of about 5%.
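The 5% figure here, and the 0.7% figure of section 2.2.3, both follow from a simple count over the error indices. A sketch on hypothetical labels and predictions (the real arrays come from the model):

```python
import numpy as np

testlabel  = np.array([3, 5, 7, 2, 9, 0])     # hypothetical ground truth
testtarget = np.array([3, 6, 7, 2, 4, 0])     # hypothetical predictions

errors      = np.where(testtarget != testlabel)[0]                       # all mismatches
largeerrors = [i for i in errors if abs(int(testlabel[i]) - int(testtarget[i])) > 1]

print(len(errors) / len(testlabel))       # plain error rate (2/6 here)
print(len(largeerrors) / len(testlabel))  # rate of errors worse than +-1 (1/6 here)
```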

  All of the misclassified samples are shown below:

```python
PIC_ROW         = 4
PIC_COL         = 9
plt.figure(figsize=(15,8))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        if id >= len(errorid): break
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        eid = errorid[id]
        plt.imshow(testdata[eid][0], cmap=plt.cm.gray)
        plt.title('%d->%d'%(testlabel[eid], testtarget[eid]), fontsize=12, color='blue')
```

▲ Figure 2.2.2 The samples the model gets wrong

2.2.3 Errors with a difference greater than 1

  Because the labels of the training set themselves carry some inherent error, samples whose predicted value differs from the true value by exactly 1 can be ignored. Keeping only the samples where the difference exceeds 1 leaves just the following five misclassified samples.

```python
largeerrorid = [id for id in errorid if abs(testlabel[id]-testtarget[id]) > 1]
print(largeerrorid)

PIC_ROW         = 1
PIC_COL         = 5
plt.figure(figsize=(15,5))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        if id >= len(largeerrorid): break
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        eid = largeerrorid[id]
        plt.imshow(testdata[eid][0], cmap=plt.cm.gray)
        plt.title('%d->%d'%(testlabel[eid], testtarget[eid]), fontsize=12, color='blue')
```

  Samples misclassified by more than 1 account for only 0.7%.

▲ Figure 2.2.3 Samples where the predicted value and the true value differ by more than 1

※ Experiment Summary ※


  All of the mechanically rotated digit characters were merged into one large training set (3415 samples), 80% of which was used to train a LeNet network. The final recognition rate on the test set is about 95%; samples misclassified by more than 1 account for only 0.7%.

3.1 Further Experiments

  • Construct a smaller network and simplify the model;
  • Apply image augmentation techniques suited to this dataset to further reduce overfitting during training;
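For the second point, one cheap augmentation suited to meter digits (a suggestion, not something tried in this post) is a small vertical shift, mimicking the partial roll of a mechanical counter wheel. A numpy-only sketch:

```python
import numpy as np

def shift_augment(img, max_shift=3, rng=np.random.default_rng(1)):
    """img: one (1, H, W) grayscale sample; returns a randomly shifted copy."""
    dy = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(img, dy, axis=1)      # roll along the height axis

sample = np.zeros((1, 45, 35))
sample[0, 20, :] = 1.0                   # a single bright row as a stand-in digit
aug = shift_augment(sample)
print(aug.shape)                         # (1, 45, 35)
```

Such a transform would fit naturally inside `Dataset.__getitem__`, so each epoch sees slightly different samples.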

3.2 Appendix

3.2.1 Experiment program

```python
#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# TEST2.PY                     -- by Dr. ZhuoQing 2021-12-19
#
# Note:
#============================================================
from headm import *                 # =
import paddle
#------------------------------------------------------------
datafile = '/home/aistudio/work/rotatedigit/alldigit30_45/alldigit_gray.npz'
data = load(datafile)
#print(data.files)
imgdata = data['imgdata']
imglabel = data['imglabel']
#------------------------------------------------------------
printf(type(imgdata), shape(imgdata))
#------------------------------------------------------------
index = list(range(len(imglabel)))
random.shuffle(index)
TRAIN_RATIO     = 0.8
train_number = int(len(imglabel)*TRAIN_RATIO)
traindata = [imgdata[id] for id in index[:train_number]]
trainlabel = [int(imglabel[id]) for id in index[:train_number]]
testdata = [imgdata[id] for id in index[train_number:]]
testlabel = [int(imglabel[id]) for id in index[train_number:]]
#------------------------------------------------------------
'''
printf(type(traindata), shape(traindata))
printf(type(trainlabel), shape(trainlabel))
printf(trainlabel[:100])
'''
#------------------------------------------------------------
'''
print(traindata[0])
print(type(traindata[0]))
print(shape(traindata[0]))
'''
#------------------------------------------------------------
'''
PIC_ROW         = 3
PIC_COL         = 5
plt.figure(figsize=(10,8))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        plt.imshow(traindata[id][0], cmap=plt.cm.gray)
        plt.title(str(trainlabel[id]), fontsize=12, color='blue')
'''
#------------------------------------------------------------
class Dataset(paddle.io.Dataset):
    def __init__(self, num_samples):
        super(Dataset, self).__init__()
        self.num_samples = num_samples
    def __getitem__(self, index):
        data = traindata[index]
        label = trainlabel[index]
        return paddle.to_tensor(data, dtype='float32'), paddle.to_tensor(label, dtype='int64')
    def __len__(self):
        return self.num_samples

_dataset = Dataset(len(trainlabel))
train_loader = paddle.io.DataLoader(_dataset, batch_size=100, shuffle=True)
#------------------------------------------------------------
tdata = train_loader().next()
traind = tdata[0].numpy()
trainl = tdata[1].numpy()
#------------------------------------------------------------
PIC_ROW         = 3
PIC_COL         = 5
plt.figure(figsize=(10,8))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        plt.imshow(traind[id][0], cmap=plt.cm.gray)
        plt.title(str(trainl[id]), fontsize=12, color='blue')
#------------------------------------------------------------
imgwidth = 35
imgheight = 45
inputchannel = 1
kernelsize   = 5
targetsize = 10
ftwidth = ((imgwidth-kernelsize+1)//2-kernelsize+1)//2
ftheight = ((imgheight-kernelsize+1)//2-kernelsize+1)//2

class lenet(paddle.nn.Layer):
    def __init__(self):
        super(lenet, self).__init__()
        self.conv1 = paddle.nn.Conv2D(in_channels=inputchannel, out_channels=6,
                                      kernel_size=kernelsize, stride=1, padding=0)
        self.conv2 = paddle.nn.Conv2D(in_channels=6, out_channels=16,
                                      kernel_size=kernelsize, stride=1, padding=0)
        self.mp1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.mp2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.L1  = paddle.nn.Linear(in_features=ftwidth*ftheight*16, out_features=120)
        self.L2  = paddle.nn.Linear(in_features=120, out_features=86)
        self.L3  = paddle.nn.Linear(in_features=86, out_features=targetsize)
    def forward(self, x):
        x = self.conv1(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp1(x)
        x = self.conv2(x)
        x = paddle.nn.functional.relu(x)
        x = self.mp2(x)
        x = paddle.flatten(x, start_axis=1, stop_axis=-1)
        x = self.L1(x)
        x = paddle.nn.functional.relu(x)
        x = self.L2(x)
        x = paddle.nn.functional.relu(x)
        x = self.L3(x)
        return x

model = lenet()
#------------------------------------------------------------
optimizer = paddle.optimizer.Adam(learning_rate=0.01, parameters=model.parameters())
testinput = paddle.to_tensor(testdata, dtype='float32')
testl     = paddle.to_tensor(array(testlabel)[:,newaxis], dtype='int64')

def train(model):
    model.train()
    epochs = 10
    accdim = []
    lossdim = []
    testaccdim = []
    for epoch in range(epochs):
        for batch, data in enumerate(train_loader()):
            out = model(data[0])
            loss = paddle.nn.functional.cross_entropy(out, data[1])
            acc = paddle.metric.accuracy(out, data[1])
            loss.backward()
            optimizer.step()
            optimizer.clear_grad()
            testout = model(testinput)
            testacc = paddle.metric.accuracy(testout, testl)
            testaccdim.append(testacc.numpy())
            accdim.append(acc.numpy())
            lossdim.append(loss.numpy())
        print('Epoch:{}, Loss:{}, Accuracys:{},{}'.format(epoch, loss.numpy(), acc.numpy(), testacc.numpy()))
    plt.figure(figsize=(10, 6))
    plt.plot(accdim, label='Accuracy')
    plt.plot(lossdim, label='Loss')
    plt.plot(testaccdim, label='Test')
    plt.xlabel('Step')
    plt.ylabel('Acc,Loss')
    plt.grid(True)
    plt.legend(loc='upper right')
    plt.tight_layout()
    plt.show()

train(model)
paddle.save(model.state_dict(), './work/model.pdparams')
#------------------------------------------------------------
testout = model(testinput)
testtarget = paddle.fluid.layers.argmax(testout, axis=1).numpy()
#printf(testtarget)
#printf(testlabel)
errorid = where(testtarget != array(testlabel))[0]
print(errorid)
printf(len(errorid), len(testlabel))
#------------------------------------------------------------
PIC_ROW         = 4
PIC_COL         = 9
plt.figure(figsize=(15,8))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        if id >= len(errorid): break
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        eid = errorid[id]
        plt.imshow(testdata[eid][0], cmap=plt.cm.gray)
        plt.title('%d->%d'%(testlabel[eid], testtarget[eid]), fontsize=12, color='blue')
#------------------------------------------------------------
largeerrorid = [id for id in errorid if abs(testlabel[id]-testtarget[id]) > 1]
printf(largeerrorid)
PIC_ROW         = 1
PIC_COL         = 5
plt.figure(figsize=(15,5))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        if id >= len(largeerrorid): break
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.axis('off')
        eid = largeerrorid[id]
        plt.imshow(testdata[eid][0], cmap=plt.cm.gray)
        plt.title('%d->%d'%(testlabel[eid], testtarget[eid]), fontsize=12, color='blue')
#------------------------------------------------------------
#        END OF FILE : TEST2.PY
#============================================================
```

3.2.2 Data processing program

#!/usr/local/bin/python
# -*- coding: gbk -*-
#============================================================
# TEST1.PY                     -- by Dr. ZhuoQing 2021-12-19
#
# Note:
#============================================================
from headm import *                 # =
import mat4py
import cv2

#------------------------------------------------------------
rotatedir = '/home/aistudio/work/rotatedigit'
filedim = os.listdir(rotatedir)
matfile = []
for f in filedim:
    if f.find('_mod') > 0:
        matfile.append(os.path.join(rotatedir, f))
print(len(matfile))

#------------------------------------------------------------
gifid = 5
printf(matfile[gifid])

gifpath = '/home/aistudio/GIF'
filedim = os.listdir(gifpath)
for f in filedim:
    fn = os.path.join(gifpath, f)
    if os.path.isfile(fn):
        os.remove(fn)

datafile = mat4py.loadmat(matfile[gifid])
datalabel = datafile['labels']
dataimage = datafile['digit2']
digitsize = datafile['digitsize']
printf(len(datalabel))
printf(shape(dataimage))
printf(digitsize)

#------------------------------------------------------------
'''
img = array(dataimage[0]).reshape(3,digitsize[1], digitsize[0]).swapaxes(1,2).T
plt.imshow(img)
'''

#------------------------------------------------------------
for i in range(len(datalabel)):
    img = array(dataimage[i]).reshape(3,digitsize[1], digitsize[0]).swapaxes(1,2).T
    plt.figure(figsize=(5,5))
    plt.imshow(img)
    savefile = os.path.join(gifpath, '%03d.jpg'%i)
    plt.savefig(savefile)
    printf('%03d.jpg'%i)

#------------------------------------------------------------
outdir = '/home/aistudio/work/rotatedigit/alldigit30_45'
outfile = 'alldigit'
RESIZE_HEIGHT = 45
RESIZE_WIDTH = 35

alldata = []
alllabel = []
for f in matfile:
    datafile = mat4py.loadmat(f)
    datalabel = datafile['labels']
    dataimage = datafile['digit2']
    digitsize = datafile['digitsize']
    for i in tqdm(range(len(datalabel))):
        img = array(dataimage[i]).reshape(3, digitsize[1], digitsize[0]).swapaxes(1,2).T/255
        resizeimg = cv2.resize(img, (RESIZE_WIDTH, RESIZE_HEIGHT))
        alldata.append(resizeimg)
        alllabel.append(datalabel[i])

#        plt.subplot(121)
#        plt.imshow(resizeimg)
#        plt.subplot(122)
#        plt.imshow(img)
#        break
#    break

savez(os.path.join(outdir, outfile), imgdata = alldata, imglabel=alllabel)
printf('\a')

#------------------------------------------------------------
allzip = load(os.path.join(outdir, outfile+'.npz'))
print(allzip.files)

imgall = allzip['imgdata']
imglabel = allzip['imglabel']
printf(len(imgall))

#------------------------------------------------------------
index = list(range(len(imgall)))
random.shuffle(index)
#printf(index)

#------------------------------------------------------------
PIC_ROW         = 3
PIC_COL         = 5
plt.figure(figsize=(10,6))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        plt.imshow(imgall[index[id]])

#------------------------------------------------------------
datafile = mat4py.loadmat(matfile[0])
datalabel = datafile['labels']
dataimage = datafile['digit2']
digitsize = datafile['digitsize']
imgdata = array(dataimage[0]).reshape(3, digitsize[1], digitsize[0]).swapaxes(1,2).T/255
#plt.imshow(imgdata)

#------------------------------------------------------------
from paddle.vision.transforms import Resize
img1 = Resize(size=(32,32))(imgdata)
printf(type(img1))
print(img1.shape)
plt.imshow(img1)

#------------------------------------------------------------
img2 = imgall[0]
printf(type(img2), shape(img2))
img3 = mean(img2, axis=2)[:,:,newaxis].T
print(type(img3), shape(img3))

#------------------------------------------------------------
grayimg = [mean(img,axis=2).T[:,:,newaxis].T for img in imgall]
printf(type(grayimg))
printf(shape(grayimg))

#------------------------------------------------------------
PIC_ROW         = 3
PIC_COL         = 5
plt.figure(figsize=(10,9))
for j in range(PIC_ROW):
    for i in range(PIC_COL):
        id = i+j*PIC_COL
        plt.subplot(PIC_ROW, PIC_COL, id+1)
        img = grayimg[index[id]][0]
        plt.imshow(img, cmap=plt.cm.gray)
        plt.title(str(imglabel[index[id]]), fontsize=20, color='r')

#------------------------------------------------------------
outdir = '/home/aistudio/work/rotatedigit/alldigit30_45'
outfile = 'alldigit_gray'
savez(os.path.join(outdir, outfile), imgdata=grayimg, imglabel=imglabel)

#------------------------------------------------------------
#        END OF FILE : TEST1.PY
#============================================================
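TEST1.PY ends by saving the merged grayscale set to `alldigit_gray.npz`. As a minimal sketch of the 80/20 train/test split described in the introduction, the following uses an illustrative stand-in array (the real file holds 3415 samples of shape (1, 45, 35); the shapes here are hypothetical):

```python
import numpy as np

# Illustrative stand-in for the contents of alldigit_gray.npz:
# grayscale samples of shape (1, 45, 35), as produced by TEST1.PY.
imgdata  = np.zeros((100, 1, 45, 35), dtype=np.float32)
imglabel = np.arange(100) % 10

# Shuffle once, then take the first 80% for training, the rest for testing.
index = np.random.permutation(len(imgdata))
split = int(len(imgdata) * 0.8)

traindata, trainlabel = imgdata[index[:split]], imglabel[index[:split]]
testdata,  testlabel  = imgdata[index[split:]], imglabel[index[split:]]

print(traindata.shape, testdata.shape)
```

Shuffling before splitting matters here because the merged set concatenates the source .mat files in order, so a contiguous split would bias the test set toward the last files.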

■ Related article links:

  • 2021 Artificial Neural Networks course, assignment 4, question 4: rotated digits
  • Principles of electric energy meters
  • Processing image data: stretching, rotation, shearing, etc.

● Related figure links:

  • Fig. 1.1.1 The set of 1000 color digits used for training
  • Fig. 1.1.2 Test data set
  • Fig. 1.1.1 Training results
  • Fig. 1.2.1 The merged digit set
  • Fig. 1.2.2 The data set converted to grayscale
  • Fig. 2.1.1 Several of the tested digits
  • Fig. 2.1.2 Images obtained by the training load function
  • Fig. 2.1.7 Width=35, Height=45, LR=0.02
  • Fig. 2.2.1 Loss, Acc, Test during training: Lr=0.01
  • Fig. 2.2.2 Cases where the model makes errors
  • Fig. 2.2.3 Samples where the recognized value differs from the actual value by more than 1
