Preface

The previous chapter looked at the basic building blocks of neural networks: layers, activation functions, loss functions, optimizers, and the like. This post tackles the problems left over from that chapter: building a handwritten-digit recognition network with a CNN, saving the model parameters, and recognizing a single image.

As per international convention, the reference blogs:

Saving and loading models in TensorFlow

[TensorFlow] Saving a model, reloading it, and other operations

TensorFlow: testing a single image with a binary classification model

TensorFlow-Examples

Training Implementation

First, the simulated dataset. Link: https://pan.baidu.com/s/1ugEy85182vjcXQ8VoMJAbg, password: 1o83

It simply takes the handwritten-digit dataset, renders each sample as a PNG image, and records each image's path and label in a txt file; see the previous post for details. A sketch of the label-file format follows. Without further ado, let's get going.
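Judging from the parsing code in the next section (d.split(' ')[0] is the path, d.split(' ')[1] the label), each line of train_labels.txt presumably looks like the following; the exact paths here are hypothetical:

./mnist/train/5/5_0.png 5
./mnist/train/0/0_1.png 0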

Dataset Processing

import tensorflow as tf

IMG_HEIGHT = 28  # height
IMG_WIDTH = 28   # width
CHANNELS = 3     # number of channels

def read_images(dataset_path, batch_size):
    imagepaths, labels = list(), list()
    data = open(dataset_path, 'r').read().splitlines()
    for d in data:
        imagepaths.append(d.split(' ')[0])
        labels.append(int(d.split(' ')[1]))
    # convert to tensors
    imagepaths = tf.convert_to_tensor(imagepaths, dtype=tf.string)
    labels = tf.convert_to_tensor(labels, dtype=tf.int32)
    # build a TF queue and shuffle the data
    image, label = tf.train.slice_input_producer([imagepaths, labels], shuffle=True)
    # read the image file
    image = tf.read_file(image)
    image = tf.image.decode_jpeg(image, channels=CHANNELS)
    # resize the image to the required size
    image = tf.image.resize_images(image, [IMG_HEIGHT, IMG_WIDTH])
    # normalize manually to [-1, 1]
    image = image * 1.0 / 127.5 - 1.0
    # create batches
    inputX, inputY = tf.train.batch([image, label], batch_size=batch_size,
                                    capacity=batch_size * 8, num_threads=4)
    return inputX, inputY

Building the Network

# training parameters
learning_rate = 0.001
num_steps = 1000
batch_size = 128
display_step = 10
# network parameters
num_classes = 10
dropout = 0.75

# convolution op
def conv2d(x, W, b, strides=1):
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

# max-pooling op
def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')
# network definition
def conv_net(x, weights, biases, dropout):
    # input
    x = tf.reshape(x, shape=[-1, IMG_HEIGHT, IMG_WIDTH, CHANNELS])
    # first convolutional layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    conv1 = maxpool2d(conv1, k=2)
    # second convolutional layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    conv2 = maxpool2d(conv2, k=2)
    # fully connected layer
    fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    fc1 = tf.nn.dropout(fc1, dropout)
    # output
    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out

# initialize the weights, which makes them easy to save later
# note: two 2x poolings shrink 28x28 down to 7x7, hence the 7*7*64 input size of 'wd1'
weights = {
    'wc1': tf.Variable(tf.random_normal([5, 5, CHANNELS, 32])),
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
    'out': tf.Variable(tf.random_normal([1024, num_classes]))
}
# initialize the biases, which makes them easy to save later
biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([num_classes]))
}

Defining the Network Inputs and Outputs

Note: if you want to leave an interface for feeding in single test images later, you must expose named inputs, like this:

# graph inputs
X = tf.placeholder(tf.float32, [None, IMG_HEIGHT, IMG_WIDTH, CHANNELS], name='X')
Y = tf.placeholder(tf.float32, [None, num_classes], name='Y')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')

Be sure to set the name argument: later we retrieve each placeholder by exactly this name, and retrieving by name is the most common approach. For example:
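This is how the test script at the end of this post recovers the input placeholder from the restored graph:

X = tf.get_default_graph().get_tensor_by_name('X:0')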

Defining the Loss, Optimizer, and Training/Evaluation Ops

# build the model
logits = conv_net(X, weights, biases, keep_prob)
prediction = tf.nn.softmax(logits, name='prediction')
# loss function
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# evaluation ops
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

Training

Remember to one-hot encode the labels with tf.one_hot(); a small example follows.
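A minimal sketch of what tf.one_hot does, with made-up labels and the same on_value/off_value arguments as in the training loop below:

labels = tf.constant([3, 0])
one_hot = tf.one_hot(labels, depth=10, on_value=1, off_value=0)
# evaluates to [[0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
#               [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]]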

# saver to store the model
saver = tf.train.Saver()
# tf.add_to_collection('predict', prediction)
# training
init = tf.global_variables_initializer()  # variable initializer
print('Reading dataset:')
input_img, input_label = read_images('./mnist/train_labels.txt', batch_size=batch_size)
print('Training model')
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    sess.run(init)  # initialize the parameters
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # build the one-hot op once, outside the loop, so new ops are not added to the graph at every step
    one_hot_label = tf.one_hot(input_label, num_classes, 1, 0)
    for step in range(1, num_steps + 1):
        batch_x, batch_y = sess.run([input_img, one_hot_label])
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: 0.8})
        if step % display_step == 0 or step == 1:
            # calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy],
                                 feed_dict={X: batch_x, Y: batch_y, keep_prob: 1.0})
            print("Step " + str(step) + ", Minibatch Loss= " +
                  "{:.4f}".format(loss) + ", Training Accuracy= " +
                  "{:.3f}".format(acc))
    coord.request_stop()
    coord.join(threads)
    print("Optimization Finished!")
    saver.save(sess, './cnn_mnist_model/CNN_Mnist')

Output:

Reading dataset:
Training model
2018-08-03 12:38:05.596648: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-08-03 12:38:05.882851: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1392] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.759
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 4.96GiB
2018-08-03 12:38:05.889153: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1471] Adding visible gpu devices: 0
2018-08-03 12:38:06.600411: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-03 12:38:06.604687: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958]      0
2018-08-03 12:38:06.606494: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0:   N
2018-08-03 12:38:06.608588: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4726 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
Step 1, Minibatch Loss= 206875.5000, Training Accuracy= 0.219
Step 10, Minibatch Loss= 71490.0000, Training Accuracy= 0.164
Step 20, Minibatch Loss= 27775.7266, Training Accuracy= 0.398
Step 30, Minibatch Loss= 15692.2725, Training Accuracy= 0.641
Step 40, Minibatch Loss= 18211.4141, Training Accuracy= 0.625
Step 50, Minibatch Loss= 7250.1758, Training Accuracy= 0.789
Step 60, Minibatch Loss= 10694.9902, Training Accuracy= 0.750
Step 70, Minibatch Loss= 10783.8535, Training Accuracy= 0.766
Step 80, Minibatch Loss= 6080.1138, Training Accuracy= 0.844
Step 90, Minibatch Loss= 6720.9380, Training Accuracy= 0.867
Step 100, Minibatch Loss= 3673.7524, Training Accuracy= 0.922
Step 110, Minibatch Loss= 7893.8228, Training Accuracy= 0.836
Step 120, Minibatch Loss= 6805.0176, Training Accuracy= 0.852
Step 130, Minibatch Loss= 2863.3728, Training Accuracy= 0.906
Step 140, Minibatch Loss= 3335.6992, Training Accuracy= 0.883
Step 150, Minibatch Loss= 3514.4031, Training Accuracy= 0.914
Step 160, Minibatch Loss= 1842.5328, Training Accuracy= 0.945
Step 170, Minibatch Loss= 3443.9966, Training Accuracy= 0.914
Step 180, Minibatch Loss= 1961.7180, Training Accuracy= 0.945
Step 190, Minibatch Loss= 2919.5215, Training Accuracy= 0.898
Step 200, Minibatch Loss= 4270.7686, Training Accuracy= 0.891
Step 210, Minibatch Loss= 3591.2534, Training Accuracy= 0.922
Step 220, Minibatch Loss= 4692.2163, Training Accuracy= 0.867
Step 230, Minibatch Loss= 1537.0554, Training Accuracy= 0.914
Step 240, Minibatch Loss= 3574.1797, Training Accuracy= 0.898
Step 250, Minibatch Loss= 5143.3276, Training Accuracy= 0.898
Step 260, Minibatch Loss= 2142.9756, Training Accuracy= 0.922
Step 270, Minibatch Loss= 1323.6707, Training Accuracy= 0.945
Step 280, Minibatch Loss= 2004.2051, Training Accuracy= 0.961
Step 290, Minibatch Loss= 1112.9484, Training Accuracy= 0.938
Step 300, Minibatch Loss= 1977.6018, Training Accuracy= 0.922
Step 310, Minibatch Loss= 876.0104, Training Accuracy= 0.977
Step 320, Minibatch Loss= 3448.3142, Training Accuracy= 0.953
Step 330, Minibatch Loss= 1173.9749, Training Accuracy= 0.961
Step 340, Minibatch Loss= 2152.9966, Training Accuracy= 0.938
Step 350, Minibatch Loss= 3113.6838, Training Accuracy= 0.938
Step 360, Minibatch Loss= 1779.6680, Training Accuracy= 0.922
Step 370, Minibatch Loss= 2738.2637, Training Accuracy= 0.930
Step 380, Minibatch Loss= 1666.9695, Training Accuracy= 0.922
Step 390, Minibatch Loss= 2076.6716, Training Accuracy= 0.914
Step 400, Minibatch Loss= 3356.1475, Training Accuracy= 0.914
Step 410, Minibatch Loss= 1222.7729, Training Accuracy= 0.953
Step 420, Minibatch Loss= 2422.6355, Training Accuracy= 0.898
Step 430, Minibatch Loss= 4377.9385, Training Accuracy= 0.914
Step 440, Minibatch Loss= 1566.1058, Training Accuracy= 0.969
Step 450, Minibatch Loss= 3540.1555, Training Accuracy= 0.875
Step 460, Minibatch Loss= 1136.4354, Training Accuracy= 0.961
Step 470, Minibatch Loss= 2821.9456, Training Accuracy= 0.938
Step 480, Minibatch Loss= 1804.5267, Training Accuracy= 0.945
Step 490, Minibatch Loss= 625.0988, Training Accuracy= 0.977
Step 500, Minibatch Loss= 2406.8958, Training Accuracy= 0.930
Step 510, Minibatch Loss= 1198.2866, Training Accuracy= 0.961
Step 520, Minibatch Loss= 680.7784, Training Accuracy= 0.953
Step 530, Minibatch Loss= 2329.2104, Training Accuracy= 0.961
Step 540, Minibatch Loss= 848.0190, Training Accuracy= 0.945
Step 550, Minibatch Loss= 1327.9423, Training Accuracy= 0.938
Step 560, Minibatch Loss= 1020.9082, Training Accuracy= 0.961
Step 570, Minibatch Loss= 1885.4563, Training Accuracy= 0.922
Step 580, Minibatch Loss= 820.5620, Training Accuracy= 0.953
Step 590, Minibatch Loss= 1448.5205, Training Accuracy= 0.938
Step 600, Minibatch Loss= 857.7993, Training Accuracy= 0.969
Step 610, Minibatch Loss= 1193.5856, Training Accuracy= 0.930
Step 620, Minibatch Loss= 1337.5518, Training Accuracy= 0.961
Step 630, Minibatch Loss= 2121.9165, Training Accuracy= 0.953
Step 640, Minibatch Loss= 1516.9609, Training Accuracy= 0.938
Step 650, Minibatch Loss= 666.7323, Training Accuracy= 0.977
Step 660, Minibatch Loss= 1004.4291, Training Accuracy= 0.953
Step 670, Minibatch Loss= 193.3173, Training Accuracy= 0.984
Step 680, Minibatch Loss= 1339.3765, Training Accuracy= 0.945
Step 690, Minibatch Loss= 709.9714, Training Accuracy= 0.961
Step 700, Minibatch Loss= 1380.6301, Training Accuracy= 0.953
Step 710, Minibatch Loss= 630.5464, Training Accuracy= 0.977
Step 720, Minibatch Loss= 667.1447, Training Accuracy= 0.953
Step 730, Minibatch Loss= 1253.6014, Training Accuracy= 0.977
Step 740, Minibatch Loss= 473.8666, Training Accuracy= 0.984
Step 750, Minibatch Loss= 809.3101, Training Accuracy= 0.961
Step 760, Minibatch Loss= 508.8592, Training Accuracy= 0.984
Step 770, Minibatch Loss= 308.9244, Training Accuracy= 0.969
Step 780, Minibatch Loss= 1291.0034, Training Accuracy= 0.984
Step 790, Minibatch Loss= 1884.8574, Training Accuracy= 0.938
Step 800, Minibatch Loss= 1481.6635, Training Accuracy= 0.961
Step 810, Minibatch Loss= 463.2684, Training Accuracy= 0.969
Step 820, Minibatch Loss= 1116.5591, Training Accuracy= 0.961
Step 830, Minibatch Loss= 2422.9155, Training Accuracy= 0.953
Step 840, Minibatch Loss= 471.8990, Training Accuracy= 0.984
Step 850, Minibatch Loss= 1480.4053, Training Accuracy= 0.945
Step 860, Minibatch Loss= 1062.6339, Training Accuracy= 0.938
Step 870, Minibatch Loss= 833.3881, Training Accuracy= 0.953
Step 880, Minibatch Loss= 2153.9014, Training Accuracy= 0.953
Step 890, Minibatch Loss= 1617.7456, Training Accuracy= 0.953
Step 900, Minibatch Loss= 347.2119, Training Accuracy= 0.969
Step 910, Minibatch Loss= 175.5020, Training Accuracy= 0.977
Step 920, Minibatch Loss= 680.8482, Training Accuracy= 0.969
Step 930, Minibatch Loss= 240.1681, Training Accuracy= 0.977
Step 940, Minibatch Loss= 882.4927, Training Accuracy= 0.977
Step 950, Minibatch Loss= 407.1322, Training Accuracy= 0.977
Step 960, Minibatch Loss= 300.9460, Training Accuracy= 0.969
Step 970, Minibatch Loss= 1848.9391, Training Accuracy= 0.945
Step 980, Minibatch Loss= 496.5137, Training Accuracy= 0.969
Step 990, Minibatch Loss= 473.6212, Training Accuracy= 0.969
Step 1000, Minibatch Loss= 124.8958, Training Accuracy= 0.992
Optimization Finished!

[Update log] 2019-9-2
tf.train.Saver also accepts some useful extra arguments; see the official documentation for details, and the sketch after this list:

  • max_to_keep: how many of the most recent checkpoints to keep, so checkpoints don't accumulate until the disk fills up; once the limit is reached, saving a new checkpoint deletes the oldest one, keeping the number of stored checkpoints at the set value
  • keep_checkpoint_every_n_hours: additionally keep one checkpoint every N hours
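A minimal sketch of passing these arguments; the values here are just examples:

# keep only the 5 newest checkpoints, plus one long-lived checkpoint every 2 hours
saver = tf.train.Saver(max_to_keep=5, keep_checkpoint_every_n_hours=2)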

Loading the Model and Testing a Single Image

The reason I struggled for so long: when defining the training network's inputs I never set the name argument, so get_tensor_by_name could not retrieve the input interface and test images couldn't be fed into the network. When I finally figured it out, I was so angry I could have squealed.

Reading the Image

Read and process the image directly with OpenCV's imread, making sure the preprocessing matches what was done during training; finally, reshape it to (1, 28, 28, 3). I actually got stuck here for quite a while too; more on that later.

import cv2
import numpy as np

images = []
image = cv2.imread('./mnist/test/5/5_9.png')  # cv2 reads a 3-channel image, matching CHANNELS = 3
images.append(image)
images = np.array(images, dtype=np.uint8)
images = images.astype('float32')
# same normalization as during training: scale to [-1, 1]
images = np.subtract(np.multiply(images, 1.0/127.5), 1.0)
x_batch = images.reshape(1, 28, 28, 3)

Loading the Model

sess = tf.Session()
saver = tf.train.import_meta_graph('./cnn_mnist_model/CNN_Mnist.meta')
saver.restore(sess, './cnn_mnist_model/CNN_Mnist')
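As a side note, if you don't want to hard-code the checkpoint prefix, tf.train.latest_checkpoint can look it up from the directory's checkpoint file (a minimal sketch, assuming everything was saved under ./cnn_mnist_model):

# restore from whatever checkpoint was saved most recently in the directory
ckpt = tf.train.latest_checkpoint('./cnn_mnist_model')
saver.restore(sess, ckpt)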

Prediction

Retrieve the prediction op and the test-image input placeholder:

graph = tf.get_default_graph()
pred = graph.get_tensor_by_name('prediction:0')
# X = graph.get_operation_by_name('X').outputs[0]
X = graph.get_tensor_by_name('X:0')
keep_prob = graph.get_tensor_by_name('keep_prob:0')

Then predict directly:

result = sess.run(pred, feed_dict={X: x_batch, keep_prob: 1.0})
print(result)
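The output is a softmax vector over the 10 classes; to turn it into a digit label, take the argmax:

# index of the largest softmax probability is the predicted digit
print(np.argmax(result, axis=1))  # should print [5] here, given the result below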

Result

2018-08-03 12:46:50.098990: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958]      0
2018-08-03 12:46:50.101351: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0:   N
2018-08-03 12:46:50.104446: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4726 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
[[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]]

I tested several other images as well, and the results were essentially all correct.

Earlier I said reading the image had me stuck for ages. The reason is that I first tried reading the test image the same way as during training, that is, with TensorFlow's tf.image ops:

image = tf.read_file('./mnist/test/5/5_9.png')
image = tf.image.decode_jpeg(image, channels=3)
# resize the image to the required size
image = tf.image.resize_images(image, [28, 28])
# normalize manually
image = image * 1.0 / 127.5 - 1.0
image = tf.reshape(image, shape=[1, 28, 28, 3])

which raised the following error:

TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles. For reference, the tensor object was Tensor("Reshape:0", shape=(1, 28, 28, 3), dtype=float32) which was passed to the feed with key Tensor("X:0", shape=(?, 28, 28, 3), dtype=float32).

The error says you cannot feed a tf.Tensor to the input placeholder; acceptable feed values are Python scalars, strings, lists, numpy ndarrays, and so on. Infuriating. So the value has to be pulled out of the tf.Tensor first; if you've read my earlier posts, you'll know eval() extracts the value, exactly like in Theano:

That is, change the original test line

result = sess.run(pred, feed_dict={X: image, keep_prob: 1.0})

to:

result = sess.run(pred, feed_dict={X: image.eval(), keep_prob: 1.0})

and it works. So my advice: when testing, preprocess images directly as numpy arrays; don't bother converting them into TensorFlow tensor types only to convert them back, as in the sketch below.
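Equivalently, assuming the tf.image preprocessing graph above has already been built, you can evaluate it once with sess.run and feed the resulting ndarray:

# sess.run returns a numpy ndarray, which is a valid feed value
image_np = sess.run(image)
result = sess.run(pred, feed_dict={X: image_np, keep_prob: 1.0})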

Postscript

I really envy how concise model building, saving, and loading are in TensorLayer and tflearn; I'd love to climb out of this pit.

Code for this post:

Training: link: https://pan.baidu.com/s/1zZYGgnGj3kttklzZnyJLQA, password: zepl

Testing: link: https://pan.baidu.com/s/1BygjSatjxtuVIq_HHt9o7A, password: ky87
