Deep Learning with TF 5: The tf.keras High-Level API
Contents
- I. metrics
  - 1. Hands-on
- II. compile & fit & evaluate & predict
  - 1. compile: compiling the model
  - 2. fit: training the model
  - 3. evaluate: evaluating the model
  - 4. predict: making predictions
- III. Custom layers and networks
  - 1. keras.Sequential
  - 2. keras.Model / keras.layers.Layer
  - 3. Custom layers
  - 4. Custom networks
  - 5. Hands-on: handwritten digit recognition
  - 6. Hands-on: CIFAR-10
- IV. Saving and loading models
  - 1. save / load weights
  - 2. save / load entire model
  - 3. save_model
I. metrics

- Create the meters:

```python
acc_meter = metrics.Accuracy()
loss_meter = metrics.Mean()
```

- update_state: feed in new values

```python
loss_meter.update_state(loss)
acc_meter.update_state(y, pred)
```

- result().numpy(): read the aggregated result

```python
print(step, 'loss:', loss_meter.result().numpy())
...
print(step, 'Evaluate Acc:', total_correct/total, acc_meter.result().numpy())
```

- reset_states(): clear the accumulated state

```python
if step % 100 == 0:
    print(step, 'loss:', loss_meter.result().numpy())
    # clear the values accumulated during the previous interval
    loss_meter.reset_states()
if step % 500 == 0:
    print(step, 'Evaluate Acc:', total_correct/total, acc_meter.result().numpy())
    acc_meter.reset_states()
```
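Putting the four steps together, a minimal self-contained sketch of the meter lifecycle (toy values only, no network):

```python
import tensorflow as tf
from tensorflow.keras import metrics

loss_meter = metrics.Mean()
acc_meter = metrics.Accuracy()

# feed values in
loss_meter.update_state(0.3)
loss_meter.update_state(0.1)
acc_meter.update_state([1, 2, 3, 4], [1, 2, 3, 0])  # 3 of 4 predictions correct

# read the aggregated results
print(loss_meter.result().numpy())  # 0.2
print(acc_meter.result().numpy())   # 0.75

# clear the state before the next interval
# (spelled reset_states() in older tf.keras, as in the snippets above)
loss_meter.reset_state()
acc_meter.reset_state()
```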
1. Hands-on

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics

# preprocessing function
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)
    return x, y

batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)
ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

# build a multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

# optimizer
optimizer = optimizers.Adam(learning_rate=0.01)
# meters for the two metrics: accuracy and mean loss
acc_meter = metrics.Accuracy()
loss_meter = metrics.Mean()

for step, (x, y) in enumerate(db):

    with tf.GradientTape() as tape:
        # [b, 28, 28] => [b, 784]
        x = tf.reshape(x, (-1, 28 * 28))
        # [b, 784] => [b, 10]
        out = network(x)
        # [b] => [b, 10]
        y_onehot = tf.one_hot(y, depth=10)
        # [b]
        loss = tf.reduce_mean(tf.losses.categorical_crossentropy(y_onehot, out, from_logits=True))
        loss_meter.update_state(loss)

    grads = tape.gradient(loss, network.trainable_variables)
    optimizer.apply_gradients(zip(grads, network.trainable_variables))

    if step % 100 == 0:
        print(step, 'loss:', loss_meter.result().numpy())
        loss_meter.reset_states()

    # evaluate every 500 steps; note that the inner loop reuses (shadows)
    # the name `step`, which is why each evaluation line in the log below
    # prints 78, the index of the last validation batch
    if step % 500 == 0:
        total, total_correct = 0., 0
        acc_meter.reset_states()

        for step, (x, y) in enumerate(ds_val):
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28 * 28))
            # [b, 784] => [b, 10]
            out = network(x)
            # [b, 10] => [b]
            pred = tf.argmax(out, axis=1)
            pred = tf.cast(pred, dtype=tf.int32)
            # bool tensor
            correct = tf.equal(pred, y)
            # bool tensor => int tensor => numpy
            total_correct += tf.reduce_sum(tf.cast(correct, dtype=tf.int32)).numpy()
            total += x.shape[0]
            acc_meter.update_state(y, pred)

        print(step, 'Evaluate Acc:', total_correct / total, acc_meter.result().numpy())
```
datasets: (60000, 28, 28) (60000,) 0 255
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) multiple 200960
_________________________________________________________________
dense_1 (Dense) multiple 32896
_________________________________________________________________
dense_2 (Dense) multiple 8256
_________________________________________________________________
dense_3 (Dense) multiple 2080
_________________________________________________________________
dense_4 (Dense) multiple 330
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
0 loss: 2.3095727
78 Evaluate Acc: 0.1032 0.1032
100 loss: 0.49836162
200 loss: 0.24281283
300 loss: 0.20814449
400 loss: 0.19040857
500 loss: 0.1471103
78 Evaluate Acc: 0.956 0.956
600 loss: 0.15806517
700 loss: 0.13501912
800 loss: 0.13778095
900 loss: 0.13771541
1000 loss: 0.11204889
78 Evaluate Acc: 0.9666 0.9666
1100 loss: 0.10818114
1200 loss: 0.10698662
1300 loss: 0.10993517
1400 loss: 0.10309881
1500 loss: 0.092004016
78 Evaluate Acc: 0.9658 0.9658
1600 loss: 0.09988546
1700 loss: 0.09517718
1800 loss: 0.102653
1900 loss: 0.10128655
2000 loss: 0.084593534
78 Evaluate Acc: 0.9696 0.9696
2100 loss: 0.089395694
2200 loss: 0.084114745
2300 loss: 0.08294669
2400 loss: 0.0765419
2500 loss: 0.07786285
78 Evaluate Acc: 0.9716 0.9716
2600 loss: 0.08739958
2700 loss: 0.08950595
2800 loss: 0.08106578
2900 loss: 0.06466477
3000 loss: 0.077431396
78 Evaluate Acc: 0.9707 0.9707
3100 loss: 0.08382876
3200 loss: 0.076059125
3300 loss: 0.07230227
3400 loss: 0.05853687
3500 loss: 0.07312769
78 Evaluate Acc: 0.9703 0.9703
3600 loss: 0.07384481
3700 loss: 0.08926408
3800 loss: 0.066682965
3900 loss: 0.05534654
4000 loss: 0.073996484
78 Evaluate Acc: 0.9741 0.9741
4100 loss: 0.066883035
4200 loss: 0.070191
4300 loss: 0.08581101
4400 loss: 0.07324687
4500 loss: 0.056211904
78 Evaluate Acc: 0.9751 0.9751
4600 loss: 0.05384313
II. compile & fit & evaluate & predict

1. compile: compiling the model

compile specifies the loss, the optimizer, and the metrics to use during training.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, optimizers, losses, metrics, layers, Sequential

# preprocessing
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)
    return x, y

batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())
x = x.reshape((-1, 28 * 28))
x_val = x_val.reshape((-1, 28 * 28))

# build and preprocess the datasets
db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)
ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz, drop_remainder=True)

# build a multi-layer network; the output layer needs 10 units,
# one per MNIST class
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))

# compile the model; the labels here are integer class ids, so the
# sparse variant of categorical cross-entropy is the matching loss
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
```
2. fit: training the model

The data pipeline, network, and compile call are identical to the previous section; training is then a single call:

```python
# each pass over db already covers the training set ten times
# because of .repeat(10)
network.fit(db, epochs=100)
```
3. evaluate: evaluating the model

With the same setup, passing validation_data to fit evaluates the model on ds_val during training, and network.evaluate runs a standalone evaluation afterwards:

```python
network.fit(db, epochs=10, validation_data=ds_val)
network.evaluate(ds_val)
```
4. predict: making predictions

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, optimizers, losses, metrics, layers, Sequential
from sklearn.metrics import accuracy_score
import numpy as np

print(tf.__version__)

# preprocessing
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    # y is one-hot encoded below, so no cast is needed here
    return x, y

batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())
x = x.reshape((-1, 28 * 28))
y = tf.one_hot(y, depth=10)
x_val = x_val.reshape((-1, 28 * 28))
y_val = tf.one_hot(y_val, depth=10)

# build and preprocess the datasets
db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)
ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz, drop_remainder=True)

# build a multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))

# compile the model (labels are one-hot, so the dense loss is used)
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

# train; validation_freq=2 evaluates on ds_val every 2 epochs
network.fit(db, epochs=5, validation_data=ds_val, validation_freq=2)
network.summary()

# standalone evaluation
network.evaluate(ds_val)

# prediction
pred = network.predict(x_val)      # [10000, 10] logits
y_true = tf.argmax(y_val, axis=1)
y_pred = tf.argmax(pred, axis=1)
print(y_pred)
print(y_true)
correct = tf.equal(y_true, y_pred)
total_correct = tf.reduce_sum(tf.cast(correct, dtype=np.int32)).numpy()
print(total_correct / x_val.shape[0])
```
Epoch 1/5
4690/4690 [==============================] - 16s 3ms/step - loss: 0.1098 - accuracy: 0.9695
Epoch 2/5
4690/4690 [==============================] - 18s 4ms/step - loss: 0.0531 - accuracy: 0.9873 - val_loss: 0.1227 - val_accuracy: 0.9776
Epoch 3/5
4690/4690 [==============================] - 19s 4ms/step - loss: 0.0448 - accuracy: 0.9902
Epoch 4/5
4690/4690 [==============================] - 18s 4ms/step - loss: 0.0376 - accuracy: 0.9923 - val_loss: 0.1778 - val_accuracy: 0.9763
Epoch 5/5
4690/4690 [==============================] - 19s 4ms/step - loss: 0.0368 - accuracy: 0.9921
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 256) 200960
_________________________________________________________________
dense_1 (Dense) (None, 128) 32896
_________________________________________________________________
dense_2 (Dense) (None, 64) 8256
_________________________________________________________________
dense_3 (Dense) (None, 32) 2080
_________________________________________________________________
dense_4 (Dense) (None, 10) 330
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
78/78 [==============================] - 0s 3ms/step - loss: 0.1899 - accuracy: 0.9758
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
0.9737
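The script above imports sklearn's accuracy_score without using it; it computes the same quantity as the manual equal/cast/reduce_sum chain. A small sketch with made-up label arrays (not the actual MNIST outputs):

```python
import numpy as np
from sklearn.metrics import accuracy_score

# toy class-id arrays standing in for y_true / y_pred from tf.argmax
y_true = np.array([7, 2, 1, 4, 5, 6])
y_pred = np.array([7, 2, 1, 4, 0, 6])

# fraction of positions where the labels match: 5/6 here
print(accuracy_score(y_true, y_pred))
```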
III. Custom Layers and Networks

1. keras.Sequential

```python
# build a multi-layer network
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
# create the network's parameters
network.build(input_shape=(None, 28 * 28))
```
2. keras.Model / keras.layers.Layer

- Subclass keras.layers.Layer to implement a custom layer; the layer's own logic goes in call():
  - __init__
  - call()
- Subclass keras.Model to implement a custom network, typically composed of custom Layer instances:
  - __init__
  - call()
- A keras.Model additionally provides compile / fit / evaluate / predict.
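Before the full examples below, the pattern can be sketched end to end with toy dimensions (TinyDense/TinyModel and all sizes here are made up purely for illustration):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# a Layer subclass owns the variables it creates in __init__ ...
class TinyDense(keras.layers.Layer):
    def __init__(self, inp_dim, outp_dim):
        super().__init__()
        self.kernel = self.add_weight(name='w', shape=[inp_dim, outp_dim])

    def call(self, inputs, training=None):
        return inputs @ self.kernel

# ... a Model subclass wires layers together in call(), and still
# supports compile / fit / evaluate / predict
class TinyModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.fc = TinyDense(4, 2)

    def call(self, inputs, training=None):
        return self.fc(inputs)

model = TinyModel()
model.compile(optimizer='adam',
              loss=tf.losses.CategoricalCrossentropy(from_logits=True))
x = np.random.rand(8, 4).astype('float32')
y = np.eye(2)[np.random.randint(0, 2, 8)].astype('float32')
model.fit(x, y, epochs=1, verbose=0)
print(model(x).shape)  # (8, 2)
```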
3. Custom layers

```python
# custom Dense layer
class MyDense(layers.Layer):
    def __init__(self, inp_dim, outp_dim):
        # call the parent-class initializer
        super(MyDense, self).__init__()
        # add_weight (called add_variable in early TF 2.x) both creates the
        # variables and registers them with the layer, so that when layers
        # are composed into a container, the container collects and manages
        # all parameters automatically; no manual bookkeeping is needed.
        # The method is implemented in the parent class, so it can be
        # called directly.
        self.kernel = self.add_weight(name='w', shape=[inp_dim, outp_dim])
        self.bias = self.add_weight(name='b', shape=[outp_dim])

    def call(self, inputs, training=None):
        out = inputs @ self.kernel + self.bias
        return out
```
4. Custom networks

Using the MyDense layer defined above, build a five-layer network:

```python
class MyModel(keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = MyDense(28 * 28, 256)
        self.fc2 = MyDense(256, 128)
        self.fc3 = MyDense(128, 64)
        self.fc4 = MyDense(64, 32)
        self.fc5 = MyDense(32, 10)

    # define the forward pass
    def call(self, inputs, training=None):
        x = self.fc1(inputs)
        x = tf.nn.relu(x)
        x = self.fc2(x)
        x = tf.nn.relu(x)
        x = self.fc3(x)
        x = tf.nn.relu(x)
        x = self.fc4(x)
        x = tf.nn.relu(x)
        x = self.fc5(x)
        return x
```
5. Hands-on: handwritten digit recognition

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics

# preprocessing
def preprocess(x, y):
    """x is a single image, not a batch"""
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = tf.reshape(x, [28 * 28])
    y = tf.cast(y, dtype=tf.int32)
    y = tf.one_hot(y, depth=10)
    return x, y

batchsz = 128

# load the dataset
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz)
ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

sample = next(iter(db))
print(sample[0].shape, sample[1].shape)

# for comparison: the same architecture built with Sequential
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

# custom layer
class MyDense(layers.Layer):
    def __init__(self, inp_dim, outp_dim):
        super(MyDense, self).__init__()
        self.kernel = self.add_weight(name='w', shape=[inp_dim, outp_dim])
        self.bias = self.add_weight(name='b', shape=[outp_dim])

    def call(self, inputs, training=None):
        out = inputs @ self.kernel + self.bias
        return out

# custom network
class MyModel(keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = MyDense(28 * 28, 256)
        self.fc2 = MyDense(256, 128)
        self.fc3 = MyDense(128, 64)
        self.fc4 = MyDense(64, 32)
        self.fc5 = MyDense(32, 10)

    def call(self, inputs, training=None):
        x = self.fc1(inputs)
        x = tf.nn.relu(x)
        x = self.fc2(x)
        x = tf.nn.relu(x)
        x = self.fc3(x)
        x = tf.nn.relu(x)
        x = self.fc4(x)
        x = tf.nn.relu(x)
        x = self.fc5(x)
        return x

network = MyModel()
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
network.fit(db, epochs=5, validation_data=ds_val, validation_freq=2)
network.evaluate(ds_val)

sample = next(iter(ds_val))
x = sample[0]
y = sample[1]  # one-hot
pred = network.predict(x)  # [b, 10]
# convert back to class ids
y = tf.argmax(y, axis=1)
pred = tf.argmax(pred, axis=1)
print(pred)
print(y)
```
6. Hands-on: CIFAR-10

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics

# preprocessing: scale pixel values to [-1, 1]
def preprocess(x, y):
    x = 2 * tf.cast(x, dtype=tf.float32) / 255. - 1
    y = tf.cast(y, dtype=tf.int32)
    return x, y

batchsz = 128

# load the dataset: x [b, 32, 32, 3], y [b, 1]
(x, y), (x_val, y_val) = datasets.cifar10.load_data()
# squeeze away the trailing dimension of [b, 1]
y = tf.squeeze(y)
y_val = tf.squeeze(y_val)
y = tf.one_hot(y, depth=10)
y_val = tf.one_hot(y_val, depth=10)
print('datasets:', x.shape, y.shape, x.min(), x.max())
# datasets: (50000, 32, 32, 3) (50000, 10) 0 255

# build the two datasets
train_db = tf.data.Dataset.from_tensor_slices((x, y))
train_db = train_db.map(preprocess).shuffle(10000).batch(batchsz)
test_db = tf.data.Dataset.from_tensor_slices((x_val, y_val))
test_db = test_db.map(preprocess).batch(batchsz)

sample = next(iter(train_db))
print('batch:', sample[0].shape, sample[1].shape)

# custom layer, replacing the standard layers.Dense
class MyDense(layers.Layer):
    def __init__(self, inp_dim, outp_dim):
        super(MyDense, self).__init__()
        self.kernel = self.add_weight(name='w', shape=[inp_dim, outp_dim])
        # self.bias = self.add_weight(name='b', shape=[outp_dim])

    # forward pass
    def call(self, inputs, training=None):
        x = inputs @ self.kernel
        return x

# custom five-layer network
class MyNetwork(keras.Model):
    def __init__(self):
        super(MyNetwork, self).__init__()
        # larger layers increase capacity but also the risk of overfitting
        self.fc1 = MyDense(32 * 32 * 3, 256)
        self.fc2 = MyDense(256, 128)
        self.fc3 = MyDense(128, 64)
        self.fc4 = MyDense(64, 32)
        self.fc5 = MyDense(32, 10)

    def call(self, inputs, training=None):
        """
        :param inputs: [b, 32, 32, 3]
        """
        # flatten
        x = tf.reshape(inputs, [-1, 32 * 32 * 3])
        x = self.fc1(x)
        x = tf.nn.relu(x)
        x = self.fc2(x)
        x = tf.nn.relu(x)
        x = self.fc3(x)
        x = tf.nn.relu(x)
        x = self.fc4(x)
        x = tf.nn.relu(x)
        # [b, 32] -> [b, 10]
        x = self.fc5(x)
        return x

network = MyNetwork()
network.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
network.fit(train_db, epochs=15, validation_data=test_db, validation_freq=1)

# save the model's weights
network.evaluate(test_db)
network.save_weights('ckpt/weights.ckpt')
del network
print('saved to ckpt/weights.ckpt')

# rebuild the network and load the weights back
network = MyNetwork()
network.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
network.load_weights('ckpt/weights.ckpt')
print('load weights from file')
network.evaluate(test_db)
```
Epoch 14/15
391/391 [==============================] - 4s 10ms/step - loss: 0.5697 - accuracy: 0.7956 - val_loss: 1.9200 - val_accuracy: 0.5195
Epoch 15/15
391/391 [==============================] - 4s 10ms/step - loss: 0.5200 - accuracy: 0.8126 - val_loss: 2.0124 - val_accuracy: 0.5189
79/79 [==============================] - 0s 6ms/step - loss: 2.0124 - accuracy: 0.5189
saved to ckpt/weights.ckpt
load weights from file
79/79 [==============================] - 1s 7ms/step - loss: 2.0124 - accuracy: 0.5189
IV. Saving and Loading Models

There are three ways to save and load a model:

- save / load weights: the cleanest and most lightweight option; only the network parameters are saved, so the source code that builds the model must be available.
- save / load entire model: the simplest, most brute-force option; it saves the model's entire state, from which the model can be restored directly.
- save_model: a saving format analogous to ONNX in the PyTorch ecosystem, suited to production deployment; the exported model can be parsed by C++ without the Python code.
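The third mode can be sketched with a plain tf.Module (the Adder module and the temporary path below are made up for illustration, not from this post): tf.saved_model.save exports the traced graph plus weights, and the result can be loaded again, from Python here, but equally from C++ or TF Serving, without the defining source code.

```python
import os
import tempfile
import tensorflow as tf

class Adder(tf.Module):
    # tracing with a fixed input signature is what makes the function
    # exportable as a language-independent graph
    @tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
    def __call__(self, x):
        return x + 1.0

path = os.path.join(tempfile.mkdtemp(), 'saved_model')
tf.saved_model.save(Adder(), path)      # export
restored = tf.saved_model.load(path)    # load without the Adder class
print(float(restored(tf.constant(2.0))))  # 3.0
```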
1.save / load weights
```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics

def preprocess(x, y):
    """x is a single image, not a batch"""
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = tf.reshape(x, [28 * 28])
    y = tf.cast(y, dtype=tf.int32)
    y = tf.one_hot(y, depth=10)
    return x, y

batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz)
ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

sample = next(iter(db))
print(sample[0].shape, sample[1].shape)

network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
network.fit(db, epochs=3, validation_data=ds_val, validation_freq=2)
network.evaluate(ds_val)

# save only the weights
network.save_weights('weights.ckpt')
print('saved weights.')
del network

# rebuild an identical network, then load the weights back into it
network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
network.load_weights('weights.ckpt')
print('loaded weights!')
network.evaluate(ds_val)
```
Epoch 2/3
469/469 [==============================] - 3s 7ms/step - loss: 0.1344 - accuracy: 0.9629 - val_loss: 0.1209 - val_accuracy: 0.9648
Epoch 3/31/469 [..............................] - ETA: 14:16 - loss: 0.1254 - accuracy: 0.960920/469 [>.............................] - ETA: 42s - loss: 0.1014 - accuracy: 0.9695 39/469 [=>............................] - ETA: 21s - loss: 0.1063 - accuracy: 0.970060/469 [==>...........................] - ETA: 13s - loss: 0.1006 - accuracy: 0.970382/469 [====>.........................] - ETA: 9s - loss: 0.1041 - accuracy: 0.9690
105/469 [=====>........................] - ETA: 7s - loss: 0.1089 - accuracy: 0.9676
128/469 [=======>......................] - ETA: 5s - loss: 0.1072 - accuracy: 0.9684
151/469 [========>.....................] - ETA: 4s - loss: 0.1056 - accuracy: 0.9692
171/469 [=========>....................] - ETA: 3s - loss: 0.1089 - accuracy: 0.9688
189/469 [===========>..................] - ETA: 3s - loss: 0.1094 - accuracy: 0.9688
208/469 [============>.................] - ETA: 2s - loss: 0.1122 - accuracy: 0.9681
228/469 [=============>................] - ETA: 2s - loss: 0.1099 - accuracy: 0.9687
250/469 [==============>...............] - ETA: 2s - loss: 0.1093 - accuracy: 0.9691
270/469 [================>.............] - ETA: 1s - loss: 0.1088 - accuracy: 0.9692
291/469 [=================>............] - ETA: 1s - loss: 0.1081 - accuracy: 0.9696
312/469 [==================>...........] - ETA: 1s - loss: 0.1079 - accuracy: 0.9700
334/469 [====================>.........] - ETA: 1s - loss: 0.1082 - accuracy: 0.9700
356/469 [=====================>........] - ETA: 0s - loss: 0.1086 - accuracy: 0.9699
378/469 [=======================>......] - ETA: 0s - loss: 0.1083 - accuracy: 0.9699
401/469 [========================>.....] - ETA: 0s - loss: 0.1071 - accuracy: 0.9700
422/469 [=========================>....] - ETA: 0s - loss: 0.1081 - accuracy: 0.9698
441/469 [===========================>..] - ETA: 0s - loss: 0.1089 - accuracy: 0.9697
459/469 [============================>.] - ETA: 0s - loss: 0.1083 - accuracy: 0.9700
469/469 [==============================] - 3s 6ms/step - loss: 0.1082 - accuracy: 0.97011/79 [..............................] - ETA: 0s - loss: 0.0620 - accuracy: 0.9844
11/79 [===>..........................] - ETA: 0s - loss: 0.1625 - accuracy: 0.9616
21/79 [======>.......................] - ETA: 0s - loss: 0.1902 - accuracy: 0.9576
32/79 [===========>..................] - ETA: 0s - loss: 0.1910 - accuracy: 0.9570
41/79 [==============>...............] - ETA: 0s - loss: 0.1845 - accuracy: 0.9573
50/79 [=================>............] - ETA: 0s - loss: 0.1695 - accuracy: 0.9605
60/79 [=====================>........] - ETA: 0s - loss: 0.1499 - accuracy: 0.9645
70/79 [=========================>....] - ETA: 0s - loss: 0.1389 - accuracy: 0.9667
79/79 [==============================] - 0s 5ms/step - loss: 0.1372 - accuracy: 0.9664
saved weights.
loaded weights!1/79 [..............................] - ETA: 6s - loss: 0.0620 - accuracy: 0.9844
11/79 [===>..........................] - ETA: 0s - loss: 0.1625 - accuracy: 0.9616
21/79 [======>.......................] - ETA: 0s - loss: 0.1902 - accuracy: 0.9576
30/79 [==========>...................] - ETA: 0s - loss: 0.1884 - accuracy: 0.9581
39/79 [=============>................] - ETA: 0s - loss: 0.1914 - accuracy: 0.9559
49/79 [=================>............] - ETA: 0s - loss: 0.1724 - accuracy: 0.9600
59/79 [=====================>........] - ETA: 0s - loss: 0.1522 - accuracy: 0.9640
69/79 [=========================>....] - ETA: 0s - loss: 0.1404 - accuracy: 0.9665
78/79 [============================>.] - ETA: 0s - loss: 0.1388 - accuracy: 0.9663
79/79 [==============================] - 1s 6ms/step - loss: 0.1372 - accuracy: 0.9664
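The same save_weights/load_weights round trip can be checked in miniature without MNIST. A minimal sketch, assuming TensorFlow 2.x is installed (the file name tiny.weights.h5 and the toy layer sizes are illustrative, not from the original post):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential, layers

# build and initialize a tiny network
net = Sequential([layers.Dense(4, activation='relu'), layers.Dense(2)])
net.build(input_shape=(None, 8))
# save only the parameters; the architecture is NOT stored
net.save_weights('tiny.weights.h5')

# a fresh network with the SAME architecture can restore them
net2 = Sequential([layers.Dense(4, activation='relu'), layers.Dense(2)])
net2.build(input_shape=(None, 8))
net2.load_weights('tiny.weights.h5')

x = tf.ones([1, 8])
# both networks now compute identical outputs
print(np.allclose(net(x).numpy(), net2(x).numpy()))
```

This is why the full MNIST example rebuilds the five Dense layers before calling load_weights: the weights file stores values, not structure.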
2.save / load entire model
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics

# data preprocessing
def preprocess(x, y):
    """x is a single image, not a batch"""
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = tf.reshape(x, [28 * 28])
    y = tf.cast(y, dtype=tf.int32)
    y = tf.one_hot(y, depth=10)
    return x, y

batchsz = 128
# load the dataset
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

sample = next(iter(db))
print(sample[0].shape, sample[1].shape)

network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

network.compile(optimizer=optimizers.Adam(lr=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

network.fit(db, epochs=3, validation_data=ds_val, validation_freq=2)
network.evaluate(ds_val)

# save the entire model (architecture + weights)
network.save('model.h5')
print('saved total model.')
del network

# load the entire model; no need to rebuild the network first
print('loaded model from file.')
network = tf.keras.models.load_model('model.h5', compile=False)
# with compile=False the model must be re-compiled before evaluate()
network.compile(optimizer=optimizers.Adam(lr=0.01),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])

x_val = tf.cast(x_val, dtype=tf.float32) / 255.
x_val = tf.reshape(x_val, [-1, 28 * 28])
y_val = tf.cast(y_val, dtype=tf.int32)
y_val = tf.one_hot(y_val, depth=10)
ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(128)
network.evaluate(ds_val)
datasets: (60000, 28, 28) (60000,) 0 255
(128, 784) (128, 10)
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 256) 200960
_________________________________________________________________
dense_1 (Dense) (None, 128) 32896
_________________________________________________________________
dense_2 (Dense) (None, 64) 8256
_________________________________________________________________
dense_3 (Dense) (None, 32) 2080
_________________________________________________________________
dense_4 (Dense) (None, 10) 330
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
Epoch 1/3
469/469 [==============================] - 1s 2ms/step - loss: 0.2723 - accuracy: 0.9182
Epoch 2/3
469/469 [==============================] - 1s 3ms/step - loss: 0.1363 - accuracy: 0.9628 - val_loss: 0.1280 - val_accuracy: 0.9637
Epoch 3/3
469/469 [==============================] - 1s 2ms/step - loss: 0.1101 - accuracy: 0.9692
79/79 [==============================] - 0s 3ms/step - loss: 0.1372 - accuracy: 0.9673
saved total model.
loaded model from file.
79/79 [==============================] - 0s 1ms/step - loss: 0.1372 - accuracy: 0.9673
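Unlike save_weights, network.save stores the architecture and the weights together, so the network does not have to be rebuilt by hand. A minimal sketch of this round trip, assuming TensorFlow 2.x (the file name tiny_model.h5 and the toy layer sizes are illustrative, not from the original post):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential, layers

model = Sequential([layers.Dense(4, activation='relu'), layers.Dense(2)])
model.build(input_shape=(None, 8))
# one HDF5 file holds the architecture AND the weights
model.save('tiny_model.h5')

# no rebuild needed: load_model reconstructs the network itself
restored = tf.keras.models.load_model('tiny_model.h5', compile=False)

x = tf.ones([1, 8])
# the restored model computes the same outputs as the original
print(np.allclose(model(x).numpy(), restored(x).numpy()))
```

With compile=False only the network is restored; call compile again if you want evaluate or fit, as in the full example above.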
3.save_model
tf.saved_model.save(m, '/tmp/saved_model/')

imported = tf.saved_model.load('/tmp/saved_model/')
f = imported.signatures['serving_default']
print(f(x=tf.ones([1, 28, 28, 3])))
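tf.saved_model works for any trackable object, not only Keras models. A minimal self-contained sketch using a toy tf.Module (the Scaler class, its input signature, and the temporary directory are illustrative, not from the original post; assumes TensorFlow 2.x):

```python
import tempfile
import numpy as np
import tensorflow as tf

class Scaler(tf.Module):
    """Toy model: multiplies its input by a trainable scalar."""
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return self.w * x

m = Scaler()
path = tempfile.mkdtemp()
# export with an explicit serving signature
tf.saved_model.save(m, path, signatures=m.__call__.get_concrete_function())

imported = tf.saved_model.load(path)
f = imported.signatures['serving_default']
# a signature returns a dict of output tensors keyed by output name
out = f(x=tf.constant([1.0, 3.0]))
print(list(out.values())[0].numpy())
```

The SavedModel directory is language-neutral, which is what makes this format suitable for serving (e.g. TensorFlow Serving) rather than for resuming Python training.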
If this article helped you, please like and follow — it really matters to me! If you'd like to follow each other, leave a comment!