[AI Project] Fine-Grained Recognition of 10 Monkey Species with Deep Learning

Task Description

In this competition, entrants must accurately identify 10 species of monkeys. The dataset contains only images; there are no bounding boxes or other annotations.

Environment

!nvidia-smi
Fri Mar 27 11:01:18 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00    Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 00000000:00:04.0 Off |                    0 |
| N/A   39C    P0    27W / 250W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Download the Dataset

!wget https://static.leiphone.com/48monkey.zip
--2020-03-27 11:01:28--  https://static.leiphone.com/48monkey.zip
Resolving static.leiphone.com (static.leiphone.com)... 47.246.19.234, 47.246.19.229, 47.246.19.231, ...
Connecting to static.leiphone.com (static.leiphone.com)|47.246.19.234|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 573224419 (547M) [application/zip]
Saving to: ‘48monkey.zip’

48monkey.zip        100%[===================>] 546.67M  31.7MB/s    in 16s

2020-03-27 11:01:51 (33.9 MB/s) - ‘48monkey.zip’ saved [573224419/573224419]
!unzip 48monkey.zip
import os

train_set_dir = "train/"
test_set_dir = "test/"
print(len(os.listdir(train_set_dir)))
print(len(os.listdir(test_set_dir)))
1096
274

1. Explore the Data

import os

bird_dir = "./"  # dataset root (the author kept this name from an earlier bird project)
x_train_path = os.path.join(bird_dir, "train")
x_test_path = os.path.join(bird_dir, "test")
y_train_path = os.path.join(bird_dir, "train.csv")

import pandas as pd

y_train_df = pd.read_csv(y_train_path)
y_train_df.head()
  filename  label
0    0.jpg      9
1    1.jpg      3
2    2.jpg      0
3    3.jpg      1
4    4.jpg      5
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

sns.countplot(y_train_df["label"])
plt.xlabel("Label")
plt.title("Monkey")
Text(0.5, 1.0, 'Monkey')

x_train_img_path = y_train_df["filename"]
y_train = y_train_df["label"]

print(x_train_img_path[:5])
print(y_train[:5])
0    0.jpg
1    1.jpg
2    2.jpg
3    3.jpg
4    4.jpg
Name: filename, dtype: object
0    9
1    3
2    0
3    1
4    5
Name: label, dtype: int64

2. Load the Data

# Define an image-loading helper
import cv2
import numpy as np

def get_img(file_path, img_rows, img_cols):
    img = cv2.imread(file_path)                 # OpenCV loads images as BGR
    img = cv2.resize(img, (img_rows, img_cols))
    if img.shape[2] == 1:                       # grayscale: stack to 3 channels
        img = np.dstack([img, img, img])
    else:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = img.astype(np.float32)
    return img
# Load the training set
x_train = []
for img_name in x_train_img_path:
    img = get_img(os.path.join(x_train_path, img_name), 296, 296)
    x_train.append(img)
x_train = np.array(x_train, np.float32)
# Load the test (prediction) set
import re

x_test_img_path = os.listdir(x_test_path)
# Natural sort by the leading number, so "10.jpg" follows "9.jpg" rather than "1.jpg"
x_test_img_path = sorted(x_test_img_path, key=lambda i: int(re.match(r"(\d+)", i).group()))
print(x_test_img_path)

x_test = []
for img_name in x_test_img_path:
    img = get_img(os.path.join(x_test_path, img_name), 296, 296)
    x_test.append(img)
x_test = np.array(x_test, np.float32)
['0.jpg', '1.jpg', '2.jpg', '3.jpg', '4.jpg', ..., '271.jpg', '272.jpg', '273.jpg']
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
(1096, 296, 296, 3)
(1096,)
(274, 296, 296, 3)

3. Inspect the Data

import matplotlib.pyplot as plt
%matplotlib inline

plt.imshow(x_train[0] / 255)
print(y_train[0])
9

X_train = x_train
Y_train = y_train

print(X_train.shape)
print(Y_train.shape)
print(x_test.shape)
(1096, 296, 296, 3)
(1096,)
(274, 296, 296, 3)
classes = np.unique(y_train)  # renamed from `sum`, which shadows the Python built-in
n_classes = len(classes)

# Histogram of the class distribution in the training set
def plot_y_train_hist():
    fig = plt.figure(figsize=(15, 5))
    ax = fig.add_subplot(1, 1, 1)
    hist = ax.hist(Y_train, bins=n_classes)
    ax.set_title("frequency of each monkey class")
    ax.set_xlabel("monkey")
    ax.set_ylabel("frequency")
    plt.show()
    return hist

hist = plot_y_train_hist()

# One-hot encode the labels
from keras.utils import np_utils

# Y_train = np_utils.to_categorical(Y_train, n_classes)
y_train = np_utils.to_categorical(y_train, n_classes)
print("Shape after one-hot encoding:", y_train.shape)
Y_train = y_train
Using TensorFlow backend.

The default version of TensorFlow in Colab will switch to TensorFlow 2.x on the 27th of March, 2020.
We recommend you upgrade now or ensure your notebook will continue to use TensorFlow 1.x via the %tensorflow_version 1.x magic.

Shape after one-hot encoding: (1096, 10)
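Given that notice, a re-run of this notebook after the Colab default switched to TF 2.x can pin the 1.x runtime first, so the Keras code below keeps working (a Colab-only magic; harmless to skip on a local TF 1.x install):

%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)  # should report a 1.15.x build on Colab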

# Train/validation split
from sklearn.model_selection import train_test_split

x_train, x_valid, y_train, y_valid = train_test_split(X_train, Y_train, test_size=0.2, random_state=2019)

print(x_train.shape)
print(y_train.shape)
print(x_valid.shape)
print(y_valid.shape)
print(x_test.shape)
(876, 296, 296, 3)
(876, 10)
(220, 296, 296, 3)
(220, 10)
(274, 296, 296, 3)

4. Define the Models

# Import the libraries we need
from keras import optimizers, Input
from keras.applications import imagenet_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import *
from keras.applications import *
from sklearn.preprocessing import *
from sklearn.model_selection import *
from sklearn.metrics import *
# Plot the loss and accuracy curves recorded during training
import matplotlib.pyplot as plt
%matplotlib inline

def history_plot(history_fit):
    plt.figure(figsize=(12, 6))
    # summarize history for accuracy
    plt.subplot(121)
    plt.plot(history_fit.history["acc"])
    plt.plot(history_fit.history["val_acc"])
    plt.title("model accuracy")
    plt.ylabel("accuracy")
    plt.xlabel("epoch")
    plt.legend(["train", "valid"], loc="upper left")
    # summarize history for loss
    plt.subplot(122)
    plt.plot(history_fit.history["loss"])
    plt.plot(history_fit.history["val_loss"])
    plt.title("model loss")
    plt.ylabel("loss")
    plt.xlabel("epoch")
    plt.legend(["train", "valid"], loc="upper left")
    plt.show()
# Fine-tune helper
def fine_tune_model(model, optimizer, batch_size, epochs, freeze_num):
    '''
    Fine-tune the given pre-trained model and save the best weights as .hdf5.
    model: the model to tune (VGG16, ResNet50, ...)
    optimizer: optimizer for the fine-tune-all-layers phase (the first phase was originally meant to use adadelta)
    batch_size: batch size; 32/64/128 recommended
    epochs: number of epochs for the fine-tune-all-layers phase
    freeze_num: number of layers to freeze in the first phase
    '''
    # (An ImageDataGenerator augmentation pipeline was tried here and commented
    #  out; a cleaned-up sketch of it follows this function.)

    # Phase 1: train only the randomly initialised fully connected head,
    # with the first freeze_num layers frozen.
    for layer in model.layers[:freeze_num]:
        layer.trainable = False
    model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=10,
              shuffle=True,
              verbose=1,
              validation_data=(x_valid, y_valid))
    print('Finish step_1')

    # Phase 2: unfreeze the layers after freeze_num and fine-tune.
    for layer in model.layers[freeze_num:]:
        layer.trainable = True

    rc = ReduceLROnPlateau(monitor="val_acc", factor=0.2, patience=4, verbose=1, mode='max')
    model_name = model.name + ".hdf5"
    mc = ModelCheckpoint(model_name, monitor="val_loss", verbose=1, save_best_only=True, mode="min")
    el = EarlyStopping(monitor="val_loss", patience=5, verbose=1, restore_best_weights=True, mode="min")
    # Defined but never passed to fit(); rc (monitoring val_acc) is used instead.
    reduce_lr = ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=4, verbose=1, mode="min")

    model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=["accuracy"])
    history_fit = model.fit(x_train, y_train,
                            batch_size=batch_size,
                            epochs=epochs,
                            shuffle=True,
                            verbose=1,
                            validation_data=(x_valid, y_valid),
                            callbacks=[mc, rc, el])
    print('Finish fine-tune')
    return history_fit
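For reference, the augmentation pipeline that was commented out above can be wired in like this (a minimal sketch using only the options from the commented code; the original rescale=1.255 looks like a typo for 1./255 and is omitted here because the models below already apply imagenet_utils.preprocess_input in a Lambda layer; model, batch_size, epochs, mc, rc, el are as in phase 2 above):

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(shear_range=0.2,
                             zoom_range=0.2,
                             horizontal_flip=True,
                             vertical_flip=True,
                             fill_mode="nearest")
datagen.fit(x_train)
history_fit = model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                                  steps_per_epoch=len(x_train) // batch_size,
                                  epochs=epochs,
                                  verbose=1,
                                  validation_data=(x_valid, y_valid),
                                  callbacks=[mc, rc, el])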

5. VGG16 Model

The fine_tune_model helper defined in Section 4 is re-run here unchanged.
# Define a VGG16-based model
def vgg16_model(img_rows, img_cols):
    x = Input(shape=(img_rows, img_cols, 3))
    x = Lambda(imagenet_utils.preprocess_input)(x)
    base_model = VGG16(input_tensor=x, weights="imagenet", include_top=False, pooling='avg')
    x = base_model.output
    x = Dense(1024, activation="relu", name="fc1")(x)
    x = Dropout(0.5)(x)
    predictions = Dense(n_classes, activation="softmax", name="predictions")(x)
    vgg16_model = Model(inputs=base_model.input, outputs=predictions, name="vgg16")
    return vgg16_model
# Build the VGG16 model
img_rows, img_cols = 296, 296
vgg16_model = vgg16_model(img_rows, img_cols)
for i, layer in enumerate(vgg16_model.layers):
    print(i, layer.name)
0 input_2
1 lambda_2
2 block1_conv1
3 block1_conv2
4 block1_pool
5 block2_conv1
6 block2_conv2
7 block2_pool
8 block3_conv1
9 block3_conv2
10 block3_conv3
11 block3_pool
12 block4_conv1
13 block4_conv2
14 block4_conv3
15 block4_pool
16 block5_conv1
17 block5_conv2
18 block5_conv3
19 block5_pool
20 global_average_pooling2d_2
21 fc1
22 dropout_2
23 predictions
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 21

%time vgg16_history = fine_tune_model(vgg16_model, optimizer, batch_size, epochs, freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 6s 7ms/step - loss: 4.1137 - acc: 0.3094 - val_loss: 0.4585 - val_acc: 0.8273
Epoch 2/10
876/876 [==============================] - 5s 6ms/step - loss: 1.1727 - acc: 0.6826 - val_loss: 0.1822 - val_acc: 0.9455
Epoch 3/10
876/876 [==============================] - 5s 6ms/step - loss: 0.5507 - acc: 0.8333 - val_loss: 0.1218 - val_acc: 0.9591
Epoch 4/10
876/876 [==============================] - 5s 6ms/step - loss: 0.3395 - acc: 0.9007 - val_loss: 0.0884 - val_acc: 0.9727
Epoch 5/10
876/876 [==============================] - 5s 6ms/step - loss: 0.2719 - acc: 0.9144 - val_loss: 0.0710 - val_acc: 0.9818
Epoch 6/10
876/876 [==============================] - 5s 6ms/step - loss: 0.1892 - acc: 0.9372 - val_loss: 0.0703 - val_acc: 0.9636
Epoch 7/10
876/876 [==============================] - 5s 6ms/step - loss: 0.2021 - acc: 0.9326 - val_loss: 0.0604 - val_acc: 0.9864
Epoch 8/10
876/876 [==============================] - 5s 6ms/step - loss: 0.1327 - acc: 0.9566 - val_loss: 0.0595 - val_acc: 0.9818
Epoch 9/10
876/876 [==============================] - 5s 6ms/step - loss: 0.1064 - acc: 0.9635 - val_loss: 0.0528 - val_acc: 0.9864
Epoch 10/10
876/876 [==============================] - 5s 6ms/step - loss: 0.1019 - acc: 0.9658 - val_loss: 0.0577 - val_acc: 0.9773
Finish step_1
Train on 876 samples, validate on 220 samples
Epoch 1/30
876/876 [==============================] - 6s 6ms/step - loss: 0.1953 - acc: 0.9498 - val_loss: 0.0706 - val_acc: 0.9682
Epoch 00001: val_loss improved from inf to 0.07063, saving model to vgg16.hdf5
Epoch 2/30
876/876 [==============================] - 5s 6ms/step - loss: 0.1035 - acc: 0.9600 - val_loss: 0.0395 - val_acc: 0.9864
Epoch 00002: val_loss improved from 0.07063 to 0.03949, saving model to vgg16.hdf5
Epoch 3/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0705 - acc: 0.9772 - val_loss: 0.0377 - val_acc: 0.9909
Epoch 00003: val_loss improved from 0.03949 to 0.03771, saving model to vgg16.hdf5
Epoch 4/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0386 - acc: 0.9920 - val_loss: 0.0146 - val_acc: 0.9909
Epoch 00004: val_loss improved from 0.03771 to 0.01462, saving model to vgg16.hdf5
Epoch 5/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0206 - acc: 0.9932 - val_loss: 0.0203 - val_acc: 0.9955
Epoch 00005: val_loss did not improve from 0.01462
Epoch 6/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0165 - acc: 0.9954 - val_loss: 0.0195 - val_acc: 0.9955
Epoch 00006: val_loss did not improve from 0.01462
Epoch 7/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0183 - acc: 0.9943 - val_loss: 0.0233 - val_acc: 0.9955
Epoch 00007: val_loss did not improve from 0.01462
Epoch 8/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0119 - acc: 0.9966 - val_loss: 0.0165 - val_acc: 0.9955
Epoch 00008: val_loss did not improve from 0.01462
Epoch 9/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0091 - acc: 0.9966 - val_loss: 0.0150 - val_acc: 0.9909
Epoch 00009: val_loss did not improve from 0.01462
Epoch 00009: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
Restoring model weights from the end of the best epoch
Epoch 00009: early stopping
Finish fine-tune
CPU times: user 39.3 s, sys: 12.6 s, total: 51.9 s
Wall time: 1min 41s
history_plot(vgg16_history)

6. EfficientNetB4

!pip install -U efficientnet
Requirement already up-to-date: efficientnet in /usr/local/lib/python3.6/dist-packages (1.1.0)
Requirement already satisfied, skipping upgrade: keras-applications<=1.0.8,>=1.0.7 in /usr/local/lib/python3.6/dist-packages (from efficientnet) (1.0.8)
Requirement already satisfied, skipping upgrade: scikit-image in /usr/local/lib/python3.6/dist-packages (from efficientnet) (0.16.2)
Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (46.0.0)
# Import the EfficientNet module
from efficientnet.keras import EfficientNetB4
import keras.backend as K
# Fine-tune helper (re-defined)
fine_tune_model is re-defined here with a single change from the Section 4 version: in phase 2 it unfreezes every layer (for layer in model.layers[:]) instead of only the layers after freeze_num, so freeze_num now controls only the first phase.
# Define an EfficientNet-based model
def efficient_model(img_rows, img_cols):
    K.clear_session()
    x = Input(shape=(img_rows, img_cols, 3))
    x = Lambda(imagenet_utils.preprocess_input)(x)
    base_model = EfficientNetB4(input_tensor=x, weights="imagenet", include_top=False, pooling="avg")
    x = base_model.output
    x = Dense(1024, activation="relu", name="fc1")(x)
    x = Dropout(0.5)(x)
    predictions = Dense(n_classes, activation="softmax", name="predictions")(x)
    eB_model = Model(inputs=base_model.input, outputs=predictions, name="eB4")
    return eB_model
# Build the EfficientNet model
img_rows,img_cols=296,296
eB_model = efficient_model(img_rows,img_cols)
Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b4_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5
71892992/71892840 [==============================] - 1s 0us/step
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
for i, layer in enumerate(eB_model.layers):
    print(i, layer.name)
0 input_1
1 lambda_1
2 stem_conv
3 stem_bn
470 dropout_1
471 predictions
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 469
eB_model_history  = fine_tune_model(eB_model,optimizer,batch_size,epochs,freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 21s 24ms/step - loss: 0.0795 - acc: 0.9726 - val_loss: 0.0368 - val_acc: 0.9864
Epoch 2/10
876/876 [==============================] - 7s 8ms/step - loss: 0.0553 - acc: 0.9840 - val_loss: 0.0366 - val_acc: 0.9909
Epoch 3/10
876/876 [==============================] - 7s 8ms/step - loss: 0.0533 - acc: 0.9840 - val_loss: 0.0345 - val_acc: 0.9864
Epoch 4/10
876/876 [==============================] - 7s 8ms/step - loss: 0.0541 - acc: 0.9829 - val_loss: 0.0366 - val_acc: 0.9864
Epoch 10/10
876/876 [==============================] - 7s 8ms/step - loss: 0.0341 - acc: 0.9932 - val_loss: 0.0270 - val_acc: 0.9909
Finish step_1
Train on 876 samples, validate on 220 samples
Epoch 1/30
876/876 [==============================] - 73s 83ms/step - loss: 0.1622 - acc: 0.9612 - val_loss: 0.1712 - val_acc: 0.9545
Epoch 00001: val_loss improved from inf to 0.17119, saving model to eB4.hdf5
Epoch 2/30
876/876 [==============================] - 30s 35ms/step - loss: 0.1159 - acc: 0.9658 - val_loss: 0.1020 - val_acc: 0.9682
Epoch 00002: val_loss improved from 0.17119 to 0.10196, saving model to eB4.hdf5
Epoch 3/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0481 - acc: 0.9829 - val_loss: 0.1050 - val_acc: 0.9773
Epoch 00003: val_loss did not improve from 0.10196
Epoch 4/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0186 - acc: 0.9943 - val_loss: 0.0807 - val_acc: 0.9818
Epoch 00004: val_loss improved from 0.10196 to 0.08069, saving model to eB4.hdf5
Epoch 5/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0079 - acc: 0.9989 - val_loss: 0.0913 - val_acc: 0.9773
Epoch 00005: val_loss did not improve from 0.08069
Epoch 00010: val_loss did not improve from 0.04365
Epoch 11/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0113 - acc: 0.9943 - val_loss: 0.0868 - val_acc: 0.9773
Epoch 00011: val_loss did not improve from 0.04365
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
Epoch 12/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0165 - acc: 0.9932 - val_loss: 0.0814 - val_acc: 0.9818
Epoch 00012: val_loss did not improve from 0.04365
Restoring model weights from the end of the best epoch
Epoch 00012: early stopping
Finish fine-tune
history_plot(eB_model_history)

7. EfficientNet with Attention

!pip install -U efficientnet
Collecting efficientnet
  Downloading https://files.pythonhosted.org/packages/97/82/f3ae07316f0461417dc54affab6e86ab188a5a22f33176d35271628b96e0/efficientnet-1.0.0-py3-none-any.whl
Requirement already satisfied, skipping upgrade: scikit-image in /usr/local/lib/python3.6/dist-packages (from efficientnet) (0.15.0)
Requirement already satisfied, skipping upgrade: keras-applications<=1.0.8,>=1.0.7 in /usr/local/lib/python3.6/dist-packages (from efficientnet) (1.0.8)
Requirement already satisfied, skipping upgrade: PyWavelets>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->efficientnet) (1.1.1)
Requirement already satisfied, skipping upgrade: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->efficientnet) (3.1.2)
Requirement already satisfied, skipping upgrade: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.0->scikit-image->efficientnet) (4.4.1)
Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from h5py->keras-applications<=1.0.8,>=1.0.7->efficientnet) (1.12.0)
Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (42.0.1)
Installing collected packages: efficientnet
Successfully installed efficientnet-1.0.0
# Import modules
from efficientnet.keras import EfficientNetB4
import keras.backend as K
# Define an EfficientNet architecture with an attention module
# (efficientnet-with-attention)
def efficient_attention_model(img_rows, img_cols):
    K.clear_session()
    in_lay = Input(shape=(img_rows, img_cols, 3))
    base_model = EfficientNetB4(input_shape=(img_rows, img_cols, 3), weights="imagenet", include_top=False)
    pt_depth = base_model.get_output_shape_at(0)[-1]
    pt_features = base_model(in_lay)
    bn_features = BatchNormalization()(pt_features)

    # here we do an attention mechanism to turn pixels in the GAP on and off
    atten_layer = Conv2D(64, kernel_size=(1, 1), padding="same", activation="relu")(Dropout(0.5)(bn_features))
    atten_layer = Conv2D(16, kernel_size=(1, 1), padding="same", activation="relu")(atten_layer)
    atten_layer = Conv2D(8, kernel_size=(1, 1), padding="same", activation="relu")(atten_layer)
    atten_layer = Conv2D(1, kernel_size=(1, 1), padding="valid", activation="sigmoid")(atten_layer)  # H,W,1
    # fan it out to all of the channels
    up_c2_w = np.ones((1, 1, 1, pt_depth))  # 1,1,C
    up_c2 = Conv2D(pt_depth, kernel_size=(1, 1), padding="same", activation="linear", use_bias=False, weights=[up_c2_w])
    up_c2.trainable = False
    atten_layer = up_c2(atten_layer)  # H,W,C

    mask_features = multiply([atten_layer, bn_features])  # H,W,C
    gap_features = GlobalAveragePooling2D()(mask_features)  # 1,1,C
    # gap_mask = GlobalAveragePooling2D()(atten_layer)  # 1,1,C
    # # to account for missing values from the attention model
    # gap = Lambda(lambda x: x[0] / x[1], name="RescaleGAP")([gap_features, gap_mask])
    gap_dr = Dropout(0.25)(gap_features)
    dr_steps = Dropout(0.25)(Dense(1000, activation="relu")(gap_dr))
    out_layer = Dense(n_classes, activation="softmax")(dr_steps)
    eb_atten_model = Model(inputs=[in_lay], outputs=[out_layer])
    return eb_atten_model
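In effect, the four 1x1 convolutions learn a spatial mask a(h,w) in (0,1) that the frozen all-ones up_c2 convolution broadcasts across all C channels, so the pooled descriptor is an attention-weighted average (a plain-text reading of the code above):

z_c = (1 / (H*W)) * sum over (h,w) of a(h,w) * f_c(h,w)

The commented-out RescaleGAP variant would instead divide by the mask mass, sum of a(h,w), turning this into a proper weighted mean over the attended region.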
img_rows, img_cols = 296, 296
eB_atten_model = efficient_attention_model(img_rows, img_cols)
for i, layer in enumerate(eB_atten_model.layers):
    print(i, layer.name)
0 input_1
1 efficientnet-b4
2 batch_normalization_1
3 dropout_1
4 conv2d_1
5 conv2d_2
6 conv2d_3
7 conv2d_4
8 conv2d_5
9 multiply_1
10 global_average_pooling2d_1
11 dropout_2
12 dense_1
13 dropout_3
14 dense_2
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 12
eB_atten_model_history  = fine_tune_model(eB_atten_model,optimizer,batch_size,epochs,freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 19s 22ms/step - loss: 0.4855 - acc: 0.9304 - val_loss: 0.3256 - val_acc: 0.9409
Epoch 2/10
876/876 [==============================] - 7s 8ms/step - loss: 0.4319 - acc: 0.9372 - val_loss: 0.2806 - val_acc: 0.9455
Epoch 3/10
876/876 [==============================] - 7s 8ms/step - loss: 0.3744 - acc: 0.9349 - val_loss: 0.2577 - val_acc: 0.9455
Epoch 8/10
876/876 [==============================] - 7s 8ms/step - loss: 0.2456 - acc: 0.9475 - val_loss: 0.1869 - val_acc: 0.9591
Epoch 9/10
876/876 [==============================] - 7s 8ms/step - loss: 0.2243 - acc: 0.9612 - val_loss: 0.1847 - val_acc: 0.9591
Epoch 10/10
876/876 [==============================] - 7s 8ms/step - loss: 0.2205 - acc: 0.9658 - val_loss: 0.1764 - val_acc: 0.9591
Finish step_1
Train on 876 samples, validate on 220 samples
Epoch 1/30
876/876 [==============================] - 74s 84ms/step - loss: 0.1646 - acc: 0.9463 - val_loss: 0.0211 - val_acc: 0.9909
Epoch 00001: val_loss improved from inf to 0.02109, saving model to model_1.hdf5
Epoch 2/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0971 - acc: 0.9715 - val_loss: 0.0082 - val_acc: 0.9955
Epoch 00002: val_loss improved from 0.02109 to 0.00816, saving model to model_1.hdf5
Epoch 3/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0676 - acc: 0.9829 - val_loss: 0.0254 - val_acc: 0.9864
Epoch 00003: val_loss did not improve from 0.00816
Epoch 4/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0334 - acc: 0.9932 - val_loss: 0.0175 - val_acc: 0.9909
Epoch 00004: val_loss did not improve from 0.00816
Epoch 5/30
876/876 [==============================] - 30s 34ms/step - loss: 0.0242 - acc: 0.9909 - val_loss: 0.0157 - val_acc: 0.9909
Epoch 00005: val_loss did not improve from 0.00816
Epoch 6/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0090 - acc: 0.9977 - val_loss: 0.0139 - val_acc: 0.9909
Epoch 00006: val_loss did not improve from 0.00816
Epoch 00006: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
Epoch 7/30
876/876 [==============================] - 30s 34ms/step - loss: 0.0155 - acc: 0.9954 - val_loss: 0.0111 - val_acc: 0.9955
Epoch 00007: val_loss did not improve from 0.00816
Restoring model weights from the end of the best epoch
Epoch 00007: early stopping
Finish fine-tune
history_plot(eB_atten_model_history)

8. EfficientNetB4 with Attention v2 (SE Block)

!pip install -U efficientnet
Collecting efficientnet
  Downloading https://files.pythonhosted.org/packages/97/82/f3ae07316f0461417dc54affab6e86ab188a5a22f33176d35271628b96e0/efficientnet-1.0.0-py3-none-any.whl
Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (42.0.1)
Installing collected packages: efficientnet
Successfully installed efficientnet-1.0.0
# Import modules
from efficientnet.keras import EfficientNetB4
import keras.backend as K
import tensorflow as tf
from keras.layers import GlobalAveragePooling2D, GlobalMaxPooling2D, Reshape, Dense, multiply, Permute, Concatenate, Conv2D, Add, Activation, Lambda
from keras import backend as K
from keras.activations import sigmoid

def attach_attention_module(net, attention_module):
    if attention_module == 'se_block':  # SE_block
        net = se_block(net)
    elif attention_module == 'cbam_block':  # CBAM_block
        net = cbam_block(net)
    else:
        raise Exception("'{}' is not supported attention module!".format(attention_module))
    return net

def se_block(input_feature, ratio=8):
    """Contains the implementation of Squeeze-and-Excitation (SE) block.
    As described in https://arxiv.org/abs/1709.01507.
    """
    channel_axis = 1 if K.image_data_format() == "channels_first" else -1
    channel = input_feature._keras_shape[channel_axis]

    se_feature = GlobalAveragePooling2D()(input_feature)
    se_feature = Reshape((1, 1, channel))(se_feature)
    assert se_feature._keras_shape[1:] == (1, 1, channel)
    se_feature = Dense(channel // ratio,
                       activation='relu',
                       kernel_initializer='he_normal',
                       use_bias=True,
                       bias_initializer='zeros')(se_feature)
    assert se_feature._keras_shape[1:] == (1, 1, channel // ratio)
    se_feature = Dense(channel,
                       activation='sigmoid',
                       kernel_initializer='he_normal',
                       use_bias=True,
                       bias_initializer='zeros')(se_feature)
    assert se_feature._keras_shape[1:] == (1, 1, channel)
    if K.image_data_format() == 'channels_first':
        se_feature = Permute((3, 1, 2))(se_feature)
    se_feature = multiply([input_feature, se_feature])
    return se_feature

def cbam_block(cbam_feature, ratio=8):
    """Contains the implementation of Convolutional Block Attention Module (CBAM) block.
    As described in https://arxiv.org/abs/1807.06521.
    """
    cbam_feature = channel_attention(cbam_feature, ratio)
    cbam_feature = spatial_attention(cbam_feature)
    return cbam_feature

def channel_attention(input_feature, ratio=8):
    channel_axis = 1 if K.image_data_format() == "channels_first" else -1
    channel = input_feature._keras_shape[channel_axis]

    shared_layer_one = Dense(channel // ratio,
                             activation='relu',
                             kernel_initializer='he_normal',
                             use_bias=True,
                             bias_initializer='zeros')
    shared_layer_two = Dense(channel,
                             kernel_initializer='he_normal',
                             use_bias=True,
                             bias_initializer='zeros')

    avg_pool = GlobalAveragePooling2D()(input_feature)
    avg_pool = Reshape((1, 1, channel))(avg_pool)
    assert avg_pool._keras_shape[1:] == (1, 1, channel)
    avg_pool = shared_layer_one(avg_pool)
    assert avg_pool._keras_shape[1:] == (1, 1, channel // ratio)
    avg_pool = shared_layer_two(avg_pool)
    assert avg_pool._keras_shape[1:] == (1, 1, channel)

    max_pool = GlobalMaxPooling2D()(input_feature)
    max_pool = Reshape((1, 1, channel))(max_pool)
    assert max_pool._keras_shape[1:] == (1, 1, channel)
    max_pool = shared_layer_one(max_pool)
    assert max_pool._keras_shape[1:] == (1, 1, channel // ratio)
    max_pool = shared_layer_two(max_pool)
    assert max_pool._keras_shape[1:] == (1, 1, channel)

    cbam_feature = Add()([avg_pool, max_pool])
    cbam_feature = Activation('sigmoid')(cbam_feature)

    if K.image_data_format() == "channels_first":
        cbam_feature = Permute((3, 1, 2))(cbam_feature)
    return multiply([input_feature, cbam_feature])

def spatial_attention(input_feature):
    kernel_size = 7
    if K.image_data_format() == "channels_first":
        channel = input_feature._keras_shape[1]
        cbam_feature = Permute((2, 3, 1))(input_feature)
    else:
        channel = input_feature._keras_shape[-1]
        cbam_feature = input_feature

    avg_pool = Lambda(lambda x: K.mean(x, axis=3, keepdims=True))(cbam_feature)
    assert avg_pool._keras_shape[-1] == 1
    max_pool = Lambda(lambda x: K.max(x, axis=3, keepdims=True))(cbam_feature)
    assert max_pool._keras_shape[-1] == 1
    concat = Concatenate(axis=3)([avg_pool, max_pool])
    assert concat._keras_shape[-1] == 2
    cbam_feature = Conv2D(filters=1,
                          kernel_size=kernel_size,
                          strides=1,
                          padding='same',
                          activation='sigmoid',
                          kernel_initializer='he_normal',
                          use_bias=False)(concat)
    assert cbam_feature._keras_shape[-1] == 1

    if K.image_data_format() == "channels_first":
        cbam_feature = Permute((3, 1, 2))(cbam_feature)
    return multiply([input_feature, cbam_feature])
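For reference, se_block above implements the squeeze-and-excitation recipe from https://arxiv.org/abs/1709.01507; in plain notation, for a feature map F with C channels:

z = GlobalAveragePooling(F)              # squeeze, z has C entries
s = sigmoid(W2 · relu(W1 · z))           # excite, W1: C -> C/ratio, W2: C/ratio -> C
F' = s ⊙ F                               # channel-wise reweighting

cbam_block adds a second, spatial stage: a 7x7 convolution over the channel-wise mean and max maps produces an HxW mask that reweights spatial positions the same way.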
# Define an EfficientNet model with an SE attention block
def efficient__atten2_model(img_rows, img_cols):
    K.clear_session()
    in_lay = Input(shape=(img_rows, img_cols, 3))
    base_model = EfficientNetB4(input_shape=(img_rows, img_cols, 3), weights="imagenet", include_top=False)
    pt_features = base_model(in_lay)
    bn_features = BatchNormalization()(pt_features)
    atten_features = attach_attention_module(bn_features, "se_block")
    gap_features = GlobalAveragePooling2D()(atten_features)
    gap_dr = Dropout(0.25)(gap_features)
    dr_steps = Dropout(0.25)(Dense(1000, activation="relu")(gap_dr))
    out_layer = Dense(n_classes, activation="softmax")(dr_steps)
    eb_atten_model = Model(inputs=[in_lay], outputs=[out_layer])
    return eb_atten_model
img_rows, img_cols = 296, 296
eB_atten2_model = efficient__atten2_model(img_rows, img_cols)
for i, layer in enumerate(eB_atten2_model.layers):
    print(i, layer.name)
0 input_1
1 efficientnet-b4
2 batch_normalization_1
3 global_average_pooling2d_1
4 reshape_1
5 dense_1
6 dense_2
7 multiply_1
8 global_average_pooling2d_2
9 dropout_1
10 dense_3
11 dropout_2
12 dense_4
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 19
eB_atten2_model_history  = fine_tune_model(eB_atten2_model,optimizer,batch_size,epochs,freeze_num)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1033: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.

Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 17s 19ms/step - loss: 2.3097 - acc: 0.1096 - val_loss: 2.9290 - val_acc: 0.0955
Epoch 7/10
876/876 [==============================] - 6s 7ms/step - loss: 2.3012 - acc: 0.1210 - val_loss: 2.9290 - val_acc: 0.0955
Epoch 8/10
876/876 [==============================] - 6s 7ms/step - loss: 2.3046 - acc: 0.1233 - val_loss: 2.9290 - val_acc: 0.0955
Epoch 9/10
876/876 [==============================] - 6s 7ms/step - loss: 2.3167 - acc: 0.1050 - val_loss: 2.9290 - val_acc: 0.0955
Epoch 10/10
876/876 [==============================] - 6s 7ms/step - loss: 2.3035 - acc: 0.1267 - val_loss: 2.9290 - val_acc: 0.0955
Finish step_1

Train on 876 samples, validate on 220 samples
Epoch 1/30
876/876 [==============================] - 67s 76ms/step - loss: 0.7490 - acc: 0.8242 - val_loss: 0.0197 - val_acc: 0.9955
Epoch 00001: val_loss improved from inf to 0.01974, saving model to model_1.hdf5
Epoch 2/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0931 - acc: 0.9692 - val_loss: 0.0393 - val_acc: 0.9818
Epoch 00002: val_loss did not improve from 0.01974
Epoch 3/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0332 - acc: 0.9920 - val_loss: 0.0197 - val_acc: 0.9909
Epoch 00004: val_loss did not improve from 0.01296
Epoch 5/30
672/876 [======================>.......] - ETA: 6s - loss: 0.0311 - acc: 0.9926
Epoch 00005: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
Epoch 6/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0179 - acc: 0.9977 - val_loss: 0.0092 - val_acc: 0.9955
Epoch 00006: val_loss improved from 0.00997 to 0.00916, saving model to model_1.hdf5
Epoch 7/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0099 - acc: 0.9989 - val_loss: 0.0098 - val_acc: 1.0000
Epoch 00007: val_loss did not improve from 0.00916
Epoch 8/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0226 - acc: 0.9920 - val_loss: 0.0091 - val_acc: 1.0000
Epoch 00008: val_loss improved from 0.00916 to 0.00912, saving model to model_1.hdf5
Epoch 9/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0073 - acc: 0.9989 - val_loss: 0.0091 - val_acc: 1.0000
Epoch 00009: val_loss improved from 0.00912 to 0.00906, saving model to model_1.hdf5
Epoch 10/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0187 - acc: 0.9943 - val_loss: 0.0068 - val_acc: 1.0000
Epoch 00012: val_loss improved from 0.00619 to 0.00562, saving model to model_1.hdf5
Epoch 13/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0132 - acc: 0.9954 - val_loss: 0.0053 - val_acc: 1.0000
Epoch 00013: val_loss improved from 0.00562 to 0.00527, saving model to model_1.hdf5
Epoch 14/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0086 - acc: 0.9989 - val_loss: 0.0053 - val_acc: 1.0000
Epoch 00014: val_loss did not improve from 0.00527
Epoch 15/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0096 - acc: 0.9977 - val_loss: 0.0051 - val_acc: 1.0000
Epoch 00015: val_loss improved from 0.00527 to 0.00509, saving model to model_1.hdf5
Epoch 00015: ReduceLROnPlateau reducing learning rate to 7.999999979801942e-07.
Epoch 16/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0110 - acc: 0.9966 - val_loss: 0.0052 - val_acc: 1.0000
Epoch 00016: val_loss did not improve from 0.00509
Epoch 17/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0061 - acc: 0.9989 - val_loss: 0.0052 - val_acc: 1.0000
Epoch 00017: val_loss did not improve from 0.00509
Epoch 18/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0129 - acc: 0.9954 - val_loss: 0.0052 - val_acc: 1.0000
Epoch 00018: val_loss did not improve from 0.00509
Epoch 19/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0150 - acc: 0.9966 - val_loss: 0.0053 - val_acc: 1.0000
Epoch 00019: val_loss did not improve from 0.00509
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.600000018697756e-07.
Epoch 20/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0069 - acc: 0.9989 - val_loss: 0.0054 - val_acc: 1.0000
Epoch 00020: val_loss did not improve from 0.00509
Restoring model weights from the end of the best epoch
Epoch 00020: early stopping
Finish fine-tune
history_plot(eB_atten2_model_history)

9. Bilinear EfficientNet

!pip install -U efficientnet
Requirement already up-to-date: efficientnet in /usr/local/lib/python3.6/dist-packages (1.1.0)
Requirement already satisfied, skipping upgrade: scikit-image in /usr/local/lib/python3.6/dist-packages (from efficientnet) (0.16.2)
Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (46.0.0)
The library imports from Section 4 and the attention-module definitions (attach_attention_module, se_block, cbam_block, channel_attention, spatial_attention) from Section 8 are re-run here unchanged, along with the EfficientNetB4 import.
# Define a bilinear EfficientNet attention model
def blinear_efficient__atten_model(img_rows, img_cols):
    K.clear_session()
    in_lay = Input(shape=(img_rows, img_cols, 3))
    base_model = EfficientNetB4(input_shape=(img_rows, img_cols, 3), weights="imagenet", include_top=False)
    pt_depth = base_model.get_output_shape_at(0)[-1]
    cnn_features_a = base_model(in_lay)
    cnn_bn_features_a = BatchNormalization()(cnn_features_a)

    # attention mechanism: turn pixels in the GAP on and off
    atten_layer = Conv2D(64, kernel_size=(1, 1), padding="same", activation="relu")(Dropout(0.5)(cnn_bn_features_a))
    atten_layer = Conv2D(16, kernel_size=(1, 1), padding="same", activation="relu")(atten_layer)
    atten_layer = Conv2D(8, kernel_size=(1, 1), padding="same", activation="relu")(atten_layer)
    atten_layer = Conv2D(1, kernel_size=(1, 1), padding="valid", activation="sigmoid")(atten_layer)  # H,W,1
    # fan it out to all of the channels
    up_c2_w = np.ones((1, 1, 1, pt_depth))  # 1,1,C
    up_c2 = Conv2D(pt_depth, kernel_size=(1, 1), padding="same", activation="linear", use_bias=False, weights=[up_c2_w])
    up_c2.trainable = True
    atten_layer = up_c2(atten_layer)  # H,W,C
    cnn_atten_out_a = multiply([atten_layer, cnn_bn_features_a])  # H,W,C

    # "bilinear": element-wise product of the attended branch with itself
    cnn_atten_out_b = cnn_atten_out_a
    cnn_out_dot = multiply([cnn_atten_out_a, cnn_atten_out_b])
    gap_features = GlobalAveragePooling2D()(cnn_out_dot)
    gap_dr = Dropout(0.25)(gap_features)
    dr_steps = Dropout(0.25)(Dense(1000, activation="relu")(gap_dr))
    out_layer = Dense(n_classes, activation="softmax")(dr_steps)
    b_eff_atten_model = Model(inputs=[in_lay], outputs=[out_layer], name="blinear_efficient_atten")
    return b_eff_atten_model
# Build the bilinear EfficientNet attention model
img_rows,img_cols = 296,296
befficient_model = blinear_efficient__atten_model(img_rows,img_cols)
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 19
befficient_model_history  = fine_tune_model(befficient_model,optimizer,batch_size,epochs,freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 14s 16ms/step - loss: 2.3208 - acc: 0.1084 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 2/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3244 - acc: 0.1005 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 3/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3251 - acc: 0.1062 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 4/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3316 - acc: 0.0936 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 5/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3185 - acc: 0.1039 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 6/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3270 - acc: 0.1005 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 7/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3250 - acc: 0.0993 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 8/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3289 - acc: 0.1005 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 9/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3284 - acc: 0.0913 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 10/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3308 - acc: 0.1027 - val_loss: 2.8606 - val_acc: 0.1136
Finish step_1
Train on 876 samples, validate on 220 samples
Epoch 5/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0747 - acc: 0.9749 - val_loss: 0.0289 - val_acc: 0.9864
Epoch 00024: ReduceLROnPlateau reducing learning rate to 3.199999980552093e-08.
Epoch 25/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0078 - acc: 0.9977 - val_loss: 0.0138 - val_acc: 0.9955
Epoch 00025: val_loss did not improve from 0.01312
Restoring model weights from the end of the best epoch
Epoch 00025: early stopping
Finish fine-tune
history_plot(befficient_model_history)

10. Bilinear VGG16 Model

The fine_tune_model helper defined in Section 4 is re-run here unchanged.
# Define a bilinear VGG16 model
from keras import backend as K

def batch_dot(cnn_ab):
    return K.batch_dot(cnn_ab[0], cnn_ab[1], axes=[1, 1])

def sign_sqrt(x):
    return K.sign(x) * K.sqrt(K.abs(x) + 1e-10)

def l2_norm(x):
    return K.l2_normalize(x, axis=-1)

def bilinear_vgg16(img_rows, img_cols):
    input_tensor = Input(shape=(img_rows, img_cols, 3))
    input_tensor = Lambda(imagenet_utils.preprocess_input)(input_tensor)
    model_vgg16 = VGG16(include_top=False, weights="imagenet", input_tensor=input_tensor, pooling="avg")
    cnn_out_a = model_vgg16.layers[-2].output
    cnn_out_shape = model_vgg16.layers[-2].output_shape
    cnn_out_a = Reshape([cnn_out_shape[1] * cnn_out_shape[2], cnn_out_shape[-1]])(cnn_out_a)
    cnn_out_b = cnn_out_a
    cnn_out_dot = Lambda(batch_dot)([cnn_out_a, cnn_out_b])
    cnn_out_dot = Reshape([cnn_out_shape[-1] * cnn_out_shape[-1]])(cnn_out_dot)
    sign_sqrt_out = Lambda(sign_sqrt)(cnn_out_dot)
    l2_norm_out = Lambda(l2_norm)(sign_sqrt_out)
    fc1 = Dense(1024, activation="relu", name="fc1")(l2_norm_out)
    dropout = Dropout(0.5)(fc1)
    output = Dense(n_classes, activation="softmax", name="output")(dropout)
    bvgg16_model = Model(inputs=model_vgg16.input, outputs=output, name="bvgg16")
    return bvgg16_model
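In plain terms, with the last VGG16 conv output flattened to X of shape (H*W) x C, the helpers above compute the classic bilinear-CNN descriptor (second-order pooling) followed by the usual signed square root and L2 normalisation:

B = Xᵀ X, a C x C matrix               # batch_dot contracts the spatial axis
y = sign(B) * sqrt(|B| + 1e-10)        # sign_sqrt
z = y / ||y||₂                         # l2_norm

Both streams share the same features here (cnn_out_b = cnn_out_a), so this is the symmetric variant of bilinear pooling.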
# Build the bilinear VGG16 model
img_rows,img_cols = 296,296
bvgg16_model = bilinear_vgg16(img_rows,img_cols)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4267: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
for i, layer in enumerate(bvgg16_model.layers):
    print(i, layer.name)
0 input_3
1 lambda_1
2 block1_conv1
3 block1_conv2
4 block1_pool
5 block2_conv1
6 block2_conv2
7 block2_pool
8 block3_conv1
9 block3_conv2
10 block3_conv3
23 lambda_3
24 lambda_4
25 fc1
26 dropout_4
27 output
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 32
epochs = 100
freeze_num = 25
bvgg16_history = fine_tune_model(bvgg16_model,optimizer,batch_size,epochs,freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 22s 26ms/step - loss: 1.9859 - acc: 0.6233 - val_loss: 1.6014 - val_acc: 0.9636
Epoch 2/10
876/876 [==============================] - 8s 9ms/step - loss: 1.2607 - acc: 0.9680 - val_loss: 1.0053 - val_acc: 0.9864
history_plot(bvgg16_history)

Load Weights

# Load the best checkpoint saved by ModelCheckpoint (named model.name + ".hdf5").
# The original cell referenced an xception_model that is never defined in this
# post; for the bilinear VGG16 used for prediction below, this would be:
bvgg16_model.load_weights("bvgg16.hdf5")

Predict

predict = bvgg16_model.predict(x_test)
predict = np.argmax(predict, axis=1)  # class index per test image
predict.shape
(274,)
print(predict[:5])
[1 9 8 0 2]
print(x_test_img_path)
['0.jpg', '1.jpg', '2.jpg', '3.jpg', '4.jpg', ..., '271.jpg', '272.jpg', '273.jpg']
id = np.arange(0, predict.shape[0])

import pandas as pd

df = pd.DataFrame({"img_path": id, "tags": predict})
df.to_csv("bvgg16_model.csv", index=None, header=None)
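If the submission format expects the actual file names rather than the bare 0..273 index, the sorted x_test_img_path list from Section 2 lines up one-to-one with predict (an assumption about the expected format; adjust to the competition's template):

df = pd.DataFrame({"img_path": x_test_img_path, "tags": predict})
df.to_csv("bvgg16_model.csv", index=None, header=None)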

Summary

Six transfer-learning variants were fine-tuned on this small dataset: plain VGG16, plain EfficientNetB4, EfficientNetB4 with a convolutional attention head, EfficientNetB4 with an SE attention block, a bilinear EfficientNet with attention, and a bilinear VGG16. All of them converge within a handful of fine-tuning epochs; the SE-block variant reaches 100% validation accuracy while the others sit around 98-99%, and the submission file above is generated from the bilinear VGG16 model.