Introduction

Image classification is a very easy task for us humans, but before artificial intelligence and deep learning came into wide use it was a daunting one for a machine. Self-driving cars can detect objects and take the necessary action in real time, and much of that is possible thanks to image classification of the kind TensorFlow enables.

In this article, you will learn the following:

  • What is TensorFlow?

  • What is image classification?

  • TensorFlow image classification: Fashion-MNIST

  • CIFAR-10: CNN

What is TensorFlow?

TensorFlow is Google's open-source machine learning framework for dataflow programming across a range of tasks. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) passed between them.

Tensors are multidimensional arrays, an extension of two-dimensional tables to data of higher dimensionality. Many features of TensorFlow make it well suited to deep learning, and its core open-source library helps you develop and train ML models.
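For a quick feel of what "higher dimensions" means here, the following minimal sketch (an illustration added for this article, using only tf.constant) builds tensors of increasing rank:

import tensorflow as tf

scalar = tf.constant(3)                         # rank-0 tensor: a single value
vector = tf.constant([1.0, 2.0, 3.0])           # rank-1 tensor: a 1-D array
matrix = tf.constant([[1, 2], [3, 4]])          # rank-2 tensor: a 2-D table
cube   = tf.constant([[[1], [2]], [[3], [4]]])  # rank-3 tensor: data beyond 2-D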

What is image classification?

The intent of image classification is to categorize all of the pixels in a digital image into one of several classes or themes. The categorized data can then be used to determine whether the objects in the image belong to one of those classes or themes.

Depending on the interaction during the classification process, there are two types of classification:

  • Supervised

  • Unsupervised

So let's dive straight into TensorFlow image classification with two examples.

TensorFlow image classification: Fashion-MNIST

The Fashion-MNIST dataset

Here we will use the Fashion-MNIST dataset, which contains 70,000 grayscale images in 10 categories. We will use 60,000 of them for training and 10,000 for testing. If you want to try this yourself, you can access Fashion-MNIST directly from TensorFlow; just import and load the data.

  • Import the libraries

from __future__ import absolute_import, division, print_function

# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
  • Load the data

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
  • Map the images to their class names

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
  • Explore the data

train_images.shape
# Each label is between 0-9
train_labels
test_images.shape
  • Preprocess the data

plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
# If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255.

  • Scale the images to the 0-1 range before feeding them into the neural network

train_images = train_images / 255.0
test_images = test_images / 255.0
  • Display some of the images

plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])
plt.show()

  • Set up the layers

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
  • Compile the model

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
  • Train the model

model.fit(train_images, train_labels, epochs=10)

  • Evaluate accuracy

test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)

  • Make predictions

predictions = model.predict(test_images)
predictions[0]

Each prediction is an array of 10 numbers, one for each of the 10 different articles of clothing. We can check which label has the highest confidence value.

np.argmax(predictions[0])
# The model is most confident that it's an ankle boot. Let's see if it's correct.


Output: 9

test_labels[0]
  • Graph the full set of 10 class predictions

def plot_image(i, predictions_array, true_label, img):
    predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img, cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'green'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100*np.max(predictions_array),
                                         class_names[true_label]),
               color=color)

def plot_value_array(i, predictions_array, true_label):
    predictions_array, true_label = predictions_array[i], true_label[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('green')
  • Look at the 0th and 10th images

i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()

i = 10
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()

  • Plot several images with their predictions. Correct predictions are green, incorrect ones are red.

num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions, test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions, test_labels)
plt.show()

  • Use the trained model to make a prediction about a single image

# Grab an image from the test dataset
img = test_images[0]
print(img.shape)

# Add the image to a batch where it's the only member.
img = (np.expand_dims(img, 0))
print(img.shape)

predictions_single = model.predict(img)
print(predictions_single)

plot_value_array(0, predictions_single, test_labels)
plt.xticks(range(10), class_names, rotation=45)
plt.show()

  • Grab the prediction for our (only) image in the batch

prediction_result = np.argmax(predictions_single[0])
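To turn that index back into a human-readable label, you can index into the class_names list defined earlier (a small usage note rather than part of the original walkthrough):

print(class_names[prediction_result])  # prints 'Ankle boot' if the predicted index is 9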


CIFAR-10: CNN

The CIFAR-10 dataset consists of airplanes, dogs, cats, and other objects. Here we preprocess the images and then train a convolutional neural network on all the samples. The images need to be normalized. This use case should clear up any doubts you may still have about TensorFlow image classification.

  • Download the data

from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

class DownloadProgress(tqdm):
    last_block = 0
    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

"""
    check if the data (zip) file is already downloaded
    if not, download it from "https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz" and save as cifar-10-python.tar.gz
"""
if not isfile('cifar-10-python.tar.gz'):
    with DownloadProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            'cifar-10-python.tar.gz',
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open('cifar-10-python.tar.gz') as tar:
        tar.extractall()
  • Import the necessary libraries

import pickle
import numpy as np
import matplotlib.pyplot as plt
  • Understand the data

Each original batch of data is a 10000 x 3072 numpy array, where 10000 is the number of sample images. The images are in color and 32 x 32 pixels in size. They can be fed in either (width x height x num_channel) or (num_channel x width x height) format. We also need to define the labels.
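The test section at the end of this article calls a load_label_names() helper that is never shown. A minimal sketch, assuming the standard CIFAR-10 class names in their usual label order:

def load_label_names():
    # The ten CIFAR-10 classes, indexed by label id 0-9
    return ['airplane', 'automobile', 'bird', 'cat', 'deer',
            'dog', 'frog', 'horse', 'ship', 'truck']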

  • Reshape the data

We will reshape the data in two stages.

First, split the row vector (3,072 values) into 3 pieces, one piece per color channel, giving a shape of 3 x 1024. Then split each 1,024-value piece by 32, the width of the image, which yields 3 x 32 x 32.

Second, we have to transpose the data from (num_channel, width, height) to (width, height, num_channel), using the transpose function.

def load_cfar10_batch(cifar10_dataset_folder_path, batch_id):
    with open(cifar10_dataset_folder_path + '/data_batch_' + str(batch_id), mode='rb') as file:
        # note the encoding type is 'latin1'
        batch = pickle.load(file, encoding='latin1')
    features = batch['data'].reshape((len(batch['data']), 3, 32, 32)).transpose(0, 2, 3, 1)
    labels = batch['labels']
    return features, labels
  • Explore the data

%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np

# Explore the dataset
batch_id = 3
sample_id = 7000
display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
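display_stats() is another helper the article uses without defining. A rough sketch of what it presumably does, built on load_cfar10_batch() above and the load_label_names() sketch earlier:

def display_stats(cifar10_dataset_folder_path, batch_id, sample_id):
    # Load one training batch and summarize it
    features, labels = load_cfar10_batch(cifar10_dataset_folder_path, batch_id)
    print('Samples: {}'.format(len(features)))
    print('Label counts: {}'.format(dict(zip(*np.unique(labels, return_counts=True)))))
    # Show one sample image together with its label
    sample_image = features[sample_id]
    sample_label = labels[sample_id]
    print('Image - Min: {} Max: {}'.format(sample_image.min(), sample_image.max()))
    print('Label id: {} Name: {}'.format(sample_label, load_label_names()[sample_label]))
    plt.imshow(sample_image)
    plt.show()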

  • Implement the preprocessing functions

Normalize the data with min-max normalization. Put simply, this makes all x values fall in the range between 0 and 1.

y = (x-min) / (max-min)
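The preprocessing code below passes a normalize function around but never shows its body. A minimal sketch that matches the formula above:

def normalize(x):
    # Min-max scaling: every pixel value ends up in [0, 1]
    min_val = np.min(x)
    max_val = np.max(x)
    return (x - min_val) / (max_val - min_val)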

  • One-hot encode the labels

def one_hot_encode(x):
    """
        argument
            - x: a list of labels
        return
            - one hot encoding matrix (number of labels, number of classes)
    """
    encoded = np.zeros((len(x), 10))
    for idx, val in enumerate(x):
        encoded[idx][val] = 1
    return encoded
  • Preprocess and save the data

def _preprocess_and_save(normalize, one_hot_encode, features, labels, filename):
    features = normalize(features)
    labels = one_hot_encode(labels)
    pickle.dump((features, labels), open(filename, 'wb'))

def preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode):
    n_batches = 5
    valid_features = []
    valid_labels = []
    for batch_i in range(1, n_batches + 1):
        features, labels = load_cfar10_batch(cifar10_dataset_folder_path, batch_i)
        # find the index that marks off the validation portion of the batch (10%)
        index_of_validation = int(len(features) * 0.1)
        # preprocess 90% of the batch:
        # - normalize the features
        # - one_hot_encode the labels
        # - save to a new file named "preprocess_batch_" + batch_number
        # - one file per batch
        _preprocess_and_save(normalize, one_hot_encode,
                             features[:-index_of_validation], labels[:-index_of_validation],
                             'preprocess_batch_' + str(batch_i) + '.p')
        # unlike the training dataset, the validation dataset is accumulated across all batches
        # - take 10% of the whole dataset of the batch
        # - add it to lists of
        #   - valid_features
        #   - valid_labels
        valid_features.extend(features[-index_of_validation:])
        valid_labels.extend(labels[-index_of_validation:])
    # preprocess the stacked validation dataset
    _preprocess_and_save(normalize, one_hot_encode,
                         np.array(valid_features), np.array(valid_labels),
                         'preprocess_validation.p')
    # load the test dataset
    with open(cifar10_dataset_folder_path + '/test_batch', mode='rb') as file:
        batch = pickle.load(file, encoding='latin1')
    # preprocess the testing data
    test_features = batch['data'].reshape((len(batch['data']), 3, 32, 32)).transpose(0, 2, 3, 1)
    test_labels = batch['labels']
    # preprocess and save all testing data (note the file name: test_model() below loads this same file)
    _preprocess_and_save(normalize, one_hot_encode,
                         np.array(test_features), np.array(test_labels),
                         'preprocess_training.p')
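The article never shows the call that actually runs the preprocessing; presumably it is invoked once, along these lines:

# Run once: writes preprocess_batch_1.p ... preprocess_batch_5.p,
# preprocess_validation.p, and the test pickle
preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)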
  • Build the network

The model consists of 14 layers in total. (Note that this part of the tutorial uses the TensorFlow 1.x API: tf.layers, tf.contrib, and tf.Session.)

import tensorflow as tf

def conv_net(x, keep_prob):
    conv1_filter = tf.Variable(tf.truncated_normal(shape=[3, 3, 3, 64], mean=0, stddev=0.08))
    conv2_filter = tf.Variable(tf.truncated_normal(shape=[3, 3, 64, 128], mean=0, stddev=0.08))
    conv3_filter = tf.Variable(tf.truncated_normal(shape=[5, 5, 128, 256], mean=0, stddev=0.08))
    conv4_filter = tf.Variable(tf.truncated_normal(shape=[5, 5, 256, 512], mean=0, stddev=0.08))

    # 1, 2
    conv1 = tf.nn.conv2d(x, conv1_filter, strides=[1,1,1,1], padding='SAME')
    conv1 = tf.nn.relu(conv1)
    conv1_pool = tf.nn.max_pool(conv1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
    conv1_bn = tf.layers.batch_normalization(conv1_pool)
    # 3, 4
    conv2 = tf.nn.conv2d(conv1_bn, conv2_filter, strides=[1,1,1,1], padding='SAME')
    conv2 = tf.nn.relu(conv2)
    conv2_pool = tf.nn.max_pool(conv2, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
    conv2_bn = tf.layers.batch_normalization(conv2_pool)
    # 5, 6
    conv3 = tf.nn.conv2d(conv2_bn, conv3_filter, strides=[1,1,1,1], padding='SAME')
    conv3 = tf.nn.relu(conv3)
    conv3_pool = tf.nn.max_pool(conv3, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
    conv3_bn = tf.layers.batch_normalization(conv3_pool)
    # 7, 8
    conv4 = tf.nn.conv2d(conv3_bn, conv4_filter, strides=[1,1,1,1], padding='SAME')
    conv4 = tf.nn.relu(conv4)
    conv4_pool = tf.nn.max_pool(conv4, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
    conv4_bn = tf.layers.batch_normalization(conv4_pool)
    # 9
    flat = tf.contrib.layers.flatten(conv4_bn)
    # 10
    full1 = tf.contrib.layers.fully_connected(inputs=flat, num_outputs=128, activation_fn=tf.nn.relu)
    full1 = tf.nn.dropout(full1, keep_prob)
    full1 = tf.layers.batch_normalization(full1)
    # 11
    full2 = tf.contrib.layers.fully_connected(inputs=full1, num_outputs=256, activation_fn=tf.nn.relu)
    full2 = tf.nn.dropout(full2, keep_prob)
    full2 = tf.layers.batch_normalization(full2)
    # 12
    full3 = tf.contrib.layers.fully_connected(inputs=full2, num_outputs=512, activation_fn=tf.nn.relu)
    full3 = tf.nn.dropout(full3, keep_prob)
    full3 = tf.layers.batch_normalization(full3)
    # 13
    full4 = tf.contrib.layers.fully_connected(inputs=full3, num_outputs=1024, activation_fn=tf.nn.relu)
    full4 = tf.nn.dropout(full4, keep_prob)
    full4 = tf.layers.batch_normalization(full4)
    # 14
    out = tf.contrib.layers.fully_connected(inputs=full4, num_outputs=10, activation_fn=None)
    return out
  • Hyperparameters

epochs = 10
batch_size = 128
keep_probability = 0.7
learning_rate = 0.001
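The graph-building code below uses three placeholders, x, y, and keep_prob, that the article never shows being created. A minimal sketch, inferred from the tensor names ('input_x:0', 'output_y:0', 'keep_prob:0') that the test section loads later:

# Hypothetical placeholder definitions; the names match the tensors
# retrieved by get_tensor_by_name() in test_model() below
x = tf.placeholder(tf.float32, shape=(None, 32, 32, 3), name='input_x')
y = tf.placeholder(tf.float32, shape=(None, 10), name='output_y')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')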
logits = conv_net(x, keep_prob)
model = tf.identity(logits, name='logits')  # Name the logits Tensor, so that it can be loaded from disk after training

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
  • Train the neural network

# Single optimization step
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    session.run(optimizer,
                feed_dict={
                    x: feature_batch,
                    y: label_batch,
                    keep_prob: keep_probability
                })
# Showing stats
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    loss = session.run(cost,
                       feed_dict={
                           x: feature_batch,
                           y: label_batch,
                           keep_prob: 1.
                       })
    valid_acc = session.run(accuracy,
                            feed_dict={
                                x: valid_features,
                                y: valid_labels,
                                keep_prob: 1.
                            })
    print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
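print_stats() also reads the globals valid_features and valid_labels, whose loading the article never shows. Presumably they come from the validation pickle written during preprocessing:

# Load the validation set saved by preprocess_and_save_data()
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))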
  • Fully train the model and save it

def batch_features_labels(features, labels, batch_size):
    """
    Split features and labels into batches
    """
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]

def load_preprocess_training_batch(batch_id, batch_size):
    """
    Load the preprocessed training data and return it in batches of <batch_size> or less
    """
    filename = 'preprocess_batch_' + str(batch_id) + '.p'
    features, labels = pickle.load(open(filename, mode='rb'))
    # Return the training data in batches of size <batch_size> or less
    return batch_features_labels(features, labels, batch_size)
# Saving model and path
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # Save model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)

The important part of the TensorFlow image classification is now done. Next, let's test the model.

  • Test the model

import pickle
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelBinarizer

def batch_features_labels(features, labels, batch_size):
    """
    Split features and labels into batches
    """
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]

def display_image_predictions(features, labels, predictions, top_n_predictions):
    n_classes = 10
    label_names = load_label_names()
    label_binarizer = LabelBinarizer()
    label_binarizer.fit(range(n_classes))
    label_ids = label_binarizer.inverse_transform(np.array(labels))

    fig, axes = plt.subplots(nrows=top_n_predictions, ncols=2, figsize=(20, 10))
    fig.tight_layout()
    fig.suptitle('Softmax Predictions', fontsize=20, y=1.1)

    n_predictions = 3
    margin = 0.05
    ind = np.arange(n_predictions)
    width = (1. - 2. * margin) / n_predictions

    for image_i, (feature, label_id, pred_indices, pred_values) in enumerate(zip(features, label_ids, predictions.indices, predictions.values)):
        if (image_i < top_n_predictions):
            pred_names = [label_names[pred_i] for pred_i in pred_indices]
            correct_name = label_names[label_id]
            axes[image_i][0].imshow((feature*255).astype(np.int32, copy=False))
            axes[image_i][0].set_title(correct_name)
            axes[image_i][0].set_axis_off()
            axes[image_i][1].barh(ind + margin, pred_values[:3], width)
            axes[image_i][1].set_yticks(ind + margin)
            axes[image_i][1].set_yticklabels(pred_names[::-1])
            axes[image_i][1].set_xticks([0, 0.5, 1.0])
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import random

save_model_path = './image_classification'
batch_size = 64
n_samples = 10
top_n_predictions = 5

def test_model():
    test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
    loaded_graph = tf.Graph()
    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('input_x:0')
        loaded_y = loaded_graph.get_tensor_by_name('output_y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')

        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        for train_feature_batch, train_label_batch in batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1
        print('Testing Accuracy: {}'.format(test_batch_acc_total/test_batch_count))

        # Print random samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        display_image_predictions(random_test_features, random_test_labels, random_test_predictions, top_n_predictions)

test_model()

Output: Testing Accuracy: 0.5882762738853503

Conclusion

If you train the neural network for longer or with more capacity, you will likely get results with higher accuracy. After working through this detailed example, you should be able to use these techniques to classify any kind of image.
