1. The Fashion-MNIST dataset

The Fashion-MNIST dataset consists of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image associated with a label from one of the following 10 classes:

  • T-shirt/top
  • Trouser
  • Pullover
  • Dress
  • Coat
  • Sandal
  • Shirt
  • Sneaker
  • Bag
  • Ankle boot

    Source: https://github.com/zalandoresearch/fashion-mnist/

The provided sample code is in .ipynb format, but since I'm used to working in PyCharm, I converted it to .py format. There's a lot of material, so I'll work through the relevant programs a piece at a time.

2. cnn_preprocess.py

This program preprocesses the data to provide training data for the neural network.


(1) Exploring the data

import helper
import numpy as np
# The pickle module implements binary serialization and deserialization of Python object structures.
# Here it is used to read the fashion-mnist.p file.
import pickle

filename = "fashion-mnist.p"
sample_id = 17
helper.display_stats(filename, sample_id)

Here is the corresponding code from helper.py (its imports appear in the full listing in section 5):

def _load_label_names():
    """Load the label names"""
    return ['t-shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boot']


def load_dataset(dataset_folder_path):
    """Load the training and test datasets"""
    with open(dataset_folder_path, mode='rb') as file:
        pickle_data = pickle.load(file)

    # Arrange the training and test data
    train_features = pickle_data[0].reshape((len(pickle_data[0]), 1, 28, 28)).transpose(0, 2, 3, 1)
    train_labels = pickle_data[1]
    test_features = pickle_data[2].reshape((len(pickle_data[2]), 1, 28, 28)).transpose(0, 2, 3, 1)
    test_labels = pickle_data[3]

    return train_features, train_labels, test_features, test_labels


def display_stats(dataset_folder_path, sample_id):
    """Display statistics about the dataset"""
    train_features, train_labels, test_features, test_labels = load_dataset(dataset_folder_path)

    if not (0 <= sample_id < len(train_features)):
        print('{} samples in training set.  {} is out of range.'.format(len(train_features), sample_id))
        return None

    print('Samples: {}'.format(len(train_features)))
    # Count the occurrences of each label
    print('Label Counts: {}'.format(dict(zip(*np.unique(train_labels, return_counts=True)))))
    print('First 20 Labels: {}'.format(train_labels[:20]))

    sample_image = train_features[sample_id]
    sample_label = train_labels[sample_id]
    label_names = _load_label_names()

    print('\nExample of Image {}:'.format(sample_id))
    print('Image - Min Value: {} Max Value: {}'.format(sample_image.min(), sample_image.max()))
    print('Image - Shape: {}'.format(sample_image.shape))
    print('Label - Label Id: {} Name: {}'.format(sample_label, label_names[sample_label]))

    plt.axis('off')
    plt.imshow(sample_image.squeeze(), cmap="gray")
    plt.show()

The output is as follows:

Samples: 60000
Label Counts: {0: 6000, 1: 6000, 2: 6000, 3: 6000, 4: 6000, 5: 6000, 6: 6000, 7: 6000, 8: 6000, 9: 6000}
First 20 Labels: [9 0 0 3 0 2 7 2 5 5 0 9 5 5 7 9 1 0 6 4]

Example of Image 17:
Image - Min Value: 0 Max Value: 254
Image - Shape: (28, 28, 1)
Label - Label Id: 0 Name: t-shirt


(2) Data preprocessing

Implement the normalize function to take image data x and return it as a normalized NumPy array. The values should be in the range 0 to 1, inclusive, and the returned object should have the same shape as x.

import problem_unittests as tests


def normalize(x):
    """Normalize a list of sample image data to the range 0 to 1"""
    output = np.array([image / 255 for image in x])
    return output

(3) One-hot encoding

Implement the one_hot_encode function. The input x is a list of labels. The function should return the list of labels as a one-hot encoded NumPy array. The possible label values are 0 to 9, and one_hot_encode should return the same encoding for each value on every call.

def one_hot_encode(x):
    """One-hot encode a list of sample labels. Return a one-hot encoded vector for each label."""
    one_hot = np.eye(10)[x]
    return one_hot
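
Indexing the 10x10 identity matrix with the label array is what makes the encoding deterministic across calls. For example (hypothetical labels, just to show the structure):

labels = [0, 3, 9]
print(one_hot_encode(labels))
# [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]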

(4) Preprocessing and saving

Preprocess all of the data and save it, holding out 10% of the training data for validation. This produces three files: preprocess_train.p, preprocess_validation.p, and preprocess_test.p.

def _preprocess_and_save(normalize, one_hot_encode, features, labels, filename):
    """Preprocess the data and save it to a file"""
    features = normalize(features)
    labels = one_hot_encode(labels)
    pickle.dump((features, labels), open(filename, 'wb'))


def preprocess_and_save_data(dataset_folder_path, normalize, one_hot_encode):
    """Preprocess the training and validation data"""
    valid_features = []
    valid_labels = []

    train_features, train_labels, test_features, test_labels = load_dataset(dataset_folder_path)
    validation_count = int(len(train_features) * 0.1)

    # Preprocess and save the new training data
    _preprocess_and_save(normalize,
                         one_hot_encode,
                         train_features[:-validation_count],
                         train_labels[:-validation_count],
                         'preprocess_train' + '.p')

    # Use a portion of the training data for validation
    valid_features.extend(train_features[-validation_count:])
    valid_labels.extend(train_labels[-validation_count:])

    # Preprocess and save all of the validation data
    _preprocess_and_save(normalize,
                         one_hot_encode,
                         np.array(valid_features),
                         np.array(valid_labels),
                         'preprocess_validation.p')

    # Preprocess and save all of the test data
    _preprocess_and_save(normalize,
                         one_hot_encode,
                         np.array(test_features),
                         np.array(test_labels),
                         'preprocess_test.p')
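
With the standard 60,000-example Fashion-MNIST training set, the 10% split works out as follows (a minimal sanity-check sketch, assuming the full training set is loaded):

train_len = 60000                        # size of the Fashion-MNIST training set
validation_count = int(train_len * 0.1)  # 6000 examples held out for validation
print(train_len - validation_count)      # 54000 examples remain for training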

The complete cnn_preprocess.py program:

import helper
import numpy as np
import problem_unittests as tests
import pickle

filename = "fashion-mnist.p"


def normalize(x):
    """Normalize a list of sample image data to the range 0 to 1"""
    output = np.array([image / 255 for image in x])
    return output


def one_hot_encode(x):
    """One-hot encode a list of sample labels. Return a one-hot encoded vector for each label."""
    one_hot = np.eye(10)[x]
    return one_hot


# Preprocess all of the data
helper.preprocess_and_save_data(filename, normalize, one_hot_encode)

3. cnn_train.py

import helper
import numpy as np
import problem_unittests as tests
import pickle
import tensorflow as tf


def neural_net_image_input(image_shape):
    """Return a tensor for a batch of image input"""
    input_image = tf.placeholder(tf.float32, shape=(None, *image_shape), name="x")
    return input_image


def neural_net_label_input(n_classes):
    """Return a tensor for a batch of label input"""
    input_label = tf.placeholder(tf.int32, shape=(None, n_classes), name="y")
    return input_label


def neural_net_keep_prob_input():
    """Return a tensor for the keep probability"""
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    return keep_prob


def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """Apply convolution then max pooling to x_tensor"""
    filter_size = [conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[3], conv_num_outputs]
    weight = tf.Variable(tf.truncated_normal(filter_size, stddev=0.01))
    conv = tf.nn.conv2d(x_tensor, weight, [1, conv_strides[0], conv_strides[1], 1], padding="SAME")
    bias = tf.Variable(tf.zeros([conv_num_outputs]))
    conv = tf.nn.bias_add(conv, bias)
    conv = tf.nn.relu(conv)
    conv = tf.nn.max_pool(conv, [1, pool_ksize[0], pool_ksize[1], 1], [1, pool_strides[0], pool_strides[1], 1],
                          padding="SAME")
    return conv


def flatten(x_tensor):
    """Flatten x_tensor to (batch size, flattened image size) so the fully connected layers can follow"""
    conv_flatten = tf.contrib.layers.flatten(x_tensor)
    return conv_flatten


def fully_conn(x_tensor, num_outputs):
    """Apply a fully connected layer to x_tensor using weights and bias"""
    fc_layer = tf.layers.dense(x_tensor, num_outputs, activation=tf.nn.relu)
    return fc_layer


def output(x_tensor, num_outputs):
    """Apply an output layer using weights and bias"""
    output_layer = tf.layers.dense(x_tensor, num_outputs, activation=None)
    return output_layer


def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: placeholder tensor that holds the image data
    : keep_prob: placeholder tensor that holds the dropout keep probability
    : return: tensor that represents the logits
    """
    conv_layer1 = conv2d_maxpool(x, conv_num_outputs=64, conv_ksize=(5, 5), conv_strides=(2, 2),
                                 pool_ksize=(2, 2), pool_strides=(2, 2))
    conv_layer1 = tf.nn.dropout(conv_layer1, keep_prob)
    conv_layer2 = conv2d_maxpool(conv_layer1, conv_num_outputs=128, conv_ksize=(3, 3), conv_strides=(2, 2),
                                 pool_ksize=(2, 2), pool_strides=(2, 2))
    conv_layer2 = tf.nn.dropout(conv_layer2, keep_prob)

    flat_layer = flatten(conv_layer2)

    fc_layer1 = fully_conn(flat_layer, 256)
    fc_layer2 = fully_conn(fc_layer1, 128)
    fc_layer3 = fully_conn(fc_layer2, 64)

    output_layer = output(fc_layer3, 10)
    return output_layer


def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Run a session optimization step on a batch of images and labels
    : session: current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: batch of NumPy image data
    : label_batch: batch of NumPy label data
    """
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})


def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: current TensorFlow session
    : feature_batch: batch of NumPy image data
    : label_batch: batch of NumPy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print("The loss is: {0}, and the Validation Accuracy is: {1}".format(loss, validation_accuracy))


valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

# Tunable parameters
epochs = 5
batch_size = 64
keep_probability = 0.5

# GPU parameters
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)

# Remove previous weights, bias, inputs, etc.
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((28, 28, 1))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name the logits tensor so it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

# Model save path
save_model_path = './image_classification'

with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
    # Initialize the variables
    sess.run(tf.global_variables_initializer())

    # Training cycle
    for epoch in range(epochs):
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}:  '.format(epoch + 1), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # Save the model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)

Its output is:

Epoch  1:  The loss is: 0.5249987244606018, and the Validation Accuracy is: 0.8186666369438171
Epoch  2:  The loss is: 0.4325884282588959, and the Validation Accuracy is: 0.8506666421890259
Epoch  3:  The loss is: 0.38672173023223877, and the Validation Accuracy is: 0.8629999756813049
Epoch  4:  The loss is: 0.34116020798683167, and the Validation Accuracy is: 0.8736666440963745
Epoch  5:  The loss is: 0.311458021402359, and the Validation Accuracy is: 0.8741666674613953
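
To see why the flattened layer is small, it helps to trace the spatial dimensions through conv_net (a rough sketch; with SAME padding, each stride-2 convolution or pooling step maps a size to ceil(size / stride)):

import math

# Rough trace of conv_net's spatial dimensions (SAME padding: size -> ceil(size / stride))
size = 28
for name, stride in [('conv1', 2), ('pool1', 2), ('conv2', 2), ('pool2', 2)]:
    size = math.ceil(size / stride)
    print('{}: {}x{}'.format(name, size, size))
# conv1: 14x14, pool1: 7x7, conv2: 4x4, pool2: 2x2
# flatten therefore sees 2 * 2 * 128 = 512 values per image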

After that, we can load the trained model and test how well it performs.

4. cnn_test.py

We use the cnn_test.py script to determine the final accuracy and plot the predictions.

import tensorflow as tf
import pickle
import helper
import random
import matplotlib.pyplot as plt

# Tunable parameters
epochs = 5
batch_size = 64
keep_probability = 0.5
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3


def test_model():
    """Test the saved model against the test dataset"""
    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()
    config = tf.ConfigProto(device_count={'GPU': 0})

    with tf.Session(config=config, graph=loaded_graph) as sess:
        # Load the model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get the tensors from the loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')

        # Compute the accuracy in batches to limit memory use
        test_batch_acc_total = 0
        test_batch_count = 0

        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels,
                                                                                 batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total / test_batch_count))

        # Print random samples
        random_test_features, random_test_labels = tuple(
            zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()
plt.show()

The output is as follows:

Testing Accuracy: 0.8765923566878981
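
The tf.nn.top_k call in test_model returns a pair of arrays, values and indices, which helper.display_image_predictions unpacks later. Here is a minimal NumPy illustration of that structure (hypothetical probabilities, not actual model output):

import numpy as np

# Hypothetical softmax output for one image (10 class probabilities)
probs = np.array([0.04, 0.02, 0.01, 0.60, 0.20, 0.02, 0.05, 0.03, 0.02, 0.01])

# Equivalent of tf.nn.top_k(probs, 3): the 3 largest probabilities and their class ids
indices = np.argsort(probs)[::-1][:3]  # [3 4 6] -> dress, coat, shirt
values = probs[indices]                # [0.6  0.2  0.05]
print(values, indices)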

5. helper.py

Because I work in PyCharm, I made a few small changes to helper.py.

import pickle
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelBinarizer


def _load_label_names():
    """Load the label names"""
    return ['t-shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boot']


def load_dataset(dataset_folder_path):
    """Load the training and test datasets"""
    with open(dataset_folder_path, mode='rb') as file:
        pickle_data = pickle.load(file)

    train_features = pickle_data[0].reshape((len(pickle_data[0]), 1, 28, 28)).transpose(0, 2, 3, 1)
    print(train_features.shape)
    train_labels = pickle_data[1]
    test_features = pickle_data[2].reshape((len(pickle_data[2]), 1, 28, 28)).transpose(0, 2, 3, 1)
    test_labels = pickle_data[3]

    return train_features, train_labels, test_features, test_labels


def display_stats(dataset_folder_path, sample_id):
    """Display statistics about the dataset"""
    train_features, train_labels, test_features, test_labels = load_dataset(dataset_folder_path)

    if not (0 <= sample_id < len(train_features)):
        print('{} samples in training set.  {} is out of range.'.format(len(train_features), sample_id))
        return None

    print('Samples: {}'.format(len(train_features)))
    print('Label Counts: {}'.format(dict(zip(*np.unique(train_labels, return_counts=True)))))
    print('First 20 Labels: {}'.format(train_labels[:20]))

    sample_image = train_features[sample_id]
    sample_label = train_labels[sample_id]
    label_names = _load_label_names()

    print('\nExample of Image {}:'.format(sample_id))
    print('Image - Min Value: {} Max Value: {}'.format(sample_image.min(), sample_image.max()))
    print('Image - Shape: {}'.format(sample_image.shape))
    print('Label - Label Id: {} Name: {}'.format(sample_label, label_names[sample_label]))

    plt.axis('off')
    plt.imshow(sample_image.squeeze(), cmap="gray")
    plt.show()


def _preprocess_and_save(normalize, one_hot_encode, features, labels, filename):
    """Preprocess the data and save it to a file"""
    features = normalize(features)
    labels = one_hot_encode(labels)
    pickle.dump((features, labels), open(filename, 'wb'))


def preprocess_and_save_data(dataset_folder_path, normalize, one_hot_encode):
    """Preprocess the training and validation data"""
    valid_features = []
    valid_labels = []

    train_features, train_labels, test_features, test_labels = load_dataset(dataset_folder_path)
    validation_count = int(len(train_features) * 0.1)

    # Preprocess and save the new training data
    _preprocess_and_save(normalize,
                         one_hot_encode,
                         train_features[:-validation_count],
                         train_labels[:-validation_count],
                         'preprocess_train' + '.p')

    # Use a portion of the training data for validation
    valid_features.extend(train_features[-validation_count:])
    valid_labels.extend(train_labels[-validation_count:])

    # Preprocess and save all of the validation data
    _preprocess_and_save(normalize,
                         one_hot_encode,
                         np.array(valid_features),
                         np.array(valid_labels),
                         'preprocess_validation.p')

    # Preprocess and save all of the test data
    _preprocess_and_save(normalize,
                         one_hot_encode,
                         np.array(test_features),
                         np.array(test_labels),
                         'preprocess_test.p')


def batch_features_labels(features, labels, batch_size):
    """Split features and labels into batches"""
    assert features is not None, 'features is None!'
    assert labels is not None, 'labels is None!'
    for start in range(0, len(features), batch_size):
        end = min(start + batch_size, len(features))
        yield features[start:end], labels[start:end]


def load_preprocess_training_batch(batch_size):
    """Load the preprocessed training data and return it in batches of <batch_size> or less"""
    filename = 'preprocess_train' + '.p'
    features, labels = pickle.load(open(filename, mode='rb'))

    # Return the training data in batches of size <batch_size> or less
    return batch_features_labels(features, labels, batch_size)


def display_image_predictions(features, labels, predictions):
    n_classes = 10
    label_names = _load_label_names()
    label_binarizer = LabelBinarizer()
    label_binarizer.fit(range(n_classes))
    label_ids = label_binarizer.inverse_transform(np.array(labels))

    fig, axies = plt.subplots(nrows=4, ncols=2)
    fig.tight_layout()
    fig.suptitle('Softmax Predictions', fontsize=20, y=1.1)

    n_predictions = 3
    margin = 0.05
    ind = np.arange(n_predictions)
    width = (1. - 2. * margin) / n_predictions

    for image_i, (feature, label_id, pred_indicies, pred_values) in enumerate(
            zip(features, label_ids, predictions.indices, predictions.values)):
        pred_names = [label_names[pred_i] for pred_i in pred_indicies]
        correct_name = label_names[label_id]

        axies[image_i][0].imshow(feature.squeeze(), cmap='gray')
        axies[image_i][0].set_title(correct_name)
        axies[image_i][0].set_axis_off()

        axies[image_i][1].barh(ind + margin, pred_values[::-1], width)
        axies[image_i][1].set_yticks(ind + margin)
        axies[image_i][1].set_yticklabels(pred_names[::-1])
        axies[image_i][1].set_xticks([0, 0.5, 1.0])
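
As a quick illustration of the batching generator (a hypothetical snippet with dummy data, not part of the project files):

import numpy as np

# Dummy data: 10 "images" and 10 labels, batched 4 at a time
features = np.arange(10)
labels = np.arange(10)

for batch_x, batch_y in batch_features_labels(features, labels, batch_size=4):
    print(len(batch_x))  # prints 4, 4, 2 -- the last batch is smaller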

These programs use or create the following files: fashion-mnist.p, the three preprocess_*.p files, and the saved image_classification model checkpoints. problem_unittests.py is a unit-test program for some of the modules; it is not used by the main programs and served only for testing while the code was being written.
Next up: fully convolutional networks.
