This post contains my personal notes from Day 2 of the [Tang Yudi] computer vision training camp, the hands-on PyTorch course.
The code comes from qiuzitao's "Deep Learning with PyTorch in Practice (10)" and follows the same flow as the video lessons; see that post for the course details.
The dataset and the matching json file used below:
Link: https://pan.baidu.com/s/14MO6dP_Zax-DlUFfs-NLww  Access code: j11h

Contents

  • Data visualization
  • train
  • test

Data visualization

import matplotlib.pyplot as plt
import numpy as np
import torch
from torchvision import transforms, models, datasets
import os
import json

# Dataset location
data_dir = './flower_data/'

# Data augmentation
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomRotation(45),           # random rotation, angle picked between -45 and 45 degrees
        transforms.CenterCrop(224),              # crop from the center, keeping 224x224 (cropping yields more varied data)
        transforms.RandomHorizontalFlip(p=0.5),  # random horizontal flip; p=0.5 means 50% flipped, 50% not
        transforms.RandomVerticalFlip(p=0.5),    # random vertical flip
        transforms.ColorJitter(brightness=0.2, contrast=0.1, saturation=0.1, hue=0.1),  # brightness, contrast, saturation, hue
        transforms.RandomGrayscale(p=0.025),     # convert to grayscale with small probability; the 3 channels become R=G=B
        transforms.ToTensor(),                   # convert to tensor
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # mean and std (precomputed ImageNet values)
    ]),
    'valid': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # normalization must match the training set
    ]),
}

batch_size = 16

# Build the datasets (load the images and apply the augmentations above)
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'valid']}
# Wrap the datasets into batched loaders
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True) for x in ['train', 'valid']}
class_names = image_datasets['train'].classes


def im_convert(tensor):
    """Convert a normalized tensor back into a displayable image."""
    image = tensor.to("cpu").clone().detach()
    image = image.numpy().squeeze()
    image = image.transpose(1, 2, 0)
    image = image * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))  # undo the normalization
    image = image.clip(0, 1)
    return image


# Create the canvas
fig = plt.figure(figsize=(20, 12))
# batch_size is 16, so use a 4x4 grid
columns = 4
rows = 4

dataiter = iter(dataloaders['valid'])
inputs, classes = next(dataiter)   # .next() is deprecated; use the built-in next()
print('classes', classes)

# Mapping from folder name to flower name (optional; checking against the folder name
# directly is also an easy way to verify predictions, extend this for a real project)
with open('cat_to_name.json', 'r') as f:
    cat_to_name = json.load(f)
# print('cat_to_name', cat_to_name)

for idx in range(columns * rows):
    ax = fig.add_subplot(rows, columns, idx + 1, xticks=[], yticks=[])
    # ax.set_title(cat_to_name[str(int(class_names[classes[idx]]))])
    ax.set_title("{} ({})".format(str(int(class_names[classes[idx]])),
                                  cat_to_name[str(int(class_names[classes[idx]]))]))
    # ax.set_title(str(int(class_names[classes[idx]])))
    plt.imshow(im_convert(inputs[idx]))
plt.show()

Output: the first part of each title is the folder name and the second is the flower name looked up in the json file, so you can check that every image matches its folder. While working through this I found that, at test time, the displayed class names did not match their folders; this is an index-mapping issue, and the later code adjusts the indexing slightly.
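The mapping is easy to verify directly: ImageFolder sorts the folder names as strings, a label index maps back to a folder name through classes / class_to_idx, and the folder name is then the key into cat_to_name.json. A minimal sketch, assuming the flower_data/ layout above with folders named '1' to '102':

# Minimal check of the label-to-folder mapping, assuming the flower_data/
# layout and cat_to_name.json used above (folders named '1' ... '102').
from torchvision import datasets
import json

train_set = datasets.ImageFolder('./flower_data/train')
print(train_set.class_to_idx)   # e.g. {'1': 0, '10': 1, '100': 2, ...} -- folder names are sorted as strings

with open('cat_to_name.json', 'r') as f:
    cat_to_name = json.load(f)

lbl = 0                                   # a label value as returned by the DataLoader
folder = train_set.classes[lbl]           # folder name, e.g. '1'
print(folder, '->', cat_to_name[folder])  # flower name from the json file

This string ordering ('1', '10', '100', ...) is exactly why the folder name and the raw label index look mismatched if the two are confused with each other.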

train

The code below drops the visualization and goes straight to training.

import os
import torch
from torch import nn
import torch.optim as optim
from torchvision import transforms, models, datasets
# https://pytorch.org/docs/stable/torchvision/index.html  # official torchvision docs, with usage examples
import time
import copy

data_dir = './flower_data/'

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomRotation(45),           # random rotation, -45 to 45 degrees
        transforms.CenterCrop(224),              # center crop to 224x224
        transforms.RandomHorizontalFlip(p=0.5),  # random horizontal flip with probability 0.5
        transforms.RandomVerticalFlip(p=0.5),    # random vertical flip
        transforms.ColorJitter(brightness=0.2, contrast=0.1, saturation=0.1, hue=0.1),  # brightness, contrast, saturation, hue
        transforms.RandomGrayscale(p=0.025),     # convert to grayscale with small probability
        transforms.ToTensor(),                   # convert to tensor
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # precomputed ImageNet mean and std
    ]),
    'valid': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # must match the training normalization
    ]),
}

# Batch size
batch_size = 64

# Build the datasets (load the images and apply the augmentations above)
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'valid']}
# Wrap the datasets into batched loaders
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True) for x in ['train', 'valid']}
class_names = image_datasets['train'].classes

# print('image_datasets', image_datasets)
# print('dataloaders', dataloaders)
# print('dataset_sizes', dataset_sizes)
# print('class_names', class_names)

model_name = 'resnet'  # several options: ['resnet', 'alexnet', 'vgg', 'squeezenet', 'densenet', 'inception']
# Whether to reuse the pretrained features
feature_extract = True

# Whether to train on the GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
    print('CUDA is not available.  Training on CPU ...')
else:
    print('CUDA is available!  Training on GPU ...')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


# Decide which layers should be trained.
# With a small dataset, freeze the earlier layers and train only the final fc layer.
def set_parameter_requires_grad(model, feature_extracting):
    if feature_extracting:
        for param in model.parameters():
            param.requires_grad = False  # set to True if these layers should also be trained


def initialize_model(model_name, num_classes, feature_extract, use_pretrained=True):
    # Pick the model; different architectures are set up slightly differently
    model_ft = None
    input_size = 0

    if model_name == "resnet":
        """ Resnet152 """
        model_ft = models.resnet152(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.fc.in_features
        model_ft.fc = nn.Sequential(nn.Linear(num_ftrs, 102),
                                    nn.LogSoftmax(dim=1))
        input_size = 224

    elif model_name == "alexnet":
        """ Alexnet """
        model_ft = models.alexnet(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier[6].in_features
        model_ft.classifier[6] = nn.Linear(num_ftrs, num_classes)
        input_size = 224

    elif model_name == "vgg":
        """ VGG16 """
        model_ft = models.vgg16(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier[6].in_features
        model_ft.classifier[6] = nn.Linear(num_ftrs, num_classes)
        input_size = 224

    elif model_name == "squeezenet":
        """ Squeezenet """
        model_ft = models.squeezenet1_0(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        model_ft.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=(1, 1), stride=(1, 1))
        model_ft.num_classes = num_classes
        input_size = 224

    elif model_name == "densenet":
        """ Densenet """
        model_ft = models.densenet121(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier.in_features
        model_ft.classifier = nn.Linear(num_ftrs, num_classes)
        input_size = 224

    elif model_name == "inception":
        """ Inception v3
        Be careful, expects (299,299) sized images and has auxiliary output
        """
        model_ft = models.inception_v3(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        # Handle the auxiliary net
        num_ftrs = model_ft.AuxLogits.fc.in_features
        model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes)
        # Handle the primary net
        num_ftrs = model_ft.fc.in_features
        model_ft.fc = nn.Linear(num_ftrs, num_classes)
        input_size = 299

    else:
        print("Invalid model name, exiting...")
        exit()

    return model_ft, input_size


# 102 classes for this task
model_ft, input_size = initialize_model(model_name, 102, feature_extract, use_pretrained=True)

# Move the model to the GPU
model_ft = model_ft.to(device)

# File name for the saved model (prepend a directory if it should go somewhere else)
filename = 'checkpoint.pth'

# Decide which parameters to train
params_to_update = model_ft.parameters()
print("Params to learn:")
if feature_extract:
    params_to_update = []
    for name, param in model_ft.named_parameters():
        if param.requires_grad == True:
            params_to_update.append(param)
            print("\t", name)
else:
    for name, param in model_ft.named_parameters():
        if param.requires_grad == True:
            print("\t", name)

# Print the full model architecture
print('model_ft', model_ft)

# Optimizer
optimizer_ft = optim.Adam(params_to_update, lr=1e-2)
scheduler = optim.lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)  # decay the learning rate to 1/10 every 7 epochs
# The last layer is already LogSoftmax(), so nn.CrossEntropyLoss() cannot be used here:
# nn.CrossEntropyLoss() is equivalent to LogSoftmax() plus nn.NLLLoss()
criterion = nn.NLLLoss()


def train_model(model, dataloaders, criterion, optimizer, num_epochs=25, is_inception=False, filename=filename):
    since = time.time()
    best_acc = 0
    """
    checkpoint = torch.load(filename)
    best_acc = checkpoint['best_acc']
    model.load_state_dict(checkpoint['state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    model.class_to_idx = checkpoint['mapping']
    """
    model.to(device)  # train on the GPU

    val_acc_history = []
    train_acc_history = []
    train_losses = []
    valid_losses = []
    LRs = [optimizer.param_groups[0]['lr']]

    best_model_wts = copy.deepcopy(model.state_dict())

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Training and validation phases
        for phase in ['train', 'valid']:
            if phase == 'train':
                model.train()  # training mode
            else:
                model.eval()   # evaluation mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over the whole split
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)  # move the inputs to the GPU
                labels = labels.to(device)  # move the labels to the GPU

                # Zero the gradients
                optimizer.zero_grad()
                # Compute and track gradients only during training
                with torch.set_grad_enabled(phase == 'train'):
                    if is_inception and phase == 'train':
                        outputs, aux_outputs = model(inputs)
                        loss1 = criterion(outputs, labels)
                        loss2 = criterion(aux_outputs, labels)
                        loss = loss1 + 0.4 * loss2
                    else:  # resnet takes this branch
                        outputs = model(inputs)
                        loss = criterion(outputs, labels)

                    _, preds = torch.max(outputs, 1)

                    # Update the weights only in the training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # Accumulate loss and accuracy
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)

            time_elapsed = time.time() - since
            print('Time elapsed {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

            # Keep the best model seen so far
            if phase == 'valid' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
                state = {
                    'state_dict': model.state_dict(),
                    'best_acc': best_acc,
                    'optimizer': optimizer.state_dict(),
                }
                torch.save(state, filename)
            if phase == 'valid':
                val_acc_history.append(epoch_acc)
                valid_losses.append(epoch_loss)
                scheduler.step()  # StepLR steps once per epoch; it takes no loss argument
            if phase == 'train':
                train_acc_history.append(epoch_acc)
                train_losses.append(epoch_loss)

        print('Optimizer learning rate : {:.7f}'.format(optimizer.param_groups[0]['lr']))
        LRs.append(optimizer.param_groups[0]['lr'])
        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:.4f}'.format(best_acc))

    # Load the best weights as the final model
    model.load_state_dict(best_model_wts)
    return model, val_acc_history, train_acc_history, valid_losses, train_losses, LRs


model_ft, val_acc_history, train_acc_history, valid_losses, train_losses, LRs = train_model(
    model_ft, dataloaders, criterion, optimizer_ft,
    num_epochs=20, is_inception=(model_name == "inception"))
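The comment above the criterion is easy to confirm numerically: CrossEntropyLoss applied to raw logits gives the same value as NLLLoss applied to LogSoftmax outputs, which is why a head that already ends in LogSoftmax has to be paired with NLLLoss. A small self-contained check (the shapes here are only illustrative):

# Verify that CrossEntropyLoss(logits) == NLLLoss(LogSoftmax(logits)).
import torch
from torch import nn

logits = torch.randn(4, 102)              # fake batch: 4 samples, 102 classes
targets = torch.randint(0, 102, (4,))

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
print(torch.allclose(ce, nll))            # True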

After training, the model checkpoint file is written:
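To confirm what ended up in checkpoint.pth, the file can be loaded and inspected directly. A minimal sketch, assuming the checkpoint was written by the train_model function above (keys 'state_dict', 'best_acc' and 'optimizer') with the resnet152 setup:

# Inspect the checkpoint saved by train_model above (assumes checkpoint.pth
# exists and was produced by the training code in this post).
import torch

checkpoint = torch.load('checkpoint.pth', map_location='cpu')
print(checkpoint.keys())                   # dict_keys(['state_dict', 'best_acc', 'optimizer'])
print('best_acc:', checkpoint['best_acc'])
print('fc weight:', checkpoint['state_dict']['fc.0.weight'].shape)  # torch.Size([102, 2048]) for the resnet152 head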

test

Test with the model trained above:

import os
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch import nn
import torch.optim as optim
import torchvision
# pip install torchvision    # install torchvision with this command if it is not already installed
from torchvision import transforms, models, datasets
# https://pytorch.org/docs/stable/torchvision/index.html  # official torchvision docs, with usage examples
import imageio
import time
import warnings
import random
import sys
import copy
import json
from PIL import Image

data_dir = './flower_data/'

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomRotation(45),           # random rotation, -45 to 45 degrees
        transforms.CenterCrop(224),              # center crop to 224x224
        transforms.RandomHorizontalFlip(p=0.5),  # random horizontal flip with probability 0.5
        transforms.RandomVerticalFlip(p=0.5),    # random vertical flip
        transforms.ColorJitter(brightness=0.2, contrast=0.1, saturation=0.1, hue=0.1),  # brightness, contrast, saturation, hue
        transforms.RandomGrayscale(p=0.025),     # convert to grayscale with small probability
        transforms.ToTensor(),                   # convert to tensor
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # precomputed ImageNet mean and std
    ]),
    'valid': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # must match the training normalization
    ]),
}

batch_size = 16

image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'valid']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True) for x in ['train', 'valid']}
class_names = image_datasets['train'].classes

model_name = 'resnet'  # several options: ['resnet', 'alexnet', 'vgg', 'squeezenet', 'densenet', 'inception']
# Whether to reuse the pretrained features
feature_extract = True

with open('cat_to_name.json', 'r') as f:
    cat_to_name = json.load(f)
# print('cat_to_name', cat_to_name)

# Whether to use the GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
    print('CUDA is not available.  Training on CPU ...')
else:
    print('CUDA is available!  Training on GPU ...')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


def set_parameter_requires_grad(model, feature_extracting):
    if feature_extracting:
        for param in model.parameters():
            param.requires_grad = False


def initialize_model(model_name, num_classes, feature_extract, use_pretrained=True):
    # Pick the model; different architectures are set up slightly differently
    model_ft = None
    input_size = 0

    if model_name == "resnet":
        """ Resnet152 """
        model_ft = models.resnet152(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.fc.in_features
        model_ft.fc = nn.Sequential(nn.Linear(num_ftrs, 102),
                                    nn.LogSoftmax(dim=1))
        input_size = 224

    elif model_name == "alexnet":
        """ Alexnet """
        model_ft = models.alexnet(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier[6].in_features
        model_ft.classifier[6] = nn.Linear(num_ftrs, num_classes)
        input_size = 224

    elif model_name == "vgg":
        """ VGG16 """
        model_ft = models.vgg16(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier[6].in_features
        model_ft.classifier[6] = nn.Linear(num_ftrs, num_classes)
        input_size = 224

    elif model_name == "squeezenet":
        """ Squeezenet """
        model_ft = models.squeezenet1_0(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        model_ft.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=(1, 1), stride=(1, 1))
        model_ft.num_classes = num_classes
        input_size = 224

    elif model_name == "densenet":
        """ Densenet """
        model_ft = models.densenet121(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier.in_features
        model_ft.classifier = nn.Linear(num_ftrs, num_classes)
        input_size = 224

    elif model_name == "inception":
        """ Inception v3
        Be careful, expects (299,299) sized images and has auxiliary output
        """
        model_ft = models.inception_v3(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        # Handle the auxiliary net
        num_ftrs = model_ft.AuxLogits.fc.in_features
        model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes)
        # Handle the primary net
        num_ftrs = model_ft.fc.in_features
        model_ft.fc = nn.Linear(num_ftrs, num_classes)
        input_size = 299

    else:
        print("Invalid model name, exiting...")
        exit()

    return model_ft, input_size


model_ft, input_size = initialize_model(model_name, 102, feature_extract, use_pretrained=True)

# GPU mode
model_ft = model_ft.to(device)

# Checkpoint file to load
filename = 'checkpoint.pth'

# Load the trained weights
checkpoint = torch.load(filename)
best_acc = checkpoint['best_acc']
model_ft.load_state_dict(checkpoint['state_dict'])

# Get one batch of test data
dataiter = iter(dataloaders['valid'])
images, labels = next(dataiter)   # .next() is deprecated; use the built-in next()
print('labels', labels)

model_ft.eval()

if train_on_gpu:
    output = model_ft(images.cuda())
else:
    output = model_ft(images)
# print('output.shape', output.shape)

# Predicted class with the highest score
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
print('preds', preds)

fig = plt.figure(figsize=(20, 20))
columns = 4
rows = 4


def im_convert(tensor):
    """Convert a normalized tensor back into a displayable image."""
    image = tensor.to("cpu").clone().detach()
    image = image.numpy().squeeze()
    image = image.transpose(1, 2, 0)
    image = image * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))  # undo the normalization
    image = image.clip(0, 1)
    return image


for idx in range(columns * rows):
    labels_true = str(class_names[labels[idx]])
    # print('labels_true', labels_true)
    labels_preds = str(class_names[preds[idx]])
    # print('labels_preds', labels_preds)
    ax = fig.add_subplot(rows, columns, idx + 1, xticks=[], yticks=[])
    ax.set_title("{}-{} ({}-{})".format(labels_preds, cat_to_name[labels_preds],
                                        labels_true, cat_to_name[labels_true]),
                 color=("green" if cat_to_name[labels_preds] == cat_to_name[labels_true] else "red"))
    plt.imshow(im_convert(images[idx]))
plt.savefig('1.png')
plt.show()

The test results are also saved to 1.png:

Applying it to other classification tasks:
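When reusing this pipeline on a new image (or a different classification task), a single image only needs the same validation preprocessing before the forward pass. A minimal sketch, where 'test.jpg' is a placeholder path and model_ft, device, data_transforms, class_names and cat_to_name come from the test script above:

# Single-image inference sketch; 'test.jpg' is a hypothetical file name.
from PIL import Image
import torch

img = Image.open('test.jpg').convert('RGB')
x = data_transforms['valid'](img).unsqueeze(0).to(device)  # add the batch dimension

model_ft.eval()
with torch.no_grad():
    out = model_ft(x)                   # LogSoftmax outputs, shape (1, 102)
    pred = out.argmax(dim=1).item()     # predicted class index

folder = class_names[pred]              # folder name, e.g. '12'
print(folder, cat_to_name[folder])      # flower name from cat_to_name.json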
