TorchVision Object Detection Finetuning Tutorial

Preface

In the previous four sections we covered building neural network models with PyTorch, optimizing their parameters, and model ensembling. In this chapter we will finetune a pre-trained Mask R-CNN model on the Penn-Fudan Database for Pedestrian Detection and Segmentation. It contains 170 images with 345 instances of pedestrians, and we will use it to illustrate how to use the new features in torchvision to train an instance segmentation model on a custom dataset.

Defining the Dataset

The reference scripts for training object detection, instance segmentation and person keypoint detection make it easy to add new custom datasets. The dataset should inherit from the standard torch.utils.data.Dataset class and implement __len__ and __getitem__. The main requirement is that __getitem__ should return:

  • image: a PIL Image of size (H, W)

  • target: a dict containing the following fields

    • boxes (FloatTensor[N, 4]): the coordinates of the N bounding boxes in [x0, y0, x1, y1] format, ranging from 0 to W and 0 to H
    • labels (Int64Tensor[N]): the label for each bounding box. 0 always represents the background class.
    • image_id (Int64Tensor[1]): an image identifier.
    • area (Tensor[N]): the area of the bounding box. This is used during evaluation with the COCO metric to separate the metric scores between small, medium and large boxes.
    • iscrowd (UInt8Tensor[N]): instances with iscrowd=True will be ignored during evaluation.
    • masks (UInt8Tensor[N, H, W]): the segmentation masks for each one of the objects (needed for training Mask R-CNN).
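
For concreteness, here is a minimal sketch of one such target for a hypothetical image with two pedestrians (all tensor values below are made up purely for illustration):

import torch

# hypothetical target for an image containing two pedestrian instances
target = {
    "boxes": torch.tensor([[ 50.,  60., 120., 200.],
                           [200.,  80., 260., 210.]]),      # [x0, y0, x1, y1]
    "labels": torch.tensor([1, 1], dtype=torch.int64),      # 1 = person, 0 = background
    "image_id": torch.tensor([0]),
    "area": torch.tensor([9800., 7800.]),                   # (x1 - x0) * (y1 - y0)
    "iscrowd": torch.zeros((2,), dtype=torch.uint8),
    "masks": torch.zeros((2, 300, 400), dtype=torch.uint8), # dummy all-zero masks, shape [N, H, W]
}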

If your dataset class returns targets in the format above, it will work for both training and evaluation, and evaluation will use the scripts from pycocotools, which can be installed with pip install pycocotools:

!pip install cython
!pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI

Note: for installing git, see https://git-scm.com. The second command above can fail; make sure the git URL is correct. One note on the labels: the model treats class 0 as background, so if one of your images contains both of your (non-background) classes, its labels tensor should look like [1, 2]. Additionally, if you use aspect ratio grouping during training (so that each batch only contains images with similar aspect ratios), it is recommended to also implement a get_height_and_width method that returns the height and the width of an image, as sketched below.
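
A hypothetical sketch of such a method, meant to be added to the PennFudanDataset class defined below (it assumes the self.root and self.imgs attributes that class sets up):

def get_height_and_width(self, idx):
    # PIL opens images lazily, so reading .size does not decode the full image;
    # this keeps aspect-ratio grouping cheap
    img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
    with Image.open(img_path) as img:
        width, height = img.size  # PIL reports (width, height)
    return height, width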

Writing a Custom Dataset for PennFudan

Let's write a dataset class for the PennFudan dataset. First, download and extract the archive; the file structure looks like this:

PennFudanPed/
  PedMasks/
    FudanPed00001_mask.png
    FudanPed00002_mask.png
    FudanPed00003_mask.png
    FudanPed00004_mask.png
    ...
  PNGImages/
    FudanPed00001.png
    FudanPed00002.png
    FudanPed00003.png
    FudanPed00004.png
    ...

Here is one example of an image and its corresponding segmentation mask:

from PIL import Image

img = Image.open('C:/Users/12499/Documents/PennFudanPed/PNGImages/FudanPed00001.png')
img.show()
# note: the mask lives under PedMasks, not PNGImages
mask = Image.open('C:/Users/12499/Documents/PennFudanPed/PedMasks/FudanPed00001_mask.png')
mask.show()


Each image has a corresponding segmentation mask, where each color corresponds to a different instance. Let's write a torch.utils.data.Dataset class for this dataset:

import os
import numpy as np
import torch
from PIL import Image


class PennFudanDataset(torch.utils.data.Dataset):
    def __init__(self, root, transforms):
        self.root = root
        self.transforms = transforms
        # load all image files, sorting them to
        # ensure that they are aligned
        self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
        self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

    def __getitem__(self, idx):
        # load images and masks
        img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
        mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
        img = Image.open(img_path).convert("RGB")
        # note that we haven't converted the mask to RGB,
        # because each color corresponds to a different instance
        # with 0 being background
        mask = Image.open(mask_path)
        # convert the PIL Image into a numpy array
        mask = np.array(mask)
        # instances are encoded as different colors
        obj_ids = np.unique(mask)
        # first id is the background, so remove it
        obj_ids = obj_ids[1:]

        # split the color-encoded mask into a set
        # of binary masks
        masks = mask == obj_ids[:, None, None]

        # get bounding box coordinates for each mask
        num_objs = len(obj_ids)
        boxes = []
        for i in range(num_objs):
            pos = np.where(masks[i])
            xmin = np.min(pos[1])
            xmax = np.max(pos[1])
            ymin = np.min(pos[0])
            ymax = np.max(pos[0])
            boxes.append([xmin, ymin, xmax, ymax])

        # convert everything into a torch.Tensor
        boxes = torch.as_tensor(boxes, dtype=torch.float32)
        # there is only one class
        labels = torch.ones((num_objs,), dtype=torch.int64)
        masks = torch.as_tensor(masks, dtype=torch.uint8)

        image_id = torch.tensor([idx])
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)

        target = {}
        target["boxes"] = boxes
        target["labels"] = labels
        target["masks"] = masks
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd

        if self.transforms is not None:
            img, target = self.transforms(img, target)

        return img, target

    def __len__(self):
        return len(self.imgs)
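
As a quick sanity check (assuming the extracted PennFudanPed folder is in the current working directory):

dataset = PennFudanDataset('PennFudanPed', transforms=None)
img, target = dataset[0]
print(len(dataset))            # 170 images
print(target["boxes"].shape)   # torch.Size([N, 4]) for N instances
print(target["masks"].shape)   # torch.Size([N, H, W])
# img is still a PIL Image here because no transform was applied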

Defining the Model

We will use Mask R-CNN, which is built on top of Faster R-CNN. Faster R-CNN is a deep convolutional network for object detection that appears to the user as a single, end-to-end, unified network. The network can accurately and quickly predict the locations of different objects.


Mask R-CNN adds an extra branch to Faster R-CNN that also produces a high-quality segmentation mask for each instance; it is state of the art in image segmentation.


There are two common ways to adapt one of the available models: finetuning the last layer of a pre-trained model, or replacing the model's backbone with a different one. Both are shown below.

Finetuning a Pre-trained Model

Suppose we start from a model pre-trained on COCO and want to finetune it for our particular classes. This is done as follows:

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# load a model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# replace the classifier with a new one, that has
# num_classes which is user-defined
num_classes = 2  # 1 class (person) + background
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
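
To confirm the head was actually swapped, you can print the new predictor; the 1024 below assumes torchvision's default box head width for fasterrcnn_resnet50_fpn:

print(model.roi_heads.box_predictor)
# FastRCNNPredictor(
#   (cls_score): Linear(in_features=1024, out_features=2, bias=True)
#   (bbox_pred): Linear(in_features=1024, out_features=8, bias=True)
# )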

Modifying the Model to Add a Different Backbone

import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# load a pre-trained model for classification and return
# only the features
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
# FasterRCNN needs to know the number of
# output channels in a backbone. For mobilenet_v2, it's 1280
# so we need to add it here
backbone.out_channels = 1280

# let's make the RPN generate 5 x 3 anchors per spatial
# location, with 5 different sizes and 3 different aspect
# ratios. We have a Tuple[Tuple[int]] because each feature
# map could potentially have different sizes and
# aspect ratios
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))

# let's define what are the feature maps that we will
# use to perform the region of interest cropping, as well as
# the size of the crop after rescaling.
# if your backbone returns a Tensor, featmap_names is expected to
# be ['0']. More generally, the backbone should return an
# OrderedDict[Tensor], and in featmap_names you can choose which
# feature maps to use.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
                                                output_size=7,
                                                sampling_ratio=2)

# put the pieces together inside a FasterRCNN model
model = FasterRCNN(backbone,
                   num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
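
As a quick smoke test of the assembled model (a sketch; the random input sizes are arbitrary):

import torch

model.eval()
# two dummy images of different sizes; FasterRCNN accepts a list of 3xHxW tensors
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
with torch.no_grad():
    predictions = model(x)
print(predictions[0].keys())  # dict_keys(['boxes', 'labels', 'scores'])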

An Instance Segmentation Model for the PennFudan Dataset

Given that our dataset is small, we want to finetune from a pre-trained model, and we also want to compute instance segmentation masks, so we will use Mask R-CNN:

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


def get_model_instance_segmentation(num_classes):
    # load an instance segmentation model pre-trained on COCO
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

    # get number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # now get the number of input features for the mask classifier
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    hidden_layer = 256
    # and replace the mask predictor with a new one
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
                                                       hidden_layer,
                                                       num_classes)

    return model
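
With this helper in place, building the model for our two classes (background and person) and moving it to the training device is, for example:

import torch

model = get_model_instance_segmentation(num_classes=2)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)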

With the model built, we are ready to train and evaluate it on our custom dataset.

Putting Everything Together

In references/detection/ there are a number of helper functions to simplify training and evaluating detection models. Here we will use references/detection/engine.py, references/detection/utils.py and references/detection/transforms.py; just copy everything under references/detection to your working folder. Let's write a small helper for data augmentation/transformation:

import transforms as T


def get_transform(train):
    transforms = []
    transforms.append(T.ToTensor())
    if train:
        transforms.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transforms)
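
Here T is references/detection/transforms.py, whose transforms operate on (image, target) pairs so that a random horizontal flip also flips the boxes and masks. A quick check of the conversion (assuming the dataset class above):

dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
img, target = dataset[0]
# T.ToTensor() converts the PIL Image to a float tensor of shape [3, H, W]
print(type(img), img.shape)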

Testing the forward() Method

Before iterating over the dataset, it is a good idea to see what the model expects on sample data during training and inference:

import torch
import torchvision
import utils

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=2, shuffle=True, num_workers=4,
    collate_fn=utils.collate_fn)

# For Training
images, targets = next(iter(data_loader))
images = list(image for image in images)
targets = [{k: v for k, v in t.items()} for t in targets]
output = model(images, targets)   # Returns losses and detections

# For inference
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)            # Returns predictions
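
In training mode (with targets supplied) the model returns a dict of losses, while in eval mode it returns one dict of detections per input image. Roughly (the key names below are torchvision's for Faster R-CNN; Mask R-CNN adds loss_mask and masks):

print(output.keys())
# dict_keys(['loss_classifier', 'loss_box_reg', 'loss_objectness', 'loss_rpn_box_reg'])
print(predictions[0].keys())
# dict_keys(['boxes', 'labels', 'scores'])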

Writing the Training and Validation Function

from engine import train_one_epoch, evaluate
import utils


def main():
    # train on the GPU or on the CPU, if a GPU is not available
    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

    # our dataset has two classes only - background and person
    num_classes = 2
    # use our dataset and defined transformations
    dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
    dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))

    # split the dataset in train and test set
    indices = torch.randperm(len(dataset)).tolist()
    dataset = torch.utils.data.Subset(dataset, indices[:-50])
    dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])

    # define training and validation data loaders
    data_loader = torch.utils.data.DataLoader(
        dataset, batch_size=2, shuffle=True, num_workers=4,
        collate_fn=utils.collate_fn)
    data_loader_test = torch.utils.data.DataLoader(
        dataset_test, batch_size=1, shuffle=False, num_workers=4,
        collate_fn=utils.collate_fn)

    # get the model using our helper function
    model = get_model_instance_segmentation(num_classes)

    # move model to the right device
    model.to(device)

    # construct an optimizer
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=0.005,
                                momentum=0.9, weight_decay=0.0005)
    # and a learning rate scheduler
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                   step_size=3,
                                                   gamma=0.1)

    # let's train it for 10 epochs
    num_epochs = 10

    for epoch in range(num_epochs):
        # train for one epoch, printing every 10 iterations
        train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
        # update the learning rate
        lr_scheduler.step()
        # evaluate on the test dataset
        evaluate(model, data_loader_test, device=device)

    print("That's it!")


if __name__ == "__main__":
    main()

For the first epoch, the output looks like this:

Epoch: [0]  [ 0/60]  eta: 0:01:18  lr: 0.000090  loss: 2.5213 (2.5213)  loss_classifier: 0.8025 (0.8025)  loss_box_reg: 0.2634 (0.2634)  loss_mask: 1.4265 (1.4265)  loss_objectness: 0.0190 (0.0190)  loss_rpn_box_reg: 0.0099 (0.0099)  time: 1.3121  data: 0.3024  max mem: 3485
Epoch: [0]  [10/60]  eta: 0:00:20  lr: 0.000936  loss: 1.3007 (1.5313)  loss_classifier: 0.3979 (0.4719)  loss_box_reg: 0.2454 (0.2272)  loss_mask: 0.6089 (0.7953)  loss_objectness: 0.0197 (0.0228)  loss_rpn_box_reg: 0.0121 (0.0141)  time: 0.4198  data: 0.0298  max mem: 5081
Epoch: [0]  [20/60]  eta: 0:00:15  lr: 0.001783  loss: 0.7567 (1.1056)  loss_classifier: 0.2221 (0.3319)  loss_box_reg: 0.2002 (0.2106)  loss_mask: 0.2904 (0.5332)  loss_objectness: 0.0146 (0.0176)  loss_rpn_box_reg: 0.0094 (0.0123)  time: 0.3293  data: 0.0035  max mem: 5081
Epoch: [0]  [30/60]  eta: 0:00:11  lr: 0.002629  loss: 0.4705 (0.8935)  loss_classifier: 0.0991 (0.2517)  loss_box_reg: 0.1578 (0.1957)  loss_mask: 0.1970 (0.4204)  loss_objectness: 0.0061 (0.0140)  loss_rpn_box_reg: 0.0075 (0.0118)  time: 0.3403  data: 0.0044  max mem: 5081
Epoch: [0]  [40/60]  eta: 0:00:07  lr: 0.003476  loss: 0.3901 (0.7568)  loss_classifier: 0.0648 (0.2022)  loss_box_reg: 0.1207 (0.1736)  loss_mask: 0.1705 (0.3585)  loss_objectness: 0.0018 (0.0113)  loss_rpn_box_reg: 0.0075 (0.0112)  time: 0.3407  data: 0.0044  max mem: 5081
Epoch: [0]  [50/60]  eta: 0:00:03  lr: 0.004323  loss: 0.3237 (0.6703)  loss_classifier: 0.0474 (0.1731)  loss_box_reg: 0.1109 (0.1561)  loss_mask: 0.1658 (0.3201)  loss_objectness: 0.0015 (0.0093)  loss_rpn_box_reg: 0.0093 (0.0116)  time: 0.3379  data: 0.0043  max mem: 5081
Epoch: [0]  [59/60]  eta: 0:00:00  lr: 0.005000  loss: 0.2540 (0.6082)  loss_classifier: 0.0309 (0.1526)  loss_box_reg: 0.0463 (0.1405)  loss_mask: 0.1568 (0.2945)  loss_objectness: 0.0012 (0.0083)  loss_rpn_box_reg: 0.0093 (0.0123)  time: 0.3489  data: 0.0042  max mem: 5081
Epoch: [0] Total time: 0:00:21 (0.3570 s / it)
creating index...
index created!
Test:  [ 0/50]  eta: 0:00:19  model_time: 0.2152 (0.2152)  evaluator_time: 0.0133 (0.0133)  time: 0.4000  data: 0.1701  max mem: 5081
Test:  [49/50]  eta: 0:00:00  model_time: 0.0628 (0.0687)  evaluator_time: 0.0039 (0.0064)  time: 0.0735  data: 0.0022  max mem: 5081
Test: Total time: 0:00:04 (0.0828 s / it)
Averaged stats: model_time: 0.0628 (0.0687)  evaluator_time: 0.0039 (0.0064)
Accumulating evaluation results...
DONE (t=0.01s).
Accumulating evaluation results...
DONE (t=0.01s).
IoU metric: bbox
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.606
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.984
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.780
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.313
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.582
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.612
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.270
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.672
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.672
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.650
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.755
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.664
IoU metric: segm
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.704
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.979
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.871
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.325
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.488
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.727
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.316
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.748
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.749
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.650
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.673
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.758

So after one epoch of training, we obtain a COCO-style box mAP of 60.6 and a mask mAP of 70.4. After training for 10 epochs, we get the following metrics:

IoU metric: bbox
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.799
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.969
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.935
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.349
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.592
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.831
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.324
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.844
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.844
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.400
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.777
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.870
IoU metric: segm
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.761
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.969
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.919
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.341
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.464
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.788
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.303
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.799
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.799
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.400
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.769
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.818

Next, let's check what the predictions look like. We take one image from the test set and run it through the trained model, as shown below.
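
First, run one test image through the model (reusing model, dataset_test, and device from the training script above):

# pick one image from the test set
img, _ = dataset_test[0]
# put the model in evaluation mode
model.eval()
with torch.no_grad():
    prediction = model([img.to(device)])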

# display the image, converting the [3, H, W] float tensor back to a PIL Image
Image.fromarray(img.mul(255).permute(1, 2, 0).byte().numpy())
# display the predicted mask of the highest-scoring instance
Image.fromarray(prediction[0]['masks'][0, 0].mul(255).byte().cpu().numpy())
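
The predicted masks come back as float tensors of shape [N, 1, H, W] with values in [0, 1]; for a hard binary mask you would typically threshold them, for example at 0.5:

binary_mask = (prediction[0]['masks'][0, 0] > 0.5).byte().cpu().numpy()
Image.fromarray(binary_mask * 255)  # 0/255 uint8 image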

Conclusion

You have now seen how to create your own instance segmentation model on a custom dataset. The work splits into two parts: writing a dataset class that returns the images, the ground-truth boxes, and the segmentation masks; and using an appropriate pre-trained model to perform transfer learning on the new dataset. When reproducing the code in this section you may run into a few problems; when writing the DEMO it is not advisable to use too recent a Python version, because cgi and related libraries are not compatible with newer versions and would need some rewriting. For the complete code, see the DEMO.

Recommended Reading

  • The Differential Operator Method
  • Building a Neural Network Model for Handwriting Recognition with PyTorch
  • Building a Neural Network Model with PyTorch and Computing Backpropagation
  • How to Optimize Model Parameters and Ensemble Models
  • TorchVision Object Detection Finetuning Tutorial
