【1】Network Structure

(Figure: U-Net network architecture diagram)

U-Net consists of two parts:

1. The feature-extraction part: each pooling layer produces a new scale, and counting the input resolution there are five scales in total.

2. The upsampling part: after each upsampling step, the result is fused with the encoder feature map of the same scale (and the same channel count). In the original paper the encoder map must be cropped before fusion, because its spatial size is larger. The "fusion" here is channel-wise concatenation, not addition.

The network is composed of a contracting path and an expanding path: the contracting path captures contextual information, while the expanding path enables precise localization.
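The crop-and-concatenate fusion described above can be sketched as follows. This is a minimal sketch with made-up tensor sizes; in the padded variant implemented later in this post the encoder and decoder sizes already match, so the crop becomes a no-op:

```python
import torch

# Hypothetical feature maps: the encoder map is larger (valid convolutions
# shrink it in the original paper), so it is center-cropped before fusion.
enc = torch.randn(1, 64, 64, 64)   # skip connection from the contracting path
dec = torch.randn(1, 64, 56, 56)   # upsampled map from the expanding path

def center_crop(t, target_h, target_w):
    _, _, h, w = t.shape
    top = (h - target_h) // 2
    left = (w - target_w) // 2
    return t[:, :, top:top + target_h, left:left + target_w]

cropped = center_crop(enc, dec.shape[2], dec.shape[3])
merged = torch.cat([cropped, dec], dim=1)  # fuse along the channel axis
print(merged.shape)  # torch.Size([1, 128, 56, 56])
```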

【1.1】Strengths of the network

(1) The overlap-tile strategy: each tile is predicted using surrounding context, and missing context at the image border is extrapolated by mirroring, so arbitrarily large images can be segmented seamlessly.

(2) Data augmentation: heavy augmentation, notably elastic deformations, lets the network learn the desired invariances from very few annotated images.

(3) A weighted loss: pixels on the borders between touching objects receive a larger weight, forcing the network to learn the separating background labels.
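The weighted loss can be imitated with the `weight` argument of `binary_cross_entropy_with_logits`; the weight map below is made up for illustration (in the paper it is precomputed from the distances to the two nearest cell borders):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 1, 4, 4)                  # raw network outputs
target = torch.randint(0, 2, (1, 1, 4, 4)).float()

# Hypothetical per-pixel weight map: pretend the central pixels lie on a
# border between touching objects and weight them 5x.
weight = torch.ones(1, 1, 4, 4)
weight[..., 1:3, 1:3] = 5.0

loss = F.binary_cross_entropy_with_logits(logits, target, weight=weight)
print(loss.item())
```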

【1.2】Weaknesses of the network

The author of U-Net++ analyses U-Net's shortcomings and how to improve on them: https://zhuanlan.zhihu.com/p/44958351

Reference: https://zhuanlan.zhihu.com/p/118540575

【2】Training the Network

Code and pretrained weights: https://github.com/JavisPeng/u_net_liver

Data and trained weights: https://pan.baidu.com/s/1dgGnsfoSmL1lbOUwyItp6w (code: 17yr)

The full dataset can be accessed at: https://competitions.codalab.org/competitions/15595

【2.1】Code Walkthrough

Folder layout:

(1) The data folder contains the training and test images.

(2) The dowoload folder contains the downloaded weight file.

Unetmodel.py

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
'''
@File    :   Unetmodel.py
@Time    :   2021/03/23 20:09:25
@Author  :   Jian Song
@Contact :   1248975661@qq.com
@Desc    :   None
'''
import torch.nn as nn
import torch
from torch import autograd

'''
File overview: defines the U-Net model.
****** In PyTorch, defining a network only requires specifying the model's
layers; the data is not part of the definition. You instantiate the network
and then feed it input tensors.
****** In TensorFlow (1.x), by contrast, the input tensor is part of the
graph definition, so placeholders are used to feed in the data.
'''

# A small wrapper around the common (conv -> BN -> ReLU) x 2 block
class DoubleConv(nn.Module):
    # With padding=1 the 3x3 convolutions keep the spatial size unchanged;
    # only the channel count changes.
    def __init__(self, in_ch, out_ch):
        super(DoubleConv, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),   # BN layer added after each conv
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )

    def forward(self, input):
        return self.conv(input)


class Unet(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(Unet, self).__init__()
        # Contracting path (encoder): downsampling
        self.conv1 = DoubleConv(in_ch, 64)
        self.pool1 = nn.MaxPool2d(2)
        self.conv2 = DoubleConv(64, 128)
        self.pool2 = nn.MaxPool2d(2)
        self.conv3 = DoubleConv(128, 256)
        self.pool3 = nn.MaxPool2d(2)
        self.conv4 = DoubleConv(256, 512)
        self.pool4 = nn.MaxPool2d(2)
        self.conv5 = DoubleConv(512, 1024)
        # Transposed convolutions; plain upsampling would also work
        # (with kernel_size = stride, stride being the upsampling factor)
        self.up6 = nn.ConvTranspose2d(1024, 512, 2, stride=2)
        self.conv6 = DoubleConv(1024, 512)
        self.up7 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.conv7 = DoubleConv(512, 256)
        self.up8 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.conv8 = DoubleConv(256, 128)
        self.up9 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.conv9 = DoubleConv(128, 64)
        self.conv10 = nn.Conv2d(64, out_ch, 1)

    # Forward pass
    def forward(self, x):
        c1 = self.conv1(x)
        p1 = self.pool1(c1)
        c2 = self.conv2(p1)
        p2 = self.pool2(c2)
        c3 = self.conv3(p2)
        p3 = self.pool3(c3)
        c4 = self.conv4(p3)
        p4 = self.pool4(c4)
        c5 = self.conv5(p4)
        # Expanding path: upsample, then concatenate with the matching
        # encoder feature map along the channel dimension (dim=1).
        # On torch.cat see: https://www.cnblogs.com/JeasonIsCoding/p/10162356.html
        up_6 = self.up6(c5)
        merge6 = torch.cat([up_6, c4], dim=1)
        c6 = self.conv6(merge6)
        up_7 = self.up7(c6)
        merge7 = torch.cat([up_7, c3], dim=1)
        c7 = self.conv7(merge7)
        up_8 = self.up8(c7)
        merge8 = torch.cat([up_8, c2], dim=1)
        c8 = self.conv8(merge8)
        up_9 = self.up9(c8)
        merge9 = torch.cat([up_9, c1], dim=1)
        c9 = self.conv9(merge9)
        c10 = self.conv10(c9)
        # Return raw logits: main.py trains with BCEWithLogitsLoss, which
        # applies the sigmoid itself. Applying nn.Sigmoid here as well
        # would squash the output twice.
        return c10


if __name__ == '__main__':
    myUnet = Unet(1, 1)
    print(myUnet)
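Because every 3x3 convolution in this variant uses padding=1, spatial size changes come only from the pools and transposed convolutions. For a hypothetical 256x256 input, the (channels, size) pairs evolve like this:

```python
# Each 2x2 max-pool halves H and W; each stride-2 ConvTranspose2d doubles
# them. Channel counts follow the DoubleConv definitions above.
size = 256
encoder = []
for ch in [64, 128, 256, 512]:
    encoder.append((ch, size))
    size //= 2                      # MaxPool2d(2) after each DoubleConv
bottleneck = (1024, size)           # conv5; no pooling afterwards
decoder = []
for ch in [512, 256, 128, 64]:
    size *= 2                       # ConvTranspose2d(..., stride=2)
    decoder.append((ch, size))

print(encoder)     # [(64, 256), (128, 128), (256, 64), (512, 32)]
print(bottleneck)  # (1024, 16)
print(decoder)     # [(512, 32), (256, 64), (128, 128), (64, 256)]
```

The decoder mirrors the encoder exactly, which is why each `torch.cat` in `forward` pairs feature maps of identical spatial size.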

main.py

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
'''
(1) Reference: a simple UNet implementation
https://blog.csdn.net/jiangpeng59/article/details/80189889
(2) The differences between FCN and U-Net
https://zhuanlan.zhihu.com/p/118540575
'''
import torch
import argparse
from torch.utils.data import DataLoader
from torch import nn, optim
from torchvision.transforms import transforms
from Unetmodel import Unet
from setdata import LiverDataset
from setdata import *

# Use CUDA when it is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Input preprocessing. Images and masks are transformed differently:
# ToTensor scales the image to floats in [0, 1]; Normalize then maps
# them to [-1, 1].
x_transforms = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
# The mask only needs to be converted to a tensor
y_transforms = transforms.ToTensor()


def train_model(model, criterion, optimizer, dataload, num_epochs=5):
    for epoch in range(num_epochs):
        # On str.format see https://blog.csdn.net/u012149181/article/details/78965472
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        dt_size = len(dataload.dataset)
        epoch_loss = 0
        step = 0
        for x, y in dataload:
            step += 1
            # Move the batch to the GPU if one is in use
            inputs = x.to(device)
            labels = y.to(device)
            # zero the parameter gradients
            optimizer.zero_grad()
            # forward
            outputs = model(inputs)
            loss = criterion(outputs, labels)   # compute the loss
            loss.backward()
            optimizer.step()
            # item() extracts the scalar from a one-element tensor
            epoch_loss += loss.item()
            print("%d/%d,train_loss:%0.3f" % (step, (dt_size - 1) // dataload.batch_size + 1, loss.item()))
        print("epoch %d loss:%0.3f" % (epoch, epoch_loss / step))
        # Save a checkpoint after every epoch
        torch.save(model.state_dict(), 'weights_%d.pth' % epoch)
    return model


# Train the model
def train(batch_size):
    # Model initialisation
    model = Unet(3, 1).to(device)
    # Loss function: BCEWithLogitsLoss applies the sigmoid internally,
    # so the model must output raw logits
    criterion = nn.BCEWithLogitsLoss()
    # Optimizer
    optimizer = optim.Adam(model.parameters())
    # Load the training data
    liver_dataset = LiverDataset("data/train", transform=x_transforms, target_transform=y_transforms)
    dataloaders = DataLoader(liver_dataset, batch_size=batch_size, shuffle=True, num_workers=4)
    train_model(model, criterion, optimizer, dataloaders)


# Evaluate the model and display its predictions
def test(ckptpath):
    model = Unet(3, 1)
    model.load_state_dict(torch.load(ckptpath, map_location='cpu'))
    liver_dataset = LiverDataset("data/val", transform=x_transforms, target_transform=y_transforms)
    # Load one image at a time
    dataloaders = DataLoader(liver_dataset, batch_size=1)
    # Switch to evaluation mode (affects BatchNorm and Dropout layers)
    model.eval()
    import matplotlib.pyplot as plt
    plt.ion()   # interactive mode, so each prediction is shown as it arrives
    with torch.no_grad():
        for x, _ in dataloaders:
            y = model(x).sigmoid()   # logits -> probabilities
            # squeeze(i) drops dimension i when its size is 1
            img_y = torch.squeeze(y).numpy()
            plt.imshow(img_y)
            plt.pause(0.01)
    plt.show()


if __name__ == '__main__':
    # Argument parsing, kept for reference:
    # parse = argparse.ArgumentParser()
    # parse.add_argument("action", type=str, help="train or test")
    # parse.add_argument("--batch_size", type=int, default=8)
    # parse.add_argument("--ckpt", type=str, help="the path of model weight file")
    # args = parse.parse_args()
    # if args.action == "train":
    #     train(args.batch_size)
    # elif args.action == "test":
    #     test(args.ckpt)
    batchsize = 10
    train(batchsize)
    ckptpath = './dowoload/weights_19.pth'
    test(ckptpath)
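One detail worth checking in main.py: nn.BCEWithLogitsLoss applies the sigmoid internally, using a numerically stable fused form, so it must be fed raw logits. A pure-Python sketch of the equivalence:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def bce(p, t):
    # plain binary cross-entropy on a probability p
    return -(t * math.log(p) + (1 - t) * math.log(1 - p))

def bce_with_logits(x, t):
    # the numerically stable fused form used by BCEWithLogitsLoss:
    # max(x, 0) - x*t + log(1 + exp(-|x|))
    return max(x, 0) - x * t + math.log(1 + math.exp(-abs(x)))

for x, t in [(2.0, 1.0), (-1.0, 0.0)]:
    print(abs(bce(sigmoid(x), t) - bce_with_logits(x, t)) < 1e-9)  # True
```

This is why the model's forward pass returns logits, and why test() applies .sigmoid() only once, for visualisation.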

setdata.py

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
from torch.utils.data import Dataset
import PIL.Image as Image
import os


# Build a list of (image path, mask path) pairs
def make_dataset(root):
    imgs = []
    n = len(os.listdir(root)) // 2   # each sample has an image and a mask
    for i in range(n):
        '''
        %3d  -- pads with spaces to a field width of 3
        %-3d -- left-aligned
        %03d -- pads with zeros, e.g. 12 becomes 012
        '''
        # img  = root/00i.png
        # mask = root/00i_mask.png
        img = os.path.join(root, "%03d.png" % i)
        mask = os.path.join(root, "%03d_mask.png" % i)
        imgs.append((img, mask))
    return imgs


class LiverDataset(Dataset):
    def __init__(self, root, transform=None, target_transform=None):
        imgs = make_dataset(root)
        self.imgs = imgs
        self.transform = transform                  # preprocessing for the images
        self.target_transform = target_transform    # preprocessing for the masks

    def __getitem__(self, index):
        x_path, y_path = self.imgs[index]
        img_x = Image.open(x_path)
        img_y = Image.open(y_path)
        if self.transform is not None:              # if a transform was set
            img_x = self.transform(img_x)
        if self.target_transform is not None:       # if a mask transform was set
            img_y = self.target_transform(img_y)
        return img_x, img_y

    def __len__(self):
        return len(self.imgs)
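The naming convention make_dataset relies on can be checked with a throwaway directory that mimics data/train (the file names below are dummies created only for this sketch):

```python
import os
import tempfile

# 000.png pairs with 000_mask.png, 001.png with 001_mask.png, and so on.
root = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(root, "%03d.png" % i), "w").close()
    open(os.path.join(root, "%03d_mask.png" % i), "w").close()

n = len(os.listdir(root)) // 2      # same counting logic as make_dataset
pairs = [("%03d.png" % i, "%03d_mask.png" % i) for i in range(n)]
print(n, pairs[0])  # 3 ('000.png', '000_mask.png')
```

Note that this scheme assumes the directory contains nothing but image/mask pairs; any stray file would throw off the `// 2` count.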

【3】UNet Model Structure Printout

Unet(
  (conv1): DoubleConv((conv): Sequential((0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True)))
  (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): DoubleConv((conv): Sequential((0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True)))
  (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv3): DoubleConv((conv): Sequential((0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True)))
  (pool3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv4): DoubleConv((conv): Sequential((0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True)))
  (pool4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv5): DoubleConv((conv): Sequential((0): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True)))
  (up6): ConvTranspose2d(1024, 512, kernel_size=(2, 2), stride=(2, 2))
  (conv6): DoubleConv((conv): Sequential((0): Conv2d(1024, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True)))
  (up7): ConvTranspose2d(512, 256, kernel_size=(2, 2), stride=(2, 2))
  (conv7): DoubleConv((conv): Sequential((0): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True)))
  (up8): ConvTranspose2d(256, 128, kernel_size=(2, 2), stride=(2, 2))
  (conv8): DoubleConv((conv): Sequential((0): Conv2d(256, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True)))
  (up9): ConvTranspose2d(128, 64, kernel_size=(2, 2), stride=(2, 2))
  (conv9): DoubleConv((conv): Sequential((0): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True)))
  (conv10): Conv2d(64, 2, kernel_size=(1, 1), stride=(1, 1))
)

【4】References

(1) PyTorch notes: 05) A simple UNet implementation

(2) A simple UNet implementation

(3) The differences between FCN and U-Net
