Significance of GraphSAGE:

1. One of the most commonly used graph neural network models, alongside GCN and GAT

2. Inductive learning

3. Instead of learning a fixed embedding per node as in earlier work, it learns a set of aggregator functions

4. Explores several aggregators (mean, pooling, LSTM)

5. A classic baseline for graph representation learning

Structure of the paper:

I. Abstract

The abstract introduces the wide range of graph applications and motivates the paper: inductive representation learning on graphs by learning a set of functions that sample a node's neighborhood and aggregate it into a vector representation. It can be summarized as follows:

1. Proposes an inductive model that can produce representations for previously unseen nodes or even entirely new graphs

2. GraphSAGE learns a set of functions (rather than per-node embeddings) to compute node representations

3. Neighbor features are sampled and aggregated, then concatenated with the node's own features to obtain the node representation

4. GraphSAGE achieves the best results in both transductive and inductive settings

II. Introduction

Introduces the broad applications of graphs, notes that previous work mostly assumes a fixed graph, whereas GraphSAGE can handle new nodes and even new graphs, reviews DeepWalk, node2vec, GCN and related algorithms, and states that the core idea of this paper is to train aggregation functions.

III. Related Work

Reviews earlier approaches based on random walks, matrix factorization, and graph convolutions.

IV. The GraphSAGE Model

Covers the forward (embedding-generation) algorithm, the model parameters, and the aggregator architectures.

The procedure is given as Algorithm 1 in the paper. Its core is the aggregation step, i.e. lines (4) and (5): all (sampled) neighbor information is aggregated, and the node's own representation is then combined with the aggregated neighborhood representation.
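For reference, lines 4 and 5 of Algorithm 1 correspond to the following two updates from the paper, where $\mathcal{N}(v)$ denotes the sampled neighborhood of node $v$ and $k$ indexes the layer:

$$
\mathbf{h}^{k}_{\mathcal{N}(v)} \leftarrow \mathrm{AGGREGATE}_{k}\big(\{\mathbf{h}^{k-1}_{u},\ \forall u \in \mathcal{N}(v)\}\big)
$$

$$
\mathbf{h}^{k}_{v} \leftarrow \sigma\big(\mathbf{W}^{k}\cdot \mathrm{CONCAT}(\mathbf{h}^{k-1}_{v},\ \mathbf{h}^{k}_{\mathcal{N}(v)})\big)
$$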

Next, the paper presents the training objective (Section 3.2). GraphSAGE can be trained with task-specific supervision or in a fully unsupervised way. The unsupervised objective is the same graph-based loss used by earlier graph embedding methods: if two nodes are closely related in the graph structure, their learned embeddings should also be similar, while unrelated nodes are pushed apart via negative sampling.
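Concretely, the unsupervised loss in the paper is a negative-sampling objective: for a node $u$ and a node $v$ that co-occurs with $u$ on a fixed-length random walk,

$$
J_{\mathcal{G}}(\mathbf{z}_{u}) = -\log\big(\sigma(\mathbf{z}_{u}^{\top}\mathbf{z}_{v})\big) - Q\cdot \mathbb{E}_{v_{n}\sim P_{n}(v)}\log\big(\sigma(-\mathbf{z}_{u}^{\top}\mathbf{z}_{v_{n}})\big)
$$

where $\sigma$ is the sigmoid function, $P_n$ is a negative-sampling distribution, and $Q$ is the number of negative samples.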

The paper then describes several choices for the aggregator function, including mean, LSTM, and (max-)pooling; the appendix additionally gives the minibatch version of the algorithm.
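As a rough illustration of the mean and pooling aggregators, the sketch below applies them to a matrix of sampled neighbor features. This is a minimal NumPy sketch, not the paper's implementation; `neigh_feats`, `W_pool`, and `b` are assumed shapes/parameters introduced only for this example.

import numpy as np

def mean_aggregate(neigh_feats):
    # neigh_feats: (num_sampled_neighbors, dim) array of neighbor representations.
    # Element-wise mean over the sampled neighbors.
    return neigh_feats.mean(axis=0)

def maxpool_aggregate(neigh_feats, W_pool, b):
    # Each neighbor vector is passed through a one-layer MLP,
    # then an element-wise max is taken over the neighbors.
    hidden = np.maximum(neigh_feats @ W_pool + b, 0.0)  # ReLU as the example nonlinearity
    return hidden.max(axis=0)

# The LSTM aggregator runs an LSTM over a random permutation of the neighbors;
# it is omitted here because it requires a full recurrent cell.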

V. Experiments

Experimental setup, choice of datasets, transductive-learning experiments, parameter analysis, and an analysis of how the different aggregator functions affect the model.

The section mainly covers the experimental settings and the datasets, followed by a comparison of the results.

VI. Theoretical Analysis & Conclusion

Concludes that GraphSAGE is an inductive model that aggregates neighborhood information and supports different aggregator choices, and discusses several future directions such as subgraph embeddings and better neighborhood-sampling strategies.

Novel contributions:

1. Inductive learning

2. A study of multiple aggregators

3. Some theoretical analysis

Key points:

1. Model architecture

2. Neighbor sampling

3. Minibatch training

Takeaways:

1. The inductive learning paradigm

2. The discussion of multiple aggregator functions

3. Minibatch training with sampled neighborhoods is efficient (see the sketch after this list)

4. GCN, GAT, and GraphSAGE as classic baselines
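A minimal sketch of how minibatch neighborhood sampling can be organized, in the spirit of the minibatch algorithm in the paper's appendix. This is an illustrative simplification rather than the authors' code; `adj_lists` is assumed to map a node id to a set of neighbor ids, and `num_samples` gives the number of neighbors sampled per layer.

import random

def sample_neighborhood(batch_nodes, adj_lists, num_samples=(5, 5)):
    # Expand a batch of target nodes layer by layer:
    # start from the target nodes and, for each layer, add a fixed-size
    # random sample of each node's neighbors, so that only the nodes
    # actually needed for the forward pass are touched.
    layers = [set(batch_nodes)]
    for k in range(len(num_samples)):
        prev = layers[-1]
        expanded = set(prev)
        for node in prev:
            neighbors = list(adj_lists[node])
            if len(neighbors) > num_samples[k]:
                neighbors = random.sample(neighbors, num_samples[k])
            expanded.update(neighbors)
        layers.append(expanded)
    # layers[-1] contains every node whose features must be loaded;
    # layers[0] are the target nodes of the batch.
    return layers[::-1]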

VII. Coding

The Cora dataset used in the paper consists of two files: cora.cites, where each line contains two paper ids and indicates an edge between the two nodes, and cora.content, which describes each node with a paper id, a 1433-dimensional binary bag-of-words feature vector, and the class label.

Example lines from cora.cites:

35  1033
35  103482
35  103515
35  1050679
35  1103960
35  1103985
35  1109199
35  1112911
35  1113438
35  1113831
35  1114331
35  1117476
35  1119505
35  1119708
35  1120431
35  1123756
35  1125386
35  1127430
35  1127913
.....

Example line from cora.content (paper id, 1433 mostly-zero binary features, label; the feature vector is abbreviated here):

31336  0 0 0 ... 1 ... 0 0 0  Neural_Networks
......
""" 加载数据并对数据进行处理 """def load_cora():import numpy as npnum_nodes = 2708num_feats = 1433feat_data = np.zeros((num_nodes, num_feats))labels = np.empty((num_nodes, 1), dtype=np.int64)node_map = {}label_map = {}with open('../cora/cora.content') as fp:for i,line in enumerate(fp):info = line.strip().split()tmp = []for ss in info[1:-1]:tmp.append(float(ss))feat_data[i,:] = tmpnode_map[info[0]] = iif not info[-1] in label_map:label_map[info[-1]] = len(label_map)labels[i] = label_map[info[-1]]from collections import defaultdictadj_lists = defaultdict(set)with open('../cora/cora.cites') as fp:for i,line in enumerate(fp):info = line.strip().split()uid = node_map[info[0]]target_uid = node_map[info[1]]adj_lists[uid].add(target_uid)adj_lists[target_uid].add(uid)return feat_data,labels,adj_lists""" 构建aggregate 函数"""import torch
import torch.nn as nn
from torch.autograd import Variable
import random

class MeanAggregator(nn.Module):
    """Aggregates a node's embedding as the mean of its (sampled) neighbors' embeddings."""

    def __init__(self, features, cuda=False, gcn=False):
        super(MeanAggregator, self).__init__()
        self.features = features  # lookup giving the previous-layer node representations
        self.cuda = cuda
        self.gcn = gcn            # if True, add the node itself to its neighbor set (GCN-style)

    def forward(self, nodes, to_neighs, num_sample=10):
        # Sample a fixed number of neighbors per node (Algorithm 1, line 4)
        _set = set
        if num_sample is not None:
            _sample = random.sample
            samp_neighs = [_set(_sample(to_neigh, num_sample))
                           if len(to_neigh) >= num_sample else to_neigh
                           for to_neigh in to_neighs]
        else:
            samp_neighs = to_neighs
        if self.gcn:
            # include the node itself in its own neighborhood
            samp_neighs = [samp_neigh | {nodes[i]} for i, samp_neigh in enumerate(samp_neighs)]
        unique_nodes_list = list(set.union(*samp_neighs))
        unique_nodes = {n: i for i, n in enumerate(unique_nodes_list)}
        # build a (batch_size x num_unique_neighbors) membership mask,
        # then row-normalize it so that the matrix product below is a mean
        mask = Variable(torch.zeros(len(samp_neighs), len(unique_nodes)))
        column_indices = [unique_nodes[n] for samp_neigh in samp_neighs for n in samp_neigh]
        row_indices = [i for i in range(len(samp_neighs)) for _ in range(len(samp_neighs[i]))]
        mask[row_indices, column_indices] = 1
        if self.cuda:
            mask = mask.cuda()
        num_neigh = mask.sum(1, keepdim=True)
        mask = mask.div(num_neigh)
        # look up the neighbors' embeddings and take the (masked) mean
        if self.cuda:
            embed_matrix = self.features(torch.LongTensor(unique_nodes_list).cuda())
        else:
            embed_matrix = self.features(torch.LongTensor(unique_nodes_list))
        to_feats = mask.mm(embed_matrix)
        return to_feats

""" Combine the node itself with its aggregated neighbors """

import torch
import torch.nn as nn
from torch.nn import init
import torch.nn.functional as F

class Encoder(nn.Module):
    """Encodes a node using the 'convolutional' GraphSAGE approach."""

    def __init__(self, features, feature_dim, embed_dim, adj_lists, aggregator,
                 num_sample=10, base_model=None, gcn=False, cuda=False,
                 feature_transform=False):
        super(Encoder, self).__init__()
        self.features = features
        # hidden size / dimension before the transformation
        self.feat_dim = feature_dim
        self.adj_lists = adj_lists
        # the aggregator that produces the neighborhood embedding
        self.aggregator = aggregator
        self.num_sample = num_sample
        if base_model is not None:
            self.base_model = base_model
        self.gcn = gcn
        # hidden size / dimension after the transformation
        self.embed_dim = embed_dim
        self.cuda = cuda
        self.aggregator.cuda = cuda
        # weight matrix W: (output dim) x (input dim)
        # gcn=False means "self vector || aggregated neighbor vector" are concatenated,
        # so the input dimension is doubled
        self.weight = nn.Parameter(
            torch.FloatTensor(embed_dim, self.feat_dim if self.gcn else 2 * self.feat_dim))
        init.xavier_uniform_(self.weight)

    def forward(self, nodes):
        """Generates embeddings for a batch of nodes.

        nodes -- list of nodes
        """
        neigh_feats = self.aggregator.forward(
            nodes, [self.adj_lists[int(node)] for node in nodes], self.num_sample)
        if not self.gcn:
            if self.cuda:
                self_feats = self.features(torch.LongTensor(nodes).cuda())
            else:
                self_feats = self.features(torch.LongTensor(nodes))
            # concatenate the node's own vector with the aggregated neighbor vector
            # (the CONCAT part of Algorithm 1, line 5)
            combined = torch.cat([self_feats, neigh_feats], dim=1)
        else:
            # GCN-style: represent the node by the aggregated neighborhood only
            combined = neigh_feats
        # multiply by W and apply the nonlinearity (Algorithm 1, line 5)
        combined = F.relu(self.weight.mm(combined.t()))
        # node embeddings after one GNN layer, shape: embed_dim x num_nodes
        return combined

""" Define the overall model """

class SupervisedGraphSage(nn.Module):

    def __init__(self, num_classes, enc):
        super(SupervisedGraphSage, self).__init__()
        # enc is the top encoder (here: enc2, i.e. after two GNN layers)
        self.enc = enc
        self.xent = nn.CrossEntropyLoss()
        # final linear layer mapping the embeddings to num_classes logits
        self.weight = nn.Parameter(torch.FloatTensor(num_classes, enc.embed_dim))
        init.xavier_uniform_(self.weight)

    def forward(self, nodes):
        # node embeddings after the two GNN layers
        embeds = self.enc(nodes)
        # map (embed_dim x nodes) to (num_classes x nodes) logits, then transpose
        scores = self.weight.mm(embeds)
        return scores.t()

    def loss(self, nodes, labels):
        # forward pass
        scores = self.forward(nodes)
        # cross-entropy loss on the logits
        return self.xent(scores, labels.squeeze())

""" Train the model """

import time
from sklearn.metrics import f1_score

def run_cora():
    # fix the random seeds
    np.random.seed(1)
    random.seed(1)
    # number of nodes in Cora
    num_nodes = 2708
    # load the Cora data:
    #   feat_data: node features
    #   labels:    node labels
    #   adj_lists: adjacency lists, dict (key: node, value: set of neighbors)
    feat_data, labels, adj_lists = load_cora()
    # input feature matrix X: num_nodes x feature dim
    features = nn.Embedding(2708, 1433)
    # initialize X with the Cora features; not updated during training
    features.weight = nn.Parameter(torch.FloatTensor(feat_data), requires_grad=False)
    # features.cuda()

    # two GNN layers in total
    # first GNN layer: aggregate neighbors with a mean (Algorithm 1, line 4)
    agg1 = MeanAggregator(features, cuda=True)
    # combine with the node's own information and apply W (Algorithm 1, line 5);
    # gcn=True means only the aggregated neighborhood (which includes the node itself) is used
    enc1 = Encoder(features, 1433, 128, adj_lists, agg1, gcn=True, cuda=False)
    # second GNN layer: its input is the output of the first layer
    # .t() transposes, because Encoder outputs embed_dim x nodes
    agg2 = MeanAggregator(lambda nodes: enc1(nodes).t(), cuda=False)
    # enc1.embed_dim = 128; the output dimension stays 128
    enc2 = Encoder(lambda nodes: enc1(nodes).t(), enc1.embed_dim, 128, adj_lists, agg2,
                   base_model=enc1, gcn=True, cuda=False)
    # intended number of sampled neighbors per layer
    # (note: Encoder reads self.num_sample, which keeps its default of 10 here)
    enc1.num_samples = 5
    enc2.num_samples = 5

    # 7-way classification on top of the two-layer node embeddings (enc2)
    graphsage = SupervisedGraphSage(7, enc2)
    # graphsage.cuda()

    # shuffle the node order
    rand_indices = np.random.permutation(num_nodes)
    # split into test, validation and training sets
    test = rand_indices[:1000]
    val = rand_indices[1000:1500]
    train = list(rand_indices[1500:])

    # SGD optimizer and learning rate
    optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, graphsage.parameters()), lr=0.7)
    # record the training time of each batch
    times = []
    # train for 100 batches
    for batch in range(100):
        # take 256 nodes as one batch
        batch_nodes = train[:256]
        # shuffle the training set so the next batch is random
        random.shuffle(train)
        start_time = time.time()
        optimizer.zero_grad()
        # cross-entropy loss defined in SupervisedGraphSage
        loss = graphsage.loss(batch_nodes,
                              Variable(torch.LongTensor(labels[np.array(batch_nodes)])))
        # backpropagate and update the parameters
        loss.backward()
        optimizer.step()
        end_time = time.time()
        times.append(end_time - start_time)
        # print (batch, loss.data[0])
        print(batch, loss.data)

    # validation
    val_output = graphsage.forward(val)
    # micro-averaged F1 score
    print("Validation F1:", f1_score(labels[val], val_output.data.numpy().argmax(axis=1), average="micro"))
    # average training time per batch
    print("Average batch time:", np.mean(times))
""" 模型运行结果 """run_cora()0 tensor(1.9649)
1 tensor(1.9406)
2 tensor(1.9115)
3 tensor(1.8925)
4 tensor(1.8731)
5 tensor(1.8354)
6 tensor(1.8018)
7 tensor(1.7535)
8 tensor(1.6938)
9 tensor(1.6029)
10 tensor(1.6312)
11 tensor(1.5248)
12 tensor(1.4800)
13 tensor(1.4503)
14 tensor(1.4162)
15 tensor(1.3210)
16 tensor(1.2243)
17 tensor(1.2255)
18 tensor(1.0978)
19 tensor(1.1330)
20 tensor(0.9534)
21 tensor(0.9112)
22 tensor(0.9170)
23 tensor(0.7924)
24 tensor(0.8008)
25 tensor(0.7142)
26 tensor(0.7839)
27 tensor(0.8878)
28 tensor(1.2177)
29 tensor(0.9943)
30 tensor(0.8073)
31 tensor(0.6588)
32 tensor(0.6254)
33 tensor(0.5622)
34 tensor(0.5158)
35 tensor(0.4763)
36 tensor(0.5298)
37 tensor(0.5419)
38 tensor(0.5098)
39 tensor(0.4122)
40 tensor(0.4262)
41 tensor(0.4451)
42 tensor(0.4126)
43 tensor(0.4409)
44 tensor(0.3913)
45 tensor(0.4496)
46 tensor(0.4365)
47 tensor(0.4601)
48 tensor(0.4714)
49 tensor(0.4090)
50 tensor(0.4145)
51 tensor(0.3428)
52 tensor(0.3454)
53 tensor(0.3531)
54 tensor(0.3131)
55 tensor(0.2719)
56 tensor(0.3519)
57 tensor(0.3286)
58 tensor(0.3125)
59 tensor(0.2529)
60 tensor(0.3033)
61 tensor(0.2332)
62 tensor(0.3049)
63 tensor(0.3026)
64 tensor(0.3770)
65 tensor(0.3811)
66 tensor(0.3223)
67 tensor(0.2450)
68 tensor(0.2620)
69 tensor(0.2846)
70 tensor(0.2482)
71 tensor(0.3044)
72 tensor(0.4133)
73 tensor(0.3156)
74 tensor(0.4421)
75 tensor(0.2596)
76 tensor(0.2585)
77 tensor(0.2639)
78 tensor(0.2035)
79 tensor(0.2328)
80 tensor(0.1748)
81 tensor(0.1730)
82 tensor(0.1978)
83 tensor(0.1614)
84 tensor(0.1890)
85 tensor(0.1227)
86 tensor(0.1568)
87 tensor(0.1527)
88 tensor(0.2365)
89 tensor(0.2297)
90 tensor(0.1787)
91 tensor(0.1920)
92 tensor(0.1864)
93 tensor(0.1254)
94 tensor(0.1678)
95 tensor(0.1336)
96 tensor(0.1562)
97 tensor(0.2531)
98 tensor(0.2392)
99 tensor(0.2089)
Validation F1: 0.864
Average batch time: 0.047979302406311035
