Structure of this article

  • M-NMF
  • LANE
  • LINE

What is Network Embedding?

[Figure: dataislinked.png]

[Figure: low-demensionsl.png]

LINE

[Figure: measuring first-order and second-order proximity (一阶二阶相似度衡量.png)]

  • [Information Network]
    An information network is defined as $G=(V,E)$, where $V$ is the set of vertices, each representing a data object, and $E$ is the set of edges between the vertices, each representing a relationship between two data objects. Each edge $e \in E$ is an ordered pair $e=(u,v)$ and is associated with a weight $w_{uv} > 0$, which indicates the strength of the relation. If $G$ is undirected, we have $(u,v) \equiv (v,u)$ and $w_{uv} \equiv w_{vu}$; if $G$ is directed, we have $(u,v) \neq (v,u)$ and $w_{uv} \neq w_{vu}$.

  • [First-order Proximity] The first-order proximity in a network is the local pairwise proximity between two vertices. For each pair of vertices linked by an edge $(u,v)$, the weight on that edge, $w_{uv}$, indicates the first-order proximity between $u$ and $v$. If no edge is observed between $u$ and $v$, their first-order proximity is 0. The first-order proximity usually implies the similarity of two nodes in a real-world network.

    LINE with First-order Proximity: The first-order proximity refers to the local pairwise proximity between the vertices in the network. For each undirected edge $(i,j)$, the joint probability between vertex $v_{i}$ and $v_{j}$ is defined as:
    $$p_{1}(v_{i},v_{j}) = \frac{1}{1+\exp(-\vec{u}_{i}^{T} \cdot \vec{u}_{j})}$$
    where $\vec{u}_{i} \in R^{d}$ is the low-dimensional vector representation of vertex $v_{i}$. Its empirical probability is defined as $\hat{p}_{1}(i,j) = \frac{w_{ij}}{W}$, where $W = \sum_{(i,j)\in E} w_{ij}$.

    To preserve the first-order proximity, we can minimize the following objective function:
    $$O_{1} = d(\hat{p}_{1}(\cdot,\cdot), p_{1}(\cdot,\cdot))$$
    where $d(\cdot,\cdot)$ is the distance between two distributions. We choose to minimize the KL-divergence of the two probability distributions. Replacing $d(\cdot,\cdot)$ with KL-divergence and omitting some constants, we have:
    $$O_{1} = -\sum_{(i,j)\in E} w_{ij} \log p_{1}(v_{i},v_{j})$$
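    To make the "omitting some constants" step explicit (this short expansion is added here for clarity; it is not spelled out in the paper):
    $$\mathrm{KL}(\hat{p}_{1}\,\|\,p_{1}) = \sum_{(i,j)\in E}\hat{p}_{1}(i,j)\log\frac{\hat{p}_{1}(i,j)}{p_{1}(v_{i},v_{j})} = \underbrace{\sum_{(i,j)\in E}\frac{w_{ij}}{W}\log\frac{w_{ij}}{W}}_{\text{constant w.r.t. } \vec{u}} - \frac{1}{W}\sum_{(i,j)\in E}w_{ij}\log p_{1}(v_{i},v_{j})$$
    Dropping the constant term and the constant factor $1/W$ yields the weighted log-likelihood form of $O_{1}$ above.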

  • [Second-order Proximity] The second-order proximity between a pair of vertices $(u,v)$ in a network is the similarity between their neighborhood network structures. Mathematically, let $p_{u} = (w_{u,1},...,w_{u,|V|})$ denote the first-order proximity of $u$ with all the other vertices; then the second-order proximity between $u$ and $v$ is determined by the similarity between $p_{u}$ and $p_{v}$. If no vertex is linked from/to both $u$ and $v$, the second-order proximity between $u$ and $v$ is 0.

    The second-order proximity assumes that vertices sharing many connections to other vertices are similar to each other. In this case, each vertex is also treated as a specific “context” and vertices with similar distributions over the “contexts” are assumed to be similar.
    Therefore, each vertex plays two roles: the vertex itself and a specific “context” of other vertices. We introduce two vectors $\vec{u}_{i}$ and $\vec{u}_{i}^{'}$, where $\vec{u}_{i}$ is the representation of $v_{i}$ when it is treated as a vertex, while $\vec{u}_{i}^{'}$ is the representation of $v_{i}$ when it is treated as a specific “context”. For each directed edge $(i,j)$, we first define the probability of “context” $v_{j}$ generated by vertex $v_{i}$ as:

    $$p_{2}(v_{j}|v_{i}) = \frac{\exp(\vec{u}_{j}^{'T} \cdot \vec{u}_{i})}{\sum_{k=1}^{|V|}\exp(\vec{u}_{k}^{'T} \cdot \vec{u}_{i})}$$

    where $|V|$ is the number of vertices or “contexts”.

    The second-order proximity assumes that vertices with similar distributions over the contexts are similar to each other. To preserve the second-order proximity, we should make the conditional distribution of the contexts $p_{2}(\cdot|v_{i})$ specified by the low-dimensional representations close to the empirical distribution $\hat{p}_{2}(\cdot|v_{i})$. Therefore, we minimize the following objective function:

    $$O_{2} = \sum_{i \in V} \lambda_{i} \, d(\hat{p}_{2}(\cdot|v_{i}), p_{2}(\cdot|v_{i}))$$

    where $d(\cdot,\cdot)$ is the distance between two distributions, and $\lambda_{i}$ represents the prestige of vertex $i$ in the network, which can be measured by its degree or estimated through algorithms such as PageRank.

    The empirical distribution $\hat{p}_{2}(\cdot|v_{i})$ is defined as $\hat{p}_{2}(v_{j}|v_{i}) = \frac{w_{ij}}{d_{i}}$, where $w_{ij}$ is the weight of the edge $(i,j)$ and $d_{i} = \sum_{k \in N(i)} w_{ik}$ is the out-degree of vertex $i$, with $N(i)$ the set of out-neighbors of $v_{i}$. Here we adopt KL-divergence as the distance function and, for simplicity, set $\lambda_{i} = d_{i}$:

    $$O_{2} = -\sum_{(i,j)\in E} w_{ij} \log p_{2}(v_{j}|v_{i})$$

    By minimizing this objective $O_{2}$, we are able to represent every vertex $v_{i}$ with a $d$-dimensional vector $\vec{u}_{i}$.
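    To make the conditional distribution concrete, here is a minimal Breeze sketch (names such as contextVectors and conditionalP2 are illustrative, not from the original code) that computes $p_{2}(\cdot|v_{i})$ for one vertex by a softmax over all context vectors:

    import breeze.linalg.{DenseMatrix, DenseVector, sum}
    import breeze.numerics.exp

    object P2Example {
      // Rows of vertexVectors are the u_i, rows of contextVectors are the u_i'.
      def conditionalP2(vertexVectors: DenseMatrix[Double],
                        contextVectors: DenseMatrix[Double],
                        i: Int): DenseVector[Double] = {
        val ui = vertexVectors(i, ::).t          // u_i as a column vector
        val scores = contextVectors * ui         // u_k'^T . u_i for every k
        val unnormalized = exp(scores)           // elementwise exp
        unnormalized / sum(unnormalized)         // normalize over all |V| contexts
      }
    }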

  • [Large-scale Information Network Embedding] Given a large network $G=(V,E)$, the problem of Large-scale Information Network Embedding aims to represent each vertex $v \in V$ in a low-dimensional space $R^{d}$, i.e., to learn a function $f_{G}: V \rightarrow R^{d}$, where $|V| \gg d$. In the space $R^{d}$, both the first-order proximity and the second-order proximity between the vertices are preserved.

    We adopt the asynchronous stochastic gradient algorithm (ASGD) for optimizing $O_{2}$. In each step, the ASGD algorithm samples a mini-batch of edges and then updates the model parameters. If an edge $(i,j)$ is sampled, the gradient with respect to the embedding vector $\vec{u}_{i}$ of vertex $i$ is calculated as:

    $$\frac{\partial O_{2}}{\partial \vec{u}_{i}} = w_{ij} \frac{\partial \log p_{2}(v_{j}|v_{i})}{\partial \vec{u}_{i}}$$
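    Expanding this gradient (a standard softmax derivative, written out here to show where the cost comes from) gives
    $$\frac{\partial \log p_{2}(v_{j}|v_{i})}{\partial \vec{u}_{i}} = \vec{u}_{j}^{'} - \sum_{k=1}^{|V|} p_{2}(v_{k}|v_{i}) \, \vec{u}_{k}^{'}$$
    so a single update touches all $|V|$ context vectors, which motivates the negative-sampling approximation below.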

    Optimizing this objective is computationally expensive, because calculating the conditional probability $p_{2}(\cdot|v_{i})$ requires a summation over the entire set of vertices. To address this problem, we adopt the negative sampling approach.

    $$\arg\min_{U,U'} O_{2} = -\sum_{(i,j)\in E} w_{ij}\Big[\log \sigma(\vec{u}_{j}^{'T}\cdot \vec{u}_{i}) + \sum_{k=1}^{K} E_{v_{k}\sim P_{n}(v)}\big[\log \sigma(-\vec{u}_{k}^{'T}\cdot \vec{u}_{i})\big]\Big]$$
    where $\sigma(x) = 1/(1+\exp(-x))$ is the sigmoid function, $K$ is the number of negative samples drawn for each edge, and $P_{n}(v) \propto d_{v}^{3/4}$ is the noise distribution over vertices.

$$\frac{\partial O_{2}}{\partial \vec{u}_{i}} = -w_{ij}\Big[\vec{u}_{j}^{'}\big(1-\sigma(\vec{u}_{j}^{'T}\cdot \vec{u}_{i})\big) - \sum_{k=1}^{K}\vec{u}_{k}^{'}\,\sigma(\vec{u}_{k}^{'T}\cdot \vec{u}_{i})\Big]$$

$$\frac{\partial O_{2}}{\partial \vec{u}_{j}^{'}} = -w_{ij}\,\vec{u}_{i}\big[1-\sigma(\vec{u}_{j}^{'T}\cdot \vec{u}_{i})\big]$$

$$\frac{\partial O_{2}}{\partial \vec{u}_{k}^{'}} = w_{ij}\,\vec{u}_{i}\,\sigma(\vec{u}_{k}^{'T}\cdot \vec{u}_{i})$$

Update the parameters $\vec{u}_{i}$, $\vec{u}_{j}^{'}$, $\vec{u}_{k}^{'}$ with learning rate $\rho$:

$$\vec{u}_{i} = \vec{u}_{i} - \rho \frac{\partial O_{2}}{\partial \vec{u}_{i}}$$

$$\vec{u}_{j}^{'} = \vec{u}_{j}^{'} - \rho \frac{\partial O_{2}}{\partial \vec{u}_{j}^{'}}$$

$$\vec{u}_{k}^{'} = \vec{u}_{k}^{'} - \rho \frac{\partial O_{2}}{\partial \vec{u}_{k}^{'}}$$

The above is the result of optimizing $O_{2}$, and the obtained $U$ gives the second-order representations. The optimization of $O_{1}$ is similar to that of $O_{2}$, except that only the single set of vectors $U$ needs to be updated: just change $\vec{u}_{j}^{'}$ to $\vec{u}_{j}$ in the objective and gradients above. A compact single-machine sketch of one update step follows below.
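As an illustration of these update rules, here is a minimal single-machine sketch of one edge-sampled step (names such as sgdStep are illustrative; the negative indices would be drawn from $P_{n}(v)$, e.g. with the hash-table trick used in the Spark code later in this post):

import breeze.linalg.{DenseMatrix, DenseVector}

object LineO2Step {
  def sigmoid(x: Double): Double = 1.0 / (1.0 + math.exp(-x))

  // One negative-sampling SGD step for a sampled edge (i, j) with weight w.
  // Rows of uV are the vertex vectors u_i; rows of uC are the context vectors u_i'.
  def sgdStep(uV: DenseMatrix[Double], uC: DenseMatrix[Double],
              i: Int, j: Int, w: Double,
              negatives: Seq[Int], rho: Double): Unit = {
    val ui = uV(i, ::).t.copy
    val ujC = uC(j, ::).t.copy
    val sPos = sigmoid(ujC dot ui)
    // dO2/du_i: positive part; the negative-sample part is accumulated in the loop below.
    var gradUi = ujC * (-w * (1.0 - sPos))
    // dO2/du_j' = -w * u_i * (1 - sigma(u_j'^T u_i))
    uC(j, ::) := (ujC - (ui * (-w * (1.0 - sPos))) * rho).t
    for (k <- negatives) {
      val ukC = uC(k, ::).t.copy
      val sNeg = sigmoid(ukC dot ui)
      gradUi += ukC * (w * sNeg)                     // negative-sample term of dO2/du_i
      uC(k, ::) := (ukC - (ui * (w * sNeg)) * rho).t // dO2/du_k' = w * u_i * sigma(u_k'^T u_i)
    }
    uV(i, ::) := (ui - gradUi * rho).t               // finally update u_i
  }
}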

M-NMF

The objective function is not convex, so we split the optimization into four subproblems and optimize them iteratively, which guarantees that each subproblem converges to a local minimum.

objective function:
$$\min_{M,U,H,C} ||S-MU^{T}||_{F}^{2}+\alpha||H-UC^{T}||_{F}^{2}-\beta \, tr(H^{T}BH)$$

$$s.t.\; M\geq 0,\; U\geq 0,\; H\geq 0,\; C\geq 0,\; tr(H^{T}H)=n$$

M-subproblem: Fixing the other parameters in the objective function leads to a standard NMF formulation; the updating rule for M is:

$$M \leftarrow M \odot \frac{SU}{MU^{T}U}$$

U-subproblem: Updating U with the other parameters in the objective function fixed leads to a joint NMF problem; the updating rule is:

$$U \leftarrow U \odot \frac{S^{T}M+\alpha HC}{U(M^{T}M+\alpha C^{T}C)}$$

C-subproblem: Updating C with the other parameters in the objective function fixed also leads to a standard NMF formulation; the updating rule for C is:

$$C \leftarrow C \odot \frac{H^{T}U}{CU^{T}U}$$

H-subproblem: This is the fixed-point equation that the solution must satisfy at convergence. Given an initial value of H, the successive updating rule for H is:

$$H \leftarrow H \odot \sqrt{\frac{-2\beta B_{1}H+\sqrt{\bigtriangleup}}{8\lambda HH^{T}H}}$$

where $\bigtriangleup = 2\beta(B_{1}H) \odot 2\beta(B_{1}H) + 16\lambda(HH^{T}H)\odot(2\beta AH+2\alpha UC^{T}+(4\lambda - 2\alpha)H)$, $A$ is the adjacency matrix, and $B_{1}$ comes from the modularity matrix decomposition $B = A - B_{1}$. A small Breeze sketch of one full round of updates is given below.
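A minimal Breeze sketch of one round of these alternating multiplicative updates (illustrative only; the function name mnmfIterate is not from the paper, and a small eps is added to the denominators to avoid division by zero):

import breeze.linalg.DenseMatrix
import breeze.numerics.sqrt

object MNMFUpdates {
  val eps = 1e-10

  // One round of the four multiplicative updates for M, U, C and H.
  // S: n x n similarity, A: n x n adjacency, B1: n x n (with B = A - B1),
  // M: n x m, U: n x m, C: k x m, H: n x k.
  // Each update below reuses the freshly updated factors from the same round.
  def mnmfIterate(S: DenseMatrix[Double], A: DenseMatrix[Double], B1: DenseMatrix[Double],
                  M: DenseMatrix[Double], U: DenseMatrix[Double],
                  C: DenseMatrix[Double], H: DenseMatrix[Double],
                  alpha: Double, beta: Double, lambda: Double)
      : (DenseMatrix[Double], DenseMatrix[Double], DenseMatrix[Double], DenseMatrix[Double]) = {
    // M <- M .* (S U) ./ (M U^T U)
    val Mn = M *:* ((S * U) /:/ (M * (U.t * U) + eps))
    // U <- U .* (S^T M + alpha H C) ./ (U (M^T M + alpha C^T C))
    val Un = U *:* ((S.t * Mn + H * C * alpha) /:/ (U * (Mn.t * Mn + C.t * C * alpha) + eps))
    // C <- C .* (H^T U) ./ (C U^T U)
    val Cn = C *:* ((H.t * Un) /:/ (C * (Un.t * Un) + eps))
    // H <- H .* sqrt((-2 beta B1 H + sqrt(Delta)) / (8 lambda H H^T H))
    val twoBetaB1H = B1 * H * (2.0 * beta)
    val HHtH = H * (H.t * H)
    val delta = (twoBetaB1H *:* twoBetaB1H) +
      (HHtH *:* (A * H * (2.0 * beta) + Un * Cn.t * (2.0 * alpha) + H * (4.0 * lambda - 2.0 * alpha))) * (16.0 * lambda)
    val Hn = H *:* sqrt((sqrt(delta) - twoBetaB1H) /:/ (HHtH * (8.0 * lambda) + eps))
    (Mn, Un, Cn, Hn)
  }
}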

LANE

See another article on my blog:
[论文阅读——LANE-Label Informed Attributed Network Embedding原理即实现](https://www.jianshu.com/p/1abb24bb8a04)

LINE-O2 Spark Implementation

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import breeze.linalg._
import breeze.numerics._
import breeze.stats.distributions.Rand
import scala.math._

object LINE {
  // Generate a list of `num` distinct random integers in [0, Range).
  def RandList(Range: Int, num: Int): List[Int] = {
    var resultList: List[Int] = Nil
    while (resultList.length < num) {
      val randomNum = (new util.Random).nextInt(Range)
      if (!resultList.exists(s => s == randomNum)) {
        resultList = resultList ::: List(randomNum)
      }
    }
    resultList
  }

  // A single random integer in [0, Range).
  def RandNumber(Range: Int): Int = (new util.Random).nextInt(Range)

  def Sigmoid(In: Double): Double = 1.0 / (math.exp(-1.0 * In) + 1)

  def main(args: Array[String]) {
    if (args.length < 4) {
      System.err.println("Usage: LINE <Adjacent Matrix> <Adjacent Edge> <Negative Sample> <dimension>")
      System.exit(1)
    }
    // Number of negative samples per edge.
    val NS = args(2).toInt
    println("Negative Sample: " + NS)
    // Dimension of the embedding.
    val Dim = args(3).toInt
    println("Embedding dimension: " + Dim)
    // Spark configuration and context.
    val conf = new SparkConf().setAppName("LINE")
    val sc = new SparkContext(conf)
    // Input adjacency-matrix file.
    val InputFile = sc.textFile(args(0), 3)
    // Input edge-list file.
    val EgdeFile = sc.textFile(args(1), 3)
    // Number of lines of the input file (= number of vertices).
    val InputFileCount = InputFile.count().toInt
    println("InputFileCount(number of lines): " + InputFileCount)
    // Edge sampling rate per iteration.
    val sample_rate: Double = 0.1
    // Size of the hash table used for negative sampling.
    val HashTableSize: Int = 50000
    println("HashTableSize: " + HashTableSize)
    // Vertex and context matrices for the LINE O_2 (second-order) model.
    var U_vertex = DenseMatrix.rand(InputFileCount, Dim, Rand.uniform)
    var U_context = DenseMatrix.rand(InputFileCount, Dim, Rand.uniform)
    // Adjacency-matrix RDD and edge-set RDD.
    val Adjacent = InputFile.map(line => line.split(",")).map(splitline => splitline.map(word => word.toDouble))
    val EgdeSet = EgdeFile.map(line => line.split(",")).map(splitline => splitline.map(word => word.toDouble))
    // collect() will not scale to large data; optimization point 1.
    val AdjacentCollect = Adjacent.collect()
    // Rows and columns of the adjacency matrix.
    val rows = AdjacentCollect.length
    val cols = AdjacentCollect(0).length
    // Flatten the adjacency matrix into a one-dimensional array
    val flattenAdjacent = AdjacentCollect.flatten
    // and convert it into a breeze matrix.
    val AdjacentMatrix = new DenseMatrix(cols, rows, flattenAdjacent).t
    // Degree of every vertex.
    val VertexDegree = Adjacent.map(line => line.reduce((x, y) => x + y))
    // Sum of all vertex degrees.
    var SumOfDegree = VertexDegree.reduce((x, y) => x + y)
    // Smooth the vertex probabilities with the 3/4 power.
    val SmoothProbability = VertexDegree.map(degree => degree / SumOfDegree).map(math.pow(_, 0.75))
    // Cumulative distribution of SmoothProbability.
    val p: Array[Double] = SmoothProbability.collect()
    val CumulativeProbability: Array[Double] = new Array[Double](InputFileCount)
    for (i <- 0 to InputFileCount - 1) {
      var inner_sum: Double = 0.0
      for (j <- 0 to i) {
        inner_sum = inner_sum + p(j)
      }
      CumulativeProbability(i) = inner_sum
    }
    // Normalize the cumulative probabilities and scale them to integers in [0, HashTableSize].
    val HashProbability: Array[Int] = new Array[Int](InputFileCount)
    // Maximum of the cumulative probabilities.
    var max_cpro = CumulativeProbability(InputFileCount - 1)
    for (i <- 0 to InputFileCount - 1) {
      HashProbability(i) = ((CumulativeProbability(i) / max_cpro) * HashTableSize).toInt
    }
    // Hash table of vertex ids: each slot of the HashTableSize-sized array stores a vertex id.
    val HashTable: Array[Int] = new Array[Int](HashTableSize + 1)
    for (i <- 0 to InputFileCount - 1) {
      if (i == 0) {
        val start: Int = 0
        val end: Int = HashProbability(0)
        for (j <- start to end) {
          HashTable(j) = i
        }
      } else {
        val start: Int = HashProbability(i - 1)
        val end: Int = HashProbability(i)
        for (j <- start to end) {
          HashTable(j) = i
        }
      }
    }
    println("HashTable(HashTableSize):" + HashTable(HashTableSize))
    val sample_num = (sample_rate * InputFileCount).toInt
    println("sample_num " + sample_num)
    var O2_Array: Array[Double] = new Array[Double](100)
    for (iterator <- 0 to 99) {
      var learningrate = 0.1
      var O_2 = 0.0
      // Sample `sample_num` edges; "false" means sampling without replacement.
      var sampling = EgdeSet.takeSample(false, sample_num)
      for (i <- 0 to sample_num - 1) {
        var objective = 0.0
        var row: Int = sampling(i)(0).toInt
        var col: Int = sampling(i)(1).toInt
        var u_j_context = U_context(col, ::).t
        var u_j_context_t = U_context(col, ::)
        var u_i_vertex = U_vertex(row, ::).t
        var part1 = (-1) * sampling(i)(2) * u_j_context * (1 - Sigmoid((u_j_context_t * u_i_vertex).toDouble))
        // Draw NS random slots in [0, HashTableSize) to pick negative samples.
        var negativeSampleSum = DenseVector.zeros[Double](Dim)
        var RandomSet: List[Int] = RandList(HashTableSize, NS)
        for (j <- 0 to RandomSet.length - 1) {
          var u_k_context = U_context(HashTable(RandomSet(j)), ::).t
          var u_k_context_t = U_context(HashTable(RandomSet(j)), ::)
          negativeSampleSum = negativeSampleSum + u_k_context * Sigmoid((u_k_context_t * u_i_vertex).toDouble)
        }
        var part2 = sampling(i)(2) * negativeSampleSum
        var d_O2_ui = part1 - part2
        // Update u_i.
        var tmp1 = u_i_vertex - learningrate * d_O2_ui
        for (k1 <- 0 to Dim - 1) {
          U_vertex(row, k1) = tmp1(k1)
        }
        var d_O2_uj_context = (-1) * sampling(i)(2) * u_i_vertex * (1 - Sigmoid((u_j_context_t * u_i_vertex).toDouble))
        // Update u_j' (the context row is `col`, not `row`).
        var tmp2 = u_j_context - learningrate * d_O2_uj_context
        for (k2 <- 0 to Dim - 1) {
          U_context(col, k2) = tmp2(k2)
        }
        // Update each u_k'.
        var negative_cal = 0.0
        for (j <- 0 to RandomSet.length - 1) {
          var u_k_context = U_context(HashTable(RandomSet(j)), ::).t
          var u_k_context_t = U_context(HashTable(RandomSet(j)), ::)
          var sigmoid_uk_ui = Sigmoid((u_k_context_t * u_i_vertex).toDouble)
          // Accumulate log sigma(-u_k'^T u_i) for the objective value.
          negative_cal = negative_cal + math.log(1.0 - sigmoid_uk_ui)
          // Gradient with respect to u_k'.
          var d_O2_uk_context = sampling(i)(2) * u_i_vertex * sigmoid_uk_ui
          var tmp3 = u_k_context - learningrate * d_O2_uk_context
          for (k3 <- 0 to Dim - 1) {
            U_context(HashTable(RandomSet(j)), k3) = tmp3(k3)
          }
        }
        // Track the change of the objective.
        objective = (-1) * sampling(i)(2) * (math.log(Sigmoid((u_j_context_t * u_i_vertex).toDouble)) + negative_cal)
        O_2 = O_2 + objective
      }
      O2_Array(iterator) = O_2
    }
    val U2_HDFS = sc.parallelize(U_vertex.toArray, 3)
    val O2_HDFS = sc.parallelize(O2_Array, 3)
    println("======================")
    U2_HDFS.saveAsTextFile("file:///usr/local/data/U2")
    O2_HDFS.saveAsTextFile("file:///usr/local/data/O2")
    println("======================")
    sc.stop()
  }
}
