Gauss quadrature approximation by Lanczos algorithm

  • 1. Gauss Quadrature
    • 1.1 Gauss quadrature with weight function
    • 1.2 Calculation of the weights and nodes
  • 2. Lanczos algorithm
    • 2.1 Krylov subspace
    • 2.2 Arnoldi algorithm
    • 2.3 Lanczos algorithm
  • 3. Two important theorems
    • 3.1 Theorem 1
    • 3.2 Theorem 2
  • 4. Gauss quadrature by Lanczos algorithm

1. Gauss Quadrature

1.1 Gauss quadrature with weight function

For the integral
$$I(f)=\int_{a}^{b} \rho(x) f(x)\,dx, \tag{1}$$
if the quadrature formula [2, p.220]
$$\int_{a}^{b}\rho(x)f(x)\,dx\approx \sum_{k=0}^{n}A_{k}f(x_{k}) \tag{2}$$
has algebraic degree of precision $2n+1$, it is called a Gauss-type quadrature formula with weight function $\rho(x)$.
The nodes $x_{k}$ are called Gauss points (nodes), and the $A_{k}$ are called the quadrature coefficients (weights) with respect to $\rho(x)$.

1.2 Calculation of the weights and nodes

(Note: This section explains how to calculate the Gauss points and coefficients; for the theoretical foundation, see [2, p.223].)

  • Orthogonal polynomials method: For a given special weight function $\rho(x)$, find the orthogonal polynomials associated with $\rho(x)$.
    Let $p_{n+1}(x)$ be the orthogonal polynomial of exact degree $n+1$ and suppose its zeros are $t_{0},\cdots,t_{n}$; then these are also the Gauss points $\{x_{k}\}_{k=0}^{n}$, and the corresponding coefficients $\{A_{k}\}_{k=0}^{n}$ are given by
    $$A_{k}=\int_{a}^{b}\rho(x)l_{k}(x)\,dx, \tag{3}$$
    where the $l_{k}(t)$ are the Lagrange interpolation basis polynomials
    $$l_{k}(t)=\prod_{j=0,\,j\neq k}^{n}\frac{t-t_{j}}{t_{k}-t_{j}}.$$
    Note: Constructing the Gauss quadrature formula from the zeros of orthogonal polynomials is only practical for special weight functions whose orthogonal polynomials are known in closed form, such as $1$ (Legendre), $\frac{1}{\sqrt{1-x^{2}}}$ (Chebyshev, first kind), $\sqrt{1-x^{2}}$ (Chebyshev, second kind), and $e^{-x^{2}}$ (Hermite).
    For a general weight function, the following undetermined coefficient method is usually adopted.
  • Undetermined coefficient method: Equation (2) must hold exactly for $\{1,x,x^{2},\cdots,x^{2n+1}\}$.
    This gives the following $2n+2$ equations for the quadrature nodes $\{x_{k}\}_{k=0}^{n}$ and quadrature coefficients $\{A_{k}\}_{k=0}^{n}$ (see the sketch after this list):
    $$\left\{\begin{array}{l}\sum_{k=0}^{n} A_{k}=\int_{a}^{b} \rho(x)\,\mathrm{d}x \\ \sum_{k=0}^{n} x_{k} A_{k}=\int_{a}^{b} \rho(x)\, x\,\mathrm{d}x \\ \sum_{k=0}^{n} x_{k}^{2} A_{k}=\int_{a}^{b} \rho(x)\, x^{2}\,\mathrm{d}x \\ \vdots \\ \sum_{k=0}^{n} x_{k}^{2n+1} A_{k}=\int_{a}^{b} \rho(x)\, x^{2n+1}\,\mathrm{d}x\end{array}\right.$$
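To make the definition concrete, here is a minimal NumPy sketch (the point count `n_pts` and the monomial test are illustrative choices, not from the text) that builds the Gauss-Legendre rule for the weight $\rho(x)\equiv 1$ on $[-1,1]$ and checks its degree of precision: an $n$-point rule is exact for polynomials up to degree $2n-1$ (equivalently, $n+1$ points give degree $2n+1$, as in equation (2)).

```python
import numpy as np

# n-point Gauss-Legendre rule for the weight rho(x) = 1 on [-1, 1];
# an n-point rule has algebraic degree of precision 2n - 1
# (equivalently, n + 1 points give degree 2n + 1 as in Eq. (2)).
n_pts = 5
nodes, weights = np.polynomial.legendre.leggauss(n_pts)

for deg in range(2 * n_pts + 1):
    quad = np.sum(weights * nodes**deg)               # sum_k A_k x_k^deg
    exact = 0.0 if deg % 2 == 1 else 2.0 / (deg + 1)  # int_{-1}^{1} x^deg dx
    status = "exact" if abs(quad - exact) < 1e-12 else "NOT exact"
    print(f"degree {deg:2d}: quadrature = {quad: .12f}, exact = {exact: .12f} ({status})")
```

Running this shows all degrees up to $2n-1=9$ integrated exactly, while degree $10$ is not, which is precisely the defining property of the Gauss rule.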

2. Lanczos algorithm

2.1 Krylov subspace

  • Krylov subspace: Let $A$ be a real (possibly unsymmetric) matrix of order $n$, let $v$ be a given vector, and let
    $$K_{k}=\left(v,\; Av,\; \cdots,\; A^{k-1} v\right)$$
    be the Krylov matrix of dimension $n\times k$. The subspace spanned by the columns of $K_{k}$ is called a Krylov subspace and denoted by $K_{k}(A,v)$ or $K(A,v)$.

Note: The natural basis of the Krylov subspace $K(A,v)$ given by the columns of the Krylov matrix $K_{k}$ is badly conditioned when $k$ is large.
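This ill-conditioning is easy to observe numerically. A minimal sketch (matrix size, seed, and the values of $k$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))
A = (A + A.T) / 2          # symmetric test matrix
v = rng.standard_normal(n)

# Build the Krylov matrix K_k = [v, Av, ..., A^{k-1} v] column by column
for k in [5, 10, 20]:
    cols = [v]
    for _ in range(k - 1):
        cols.append(A @ cols[-1])
    K = np.column_stack(cols)
    print(f"k = {k:2d}: cond(K_k) = {np.linalg.cond(K):.3e}")
```

The condition number grows rapidly with $k$, because the columns $A^{j}v$ all converge toward the dominant eigenvector; this is why an orthonormal basis is needed.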

2.2 Arnoldi algorithm

The Arnoldi algorithm constructs an orthonormal basis of the Krylov subspace $K(A,v)$ by applying a variant of the Gram-Schmidt orthogonalization process.

  • Arnoldi algorithm: Set $v^{(j+1)}=Av^{(j)}$ with $v^{(1)}=v$; then $K(A,v)$ is spanned by the vectors $\{v^{(j)}\}_{j=1}^{k}$. To construct orthonormal basis vectors $v^{j}$, instead of orthogonalizing $A^{j}v$ against the previous vectors, one orthogonalizes $Av^{j}$.
    Starting from $v^{1}=v$ (normalized), the $(j+1)$st vector of the basis is computed from the previous vectors:
  • Projection: $h_{i,j}=(Av^{j},v^{i}),\; i=1,\cdots,j$,
  • Orthogonalize: $\tilde{v}^{j+1}=Av^{j}-\sum_{i=1}^{j}h_{i,j}v^{i}$,
  • Length: $h_{j+1,j}=\|\tilde{v}^{j+1}\|$ (if $h_{j+1,j}=0$, stop),
  • Normalize: $v^{j+1}=\dfrac{\tilde{v}^{j+1}}{h_{j+1,j}}$.

Since $h_{j+1,j}v^{j+1}=Av^{j}-\sum_{i=1}^{j}h_{i,j}v^{i}$,
collecting the vectors $v^{j},\, j=1,\cdots,k$ in a matrix $V_{k}=[v^{1},v^{2},\cdots,v^{k}]$, the relations defining the vectors can be written in matrix form as
$$A V_{k}=V_{k} H_{k}+h_{k+1, k} v^{k+1}\left(e^{k}\right)^{T}, \tag{4}$$
where $H_{k}$ is an upper Hessenberg matrix with elements $h_{i,j}$, and $h_{i,j}=0$ for $j=1,\cdots,i-2,\; i>2$:
$$H_{k}=\begin{bmatrix} h_{1,1} & h_{1,2} & h_{1,3} & \cdots & h_{1,k}\\ h_{2,1} & h_{2,2} & h_{2,3} & \cdots & h_{2,k}\\ 0 & h_{3,2} & h_{3,3} & \cdots & h_{3,k}\\ 0 & 0 & h_{4,3} & \cdots & h_{4,k}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & h_{k,k-1} & h_{k,k} \end{bmatrix}$$
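The steps above translate directly into code. A minimal sketch of the Arnoldi iteration in modified Gram-Schmidt form (the helper name `arnoldi` and the test sizes are illustrative choices; practical codes add reorthogonalization), which also checks relation (4):

```python
import numpy as np

def arnoldi(A, v, k):
    """Return V (n x (k+1)) and H ((k+1) x k) with A V[:, :k] = V H."""
    n = len(v)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = w @ V[:, i]          # projection h_{i,j} = (A v^j, v^i)
            w = w - H[i, j] * V[:, i]      # orthogonalize incrementally (MGS)
        H[j + 1, j] = np.linalg.norm(w)    # length
        if H[j + 1, j] == 0:               # invariant subspace found: stop
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]      # normalize
    return V, H

# quick check of relation (4): A V_k = V_{k+1} H,
# where the extra column carries h_{k+1,k} v^{k+1} (e^k)^T
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
V, H = arnoldi(A, rng.standard_normal(8), 5)
print(np.allclose(A @ V[:, :5], V @ H))    # True
```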

2.3 Lanczos algorithm

Multiplying equation (4) on the left by $V_{k}^{T}$ and using the orthonormality of the columns, we have
$$H_{k}=V_{k}^{T}A V_{k}.$$
If the matrix $A$ is symmetric, the Hessenberg matrix $H_{k}$ is tridiagonal and is denoted by $J_{k}$; that is, $(J_{k})_{i,j}=0$ for $|i-j|>1$. This implies that the new vector $v^{j+1}$ can be computed using only the two previous vectors $v^{j}$ and $v^{j-1}$:
$$h_{j+1,j}v^{j+1}=Av^{j}-\sum_{i=1}^{j}h_{i,j}v^{i}=Av^{j}-h_{j-1,j}v^{j-1}-h_{j,j}v^{j}.$$
In matrix form, letting $\eta_{i}$ denote the nonzero off-diagonal entries of $J_{k}$,
$$A V_{k}=V_{k} J_{k}+\eta_{k} v^{k+1}\left(e^{k}\right)^{T}. \tag{5}$$

Equation (5) describes in matrix form the elegant Lanczos algorithm. To simplify notation, start from a nonzero vector: $v^{1}=v/\|v\|$, $\alpha_{1}=(Av^{1},v^{1})$, $\tilde{v}^{2}=Av^{1}-\alpha_{1}v^{1}$, and then for $k=2,3,\cdots$,
$$\begin{aligned} \eta_{k-1}&=\|\tilde{v}^{k}\|,\\ v^{k}&=\frac{\tilde{v}^{k}}{\eta_{k-1}},\\ \alpha_{k}&=\left(v^{k}, A v^{k}\right)=\left(v^{k}\right)^{T} A v^{k},\\ \tilde{v}^{k+1}&=A v^{k}-\alpha_{k} v^{k}-\eta_{k-1} v^{k-1}, \end{aligned}$$
giving the tridiagonal matrix
$$J_{k}=\begin{bmatrix} \alpha_{1} & \eta_{1} & 0 & \cdots & 0\\ \eta_{1} & \alpha_{2} & \eta_{2} & \cdots & 0\\ 0 & \eta_{2} & \alpha_{3} & \cdots & 0\\ 0 & 0 & \eta_{3} & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \eta_{k-1} & \alpha_{k} \end{bmatrix}$$
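A minimal sketch of this iteration (the helper name `lanczos` and the test sizes are illustrative choices; no reorthogonalization is performed, so in floating point the Lanczos vectors slowly lose orthogonality for larger $k$):

```python
import numpy as np

def lanczos(A, v, k):
    """Run k steps of Lanczos; return alpha (diagonal of J_k) and eta (off-diagonal)."""
    alpha, eta = np.zeros(k), np.zeros(k - 1)
    v_prev = np.zeros_like(v)
    v_cur = v / np.linalg.norm(v)                   # v^1 = v / ||v||
    for j in range(k):
        w = A @ v_cur
        alpha[j] = v_cur @ w                        # alpha_k = (v^k, A v^k)
        w = w - alpha[j] * v_cur - (eta[j - 1] * v_prev if j > 0 else 0)
        if j < k - 1:
            eta[j] = np.linalg.norm(w)              # eta_k = ||v~^{k+1}||
            v_prev, v_cur = v_cur, w / eta[j]       # normalize
    return alpha, eta

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 50)); A = (A + A.T) / 2
alpha, eta = lanczos(A, rng.standard_normal(50), 10)
J = np.diag(alpha) + np.diag(eta, 1) + np.diag(eta, -1)   # tridiagonal J_k
print("Ritz values:", np.linalg.eigvalsh(J))
print("extreme eigenvalues of A:", np.linalg.eigvalsh(A)[[0, -1]])
```

Note how the Ritz values (eigenvalues of $J_{k}$) already approximate the extreme eigenvalues of $A$ after only a few steps; this spectral information is exactly what the quadrature construction in Section 4 exploits.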

3. Two important theorems

3.1 Theorem 1

Theorem 1 [1, Th. 4.1]: Let $\chi_{k}(\lambda)$ be the determinant of $J_{k}-\lambda I$ (a monic polynomial up to sign); then
$$v^{k}=p_{k}(A) v^{1}, \quad p_{k}(\lambda)=(-1)^{k-1} \frac{\chi_{k-1}(\lambda)}{\eta_{1} \cdots \eta_{k-1}},\; k>1, \quad p_{1} \equiv 1.$$
The polynomials $p_{k}$ of degree $k-1$ are called the normalized Lanczos polynomials.

This theorem describes the most important property of the Lanczos algorithm: the Lanczos vectors $v^{k}$ are given by a polynomial in the matrix $A$ applied to the initial vector $v^{1}$.

It is easy to verify (by expanding the determinant along the last row) that
$$\det(J_{k+1}) = \alpha_{k+1}\det(J_{k})-\eta_{k}^{2}\det(J_{k-1}), \tag{6}$$
and
$$\det(J_{k+1}-\lambda I) = (\alpha_{k+1}-\lambda)\det(J_{k}-\lambda I)-\eta_{k}^{2}\det(J_{k-1}-\lambda I). \tag{7}$$
From the expression for the polynomial $p_{k}(\lambda)$ in Theorem 1, it follows that the Lanczos polynomials satisfy a scalar three-term recurrence,
$$\eta_{k} p_{k+1}(\lambda)=\left(\lambda-\alpha_{k}\right) p_{k}(\lambda)-\eta_{k-1} p_{k-1}(\lambda), \quad k=1,2, \ldots, \tag{8}$$
with initial conditions $p_{0} \equiv 0,\; p_{1} \equiv 1$.
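The squared $\eta_{k}$ in (6) and (7) is what the last-row expansion of the determinant produces; a quick numerical check (random entries and an arbitrary $\lambda$, both illustrative choices) confirms it:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 6
alpha = rng.standard_normal(k + 1)        # diagonal entries alpha_1..alpha_{k+1}
eta = rng.standard_normal(k)              # off-diagonal entries eta_1..eta_k

def J(m):
    """Leading m x m section J_m of the tridiagonal matrix."""
    return np.diag(alpha[:m]) + np.diag(eta[:m - 1], 1) + np.diag(eta[:m - 1], -1)

lam = 0.7
lhs = np.linalg.det(J(k + 1) - lam * np.eye(k + 1))
rhs = (alpha[k] - lam) * np.linalg.det(J(k) - lam * np.eye(k)) \
      - eta[k - 1] ** 2 * np.linalg.det(J(k - 1) - lam * np.eye(k - 1))
print(np.isclose(lhs, rhs))  # True: Eq. (7) with eta_k squared
```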

3.2 Theorem 2

Theorem 2 [1, Th. 4.2]:
Consider the Lanczos vectors $v^{k}$. There exists a measure $\alpha(\lambda)$ such that
$$\left(v^{k}, v^{l}\right)=\left\langle p_{k}, p_{l}\right\rangle=\int_{a}^{b} p_{k}(\lambda) p_{l}(\lambda)\, d\alpha(\lambda), \tag{9}$$
where $a\leq \lambda_{1}=\lambda_{\min}$ and $b\geq \lambda_{n}=\lambda_{\max}$, $\lambda_{\min}$ and $\lambda_{\max}$ being the smallest and largest eigenvalues of $A$, and the $p_{i}$ are the Lanczos polynomials associated with $A$ and $v^{1}$. The measure is
$$\alpha(\lambda)=\left\{\begin{array}{ll}0, & \text{if } \lambda<\lambda_{1}, \\ \sum_{j=1}^{i}\left[\hat{v}_{j}\right]^{2}, & \text{if } \lambda_{i} \leq \lambda<\lambda_{i+1}, \\ \sum_{j=1}^{n}\left[\hat{v}_{j}\right]^{2}, & \text{if } \lambda_{n} \leq \lambda,\end{array}\right. \tag{10}$$
where $\hat{v}=Q^{T}v^{1}$ contains the components of $v^{1}$ in an orthonormal eigenvector basis of $A$ (see Section 4).

Note: For simplicity, we suppose here that the eigenvalues of $A$ are distinct.
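Identity (9) can be checked numerically: run a few Lanczos steps, evaluate $p_{1},\cdots,p_{k}$ at the eigenvalues of $A$ via recurrence (8), and compare the weighted sums against the inner products $(v^{k},v^{l})$. A self-contained sketch (sizes and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 30, 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
lam, Q = np.linalg.eigh(A)                  # A = Q diag(lam) Q^T
v1 = rng.standard_normal(n); v1 /= np.linalg.norm(v1)
vhat = Q.T @ v1                             # components of v^1 in the eigenbasis

# Lanczos: collect the vectors v^1..v^k and the coefficients alpha, eta
V = np.zeros((n, k)); V[:, 0] = v1
alpha, eta = np.zeros(k), np.zeros(k)
for j in range(k):
    w = A @ V[:, j]
    alpha[j] = V[:, j] @ w
    w -= alpha[j] * V[:, j] + (eta[j - 1] * V[:, j - 1] if j > 0 else 0)
    if j < k - 1:
        eta[j] = np.linalg.norm(w); V[:, j + 1] = w / eta[j]

# Evaluate p_1..p_k at all eigenvalues via recurrence (8), p_0 = 0, p_1 = 1
P = np.zeros((k, n))
P[0] = 1.0
P[1] = (lam - alpha[0]) / eta[0]            # eta_1 p_2 = (lam - alpha_1) p_1
for j in range(1, k - 1):
    P[j + 1] = ((lam - alpha[j]) * P[j] - eta[j - 1] * P[j - 1]) / eta[j]

# (v^k, v^l) should equal sum_j p_k(lam_j) p_l(lam_j) vhat_j^2  -- Eq. (9)
lhs = V.T @ V
rhs = P @ np.diag(vhat ** 2) @ P.T
print(np.allclose(lhs, rhs))  # True (up to rounding)
```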

4. Gauss quadrature by Lanczos algorithm

For a symmetric matrix $A$ with eigenvalue decomposition $A=Q\Lambda Q^{T}$, a smooth function $f$, and a random unit vector $v$, we have
$$v^{T}f(A)v=v^{T}Qf(\Lambda)Q^{T}v.$$
Setting $\hat{v}=Q^{T}v$, then
$$v^{T}f(A)v=\hat{v}^{T}f(\Lambda)\hat{v}=\sum_{j=1}^{n}f(\lambda_{j})[\hat{v}_{j}]^{2}.$$
Consider this sum as a Riemann-Stieltjes integral:
$$\sum_{j=1}^{n}f(\lambda_{j})[\hat{v}_{j}]^{2}=\int_{a}^{b}f(t)\,d\alpha(t), \tag{11}$$
where the measure $\alpha(t)$ is defined as
$$\alpha(t)=\left\{\begin{array}{ll}0, & \text{if } t<\lambda_{1}=a, \\ \sum_{j=1}^{i}\left[\hat{v}_{j}\right]^{2}, & \text{if } \lambda_{i} \leq t<\lambda_{i+1}, \\ \sum_{j=1}^{n}\left[\hat{v}_{j}\right]^{2}, & \text{if } b=\lambda_{n} \leq t.\end{array}\right.$$

The integral in equation (11) can be estimated by Gauss quadrature,
$$\int_{a}^{b}f(t)\,d\alpha(t)\approx \sum_{k=1}^{n}A_{k}f(x_{k}). \tag{12}$$
As stated in Theorem 2, the $\{p_{k}\}_{k=1}^{n}$ are orthonormal polynomials with respect to the measure $\alpha(t)$, the same measure as in equation (10).
We can rewrite the three-term recurrence (8) in matrix form. Let $\mathbf{p}(\lambda)=[p_{1}(\lambda),p_{2}(\lambda),\cdots,p_{k}(\lambda)]^{T}$ and $p_{0}(\lambda)=0$; then
$$\lambda \mathbf{p}(\lambda)=J_{k}\,\mathbf{p}(\lambda)+\eta_{k}p_{k+1}(\lambda)\, e_{k}. \tag{13}$$
Hence the zeros of $p_{k+1}(\lambda)$ are precisely the eigenvalues of $J_{k}$.

Note: $p_{k+1}(\lambda)$ is a polynomial of degree $k$.

So the Gauss points in equation (12) are obtained as the eigenvalues of $J_{k}$, which is produced by the Lanczos algorithm applied to the matrix $A$; the corresponding weights $A_{k}$ are the squared first components of the normalized eigenvectors of $J_{k}$ [1].
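Putting the pieces together: run $k$ Lanczos steps on $A$, take the eigenvalues $\theta_{i}$ of $J_{k}$ as Gauss nodes and the squared first components of its normalized eigenvectors as weights, and sum. A sketch estimating $v^{T}e^{A}v$ (the choice $f=\exp$, the sizes, and the helper name `lanczos_quadrature` are illustrative, not from the text):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal, expm

def lanczos_quadrature(A, v, k, f):
    """Estimate v^T f(A) v by a k-point Gauss rule built from k Lanczos steps (k >= 2)."""
    nrm2 = v @ v
    alpha, eta = np.zeros(k), np.zeros(k - 1)
    v_prev, v_cur = np.zeros_like(v), v / np.sqrt(nrm2)
    for j in range(k):
        w = A @ v_cur
        alpha[j] = v_cur @ w
        w = w - alpha[j] * v_cur - (eta[j - 1] * v_prev if j > 0 else 0)
        if j < k - 1:
            eta[j] = np.linalg.norm(w)
            v_prev, v_cur = v_cur, w / eta[j]
    theta, Y = eigh_tridiagonal(alpha, eta)   # eigenpairs of J_k
    weights = Y[0, :] ** 2                    # squared first eigenvector components
    return nrm2 * np.sum(weights * f(theta))  # Gauss estimate of v^T f(A) v

rng = np.random.default_rng(5)
n = 200
A = rng.standard_normal((n, n))
A = (A + A.T) / (2 * np.sqrt(n))              # spectrum roughly in [-2, 2]
v = rng.standard_normal(n)

exact = v @ expm(A) @ v                       # dense reference value
for k in [2, 4, 8, 16]:
    est = lanczos_quadrature(A, v, k, np.exp)
    print(f"k = {k:2d}: estimate = {est: .6e}, rel. error = {abs(est - exact) / abs(exact):.2e}")
```

The error decreases rapidly with $k$: each additional Lanczos step adds one Gauss node, and the rule with $k$ nodes is exact for polynomials of degree up to $2k-1$, so smooth functions such as $\exp$ are captured after only a handful of steps.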


References:
[1] Golub, G. H. and Meurant, G. Matrices, Moments and Quadrature with Applications. Princeton University Press, 2010.
[2] Sun Zhizhong, Yuan Weiping, Wen Zhenchu. Numerical Analysis (3rd ed.). Southeast University Press. (孙志忠, 袁慰平, 闻震初. 数值分析(第3版), 东南大学出版社.)
