前言

If this article interests you, you can click 「【访客必读 - 指引页】一文囊括主页内所有高质量博客」 to see the full set of blog categories and their links.

文章目录

  • 前言
  • 6. Linear Systems Ax = b
    • 6.1 Basic Concepts
      • 6.1.1 LSEs
      • 6.1.2 Operations of LSEs
      • 6.1.3 Augmented Matrix
    • 6.2 Gaussian Elimination Method
      • 6.2.1 Overall Description
      • 6.2.2 Algorithm
    • 6.3 Pivoting Strategies
      • 6.3.1 Background
      • 6.3.2 Maximal Column Pivoting Technique
      • 6.3.3 Maximal Row Pivoting Technique
      • 6.3.4 Partial Pivoting Technique
      • 6.3.5 Scaled Partial Pivoting Technique
    • 6.4 LU Factorization
      • 6.4.1 The advantage of LU Factorization
      • 6.4.2 LU Factorization through Gaussian Elimination
      • 6.4.3 LU Factorization through Direct Computation
    • 6.5 Strictly Diagonally Dominant Matrix
      • 6.5.1 Definition
      • 6.5.2 Property
    • 6.6 Positive Definite Symmetric Matrix
      • 6.6.1 Definition
      • 6.6.2 Property
      • 6.6.3 Theorem
    • 6.7 $LL^T$ Factorization
      • 6.7.1 Definition
      • 6.7.2 Choleski's Algorithm
    • 6.8 $LDL^T$ Factorization
      • 6.8.1 Definition
      • 6.8.2 Algorithm
    • 6.9 Tri-diagonal Linear System
      • 6.9.1 Definition
      • 6.9.2 LU Factorization
      • 6.9.3 Remarks

6. Linear Systems Ax = b

6.1 Basic Concepts

6.1.1 LSEs

The Linear System of Equations (LSEs):
$$(I)\left\{\begin{aligned} E_1:\ & a_{11}x_1+a_{12}x_2+\dots+a_{1n}x_n=b_1 \\ E_2:\ & a_{21}x_1+a_{22}x_2+\dots+a_{2n}x_n=b_2 \\ & \qquad\vdots \\ E_n:\ & a_{n1}x_1+a_{n2}x_2+\dots+a_{nn}x_n=b_n \end{aligned}\right.$$

6.1.2 Operations of LSEs

Scalar multiplication - 数乘

Equation $E_i$ can be multiplied by any nonzero constant $\lambda$:

$$(\lambda E_i)\rightarrow E_i$$

Multiply-and-add - 倍加

Equation $E_j$ can be multiplied by any nonzero constant $\lambda$ and added to equation $E_i$, with the result replacing $E_i$, denoted by

$$(\lambda E_j+E_i)\rightarrow E_i$$

Transposition - 交换

Equations $E_i$ and $E_j$ can be transposed in order, denoted by

$$E_i \leftrightarrow E_j$$

6.1.3 Augmented Matrix

$$\tilde{A}=[A,\mathbf{b}]=\left(\begin{array}{cccc:c} a_{11}&a_{12}&\cdots&a_{1n}&b_1\\ a_{21}&a_{22}&\cdots&a_{2n}&b_2\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ a_{n1}&a_{n2}&\cdots&a_{nn}&b_n \end{array}\right)$$

6.2 Gaussian Elimination Method

6.2.1 Overall Description

The key idea of the Gaussian elimination method is to reduce the original matrix to an upper-triangular matrix, and then use backward substitution to compute the solution.

6.2.2 Algorithm

  • INPUT: the dimension $N$; entries $A(N,N)$, $B(N)$.

  • OUTPUT: the solution $x(N)$, or a message that the LSEs have no unique solution.

  • Step 1: For $k = 1,2,\dots,N-1$, do Steps 2-4.

  • Step 2: Let $p$ be the smallest integer with $k\leq p\leq N$ and $A(p,k)\neq 0$. If no such $p$ can be found, OUTPUT "no unique solution exists"; STOP.

  • Step 3: If $p\neq k$, do the transposition $E_p\leftrightarrow E_k$.

  • Step 4: For $i=k+1,\dots,N$:

    1. Set $m_{i,k}=\dfrac{A(i,k)}{A(k,k)}$;
    2. Set $B(i)=B(i)-m_{i,k}B(k)$;
    3. For $j=k+1,\dots,N$, set $A(i,j)=A(i,j)-m_{i,k}A(k,j)$.

  • Step 5: If $A(N,N)\neq 0$, set $x(N)=\dfrac{B(N)}{A(N,N)}$; else OUTPUT "no unique solution exists"; STOP.

  • Step 6: For $i=N-1,N-2,\dots,1$, set
    $$x(i)=\Big[B(i)-\sum_{j=i+1}^{N}A(i,j)\,x(j)\Big]\Big/A(i,i)$$

  • Step 7: OUTPUT the solution $x(1),\dots,x(N)$.
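The steps above can be sketched in Python as follows; `gaussian_elimination` is a hypothetical helper name, and the sketch assumes NumPy is available for array arithmetic.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by Gaussian elimination with back substitution,
    following Steps 1-7: forward elimination with a row transposition
    whenever the pivot is zero, then backward substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Step 2: smallest p >= k with A[p, k] != 0.
        p = next((i for i in range(k, n) if A[i, k] != 0), None)
        if p is None:
            raise ValueError("no unique solution exists")
        # Step 3: transpose rows if necessary.
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Step 4: eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    if A[n - 1, n - 1] == 0:
        raise ValueError("no unique solution exists")
    # Steps 5-6: backward substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```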

6.3 Pivoting Strategies

6.3.1 Background

From the Gaussian elimination formulas

$$m_{i,k}=\frac{A(i,k)}{A(k,k)},\qquad x(i)=\Big[B(i)-\sum_{j=i+1}^{N}A(i,j)\,x(j)\Big]\Big/A(i,i),$$

we see that if the pivot $a_{kk}^{(k-1)}$ is too small, the multipliers become large and the roundoff error grows.

Therefore, in order to reduce the roundoff error, we want the pivot $a_{kk}^{(k-1)}$ to be as large as possible.
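A tiny numerical experiment (Python with NumPy, on a made-up $2\times 2$ system) shows the effect: with a pivot of $\varepsilon=10^{-17}$ in double precision, elimination without a row interchange destroys the solution entirely.

```python
import numpy as np

# epsilon so small that the huge multiplier swamps everything else in float64
eps = 1e-17
A = np.array([[eps, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])   # the exact solution is very close to x = (1, 1)

# Eliminate using the tiny pivot a_11 = eps (no row interchange).
m = A[1, 0] / A[0, 0]                  # m ~ 1e17, a huge multiplier
a22 = A[1, 1] - m * A[0, 1]            # the original a_22 = 1 is lost entirely
b2 = b[1] - m * b[0]
x2 = b2 / a22                          # comes out as exactly 1.0
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]   # (1 - 1)/eps = 0, badly wrong

# A row interchange (pivoting on a_21 = 1) would recover x1 close to 1.
```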

6.3.2 Maximal Column Pivoting Technique

This method chooses the pivot $a_{kk}^{(k-1)}$ to be the entry of largest absolute value in column $k$ (among rows $k,\dots,n$), interchanging rows if necessary.

6.3.3 Maximal Row Pivoting Technique

This method chooses the pivot $a_{kk}^{(k-1)}$ to be the entry of largest absolute value in row $k$, interchanging columns if necessary.

6.3.4 Partial Pivoting Technique

This method chooses the pivot $a_{kk}^{(k-1)}$ to be the entry of largest absolute value in the entire remaining submatrix (rows and columns $k,\dots,n$).

6.3.5 Scaled Partial Pivoting Technique

$$s_i=\max_{1\leq j\leq n}|a_{ij}|,\qquad \frac{|a_{kk}|}{s_k}=\max_{k\leq i\leq n}\frac{|a_{ik}|}{s_i}$$

This method chooses the pivot so that $\dfrac{|a_{kk}^{(k-1)}|}{s_k}$ is maximal among the remaining rows, where $s_i$ is the scale factor of row $i$.
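The pivot-row choice can be sketched as follows (hypothetical helper name `scaled_pivot_row`, NumPy assumed); the $2\times 2$ example is the classic case where one row has much larger entries than the other, so the scaled ratio picks a different row than the raw column maximum would.

```python
import numpy as np

def scaled_pivot_row(A, k, s):
    """Return the row index p (p >= k) that maximizes |A[i, k]| / s[i],
    where s[i] = max_j |A[i, j]| is the scale factor of row i,
    computed once from the original matrix."""
    ratios = np.abs(A[k:, k]) / s[k:]
    return k + int(np.argmax(ratios))

# Usage: compute the scale factors once, then pick the pivot at each step.
A = np.array([[30.0, 591400.0], [5.291, -6.130]])
s = np.abs(A).max(axis=1)        # s = [591400, 6.130]
p = scaled_pivot_row(A, 0, s)    # row 1 wins: 5.291/6.130 >> 30/591400
```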

6.4 LU Factorization

6.4.1 The advantage of LU Factorization

$$Ax=b,\qquad A=LU$$

$$L=\begin{pmatrix}1&0&0&\cdots&0\\ l_{21}&1&0&\cdots&0\\ l_{31}&l_{32}&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ l_{n1}&l_{n2}&l_{n3}&\cdots&1\end{pmatrix},\qquad U=\begin{pmatrix}u_{11}&u_{12}&u_{13}&\cdots&u_{1n}\\ 0&u_{22}&u_{23}&\cdots&u_{2n}\\ 0&0&u_{33}&\cdots&u_{3n}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&u_{nn}\end{pmatrix}$$

We can solve $LUx=b$ with two triangular solves:

  1. Let $y=Ux$, so that $Ly=b$.
  2. Solve $Ly=b$ for $y$ with forward substitution.
  3. Solve $Ux=y$ for $x$ with backward substitution.
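The two triangular solves can be sketched as follows (hypothetical helper name `solve_lu`, NumPy assumed):

```python
import numpy as np

def solve_lu(L, U, b):
    """Solve LUx = b with two triangular solves:
    forward substitution for Ly = b, then
    backward substitution for Ux = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                 # forward substitution
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):     # backward substitution
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

Each solve costs only $O(n^2)$ operations, which is why a stored factorization pays off when the same $A$ is reused with many right-hand sides.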

6.4.2 LU Factorization through Gaussian Elimination

Theorem

If Gaussian elimination can be performed on the linear system $Ax=b$ without row interchanges, then the matrix $A$ can be factored into the product of a lower-triangular matrix $L$ and an upper-triangular matrix $U$,
A=LUA=LU A=LU
where
$$L=\begin{pmatrix}1&0&0&\cdots&0\\ m_{21}&1&0&\cdots&0\\ m_{31}&m_{32}&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ m_{n1}&m_{n2}&m_{n3}&\cdots&1\end{pmatrix},\qquad U=\begin{pmatrix}a_{11}^{(1)}&a_{12}^{(1)}&a_{13}^{(1)}&\cdots&a_{1n}^{(1)}\\ 0&a_{22}^{(2)}&a_{23}^{(2)}&\cdots&a_{2n}^{(2)}\\ 0&0&a_{33}^{(3)}&\cdots&a_{3n}^{(3)}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&a_{nn}^{(n)}\end{pmatrix}$$

Proof

$$m_{j,1}=\frac{a_{j1}}{a_{11}},\qquad M^{(1)}=\begin{pmatrix}1&0&0&\cdots&0\\ -m_{21}&1&0&\cdots&0\\ -m_{31}&0&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ -m_{n1}&0&0&\cdots&1\end{pmatrix}$$

Thus,

$$A^{(n)}=M^{(n-1)}M^{(n-2)}\cdots M^{(1)}A.$$

Let $U=A^{(n)}$; then

$$[M^{(1)}]^{-1}\cdots[M^{(n-2)}]^{-1}[M^{(n-1)}]^{-1}U=A,$$

$$[M^{(1)}]^{-1}=\begin{pmatrix}1&0&0&\cdots&0\\ m_{21}&1&0&\cdots&0\\ m_{31}&0&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ m_{n1}&0&0&\cdots&1\end{pmatrix},\qquad L=[M^{(1)}]^{-1}\cdots[M^{(n-2)}]^{-1}[M^{(n-1)}]^{-1}.$$

6.4.3 LU Factorization through Direct Computation

$$LU=\begin{pmatrix}1&0&0&\cdots&0\\ l_{21}&1&0&\cdots&0\\ l_{31}&l_{32}&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ l_{n1}&l_{n2}&l_{n3}&\cdots&1\end{pmatrix}\begin{pmatrix}u_{11}&u_{12}&u_{13}&\cdots&u_{1n}\\ 0&u_{22}&u_{23}&\cdots&u_{2n}\\ 0&0&u_{33}&\cdots&u_{3n}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&u_{nn}\end{pmatrix}=A=\begin{pmatrix}a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ a_{21}&a_{22}&a_{23}&\cdots&a_{2n}\\ a_{31}&a_{32}&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ a_{n1}&a_{n2}&a_{n3}&\cdots&a_{nn}\end{pmatrix}$$

Comparing the entries of $LU$ with those of $A$ determines one row of $U$ and one column of $L$ at a time.

Algorithm
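A minimal sketch of the direct factorization (hypothetical name `lu_doolittle`, NumPy assumed): matching the entries of the product $LU$ against $A$ yields one row of $U$ and one column of $L$ per step.

```python
import numpy as np

def lu_doolittle(A):
    """Factor A = LU directly, with L unit lower triangular and
    U upper triangular, by matching entries of LU against A:
    at step k, row k of U and column k of L are determined."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        # Row k of U: u_kj = a_kj - sum_{s<k} l_ks u_sj
        for j in range(k, n):
            U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
        # Column k of L: l_ik = (a_ik - sum_{s<k} l_is u_sk) / u_kk
        for i in range(k + 1, n):
            L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
    return L, U
```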

6.5 Strictly Diagonally Dominant Matrix

6.5.1 Definition

An $n\times n$ matrix is said to be strictly diagonally dominant (严格对角占优) when

$$|a_{ii}|>\sum_{j=1,\,j\neq i}^{n}|a_{ij}|$$

holds for each $i=1,2,\dots,n$.
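The definition translates directly into a check (hypothetical helper name, NumPy assumed):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum_{j != i} |a_ij| for every row i."""
    A = np.abs(np.array(A, dtype=float))
    diag = A.diagonal()
    off = A.sum(axis=1) - diag   # row sums excluding the diagonal
    return bool(np.all(diag > off))
```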

6.5.2 Property

  1. A strictly diagonally dominant matrix $A$ is nonsingular.
  2. Moreover, in this case, Gaussian elimination can be performed on any linear system of the form $Ax=b$ to obtain its unique solution without row or column interchanges, and the computations are stable with respect to the growth of roundoff errors.

Proof for First Property

A matrix is singular means its determinant is zero, which in turn means the $n$ column vectors of the matrix are linearly dependent.

Thus, if matrix $A$ were singular, there would exist a nonzero vector $u$ with $Au=0$. Pick the index $k$ with $|u_k|=\max_j|u_j|>0$. Row $k$ of $Au=0$ gives $a_{kk}u_k=-\sum_{j\neq k}a_{kj}u_j$, so

$$|a_{kk}|\,|u_k|\leq\sum_{j\neq k}|a_{kj}|\,|u_j|\leq |u_k|\sum_{j\neq k}|a_{kj}|,$$

i.e. $|a_{kk}|\leq\sum_{j\neq k}|a_{kj}|$, contradicting strict diagonal dominance. Hence $A$ is nonsingular.

6.6 Positive Definite Symmetric Matrix

6.6.1 Definition

A matrix $A$ is positive definite if it is symmetric and if $x^TAx>0$ for every $n$-dimensional column vector $x\neq 0$.

6.6.2 Property

If $A$ is an $n\times n$ positive definite matrix, then

  1. $A$ is nonsingular;
  2. $a_{ii}>0$ for each $i=1,2,\dots,n$;
  3. $\max\limits_{1\leq k,j\leq n}|a_{kj}|\leq\max\limits_{1\leq i\leq n}|a_{ii}|$;
  4. $a_{ij}^2<a_{ii}a_{jj}$ for each $i\neq j$.

6.6.3 Theorem

6.7 $LL^T$ Factorization

6.7.1 Definition

For an $n\times n$ symmetric positive definite matrix $A$ of the form

$$A=\begin{pmatrix}a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\ a_{12}&a_{22}&a_{23}&\cdots&a_{2n}\\ a_{13}&a_{23}&a_{33}&\cdots&a_{3n}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ a_{1n}&a_{2n}&a_{3n}&\cdots&a_{nn}\end{pmatrix}$$

where $A^T=A$, we can factor the matrix in the form $LL^T=A$, where $L$ is a lower triangular matrix:

$$L=\begin{pmatrix}l_{11}&0&0&\cdots&0\\ l_{21}&l_{22}&0&\cdots&0\\ l_{31}&l_{32}&l_{33}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ l_{n1}&l_{n2}&l_{n3}&\cdots&l_{nn}\end{pmatrix}$$

Thus, we need to determine the elements $l_{ij}$, for $i\in[1,n]$ and $j\in[1,i]$, from the equation $A=LL^T$.

6.7.2 Choleski’s Algorithm

Calculate the values one row at a time.

To factor the positive definite $n\times n$ matrix $A$ into $LL^T$, where $L$ is lower triangular:

  • INPUT: the dimension $n$; entries $a_{ij}$ of $A$, for $i\in[1,n]$ and $j\in[1,i]$.

  • OUTPUT: the entries $l_{ij}$ of $L$, for $i\in[1,n]$ and $j\in[1,i]$.

  • Step 1: Set $l_{11}=\sqrt{a_{11}}$.

  • Step 2: For $j\in[2,n]$, set $l_{j1}=\dfrac{a_{1j}}{l_{11}}$.

  • Step 3: For $i\in[2,n-1]$, do Steps 4 and 5.

  • Step 4: Set $l_{ii}=\Big[a_{ii}-\sum\limits_{j=1}^{i-1}l_{ij}^2\Big]^{\frac{1}{2}}$.

  • Step 5: For $j\in[i+1,n]$, set $l_{ji}=\dfrac{a_{ij}-\sum_{k=1}^{i-1}l_{ik}l_{jk}}{l_{ii}}$.

  • Step 6: Set $l_{nn}=\Big[a_{nn}-\sum\limits_{k=1}^{n-1}l_{nk}^2\Big]^{\frac{1}{2}}$.

  • Step 7: OUTPUT $l_{ij}$ for $j\in[1,i]$ and $i\in[1,n]$; STOP.
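Steps 1-7 can be sketched in Python (hypothetical name `cholesky`, NumPy assumed); the loop merges Steps 1/4/6 and Steps 2/5, since they apply the same formulas at $i=1$, $1<i<n$, and $i=n$.

```python
import numpy as np

def cholesky(A):
    """Choleski's algorithm: factor a symmetric positive definite
    matrix A into L L^T, computing L one row at a time."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        # Off-diagonal entries of row i (Steps 2 and 5).
        for j in range(i):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        # Diagonal entry (Steps 1, 4 and 6).
        L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
    return L
```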


6.8 LDLTLDL^TLDLT Factorization

6.8.1 Definition

If matrix $A$ is a positive definite matrix, then it can be factored as

$$A=LDL^T,$$

where $L$ is unit lower triangular and $D$ is diagonal. We can calculate the entries of $L$ and $D$ one row at a time.

6.8.2 Algorithm
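A minimal row-by-row sketch (hypothetical name `ldlt`, NumPy assumed); unlike the $LL^T$ factorization, it needs no square roots.

```python
import numpy as np

def ldlt(A):
    """Factor a symmetric positive definite A into L D L^T,
    with L unit lower triangular and D diagonal, row by row."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for i in range(n):
        # d_i = a_ii - sum_{k<i} l_ik^2 d_k
        d[i] = A[i, i] - (L[i, :i] ** 2) @ d[:i]
        # l_ji = (a_ji - sum_{k<i} l_jk l_ik d_k) / d_i
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - (L[j, :i] * L[i, :i]) @ d[:i]) / d[i]
    return L, np.diag(d)
```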

6.9 Tri-diagonal Linear System

6.9.1 Definition

An $n\times n$ matrix $A$ is called a band matrix (带状矩阵) if there exist integers $p$ and $q$, with $1<p,q<n$, such that $a_{ij}=0$ whenever $i+p\leq j$ or $j+q\leq i$. The bandwidth (带宽) of a band matrix is defined as $w=p+q-1$. A tri-diagonal matrix is the band matrix with $p=q=2$, i.e. bandwidth $w=3$: only the main diagonal and the two adjacent diagonals may be nonzero.

6.9.2 LU Factorization

$$A=LU$$


To solve $Ax=LUx=b$, there are two steps:

  1. Let $z=Ux$, and solve $Lz=b$ for $z$;
  2. Solve $Ux=z$ for $x$.
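For a tri-diagonal system the two steps collapse into the well-known Thomas algorithm, sketched below (hypothetical helper name, NumPy assumed); it runs in $O(n)$ time and storage.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (length n-1),
    main diagonal b (length n), super-diagonal c (length n-1), and
    right-hand side d, via one forward elimination sweep and one
    backward substitution sweep."""
    n = len(b)
    cp = np.zeros(n - 1)   # modified super-diagonal
    dp = np.zeros(n)       # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```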

6.9.3 Remarks

  1. Band matrices are usually sparse matrices, so instead of a two-dimensional array we can use one-dimensional arrays (one per nonzero diagonal) to store the entries of the matrix.
  2. Banded matrices arise in numerical methods for partial differential equations and are a common matrix form.
