Linear Algebra Handnote (1)

  • If $L$ is lower triangular with 1's on the diagonal, so is $L^{-1}$

  • Elimination = Factorization: $A = LU$
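
A quick numerical check of this factorization (my sketch, not from the note; the matrix is an arbitrary invertible example) using SciPy's `lu`, which may also permute rows:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2.,  1., 1.],
              [ 4., -6., 0.],
              [-2.,  7., 2.]])

P, L, U = lu(A)                    # SciPy returns A = P @ L @ U
print(np.diag(L))                  # L has 1's on its diagonal
print(np.allclose(A, P @ L @ U))   # True: elimination = factorization
```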

  • $A^T$ is the matrix that makes these two inner products equal for every $x$ and $y$:

    $(Ax)^T y = x^T (A^T y)$

    Inner product of $Ax$ with $y$ = inner product of $x$ with $A^T y$
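
A two-line check of the identity (my addition; the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))                # A is m x n: x in R^2, y in R^3
x, y = rng.standard_normal(2), rng.standard_normal(3)

print(np.isclose((A @ x) @ y, x @ (A.T @ y)))  # True: (Ax)^T y = x^T (A^T y)
```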

  • DEFINITION: The space $\mathbf{R}^n$ consists of all column vectors $v$ with $n$ components

  • DEFINITION: A subspace of a vector space is a set of vectors (including 0) that satisfies two requirements: (1) $v+w$ is in the subspace, (2) $cv$ is in the subspace

  • The column space consists of all linear combinations of the columns. The combinations are all possible vectors $Ax$. They fill the column space $C(A)$

    The system $Ax=b$ is solvable if and only if $b$ is in the column space of $A$

  • The nullspace of $A$ consists of all solutions to $Ax=0$. These vectors $x$ are in $\mathbf{R}^n$. The nullspace containing all solutions of $Ax=0$ is denoted by $N(A)$

  • The nullspace is a subspace of $\mathbf{R}^n$; the column space is a subspace of $\mathbf{R}^m$
  • The nullspace consists of all combinations of the special solutions
  • Nullspace (plane) perpendicular to row space (line)

  • $Ax=0$ has $r$ pivots and $n-r$ free variables: $n$ columns minus $r$ pivot columns. The nullspace matrix $N$ (whose columns are the special solutions) contains the $n-r$ special solutions. Then $AN=0$ (see the sketch below)

  • $Ax=0$ has $r$ independent equations, so it has $n-r$ independent solutions.
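
A sketch of the special solutions with SymPy (my example matrix; it has rank 2, so there are $n-r = 2$ special solutions):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 4],
               [3, 8, 6, 16]])       # r = 2 pivots, n = 4 columns

N = sp.Matrix.hstack(*A.nullspace())  # nullspace matrix: one special solution per column
print(N)
print(A * N)                          # zero matrix: AN = 0
```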

  • $x_{particular}$: the particular solution solves $Ax_p = b$

  • $x_{nullspace}$: the $n-r$ special solutions solve $Ax_n = 0$

  • Complete solution: one $x_p$, many $x_n$: $x = x_p + x_n$
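
The split $x = x_p + x_n$, computed symbolically (my sketch; $b$ is chosen so the system is solvable):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 4],
               [3, 8, 6, 16]])
b = sp.Matrix([6, 22])

x, params = A.gauss_jordan_solve(b)    # full solution: x_p plus free parameters
x_p = x.subs({p: 0 for p in params})   # set the free parameters to 0 -> one particular x_p
print(x_p, A * x_p - b)                # A x_p = b, so the residual is the zero vector
```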

  • The four possibilities for linear equations depend on the rank $r$:

    • $r = m$ and $r = n$: square and invertible, $Ax=b$ has 1 solution
    • $r = m$ and $r < n$: short and wide, $Ax=b$ has $\infty$ solutions
    • $r < m$ and $r = n$: tall and thin, $Ax=b$ has 0 or 1 solutions
    • $r < m$ and $r < n$: not full rank, $Ax=b$ has 0 or $\infty$ solutions
    • Independent vectors (no extra vectors)

    • Spanning a space (enough vectors to produce the rest)
    • Basis for a space (not too many or too few)
    • Dimension of a space (the number of vectors in a basis)

    • Any set of $n$ vectors in $\mathbf{R}^m$ must be linearly dependent if $n > m$

    • The columns span the column space. The rows span the row space.

      • The column space / row space of a matrix is the subspace of $\mathbf{R}^m$ / $\mathbf{R}^n$ spanned by the columns / rows.
    • A basis for a vector space is a sequence of vectors with two properties: the vectors are linearly independent and they span the space.

      • The basis is not unique. But the combination that produces the vector is unique.
      • The columns of an $n \times n$ invertible matrix are a basis for $\mathbf{R}^n$.
      • The pivot columns of $A$ are a basis for its column space (see the sketch below).
    • DEFINITION: The dimension of a space is the number of vectors in every basis.

    • The space $Z$ that contains only the zero vector has dimension zero. The empty set (containing no vectors) is a basis for $Z$. We can never allow the zero vector into a basis, because then linear independence is lost.
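
Pivot columns as a basis, in SymPy (my example; the `rref` pivot indices pick out the basis columns):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 2, 4],
               [3, 8, 6, 16]])

R, pivot_cols = A.rref()      # reduced echelon form and pivot column indices
print(pivot_cols)             # those columns of A are a basis for C(A)
print(A.columnspace())        # SymPy returns exactly the pivot columns
print(A.rank())               # dimension of the column space (and of the row space)
```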

      Four Fundamental Subspaces
      1. The row space $C(A^T)$, a subspace of $\mathbf{R}^n$
      2. The column space $C(A)$, a subspace of $\mathbf{R}^m$
      3. The nullspace $N(A)$, a subspace of $\mathbf{R}^n$
      4. The left nullspace $N(A^T)$, a subspace of $\mathbf{R}^m$

      1. $A$ has the same row space as $R$: same dimension $r$ and same basis.
      2. The column space of $A$ has dimension $r$. The number of independent columns equals the number of independent rows.
      3. $A$ has the same nullspace as $R$: same dimension $n-r$ and same basis.
      4. The left nullspace of $A$ (the nullspace of $A^T$) has dimension $m-r$.

      Fundamental Theorem of Linear Algebra, Part 1

      • The column space and row space both have dimension $r$.
      • The nullspaces have dimensions $n-r$ and $m-r$.

      • Every rank one matrix has the special form $A = uv^T$ = column $\times$ row.

      • The nullspace $N(A)$ and the row space $C(A^T)$ are orthogonal subspaces of $\mathbf{R}^n$.

      • DEFINITION: The orthogonal complement of a subspace $V$ contains every vector that is perpendicular to $V$.

      Fundamental Theorem of Linear Algebra, Part 2
      * $N(A)$ is the orthogonal complement of the row space $C(A^T)$ (in $\mathbf{R}^n$)
      * $N(A^T)$ is the orthogonal complement of the column space $C(A)$ (in $\mathbf{R}^m$)

      Projection Onto a Line
      * The projection matrix onto the line through $a$ is $P = \frac{aa^T}{a^Ta}$
      * The projection is $p = \bar{x}a = \frac{a^Tb}{a^Ta}a$
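
A line-projection sketch (my vectors, chosen arbitrarily):

```python
import numpy as np

a = np.array([1., 2., 2.])
b = np.array([1., 1., 1.])

P = np.outer(a, a) / (a @ a)    # P = a a^T / a^T a
p = (a @ b) / (a @ a) * a       # p = (a^T b / a^T a) a
print(np.allclose(P @ b, p))    # True
print(np.allclose(P @ P, P))    # projecting twice changes nothing: P^2 = P
```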

      Projection Onto a Subspace

      Problem: Find the combination $p = \bar{x_1}a_1 + \cdots + \bar{x_n}a_n$ closest to a given vector $b$. The $n$ vectors $a_1, \cdots, a_n$ in $\mathbf{R}^m$ span the column space of $A$. Thus the problem is to find the particular combination $p = A\bar{x}$ (the projection) that is closest to $b$. When $n = 1$, the best choice is $\frac{a^Tb}{a^Ta}$

      • $A^T(b - A\bar{x}) = 0$, or $A^TA\bar{x} = A^Tb$

      • The symmetric matrix $A^TA$ is $n \times n$. It is invertible if the $a$'s are independent.

      • The solution is $\bar{x} = (A^TA)^{-1}A^Tb$
      • The projection of $b$ onto the subspace is $p = A\bar{x} = A(A^TA)^{-1}A^Tb$
      • The projection matrix is $P = A(A^TA)^{-1}A^T$

      *$A^TA$ is invertible if and only if $A$ has linearly independent columns*
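
The subspace projection, numerically (my example; the columns of `A` are independent, so `A.T @ A` is invertible):

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([6., 0., 0.])

x_bar = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations A^T A x = A^T b
p = A @ x_bar                               # projection of b onto C(A)
P = A @ np.linalg.solve(A.T @ A, A.T)       # P = A (A^T A)^{-1} A^T
print(np.allclose(P @ b, p))                # True
print(np.allclose(A.T @ (b - p), 0))        # the error b - p is perpendicular to C(A)
```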

      Least Squares Approximations

      • When $Ax=b$ has no solution, multiply by $A^T$ and solve $A^TA\bar{x} = A^Tb$

        • The least squares solution $\bar{x}$ minimizes $E = \|Ax - b\|^2$. This is the sum of the squares of the errors in the $m$ equations ($m > n$)

        • The best $\bar{x}$ comes from the normal equations $A^TA\bar{x} = A^Tb$
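
Fitting the best line through three points, two ways (my sketch; `np.linalg.lstsq` is NumPy's built-in least squares):

```python
import numpy as np

t = np.array([0., 1., 2.])
b = np.array([6., 0., 0.])
A = np.column_stack([np.ones_like(t), t])        # fit C + D t: m = 3 equations, n = 2 unknowns

x_normal = np.linalg.solve(A.T @ A, A.T @ b)     # normal equations
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # library least squares
print(np.allclose(x_normal, x_lstsq))            # both minimize ||Ax - b||^2
```
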
          Orthogonal Bases and Gram-Schmidt

          • orthonormal vectors

            • A matrix with orthonormal columns is assigned the special letter $Q$. The matrix $Q$ is easy to work with because $Q^TQ = I$
            • When $Q$ is square, $Q^TQ = I$ means that $Q^T = Q^{-1}$: transpose = inverse.
            • If the columns are only orthogonal (not unit vectors), the dot products give a diagonal matrix (not the identity matrix)
          • Every permutation matrix is an orthogonal matrix.

          • If $Q$ has orthonormal columns ($Q^TQ = I$), it leaves lengths unchanged

          • Orthogonal is good

          • Use Gram-Schmidt for the factorization $A = QR$

          $\begin{bmatrix} a & b & c \end{bmatrix} = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix} \begin{bmatrix} q_1^Ta & q_1^Tb & q_1^Tc \\ & q_2^Tb & q_2^Tc \\ & & q_3^Tc \end{bmatrix}$

          • (Gram-Schmidt) From independent vectors $a_1, \cdots, a_n$, Gram-Schmidt constructs orthonormal vectors $q_1, \cdots, q_n$. The matrices with these columns satisfy $A = QR$. Then $R = Q^TA$ is upper triangular because later $q$'s are orthogonal to earlier $a$'s.

          • Least squares: $R^TR\bar{x} = R^TQ^Tb$ or $R\bar{x} = Q^Tb$ or $\bar{x} = R^{-1}Q^Tb$
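
A classical Gram-Schmidt sketch (my implementation; modified Gram-Schmidt or `np.linalg.qr` is more stable in practice):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Columns of A -> orthonormal q's, with A = QR and R upper triangular."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):                 # subtract components along earlier q's
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q.T @ Q, np.eye(3)))   # orthonormal columns: Q^T Q = I
print(np.allclose(Q @ R, A))             # A = QR
```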

          Determinants

          • The determinant is zero when the matrix has no inverse
          • The product of the pivots is the determinant
          • The determinant changes sign when two rows (or two columns) are exchanged
          • Determinants give $A^{-1}$ and $A^{-1}b$ (this formula is called Cramer's Rule)
          • When the edges of a box are the rows of $A$, the volume is $|\det A|$
          • For $n$ special numbers $\lambda$, called eigenvalues, the determinant of $A - \lambda I$ is zero.

          The properties of the determinant

          1. The determinant of the $n \times n$ identity matrix is 1.
          2. The determinant changes sign when two rows are exchanged.
          3. The determinant is a linear function of each row separately (all other rows stay fixed!).
          4. If two rows of $A$ are equal, then $\det A = 0$.
          5. Subtracting a multiple of one row from another row leaves $\det A$ unchanged.
            • $\begin{vmatrix} a & b \\ c - la & d - lb \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix}$
          6. A matrix with a row of zeros has $\det A = 0$.
          7. If $A$ is triangular then $\det A = a_{11}a_{22}\cdots a_{nn}$ = product of diagonal entries.
          8. If $A$ is singular then $\det A = 0$. If $A$ is invertible then $\det A \neq 0$.
            • Elimination goes from $A$ to $U$.
            • $\det A = \pm \det U = \pm$ (product of the pivots)
          9. The determinant of $AB$ is $\det A$ times $\det B$.
          10. The transpose $A^T$ has the same determinant as $A$.

          *Every rule for the rows can apply to the columns.*

          Cramer’s Rule

          • If $\det A$ is not zero, $Ax = b$ is solved by determinants:

            • $x_1 = \frac{\det B_1}{\det A}, \; x_2 = \frac{\det B_2}{\det A}, \; \cdots, \; x_n = \frac{\det B_n}{\det A}$
            • The matrix $B_j$ has the $j$th column of $A$ replaced by the vector $b$
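
Cramer's Rule as code (my sketch; instructive rather than efficient, and it assumes $\det A \neq 0$):

```python
import numpy as np

def cramer(A, b):
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        B_j = A.copy()
        B_j[:, j] = b                     # B_j: column j of A replaced by b
        x[j] = np.linalg.det(B_j) / det_A
    return x

A = np.array([[2., 1.], [1., 3.]])
b = np.array([3., 5.])
print(cramer(A, b), np.linalg.solve(A, b))   # the two answers agree
```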

          Cross Product

          • $\|u \times v\| = \|u\| \, \|v\| \, |\sin\theta|$
          • $|u \cdot v| = \|u\| \, \|v\| \, |\cos\theta|$

          • The length of $u \times v$ equals the area of the parallelogram with sides $u$ and $v$

          • It points by the right hand rule (along your right thumb when the fingers curl from $u$ to $v$)
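
A quick cross-product check (my vectors):

```python
import numpy as np

u = np.array([1., 2., 0.])
v = np.array([0., 1., 1.])

w = np.cross(u, v)
print(np.linalg.norm(w))                           # area of the parallelogram with sides u, v
print(np.isclose(w @ u, 0), np.isclose(w @ v, 0))  # u x v is perpendicular to both u and v
```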

          Eigenvalues and Eigenvectors

          • The basic equation is $Ax = \lambda x$. The number $\lambda$ is an eigenvalue of $A$

            • When $A$ is squared, the eigenvectors stay the same. The eigenvalues are squared.
          • The projection matrix has eigenvalues $\lambda = 1$ and $\lambda = 0$

            • $P$ is singular, so $\lambda = 0$ is an eigenvalue
            • Each column of $P$ adds to 1, so $\lambda = 1$ is an eigenvalue
            • $P$ is symmetric, so its eigenvectors are perpendicular
          • Permutations have all $|\lambda| = 1$
          • The reflection matrix has eigenvalues 1 and $-1$

          • Solve the eigenvalue problem for an $n \times n$ matrix

            • Compute the determinant of $A - \lambda I$. It is a polynomial in $\lambda$ of degree $n$
            • Find the roots of this polynomial
            • For each eigenvalue $\lambda$, solve $(A - \lambda I)x = 0$ to find an eigenvector $x$
          • Bad news: elimination does not preserve the $\lambda$'s

          • Good news: the product of the eigenvalues equals the determinant; the sum of the eigenvalues equals the sum of the diagonal entries (the trace)
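
Checking the good news numerically (my example matrix):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])

lam, X = np.linalg.eig(A)
print(np.allclose(A @ X[:, 0], lam[0] * X[:, 0]))   # Ax = lambda x
print(np.isclose(np.prod(lam), np.linalg.det(A)))   # product of eigenvalues = det A
print(np.isclose(np.sum(lam), np.trace(A)))         # sum of eigenvalues = trace
```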

          Diagonalizing a Matrix

          • Suppose the $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors $x_1, \cdots, x_n$. Put them into the columns of an eigenvector matrix $S$. Then $S^{-1}AS$ is the eigenvalue matrix $\Lambda$:

            • $S^{-1}AS = \Lambda = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$
          • There is no connection between invertibility and diagonalizability:

            • Invertibility is concerned with the eigenvalues ($\lambda = 0$ or $\lambda \neq 0$)
            • Diagonalizability is concerned with the eigenvectors (too few or enough for $S$)
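
Diagonalization in two lines (my example; `A` has distinct eigenvalues, so the eigenvectors are independent):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

lam, S = np.linalg.eig(A)                 # columns of S are eigenvectors
Lambda = np.linalg.inv(S) @ A @ S         # S^{-1} A S
print(np.allclose(Lambda, np.diag(lam)))  # True: the result is the eigenvalue matrix
```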

          Applications to differential equations

          • One equation $\frac{du}{dt} = \lambda u$ has the solution $u(t) = Ce^{\lambda t}$

          • $n$ equations $\frac{d\mathbf{u}}{dt} = A\mathbf{u}$ starting from the vector $\mathbf{u}(0)$ at $t = 0$

          • Solve linear constant-coefficient equations by exponentials $e^{\lambda t}\mathbf{x}$, when $A\mathbf{x} = \lambda\mathbf{x}$
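
Solving $d\mathbf{u}/dt = A\mathbf{u}$ by eigenvalues (my sketch; it assumes $A$ is diagonalizable and uses SciPy's `expm` only as a cross-check):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1.],
              [1., 0.]])
u0 = np.array([1., 0.])
t = 0.7

lam, S = np.linalg.eig(A)                             # A = S Lambda S^{-1}
u_t = S @ (np.exp(lam * t) * np.linalg.solve(S, u0))  # u(t) = S e^{Lambda t} S^{-1} u(0)
print(np.allclose(u_t, expm(A * t) @ u0))             # matches the matrix exponential
```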

          Symmetric Matrices

          • A symmetric matrix has only real eigenvalues.
          • The eigenvectors can be chosen orthonormal.

          • (Spectral Theorem) Every symmetric matrix has the factorization $A = Q\Lambda Q^T$ with real eigenvalues in $\Lambda$ and orthonormal eigenvectors in $S = Q$:

            • Symmetric diagonalization: $A = Q\Lambda Q^{-1} = Q\Lambda Q^T$ with $Q^{-1} = Q^T$
          • (Orthogonal Eigenvectors) Eigenvectors of a real symmetric matrix (when they correspond to different $\lambda$'s) are always perpendicular.

          • product of pivots = determinant = product of eigenvalues

          • Eigenvalues vs. Pivots

          • For symmetric matrices the pivots and the eigenvalues have the same signs:

            • The number of positive eigenvalues of $A = A^T$ equals the number of positive pivots.
          • All symmetric matrices are diagonalizable
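
The spectral factorization with `eigh` (my example; `eigh` is NumPy's routine for symmetric/Hermitian matrices):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])                       # symmetric

lam, Q = np.linalg.eigh(A)                     # real eigenvalues, orthonormal eigenvectors
print(np.allclose(Q.T @ Q, np.eye(2)))         # Q^T Q = I
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))  # A = Q Lambda Q^T
```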

          Positive Definite Matrices

          • Symmetric matrices that have positive eigenvalues

          • $2 \times 2$ matrices $A = \begin{bmatrix} a & b \\ b & c \end{bmatrix}$

            • The eigenvalues of $A$ are positive if and only if $a > 0$ and $ac - b^2 > 0$.
          • $x^TAx$ is positive for all nonzero vectors $x$

            • If $A$ and $B$ are symmetric positive definite, so is $A + B$
          • When a symmetric matrix has one of these five properties, it has them all:

            • All $n$ pivots are positive
            • All $n$ upper left determinants are positive
            • All $n$ eigenvalues are positive
            • $x^TAx$ is positive except at $x = 0$. This is the energy-based definition
            • $A$ equals $R^TR$ for a matrix $R$ with independent columns
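
Three of the five tests in code (my example; NumPy's `cholesky` succeeds exactly when an $R^TR$ factorization with independent columns exists):

```python
import numpy as np

A = np.array([[ 2., -1.],
              [-1.,  2.]])

print(np.all(np.linalg.eigvalsh(A) > 0))                              # all eigenvalues positive
print(all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, len(A) + 1)))  # upper-left determinants
try:
    L = np.linalg.cholesky(A)       # A = L L^T, i.e. A = R^T R with R = L^T
    print(True)
except np.linalg.LinAlgError:
    print(False)
```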

          Positive Semidefinite Matrices

          Similar Matrices

          • DEFINITION: Let $M$ be any invertible matrix. Then $B = M^{-1}AM$ is similar to $A$

          • (No change in $\lambda$'s) Similar matrices $A$ and $M^{-1}AM$ have the same eigenvalues. If $x$ is an eigenvector of $A$, then $M^{-1}x$ is an eigenvector of $B$. But two matrices can have the same repeated $\lambda$ and fail to be similar.

          Jordan Form

          • What is “Jordan Form”?

            • For every $A$, we want to choose $M$ so that $M^{-1}AM$ is as nearly diagonal as possible
          • $J^T$ is similar to $J$; the matrix $M$ that produces the similarity happens to be the reverse identity

          • (Jordan form) If $A$ has $s$ independent eigenvectors, it is similar to a matrix $J$ that has $s$ Jordan blocks on its diagonal: some matrix $M$ puts $A$ into Jordan form.

            • Jordan block: the eigenvalue is on the diagonal with 1's just above it. Each block in $J$ has one eigenvalue $\lambda_i$, one eigenvector, and 1's above the diagonal

          $M^{-1}AM = \begin{bmatrix} J_1 & & \\ & \ddots & \\ & & J_s \end{bmatrix} = J$

          $J_i = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix}$

          • $A$ is similar to $B$ if they share the same Jordan form $J$ – not otherwise
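
SymPy computes Jordan forms exactly (my example: a triple eigenvalue 2 with only two independent eigenvectors, so $J$ has a $2 \times 2$ block and a $1 \times 1$ block):

```python
import sympy as sp

B = sp.Matrix([[3, 1, -1],
               [0, 2,  0],
               [1, 1,  1]])

M, J = B.jordan_form()                    # B = M J M^{-1}
print(J)                                  # blocks with the 1's just above the diagonal
print(sp.simplify(M * J * M.inv()) == B)  # True
```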

          Singular Value Decomposition (SVD)

          • Two sets of singular vectors, $u$'s and $v$'s. The $u$'s are eigenvectors of $AA^T$ and the $v$'s are eigenvectors of $A^TA$.

            • The singular vectors $v_1, \cdots, v_r$ are in the row space of $A$. The outputs $u_1, \cdots, u_r$ are in the column space of $A$. The singular values $\sigma_1, \cdots, \sigma_r$ are all positive numbers; the equations $Av_i = \sigma_i u_i$ tell us:

          $A\begin{bmatrix} v_1 & \cdots & v_r \end{bmatrix} = \begin{bmatrix} u_1 & \cdots & u_r \end{bmatrix}\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_r \end{bmatrix}$

          • We need $n-r$ more $v$'s and $m-r$ more $u$'s, from the nullspace $N(A)$ and the left nullspace $N(A^T)$. They can be orthonormal bases for those two nullspaces. Include all the $v$'s and $u$'s in $V$ and $U$, so these matrices become square.

          $A\begin{bmatrix} v_1 & \cdots & v_r & \cdots & v_n \end{bmatrix} = \begin{bmatrix} u_1 & \cdots & u_r & \cdots & u_m \end{bmatrix}\begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & \end{bmatrix}$ (the new $\Sigma$ is $m \times n$, with zeros beyond $\sigma_r$)

          $V$ is now a square orthogonal matrix, with $V^{-1} = V^T$. So $AV = U\Sigma$ can become $A = U\Sigma V^T$. This is the Singular Value Decomposition:

          $A = U\Sigma V^T = u_1\sigma_1 v_1^T + \cdots + u_r\sigma_r v_r^T$

          The orthonormal columns of $U$ and $V$ are eigenvectors of $AA^T$ and $A^TA$
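
The SVD and its rank-one expansion in NumPy (my example matrix):

```python
import numpy as np

A = np.array([[3., 0.],
              [4., 5.]])

U, s, Vt = np.linalg.svd(A)                 # A = U Sigma V^T
print(np.allclose(U @ np.diag(s) @ Vt, A))

# A = u_1 sigma_1 v_1^T + ... + u_r sigma_r v_r^T
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s)))
print(np.allclose(A_sum, A))

# The v's are eigenvectors of A^T A, with eigenvalues sigma_i^2
print(np.allclose(np.sort(np.linalg.eigvalsh(A.T @ A)), np.sort(s**2)))
```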
