Reposted from: https://blog.csdn.net/tiandijun/article/details/44917237

RPCA

Blog posts on RPCA:

Original: http://blog.csdn.net/abcjennifer/article/details/8572994

Chinese translation: http://blog.csdn.net/u010545732/article/details/19066725

A summary of data dimensionality reduction (RPCA, LRR, LE, etc.):
http://download.csdn.net/detail/tiandijun/8569653

Low-rank subspace recovery: http://download.csdn.net/detail/tiandijun/8569675

LRR

Tutorials

  1. Low-Rank Matrix Recovery: From Theory to Imaging Applications, 
    John Wright, Zhouchen Lin, and Yi Ma. Presented at International Conference on Image and Graphics (ICIG), August 2011.
  2. Low-Rank Matrix Recovery, 
    John Wright, Zhouchen Lin, and Yi Ma. Presented at IEEE International Conference on Image Processing (ICIP), September 2010.

Theory

  1. Robust Principal Component Analysis?, 
    Emmanuel Candès, Xiaodong Li, Yi Ma, and John Wright. Journal of the ACM, volume 58, no. 3, May 2011.
  2. Dense Error Correction via L1-Minimization, 
    John Wright, and Yi Ma. IEEE Transactions on Information Theory, volume 56, no. 7, July 2010.
  3. Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization, 
    John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, and Yi Ma. In Proceedings of Neural Information Processing Systems (NIPS), December 2009.
  4. Stable Principal Component Pursuit, 
    Zihan Zhou, Xiaodong Li, John Wright, Emmanuel Candès, and Yi Ma. In Proceedings of IEEE International Symposium on Information Theory (ISIT), June 2010.
  5. Dense Error Correction for Low-Rank Matrices via Principal Component Pursuit, 
    Arvind Ganesh, John Wright, Xiaodong Li, Emmanuel Candès, and Yi Ma. In Proceedings of IEEE International Symposium on Information Theory (ISIT), June 2010.
  6. Principal Component Pursuit with Reduced Linear Measurements, 
    Arvind Ganesh, Kerui Min, John Wright, and Yi Ma. Submitted to the IEEE International Symposium on Information Theory (ISIT), 2012.
  7. Compressive Principal Component Pursuit, 
    John Wright, Arvind Ganesh, Kerui Min, and Yi Ma. Submitted to the IEEE International Symposium on Information Theory (ISIT), 2012.
Code
Robust PCA

We provide MATLAB packages to solve the RPCA optimization problem by different methods. All of our code below is Copyright 2009 Perception and Decision Lab, University of Illinois at Urbana-Champaign, and Microsoft Research Asia, Beijing. We also provide links to some publicly available packages to solve the RPCA problem. Please contact John Wright or Arvind Ganesh if you have any questions or comments. If you are looking for the code to our RASL and TILT algorithms, please refer to the applications section.

  1. Augmented Lagrange Multiplier (ALM) Method [exact ALM - MATLAB zip] [inexact ALM - MATLAB zip]
    Usage - The most basic form of the exact ALM function is [A, E] = exact_alm_rpca(D, λ), and that of the inexact ALM function is [A, E] = inexact_alm_rpca(D, λ), where D is a real matrix and λ is a positive real number. We solve the RPCA problem using the method of augmented Lagrange multipliers, which converges Q-linearly to the optimal solution. The exact ALM algorithm is simple to implement: each iteration involves computing a partial SVD of a matrix the size of D, and the algorithm converges to the true solution in a small number of iterations. It can be further sped up by a fast continuation technique, yielding the inexact ALM algorithm. 
    Reference - The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices, Z. Lin, M. Chen, L. Wu, and Y. Ma (UIUC Technical Report UILU-ENG-09-2215, November 2009).
  2. Accelerated Proximal Gradient [full SVD version - MATLAB zip] [partial SVD version - MATLAB zip]
    Usage - The most basic form of the full SVD version of the function is [A, E] = proximal_gradient_rpca(D, λ), where D is a real matrix and λ is a positive real number. We consider a slightly different version of the original RPCA problem, obtained by relaxing the equality constraint. The algorithm is simple to implement: each iteration involves computing the SVD of a matrix the size of D, and the algorithm converges to the true solution in a small number of iterations. It can be further sped up by computing only partial SVDs at each iteration. The most basic form of the partial SVD version of the function is [A, E] = partial_proximal_gradient_rpca(D, λ), where D is a real matrix and λ is a positive real number. 
    Reference - Fast Convex Optimization Algorithms for Exact Recovery of a Corrupted Low-Rank Matrix, Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma (UIUC Technical Report UILU-ENG-09-2214, August 2009).
  3. Dual Method [MATLAB zip]
    Usage - The most basic form of the function is [A, E] = dual_rpca(D, λ), where D is a real matrix and λ is a positive real number. We solve the convex dual of the RPCA problem, and retrieve the low-rank and sparse error matrices from the dual optimal solution. The algorithm computes only a partial SVD in each iteration and hence, scales well with the size of the matrix D.
    Reference - Fast Convex Optimization Algorithms for Exact Recovery of a Corrupted Low-Rank Matrix, Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen, and Y. Ma (UIUC Technical Report UILU-ENG-09-2214, August 2009).
  4. Singular Value Thresholding [MATLAB zip]
    Usage - The most basic form of the function is [A, E] = singular_value_rpca(D, λ), where D is a real matrix and λ is a positive real number. Here again, we solve a relaxation of the original RPCA problem, albeit different from the one solved by the Accelerated Proximal Gradient (APG) method. The algorithm is extremely simple to implement, and the computational complexity of each iteration is about the same as that of the APG method. However, the number of iterations to convergence is typically quite large. 
    Reference - A Singular Value Thresholding Algorithm for Matrix Completion,
    J. -F. Cai, E. J. Candès, and Z. Shen (2008).
  5. Alternating Direction Method [MATLAB zip] 
    Reference - Sparse and Low-Rank Matrix Decomposition via Alternating Direction Methods, X. Yuan, and J. Yang (2009).
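The packages above are MATLAB implementations. As a rough illustration of what they compute, the core inexact-ALM iteration for the RPCA problem min ||A||_* + λ||E||_1 subject to D = A + E can be sketched in NumPy. This is a simplified sketch, not the released code: the penalty growth factor, penalty cap, and stopping tolerance below are our own choices.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def inexact_alm_rpca(D, lam=None, tol=1e-7, max_iter=500):
    """Sketch of RPCA by inexact ALM: min ||A||_* + lam*||E||_1 s.t. D = A + E."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))    # standard default weight
    norm_D = np.linalg.norm(D, 'fro')
    Y = np.zeros_like(D)                  # Lagrange multiplier
    mu = 1.25 / np.linalg.norm(D, 2)      # penalty parameter (assumed init)
    mu_bar = mu * 1e7                     # cap on the penalty
    rho = 1.5                             # continuation factor (assumed)
    E = np.zeros_like(D)
    for _ in range(max_iter):
        A = svd_shrink(D - E + Y / mu, 1.0 / mu)  # low-rank update
        E = shrink(D - A + Y / mu, lam / mu)      # sparse update
        R = D - A - E
        Y = Y + mu * R                            # dual ascent step
        mu = min(rho * mu, mu_bar)                # continuation
        if np.linalg.norm(R, 'fro') / norm_D < tol:
            break
    return A, E
```

On a small synthetic low-rank-plus-sparse matrix this sketch recovers both components to high accuracy, mirroring the behavior of the MATLAB packages.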
Matrix Completion

We provide below links to publicly available code and references to solve the matrix completion problem faster than conventional algorithms.

  1. Augmented Lagrange Multiplier (ALM) Method [inexact ALM - MATLAB zip]
    Usage - The most basic form of the inexact ALM function is A = inexact_alm_mc(D), where D is the incomplete matrix defined in the MATLAB sparse matrix format and the output A is a structure with two components - A.U and A.V (the left and right singular vectors scaled respectively by the square root of the corresponding non-zero singular values). Please refer to the file test_alm_mc.m for details on defining D appropriately. The algorithm is identical to the inexact ALM method described above for the RPCA problem, and enjoys the same convergence properties. 
    Reference - The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices, Z. Lin, M. Chen, L. Wu, and Y. Ma (UIUC Technical Report UILU-ENG-09-2215, November 2009).
  2. Singular Value Thresholding
    Reference - A Singular Value Thresholding Algorithm for Matrix Completion, J. -F. Cai, E. J. Candès, and Z. Shen (2008).
  3. OptSpace 
    Reference - Matrix Completion from a Few Entries, R.H. Keshavan, A. Montanari, and S. Oh (2009).
  4. Accelerated Proximal Gradient
    Reference - An Accelerated Proximal Gradient Algorithm for Nuclear Norm Regularized Least Squares Problems, K. -C. Toh, and S. Yun (2009).
  5. Subspace Evolution and Transfer (SET) [MATLAB zip]
    Reference - SET: An Algorithm for Consistent Matrix Completion, W. Dai, and O. Milenkovic (2009).
  6. GROUSE: Grassmann Rank-One Update Subspace Estimation
    Reference - Online Identification and Tracking of Subspaces from Highly Incomplete Information, L. Balzano, R. Nowak, and B. Recht (2010).
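To make the completion problem concrete, the singular value thresholding iteration referenced in item 2 above can be sketched in NumPy. This is an illustrative sketch only: the threshold heuristic, step size, tolerance, and iteration cap below are our own assumptions, not values from any released package.

```python
import numpy as np

def svt_complete(D, mask, tau=None, delta=1.5, tol=1e-4, max_iter=1000):
    """Sketch of SVT matrix completion. D holds the observed entries
    (zeros elsewhere); mask is True where an entry is observed."""
    m, n = D.shape
    if tau is None:
        tau = 5.0 * np.sqrt(m * n)    # threshold heuristic (assumed)
    Y = np.zeros_like(D)              # dual variable
    norm_obs = np.linalg.norm(D[mask])
    X = np.zeros_like(D)
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        R = np.where(mask, D - X, 0.0)            # residual on observed set
        if np.linalg.norm(R) / norm_obs < tol:
            break
        Y = Y + delta * R                          # dual gradient step
    return X
```

Each iteration touches only a single SVD and the observed entries, which is why SVT-style methods scale to large matrices when the iterates stay low-rank.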
Comparison of Algorithms

We provide a simple comparison of the speed and accuracy of various RPCA algorithms. Each algorithm was tested on a rank-20 matrix of size 400 x 400 with 5% of its entries corrupted by large errors. The low-rank matrix A is generated as the product L R^T, where L and R are 400 x 20 matrices whose entries are i.i.d. samples from the standard Gaussian distribution. The error matrix E is a sparse matrix whose support is chosen uniformly at random and whose non-zero entries are independent and uniformly distributed in the range [-50, 50]. The value of λ was fixed at 0.05. The accuracy of the solution is indicated by the rank of the estimated low-rank matrix A and its relative error (in Frobenius norm) with respect to the true solution. All simulations were carried out on a MacBook Pro with a 2.8 GHz dual-core processor and 4 GB of memory.

Please note that the following tables represent typical performance, using default parameters, on random matrices drawn according to the distribution specified earlier. The performance could vary when dealing with matrices drawn from other distributions or with real data.
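The synthetic setup described above can be reproduced with a short script. This is a sketch in NumPy (the original benchmark used MATLAB, so the exact random draws differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, frac = 400, 20, 0.05

# Low-rank part: A = L R^T with i.i.d. standard Gaussian factors
L = rng.standard_normal((n, r))
R = rng.standard_normal((n, r))
A = L @ R.T

# Sparse errors: support uniform at random over 5% of the entries,
# magnitudes i.i.d. uniform on [-50, 50]
E = np.zeros((n, n))
support = rng.random((n, n)) < frac
E[support] = rng.uniform(-50.0, 50.0, support.sum())

D = A + E   # input to each solver, with lambda fixed at 0.05

def rel_err(A_hat):
    """Accuracy metric from the table: relative Frobenius error w.r.t. A."""
    return np.linalg.norm(A_hat - A, 'fro') / np.linalg.norm(A, 'fro')
```

Feeding D to any of the solvers listed earlier and reporting rank(A_hat), rel_err(A_hat), and wall-clock time reproduces the format of the table below.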

Robust PCA Algorithm Comparison

Algorithm                                    | Rank of estimate | Relative error in estimate of A | Time (s)
Singular Value Thresholding                  | 20               | 3.4 x 10^-4                     | 877
Accelerated Proximal Gradient                | 20               | 2.0 x 10^-5                     | 43
Accelerated Proximal Gradient (partial SVDs) | 20               | 1.8 x 10^-5                     | 8
Dual Method                                  | 20               | 1.6 x 10^-5                     | 177
Exact ALM                                    | 20               | 7.6 x 10^-8                     | 4
Inexact ALM                                  | 20               | 4.3 x 10^-8                     | 2
Alternating Direction Methods                | 20               | 2.2 x 10^-5                     | 5

Note: If you would like to list your code related to this topic on this website, please contact the webmaster Kerui Min.
