  1. Partial least squares (classic NIPALS implementation, unsimplified)
  2. Partial least squares: derivation of basic properties
  3. Partial least squares (SIMPLS, unsimplified)

The previous posts covered many properties of PLS. Here we walk through MATLAB's built-in PLS code to get a more complete picture of the method.

MATLAB's built-in plsregress function implements the SIMPLS algorithm, whose core step is to repeatedly deflate the cross-covariance matrix of X0 and Y0 by orthogonal projection.
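Concretely, the first SIMPLS direction pair is the leading pair of singular vectors of the cross-covariance X0'*Y0, which maximizes the covariance ri'*X0'*Y0*ci over unit-length directions. A quick NumPy check of that claim (the data and variable names here are mine, not from the MATLAB source):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Y = X @ rng.standard_normal((5, 2)) + 0.1 * rng.standard_normal((100, 2))

# Center, as plsregress does before calling simpls
X0 = X - X.mean(axis=0)
Y0 = Y - Y.mean(axis=0)

# Leading singular vectors of the cross-covariance X0'Y0
U, s, Vt = np.linalg.svd(X0.T @ Y0)
r, c = U[:, 0], Vt[0, :]

# Covariance r'X0'Y0c achieved by the SVD pair equals the top singular
# value, and beats any other unit-length direction pair
best = r @ X0.T @ Y0 @ c
r2 = rng.standard_normal(5); r2 /= np.linalg.norm(r2)
c2 = rng.standard_normal(2); c2 /= np.linalg.norm(c2)
other = abs(r2 @ X0.T @ Y0 @ c2)
```

This is exactly why the MATLAB code below takes the first columns of `svd(Cov,'econ')` at each step.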

function [Xloadings,Yloadings,Xscores,Yscores, ...
    beta,pctVar,mse,stats] = plsregress(X,Y,ncomp,varargin)
%PLSREGRESS Partial least squares regression.....
% Center both predictors and response, and do PLS
% Center X and Y, so the intercept drops out of the model
meanX = mean(X,1);
meanY = mean(Y,1);
X0 = bsxfun(@minus, X, meanX);
Y0 = bsxfun(@minus, Y, meanY);
% Return different outputs depending on what the caller asked for. The focus
% below is SIMPLS, proposed by de Jong in 1993; MATLAB implements PLS with
% this algorithm, which says something about its significance.
if nargout <= 2
    [Xloadings,Yloadings] = simpls(X0,Y0,ncomp);
elseif nargout <= 4
    [Xloadings,Yloadings,Xscores,Yscores] = simpls(X0,Y0,ncomp);
....
end
%------------------------------------------------------------------------------
%SIMPLS Basic SIMPLS.  Performs no error checking.
function [Xloadings,Yloadings,Xscores,Yscores,Weights] = simpls(X0,Y0,ncomp)
[n,dx] = size(X0);
dy = size(Y0,2);
% An orthonormal basis for the span of the X loadings, to make the successive
% deflation X0'*Y0 simple - each new basis vector can be removed from Cov
% separately.
% Compute the cross-covariance matrix
Cov = X0'*Y0;
for i = 1:ncomp
    % Find unit length ti=X0*ri and ui=Y0*ci whose covariance, ri'*X0'*Y0*ci, is
    % jointly maximized, subject to ti'*tj=0 for j=1:(i-1).
    % Compute the leading singular directions of the covariance
    [ri,si,ci] = svd(Cov,'econ');
    ri = ri(:,1); ci = ci(:,1); si = si(1);
    % Compute the scores
    ti = X0*ri;
    normti = norm(ti); ti = ti ./ normti; % ti'*ti == 1
    % Compute the loadings
    Xloadings(:,i) = X0'*ti;
    qi = si*ci/normti; % = Y0'*ti
    Yloadings(:,i) = qi;
    if nargout > 2
        Xscores(:,i) = ti;
        Yscores(:,i) = Y0*qi; % = Y0*(Y0'*ti), and proportional to Y0*ci
        if nargout > 4
            Weights(:,i) = ri ./ normti; % rescaled to make ri'*X0'*X0*ri == ti'*ti == 1
        end
    end
    % Update the orthonormal basis with modified Gram Schmidt (more stable),
    % repeated twice (ditto).
    vi = Xloadings(:,i);
    for repeat = 1:2
        for j = 1:i-1
            vj = V(:,j);
            vi = vi - (vj'*vi)*vj;
        end
    end
    vi = vi ./ norm(vi);
    V(:,i) = vi;
    % Deflate Cov, i.e. project onto the ortho-complement of the X loadings.
    % First remove projections along the current basis vector, then remove any
    % component along previous basis vectors that's crept in as noise from
    % previous deflations.
    % Update the covariance matrix
    Cov = Cov - vi*(vi'*Cov);
    Vi = V(:,1:i);
    Cov = Cov - Vi*(Vi'*Cov);
end
if nargout > 2
    % By convention, orthogonalize the Y scores w.r.t. the preceding Xscores,
    % i.e. XSCORES'*YSCORES will be lower triangular.  This gives, in effect, only
    % the "new" contribution to the Y scores for each PLS component.  It is also
    % consistent with the PLS-1/PLS-2 algorithms, where the Y scores are computed
    % as linear combinations of a successively-deflated Y0.  Use modified
    % Gram-Schmidt, repeated twice.
    for i = 1:ncomp
        ui = Yscores(:,i);
        for repeat = 1:2
            for j = 1:i-1
                tj = Xscores(:,j);
                ui = ui - (tj'*ui)*tj;
            end
        end
        Yscores(:,i) = ui;
    end
end
%------------------------------------------------------------------------------
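The simpls subfunction translates almost line for line into NumPy. The sketch below is my translation, not MathWorks code, but it follows the same conventions (centered inputs, unit-length X scores) and verifies the advertised invariant that the X scores come out orthonormal:

```python
import numpy as np

def simpls(X0, Y0, ncomp):
    """Minimal SIMPLS on centered X0, Y0; returns loadings, scores, weights."""
    n, dx = X0.shape
    dy = Y0.shape[1]
    P = np.zeros((dx, ncomp))   # X loadings
    Q = np.zeros((dy, ncomp))   # Y loadings
    T = np.zeros((n, ncomp))    # X scores
    U = np.zeros((n, ncomp))    # Y scores (before re-orthogonalization)
    W = np.zeros((dx, ncomp))   # weights
    V = np.zeros((dx, ncomp))   # orthonormal basis for the X loadings
    Cov = X0.T @ Y0
    for i in range(ncomp):
        # Leading singular pair of the deflated cross-covariance
        Ur, s, Vc = np.linalg.svd(Cov, full_matrices=False)
        ri, ci, si = Ur[:, 0], Vc[0, :], s[0]
        ti = X0 @ ri
        normti = np.linalg.norm(ti)
        ti /= normti
        P[:, i] = X0.T @ ti
        Q[:, i] = si * ci / normti          # = Y0' ti
        T[:, i] = ti
        U[:, i] = Y0 @ Q[:, i]
        W[:, i] = ri / normti
        # Update the orthonormal basis (modified Gram-Schmidt, twice)
        vi = P[:, i].copy()
        for _ in range(2):
            for j in range(i):
                vi -= (V[:, j] @ vi) * V[:, j]
        vi /= np.linalg.norm(vi)
        V[:, i] = vi
        # Deflate Cov: project onto the ortho-complement of the X loadings
        Cov -= np.outer(vi, vi @ Cov)
        Vi = V[:, :i + 1]
        Cov -= Vi @ (Vi.T @ Cov)
    return P, Q, T, U, W

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 6))
Y = X @ rng.standard_normal((6, 2)) + 0.1 * rng.standard_normal((60, 2))
X0 = X - X.mean(axis=0)
Y0 = Y - Y.mean(axis=0)
P, Q, T, U, W = simpls(X0, Y0, 3)
```

Deflating Cov against the loading basis is what forces each new ti to be orthogonal to all earlier ones, so `T.T @ T` should be the identity, and by construction `X0 @ W` reproduces the scores `T`.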
%PLSCV Efficient cross-validation for X and Y mean squared error in PLS.
function mse = plscv(X,Y,ncomp,cvp,mcreps,ParOptions)
[n,dx] = size(X);
% Return error for as many components as asked for; some columns may be NaN
% if ncomp is too large for CV.
mse = NaN(2,ncomp+1);
% The CV training sets are smaller than the full data; may not be able to fit as
% many PLS components.  Do the best we can.
if isa(cvp,'cvpartition')
    cvpType = 'partition';
    maxncomp = min(min(cvp.TrainSize)-1,dx);
    nTest = sum(cvp.TestSize);
else
    cvpType = 'Kfold';
%    maxncomp = min(min( floor((n*(cvp-1)/cvp)-1), dx));
    maxncomp = min( floor((n*(cvp-1)/cvp)-1), dx);
    nTest = n;
end
if ncomp > maxncomp
    warning(message('stats:plsregress:MaxComponentsCV', maxncomp));
    ncomp = maxncomp;
end
% Cross-validate sum of squared errors for models with 1:ncomp components,
% simultaneously.  Sum the SSEs over CV sets, and compute the mean squared
% error
CVfun = @(Xtr,Ytr,Xtst,Ytst) sseCV(Xtr,Ytr,Xtst,Ytst,ncomp);
sumsqerr = crossval(CVfun,X,Y,cvpType,cvp,'mcreps',mcreps,'options',ParOptions);
mse(:,1:ncomp+1) = reshape(sum(sumsqerr,1)/(nTest*mcreps), [2,ncomp+1]);
%------------------------------------------------------------------------------
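The last line of plscv just sums the per-fold SSE blocks and divides by the total number of test rows times mcreps. A toy NumPy sketch of that aggregation (the fold values here are made up for illustration):

```python
import numpy as np

ncomp, mcreps = 3, 1
# Pretend SSE blocks from two CV folds: rows = [X error; Y error],
# columns = models with 0..ncomp components
fold1 = np.array([[8.0, 5.0, 3.0, 2.0],
                  [6.0, 2.0, 1.0, 0.5]])
fold2 = np.array([[7.0, 4.5, 2.5, 1.8],
                  [5.0, 1.5, 0.8, 0.4]])
nTest = 20    # total number of test rows across folds

# plscv: sum the SSEs over CV sets, then divide by nTest*mcreps
mse = (fold1 + fold2) / (nTest * mcreps)
# Null-model Y error sits in mse[1, 0]: (6 + 5) / 20
```

Column 0 is the 0-component (mean-only) model, so mse row by row gives the X- and Y-prediction error curves as components are added.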
%SSECV Sum of squared errors for cross-validation
function sumsqerr = sseCV(Xtrain,Ytrain,Xtest,Ytest,ncomp)
XmeanTrain = mean(Xtrain);
YmeanTrain = mean(Ytrain);
X0train = bsxfun(@minus, Xtrain, XmeanTrain);
Y0train = bsxfun(@minus, Ytrain, YmeanTrain);
% Get and center the test data
X0test = bsxfun(@minus, Xtest, XmeanTrain);
Y0test = bsxfun(@minus, Ytest, YmeanTrain);
% Fit the full model, models with 1:(ncomp-1) components are nested within
[Xloadings,Yloadings,~,~,Weights] = simpls(X0train,Y0train,ncomp);
XscoresTest = X0test * Weights;
% Return error for as many components as asked for.
outClass = superiorfloat(Xtrain,Ytrain);
sumsqerr = zeros(2,ncomp+1,outClass); % this will get reshaped to a row by CROSSVAL
% Sum of squared errors for the null model
sumsqerr(1,1) = sum(sum(abs(X0test).^2, 2));
sumsqerr(2,1) = sum(sum(abs(Y0test).^2, 2));
% Compute sum of squared errors for models with 1:ncomp components
for i = 1:ncomp
    X0reconstructed = XscoresTest(:,1:i) * Xloadings(:,1:i)';
    sumsqerr(1,i+1) = sum(sum(abs(X0test - X0reconstructed).^2, 2));
    Y0reconstructed = XscoresTest(:,1:i) * Yloadings(:,1:i)';
    sumsqerr(2,i+1) = sum(sum(abs(Y0test - Y0reconstructed).^2, 2));
end
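Two details in sseCV are worth highlighting: the test fold is centered by the *training* means, and a single fit with ncomp components scores every nested submodel via the test scores X0test*Weights. A minimal NumPy sketch of one fold with a single component (the data and variable names are mine, not from the MATLAB source):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 4))
Y = X @ rng.standard_normal((4, 1)) + 0.1 * rng.standard_normal((50, 1))

# One CV fold: first 40 rows train, last 10 test
Xtr, Xte = X[:40], X[40:]
Ytr, Yte = Y[:40], Y[40:]

# Center BOTH sets by the TRAINING means, as sseCV does
mX, mY = Xtr.mean(axis=0), Ytr.mean(axis=0)
X0tr, Y0tr = Xtr - mX, Ytr - mY
X0te, Y0te = Xte - mX, Yte - mY

# One SIMPLS component, computed directly from the cross-covariance
r = np.linalg.svd(X0tr.T @ Y0tr)[0][:, 0]
t = X0tr @ r
normt = np.linalg.norm(t)
t /= normt
p = X0tr.T @ t                       # X loading
q = Y0tr.T @ t                       # Y loading
w = r / normt                        # weight: test scores = X0te @ w

# sumsqerr layout: rows [X error; Y error], columns 0 then 1 components
sse = np.zeros((2, 2))
sse[0, 0] = np.sum(X0te ** 2)        # null (mean-only) model
sse[1, 0] = np.sum(Y0te ** 2)
ts = X0te @ w                        # test scores for the fitted component
sse[0, 1] = np.sum((X0te - np.outer(ts, p)) ** 2)
sse[1, 1] = np.sum((Y0te - np.outer(ts, q)) ** 2)
```

Because the data here are strongly correlated, the one-component Y error should come out well below the null-model Y error.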
