Extreme Learning Machine (ELM)

A single-hidden-layer feedforward neural network (SLFN) has two notable strengths:

(1) It can fit a complex mapping function f: x → t directly from the training samples.

(2) It can model a large class of natural and artificial phenomena that are hard to handle with traditional parametric classification techniques. However, SLFNs have lacked a reasonably fast learning method. The error back-propagation algorithm must update n×(L+1) + L×(m+1) values in every iteration, so the time it consumes far exceeds what is usually tolerable; it is common to see a single SLFN take hours, days, or even longer to train.
For these reasons, Professor Guang-Bin Huang studied SLFNs in depth and proposed and proved two bold theorems:

(1) For N arbitrary distinct samples (x_i, t_i) and an SLFN with N hidden neurons whose activation function g: R → R is infinitely differentiable on any interval, if the input weights w_i and hidden biases b_i are randomly generated from any intervals of R^n and R according to any continuous probability distribution, then with probability one the hidden-layer output matrix H is invertible and ||T − Hβ||_F = 0.

(2) For any small ε > 0, under the same conditions there exists a number of hidden neurons L ≤ N such that, with w_i and b_i randomly generated as above, ||T − Hβ||_F < ε holds with probability one.
From these two theorems we can see that, as long as the activation function g: R → R is infinitely differentiable on any interval, w_i and b_i can be generated at random from any intervals of R^n and R according to any continuous probability distribution. In other words, the single-hidden-layer feedforward network no longer needs to tune w_i and b_i; and since ||T − Hβ||_F = 0 holds with probability one, the output-layer bias is no longer needed either. The resulting new type of single-hidden-layer feedforward network is shown in Figure 2.3.
Compared with Figure 2.2, the output-layer bias b_s is gone, and the input weights w and hidden-layer biases b_i are generated at random and need no tuning, so the only quantity left to determine in the whole network is the output weight matrix β. This is how the extreme learning machine came about: set the network output equal to the sample labels, as in Eq. (2-11),

Hβ = T                     (2-11)

whose minimum-norm least-squares solution is β = H†T, where H† is the Moore–Penrose generalized inverse of the hidden-layer output matrix H.
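To make this computation concrete, here is a minimal MATLAB sketch of the core ELM step, assuming X is a d×N matrix of training inputs and T is an m×N matrix of targets (these variable names, the hidden-layer size L and the sigmoid choice of g are illustrative, not taken from the text):

L = 20;                                      % number of hidden neurons (illustrative)
W = rand(L, size(X,1))*2 - 1;                % random input weights in [-1, 1], never retrained
b = rand(L, 1);                              % random hidden biases, never retrained
H = 1 ./ (1 + exp(-(W*X + repmat(b, 1, size(X,2)))));   % hidden-layer output matrix (sigmoid g)
beta = pinv(H') * T';                        % output weights: least-squares solution of H'*beta = T'
Y = (H' * beta)';                            % network output on the training inputs

Everything except beta is drawn at random once; the single pinv call is the entire training procedure.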

Because of this, the extreme learning machine has several clear advantages:

(1) Training is extremely fast. The output weights are obtained in one step from the generalized inverse, whereas the back-propagation algorithm must update n×(L+1) + L×(m+1) values in every iteration and, to keep the system stable, usually uses a small learning rate, which greatly lengthens training. ELM's advantage here is enormous: in experiments, ELM typically finishes within seconds, while the classical algorithms take a long time to train a single-hidden-layer network even for very small applications, as if they faced an insurmountable virtual speed barrier.

(2) In most applications, the generalization ability of ELM is better than that of gradient-based methods such as error back-propagation.
(3) Traditional gradient-based methods have to contend with problems such as local minima, choosing a suitable learning rate, and overfitting, whereas ELM builds the single-hidden-layer feedforward network directly in one step and sidesteps these thorny issues.
Because of these advantages, researchers have taken a strong interest in ELM; in recent years its theory has developed rapidly and its range of applications keeps widening.
Summary
This article has given a detailed account of the principle behind the extreme learning machine and of how it came about, and has explained the various advantages that attract so many researchers to it. ELM has weaknesses of its own as well; next we will analyze them and work on improvements.

MATLAB program: it can be downloaded from Prof. G.-B. Huang's personal website.

1. A regression example (put the data and the program in the same folder)

Main script main.m (the data files are sinc_train and sinc_test)

clear all;
close all;
clc;
[TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] =   ELM('sinc_train', 'sinc_test', 0, 20, 'sig')
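If you do not have the sinc_train and sinc_test files from Huang's site at hand, data in the format ELM.m expects (plain ASCII, one sample per row, first column = target, remaining columns = inputs) can be generated with a sketch like the one below; the sample sizes and the uniform input range are illustrative choices, not the original dataset:

N = 5000;                                 % illustrative number of samples per file
x_train = rand(N,1)*20 - 10;              % inputs drawn uniformly from [-10, 10]
x_test  = rand(N,1)*20 - 10;
sincf = @(x) sin(x)./x;                   % sinc function; the x = 0 case is patched below
t_train = sincf(x_train);  t_train(x_train == 0) = 1;
t_test  = sincf(x_test);   t_test(x_test == 0)  = 1;
train = [t_train x_train]; test = [t_test x_test];   % column 1 = target, column 2 = input
save('sinc_train', 'train', '-ascii');
save('sinc_test',  'test',  '-ascii');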

ELM.m

function [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = elm(TrainingData_File, TestingData_File, Elm_Type, NumberofHiddenNeurons, ActivationFunction)
% Usage: elm(TrainingData_File, TestingData_File, Elm_Type, NumberofHiddenNeurons, ActivationFunction)
% OR:    [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = elm(TrainingData_File, TestingData_File, Elm_Type, NumberofHiddenNeurons, ActivationFunction)
%
% Input:
% TrainingData_File     - Filename of training data set
% TestingData_File      - Filename of testing data set
% Elm_Type              - 0 for regression; 1 for (both binary and multi-classes) classification
% NumberofHiddenNeurons - Number of hidden neurons assigned to the ELM
% ActivationFunction    - Type of activation function:
%                           'sig' for Sigmoidal function
%                           'sin' for Sine function
%                           'hardlim' for Hardlim function
%                           'tribas' for Triangular basis function
%                           'radbas' for Radial basis function (for additive type of SLFNs instead of RBF type of SLFNs)
%
% Output:
% TrainingTime          - Time (seconds) spent on training ELM
% TestingTime           - Time (seconds) spent on predicting ALL testing data
% TrainingAccuracy      - Training accuracy:
%                           RMSE for regression or correct classification rate for classification
% TestingAccuracy       - Testing accuracy:
%                           RMSE for regression or correct classification rate for classification
%
% MULTI-CLASS CLASSIFICATION: NUMBER OF OUTPUT NEURONS WILL BE AUTOMATICALLY SET EQUAL TO NUMBER OF CLASSES
% FOR EXAMPLE, if there are 7 classes in all, there will be 7 output
% neurons; neuron 5 having the highest output means the input belongs to the 5-th class
%
% Sample1 regression: [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = elm('sinc_train', 'sinc_test', 0, 20, 'sig')
% Sample2 classification: elm('diabetes_train', 'diabetes_test', 1, 20, 'sig')
%%%%    Authors:    MR QIN-YU ZHU AND DR GUANG-BIN HUANG
%%%%    NANYANG TECHNOLOGICAL UNIVERSITY, SINGAPORE
%%%%    EMAIL:      EGBHUANG@NTU.EDU.SG; GBHUANG@IEEE.ORG
%%%%    WEBSITE:    http://www.ntu.edu.sg/eee/icis/cv/egbhuang.htm
%%%%    DATE:       APRIL 2004

%%%%%%%%%%% Macro definition
REGRESSION=0;
CLASSIFIER=1;

%%%%%%%%%%% Load training dataset
train_data=load(TrainingData_File);
T=train_data(:,1)';
P=train_data(:,2:size(train_data,2))';
clear train_data;                                   %   Release raw training data array

%%%%%%%%%%% Load testing dataset
test_data=load(TestingData_File);
TV.T=test_data(:,1)';
TV.P=test_data(:,2:size(test_data,2))';
clear test_data;                                    %   Release raw testing data array

NumberofTrainingData=size(P,2);
NumberofTestingData=size(TV.P,2);
NumberofInputNeurons=size(P,1);

if Elm_Type~=REGRESSION
    %%%%%%%%%%%% Preprocessing the data of classification
    sorted_target=sort(cat(2,T,TV.T),2);
    label=zeros(1,1);                               %   Find and save in 'label' class label from training and testing data sets
    label(1,1)=sorted_target(1,1);
    j=1;
    for i = 2:(NumberofTrainingData+NumberofTestingData)
        if sorted_target(1,i) ~= label(1,j)
            j=j+1;
            label(1,j) = sorted_target(1,i);
        end
    end
    number_class=j;
    NumberofOutputNeurons=number_class;

    %%%%%%%%%% Processing the targets of training
    temp_T=zeros(NumberofOutputNeurons, NumberofTrainingData);
    for i = 1:NumberofTrainingData
        for j = 1:number_class
            if label(1,j) == T(1,i)
                break;
            end
        end
        temp_T(j,i)=1;
    end
    T=temp_T*2-1;

    %%%%%%%%%% Processing the targets of testing
    temp_TV_T=zeros(NumberofOutputNeurons, NumberofTestingData);
    for i = 1:NumberofTestingData
        for j = 1:number_class
            if label(1,j) == TV.T(1,i)
                break;
            end
        end
        temp_TV_T(j,i)=1;
    end
    TV.T=temp_TV_T*2-1;
end                                                 %   end if of Elm_Type

%%%%%%%%%%% Calculate weights & biases
start_time_train=cputime;

%%%%%%%%%%% Random generate input weights InputWeight (w_i) and biases BiasofHiddenNeurons (b_i) of hidden neurons
InputWeight=rand(NumberofHiddenNeurons,NumberofInputNeurons)*2-1;
BiasofHiddenNeurons=rand(NumberofHiddenNeurons,1);
tempH=InputWeight*P;
clear P;                                            %   Release input of training data
ind=ones(1,NumberofTrainingData);
BiasMatrix=BiasofHiddenNeurons(:,ind);              %   Extend the bias matrix BiasofHiddenNeurons to match the dimension of H
tempH=tempH+BiasMatrix;

%%%%%%%%%%% Calculate hidden neuron output matrix H
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid
        H = 1 ./ (1 + exp(-tempH));
    case {'sin','sine'}
        %%%%%%%% Sine
        H = sin(tempH);
    case {'hardlim'}
        %%%%%%%% Hard Limit
        H = double(hardlim(tempH));
    case {'tribas'}
        %%%%%%%% Triangular basis function
        H = tribas(tempH);
    case {'radbas'}
        %%%%%%%% Radial basis function
        H = radbas(tempH);
    %%%%%%%% More activation functions can be added here
end
clear tempH;                                        %   Release the temporary array for calculation of hidden neuron output matrix H

%%%%%%%%%%% Calculate output weights OutputWeight (beta_i)
OutputWeight=pinv(H') * T';                        % implementation without regularization factor //refer to 2006 Neurocomputing paper
%OutputWeight=inv(eye(size(H,1))/C+H * H') * H * T';   % faster method 1 //refer to 2012 IEEE TSMC-B paper
%implementation; one can set regularization factor C properly in classification applications
%OutputWeight=(eye(size(H,1))/C+H * H') \ H * T';      % faster method 2 //refer to 2012 IEEE TSMC-B paper
%implementation; one can set regularization factor C properly in classification applications

%If you use faster methods or kernel method, PLEASE CITE in your paper properly:
%Guang-Bin Huang, Hongming Zhou, Xiaojian Ding, and Rui Zhang, "Extreme Learning Machine for Regression and Multi-Class Classification," submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence, October 2010.

end_time_train=cputime;
TrainingTime=end_time_train-start_time_train        %   Calculate CPU time (seconds) spent for training ELM

%%%%%%%%%%% Calculate the training accuracy
Y=(H' * OutputWeight)';                             %   Y: the actual output of the training data
if Elm_Type == REGRESSION
    TrainingAccuracy=sqrt(mse(T - Y))               %   Calculate training accuracy (RMSE) for regression case
end
clear H;

%%%%%%%%%%% Calculate the output of testing input
start_time_test=cputime;
tempH_test=InputWeight*TV.P;
TV.P=[];                %   Release input of testing data ('clear' cannot remove a struct field)
ind=ones(1,NumberofTestingData);
BiasMatrix=BiasofHiddenNeurons(:,ind);              %   Extend the bias matrix BiasofHiddenNeurons to match the dimension of H
tempH_test=tempH_test + BiasMatrix;
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid
        H_test = 1 ./ (1 + exp(-tempH_test));
    case {'sin','sine'}
        %%%%%%%% Sine
        H_test = sin(tempH_test);
    case {'hardlim'}
        %%%%%%%% Hard Limit
        H_test = hardlim(tempH_test);
    case {'tribas'}
        %%%%%%%% Triangular basis function
        H_test = tribas(tempH_test);
    case {'radbas'}
        %%%%%%%% Radial basis function
        H_test = radbas(tempH_test);
    %%%%%%%% More activation functions can be added here
end
TY=(H_test' * OutputWeight)';                       %   TY: the actual output of the testing data
end_time_test=cputime;
TestingTime=end_time_test-start_time_test           %   Calculate CPU time (seconds) spent by ELM predicting the whole testing data

if Elm_Type == REGRESSION
    TestingAccuracy=sqrt(mse(TV.T - TY));           %   Calculate testing accuracy (RMSE) for regression case
end
if Elm_Type == REGRESSION
    figure
    plot(TV.T,'LineWidth',1.2);
    hold on;
    plot(TY,'LineWidth',1.2)
    legend('True value','Estimated value')
end
if Elm_Type == CLASSIFIER
    %%%%%%%%%% Calculate training & testing classification accuracy
    MissClassificationRate_Training=0;
    MissClassificationRate_Testing=0;

    for i = 1 : size(T, 2)
        [x, label_index_expected]=max(T(:,i));
        [x, label_index_actual]=max(Y(:,i));
        if label_index_actual~=label_index_expected
            MissClassificationRate_Training=MissClassificationRate_Training+1;
        end
    end
    TrainingAccuracy=1-MissClassificationRate_Training/size(T,2)

    for i = 1 : size(TV.T, 2)
        [x, label_index_expected]=max(TV.T(:,i));
        [x, label_index_actual]=max(TY(:,i));
        if label_index_actual~=label_index_expected
            MissClassificationRate_Testing=MissClassificationRate_Testing+1;
        end
    end
    TestingAccuracy=1-MissClassificationRate_Testing/size(TV.T,2)
end
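A usage note on the commented-out "faster method" lines inside ELM.m: if you want the ridge-regularized output weights from the 2012 IEEE TSMC-B variant mentioned there, the line OutputWeight=pinv(H') * T'; can be replaced with something like the sketch below. The value of C is a user-chosen regularization factor that the original script does not define, so the choice here is purely illustrative.

C = 1;                                                % illustrative regularization factor; tune per application
OutputWeight = (eye(size(H,1))/C + H*H') \ (H*T');    % regularized least-squares solution for the output weights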

Test results (for this regression case, TrainingAccuracy and TestingAccuracy are RMSE values):

TrainingTime =0.1248

TestingTime = 0

TrainingAccuracy = 0.1158

TestingAccuracy =0.0074

2. A classification example

Main script main.m (the data files are diabetes_train and diabetes_test)

clear all;
close all;
clc;
[TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] =  ELM('diabetes_train', 'diabetes_test', 1, 20, 'sig')

ELM.m is the same one as above.

Simulation results (for this classification case, the accuracy values are correct-classification rates):

TrainingTime =0.0624

TestingTime = 0

TrainingAccuracy = 0.7674

TestingAccuracy =0.7865

