MATLAB Implementations of Various Entropies: Shannon Entropy, Conditional Entropy, Fuzzy Entropy, Sample Entropy, and More

  • 0 Introduction
  • 1 Shannon Entropy
    • 1.1 Basic principle
    • 1.2 Three properties of information entropy
    • 1.3 MATLAB implementation
  • 2 Entropy concepts for systems of two random variables
    • 2.1 Mutual Information
      • 2.1.1 Basic principle
      • 2.1.2 MATLAB implementation
    • 2.2 Joint Entropy
    • 2.3 Conditional Entropy
      • 2.3.1 Basic principle
      • 2.3.2 MATLAB implementation
    • 2.4 Relationships among mutual information, joint entropy, and conditional entropy
    • 2.5 Corrected Conditional Entropy
      • 2.5.1 Basic principle
      • 2.5.2 MATLAB implementation
  • 3 Entropy concepts for systems of two distributions
    • 3.1 Cross Entropy
    • 3.2 Relative Entropy
    • 3.3 Relationship between relative entropy and cross entropy
  • 4 Entropies for time-series analysis
    • 4.1 Fuzzy Entropy
      • 4.1.1 Basic principle
      • 4.1.2 MATLAB implementation
    • 4.2 Sample Entropy
      • 4.2.1 Basic principle
      • 4.2.2 MATLAB implementation
    • 4.3 Approximate Entropy
      • 4.3.1 Basic principle
      • 4.3.2 MATLAB implementation
    • 4.4 Permutation Entropy
      • 4.4.1 Basic principle
      • 4.4.2 MATLAB implementation
    • 4.5 Relationships among fuzzy, sample, approximate, and permutation entropy
  • 5 References

0 Introduction

Information entropy was originally defined for discrete variables; differential entropy later extended the concept to the continuous domain. On a given data set, however, the probability distribution of a continuous variable is usually unknown, so its probability density function cannot be obtained and the differential-entropy formula cannot be applied directly. How, then, can the entropy be computed?

  • A common approach is the histogram method: discretize the continuous variable by dividing its range into bins, place the observed values into the bins, estimate the frequency of each bin, and then apply the discrete entropy formula (see the sketch after this list). The difficulty is choosing the bin width, which usually has to be tuned by repeated computation.
  • Non-parametric entropy estimation methods avoid binning altogether; the main techniques are the kernel density estimator (KDE) and the k-nearest-neighbor (k-NN) estimator.
  • In comparison, the histogram method is not very accurate and kernel density estimation is computationally expensive, so the k-NN estimator has become the most widely used way to estimate the entropy of a continuous random variable.
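As a concrete illustration of the histogram method mentioned above, here is a minimal MATLAB sketch; the function name estimateEntropyHist and the choice of 32 bins in the example are illustrative assumptions, not part of any toolbox referenced in this article:

function H = estimateEntropyHist(x,numBins)
% Histogram-based estimate of the Shannon entropy (in nats) of a sample 'x'
% of a continuous variable, using 'numBins' uniform bins.
counts = histcounts(x,numBins);  % occupancy of each bin
p = counts/sum(counts);          % relative frequency of each bin
p = p(p > 0);                    % drop empty bins (0*log 0 is taken as 0)
H = -sum(p.*log(p));
end

For example, H = estimateEntropyHist(randn(1,1000),32) estimates the entropy of a standard normal sample; the result depends noticeably on the number of bins, which is exactly the weakness noted above.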

1 Shannon Entropy

In 1948, Shannon introduced the concept of Boltzmann entropy into information theory as a quantitative measure of the uncertainty, or information content, of a random variable.

1.1 Basic principle

For a discrete random variable X taking values x_1, ..., x_n with probabilities p(x_i), the Shannon entropy is defined as

H(X) = -\sum_{i=1}^{n} p(x_i) \log p(x_i)

The greater the uncertainty of a variable, the greater its entropy, and the more information is needed to resolve it.

1.2 Three properties of information entropy

Claude Shannon, the father of information theory, gave three properties of information entropy:
1. Monotonicity: the higher the probability of an event, the less information it carries;
2. Non-negativity: information entropy can be regarded as an extensive quantity, so non-negativity is a natural requirement;
3. Additivity: the total uncertainty of several independent random events occurring jointly can be expressed as the sum of the uncertainties of the individual events, which again reflects its extensive character (see the numerical check after this list).
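A quick numerical check of the additivity property: for two independent discrete variables the entropy of the joint distribution equals the sum of the marginal entropies. The sketch below uses only base MATLAB; the two distributions are arbitrary illustrative choices:

% Two independent discrete distributions:
p = [0.5 0.5];                        % fair coin
q = [0.2 0.3 0.5];                    % loaded three-sided die
pq = p.'*q;                           % joint distribution under independence
H = @(d) -sum(d(d>0).*log(d(d>0)));   % Shannon entropy in nats
fprintf('H(X)+H(Y) = %.4f, H(X,Y) = %.4f\n', H(p)+H(q), H(pq(:).'));

Both printed values agree (about 1.7228 nats), as additivity requires.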

1.3 MATLAB implementation

MATLAB code (histogram method):

function [SE,unique] = ShannonEn(series,L,num_int)
%{
Function which computes the Shannon Entropy (SE) of a time series of length
'N' using an embedding dimension 'L' and 'num_int' uniform intervals of
quantification. The algorithm presented by Porta et al. in "Measuring
regularity by means of a corrected conditional entropy in sympathetic
outflow" (PMID: 9485587) has been followed.
INPUT:
    series: the time series.
    L: the embedding dimension.
    num_int: the number of uniform intervals used in the quantification of the series.
OUTPUT:
    SE: the SE value.
    unique: the number of patterns which have appeared only once. This output
        is only useful for computing more complex entropy measures such as
        Conditional Entropy or Corrected Conditional Entropy. If you do not
        want to use it, put '~' in the call of the function.
PROJECT: Research Master in signal theory and bioengineering - University of Valladolid
DATE: 15/10/2014
AUTHOR: Jesús Monge Álvarez
%}
%% Checking the input parameters:
assert(~isempty(series),'The user must introduce a time series (first input).');
assert(~isempty(L),'The user must introduce an embedding dimension (second input).');
assert(~isempty(num_int),'The user must introduce a number of intervals (third input).');

%% Processing:
% Normalization of the input time series:
series = (series-mean(series))/std(series);
% We set the values of the parameters required for the quantification:
epsilon = (max(series)-min(series))/num_int;
partition = min(series):epsilon:max(series);
codebook = -1:num_int;
% Uniform quantification of the time series
% ('quantiz' is provided by the Communications Toolbox):
[~,quants] = quantiz(series,partition,codebook);
% The minimum value of the quantified signal passes from -1 to 0:
quants(quants == -1) = 0;
% We compose the patterns of length 'L' (one pattern per column):
N = length(quants);
X = quants(1:N);
for j = 1:L-1
    X = [X; quants(j+1:N) zeros(1,j)];
end
% We eliminate the last 'L-1' columns of 'X' since they are not real patterns:
X = X(:,1:N-L+1);
% We get the number of repetitions of each pattern:
num = ones(1,N-L+1); % This vector will contain the repetitions of each pattern
% This loop goes over the columns of 'X':
for j = 1:(N-L+1)
    for i = j+1:(N-L+1)
        if ~isnan(X(1,j)) && isequal(X(:,j),X(:,i))
            num(j) = num(j) + 1;   % The counter is incremented one unit
            X(:,i) = NaN(L,1);     % The repeated pattern is replaced by NaN values
        end
    end
end
% We keep those patterns which are not NaN:
aux = ~isnan(X(1,:));
% Now, we can compute the number of occurrences of each distinct pattern:
new_num = num(aux);
% We get the number of patterns which have appeared only once:
unique = sum(new_num == 1);
% We compute the probability of each pattern:
p_i = new_num/(N-L+1);
% Finally, the Shannon Entropy is computed as:
SE = (-1) * (p_i * log(p_i).');
end % End of the 'ShannonEn.m' function
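A minimal usage sketch (the series and parameter values are illustrative; note again that quantiz requires the Communications Toolbox):

rng(1);                          % for reproducibility
x = randn(1,500);                % white-noise test series
[SE,uniq] = ShannonEn(x,2,6);    % embedding dimension 2, 6 quantification intervals
fprintf('Shannon entropy: %.4f (%d unique patterns)\n',SE,uniq);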

2 Entropy concepts for systems of two random variables

2.1 Mutual Information

2.1.1 Basic principle

In probability theory and information theory, the mutual information (MI) of two random variables, also called transinformation, is a measure of the variables' mutual dependence. Unlike the correlation coefficient, mutual information is not limited to real-valued random variables; it is more general and measures how similar the joint distribution p(X,Y) is to the product of the marginal distributions p(X)p(Y). It is the expected value of the pointwise mutual information (PMI), and its most common unit is the bit. For discrete variables it is defined as

I(X;Y) = \sum_{x}\sum_{y} p(x,y) \log \frac{p(x,y)}{p(x)p(y)}

Mutual information measures the average amount of information shared by two random variables: the higher the mutual information, the stronger the dependence between the variables. When the variables are independent, the dependence is minimal and the mutual information is 0.

2.1.2 MATLAB implementation

1. Kernel density estimation (kernel density estimator, KDE)

function [Ixy,lambda] = MutualInfo(X,Y)
%%
% Estimating the Mutual Information between X and Y following
% Moon et al. (1995).
% Input parameters
%   X and Y : data column vectors (nL*1, where nL is the record length)
% Output
%   Ixy    : Mutual Information
%   lambda : scaled mutual information, comparable to a
%            cross-correlation coefficient
%
% Programmed by
% Taesam Lee, Ph.D., Research Associate
% INRS-ETE, Québec
% Hydrologist
% Oct. 2010
X = X'; Y = Y';   % convert the column-vector inputs to row vectors
d = 2;
nx = length(X);
% Gaussian reference-rule bandwidth (Moon et al., 1995):
hx = (4/(d+2))^(1/(d+4))*nx^(-1/(d+4));
Xall = [X;Y];     % 2*nL matrix of paired records
sum1 = 0;
for is = 1:nx
    pxy = p_mkde([X(is);Y(is)],Xall,hx);  % joint density at the sample point
    px  = p_mkde(X(is),X,hx);             % marginal density of X
    py  = p_mkde(Y(is),Y,hx);             % marginal density of Y
    sum1 = sum1 + log(pxy/(px*py));
end
Ixy = sum1/nx;
lambda = sqrt(1-exp(-2*Ixy));
end

%% Multivariate kernel density estimate using a normal kernel
% with the same smoothing parameter h.
% Input: X : dim * number of records
%        x : the data point at which to estimate the mkde (d*1 vector)
%        h : smoothing parameter
function pxy = p_mkde(x,X,h)
[d,N] = size(X);
Sxy = cov(X');        % sample covariance of the records
invS = inv(Sxy);
detS = det(Sxy);
acc = 0;              % accumulator (avoids shadowing the built-in 'sum')
for ix = 1:N
    p2 = (x-X(:,ix))'*invS*(x-X(:,ix));
    acc = acc + 1/sqrt((2*pi)^d*detS)*exp(-p2/(2*h^2));
end
pxy = 1/(N*h^d)*acc;
end

%% Reference
% Moon, Y. I., B. Rajagopalan, and U. Lall (1995),
% Estimation of Mutual Information Using Kernel Density Estimators,
% Phys. Rev. E, 52(3), 2318-2321.
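A minimal usage sketch with two correlated column vectors (the data are illustrative):

rng(2);
x = randn(1000,1);
y = 0.7*x + 0.3*randn(1000,1);   % y depends linearly on x
[Ixy,lambda] = MutualInfo(x,y);
fprintf('I(X;Y) = %.4f nats, lambda = %.4f\n',Ixy,lambda);

For independent inputs Ixy is close to 0; lambda rescales the result into a [0,1] range comparable to a cross-correlation coefficient.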

2.2 Joint Entropy

The joint entropy measures the total uncertainty of a pair of discrete random variables (X,Y) and is defined as

H(X,Y) = -\sum_{x}\sum_{y} p(x,y) \log p(x,y)

2.3 Conditional Entropy

Conditional entropy is the mathematical expectation, over the joint symbol set XY, of the conditional self-information. Given the random variable X, the conditional entropy of the random variable Y is defined as

H(Y|X) = -\sum_{x}\sum_{y} p(x,y) \log p(y|x)

Conditional entropy is a definite value. It represents the uncertainty about the source that remains at the sink after X has been received, which is caused by distortion during transmission. H(X|Y) is sometimes called the channel equivocation, also known as the loss entropy, while the conditional entropy H(Y|X) is called the noise entropy.

2.3.1 Basic principle

2.3.2 MATLAB implementation

MATLAB code:

function [CE,unique] = CondEn(series,L,num_int)
%{
Function which computes the Conditional Entropy (CE) of a time series of
length 'N' using an embedding dimension 'L' and 'num_int' uniform intervals
of quantification. The algorithm presented by Porta et al. in "Measuring
regularity by means of a corrected conditional entropy in sympathetic
outflow" (PMID: 9485587) has been followed.
INPUT:
    series: the time series.
    L: the embedding dimension.
    num_int: the number of uniform intervals used in the quantification of the series.
OUTPUT:
    CE: the CE value.
    unique: the number of patterns which have appeared only once. This output
        is only useful for computing more complex entropy measures such as
        Corrected Conditional Entropy. If you do not want to use it, put '~'
        in the call of the function.
PROJECT: Research Master in signal theory and bioengineering - University of Valladolid
DATE: 15/10/2014
AUTHOR: Jesús Monge Álvarez
%}
%% Checking the input parameters:
assert(~isempty(series),'The user must introduce a time series (first input).');
assert(~isempty(L),'The user must introduce an embedding dimension (second input).');
assert(~isempty(num_int),'The user must introduce a number of intervals (third input).');

%% Processing:
% First, we call the Shannon Entropy function with 'L' as embedding dimension:
[SE,unique] = ShannonEn(series,L,num_int);
% Then with 'L-1' as embedding dimension:
[SE_1,~] = ShannonEn(series,(L-1),num_int);
% The Conditional Entropy is defined as the difference of the two entropies:
CE = SE - SE_1;
end % End of the 'CondEn.m' function
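A minimal usage sketch (assumes ShannonEn.m from Section 1.3 is on the MATLAB path):

x = randn(1,500);
[CE,uniq] = CondEn(x,3,6);   % CE(3) = SE(3) - SE(2)
fprintf('Conditional entropy: %.4f\n',CE);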

2.4 Relationships among mutual information, joint entropy, and conditional entropy

H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y)

I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) = H(X) + H(Y) - H(X,Y)

The relationships among these quantities can be represented with a Venn diagram.

2.5 Corrected Conditional Entropy

2.5.1 Basic principle

Following Porta et al., the corrected conditional entropy adds to CE(L) a correction term perc(L)·E(1), where perc(L) is the fraction of length-L patterns that occur only once in the series and E(1) is the Shannon entropy at L = 1. The correction compensates for the spurious decrease of the conditional entropy with growing embedding dimension on short series, and the best estimate is taken as the minimum of CCE(L) over L.

2.5.2 MATLAB implementation

function [CCE_min] = CorrecCondEn(series,Lmax,num_int)
%{
Function which computes the Corrected Conditional Entropy (CCE) of a time
series of length 'N' using embedding dimensions up to 'Lmax' and 'num_int'
uniform intervals of quantification. The algorithm presented by Porta et al.
in "Measuring regularity by means of a corrected conditional entropy in
sympathetic outflow" (PMID: 9485587) has been followed.
INPUT:
    series: the time series.
    Lmax: the maximum embedding dimension employed.
    num_int: the number of uniform intervals used in the quantification of the series.
OUTPUT:
    CCE_min: the CCE value. The best estimation of the CCE is the minimum
        value of all the CCE that have been computed.
PROJECT: Research Master in signal theory and bioengineering - University of Valladolid
DATE: 15/10/2014
AUTHOR: Jesús Monge Álvarez
%}
%% Checking the input parameters:
assert(~isempty(series),'The user must introduce a time series (first input).');
assert(~isempty(Lmax),'The user must introduce a maximum embedding dimension (second input).');
assert(~isempty(num_int),'The user must introduce a number of intervals (third input).');

%% Processing:
N = length(series);
% We will use this for the correction term (L=1):
[E_est_1,~] = ShannonEn(series,1,num_int);
% The first position of the vector storing the CCE is initialized to a
% large number so that the minimum is not taken at L=1 (first iteration):
CCE = NaN(1,Lmax); CCE(1) = 100;
CE = NaN(1,Lmax);
uniques = NaN(1,Lmax);
correc_term = NaN(1,Lmax);
for L = 2:Lmax
    % First, we compute the CE for the current embedding dimension 'L':
    [CE(L),uniques(L)] = CondEn(series,L,num_int);
    % Second, we compute the percentage of patterns which are not repeated:
    perc_L = uniques(L)/(N-L+1);
    correc_term(L) = perc_L*E_est_1;
    % Third, the CCE is the CE plus the correction term:
    CCE(L) = CE(L) + correc_term(L);
end
% Finally, the best estimation of the CCE is the minimum of all the
% CCE values that have been computed:
CCE_min = min(CCE);
end % End of the 'CorrecCondEn.m' function
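A minimal usage sketch (assumes ShannonEn.m and CondEn.m are on the MATLAB path):

x = randn(1,500);
CCE_min = CorrecCondEn(x,5,6);   % searches embedding dimensions 2..5
fprintf('Corrected conditional entropy: %.4f\n',CCE_min);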

3 Entropy concepts for systems of two distributions

3.1 Cross Entropy

For two distributions p (the true distribution) and q (an estimated distribution) over the same support, the cross entropy is defined as

H(p,q) = -\sum_{x} p(x) \log q(x)

3.2 Relative Entropy

The relative entropy, or Kullback-Leibler (KL) divergence, of q from p is defined as

D_{KL}(p \| q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}

It is non-negative and equals zero only when p = q.

3.3 Relationship between relative entropy and cross entropy

The two are related by H(p,q) = H(p) + D_{KL}(p \| q): the cross entropy equals the entropy of p plus the divergence of q from p. Since H(p) does not depend on q, minimizing the cross entropy with respect to q is equivalent to minimizing the KL divergence.
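A small numerical check of this identity in MATLAB (the two distributions are arbitrary illustrative choices):

p = [0.1 0.4 0.5];            % "true" distribution
q = [0.3 0.3 0.4];            % "model" distribution
Hp  = -sum(p.*log(p));        % entropy of p
Hpq = -sum(p.*log(q));        % cross entropy H(p,q)
KL  =  sum(p.*log(p./q));     % relative entropy D_KL(p||q)
fprintf('H(p,q) = %.4f, H(p) + D_KL = %.4f\n',Hpq,Hp+KL);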

4 Entropies for time-series analysis

The following four entropies are all indicators of the complexity of a time series.

4.1 Fuzzy Entropy

4.1.1 Basic principle

Fuzzy entropy follows the same template as sample entropy, but replaces the hard tolerance threshold with a smooth exponential membership function exp(-d^n / r), where d is the distance between two patterns, so that the degree of similarity varies continuously between 0 and 1.

4.1.2 MATLAB implementation

MATLAB code:

function [FuzzyEn] = FuzzyEn(series,dim,r,n)
%{
Function which computes the Fuzzy Entropy (FuzzyEn) of a time series. The
algorithm presented by Chen et al. in "Characterization of surface EMG
signal based on fuzzy entropy" (DOI: 10.1109/TNSRE.2007.897025) has been
followed.
INPUT:
    series: the time series.
    dim: the embedding dimension employed in the algorithm.
    r: the width of the fuzzy exponential function.
    n: the step of the fuzzy exponential function.
OUTPUT:
    FuzzyEn: the FuzzyEn value.
PROJECT: Research Master in signal theory and bioengineering - University of Valladolid
DATE: 11/10/2014
AUTHOR: Jesús Monge Álvarez
%}
%% Checking the input parameters:
assert(~isempty(series),'The user must introduce a time series (first input).');
assert(~isempty(dim),'The user must introduce an embedding dimension (second input).');
assert(~isempty(r),'The user must introduce a width for the fuzzy exponential function: r (third input).');
assert(~isempty(n),'The user must introduce a step for the fuzzy exponential function: n (fourth input).');

%% Processing:
% Normalization of the input time series:
series = (series-mean(series))/std(series);
N = length(series);
phi = zeros(1,2);
% Value of 'r' in case of a non-normalized time series:
% r = r*std(series);
for j = 1:2
    m = dim+j-1; % 'm' is the embedding dimension used in each iteration
    % Pre-allocation of the variables for computational efficiency:
    patterns = zeros(m,N-m+1);
    aux = zeros(1,N-m+1);
    % First, we compose the patterns. The columns of the matrix 'patterns'
    % will be the (N-m+1) patterns of length 'm':
    if m == 1 % If the embedding dimension is 1, each sample is a pattern
        patterns = series;
    else % Otherwise, we build the patterns of length 'm':
        for i = 1:m
            patterns(i,:) = series(i:N-m+i);
        end
    end
    % We subtract the baseline (mean) of each pattern from itself:
    for i = 1:N-m+1
        patterns(:,i) = patterns(:,i) - mean(patterns(:,i));
    end
    % This loop goes over the columns of matrix 'patterns':
    for i = 1:N-m
        % Second, we compute the maximum absolute distance between the
        % scalar components of the current pattern and the rest:
        if m == 1
            dist = abs(patterns - repmat(patterns(:,i),1,N-m+1));
        else
            dist = max(abs(patterns - repmat(patterns(:,i),1,N-m+1)));
        end
        % Third, we get the degree of similarity:
        simi = exp(((-1)*((dist).^n))/r);
        % We average all the degrees of similarity for the current pattern:
        aux(i) = (sum(simi)-1)/(N-m-1); % We subtract 1 from the sum to avoid the self-comparison
    end
    % Finally, we get the 'phi' parameter as the mean of the first
    % 'N-m' averaged degrees of similarity:
    phi(j) = sum(aux)/(N-m);
end
FuzzyEn = log(phi(1)) - log(phi(2));
end % End of the 'FuzzyEn.m' function
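A minimal usage sketch (m = 2, r = 0.2 and n = 2 are common choices in the literature; illustrative here):

x = randn(1,500);
fe = FuzzyEn(x,2,0.2,2);
fprintf('Fuzzy entropy: %.4f\n',fe);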

4.2 Sample Entropy

4.2.1 Basic principle

Sample entropy is the negative logarithm of the conditional probability that two sequences that are similar for m points (within tolerance r, self-matches excluded) remain similar when one more point is added. Unlike approximate entropy, it excludes self-comparisons and therefore depends far less on the record length.

4.2.2 MATLAB implementation

MATLAB code:

function [SampEn] = SampEn(series,dim,r)
%{
Function which computes the Sample Entropy (SampEn) of a time series. The
algorithm presented by Richman and Moorman in "Physiological time-series
analysis using approximate entropy and sample entropy" (PMID: 10843903) has
been followed.
INPUT:
    series: the time series.
    dim: the embedding dimension employed in the SampEn algorithm.
    r: the tolerance employed in the SampEn algorithm.
OUTPUT:
    SampEn: the SampEn value.
PROJECT: Research Master in signal theory and bioengineering - University of Valladolid
DATE: 10/10/2014
AUTHOR: Jesús Monge Álvarez
%}
%% Checking the input parameters:
assert(~isempty(series),'The user must introduce a time series (first input).');
assert(~isempty(dim),'The user must introduce an embedding dimension (second input).');
assert(~isempty(r),'The user must introduce a tolerance: r (third input).');

%% Processing:
% Normalization of the input time series:
series = (series-mean(series))/std(series);
N = length(series);
result = zeros(1,2);
% Value of 'r' in case of a non-normalized time series:
% r = r*std(series);
for j = 1:2
    m = dim+j-1; % 'm' is the embedding dimension used in each iteration
    % Pre-allocation of the variables for computational efficiency:
    patterns = NaN(m,N-m+1);
    count = NaN(1,N-m);
    % First, we compose the patterns. The columns of the matrix 'patterns'
    % will be the (N-m+1) patterns of length 'm':
    if m == 1 % If the embedding dimension is 1, each sample is a pattern
        patterns = series;
    else % Otherwise, we build the patterns of length 'm':
        for i = 1:m
            patterns(i,:) = series(i:N-m+i);
        end
    end
    % Second, we count the patterns whose distance is below the tolerance.
    % This loop goes over the columns of matrix 'patterns':
    for i = 1:N-m
        % We compute the maximum absolute distance between each pattern and the rest:
        if m == 1
            temp = abs(patterns - repmat(patterns(:,i),1,N-m+1));
        else
            temp = max(abs(patterns - repmat(patterns(:,i),1,N-m+1)));
        end
        % We determine which elements of 'temp' are smaller than the tolerance:
        bool = (temp <= r);
        % We sum the number of patterns which are similar to the current one:
        count(i) = (sum(bool)-1); % We subtract 1 to avoid the self-comparison
    end
    % Third, we average the number of similar patterns:
    count = count/(N-m-1);
    % Finally, we take the mean over all patterns:
    result(j) = mean(count);
end
SampEn = log(result(1)/result(2));
end % End of the 'SampEn.m' function
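A minimal usage sketch (m = 2 and r = 0.2 are the usual defaults; since the function normalizes the series internally, r is expressed in units of the standard deviation):

x = randn(1,500);
se = SampEn(x,2,0.2);
fprintf('Sample entropy: %.4f\n',se);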

4.3 Approximate Entropy

4.3.1 Basic principle

Approximate entropy quantifies the unpredictability of fluctuations in a time series: for each template of length m it counts how many windows stay within tolerance r (including the self-match), repeats the count for length m+1, and takes the difference of the logarithmic averages. Small values indicate a predictable series; large values indicate an unpredictable one.

4.3.2 MATLAB implementation

MATLAB code:

function [apen] = ApproxEn(n,r,a)
%% Code for computing the approximate entropy of a time series.
% Approximate Entropy is a measure of complexity: it quantifies the
% unpredictability of fluctuations in a time series.
% To run this function, type: ApproxEn(window_length,similarity_measure,data)
% e.g. ApproxEn(5,0.5,a)
% window length = length of the window considered in each iteration
% similarity measure = tolerance on the distance between elements
% data set = data vector
% Small values of apen mean the data is predictable, whereas higher
% values mean the data is unpredictable.
% Concept borrowed from http://www.physionet.org/physiotools/ApEn/
% Author: Avinash Parnandi, parnandi@usc.edu, http://robotics.usc.edu/~parnandi/

data = a;
correlation = zeros(1,2);
for m = n:n+1 % run it twice, with window sizes differing by 1
    set = 0;
    count = 0;
    counter = zeros(1,length(data)-m+1);
    for i = 1:(length(data)-m+1)
        current_window = data(i:i+m-1); % the template compared with all other windows
        for j = 1:(length(data)-m+1)
            sliding_window = data(j:j+m-1); % window compared against current_window
            % Compare the two windows element by element
            % (a norm measure could also be used and would perform better):
            for k = 1:m
                if (abs(current_window(k)-sliding_window(k)) > r) && (set == 0)
                    set = 1; % set to 1 as soon as any pair of elements is farther apart than 'r'
                end
            end
            if (set == 0)
                count = count + 1; % counts how many windows are similar to current_window
            end
            set = 0; % resetting 'set'
        end
        % After the inner loop, 'count' holds the number of windows within 'r' of template 'i':
        counter(i) = log(count/(length(data)-m+1));
        count = 0;
    end
    correlation(m-n+1) = sum(counter)/(length(data)-m+1);
end
apen = correlation(1) - correlation(2);
% apen = log(correlation(1)/correlation(2));
end % End of the 'ApproxEn.m' function
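A minimal usage sketch; note this function's argument order (window length first, then the tolerance, then the data vector) and that, unlike the previous functions, the tolerance is absolute, so it is usually scaled by the standard deviation of the data:

a = randn(1,300);
ae = ApproxEn(2,0.2*std(a),a);   % m = 2, r = 0.2 times the standard deviation
fprintf('Approximate entropy: %.4f\n',ae);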

4.4 Permutation Entropy

4.4.1 Basic principle

Permutation entropy maps each length-L window of the series to its ordinal pattern (the permutation that sorts its values) and computes the Shannon entropy of the relative frequencies of these patterns. Because it uses only the order relations among the samples, it is fast to compute and robust to observational noise.

4.4.2 MATLAB implementation

MATLAB code:

function [PermEn] = PermEn(series,L)
%{
Function which computes the Permutation Entropy (PermEn) of a time series.
The algorithm presented by Bandt and Pompe in "Permutation entropy: a
natural complexity measure for time series"
(DOI: http://dx.doi.org/10.1103/PhysRevLett.88.174102) has been followed.
INPUT:
    series: the time series.
    L: the embedding dimension.
OUTPUT:
    PermEn: the PermEn value.
PROJECT: Research Master in signal theory and bioengineering - University of Valladolid
DATE: 15/10/2014
AUTHOR: Jesús Monge Álvarez
%}
%% Checking the input parameters:
assert(~isempty(series),'The user must introduce a time series (first input).');
assert(~isempty(L),'The user must introduce an embedding dimension (second input).');

%% Processing:
N = length(series);
% First, we compose the patterns of length 'L':
X = series(1:N);
for j = 1:L-1
    X = [X; series(j+1:N) zeros(1,j)];
end
% We eliminate the last 'L-1' columns of 'X' since they are not real patterns:
X = X(:,1:N-L+1); % 'X' is an Lx(N-L+1) matrix whose columns are the patterns
% Second, we get the permutation vectors:
shift = 0:(L-1); % The original shift vector of length 'L'
perm = NaN((N-L+1),L); % This matrix will contain the permutation vector of each pattern
aux = NaN(1,L); % Auxiliary variable
% This loop goes over the columns of matrix 'X':
for j = 1:(N-L+1)
    Y = X(:,j).';
    % We sort the pattern in ascending order:
    Z = sort(Y);
    % This loop goes over the 'Z' vector:
    for i = 1:L
        if (i == 1) % The first element cannot be repeated
            aux(i) = shift(find(Y == Z(i),1));
        else
            if (Z(i) == Z(i-1)) % Two consecutive values are tied:
                aux(i) = aux(i-1) + 1;
            else % They are not tied:
                aux(i) = shift(find(Y == Z(i),1));
            end
        end
    end
    % We save the permutation vector of the current pattern:
    perm(j,:) = aux;
    % Cleaning the auxiliary variable each iteration:
    aux = NaN(1,L);
end
% Third, we compute the relative frequency of each permutation vector:
num = zeros(1,N-L+1); % This vector will contain the repetitions of each permutation vector
% This loop goes over the rows of 'perm':
for j = 1:(N-L+1)
    if any(isnan(perm(j,:)))
        % If it is NaN, this permutation vector has already been counted:
        continue;
    else
        % The permutation vector has not been counted yet. It appears at least once:
        num(j) = num(j) + 1;
        % This loop goes over the rest of the permutation vectors to see
        % whether they are equal to the current one:
        for i = j+1:(N-L+1)
            if ~any(isnan(perm(i,:))) && isequal(perm(j,:),perm(i,:))
                num(j) = num(j) + 1;  % The counter is incremented one unit
                perm(i,:) = NaN(1,L); % The permutation vector is replaced by NaN values
            end
        end
    end
end
% Finally, we get the probability of each permutation vector:
num = num/(N-L+1);
% We keep those which are different from zero:
num_dist = num(num ~= 0);
% We compute the Shannon entropy of the permutation vectors:
PermEn = (-1) * (num_dist * log(num_dist)');
end % End of the 'PermEn.m' function
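A minimal usage sketch (Bandt and Pompe recommend embedding dimensions between 3 and 7 in practice):

x = randn(1,500);
pe = PermEn(x,4);
fprintf('Permutation entropy: %.4f\n',pe);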

4.5 Relationships among fuzzy, sample, approximate, and permutation entropy

All four are dimensionless indices of signal complexity: the larger the entropy, the more complex the series. Approximate entropy is biased by the self-matches it counts; sample entropy removes that bias and depends less on the record length; fuzzy entropy further replaces the hard tolerance threshold with a smooth membership function, giving more stable estimates on short or noisy series; permutation entropy uses only the ordinal structure of the data, which makes it the fastest of the four. The sketch below compares them on a regular and an irregular signal.
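A comparison sketch (assumes the four functions above are on the MATLAB path; all parameter choices are illustrative):

rng(3);
t = 1:500;
signals = {sin(2*pi*t/50), randn(1,500)};   % a regular and an irregular series
names = {'sine ','noise'};
for k = 1:2
    x = signals{k};
    fprintf('%s: ApEn %.3f  SampEn %.3f  FuzzyEn %.3f  PermEn %.3f\n', ...
        names{k}, ApproxEn(2,0.2*std(x),x), SampEn(x,2,0.2), ...
        FuzzyEn(x,2,0.2,2), PermEn(x,4));
end

The noise series should score markedly higher on all four measures.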

5 References

1. Computing the entropy and mutual information of continuous random variables on a data set: the k-nearest-neighbor estimation method
