"Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches": Chapter 10, Additional Topics in Kalman Filtering

  • Preface
  • 1. MATLAB Simulation: Example 10.1
  • 2. MATLAB Simulation: Example 10.2
  • 3. MATLAB Simulation: Example 10.3
  • 4. MATLAB Simulation: Example 10.4
  • 5. Summary

Preface

"Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches" is a specialist book on state estimation brought in from abroad and first published in 2006. Its author is Professor Dan Simon of the Department of Electrical and Computer Engineering at Cleveland State University. The material is mainly applied to motion estimation and control, and a certain amount of background knowledge is needed to follow this article.

The book is organized into four parts that together give a comprehensive treatment of the theory and methods of optimal state estimation. Part 1 covers the fundamentals: it reviews linear systems, probability theory, and stochastic processes, and introduces least squares, Wiener filtering, and how the statistical properties of the state propagate over time. Part 2 presents the Kalman filter and its equivalent forms in detail, along with its extensions, including Kalman filtering with correlated and colored noise, steady-state filtering, fading-memory filtering, and constrained Kalman filtering (once you have mastered the Kalman filter, you have essentially mastered half of the book). Part 3 covers H∞ filtering, including time-domain and frequency-domain H∞ filtering, mixed Kalman/H∞ filtering, and constrained H∞ filtering. Part 4 introduces filtering methods for nonlinear systems, including the extended Kalman filter, the unscented Kalman filter, and the particle filter. The book is suitable as a textbook for senior undergraduate or graduate courses on optimal state estimation, or as a reference for researchers working in related areas.

My main research direction in graduate school was not actually motion control, but in my junior year as an undergraduate I took part in a smart car competition, where the car's motion was controlled with PID. Relying on a half-understood, self-taught grasp of the subject, I was lucky enough to win an award. In graduate school my lab happened to have a project on self-balancing vehicles; although that was not my area, the lab regularly held internal seminars, so I gained a basic understanding of how motion control is applied in real projects. After starting work, I needed to do research related to motion estimation and control, which is how I came across this book.

Picking motion control back up this time, I hope to consolidate and relearn this material by simulating the example programs from the book and briefly documenting the process. The focus is on the simulation examples in each chapter. This post covers the four simulation examples of Chapter 10 (simulation platform: 32-bit MATLAB 2015b). Without further ado, let's get started!

1. MATLAB Simulation: Example 10.1

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Function: Example simulations for "Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches"
%Example 10.1: Multiple.m
%Environment: Win7, MATLAB 2015b
%Modi: C.S
%Date: 2022-05-02
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Multiple
% Multiple model Kalman filtering.
% Second order system.
% Corrected March 17, 2009, due to typos in Equations 10.35 and 10.37.
zeta = 0.1; % damping ratio
wn = [sqrt(4); sqrt(4.4); sqrt(4.8)]; % possible wn values
N = size(wn, 1); % number of parameter sets
pr = [0.1; 0.6; 0.3]; % a priori probabilities
% Compute the initial estimate of wn
wnhat = 0;
for i = 1 : N
    wnhat = wnhat + wn(i) * pr(i);
end
T = 0.1; % sample period
Qc = 1000; % continuous time process noise variance
R = diag([10 10]); % discrete time measurement noise covariance
H = eye(2); % measurement matrix
q = size(H, 1); % number of measurements
x = [0; 0]; % initial state
% Compute the alternative Lambda and phi matrices.
for i = 1 : N
    Ai = [0 1; -wn(i)^2 -2*zeta*wn(i)];
    Bi = [0; wn(i)^2];
    Fi = expm(Ai*T);
    F(:,:,i) = Fi;
    %Lambda(:,:,i) = (phii - eye(size(phii))) * inv(Fi) * Li;
    Pplus(:,:,i) = zeros(size(Fi));
    xhat(:,i) = x;
end
B = [0; wn(1)^2];
Q = B * Qc * B' * T; % discrete time process noise covariance
tf = 60; % Length of simulation
% Create arrays for later plotting
wnhatArray = [wnhat];
prArray = [pr];
for t = T : T : tf
    % Simulate the system.
    % The first parameter set is the true parameter set.
    w = sqrt(Q) * randn(2, 1);
    x = F(:,:,1) * x + w;
    y = H * x + sqrt(R) * randn(2, 1);
    % Run a separate Kalman filter for each parameter set.
    for i = 1 : N
        Pminus(:,:,i) = F(:,:,i) * Pplus(:,:,i) * F(:,:,i)';
        Pminus(:,:,i) = Pminus(:,:,i) + Q;
        K = Pminus(:,:,i) * H' * inv(H * Pminus(:,:,i) * H' + R);
        xhat(:,i) = F(:,:,i) * xhat(:,i);
        r = y - H * xhat(:,i); % measurement residual
        S = H * Pminus(:,:,i) * H' + R; % covariance of measurement residual
        pdf(i) = exp(-r'*inv(S)*r/2) / ((2*pi)^(q/2)) / sqrt(det(S));
        xhat(:,i) = xhat(:,i) + K * (y - H * xhat(:,i));
        Pplus(:,:,i) = (eye(2) - K * H) * Pminus(:,:,i) * (eye(2) - K * H)' + K * R * K';
    end
    % Compute the sum that appears in the denominator of the probability expression.
    Prsum = 0;
    for i = 1 : N
        Prsum = Prsum + pdf(i) * pr(i);
    end
    % Update the probability of each parameter set.
    for i = 1 : N
        pr(i) = pdf(i) * pr(i) / Prsum;
    end
    % Compute the best state estimate and the best parameter estimate.
    xhatbest = 0;
    wnhat = 0;
    for i = 1 : N
        xhatbest = xhatbest + pr(i) * xhat(:,i);
        wnhat = wnhat + pr(i) * wn(i);
    end
    % Save data for plotting.
    wnhatArray = [wnhatArray wnhat];
    prArray = [prArray pr];
end
close all;
t = 0 : T : tf;
figure;
plot(t, wnhatArray.^2);
title('Estimate of square of natural frequency', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Seconds');
figure;
plot(t, prArray(1,:), 'b-', t, prArray(2,:), 'k--', t, prArray(3,:), 'r:');
title('Probabilities of square of natural frequency', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Seconds');
legend('Probability that \omega_n^2 = 4', 'Probability that \omega_n^2 = 4.4', 'Probability that \omega_n^2 = 4.8');
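
The heart of this example is the Bayesian update of the probability of each candidate parameter set: each model's residual likelihood pdf(i) weights its prior probability pr(i), and the products are normalized so the probabilities sum to one. A minimal sketch of just that step is shown below (the helper name is mine, not from the book):

function pr = UpdateModeProbabilities(pdf, pr)
% Hypothetical helper isolating the mode-probability update performed in
% the main loop above: the posterior probability of parameter set i is
% proportional to its residual likelihood pdf(i) times its prior pr(i).
pr = pdf(:) .* pr(:);   % unnormalized posterior probabilities
pr = pr / sum(pr);      % normalize so the probabilities sum to one

The mixed quantities xhatbest and wnhat are then simply the probability-weighted averages of the individual filters' outputs.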


2. MATLAB Simulation: Example 10.2

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Function: Example simulations for "Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches"
%Example 10.2: Reduced.m
%Environment: Win7, MATLAB 2015b
%Modi: C.S
%Date: 2022-05-02
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Reduced
% Reduced order Kalman filter simulation.
% Estimate the first state of a two-state system.
F = [.9 .1; .2 .7]; % System matrix
%F = [1.1 -.1; .2 .7]; % System matrix
Lambda1 = 1;
Lambda2 = 0;
Q = 0.1; % Process noise covariance
H1 = 0;
H2 = 1;
R = 1; % Measurement noise covariance
Lambda = [Lambda1; Lambda2]; % Noise input matrix
H = [H1 H2]; % Measurement matrix
Ktol = 0.00001; % tolerance for convergence of gain to steady state
NumSteps = 50; % number of simulation steps
close all; % close all figures
% Iteratively compute the steady state reduced filter Kalman gain
P = 0;
PIt = 0;
PItOld = 0;
PItt = 0;
PIttOld = 0;
Ptt = 0;
Pt = 0;
Sigma = 0;
K = 0;
x = [0; 0]; % Initial state vector
xhat = [0; 0]; % Initial state estimate
xhatReduced = 0; % Initial reduced order state estimate
Pplus = [0 0; 0 0]; % Initial Kalman filter estimation error covariance
I = eye(2); % Identity matrix
ErrArray = [x - xhat]; % Array of Kalman filter estimation errors
ErrReducedArray = [x(1) - xhatReduced]; % Array of reduced order filter estimation errors
x1Array = [x(1)];
xhatReducedArray = [x(1)];
KArray = [];
PttArray = [];
PtArray = [];
PArray = [];
SigmaArray = [];
PfullArray = [];
randn('state', sum(100*clock)); % initialize the random number generator
% Try to find a steady state reduced order gain.
temp1 = H1 * F(1,2) + H2 * F(2,2);
temp2 = H1 * Lambda1 + H2 * Lambda2;
for k = 1 : 200
    A = H1 * F(1,1) * P * F(1,1)' * H1';
    A = A + H1 * F(1,1) * Sigma * temp1';
    A = A + temp1 * Sigma' * F(1,1) * H1';
    A = A - H1 * F(1,1) * PItt * temp1';
    A = A - temp1 * PItt' * F(1,1) * H1';
    A = A + H1 * F(1,1) * Pt * F(2,1) * H2';
    A = A + H2 * F(2,1) * Pt * F(1,1) * H1';
    A = A - H1 * F(1,1) * PIt * F(2,1) * H2';
    A = A - H2 * F(2,1) * PIt * F(1,1) * H1';
    A = A + temp1 * Ptt * temp1';
    A = A + temp1 * Sigma' * F(2,1) * H2';
    A = A + H2 * F(2,1) * Sigma * temp1';
    A = A + H2 * F(2,1) * Pt * F(2,1) * H2';
    A = A + R;
    A = A + temp2 * Q * temp2';
    B = F(1,1) * P * F(1,1) * H1';
    B = B + F(1,2) * Sigma' * F(1,1) * H1';
    B = B + F(1,1) * Sigma * temp1';
    B = B - F(1,2) * PItt' * F(1,1) * H1';
    B = B - F(1,1) * PItt * temp1';
    B = B + F(1,1) * Pt * F(2,1) * H2';
    B = B - F(1,1) * PIt * F(2,1) * H2';
    B = B + F(1,2) * Ptt * temp1';
    B = B + F(1,2) * Sigma' * F(2,1) * H2';
    B = B + Lambda1 * Q * temp2';
    KOld = K;
    K = B * inv(A);
    KArray = [KArray K];
    if (k > 3) & (abs((K - KOld) / K) < Ktol)
        break;
    end
    PIttOld = PItt;
    PItt = (1 - K * H1) * F(1,1) * (PIt * F(2,1) + PItt * F(2,2));
    PItt = PItt + K * (H1 * F(1,1) + H2 * F(2,1)) * (Pt * F(2,1) + Sigma * F(2,2));
    PItt = PItt + K * temp1 * (Sigma' * F(2,1) + Ptt * F(2,2));
    PItt = PItt + K * temp2 * Q * Lambda2';
    PItOld = PIt;
    PIt = (1 - K * H1) * F(1,1) * (PIt * F(1,1) + PIttOld * F(1,2));
    PIt = PIt + K * (H1 * F(1,1) + H2 * F(2,1)) * (Pt * F(1,1) + Sigma * F(1,2));
    PIt = PIt + K * temp1 * (Sigma' * F(1,1) + Ptt * F(1,2));
    PIt = PIt + K * temp2 * Q * Lambda1';
    temp3 = F(1,2) - K * H1 * F(1,2) - K * H2 * F(2,2);
    P = (1 - K * H1)^2 * F(1,1)^2 * P;
    P = P + 2 * (1 - K * H1) * F(1,1) * Sigma * temp3;
    P = P - 2 * (1 - K * H1) * F(1,1) * PIttOld * temp3;
    P = P - 2 * (1 - K * H1) * F(1,1) * Pt * F(2,1) * H2' * K';
    P = P + 2 * (1 - K * H1) * F(1,1) * PItOld * F(2,1) * H2' * K';
    P = P + temp3^2 * Ptt;
    P = P - 2 * temp3 * Sigma' * F(2,1) * H2' * K;
    P = P + K^2 * H2^2 * F(2,1)^2 * Pt;
    P = P + K^2 * R;
    P = P + (Lambda1 - K * H1 * Lambda1 - K * H2 * Lambda2)^2 * Q;
    PttOld = Ptt;
    PtOld = Pt;
    SigmaOld = Sigma;
    Ptt = F(2,1)^2 * Pt + 2 * F(2,1) * Sigma * F(2,2) + F(2,2)^2 * Ptt + Lambda2^2 * Q;
    Pt = F(1,1)^2 * Pt + 2 * F(1,1) * Sigma * F(1,2) + F(1,2)^2 * PttOld + Lambda1^2 * Q;
    Sigma = F(1,1) * PtOld * F(2,1) + F(1,1) * SigmaOld * F(2,2);
    Sigma = Sigma + F(1,2) * SigmaOld * F(2,1) + F(1,2) * PttOld * F(2,2);
    Sigma = Sigma + Lambda1 * Q * Lambda2;
    PttArray = [PttArray Ptt];
    PtArray = [PtArray Pt];
    PArray = [PArray P];
    SigmaArray = [SigmaArray Sigma];
end
figure; plot(KArray);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Iteration Number'); ylabel('Reduced Order Gain');
if abs((K - KOld) / K) > Ktol
    disp('Reduced order Kalman gain did not converge to a steady state value');
    return;
end
for k = 1 : NumSteps
    % System simulation
    x = F * x + Lambda * sqrt(Q) * randn;
    z = H * x + sqrt(R) * randn;
    % Full order Kalman filter simulation (time varying)
    Pminus = F * Pplus * F' + Lambda * Q * Lambda';
    KStd = Pminus * H' * inv(H * Pminus * H' + R);
    xhat = F * xhat;
    xhat = xhat + KStd * (z - H * xhat);
    Pplus = (I - KStd * H) * Pminus * (I - KStd * H)' + KStd * R * KStd';
    % Reduced order Kalman filter simulation (steady state)
    xhatReduced = F(1,1) * xhatReduced + K * (z - H1 * F(1,1) * xhatReduced);
    % Save data for plotting
    x1Array = [x1Array x(1)];
    xhatReducedArray = [xhatReducedArray xhatReduced];
    ErrArray = [ErrArray x-xhat];
    ErrReducedArray = [ErrReducedArray x(1)-xhatReduced];
    PfullArray = [PfullArray Pplus(1,1)];
end
% Compute estimation errors
Err = sqrt(norm(ErrArray(1,:))^2 / size(ErrArray,2));
disp(['Full order estimation std dev (analytical and experimental) = ',num2str(sqrt(Pplus(1,1))),', ',num2str(Err)]);
ErrReduced = sqrt(norm(ErrReducedArray(1,:))^2 / size(ErrReducedArray,2));
disp(['Reduced order estimation std dev (analytical and experimental) = ',num2str(sqrt(P)),', ',num2str(ErrReduced)]);
% Plot results
k = 1 : NumSteps;
figure; plot(PttArray); title('Ptt', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step');
figure; plot(PtArray); title('Pt', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step');
figure; plot(k, PArray(1:max(k)), 'r:');
hold on;
plot(k, PfullArray(1:max(k)), 'b');
title('P', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step'); ylabel('Estimation Error Variance');
legend('Reduced order variance', 'Full order variance');
figure; plot(SigmaArray); title('Sigma', 'FontSize', 12);
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step');
figure; plot(k,ErrArray(1,1:max(k)),'r:');
hold on;
plot(k,ErrReducedArray(1:max(k)), 'b');
set(gca,'FontSize',12); set(gcf,'Color','White');
xlabel('Time Step'); ylabel('Estimation Error');
legend('Std Kalman filter error', 'Reduced filter error');






>> Reduced
Full order estimation std dev (analytical and experimental) = 0.69709, 0.77517
Reduced order estimation std dev (analytical and experimental) = 0.72606, 0.73999

3. MATLAB Simulation: Example 10.3

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Function: Example simulations for "Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches"
%Example 10.3: Schmidt.m
%Environment: Win7, MATLAB 2015b
%Modi: C.S
%Date: 2022-05-02
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Schmidt
% Reduced order Kalman filter using consider states.
phix = 1; phiy = 1;
phi = [phix 0; 0 phiy];
Qx = 1; Qy = 0;
Q = [Qx 0; 0 Qy];
Hx = 1; Hy = 1;
H = [Hx Hy];
R = 1;
Pxminus = 1.6; Pxyminus = 0; Pyxminus = Pxyminus'; Pyminus = 1;
Pminus = [Pxminus Pxyminus; Pyxminus Pyminus];
x = [0; 0];
xhatminus = [0; 0];
xhatminusSchmidt = [0];
xhatErr = [];
xhatErrSchmidt = [];
tf = 20;
for t = 0 : tf
    % Simulate the measurement
    z = H * x + sqrt(R) * randn;
    % Simulate the full order filter
    K = Pminus * H' * inv(H * Pminus * H' + R);
    xhatplus = xhatminus + K * (z - H * xhatminus);
    Pplus = (eye(2) - K * H) * Pminus * (eye(2) - K * H)' + K * R * K';
    xhatminus = phi * xhatplus;
    Pminus = phi * Pplus * phi' + Q;
    % Simulate the Kalman-Schmidt filter
    alpha = Hx * Pxminus * Hx' + Hx * Pxyminus * Hy' + Hy * Pxyminus * Hx' + Hy * Pyminus * Hy' + R;
    Kx = (Pxminus * Hx' + Pxyminus * Hy') * inv(alpha);
    xhatplusSchmidt = xhatminusSchmidt + Kx * (z - Hx * xhatminusSchmidt);
    Pxplus = (eye(1) - Kx * Hx) * Pxminus - Kx * Hy * Pyxminus;
    Pxyplus = (eye(1) - Kx * Hx) * Pxyminus - Kx * Hy * Pyminus;
    Pyxplus = Pxyplus';
    Pyplus = Pyminus;
    xhatminusSchmidt = phix * xhatplusSchmidt;
    Pxminus = phix * Pxplus * phix' + Qx;
    Pxyminus = phix * Pxyplus * phiy';
    Pyxminus = Pxyminus';
    Pyminus = phiy * Pyplus * phiy' + Qy;
    % Save data for later
    xhatErr = [xhatErr; x(1) - xhatplus(1)];
    xhatErrSchmidt = [xhatErrSchmidt; x(1) - xhatplusSchmidt];
    % Simulate the state dynamics
    x = phi * x + [Qx * randn; Qy * randn];
end
t = 0 : tf;
close all;
plot(t, xhatErr(1:21), 'r-', t, xhatErrSchmidt(1:21), 'b--');
set(gca,'FontSize',12); set(gcf,'Color','White');
legend('Full Order Filter', 'Reduced Order Filter');
xlabel('Time Step'); ylabel('Estimation Error');
xhatErr = std(xhatErr);
xhatErrSchmidt = std(xhatErrSchmidt);
disp(['RMS Error = ', num2str(xhatErr), ' (full order filter), ', num2str(xhatErrSchmidt), ' (Schmidt filter)']);

>> Schmidt
RMS Error = 0.92247 (full order filter), 0.91047 (Schmidt filter)
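
For readability, the Schmidt (consider) measurement update buried in the loop above can be pulled out into a small stand-alone function. This is only a sketch with illustrative names; it mirrors the alpha, Kx, Pxplus, and Pxyplus lines of Schmidt.m:

function [xhat, Px, Pxy] = SchmidtUpdate(xhat, Px, Pxy, Py, Hx, Hy, R, z)
% One Schmidt (consider) filter measurement update. The consider states
% contribute to the residual covariance, but only the estimated states
% receive a correction.
n = length(xhat);
alpha = Hx*Px*Hx' + Hx*Pxy*Hy' + Hy*Pxy'*Hx' + Hy*Py*Hy' + R; % residual covariance
Kx = (Px*Hx' + Pxy*Hy') / alpha;                              % gain for the estimated states only
xhat = xhat + Kx*(z - Hx*xhat);                               % state update
Px = (eye(n) - Kx*Hx)*Px - Kx*Hy*Pxy';                        % estimated-state covariance update
Pxy = (eye(n) - Kx*Hx)*Pxy - Kx*Hy*Py;                        % cross-covariance update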

4. MATLAB Simulation: Example 10.4

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Function: Example simulations for "Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches"
%Example 10.4: Robust.m
%Environment: Win7, MATLAB 2015b
%Modi: C.S
%Date: 2022-05-02
%Intended to be called as a function, not run directly
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [x1_rms_robu, x1_rms_opt, x2_rms_robu, x2_rms_opt] = Robust(alpha, beta)
% Simulate a robust Kalman filter.
% INPUTS:
%   alpha = relative change in Q from nominal
%   beta = relative change in R from nominal
K_optimal = [0.02; 0.002]; % optimal Kalman gain for this problem
duration = 100; % simulation length
dt = 0.1; % step size
a = [1 dt;0 1]; % system matrix
b = [dt^2;dt]; % input matrix
B_w = eye(2); % noise input matrix
c = [1 0]; % measurement matrix
rho1 = 0.5; % relative weight on nominal Kalman filter performance
rho2 = 0.5; % relative weight on robustness
randn('state',sum(100*clock));
measnoise = 10; % std dev of measurement noise
accelnoise = 0.2; % std dev of process noise
R = measnoise^2; %measurement noise covariance
Q = accelnoise^2*[dt^4/4 dt^3/2;dt^3/2 dt^2]; %process noise covariance
x = [0; 0]; % initial state
xhat = x; % initial state estimate
xhat_new = x; % initial robust state estimate
x1hatnew = 0; x2hatnew = 0;
x1=0; x2=0;
x1hat=0; x2hat=0;
epsilon = 0.01;
zero_temp = eps*eye(2,2);
JArray = [];
K = K_optimal;
J = inf;
% Minimize the robust Kalman filter cost function
while (1 == 1)
    X_1 = DARE( (a-K*c*a)', zeros(2,2), (B_w-K*c*B_w)*Q*(B_w-K*c*B_w)', zero_temp, zeros(2,2), eye(2) );
    X_2 = DARE( (a-K*c*a)', zeros(2,2), (K*R*K'), zero_temp, zeros(2,2), eye(2) );
    J_old = J;
    Jnominal = trace(X_1)+trace(X_2);
    Jrobust = trace(X_1)^2+trace(X_2)^2;
    J = rho1*Jnominal + rho2*Jrobust;
    disp(['Jnominal = ', num2str(Jnominal), ', Jrobust = ', num2str(Jrobust), ', J = ',num2str(J)]);
    JArray = [JArray J];
    if (J > 0.999*J_old)
        break; % convergence
    end;
    % Partial of J with respect to X1 and X2
    par_J_X1 = rho1*eye(2)+2*rho2*trace(X_1)*eye(2);
    par_J_X2 = rho1*eye(2)+2*rho2*trace(X_2)*eye(2);
    % Change K so that the partial of X1 and X2 w/r to K can be computed
    % numerically.
    D_K_1 = [K(1,1)*(1 + epsilon);K(2,1)];
    D_K_2 = [K(1,1);K(2,1)*(1 + epsilon)];
    % Partial of X_1 w/r to K
    X_1_new_K1 = DARE( (a-D_K_1*c*a)', zeros(2,2), (B_w-D_K_1*c*B_w)*Q*(B_w-D_K_1*c*B_w)', zero_temp, zeros(2,2), eye(2) );
    X_1_delta_K1 = X_1 - X_1_new_K1;
    deltaX_1_K1 = [X_1_delta_K1(1,1) X_1_delta_K1(1,2) X_1_delta_K1(2,1) X_1_delta_K1(2,2)];
    X_1_new_K2 = DARE( (a-D_K_2*c*a)', zeros(2,2), (B_w-D_K_2*c*B_w)*Q*(B_w-D_K_2*c*B_w)', zero_temp, zeros(2,2), eye(2) );
    X_1_delta_K2 = X_1 - X_1_new_K2;
    deltaX_1_K2 = [X_1_delta_K2(1,1) X_1_delta_K2(1,2) X_1_delta_K2(2,1) X_1_delta_K2(2,2)];
    DeltaX_1_K = [deltaX_1_K1;deltaX_1_K2];
    % Partial of X_2 w/r to K
    X_2_new_K1 = DARE( (a-D_K_1*c*a)', zeros(2,2), (D_K_1*R*D_K_1'), zero_temp, zeros(2,2), eye(2) );
    X_2_delta_K1 = X_2 - X_2_new_K1;
    deltaX_2_K1 = [X_2_delta_K1(1,1) X_2_delta_K1(1,2) X_2_delta_K1(2,1) X_2_delta_K1(2,2)];
    X_2_new_K2 = DARE( (a-D_K_2*c*a)', zeros(2,2), (D_K_2*R*D_K_2'), zero_temp, zeros(2,2), eye(2) );
    X_2_delta_K2 = X_2 - X_2_new_K2;
    deltaX_2_K2 = [X_2_delta_K2(1,1) X_2_delta_K2(1,2) X_2_delta_K2(2,1) X_2_delta_K2(2,2)];
    DeltaX_2_K = [deltaX_2_K1;deltaX_2_K2];
    % Partial of J w/r to K
    temp1 = par_J_X1(1,1)*DeltaX_1_K(1,1)+par_J_X1(1,2)*DeltaX_1_K(1,2)+par_J_X1(2,1)*DeltaX_1_K(1,3)+...
        par_J_X1(2,2)*DeltaX_1_K(1,4);
    temp2 = par_J_X1(1,1)*DeltaX_1_K(2,1)+par_J_X1(1,2)*DeltaX_1_K(2,2)+par_J_X1(2,1)*DeltaX_1_K(2,3)+...
        par_J_X1(2,2)*DeltaX_1_K(2,4);
    temp3 = par_J_X2(1,1)*DeltaX_2_K(1,1)+par_J_X2(1,2)*DeltaX_2_K(1,2)+par_J_X2(2,1)*DeltaX_2_K(1,3)+...
        par_J_X2(2,2)*DeltaX_2_K(1,4);
    temp4 = par_J_X2(1,1)*DeltaX_2_K(2,1)+par_J_X2(1,2)*DeltaX_2_K(2,2)+par_J_X2(2,1)*DeltaX_2_K(2,3)+...
        par_J_X2(2,2)*DeltaX_2_K(2,4);
    Delta_J_K = [temp1;temp2]+[temp3;temp4];
    % Use gradient descent to compute a new K
    K_new = K + epsilon*Delta_J_K;
    K = K_new;
end
for t = 0 : dt : duration
    u = 1; % control input
    ProcessNoise = (1+alpha)*accelnoise*[(dt^2/2)*randn; dt*randn];
    x = a*x+b*u+ProcessNoise;
    x1 = [x1 x(1)];
    x2 = [x2 x(2)];
    MeasNoise = (1+beta)*measnoise*randn;
    y = c*x+MeasNoise;
    xhat = a*xhat+b*u;
    xhat_new = a*xhat_new+b*u;
    Inn1 = y-c*xhat;
    Inn2 = y-c*xhat_new;
    % Standard Kalman filter estimate
    xhat = xhat+K_optimal*Inn1;
    x1hat = [x1hat xhat(1)];
    x2hat = [x2hat xhat(2)];
    % Robust Kalman filter estimate
    xhat_new = xhat_new+K_new*Inn2;
    x1hatnew = [x1hatnew xhat_new(1)];
    x2hatnew = [x2hatnew xhat_new(2)];
end
x1_sqr = (x1-x1hatnew).^2;
x1_rms_robu = (mean(x1_sqr))^0.5;
x1_sqr_op = (x1-x1hat).^2;
x1_rms_opt = (mean(x1_sqr_op))^0.5;
disp(['RMS x1 error = ',num2str(x1_rms_robu),' (robust), ',num2str(x1_rms_opt),' (optimal)']);
x2_sqr = (x2-x2hatnew).^2;
x2_rms_robu = (mean(x2_sqr))^0.5;
x2_sqr_op = (x2-x2hat).^2;
x2_rms_opt = (mean(x2_sqr_op))^0.5;
disp(['RMS x2 error = ',num2str(x2_rms_robu),' (robust), ',num2str(x2_rms_opt),' (optimal)']);
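
As the header notes, Robust.m is written to be called rather than run directly: alpha and beta are the relative perturbations of Q and R away from their nominal values, and the function returns the RMS errors of both filters. A minimal usage sketch follows (the specific perturbation values are my own choice, not from the book):

% Nominal noise levels (alpha = beta = 0)
[x1_robu, x1_opt, x2_robu, x2_opt] = Robust(0, 0);
% Process and measurement noise both 50% larger than the nominal design values
[x1_robu, x1_opt, x2_robu, x2_opt] = Robust(0.5, 0.5);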

5. Summary

Motion control is used very widely in modern life: the automatic operation of intelligent equipment in smart factories, the autonomous driving technology that has become so popular in recent years, and the aerospace field all depend on it, so a solid command of state estimation theory is also very helpful for future employment. Be sure to build a strong foundation in matrix theory and probability theory. If you are interested in this chapter or want to study it thoroughly, I recommend working through Chapter 10 of the book, and if you can, consolidating it further by doing the exercises. Later I will discuss and expand on some of these topics based on my own understanding; everyone is welcome to learn and exchange ideas together.

Original book: Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches
