Derivation of the Cramér-Rao Lower Bound
In most problems a complete sufficient statistic simply cannot be found. Even so, one can still derive the fundamental limit on how well any estimator can do: the Cramér-Rao lower bound.
The Cramér-Rao Lower Bound (CRLB) is a lower bound on the MSE of an unbiased estimator, and it depends on:
- the statistical model $p(\mathbf{x}, \theta)$;
- the sample size $n$.
1. Derivation
We now derive the Cramér-Rao lower bound.
Suppose the sample size is $n$, the sample vector is $\mathbf{x}$, and the parameter to estimate is a scalar $\theta \in \mathbb{R}$. Let the estimator be $\hat\theta(\mathbf{x})$. Note that the CRLB does not depend on any particular choice of $\hat\theta$; it is the MSE achieved by the best possible $\hat\theta$, a limit every unbiased estimator must obey.
For an unbiased estimator $\hat\theta(\mathbf{x})$, we examine the minimum achievable MSE. Unbiasedness means:
$$\mathrm{E}(\hat\theta - \theta) = \int_{\mathbb{R}^n} (\hat\theta - \theta)\, p(\mathbf{x}, \theta)\, \mathrm{d}\mathbf{x} = 0$$
Differentiating with respect to $\theta$ (exchanging differentiation and integration):
$$\int_{\mathbb{R}^n} \left[\frac{\partial p(\mathbf{x}, \theta)}{\partial \theta}(\hat\theta - \theta) - p(\mathbf{x},\theta) \right] \mathrm{d}\mathbf{x} = 0$$
Since $\int_{\mathbb{R}^n} p(\mathbf{x},\theta)\,\mathrm{d}\mathbf{x} = 1$, this becomes:
$$\int_{\mathbb{R}^n} \frac{\partial p(\mathbf{x}, \theta)}{\partial \theta}(\hat\theta - \theta)\, \mathrm{d}\mathbf{x} = 1$$
Using the identity $\frac{\partial p}{\partial \theta} = p\,\frac{\partial \ln p}{\partial \theta}$ to bring in the log-likelihood:
$$\int_{\mathbb{R}^n} \left[(\hat\theta - \theta)\, \frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right] p(\mathbf{x}, \theta)\, \mathrm{d}\mathbf{x} = 1$$
Now apply the Cauchy-Schwarz inequality in its probabilistic form (the expectation of a product behaves as an inner product):
$$\left|\mathrm{E}(XY)\right| \leq \sqrt{\mathrm{E}(X^2)\,\mathrm{E}(Y^2)}$$
Taking $X = \hat\theta - \theta$ and $Y = \partial \ln p/\partial\theta$, and squaring both sides:
$$\mathrm{E}\left[(\hat\theta - \theta)^2\right] \cdot \mathrm{E}\left[\left(\frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right)^2 \right] \geq \left(\mathrm{E}\left[ (\hat\theta - \theta)\, \frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right]\right)^2 = 1$$
That is, for any unbiased $\hat\theta$:
$$\mathrm{MSE}(\hat\theta) = \mathrm{Var}(\hat\theta) \geq \left[\mathrm{E}\left(\frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right)^2 \right]^{-1}$$
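The quantity in brackets (the Fisher information) can also be estimated numerically, which gives a quick sanity check on the bound. The sketch below is a minimal illustration, not part of the derivation: it assumes a Bernoulli($\theta$) model, whose per-sample score is $x/\theta - (1-x)/(1-\theta)$, estimates $\mathrm{E}\left[(\partial \ln p/\partial\theta)^2\right]$ by Monte Carlo, and inverts it to get the CRLB $\theta(1-\theta)/n$.

```python
import random

def crlb_bernoulli_mc(theta=0.3, n=50, trials=20000, seed=1):
    """Monte Carlo estimate of the CRLB for n i.i.d. Bernoulli(theta)
    samples: estimate the Fisher information E[score^2], then invert it."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        xs = [1 if rng.random() < theta else 0 for _ in range(n)]
        # score = d/dtheta ln p(x, theta), summed over the i.i.d. samples
        score = sum(x / theta - (1 - x) / (1 - theta) for x in xs)
        acc += score ** 2
    info = acc / trials       # E[(d ln p / d theta)^2]
    return 1.0 / info         # CRLB

# Close to the exact value theta*(1-theta)/n = 0.3*0.7/50
print(crlb_bernoulli_mc())
```

For this model the sample mean is unbiased with variance exactly $\theta(1-\theta)/n$, so it attains the bound.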
2. Example
We illustrate how the CRLB is computed with an example.
Consider a common estimation problem: measuring a DC voltage whose observations are corrupted by AWGN. We take $n$ samples, i.i.d. across samples. The parametric model is:
$$p(\mathbf{x},\theta) = \frac{1}{(\sqrt{2\pi}\,\sigma)^n}\exp\left[-\frac{\sum_{i=1}^n (x_i - \theta)^2}{2\sigma^2}\right]$$
First take the log-likelihood:
$$\ln p(\mathbf{x}, \theta) = -n\ln(\sqrt{2\pi}\,\sigma) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \theta)^2$$
Then differentiate:
$$\frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta} = \frac{1}{\sigma^2}\sum_{i=1}^n (x_i - \theta)$$
Next compute the Fisher information, i.e. the expectation of the squared derivative of the log-likelihood. The cross terms below have zero expectation because the samples are independent and $\mathrm{E}(x_i - \theta) = 0$:
$$\begin{aligned} I(\mathbf{x}, \theta) &= \mathrm{E}\left[\left(\frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right)^2\right]\\ &= \frac{1}{\sigma^4}\,\mathrm{E}\left[\left(\sum_{i=1}^n (x_i - \theta)\right)^2\right] \\ &= \frac{1}{\sigma^4}\,\mathrm{E}\left[\sum_{i=1}^n (x_i - \theta)^2 + \sum_{j \neq k}(x_j - \theta)(x_k - \theta)\right]\\ &= \frac{n}{\sigma^2} \end{aligned}$$
So the CRLB is (this equals the MSE of the arithmetic mean, so the sample mean attains the bound here):
$$\mathrm{MSE}(\hat\theta) \geq \frac{\sigma^2}{n}$$
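The claim that the arithmetic mean attains this bound can be checked by simulation. A minimal sketch (the parameter values are arbitrary choices): it measures the Monte Carlo MSE of the sample mean and of the sample median, both unbiased for this symmetric model, against $\sigma^2/n$. The mean should sit at the bound; the median should sit strictly above it.

```python
import random
import statistics

def mse_of_estimators(theta=1.0, sigma=2.0, n=25, trials=20000, seed=7):
    """For the DC-level-in-AWGN model, return the Monte Carlo MSE of the
    sample mean, the MSE of the sample median, and the CRLB sigma^2/n."""
    rng = random.Random(seed)
    se_mean = se_median = 0.0
    for _ in range(trials):
        xs = [theta + rng.gauss(0.0, sigma) for _ in range(n)]
        se_mean += (sum(xs) / n - theta) ** 2
        se_median += (statistics.median(xs) - theta) ** 2
    return se_mean / trials, se_median / trials, sigma ** 2 / n

m_mean, m_median, crlb = mse_of_estimators()
print(m_mean, m_median, crlb)   # mean MSE ~ CRLB, median MSE above it
```

Asymptotically the median's variance is $\pi\sigma^2/(2n)$, about $1.57$ times the bound, which is why it shows up clearly even at moderate $n$.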
3. A Simplified Way to Compute the Fisher Information
The Fisher information admits an alternative expression:
$$I(\mathbf{x}, \theta) = \mathrm{E}\left[\left(\frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right)^2\right] = -\mathrm{E}\left[\frac{\partial^2}{\partial \theta^2} \ln p(\mathbf{x}, \theta)\right]$$
We now prove this identity.
First, since $p$ is a probability density:
$$\int_{\mathbb{R}^n} p(\mathbf{x}, \theta)\, \mathrm{d}\mathbf{x} = 1$$
Differentiate with respect to $\theta$:
$$\frac{\partial}{\partial \theta}\int_{\mathbb{R}^n} p(\mathbf{x}, \theta)\, \mathrm{d}\mathbf{x} = 0$$
Exchange differentiation and integration:
$$\int_{\mathbb{R}^n} \frac{\partial p(\mathbf{x}, \theta)}{\partial \theta}\,\mathrm{d}\mathbf{x} = 0$$
This integral is awkward as it stands: it is neither a probability nor an expectation. So we rewrite it into a meaningful form:
$$\int_{\mathbb{R}^n} \frac{\partial p(\mathbf{x}, \theta)}{\partial \theta}\,\mathrm{d}\mathbf{x} = \int_{\mathbb{R}^n} \left[\frac{\partial}{\partial \theta}\ln p(\mathbf{x}, \theta)\right] p(\mathbf{x},\theta)\, \mathrm{d}\mathbf{x} = 0$$
Differentiate once more:
$$\frac{\partial}{\partial \theta}\int_{\mathbb{R}^n} \frac{\partial p(\mathbf{x}, \theta)}{\partial \theta}\,\mathrm{d}\mathbf{x} = \frac{\partial}{\partial \theta}\int_{\mathbb{R}^n} \left[\frac{\partial}{\partial \theta}\ln p(\mathbf{x}, \theta)\right] p(\mathbf{x},\theta)\, \mathrm{d}\mathbf{x} = 0$$
Exchanging the order of differentiation and integration in the middle expression, then applying the product rule, gives:
$$\begin{aligned} \frac{\partial}{\partial \theta}\int_{\mathbb{R}^n} \left[\frac{\partial}{\partial \theta}\ln p(\mathbf{x}, \theta)\right] p(\mathbf{x},\theta)\, \mathrm{d}\mathbf{x} &= \int_{\mathbb{R}^n} \left[p(\mathbf{x}, \theta)\, \frac{\partial^2}{\partial \theta^2}\ln p(\mathbf{x}, \theta) + \frac{\partial p(\mathbf{x}, \theta)}{\partial \theta}\, \frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right] \mathrm{d}\mathbf{x}\\ &= \mathrm{E}\left[\frac{\partial^2}{\partial\theta^2} \ln p(\mathbf{x}, \theta)\right] + \int_{\mathbb{R}^n} \frac{\partial p(\mathbf{x}, \theta)}{\partial \theta}\, \frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\, \mathrm{d}\mathbf{x} \end{aligned}$$
The remaining term is again an awkward integral, so apply the first-derivative identity for the log-likelihood once more:
$$\begin{aligned} \int_{\mathbb{R}^n} \frac{\partial p(\mathbf{x}, \theta)}{\partial \theta}\, \frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\, \mathrm{d}\mathbf{x} &= \int_{\mathbb{R}^n} \left[\frac{\partial p(\mathbf{x}, \theta)}{\partial \theta}\, \frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\, \frac{p(\mathbf{x}, \theta)}{p(\mathbf{x}, \theta)}\right] \mathrm{d}\mathbf{x}\\ &= \int_{\mathbb{R}^n} \left(\frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right)^2 p(\mathbf{x}, \theta)\, \mathrm{d}\mathbf{x}\\ &= \mathrm{E}\left[\left(\frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right)^2\right] \end{aligned}$$
Since the sum of the two terms equals zero, we conclude:
$$I(\mathbf{x}, \theta) = \mathrm{E}\left[\left(\frac{\partial \ln p(\mathbf{x}, \theta)}{\partial \theta}\right)^2\right] = -\mathrm{E}\left[\frac{\partial^2}{\partial \theta^2} \ln p(\mathbf{x}, \theta)\right]$$
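The identity can also be checked numerically on the Gaussian example from section 2, where the second derivative of the log-likelihood is the constant $-n/\sigma^2$, so $-\mathrm{E}[\partial^2 \ln p/\partial\theta^2] = n/\sigma^2$ needs no simulation at all. A minimal Monte Carlo sketch (parameter values are arbitrary choices) compares that constant against the expectation of the squared score:

```python
import random

def fisher_two_ways(theta=0.5, sigma=1.5, n=10, trials=20000, seed=3):
    """Compare E[(d ln p/d theta)^2] (Monte Carlo) with
    -E[d^2 ln p/d theta^2] = n/sigma^2 (exact) for the Gaussian model."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        xs = [theta + rng.gauss(0.0, sigma) for _ in range(n)]
        score = sum(x - theta for x in xs) / sigma ** 2
        acc += score ** 2
    return acc / trials, n / sigma ** 2

mc, exact = fisher_two_ways()
print(mc, exact)   # the two estimates of I should agree closely
```

In practice the second form is usually the easier one to use, since differentiating twice often turns the random terms into constants, as it does here.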