Chapter 9. Integration with Respect to a Probability Measure

First-semester graduate course in Statistics at Nanjing Audit University: Advanced Probability Theory.

You are welcome to download the source files from my GitHub: https://github.com/Berry-Wen/statistics-note-system

Background

Let $(\Omega,\mathcal{A},P)$ be a probability space.
We want to define the expectation, or what is equivalent, the "integral", of a general r.v. $X$.
We have of course already done this for r.v.s on a countable space $\Omega$.
The general case (for arbitrary $\Omega$) is more delicate.

Definition 9.1

  1. A r.v. $X$ is called simple if it takes on only a finite number of values and hence can be written in the form
     $$X = \sum_{i=1}^{n} a_i I_{A_i} \tag{1}$$
     where $a_i \in \mathbb{R}$ and $A_i \in \mathcal{A}$, $1 \le i \le n$.

If $A_k \in \mathcal{F}$, $k=1,2,\dots,n$, are pairwise disjoint, with $\cup_{k=1}^{n} A_k = \Omega$ and $a_k \in \hat{\mathbb{R}}^{(1)}$, $k=1,2,\dots,n$, then the function
$$X(\omega) = \sum_{k=1}^{n} a_k I_{A_k}(\omega), \quad \omega \in \Omega$$
is called a simple function on $(\Omega,\mathcal{F})$.

  • Such an $X$ is clearly measurable. (Think about why.)

    For any $B \in \mathcal{B}$,
    $$X^{-1}(B) = \{\omega : X(\omega) \in B\} = \cup_{a_k \in B} \{\omega : X(\omega) = a_k\} = \cup_{a_k \in B} A_k \in \mathcal{F},$$
    by the definition of the preimage, the definition of a simple function, and the fact that $A_k \in \mathcal{F}$ for each $k$ implies $\cup_{a_k \in B} A_k \in \mathcal{F}$.

By Theorem 8.1, $X$ is measurable.

  • Conversely, if $X$ is measurable and takes on the values $a_1,\dots,a_n$, it must have the representation (1) with $A_i = \{X = a_i\}$;

  • A simple r.v. has of course many different representations of the form (1).

  2. If $X$ is simple, its expectation (or "integral" with respect to $P$) is the number
     $$E\{X\} = \sum_{i=1}^{n} a_i P(A_i) \tag{2}$$

     • This is also written $\int X(\omega) P(d\omega)$, and even more simply $\int X \, dP$;
     • A little algebra shows that $E\{X\}$ does not depend on the particular representation (1) chosen for $X$ (see the exercise below).

Exercise

Let $(\Omega,\mathcal{A},P)$ be a probability space.
Let $X : \Omega \to \mathbb{R}$ be such that it admits two representations
$$X = \sum_{i=1}^{n} a_i I_{A_i} \quad \text{and} \quad X = \sum_{j=1}^{m} b_j I_{B_j}$$
where $a_i, b_j \in \mathbb{R}$ and $A_i, B_j \in \mathcal{A}$ for all $i, j$. Show that
$$\sum_{i=1}^{n} a_i P(A_i) = \sum_{j=1}^{m} b_j P(B_j)$$


First, prove that $\cup_{i=1}^{n} A_i = \cup_{j=1}^{m} B_j$.

Assume that $a_i \neq 0$, $b_j \neq 0$, and that the sets in each representation are pairwise disjoint: $A_i \cap A_j = \emptyset$ and $B_i \cap B_j = \emptyset$ for $i \neq j$.

For any $\omega \in \cup_{i=1}^{n} A_i$ there exists $i_0 \in \{1,2,\dots,n\}$ such that $X(\omega) = a_{i_0} \neq 0$, so $\omega \in \cup_{j=1}^{m} B_j$ (otherwise $X(\omega) = 0$); therefore $\cup_{i=1}^{n} A_i \subset \cup_{j=1}^{m} B_j$. Symmetrically, for any $\omega \in \cup_{j=1}^{m} B_j$ there exists $j_0 \in \{1,2,\dots,m\}$ such that $X(\omega) = b_{j_0} \neq 0$, so $\omega \in \cup_{i=1}^{n} A_i$; therefore $\cup_{j=1}^{m} B_j \subset \cup_{i=1}^{n} A_i$. Hence $\cup_{i=1}^{n} A_i = \cup_{j=1}^{m} B_j$.

Second, if $A_i B_j \neq \emptyset$, then for $\omega \in A_i B_j$ we have $X(\omega) = a_i = b_j$:

$$\begin{aligned} X &= \sum_{i=1}^{n} a_i I_{A_i} = \sum_{i=1}^{n} a_i I_{A_i \cap (\cup_{i=1}^{n} A_i)} = \sum_{i=1}^{n} a_i I_{A_i \cap (\cup_{j=1}^{m} B_j)} = \sum_{i=1}^{n} a_i I_{\cup_{j=1}^{m} A_i B_j} = \sum_{i=1}^{n} \sum_{j=1}^{m} a_i I_{A_i B_j}, \\ X &= \sum_{j=1}^{m} b_j I_{B_j} = \sum_{j=1}^{m} b_j I_{B_j \cap (\cup_{j=1}^{m} B_j)} = \sum_{j=1}^{m} b_j I_{B_j \cap (\cup_{i=1}^{n} A_i)} = \sum_{j=1}^{m} b_j I_{\cup_{i=1}^{n} A_i B_j} = \sum_{i=1}^{n} \sum_{j=1}^{m} b_j I_{A_i B_j}. \end{aligned}$$
Comparing the two double sums: for $\omega \in A_i B_j \neq \emptyset$, $X(\omega) = a_i = b_j$.

If $A_i B_j = \emptyset$, then $a_i P(A_i B_j) = b_j P(A_i B_j) = 0$, so these terms do not affect the computation.

Finally, prove that $\sum_{i=1}^{n} a_i P(A_i) = \sum_{j=1}^{m} b_j P(B_j)$.

$$\begin{aligned} \sum_{i=1}^{n} a_i P(A_i) &= \sum_{i=1}^{n} a_i P(A_i \cap (\cup_{i=1}^{n} A_i)) = \sum_{i=1}^{n} a_i P(A_i \cap (\cup_{j=1}^{m} B_j)) = \sum_{i=1}^{n} a_i P(\cup_{j=1}^{m} A_i B_j) = \sum_{i=1}^{n} \sum_{j=1}^{m} a_i P(A_i B_j), \\ \sum_{j=1}^{m} b_j P(B_j) &= \sum_{j=1}^{m} b_j P(B_j \cap (\cup_{j=1}^{m} B_j)) = \sum_{j=1}^{m} b_j P(B_j \cap (\cup_{i=1}^{n} A_i)) = \sum_{j=1}^{m} b_j P(\cup_{i=1}^{n} A_i B_j) = \sum_{i=1}^{n} \sum_{j=1}^{m} b_j P(A_i B_j), \end{aligned}$$
where the last step on each line uses that the sets $A_i B_j$ are pairwise disjoint. Since $a_i = b_j$ whenever $A_i B_j \neq \emptyset$, the two double sums are equal.
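The argument above can be sanity-checked numerically. Below is a minimal sketch (the sample space, the probability, and both representations are invented for illustration): two different representations of the same simple r.v. on a six-point $\Omega$ with uniform $P$ yield the same expectation.

```python
from fractions import Fraction

# Hypothetical finite sample space with uniform probability: P(A) = |A|/6.
OMEGA = set(range(6))
P = lambda A: Fraction(len(A & OMEGA), 6)

# Two representations of the same simple r.v. X (invented example):
#   X = 2*I_{0,1,2} + 5*I_{3}
#   X = 2*I_{0,1} + 2*I_{2} + 5*I_{3}
rep_a = [(2, {0, 1, 2}), (5, {3})]
rep_b = [(2, {0, 1}), (2, {2}), (5, {3})]

def expectation(rep):
    """E{X} = sum_i a_i * P(A_i), formula (2), for a representation (1)."""
    return sum(a * P(A) for a, A in rep)

# Both representations give E{X} = 11/6.
assert expectation(rep_a) == expectation(rep_b) == Fraction(11, 6)
```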


Remark (cf. the zhihu.com article 测度与概率, §2.3 期望与积分)

  • Let $X, Y$ be two simple r.v.s and $\beta$ a real number. We clearly have

    $$X = \sum_{i=1}^{n} a_i I_{A_i}, \quad Y = \sum_{j=1}^{m} b_j I_{B_j}, \qquad EX = \sum_{i=1}^{n} a_i P(A_i), \quad EY = \sum_{j=1}^{m} b_j P(B_j).$$

    • $E\{\beta X\} = \beta E\{X\}$:

      $$E\{\beta X\} = \sum_{i=1}^{n} \beta a_i P(A_i) = \beta \sum_{i=1}^{n} a_i P(A_i) = \beta E\{X\}$$

    • $E\{X+Y\} = E\{X\} + E\{Y\}$:

      $$\begin{aligned} E\{X+Y\} &= \sum_{i=1}^{n} \sum_{j=1}^{m} (a_i + b_j) P(A_i B_j) \\ &= \sum_{i=1}^{n} \sum_{j=1}^{m} a_i P(A_i B_j) + \sum_{i=1}^{n} \sum_{j=1}^{m} b_j P(A_i B_j) \\ &= \sum_{i=1}^{n} a_i P(A_i) + \sum_{j=1}^{m} b_j P(B_j) \\ &= E\{X\} + E\{Y\} \end{aligned}$$

    • If $X \le Y$, then $E\{X\} \le E\{Y\}$:

      $X \le Y$ implies $a_i \le b_j$ whenever $A_i B_j \neq \emptyset$, so
      $$\begin{aligned} E\{X\} = \sum_{i=1}^{n} a_i P(A_i) &= \sum_{i=1}^{n} \sum_{j=1}^{m} a_i P(A_i B_j) \\ &\le \sum_{i=1}^{n} \sum_{j=1}^{m} b_j P(A_i B_j) \\ &= \sum_{j=1}^{m} b_j P(B_j) \\ &= E\{Y\} \end{aligned}$$

  • Thus, expectation is linear on the vector space of all simple r.v.s.
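The three properties above can likewise be checked on concrete simple r.v.s. The sketch below uses invented values on a four-point $\Omega$ with uniform $P$, representing each simple r.v. pointwise as a dict.

```python
from fractions import Fraction

OMEGA = range(4)
p = Fraction(1, 4)  # uniform probability of each point of Omega

# Invented simple r.v.s, given pointwise; note X <= Y everywhere.
X = {0: 1, 1: 1, 2: 3, 3: 0}
Y = {0: 2, 1: 5, 2: 3, 3: 1}

E = lambda Z: sum(Z[w] * p for w in OMEGA)  # E{Z} on a finite uniform space

beta = Fraction(7, 2)
assert E({w: beta * X[w] for w in OMEGA}) == beta * E(X)  # E{bX} = b E{X}
assert E({w: X[w] + Y[w] for w in OMEGA}) == E(X) + E(Y)  # E{X+Y} = E{X}+E{Y}
assert all(X[w] <= Y[w] for w in OMEGA) and E(X) <= E(Y)  # X <= Y => E{X} <= E{Y}
```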

  • Next, we define expectation for positive r.v.s.

    For $X$ positive:

    • By this, we assume that $X$ may take all values in $[0,\infty]$, including $+\infty$;

    • This innocuous extension is necessary for the coherence of some of our further results.

      Let
      $$E\{X\} = \sup \{E\{Y\} : Y \text{ a simple r.v. with } 0 \le Y \le X\} \tag{3}$$

    • This supremum always exists in $[0,\infty]$.

      Since expectation is a positive operator on the set of simple r.v.'s, it is clear that the definition above for $E\{X\}$ coincides with Definition 9.1 when $X$ is itself simple (Definition 9.1 gives $E\{X\} = \sum_{i=1}^{n} a_i P(A_i)$; think about why the two definitions agree).

Remark

  • Note that $E\{X\} \ge 0$, but we can have $E\{X\} = \infty$ even when $X$ never takes the value $+\infty$.

  • Finally, let $X$ be an arbitrary r.v.

    Let $X^+ = \max(X, 0)$ and $X^- = -\min(X, 0)$.

    Then
    $$X = X^+ - X^-, \qquad |X| = X^+ + X^-,$$
    and $X^+, X^-$ are positive r.v.s.
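The decomposition into positive and negative parts can be sketched pointwise (the sample values below are arbitrary):

```python
# Positive and negative parts, evaluated pointwise at a value x = X(w).
pos = lambda x: max(x, 0)    # X^+
neg = lambda x: -min(x, 0)   # X^-

for x in [-3.5, -1, 0, 2, 7.25]:
    assert pos(x) - neg(x) == x         # X = X^+ - X^-
    assert pos(x) + neg(x) == abs(x)    # |X| = X^+ + X^-
    assert pos(x) >= 0 and neg(x) >= 0  # both parts are positive
```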

Definition 9.2

  • A r.v. $X$ has a finite expectation (is "integrable") if both $E\{X^+\}$ and $E\{X^-\}$ are finite.

    In this case, its expectation is the number
    $$E\{X\} = E\{X^+\} - E\{X^-\} \tag{4}$$

    also written $\int X(\omega) \, dP(\omega)$ or $\int X \, dP$.

  • If $X > 0$ then $X^- = 0$ and $X^+ = X$ and, since obviously $E\{0\} = 0$, this definition coincides with (3).

We write $\mathcal{L}^1$ to denote the set of all integrable r.v.s. (Sometimes we write $\mathcal{L}^1(\Omega,\mathcal{A},P)$ to remove any possible ambiguity.)

  • A r.v. $X$ admits an expectation if $E\{X^+\}$ and $E\{X^-\}$ are not both equal to $+\infty$.

    • Then the expectation of $X$ is still given by (4), with the conventions $+\infty + a = +\infty$ and $-\infty + a = -\infty$ when $a \in \mathbb{R}$.
    • If $X \ge 0$, this definition again coincides with (3).
    • Note that if $X$ admits an expectation, then $E\{X\} \in [-\infty, +\infty]$, and $X$ is integrable if and only if its expectation is finite.

Remark 9.1

When $\Omega$ is finite or countable we thus have two different definitions for the expectation of a r.v. $X$: the one above and the one given in Chapter 5.

In fact these two definitions coincide: it is enough to verify this for a simple r.v. $X$, and in this case the formulas (5.1) and (9.2) are identical:

$$E\{X\} = \sum_{j \in T'} j P(X = j) \tag{5.1}$$

$$E\{X\} = \sum_{i=1}^{n} a_i P(A_i) \tag{9.2}$$

This is left as an exercise, to be discussed in the next class.

Theorem 9.1

  • (a) $\mathcal{L}^1$ is a vector space, and expectation is a linear map on $\mathcal{L}^1$; it is also positive (i.e. $X \ge 0 \Rightarrow E\{X\} \ge 0$).

    If further $0 \le X \le Y$ are two r.v.s and $Y \in \mathcal{L}^1$, then $X \in \mathcal{L}^1$ and $E\{X\} \le E\{Y\}$.

  • (b) $X \in \mathcal{L}^1$ iff $|X| \in \mathcal{L}^1$, and in this case $|E\{X\}| \le E\{|X|\}$.

    In particular, any bounded r.v. is integrable.

  • (c) If $X = Y$ almost surely (a.s.), then $E\{X\} = E\{Y\}$, where

    $$X = Y \text{ a.s.} \quad \text{if} \quad P(X = Y) = P(\{\omega : X(\omega) = Y(\omega)\}) = 1$$

  • (d) (Monotone convergence theorem):

    If the r.v.s $X_n$ are positive and increasing a.s. to $X$, then $\lim_{n \to \infty} E\{X_n\} = E\{X\}$ (even if $E\{X\} = \infty$).

  • (e) (Fatou's lemma):

    If the r.v.s $X_n$ satisfy $X_n \ge Y$ a.s. ($Y \in \mathcal{L}^1$), all $n$, we have
    $$E\left\{\liminf_{n \to \infty} X_n\right\} \le \liminf_{n \to \infty} E\{X_n\}$$
    In particular, this holds whenever $X_n \ge 0$ a.s. for all $n$ (take $Y = 0$).

  • (f) (Lebesgue's dominated convergence theorem):

    If the r.v.s $X_n$ converge a.s. to $X$ and if $|X_n| \le Y$ a.s. for all $n$, where $Y \in \mathcal{L}^1$,

    then $X_n \in \mathcal{L}^1$, $X \in \mathcal{L}^1$, and $E\{X_n\} \to E\{X\}$.

Statement

  • The a.s. equality between r.v.s is clearly an equivalence relation, and two equivalent (i.e. almost surely equal) r.v.s have the same expectation:

    Thus:

    one can define a space $L^1$ by considering "$\mathcal{L}^1$ modulo this equivalence relation".

  • In other words, an element of $L^1$ is an equivalence class, that is, a collection of all r.v.s in $\mathcal{L}^1$ which are pairwise a.s. equal.

  • In view of (c) above, one may speak of the "expectation" of an equivalence class (which is the expectation of any one element belonging to this class).

  • Since further the addition of r.v.s, or the product of a r.v. by a constant, preserves a.s. equality, the set $L^1$ is also a vector space.

    Therefore, we commit the (innocuous) abuse of identifying a r.v. with its equivalence class, and commonly write $X \in L^1$ instead of $X \in \mathcal{L}^1$.

  • If $1 \le p < \infty$, we define $\mathcal{L}^p$ to be the space of r.v.s such that $|X|^p \in \mathcal{L}^1$;

    $L^p$ is defined analogously to $L^1$: that is, $L^p$ is $\mathcal{L}^p$ modulo the equivalence relation "almost surely".

  • Put more simply, two elements of $\mathcal{L}^p$ that are a.s. equal are considered to be representatives of one element of $L^p$.

Two auxiliary results.

Result 1

  For every positive r.v. $X$ there exists a sequence $\{X_n\}_{n \ge 1}$ of positive simple r.v.s which increases toward $X$ as $n$ increases to infinity.

  An example of such a sequence is given by (think about why the values are taken on the dyadic grid $k/2^n$):
$$X_n(\omega) = \begin{cases} \dfrac{k}{2^n} & \text{if } \dfrac{k}{2^n} \le X(\omega) < \dfrac{k+1}{2^n} \text{ and } 0 \le k \le n 2^n - 1 \\ n & \text{if } X(\omega) \ge n \end{cases}$$
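A minimal sketch of this construction (the sample values of $X(\omega)$ are invented): $X_n(\omega)$ rounds $X(\omega)$ down to the dyadic grid $k/2^n$ and caps it at $n$, which makes each $X_n$ simple, increasing in $n$, and pointwise convergent to $X$.

```python
import math

def dyadic_approx(x, n):
    """X_n(w) from Result 1: floor X(w) to the grid k/2^n, capped at n."""
    if x >= n:
        return float(n)
    return math.floor(x * 2**n) / 2**n

# Pointwise: X_n(w) increases to X(w) for a few sample values of X(w).
for x in [0.0, 0.3, math.pi, 100.0]:
    vals = [dyadic_approx(x, n) for n in range(1, 30)]
    assert all(a <= b for a, b in zip(vals, vals[1:]))  # increasing in n
    assert all(v <= x for v in vals)                    # X_n <= X
    assert abs(vals[-1] - min(x, 29)) < 1e-6            # X_n(w) -> min(X(w), cap)
```

Refining the grid from $2^{-n}$ to $2^{-(n+1)}$ can only raise the rounded-down value, and raising the cap from $n$ to $n+1$ can only raise the truncated value, which is exactly why the sequence is monotone.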

Result 2

If $X$ is a positive r.v., and if $\{X_n\}_{n \ge 1}$ is any sequence of positive simple r.v.s increasing to $X$, then $E\{X_n\}$ increases to $E\{X\}$.
