Algorithms - Lecture 12 - Randomized Algorithms
- 1 Linearity of Expectation
- 1.1 Expectation $E(X)=\sum_x x\,P(x)$
- 1.2 $E(X+Y)=E(X)+E(Y)$ for all $X, Y$
- 1.3 Ex 1, Expected waiting time for heads
- 1.4 Ex 2, A coupon collector process
- 2 Markov and Chebyshev inequalities
- 2.1 Markov's inequality (nonnegative random variable $X$)
- 2.2 $E(XY)=E(X)E(Y)$ if $X$ and $Y$ are ***independent***
- 2.3 $Var(X)=E(X^2)-E(X)^2$
- 2.4 Variance of a sum of independent random variables: $Var(X+Y)=Var(X)+Var(Y)$
- 2.5 Chebyshev's inequality (any random variable $X$)
- 3 Chernoff bounds
- 3.1 Binomial distribution
- 3.2 Lower bound
- 3.3 Upper bound
- 3.4 When $p=\frac{1}{2}$
- 4 Balls and bins
- 4.1 Balls and bins problems
- 4.2 Upper bounding the load of a specific bin
- 4.3 Upper bounding the maximum bin load
- 4.4 Lower bounding the load of a specific bin
- 4.5 Lower bounding the maximum bin load
1 Linearity of Expectation
1.1 Expectation $E(X)=\sum_x x\,P(x)$
1.2 $E(X+Y)=E(X)+E(Y)$ for all $X, Y$
1.3 Ex 1, Expected waiting time for heads
- Let the random variable $X$ denote the number of flips of a $p$-biased coin until we get the first "heads".
- Define the indicator variable $X_i=1$ if the number of flips is at least $i$, and $0$ otherwise. Then $E(X_i)=(1-p)^{i-1}$; the $X_i$'s are dependent, but linearity of expectation does not require independence.
- $X=\sum_i X_i$
- $E(X)=\sum_i E(X_i)=\sum_i (1-p)^{i-1}=\frac{1}{p}$
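A quick simulation makes the $\frac{1}{p}$ answer concrete. This is a minimal sketch; the helper names below are illustrative, not from the lecture:

```python
import random

def flips_until_heads(p, rng):
    """Flip a p-biased coin until the first heads; return the number of flips."""
    flips = 1
    while rng.random() >= p:  # with probability (1 - p) we see tails and keep flipping
        flips += 1
    return flips

def mean_waiting_time(p, trials=100_000, seed=0):
    """Empirical estimate of E(X) averaged over many independent trials."""
    rng = random.Random(seed)
    return sum(flips_until_heads(p, rng) for _ in range(trials)) / trials

# E(X) = 1/p, so for p = 0.25 the empirical mean should be close to 4.
print(mean_waiting_time(0.25))
```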
1.4 Ex 2, A coupon collector process
- Suppose we repeatedly draw a uniformly random number from $\{1,\dots,n\}$ until we have drawn each number at least once.
- What is the expected number of draws?
- Partition the draws into $n$ phases:
- Phase $i$ begins once $i-1$ distinct integers have been drawn.
- Let $X_i$ denote the number of draws in Phase $i$.
- $X=\sum_{1\leq i\leq n}X_i$; each draw in Phase $i$ is like a flip of a $\frac{n-(i-1)}{n}$-biased coin, so $E(X_i)=\frac{n}{n-(i-1)}$.
- $E(X)=\sum_{1\leq i\leq n}E(X_i)=\sum_{1\leq i\leq n}\frac{n}{n-(i-1)}=n\sum_{1\leq i\leq n}\frac{1}{i}=nH_n\approx n\ln n$
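The $nH_n$ answer can be checked empirically. A short sketch (function names are illustrative):

```python
import random

def coupon_collector_draws(n, rng):
    """Draw uniformly from {0,...,n-1} until every value has appeared; return the draw count."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

def mean_draws(n, trials=2000, seed=0):
    """Empirical estimate of the expected number of draws."""
    rng = random.Random(seed)
    return sum(coupon_collector_draws(n, rng) for _ in range(trials)) / trials

# E(X) = n * H_n; for n = 100, H_100 ≈ 5.19, so roughly 519 draws on average.
print(mean_draws(100))
```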
2 Markov and Chebyshev inequalities
2.1 Markov's inequality (nonnegative random variable $X$)
For any nonnegative random variable $X$ and any $a>0$:
$\Pr(X\geq a)\leq \frac{E(X)}{a}$
2.2 $E(XY)=E(X)E(Y)$ if $X$ and $Y$ are independent
2.3 $Var(X)=E(X^2)-E(X)^2$
Ex: variance of a $p$-biased coin flip, $Var(X)=p-p^2=p(1-p)$
2.4 Variance of a sum of independent random variables: $Var(X+Y)=Var(X)+Var(Y)$
Ex: variance of a series of $n$ independent $p$-biased coin flips, $Var(\sum X_i)=np(1-p)$
2.5 Chebyshev's inequality (any random variable $X$)
$\Pr(|X-E(X)|\geq a)\leq \frac{Var(X)}{a^2}$
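To see how the two inequalities compare, take $X$ = number of heads in $n=100$ fair flips ($E(X)=50$, $Var(X)=25$) and the tail event $X\geq 75$. The sketch below (illustrative, not from the lecture) computes both bounds and an empirical tail estimate:

```python
import random

# X = number of heads in n fair coin flips: E(X) = n/2, Var(X) = n/4.
n, a = 100, 75
mean, var = n / 2, n / 4

markov = mean / a                  # Markov:    Pr(X >= 75) <= 50/75 = 2/3
chebyshev = var / (a - mean) ** 2  # Chebyshev: Pr(|X - 50| >= 25) <= 25/625 = 0.04

# Empirical tail probability: the true value is far below both bounds.
rng = random.Random(0)
trials = 20_000
hits = sum(
    sum(rng.random() < 0.5 for _ in range(n)) >= a
    for _ in range(trials)
)
empirical = hits / trials

print(markov, chebyshev, empirical)
```

Chebyshev is much tighter than Markov here because it uses the variance; the Chernoff bounds in the next section are tighter still.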
3 Chernoff bounds
3.1 Binomial distribution
For $1\leq i\leq n$, let $p_i\in[0,1]$ and let $X_i$ be a 0-1 random variable with $\Pr(X_i=1)=p_i$. Denote $p=\frac{1}{n}\sum_{1\leq i\leq n}p_i$ and $X=\sum X_i$; thus $E(X)=np$.
3.2 Lower bound
$\Pr(X\leq (1-\delta)np)\leq e^{-\delta^2 np/2}$
for $\delta\in[0,1)$
3.3 Upper bound
$\Pr(X\geq (1+\delta)np)\leq e^{-\delta^2 np/3}$
for $\delta\in[0,1)$
3.4 When $p=\frac{1}{2}$
$\Pr(X\leq (1-\delta)n/2)\leq e^{-\delta^2 n/2}$
$\Pr(X\geq (1+\delta)n/2)\leq e^{-\delta^2 n/2}$
for $\delta\in[0,1)$
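A simulation can confirm that the $p=\frac{1}{2}$ upper-tail bound holds (and is far from tight) for moderate $n$. An illustrative sketch:

```python
import math
import random

def chernoff_check(n=200, delta=0.2, trials=20_000, seed=0):
    """Compare the empirical upper-tail probability of n fair coin flips
    against the bound Pr(X >= (1 + delta) n/2) <= exp(-delta^2 n / 2)."""
    rng = random.Random(seed)
    threshold = (1 + delta) * n / 2
    hits = sum(
        sum(rng.random() < 0.5 for _ in range(n)) >= threshold
        for _ in range(trials)
    )
    empirical = hits / trials
    bound = math.exp(-delta * delta * n / 2)
    return empirical, bound

emp, bnd = chernoff_check()
print(emp, bnd)  # the empirical tail stays below the Chernoff bound
```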
4 Balls and bins
4.1 Balls and bins problems
Suppose we throw a series of balls independently and uniformly at random into $n$ bins.
4.2 Upper bounding the load of a specific bin
Assume $n$ balls are thrown into $n$ bins uniformly and independently. Let $X_i$ be the indicator variable that equals $1$ when ball $i$ lands in bin 1, and let $X=\sum_i X_i$ denote the load of bin 1; then $X\sim B(n,1/n)$.
$\Pr(X\geq c'\frac{\ln n}{\ln\ln n})\leq n^{-c}$
4.3 Upper bounding maximum bin load
Let $E_i$ denote the event that the load of bin $i$ exceeds $c'f(n)$, where $f(n)=\frac{\ln n}{\ln\ln n}$. We have $\Pr(E_i)\leq n^{-c}$, so by the union bound the probability that any bad event occurs is $\leq n\cdot n^{-c}=n^{1-c}$; hence the maximum load of any bin is $O(f(n))$ with high probability.
4.4 Lower bounding the load of a specific bin
$\Pr(X\geq \varepsilon'f(n))\geq n^{-\varepsilon}$
4.5 Lower bounding the maximum bin load
The probability that every bin receives fewer than $\varepsilon'f(n)$ balls is roughly $(1-n^{-\varepsilon})^n\leq e^{-n^{1-\varepsilon}}$ (treating the bin loads as approximately independent), which tends to $0$; so the maximum load is $\Omega(f(n))$ with high probability.
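Together, the upper and lower bounds say the maximum load is $\Theta(\frac{\ln n}{\ln\ln n})$ with high probability, which a simulation makes visible (an illustrative sketch):

```python
import math
import random

def max_bin_load(n, rng):
    """Throw n balls into n bins uniformly at random; return the maximum load."""
    loads = [0] * n
    for _ in range(n):
        loads[rng.randrange(n)] += 1
    return max(loads)

n = 10_000
rng = random.Random(0)
f = math.log(n) / math.log(math.log(n))  # f(n) = ln n / ln ln n, about 4.15 here
samples = [max_bin_load(n, rng) for _ in range(20)]
print(f, samples)  # max loads cluster around a small multiple of f(n)
```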