Notes for Statistics

Sharing my own notes for business statistics.
by Feiran Jia

Lecture 1 Introduction

Variables

  • Quantitative
  • Categorical
    • Ordinal: categories have a natural order
    • Nominal: categories are labels with no inherent order (any numbers assigned are just codes)

Data sets

  • Cross-section
  • Time series

Sampling Error or Noise

Sampling error is a purely random difference between a sample and the population of interest that arises because the sample is a random subset of the population.

Lecture 2 Displaying and Describing Quantitative Data

Histogram

  1. Frequency histogram: bar height = frequency (count) in each bin
  2. Relative frequency histogram: bar height = frequency / total number of observations; the bar heights sum to 1
  3. Density histogram: bar height = fraction / bin width; the total area under the bars is 1
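
A minimal numpy sketch of the three bar-height conventions (the sample values and bin edges are assumed for illustration); `np.histogram` with `density=True` returns the density heights:

```python
import numpy as np

data = np.array([102, 104, 105, 107, 108, 109, 110, 112, 115, 118, 118])  # assumed sample
bins = np.array([100, 105, 110, 115, 120])                                 # assumed bin edges

counts, edges = np.histogram(data, bins=bins)             # frequency histogram heights
rel_freq = counts / counts.sum()                          # relative frequencies (sum to 1)
density, _ = np.histogram(data, bins=bins, density=True)  # density heights (area sums to 1)

print(counts, rel_freq, density)
```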

Central Tendency (how the data cluster around a center)

Mean

|                        | Population | Sample |
|------------------------|------------|--------|
| Number of observations | $N$        | $n$    |
| Mean                   | $\mu = \frac{\sum_{i = 1}^N y_i}{N}$ | $\bar y = \frac{\sum_{i = 1}^n y_i}{n}$ |

Median

  • Order observations from smallest (in value) to the largest
  • Find the middle - that would be the median of your data

Mode

  • Observation that occurs most often
  • Not necessarily unique (unimodal, bimodal)

Spread (how dispersed the data are)

Range is the absolute difference between the smallest and the largest value in the data.

Interquartile Range (IQR)

  • Sort your data in ascending order.
  • Divide your data into two equal groups at the median.
  • Find the median of the first, “low” group. This is called Q1, or first quartile.
  • The median of the second, “high” group is the third quartile, Q3.
  • The interquartile range (IQR) is the difference between Q3 and Q1
| Index | Value | Quartile |
|-------|-------|----------|
| 1     | 102   |          |
| 2     | 104   |          |
| 3     | 105   | Q1       |
| 4     | 107   |          |
| 5     | 108   |          |
| 6     | 109   | Q2 (median) |
| 7     | 110   |          |
| 8     | 112   |          |
| 9     | 115   | Q3       |
| 10    | 118   |          |
| 11    | 118   |          |
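
A quick numpy check of the example above; note that `np.percentile`'s default linear interpolation may give slightly different quartiles than the split-at-the-median method described in the steps:

```python
import numpy as np

values = np.array([102, 104, 105, 107, 108, 109, 110, 112, 115, 118, 118])

q1, q2, q3 = np.percentile(values, [25, 50, 75])
iqr = q3 - q1
data_range = values.max() - values.min()

print(f"Q1={q1}, median={q2}, Q3={q3}, IQR={iqr}, range={data_range}")
```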

Percentiles

  • median: 50th percentile
  • first quartile Q1: 25th percentile
  • third quartile Q3: 75th percentile

Variance

|                        | Population | Sample |
|------------------------|------------|--------|
| Number of observations | $N$        | $n$    |
| Variance               | $\sigma^2 = \frac{\sum_{i=1}^{N}(y_i - \mu)^2}{N}$ | $s^2 = \frac{\sum_{i=1}^{n}(y_i - \bar y)^2}{n-1}$ |

Total Sum of Squares = TSS = $\sum_{i=1}^{n}(y_i - \bar y)^2$

degrees of freedom: $\nu = n - 1$

Standard Deviation

population standard deviation: $\sigma = \sqrt{\frac{\sum_{i=1}^{N}(y_i - \mu)^2}{N}}$

sample standard deviation: $s = \sqrt{\frac{\sum_{i=1}^{n}(y_i - \bar y)^2}{n-1}}$
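
A small numpy sketch contrasting the population and sample formulas; the `ddof` argument controls the denominator (N vs. n-1):

```python
import numpy as np

y = np.array([102, 104, 105, 107, 108, 109, 110, 112, 115, 118, 118])

pop_var = y.var(ddof=0)        # divide by N (population formula)
sample_var = y.var(ddof=1)     # divide by n-1 (sample formula)
pop_sd, sample_sd = np.sqrt(pop_var), np.sqrt(sample_var)

print(pop_var, sample_var, pop_sd, sample_sd)
```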

Comparison/Standardization

Coefficient of Variation (CV)

  • CV = Standard deviation / Mean
  • Measures how much variability there is in the data relative to the mean: when the mean level of a variable is higher, its measured dispersion tends to be larger, and vice versa, so the CV makes variables comparable. A rule of thumb from these notes: if the CV exceeds 15%, the data may be abnormal and should be examined (possibly excluded).

z-score

  • $z = \frac{y-\bar y}{s}$
  • Variable z has a mean of 0 and standard deviation equal to 1
  • Value of z-score indicates how many standard deviations a value is from the mean
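
A sketch verifying that z-scores have mean 0 and standard deviation 1 (standardizing with the sample standard deviation, ddof=1, as in the formula above), plus the CV:

```python
import numpy as np

y = np.array([102, 104, 105, 107, 108, 109, 110, 112, 115, 118, 118])

z = (y - y.mean()) / y.std(ddof=1)
cv = y.std(ddof=1) / y.mean()          # coefficient of variation

print(z.mean(), z.std(ddof=1), cv)     # ~0.0, 1.0, CV
```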

Lecture 3&4 Linear Relationship: Association, Correlation and Linear Regression

Covariance

|                        | Population | Sample |
|------------------------|------------|--------|
| Number of observations | $N$        | $n$    |
| Covariance             | $\sigma_{xy} = \frac{\sum_{i = 1}^N(x_i-\mu_x)(y_i-\mu_y)}{N}$ | $s_{xy} = \frac{\sum_{i = 1}^n(x_i-\bar x)(y_i-\bar y)}{n-1}$ |

Covariance tells whether the two variables tend to move in the same direction as they vary.

Correlation (correlation coefficient)

To measure precisely how similarly two variables move, we remove the effect of each variable's scale of variation from the covariance.

|                     | Population | Sample |
|---------------------|------------|--------|
| Covariance          | $\sigma_{xy}$ | $s_{xy}$ |
| Standard deviations | $\sigma_x, \sigma_y$ | $s_x, s_y$ |
| Correlation         | $\rho = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$ | $r = \frac{s_{xy}}{s_x s_y}$ |
  • The coefficient of correlation is always between -1 and 1.

  • -1: perfect negative linear relationship

  • 1: perfect positive linear relationship

  • 0: no linear relationship
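
A numpy sketch (with made-up x and y) computing the sample covariance and correlation; `np.cov`/`np.corrcoef` return 2x2 matrices whose off-diagonal entry is the pairwise value:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

s_xy = np.cov(x, y, ddof=1)[0, 1]       # sample covariance
r = np.corrcoef(x, y)[0, 1]             # correlation coefficient

# same value built from the definition r = s_xy / (s_x * s_y)
r_manual = s_xy / (x.std(ddof=1) * y.std(ddof=1))

print(s_xy, r, r_manual)
```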

The Linear Model

$y = b_0 + b_1 x$

$b_0$: y-intercept

$b_1$: slope of the line

$e = y - \hat{y}$, where $y$ is the observed value and $\hat{y}$ is the predicted value

OLS = Ordinary Least Squares (derivation)

Minimize the sum of squared residuals: $\min\ \sum_{i=1}^n (y_i - \hat{y}_i)^2$, or equivalently $\min\ \sum_{i=1}^n (y_i - b_0 - b_1 x_i)^2$

solution:

$b_1 = r \frac{s_y}{s_x}$

$b_0 = \bar y - b_1 \bar x$

To compute these we need $r$, $s_x$, $s_y$, $\bar x$, and $\bar y$.

Proof:

$b_1 = \frac{\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{\sum_{i=1}^n(x_i-\bar x)^2} = \frac{\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)/(n-1)}{\sum_{i=1}^n(x_i-\bar x)^2/(n-1)} = \frac{s_{xy}}{s_x^2}$

$s_{xy} = r_{xy}\, s_x s_y$

$b_1 = r \frac{s_y}{s_x}$

The regression line always passes through the point $(\bar x, \bar y)$.
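
A sketch of the OLS formulas (using the made-up x, y from the previous sketch); the result should match `np.polyfit(x, y, 1)`:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

r = np.corrcoef(x, y)[0, 1]
b1 = r * y.std(ddof=1) / x.std(ddof=1)   # slope: b1 = r * s_y / s_x
b0 = y.mean() - b1 * x.mean()            # intercept: b0 = ybar - b1 * xbar

slope, intercept = np.polyfit(x, y, 1)   # cross-check with numpy's least squares
print(b1, b0, slope, intercept)
```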

Math Box Cont’d

$SST = SSE + SSR$

$SST = \sum (y_i - \bar y)^2$

$SSE = \sum (y_i - \hat{y}_i)^2$

$SSR = \sum (\hat y_i - \bar y)^2$

$\frac{SST}{SST} = \frac{SSE}{SST} + \frac{SSR}{SST}$

$1 = \frac{SSE}{SST} + R^2$

$R^2 = 1 - \frac{SSE}{SST}$
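
A short check of the decomposition, using the fitted line from the earlier sketch:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

sst = np.sum((y - y.mean()) ** 2)
sse = np.sum((y - y_hat) ** 2)
ssr = np.sum((y_hat - y.mean()) ** 2)

r2 = 1 - sse / sst
print(np.isclose(sst, sse + ssr), r2, np.corrcoef(x, y)[0, 1] ** 2)  # SST = SSE + SSR, R^2 = r^2
```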


Standardized Regression = Regression to the Mean


“standardized” regression

$b_1 = r \frac{s_y}{s_x}$, but $s_{z_y} = 1$ and $s_{z_x} = 1$, so $b_1 = r$

“Standardized” Regression Line

$\hat{z}_y = r z_x$

$R^2 = r^2 \Rightarrow 0\% \leq R^2 \leq 100\%$

$R^2$ is usually expressed as a percentage.

100%: the regression explains all of the variation in $y$ (SSE = 0).

0%: the regression explains none of the variation in $y$ (SSE = SST).

Lecture 6 Introduction to Probability

Definitions

Random experiment: the process of observing the outcome of a chance event

Sample space S: the collection of all possible outcomes

Event: a collection of particular outcomes

Probability

  1. Subjective probability: an individual's assessment of the likelihood of a certain event.

  2. Theoretical (a priori) probability: derived by reasoning about the nature of the event itself.

    $P(A) = \frac{\#\ of\ outcomes\ in\ A}{total\ \#\ of\ outcomes}$

  3. Empirical probability: the relative frequency of the event's occurrence in the long run.

  4. Joint probability: $P(A \cap B)$

  5. Marginal probability: $P(A)$

  6. Conditional probability: $P(A|B)$

    • The probability of event A given that event B has already occurred.

Probability Rules

  1. $0 \leq P(A) \leq 1$
  2. $P(S) = 1$
  3. $P(A) = 1 - P(A^c)$

Event

  • independent: $P(A \cap B) = P(A)\cdot P(B)$
  • disjoint: $P(A \cup B) = P(A) + P(B)$
  • in general: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$

Random Variable

Discrete RV: takes a countable (finite or countably infinite) number of values.

Continuous RV: takes an uncountably infinite number of values.

Expected Value: $\mu = E(X) = \sum x \cdot p(x)$

Variance of a random variable X: $\sigma^2 = E[(X-\mu)^2] = \sum (x-\mu)^2 \cdot p(x)$

$E(a) = a$

$E(a\cdot X) = aE(X)$

$E(X+a) = E(X) + a$

$E[X_1+X_2+\dots+X_n] = E[X_1]+E[X_2]+\dots+E[X_n]$

$V(a) = 0$

$V(X + a) = V(X)$

$V(a\cdot X) = a^2 V(X)$

$V(X) = E(X^2)-[E(X)]^2$
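
A minimal sketch computing the mean and variance of a discrete RV from its pmf (a made-up three-point distribution):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])            # possible values (assumed)
p = np.array([0.2, 0.5, 0.3])            # their probabilities (sum to 1)

mu = np.sum(x * p)                       # E(X) = sum of x * p(x)
var = np.sum((x - mu) ** 2 * p)          # V(X) = sum of (x - mu)^2 * p(x)
var_alt = np.sum(x ** 2 * p) - mu ** 2   # V(X) = E(X^2) - [E(X)]^2

print(mu, var, var_alt)
```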

Z-score

  • Standardizes a random variable.

  • Applies a linear transformation such that the resulting variable has mean 0 and standard deviation equal to 1.

  • The Z-score tells us how many standard deviations a data point is from the mean.

    $Z = \frac{X-E[X]}{\sqrt{V[X]}} = \frac{X-\mu_x}{\sigma_x}$

Covariance

$\sigma_{xy} = E[(X-\mu_x)(Y-\mu_y)] = \sum_x \sum_y (x-\mu_x)(y-\mu_y)\, p(x,y)$

Correlation

$\rho = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$

$COV(X, c) = 0$

$COV(X + a, Y + b) = COV(X, Y)$

$COV(aX, bY) = ab \cdot COV(X, Y)$

$V(c) = 0$

$V(X + c) = V(X)$

$V(cX) = c^2 V(X)$

$V(aX + bY) = a^2 V(X) + b^2 V(Y) + 2ab \cdot COV(X, Y)$

$V(aX + bY) = a^2 V(X) + b^2 V(Y) + 2ab \cdot r(X,Y) \cdot SD(X) \cdot SD(Y)$
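
A simulation sketch (made-up correlated X and Y) checking the formula for the variance of a linear combination:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0, 2, n)
y = 0.5 * x + rng.normal(0, 1, n)       # y is correlated with x by construction

a, b = 3.0, -2.0
lhs = np.var(a * x + b * y, ddof=1)     # direct variance of aX + bY
rhs = a**2 * np.var(x, ddof=1) + b**2 * np.var(y, ddof=1) + 2 * a * b * np.cov(x, y)[0, 1]

print(lhs, rhs)                         # should agree up to simulation noise
```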

Lecture 7 Discrete Probability Distributions: Bernoulli and Binomial

Bernoulli Experiment

  1. two possible outcomes, not necessarily equally likely, that are labeled generically “success” and “failure”.
  2. If the uncertain situation is repeated, the probabilities of success and failure are unchanged. The probability of success is not affected by the successes or failures that have already been experienced.

| x           | P(X = x) |
|-------------|----------|
| 0 (failure) | $1-p$    |
| 1 (success) | $p$      |



$E[X] = 1 \cdot p + 0 \cdot (1 - p) = p$

$V[X] = (1-p)^2 \cdot p + (0-p)^2 \cdot (1-p) = p(1-p)$


Binomial Probability Distribution

$P(x\ \text{successes}) = P(x) = C_x^n\, p^x(1 - p)^{n-x}$

binomial coefficient, read "n choose x": $C_x^n = \frac{n!}{x!(n-x)!}$

A Binomial RV is the sum of n independent Bernoulli RVs:

$E[Y] = E\left[\sum_{i=1}^n X_i\right] = \sum_{i=1}^n E[X_i] = np$

$V[Y] = V\left[\sum_{i=1}^n X_i\right] = \sum_{i=1}^n V[X_i] = np(1 - p)$

Cumulative Probability

  • $P(X \leq x)$

  • $P(X \geq x) = 1 - P(X \leq x-1)$

  • $P(X = x) = P(X \leq x) - P(X \leq x-1) = p(x)$

  • $P(x_1 \leq X \leq x_2) = P(X \leq x_2) - P(X \leq x_1 - 1)$
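
A scipy sketch (example values n = 10, p = 0.3 are assumed) for the binomial pmf, its mean/variance, and the cumulative identities above:

```python
from scipy import stats

n, p = 10, 0.3                 # example parameters (assumed)
X = stats.binom(n, p)

print(X.pmf(4))                # P(X = 4)
print(X.mean(), X.var())       # np and np(1-p)
print(1 - X.cdf(3))            # P(X >= 4) = 1 - P(X <= 3)
print(X.cdf(6) - X.cdf(1))     # P(2 <= X <= 6)
```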

Lecture 8 Uniform and Triangle / Normal

Probability Density Function

  • For continuous RV, area under the curve f(x) is the probability of a range of values.
  • Probability density function (pdf) satisfies two conditions:
    • $f(x) \geq 0$ for all possible values of X.
    • The total area under the curve is 1: $\int f(x)\,dx = 1$.

Uniform Distribution

$X \sim U[a,b]$

PDF: $f(x) = \frac{1}{b-a}$ for $a \leq x \leq b$; $a$ and $b$ are the parameters, and $[a,b]$ is the bounded support.

$\mu = \frac{a+b}{2}$

$\sigma^2 = \frac{(b-a)^2}{12}$

Triangle Distribution

We can create triangle distribution by adding up two independent and identically distributed uniform random variables.

  • $X_1 \sim U[a, b]$
  • $X_2 \sim U[a, b]$
  • $X_1$ and $X_2$ are independent
  • $T = X_1 + X_2$
  • $T \sim T[2a, 2b]$

$\mu_{X_1+X_2} = E[X_1+X_2] = E[X_1] + E[X_2] = a+b$

$\sigma^2_{X_1+X_2} = V[X_1+X_2] = V[X_1] + V[X_2] = \frac{(b-a)^2}{6}$
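
A simulation sketch (assuming a = 0, b = 1) confirming the mean and variance of the sum of two independent uniforms:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.0, 1.0                       # assumed parameters
n = 500_000

t = rng.uniform(a, b, n) + rng.uniform(a, b, n)   # T = X1 + X2

print(t.mean(), a + b)                # mean should be close to a + b
print(t.var(), (b - a) ** 2 / 6)      # variance should be close to (b-a)^2 / 6
```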

Normal Distribution

$X \sim N(\mu,\sigma^2)$

Normal Density Function

$f(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$


Standard Normal: Z ∼ N(0,1)

$z = \frac{X-\mu}{\sigma} = -\frac{\mu}{\sigma}+\frac{1}{\sigma}X$

  • about 68.3% within 1 s.d. of mean

  • about 95.4% within 2 s.d. of mean

  • about 99.7% within 3 s.d. of mean

Rule of Thumb for Binomial Distribution

To determine whether the Normal distribution is a good approximation for the Binomial, check that the entire interval

$np \pm 3\sqrt{np(1-p)}$

lies between 0 and n.
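
A tiny sketch of this check (the example n and p values are assumed):

```python
import math

def normal_approx_ok(n: int, p: float) -> bool:
    """Check that np +/- 3*sqrt(np(1-p)) lies entirely within [0, n]."""
    half_width = 3 * math.sqrt(n * p * (1 - p))
    return n * p - half_width >= 0 and n * p + half_width <= n

print(normal_approx_ok(100, 0.5))   # True: interval [35, 65] fits in [0, 100]
print(normal_approx_ok(10, 0.05))   # False: np - 3*sd falls below 0
```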

Lecture 9 Sampling Distributions of Sample Statistics: Sample Mean

  1. List every sample with n observations from the population of interest.

  2. Find the probability of obtaining each sample (use probability rules).

  3. Calculate the sample statistics for each sample.

  4. Link values in 3 with probabilities in 2 (use probability rules).


Population Distribution vs Sampling Distribution

Can sampling error explain the discrepancy between the population mean and the sample mean? What is the chance that the sample mean is equal to 2? In other words, is it statistically plausible to observe a value of the sample mean equal to 2 simply by chance?

Non-sampling errors (HW: identify non-sampling errors): for example, the population parameters are not what the professor claimed.

Feasibility of Analytical Method

  • With sample size n = 3, the number of all possible samples is $3^3$. What if we would like to increase the sample size to 20? The number of samples becomes $3^{20}$.

What if the number of possible values in the population is greater than 3?

With x possible values and sample size 3, the number of samples is $x^3$.

Sampling Distribution of Sample Mean

Population

$\mu_x$, $\sigma_x^2$

Sample mean: $\bar X = \frac{X_1+X_2+\dots+X_n}{n}$

$E[\bar X] = E\left[\frac{X_1+X_2+\dots+X_n}{n}\right] = \frac{1}{n}\, n\mu_x = \mu_x$

$V[\bar X] = V\left[\frac{X_1+X_2+\dots+X_n}{n}\right] = \frac{1}{n^2}\, n\sigma_x^2 = \frac{\sigma_x^2}{n}$

$\sigma_{\bar X} = \frac{\sigma_x}{\sqrt{n}}$


Central Limit Theorem (CLT): no matter what the underlying distribution is, the sample mean will tend to a normal distribution.

The sum of n independent, identically distributed random variables approaches a normal distribution as n increases.

$\bar X \sim N\left(\mu_x, \frac{\sigma_x^2}{n}\right)$
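
A simulation sketch of the CLT: sample means drawn from a skewed (exponential) population look approximately normal with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 16, 100_000
pop_mean, pop_sd = 1.0, 1.0             # exponential(1) has mean 1 and sd 1

samples = rng.exponential(scale=1.0, size=(reps, n))
xbar = samples.mean(axis=1)

print(xbar.mean(), pop_mean)            # close to mu
print(xbar.std(), pop_sd / np.sqrt(n))  # close to sigma / sqrt(n)
```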

Notation

$P(\bar X \mid \mu_x = 6, \sigma_x = 1.5, n = 16)$: first compute $\sigma_{\bar X} = \sigma_x/\sqrt{n}$, then standardize.

Lecture 10 Estimation: Confidence Interval Estimator for Population Mean

$1 - \alpha = P[-z_{\alpha/2} < Z < z_{\alpha/2}]$

$Z = \frac{\bar X - \mu_{\bar x}}{\sigma_{\bar x}}$

Un-standardize

$1 - \alpha = P\left[-z_{\alpha/2} < \frac{\bar X - \mu_{\bar x}}{\sigma_{\bar x}} < z_{\alpha/2}\right]$

$1 - \alpha = P\left[\mu - z_{\alpha/2} \frac{\sigma}{\sqrt n} < \bar X < \mu + z_{\alpha/2} \frac{\sigma}{\sqrt n}\right]$

Derive Confidence Interval

$1 - \alpha = P\left[\bar X - z_{\alpha/2} \frac{\sigma}{\sqrt n} < \mu < \bar X + z_{\alpha/2} \frac{\sigma}{\sqrt n}\right]$

Interpretation: For a random sample of size n from a population with mean µ and standard deviation σ, there is a 1-α chance that this interval contains µ.

Confidence interval estimator of µ: $\bar X \pm z_{\alpha/2} \frac{\sigma}{\sqrt{n}}$

Confidence level: 1-α

Lower confidence limit (LCL): $\bar X - z_{\alpha/2} \frac{\sigma}{\sqrt{n}}$

Upper confidence limit (UCL): $\bar X + z_{\alpha/2} \frac{\sigma}{\sqrt{n}}$

Margin of sampling error: $z_{\alpha/2} \frac{\sigma}{\sqrt{n}}$
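
A sketch of a 95% confidence interval for µ when σ is known (the sample mean, σ, and n below are example values):

```python
import numpy as np
from scipy import stats

xbar, sigma, n = 6.0, 1.5, 16          # sample mean, known sigma, sample size (assumed)
alpha = 0.05

z = stats.norm.ppf(1 - alpha / 2)      # z_{alpha/2}, about 1.96
margin = z * sigma / np.sqrt(n)

print(xbar - margin, xbar + margin)    # LCL, UCL
```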

Comparing Means of Two Populations

  • sampling distribution of difference between two sample means
  • A linear combination of independent normal random variables yields a normal variable.

$\bar X_1 \sim N\left(\mu_1, \frac{\sigma_1^2}{n_1}\right)$

$\bar X_2 \sim N\left(\mu_2, \frac{\sigma_2^2}{n_2}\right)$

$E[\bar X_1 - \bar X_2] = \mu_1 - \mu_2$

$V[\bar X_1 - \bar X_2] = \frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}$

$\bar X_1 - \bar X_2 \sim N\left(\mu_1-\mu_2,\ \frac{\sigma_1^2}{n_1}+\frac{\sigma_2^2}{n_2}\right)$
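
A simulation sketch (made-up parameters) checking the mean and variance of the difference of two sample means:

```python
import numpy as np

rng = np.random.default_rng(3)
mu1, sigma1, n1 = 10.0, 2.0, 25        # assumed population 1
mu2, sigma2, n2 = 8.0, 3.0, 36         # assumed population 2
reps = 100_000

xbar1 = rng.normal(mu1, sigma1, (reps, n1)).mean(axis=1)
xbar2 = rng.normal(mu2, sigma2, (reps, n2)).mean(axis=1)
diff = xbar1 - xbar2

print(diff.mean(), mu1 - mu2)                          # close to mu1 - mu2
print(diff.var(), sigma1**2 / n1 + sigma2**2 / n2)     # close to sigma1^2/n1 + sigma2^2/n2
```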

Lecture 11 Estimation: Confidence Interval Estimator for Population Mean

In reality we do not know $\sigma$; we estimate it with $s$, which makes the confidence interval wider to allow a little more margin for error.

t Distribution

t-statistic: $t = \frac{\bar X -\mu}{s/\sqrt n}$

  • It is a random variable (since $s$ is itself random).
  • As the sample size $n\rightarrow \infty$, $s \rightarrow \sigma$ and $t_\infty \Leftrightarrow Z$.

Finding Student t Probabilities

  • The table reports $P(T>t_A \mid \nu = n-1) = A$
  • $P(T>t_\alpha \mid \nu = n-1) = \alpha$
  • $P(-t_{\alpha/2} < T < t_{\alpha/2}) = 1-\alpha$

CI Estimator of µ when σ is unknown

Confidence interval estimator of µ: $\bar x \pm t_{\alpha/2}\frac{s}{\sqrt{n}}$

Lower confidence limit (LCL): $\bar x - t_{\alpha/2}\frac{s}{\sqrt{n}}$

Upper confidence limit (UCL): $\bar x + t_{\alpha/2}\frac{s}{\sqrt{n}}$

  • Symmetric and ”mound-shaped” distribution.
  • Values near mean (which is 0) are more likely.
  • One parameter, ν, or degrees of freedom.
  • Unbounded support, (-∞, ∞)
  • To find probabilities, use Student t table.
| Sample size | σ is unknown | σ is known |
|-------------|--------------|------------|
| n < 30      | t-statistic & critical values of the t distribution | z-statistic & critical values of the standard normal distribution |
| n > 30      | z-statistic & critical values of the standard normal distribution | z-statistic & critical values of the standard normal distribution |
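
A sketch of a confidence interval for µ when σ is unknown, using the t critical value (the sample data are made up):

```python
import numpy as np
from scipy import stats

y = np.array([5.1, 6.4, 5.9, 6.8, 5.5, 6.1, 7.0, 5.8])   # made-up sample
alpha = 0.05

n = len(y)
xbar, s = y.mean(), y.std(ddof=1)
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)             # t_{alpha/2} with nu = n-1

margin = t_crit * s / np.sqrt(n)
print(xbar - margin, xbar + margin)                        # LCL, UCL
```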
