UA MATH571A Regression Analysis: Concepts and R Code Summary

  • Simple Linear Regression
  • Multivariate Linear Regression

Part 0 Basic R code
Tip 1: Read in data

  1. By the full path of the data file
    read.csv("D:/Stat PhD/taking course/summer1/ref/regression/Salary1.csv", header = TRUE, sep = ",", quote = "\"", dec = ".", fill = TRUE, comment.char = "")
  2. Choose the data file interactively from a dialog (suggested)
    read.csv( file.choose() )

Tip 2: R code plot()

  1. plot(x,y,…) or plot(y~x,…) to select variables

  2. type = : "p" (points), "l" (lines), "b" (both), "c" (the lines part of "b" alone), "h" (histogram-like vertical lines)

  3. pch = (a number 0-25, selecting one of 26 point characters)

  4. lty = (a number 1-6, selecting one of 6 line types)

  5. lwd = (a number, the line width)

  6. col = (a color name such as "red", or a color number)

  7. main = "", xlab = "", ylab = "" (title and axis labels)

  8. xlim = c(,), ylim = c(,) (axis ranges)

  9. Use par( new = TRUE ) to overlay two plots in one window (use the same xlim and ylim); a minimal sketch follows this list.
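A minimal sketch combining these arguments (x and y below are made-up data, not from the notes):

x <- seq(0, 10, 0.1)                       # placeholder data
y <- sin(x)
plot( x, y, type='l', lty=2, lwd=2, col='blue',
      main='sin and cos', xlab='x', ylab='y', xlim=c(0,10), ylim=c(-1,1) )
par( new=TRUE )                            # overlay the second plot
plot( x, cos(x), type='p', pch=19, xaxt='n', yaxt='n',
      xlab='', ylab='', xlim=c(0,10), ylim=c(-1,1) )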

Tip 3: R code lm()

  1. lm(formula, weights) returns a model object (a list). Use summary() to take a glance at it.
  2. formula: the basic form is y~model, where model combines variables with +, :, and *. + adds terms: y~a+b regresses y on both a and b. : denotes an interaction. * denotes the full model: a*b means a + b + a:b. Appending -1 (equivalently +0) removes the intercept, i.e. regression through the origin. A sketch of these forms follows this list.
  3. If weights is omitted, lm() performs OLS; weights = a numeric vector gives WLS.
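A short sketch of the formula forms above (a, b, and y are placeholder variables):

set.seed(1)
a <- rnorm(20); b <- rnorm(20)             # placeholder predictors
y <- 1 + 2*a - b + rnorm(20)
fit.main <- lm( y ~ a + b )                # main effects a and b
fit.int  <- lm( y ~ a:b )                  # interaction term only
fit.full <- lm( y ~ a*b )                  # full model: a + b + a:b
fit.orig <- lm( y ~ a - 1 )                # regression through the origin
summary( fit.full )                        # take a glance at the fit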

Tip 4: R code predict()

  1. The first argument is a model object (a list): an lm object or a loess object.
  2. newdata = data.frame(...): input data for the model variables in y~model
  3. interval = "confidence" or "prediction"
  4. level = (the confidence level $1-\alpha$, e.g. 0.95); a sketch follows
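Continuing the placeholder fit from Tip 3, a sketch of predict():

newdat <- data.frame( a = c(0, 1), b = c(0.5, -0.5) )            # inputs for y~a+b
predict( fit.main, newdat, interval='confidence', level=0.95 )   # CI for the mean response
predict( fit.main, newdat, interval='prediction', level=0.95 )   # PI for a new observation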

Simple Linear Regression

Part 1 Inference and Prediction
Tip 1: Model Assumption $Y_i \overset{iid}{\sim} N(\beta_0+\beta_1X_i,\ \sigma^2)$
Tip 2: Define $k_i = \frac{X_i-\bar{X}}{\sum_{i=1}^n (X_i-\bar{X})^2}$. Estimators of the coefficients are

$$\hat{\beta}_1= \sum_{i=1}^{n} k_i Y_i = \beta_1 + \sum_{i=1}^{n} k_i \epsilon_i \sim N\!\left(\beta_1,\ \frac{\sigma^2}{\sum_{i=1}^n(X_i-\bar{X})^2}\right)$$

$$\hat{\beta}_0 = \sum_{i=1}^{n} \left( \frac{1}{n}- k_i \bar{X}\right) Y_i = \beta_0 + \sum_{i=1}^{n} \left( \frac{1}{n}- k_i \bar{X}\right) \epsilon_i \sim N\!\left(\beta_0,\ \sigma^2 \left(\frac{1}{n}+\frac{\bar{X}^2}{\sum_{i=1}^n (X_i-\bar{X})^2}\right)\right)$$

Tip 3: The fitted value and residuals are

$$\hat{Y}_h = \hat{\beta}_0 + \hat{\beta}_1 X_h \sim N\!\left(E\{Y_h\},\ \sigma^2 \left(\frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right)\right)$$

$$e_j = Y_j - \hat Y_j \sim N\!\left(0,\ \sigma^2 \left(1-\frac{1}{n} - \frac{(X_j - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right)\right)$$

Tip 4: Prediction intervals are

$$Var(\hat{Y}_h-Y_h)=\sigma^2\left(1+\frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}\right)$$

$$\hat{Y}_h - t(1-\tfrac{\alpha}{2},\,n-2)\,se\{\hat{Y}_h-Y_h\} < Y_h < \hat{Y}_h + t(1-\tfrac{\alpha}{2},\,n-2)\,se\{\hat{Y}_h-Y_h\}$$
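A sketch computing this prediction interval by hand and checking it against predict(), assuming placeholder vectors x and y and an assumed new point xh:

fit  <- lm( y ~ x )
xh   <- 5                                  # assumed new X value
n    <- length(x)
mse  <- sum( resid(fit)^2 )/(n-2)          # MSE
se   <- sqrt( mse*(1 + 1/n + (xh-mean(x))^2/sum((x-mean(x))^2)) )
yh   <- predict( fit, data.frame(x=xh) )
yh + c(-1,1)*qt(1-.05/2, n-2)*se           # 95% PI from the formula
predict( fit, data.frame(x=xh), interval='prediction' )   # same numbers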

Tip 5: ANOVA

| Source | SS | df | MS | F |
| --- | --- | --- | --- | --- |
| Regression | $SSR=\sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2$ | $1$ | $MSR = SSR/df_R$ | $F = MSR/MSE$ |
| Residuals | $SSE=\sum_{i=1}^n (Y_i - \hat{Y}_i)^2$ | $n-2$ | $MSE = SSE/df_E$ | |
| Total | $SST=\sum_{i=1}^n (Y_i - \bar{Y})^2$ | $n-1$ | $MST = SST/df_T$ | |

$$SSE/\sigma^2 \sim \chi^2_{n-2}, \qquad F \sim F(1,\,n-2) \text{ under } H_0$$
$$H_0:\beta_1 = 0 \qquad H_a:\beta_1 \ne 0$$
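In R, anova() on the lm object prints this table; a sketch with the placeholder fit above:

anova( fit )                 # SS, df, MS, F and its p-value
summary( fit )$fstatistic    # the F statistic with its degrees of freedom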

Part 2 Residual Plots and Diagnostics
Tip 1: Residuals against sample index: to check whether there’s sequential correlation

Tip 2: Residual against independent variable: to check whether there’s missing higher order term

Tip 3: Residuals against fitted value: to check whether there’s heteroscedasticity

Tip 4: Residuals against potential independent variable: to check whether there’s important missing variable

Tip 5: Normal Probability Plot: to check whether normality holds
R code qqnorm(), qqline() to create QQ plot.
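A sketch with a placeholder fit:

qqnorm( resid(fit) )   # sample quantiles of residuals vs normal quantiles
qqline( resid(fit) )   # reference line; normality is plausible if points hug it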

Tip 6: Shapiro-Wilk test
R code shapiro.test() implements the Shapiro-Wilk test; fail to reject normality for a large p-value.
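A one-line sketch with a placeholder fit:

shapiro.test( resid(fit) )   # H0: residuals are normal; do not reject for large p-value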

Tip 7: Brown-Forsythe test
R code leveneTest() implements the Brown-Forsythe test for heteroscedasticity across groups; fail to reject homogeneity for a large p-value. Requires the R package car. Example,

library(car)                        # provides leveneTest()
ei <- resid(Ex3.lm)
G <- (X < 80)[order(X)]             # split observations into two groups by X
group <- as.factor(G)
BF.htest <- leveneTest(ei[order(X)], group, center = median)  # center=median gives Brown-Forsythe

Tip 8: Lack-of-fit F test
Fit the reduced model and the full model, then use R code anova(Reduced, Full) to perform the lack-of-fit F test; fail to reject the reduced model for a large p-value.
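A sketch of the classical lack-of-fit test (it requires repeated X values), with placeholder x and y:

Reduced <- lm( y ~ x )           # the straight-line model
Full    <- lm( y ~ factor(x) )   # saturated model: one mean per distinct X level
anova( Reduced, Full )           # large p-value: keep the reduced model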

Part 3 Remedial Methods: Box-Cox, Family-wise Adjustment and Nonparametrics
Tip 1: Box-Cox transformation
If linearity fails, use a Box-Cox transformation of the response. Example

require( MASS )
Ex3.bc = boxcox( Ex3.lm, lambda=seq(-1, 1, 0.1), interp=FALSE )
cbind( Ex3.bc$x, Ex3.bc$y )           # candidate lambdas and their log-likelihoods
Ex3.bc$x[ which.max(Ex3.bc$y) ]       # lambda maximizing the log-likelihood

Tip 2: Bonferroni Adjustment
Bonferroni controls the familywise error: with m simultaneous estimates, the familywise alpha is at most m times the pointwise alpha, so each interval is built at level α/m.
For statistics $T_i$, the familywise CIs are
$$B = t(1-\tfrac{\alpha}{2m},\,n-2), \qquad [T_i - B\,se(T_i),\ T_i + B\,se(T_i)]$$

Tip 3: Working-Hotelling-Scheffe Bound
For statistics $T_i$, the familywise CIs are
$$W = \sqrt{m\,F(1-\alpha,\,m,\,n-2)}, \qquad [T_i - W\,se(T_i),\ T_i + W\,se(T_i)]$$

Here $\alpha$ is the familywise level. If m is small, Bonferroni typically gives the tighter bound; otherwise use WHS. In practice, compute both multipliers and take the smaller, as in the sketch below.
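A sketch comparing the two multipliers (m, n, and alpha are assumed values):

m <- 4; n <- 30; alpha <- 0.05
B <- qt( 1 - alpha/(2*m), n-2 )            # Bonferroni multiplier
W <- sqrt( m*qf(1 - alpha, m, n-2) )       # WHS multiplier
min( B, W )                                # use whichever is smaller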

Tip 4: Heteroscedasticity and WLS: if homogeneity of variance is rejected, use WLS (a sketch follows).
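A sketch of one common weighting scheme, estimating the variance function from the absolute OLS residuals (placeholder x and y):

fit.ols <- lm( y ~ x )
sfit    <- lm( abs(resid(fit.ols)) ~ x )   # model the error sd against x
w       <- 1/fitted(sfit)^2                # weights = 1/estimated variance
fit.wls <- lm( y ~ x, weights=w )
summary( fit.wls )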

Tip 5: Nonparametric Regression: if normality is rejected, use nonparametric regression. Use R code loess() to fit and predict() to obtain the nonparametric curve. Example

baseballSLR.lm <- lm(Y ~ X)
absresid <- abs(resid(baseballSLR.lm))
Yhat <- fitted(baseballSLR.lm)
baseball.lo <- loess( absresid ~ Yhat, span = 0.5, degree = 2,
                      family='symmetric' )
Ysmooth <- predict( baseball.lo,
                    data.frame(Yhat = seq(min(Yhat), max(Yhat), .001)) )
plot( absresid ~ Yhat, xlim=c(.25,.29), ylim=c(0,.11) )
par( new=TRUE )
plot( Ysmooth ~ seq(min(Yhat), max(Yhat), .001), type='l', lwd=2,
      xaxt='n', yaxt='n', xlab='', ylab='', xlim=c(.25,.29), ylim=c(0,.11) )

Part 4 Correlation Analysis
Tip 1: PPMCC (Pearson product-moment correlation coefficient): cor() or cor.test(..., method = 'pearson')
Tip 2: Spearman's rank test: cor.test(..., method = 'spearman')
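A sketch of both tests, assuming paired vectors x and y:

cor.test( x, y, method='pearson' )    # PPMCC with its t test
cor.test( x, y, method='spearman' )   # Spearman's rank correlation test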

Part 5 Inverse Prediction and Bootstraps
Tip 1: Inverse prediction
$$\hat{X}_h = \frac{Y_h - \hat{\beta}_0}{\hat{\beta}_1}, \qquad \frac{\hat{X}_h - X_h}{se\{predX\}} \sim t(n-2)$$
$$s^2\{predX\} = \frac{MSE}{\hat{\beta}_1^2}\left[1+\frac{1}{n}+\frac{(\hat{X}_h - \bar{X})^2}{\sum_{i=1}^{n} (X_i - \bar{X})^2}\right]$$
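A sketch implementing these formulas (placeholder x, y, and an assumed target response Yh):

fit <- lm( y ~ x )
b0 <- coef(fit)[1]; b1 <- coef(fit)[2]
Yh <- 50                                    # assumed target response
Xhat <- (Yh - b0)/b1                        # point inverse prediction
n <- length(x); mse <- sum(resid(fit)^2)/(n-2)
s.pred <- sqrt( mse/b1^2*(1 + 1/n + (Xhat-mean(x))^2/sum((x-mean(x))^2)) )
Xhat + c(-1,1)*qt(1-.05/2, n-2)*s.pred      # 95% interval for Xh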

Tip 2: Bootstrapped confidence interval: above, $\hat\beta_1$ is treated as a constant, which is not really true. A better way to build the CI is the bootstrap. Example

theta_hat <- (50-finalpr2a.lm$coefficients[1])/finalpr2a.lm$coefficients[2]
ei <- resid(finalpr2a.lm)
Yhat <- fitted(finalpr2a.lm)
b1origin <- finalpr2a.lm$coefficients[2]
b0origin <- finalpr2a.lm$coefficients[1]
n <- length(Y)
B <- 5000                                  # number of bootstrap replicates
b0 <- numeric(B)
b1 <- numeric(B)
set.seed(2019)
for(b in 1:B){
  esti <- sample(ei, n, replace=TRUE)      # resample residuals
  Yest <- Yhat + esti                      # rebuild responses
  b0[b] <- coef(lm(Yest~X1))[1]
  b1[b] <- coef(lm(Yest~X1))[2]
}
theta <- (50-b0)/b1                        # bootstrapped inverse predictions
summary(theta)
theta <- sort(theta)
thetaL <- theta[126]                       # ~2.5th percentile (0.025 * 5000)
thetaU <- theta[4875]                      # ~97.5th percentile (0.975 * 5000)
c(thetaL, thetaU)
hist(theta, probability = TRUE)
abline(v=thetaL, lty=2, lwd=2)
abline(v=thetaU, lty=2, lwd=2)

Multivariate Linear Regression

Part 1 Inference and Prediction
Tip 1: Model Assumption $Y \sim N(X\beta,\ \sigma^2 I_n)$
Tip 2: Estimators of coefficients are $\hat{\beta} = (X^TX)^{-1}X^TY$
Tip 3: Fitted value and residuals are

  1. Hat matrix $H = X(X^TX)^{-1}X^T$
  2. Fitted value $\hat Y = HY$
  3. Residual $e = (I_n - H)Y$ (a verification sketch follows this list)
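A sketch verifying these matrix formulas against lm() (X1, X2, Y are placeholder variables):

X <- cbind( 1, X1, X2 )                    # design matrix with intercept column
beta.hat <- solve( t(X)%*%X )%*%t(X)%*%Y   # (X'X)^{-1} X'Y
H <- X%*%solve( t(X)%*%X )%*%t(X)          # hat matrix
Yhat <- H%*%Y                              # fitted values
e <- ( diag(length(Y)) - H )%*%Y           # residuals
cbind( beta.hat, coef(lm(Y ~ X1+X2)) )     # the two columns should match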

Tip 4: ANOVA

| Source | SS | df | MS |
| --- | --- | --- | --- |
| Regression | $SSR=\hat{\beta}^TX^TY - \frac{1}{N}Y^TJY$ | $p-1$ | $MSR = SSR/df_R$ |
| Residuals | $SSE=Y^TY - \hat{\beta}^TX^TY$ | $N-p$ | $MSE = SSE/df_E$ |
| Total | $SST=Y^T(I_N-\frac{J}{N})Y$ | $N-1$ | $MST = SST/df_T$ |

Tip 5: Sequential ANOVA. When given a sequence of model objects, anova() tests the models against one another in the order specified, as in the sketch below.
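A sketch of sequential ANOVA with placeholder variables:

fit1 <- lm( Y ~ X1 )
fit2 <- lm( Y ~ X1 + X2 )
fit3 <- lm( Y ~ X1 + X2 + X3 )
anova( fit3 )                  # sequential (Type I) SS within one model
anova( fit1, fit2, fit3 )      # nested models tested in the order given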

Part 2 Variable Selection
Tip 1: Adjusted coefficient of determination. Use R code leaps(x, y, method = 'adjr2')

library(leaps)
CH09PR11.r2 <- leaps( x=cbind(X1,X2,X3,X4), y=Y, method='adjr2' )
p = seq( min(CH09PR11.r2$size), max(CH09PR11.r2$size) )
plot( CH09PR11.r2$adjr2 ~ CH09PR11.r2$size, ylab=expression(R[adj]^2), xlab='p' )
Rp2 = by( data=CH09PR11.r2$adjr2, INDICES=factor(CH09PR11.r2$size), FUN=max )
lines( Rp2 ~ p )
CH09PR11.r2[["adjr2"]]
CH09PR11.r2[["which"]]

Tip 2: Mallow's $C_p$. Use R code leaps(x, y, method = 'Cp')

library(leaps)
CH09PR11.cp <- leaps( x=cbind(X1,X2,X3,X4), y=Y, method='Cp' )
p = seq( min(CH09PR11.cp$size), max(CH09PR11.cp$size) )
plot( CH09PR11.cp$Cp ~ CH09PR11.cp$size, ylab=expression(C[p]), xlab='p' )
Cpmin = by( data=CH09PR11.cp$Cp, INDICES=factor(CH09PR11.cp$size), FUN=min )
lines( Cpmin ~ p )
CH09PR11.cp[["Cp"]]
CH09PR11.cp[["which"]]

Tip 3: AIC and BIC. Use R code step() on an lm object; it selects by AIC by default (k = 2).

CH09PR10.lm <- lm( Y ~ X1+X2+X3+X4 )
CH09PR10.step <- step(CH09PR10.lm)

Tip 4: $PRESS_p$. Use PRESS() from the MPV package to get PRESS.

library(MPV)
CH09PR23.r2 <- leaps( x=cbind(X1,X2,X3), y=Y, method='adjr2' )
p = seq( min(CH09PR23.r2$size), max(CH09PR23.r2$size) )
CH09PR23.r2[["which"]]
PRESSp = numeric( length(CH09PR23.r2$size) )
SSEp = numeric( length(CH09PR23.r2$size) )
PRESSp[1] = PRESS( lm( Y ~ X1 ) )
PRESSp[2] = PRESS( lm( Y ~ X2 ) )
PRESSp[3] = PRESS( lm( Y ~ X3 ) )
PRESSp[4] = PRESS( lm( Y ~ X1+X2 ) )
PRESSp[5] = PRESS( lm( Y ~ X1+X3 ) )
PRESSp[6] = PRESS( lm( Y ~ X2+X3 ) )
PRESSp[7] = PRESS( lm( Y ~ X1+X2+X3 ) )
PRESSp[8] = PRESS( lm( Y ~ X1+X2+X1sq+X2sq ) )
PRESSp[9] = PRESS( lm( Y ~ X1+X2+X1X2 ) )
PRESSp[10] = PRESS( lm( Y ~ X1+X2+X1sq+X2X3 ) )
SSEp[1] = anova(lm(Y~X1))[2,2]
SSEp[2] = anova(lm(Y~X2))[2,2]
SSEp[3] = anova(lm(Y~X3))[2,2]
SSEp[4] = anova(lm(Y~X1+X2))[3,2]
SSEp[5] = anova(lm(Y~X1+X3))[3,2]
SSEp[6] = anova(lm(Y~X2+X3))[3,2]
SSEp[7] = anova(lm(Y~X1+X2+X3))[4,2]
SSEp[8] = anova(lm(Y~X1+X2+X1sq+X2sq))[5,2]
SSEp[9] = anova(lm(Y~X1+X2+X1X2))[4,2]
SSEp[10] = anova(lm(Y~X1+X2+X1sq+X2X3))[5,2]
plot( PRESSp[1:7] ~ CH09PR23.r2$size, ylab=expression(PRESS[p]), xlab='p' )
minPRESSp = by( data=PRESSp[1:7], INDICES=factor(CH09PR23.r2$size), FUN=min )
lines( minPRESSp ~ p )

Tip 5: Forward Stepwise Regression

step( lm(Y~1), scope = Y~X1+X2+X3+X4+X51999+X52000+X52001, direction='forward' )  # step() is in base R

Tip 6: Backward Elimination. Use step(direction='backward') or fastbw(). k is the multiple of the number of degrees of freedom used for the penalty: k = 2 for AIC and k = log(n) for BIC (n = length(Y)).

library(rms)
CH09PR20.lm <- lm( Y~X1+X2+X3+X4+X51999+X52000+X52001 )
step( CH09PR20.lm, direction='backward', k=2 )
CH09PR20.ols <- ols( Y~X1+X2+X3+X4+X51999+X52000+X52001 )
fastbw( fit=CH09PR20.ols, rule='p', type='individual', sls=.10 )

Part 3 Outlier Detection
Tip 1: Studentized Deleted Residual: internally studentized residuals (from the fit) and externally studentized (deleted) residuals (LOOCV). R code rstudent() returns the externally studentized residuals.

tcrit <- qt( 1-.5*(.05/52), 52-4-1 ) # Bonferroni adjustment: alpha = .05/52, df = n-p-1 with n=52, p=4
CH10PR10.lm <- lm( Y ~ X)
plot( rstudent(CH10PR10.lm) ~ fitted(CH10PR10.lm), ylim=c(-4,4) )
abline( h=tcrit , lty=2 )
abline( h=-tcrit, lty=2 )
abline( h=0 )

Tip 2: Hat Matrix Leverage Value. Use hatvalues() to get the leverages. The 2p/n rule of thumb can fail when the number of predictors is large.

n <- length(Y) # sample size
p <- 3 # number of predictors
hii = hatvalues( CH10PR10.lm )
hii>2*p/n # Empirical Rule to determine outliers.

Part 4 Influence Analysis
Tip 1: DFFITS. The influence of the i-th observation on the i-th fitted value. Use R code dffits().
Tip 2: Cook's Distance. The influence of the i-th observation on all fitted values. Use R code cooks.distance().
Tip 3: DFBETAS. The influence of the i-th observation on the k-th coefficient. Use R code dfbetas(), or influence.measures() to get all measures at once. The input is the model object.

DFF <- dffits(CH10PR10.lm)
abs(DFF) > 2*sqrt(p/n)                     # rule of thumb for influential DFFITS
DFB <- influence.measures( CH10PR10.lm )   # DFBETAS, DFFITS, Cook's D, hat values
DFB[["infmat"]][c(16,22,43,48,10,32,38,40),]
plot( cooks.distance(CH10PR10.lm), type='o', pch=19 )
ei = resid( CH10PR10.lm )
yhat = fitted(CH10PR10.lm)
radius = sqrt( cooks.distance(CH10PR10.lm)/pi )   # circle area proportional to Cook's D
plot( ei ~ yhat, pch='' )
abline( h=0 )
symbols( yhat, ei, circles=radius, inches=.15, bg='black', fg='white', add=TRUE )

Part 5 Multicollinearity and Ridge Regression
Tip 1: Variance Inflation Factor

  1. Use pairs() and cor() to get pairwise scatter plots and the correlation matrix.
  2. require( car )
    vif( lm(Y ~ X1+X2+X3) )

  3. If max(VIF) > 10 and mean(VIF) > 6, multicollinearity is suggested.

Tip 2: Ridge Regression. Use R package genridge.

library(genridge)
lambdas <- seq( from=.01, to=10, by=.01 )   # ridge constants ('c' would mask base::c)
traceplot( ridge(U~Z1+Z2+Z3, lambda=lambdas) )
