Python Basics with Numpy

A quick hands-on introduction to Python and Numpy.

  • First, install Python 3.x and IPython notebook
  • Get familiar with numpy functions and numpy matrices/vectors
  • Understand "broadcasting" when operating on numpy arrays and vectors
  • Let's get started!
test = "Hello World"print ("test: " + test)
test: Hello World

1.1 - sigmoid function, np.exp()

sigmoid(x) = \frac{1}{1+e^{-x}}

sigmoid() is a nonlinear function commonly used as an activation function in machine learning algorithms; it applies a nonlinear transformation to the input data.

# Implement sigmoid using Python's math module
import math

def basic_sigmoid(x):
    s = 1 / (1 + math.exp(-x))
    return s
basic_sigmoid(6)
0.9975273768433653

In ML or DL algorithms, the data that needs a nonlinear transformation is usually a matrix or a vector, and math.exp() cannot handle those types.
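
For instance, calling basic_sigmoid (defined above) on a list fails, because the unary minus in math.exp(-x) is not defined for Python lists; a minimal check:
x = [1, 2, 3]
try:
    basic_sigmoid(x)
except TypeError as e:
    print(e)  # bad operand type for unary -: 'list'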

If sigmoid() receives a vector as its argument, say x = (x_1, x_2, ..., x_n), then np.exp(x) applies the exponential to every element of the vector: np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})

import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
[ 2.71828183  7.3890561  20.08553692]
# broadcasting
x = np.array([1, 2, 3])
print (x + 3)
[4 5 6]

To learn more, see the official documentation.

  • Implement sigmoid(x) with Numpy
  • The argument x can be a number, a vector, or a matrix (array)
\text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \\ x_2 \\ ... \\ x_n \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ ... \\ \frac{1}{1+e^{-x_n}} \end{pmatrix}

import numpy as np

def sigmoid(x):
    s = 1 / (1 + np.exp(-x))
    return s
x = np.array([1, 2, 3])
sigmoid(x)
array([0.73105858, 0.88079708, 0.95257413])
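
Because np.exp() accepts scalars and multidimensional arrays alike, the same function covers all the cases listed above; a small check (the 2x2 matrix here is arbitrary):
print(sigmoid(0))                            # 0.5
print(sigmoid(np.array([[0, 2], [-2, 0]])))  # applies sigmoid to every element of the matrix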

1.2 - Sigmoid gradient

sigmoid\_derivative(x) = \sigma'(x) = \sigma(x)(1 - \sigma(x))
- Two steps, writing sigmoid(x) = \sigma(x):
1. s = \sigma(x)
2. \sigma'(x) = s(1 - s)

def sigmoid_derivative(x):
    s = sigmoid(x)
    ds = s * (1 - s)
    return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
sigmoid_derivative(x) = [0.19661193 0.10499359 0.04517666]
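
As a quick sanity check (reusing the sigmoid and sigmoid_derivative defined above), a central finite difference should be very close to the analytic derivative:
eps = 1e-5
x = np.array([1, 2, 3])
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(np.allclose(numeric, sigmoid_derivative(x)))  # True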

1.3 - Reshaping arrays

Two things you will use constantly in Numpy: X.shape and X.reshape().

  • X.shape : returns the dimension information of a variable X (scalar, vector, matrix).
  • X.reshape(shape) : reshapes the variable to the given dimensions, e.g. from (1, 6) to (3, 2); shape is a tuple.
  • The usage is shown below.
def image2vector(image):
    """
    image -- a numpy array of shape (length, height, depth)
    v -- a vector of shape (length*height*depth, 1)
    """
    # When reshaping, don't hardcode the target dimensions; read them through image.shape[i]
    # so the product of the new dimensions never differs from the number of elements
    v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
    return v
# image is a 3D array
image = np.array([[[ 0.67826139,  0.29380381],
                   [ 0.90714982,  0.52835647],
                   [ 0.4215251 ,  0.45017551]],

                  [[ 0.92814219,  0.96677647],
                   [ 0.85304703,  0.52351845],
                   [ 0.19981397,  0.27417313]],

                  [[ 0.60659855,  0.00533165],
                   [ 0.10820313,  0.49978937],
                   [ 0.34144279,  0.94630077]]])
print('image shape :',image.shape)
image shape : (3, 3, 2)

Flatten the image from 3D to a 1D column vector:

v = image2vector(image)
print(v.shape)
(18, 1)
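
The same flattening can also be written by letting Numpy infer the length with -1; both versions flatten in the default C order, so they agree:
v2 = image.reshape(-1, 1)
print(v2.shape)               # (18, 1)
print(np.array_equal(v, v2))  # True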

1.4 - Normalizing rows

Normalizing data is a common technique in ML & DL; after normalization, gradient descent converges faster during training.

  • normalization : \frac{x}{\| x \|}
  • np.linalg.norm(x, axis, keepdims)
  • x : the original data
  • axis : can be 0 or 1
  • 0 : for each column, take the square root of the sum of squares of its elements
  • 1 : for each row, take the square root of the sum of squares of its elements
  • keepdims : if True, the reduced axis is kept with size 1 (e.g. with axis=1 the result has shape (m, 1)), so the result can broadcast against x; a short demo follows this list
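
A minimal demo of axis and keepdims (the small array here is just for illustration):
a = np.array([[3, 4],
              [6, 8]])
print(np.linalg.norm(a, axis=0, keepdims=True))  # column norms, shape (1, 2)
print(np.linalg.norm(a, axis=1, keepdims=True))  # row norms, shape (2, 1)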

For example,

x = \begin{bmatrix} 0 & 3 & 4 \\ 2 & 6 & 4 \end{bmatrix}

Normalizing:

\| x \| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix} 5 \\ \sqrt{56} \end{bmatrix}

In detail:

\begin{bmatrix} 5 \\ \sqrt{56} \end{bmatrix} = \begin{bmatrix} \sqrt{0 + 3^2 + 4^2} \\ \sqrt{2^2 + 6^2 + 4^2} \end{bmatrix}

Normalized:

x\_normalized = \frac{x}{\| x \|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \\ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \end{bmatrix}
- Note: you may have noticed that x and \| x \| have different shapes, yet the division still works; this is numpy broadcasting.

1. numpy pads \| x \| so that it has the same shape as x:

\| x \| = \begin{bmatrix} 5 & 5 & 5 \\ \sqrt{56} & \sqrt{56} & \sqrt{56} \end{bmatrix}

2. Once the shapes match, the division is performed element by element.

# normalizeRows
def normalizeRows(x):
    """
    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix. You are allowed to modify x.
    """
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    x = x / x_norm
    return x
x = np.array([[0, 3, 4],[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
normalizeRows(x) = [[0.         0.6        0.8       ]
 [0.13736056 0.82416338 0.54944226]]
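
A quick check that every row of the result has unit length (reusing normalizeRows and x from above):
print(np.linalg.norm(normalizeRows(x), axis=1))  # [1. 1.]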

1.5 - Broadcasting and the softmax function

"broadcasting”在numpy中是一个很重要的概念理解它,这对在两个不同尺寸(shape)的变量(vector,matrix)进行计算时是非常有用的.关于Broadcasting的更多细节-broadcasting documentation.

softmax() : implement softmax() with numpy. The softmax() function is commonly used in classification models.

  • for x \in \mathbb{R}^{1\times n}, softmax(x) = softmax(\begin{bmatrix} x_1 & x_2 & \dots & x_n \end{bmatrix}) = \begin{bmatrix} \frac{e^{x_1}}{\sum_{j}e^{x_j}} & \frac{e^{x_2}}{\sum_{j}e^{x_j}} & \dots & \frac{e^{x_n}}{\sum_{j}e^{x_j}} \end{bmatrix}

  • for a matrix x \in \mathbb{R}^{m \times n}, x_{ij} maps to the element in the i^{th} row and j^{th} column of x, thus we have:

    softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn} \end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\ \frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}} \end{bmatrix} = \begin{pmatrix} softmax\text{(first row of x)} \\ softmax\text{(second row of x)} \\ ... \\ softmax\text{(last row of x)} \end{pmatrix}


def softmax(x):
    """
    Calculates the softmax for each row of the input x.

    x -- A numpy matrix of shape (n, m)

    Returns:
    s -- A numpy matrix equal to the softmax of x, of shape (n, m)
    """
    # exponentiate every element; x_exp.shape = (n, m)
    x_exp = np.exp(x)
    # sum each row of x_exp; x_sum.shape = (n, 1)
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    # as in normalizing, x_sum broadcasts against x_exp
    s = x_exp / x_sum
    return s
x = np.array([[9, 2, 5, 0, 0], [7, 5, 0, 0, 0]])
print("softmax(x) = " + str(softmax(x)))
softmax(x) = [[9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04 1.21052389e-04]
 [8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04 8.01252314e-04]]
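
Each row of the softmax output should sum to 1; a quick check:
print(np.sum(softmax(x), axis=1))  # [1. 1.]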

2) Vectorization

In deep learning the datasets are very large, so to keep the model's computations efficient we need to vectorize.

import time

x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]

### DOT PRODUCT ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
    dot += x1[i] * x2[i]
toc = time.process_time()
print("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### OUTER PRODUCT ###
# the result is a matrix of shape (len(x1), len(x2))
tic = time.process_time()
outer = np.zeros((len(x1), len(x2)))  # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
    for j in range(len(x2)):
        outer[i, j] = x1[i] * x2[j]
toc = time.process_time()
print("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### ELEMENTWISE MULTIPLICATION ###
# multiply corresponding elements of two vectors of the same shape; the result is a vector
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
    mul[i] = x1[i] * x2[i]
toc = time.process_time()
print("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### GENERAL DOT PRODUCT ###
# dot x1 with each row of a (3, len(x1)) matrix; the output is a vector of shape (3,)
W = np.random.rand(3, len(x1))  # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
    for j in range(len(x1)):
        gdot[i] += W[i, j] * x1[j]
toc = time.process_time()
print("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
dot = 278
 ----- Computation time = 0.14245699999992922ms
outer = [[81. 18. 18. 81.  0. 81. 18. 45.  0.  0. 81. 18. 45.  0.  0.]
 [18.  4.  4. 18.  0. 18.  4. 10.  0.  0. 18.  4. 10.  0.  0.]
 [45. 10. 10. 45.  0. 45. 10. 25.  0.  0. 45. 10. 25.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [63. 14. 14. 63.  0. 63. 14. 35.  0.  0. 63. 14. 35.  0.  0.]
 [45. 10. 10. 45.  0. 45. 10. 25.  0.  0. 45. 10. 25.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [81. 18. 18. 81.  0. 81. 18. 45.  0.  0. 81. 18. 45.  0.  0.]
 [18.  4.  4. 18.  0. 18.  4. 10.  0.  0. 18.  4. 10.  0.  0.]
 [45. 10. 10. 45.  0. 45. 10. 25.  0.  0. 45. 10. 25.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]]
 ----- Computation time = 0.3120860000001002ms
elementwise multiplication = [81.  4. 10.  0.  0. 63. 10.  0.  0.  0. 81.  4. 25.  0.  0.]
 ----- Computation time = 0.5331989999999287ms
gdot = [24.75008591 26.52221182 27.61700577]
 ----- Computation time = 0.44006000000029744ms

The same computations, implemented with Numpy vectorization:

x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]

### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1, x2)
toc = time.process_time()
print("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1, x2)
toc = time.process_time()
print("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1, x2)
toc = time.process_time()
print("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W, x1)
toc = time.process_time()
print("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
dot = 278
 ----- Computation time = 0.14096900000026835ms
outer = [[81 18 18 81  0 81 18 45  0  0 81 18 45  0  0]
 [18  4  4 18  0 18  4 10  0  0 18  4 10  0  0]
 [45 10 10 45  0 45 10 25  0  0 45 10 25  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [63 14 14 63  0 63 14 35  0  0 63 14 35  0  0]
 [45 10 10 45  0 45 10 25  0  0 45 10 25  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [81 18 18 81  0 81 18 45  0  0 81 18 45  0  0]
 [18  4  4 18  0 18  4 10  0  0 18  4 10  0  0]
 [45 10 10 45  0 45 10 25  0  0 45 10 25  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]]
 ----- Computation time = 0.322212000000377ms
elementwise multiplication = [81  4 10  0  0 63 10  0  0  0 81  4 25  0  0]
 ----- Computation time = 0.12618199999980817ms
gdot = [24.75008591 26.52221182 27.61700577]
 ----- Computation time = 0.2716710000001399ms

The comparison shows that Numpy performs the same computations faster, and with more concise code.

2.1 Implement the L1 and L2 loss functions

L1 loss: compute the L1 loss with Numpy (vectorized version)

  • Functions used: np.sum(), and np.abs() for the absolute value
  • The loss is used to evaluate the model's performance; the larger the loss, the further the predictions \hat{y} are from the true values y. In deep learning, an optimization algorithm such as gradient descent is used to train the model and drive the loss down to a minimum.
  • L1 loss : L_1(\hat{y}, y) = \sum_{i=0}^m |y^{(i)} - \hat{y}^{(i)}|
def L1(yhat, y):
    loss = np.sum(np.abs(y - yhat))
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
L1 = 1.1
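
Checking the result by hand on the arrays above:

L_1 = |1 - 0.9| + |0 - 0.2| + |0 - 0.1| + |1 - 0.4| + |1 - 0.9| = 0.1 + 0.2 + 0.1 + 0.6 + 0.1 = 1.1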

L2 loss: compute the L2 loss with Numpy
- Function used: np.dot() (inner product)
- e.g. if x = [x_1, x_2, ..., x_n], then np.dot(x, x) = \sum_{j=0}^n x_j^{2}.

  • L2 loss : L_2(\hat{y}, y) = \sum_{i=0}^m (y^{(i)} - \hat{y}^{(i)})^2
def L2(yhat, y):
    loss = np.dot((y - yhat), (y - yhat))
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
L2 = 0.43
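
And checking by hand:

L_2 = 0.1^2 + 0.2^2 + 0.1^2 + 0.6^2 + 0.1^2 = 0.01 + 0.04 + 0.01 + 0.36 + 0.01 = 0.43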

Expected Output:

L2 = 0.43
