I wrote an optimizer that works roughly like Adam. I have wanted to use my hand-written neural network in the quadcopter training project for a while, but I was worried training would be too slow, so I wrote a function to speed up training. It is not the standard algorithm and deviates from standard Adam in places. In that project, the Actor-Critic part implemented with TensorFlow seems to require reaching into backend parameters, so it is simpler to use my own network, where the parameters can be read directly.
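For comparison, the textbook Adam update keeps per-parameter first- and second-moment estimates with bias correction; here is a minimal sketch (my variant below only tracks a single scalar for the second moment, so the two are not equivalent):

import numpy as np

def adam_update(theta, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # standard Adam: exponential moving averages of the gradient and squared gradient,
    # bias-corrected, then a per-parameter scaled step
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * (g * g)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v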

In general, the optimizer is implemented as a separate class: nn.forward() computes each layer's output, loss.backward() computes the derivative of the loss with respect to every parameter, the optimizer receives a reference to the network's parameters in its constructor (so it can see both the parameters and their gradients), and optimizer.step() moves the parameters some distance in a gradient-like direction according to the optimizer's history and its update rule. My implementation is simpler and has no separate optimizer class. (This separation is the flow PyTorch uses for training a neural network.)
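For reference, a minimal sketch of that standard PyTorch pattern (a toy model and a dummy batch, purely to show the four calls; none of this is part of my implementation):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer holds references to the parameters
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 2)             # dummy batch
y = torch.randint(0, 2, (8,))     # dummy labels

optimizer.zero_grad()             # clear old gradients
loss = loss_fn(model(x), y)       # forward pass
loss.backward()                   # compute d(loss)/d(parameter)
optimizer.step()                  # move parameters using the gradients + optimizer state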

import numpy as np
import pandas as pd
import copy
def tanh(x):
    return np.tanh(x)

def tanh_derivative(x):
    # derivative written in terms of the layer output: 1 - tanh(x)^2
    return 1.0 - x * x

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # derivative written in terms of the layer output: s * (1 - s)
    return x * (1 - x)

def relu(x):
    return np.maximum(x, 0)
    # t = copy.copy(x)
    # for i in range(len(t)):
    #     if t[i] < 0:
    #         t[i] = 0
    # return t

def relu_derivative(x):
    t = copy.copy(x)
    for i in range(len(t)):
        if t[i] <= 1e-12:
            t[i] = 0
        else:
            t[i] = 1
    return t

class ActivationFunc:
    # maps an activation name to the function and to its derivative
    def __init__(self):
        self.tdict = dict()
        self.tdict['tanh'] = np.tanh
        self.tdict['sigmoid'] = lambda x: 1 / (1 + np.exp(-x))
        self.tdict['relu'] = relu
        self.tdict['softmax'] = np.exp   # softmax is just exp here; normalization happens in predict()
        self.ddict = dict()
        self.ddict['tanh'] = tanh_derivative
        self.ddict['sigmoid'] = sigmoid_derivative
        self.ddict['relu'] = relu_derivative
        self.ddict['softmax'] = np.exp

    def getActivation(self, activation):
        if activation in self.tdict:
            return self.tdict[activation]
        else:
            return lambda x: x

    def getDActivation(self, activation):
        if activation in self.ddict:
            return self.ddict[activation]
        else:
            return lambda x: np.ones(x.shape)

# print(ActivationFunc().getActivation('logistic')(1.0))
#print(logistic_derivative(1.0))
class NNetwork:
    def __init__(self, inputsize, lr = 0.001, withbias = True, optimizer = 'adam'):
        self.para = []          # weight matrix of each layer
        self.layerout = []      # per-layer outputs from the last forward pass
        self.grad = []          # accumulated gradients, same shapes as self.para
        self.backout = []
        self.activationclass = ActivationFunc()
        self.inputsize = inputsize
        self.lastsize = inputsize
        self.lr = lr
        self.layerlen = 0
        self.activation = []
        self.deactivation = []
        self.wbias = withbias
        self.outputfunc = 'softmax'
        self.maxnum = 0.001     # largest absolute gradient seen since the last step
        self.bstep = 0          # number of backward() calls accumulated into the gradients
        self.belta1 = 0.7
        self.belta2 = 0.7
        self.alphat = 1.0
        self.Eg = None
        self.m = None
        if optimizer == 'adam':
            print('optimized with adam')
            self.stepfunc = self.adamstep
        else:
            print('optimized with std')
            self.stepfunc = self.stdstep
        # self.activation = ActivationFunc().getActivation(mactivation)

    def add(self, densesize, actstr):
        tsize = self.lastsize
        if self.wbias:
            tsize += 1
        self.para.append(np.random.rand(densesize, tsize) - 0.5)
        self.grad.append(np.zeros((densesize, tsize)))
        self.lastsize = densesize
        self.activation.append(self.activationclass.getActivation(actstr))
        self.deactivation.append(self.activationclass.getDActivation(actstr))
        self.layerlen += 1
        self.outputfunc = actstr

    def forward(self, input):
        self.layerout = []
        if self.wbias:
            self.layerout.append(np.append(np.array(input), 1))
        else:
            self.layerout.append(np.array(input))
        for i in range(self.layerlen):
            # print(self.layerout[-1].shape, self.para[i].shape)
            if self.wbias and i != self.layerlen - 1:
                self.layerout.append(np.append(self.activation[i](np.dot(self.para[i], self.layerout[-1].T)), 1))
            else:
                self.layerout.append(self.activation[i](np.dot(self.para[i], self.layerout[-1].T)))
        return self.layerout[-1]

    def backward(self, y, y_label):
        self.maxnum = 0.001
        self.bstep += 1
        tsumy = sum(y)
        if self.outputfunc == 'softmax':
            y[y_label] -= tsumy
        # self.maxnum = max(self.maxnum, max(y))
        self.backout = []
        self.backout.append(np.matrix(y).T)
        for i in range(self.layerlen, 0, -1):
            # print(self.backout[-1].shape, np.matrix(self.layerout[i - 1]).shape)
            self.grad[i - 1] += np.dot(self.backout[-1], np.matrix(self.layerout[i - 1]))
            self.maxnum = max(np.abs(self.grad[i - 1]).max().max(), self.maxnum)
            if i > 1:
                if self.wbias:
                    self.backout.append(np.multiply(self.deactivation[i - 2](self.layerout[i - 1]), np.dot(self.backout[-1].T, self.para[i - 1])).T[:-1, :])
                else:
                    self.backout.append(np.multiply(self.deactivation[i - 2](self.layerout[i - 1]), np.dot(self.backout[-1].T, self.para[i - 1])).T)
            else:
                self.backout.append(np.dot(self.backout[-1].T, self.para[i - 1]))

    def zero_grad(self):
        for obj in self.grad:
            obj.fill(0)
        self.maxnum = 0.001
        self.bstep = 0

    def step(self):
        self.stepfunc()

    def stdstep(self):
        # plain gradient step, scaled by the largest gradient magnitude seen
        for obj1, obj2 in zip(self.para, self.grad):
            obj1 -= self.lr * obj2 / max(self.maxnum, 0.001) * self.bstep
        self.zero_grad()

    def adamstep(self):
        # Adam-like step: Eg tracks a single scalar (maxnum) rather than per-parameter second moments
        self.belta2 = min(0.9, self.belta2 * 1.01)
        self.belta1 = min(0.9, self.belta1 * 1.01)
        if self.Eg != None:
            self.Eg = (1 - self.belta2) * self.maxnum + self.belta2 * self.Eg
            for obj1, obj2 in zip(self.m, self.grad):
                obj1 = (1 - self.belta1) * obj2 + self.belta1 * obj1
        else:
            self.Eg = self.maxnum
            self.m = self.grad
        # if abs(self.Eg - self.maxnum) > 0.01 * self.maxnum:
        #     print(self.Eg, self.maxnum)
        te = self.Eg / (1 - np.power(self.belta2, self.alphat))
        tm = [obj / (1 - np.power(self.belta1, self.alphat)) for obj in self.m]
        for obj1, obj2 in zip(self.para, self.m):
            obj1 -= self.lr * obj2 / max(te, 0.001) * self.bstep
        self.zero_grad()

    def predict(self, input):
        y = self.forward(input)
        y /= np.sum(y)   # normalize the exp outputs into softmax probabilities
        return y

# 2*x + y - 3
if __name__ == "__main__":model = NNetwork(2, withbias = True, lr = 0.001, optimizer = 'adam')model.add(16, 'relu')model.add(8, 'relu')model.add(2, 'softmax')data = pd.read_csv('data.csv').astype('float64').sample(frac=1)datalen = len(data)data_train = data.iloc[:int(datalen*0.9),:]data_test = data.iloc[int(datalen*0.9):,:]X_train = data_train.iloc[:,:2]y_train = data_train.iloc[:,2].astype('int')X_test = data_test.iloc[:,:2]y_test = data_test.iloc[:,2].astype('int')len_train = len(X_train)#print(X_train.dtype)for i in range(200000):tid = i % len_train#print(X_train.iloc[tid])output = model.forward(X_train.iloc[tid])model.backward(output, y_train.iloc[tid])if tid == len_train - 1:model.step()pres = []for ind, val in X_test.iterrows(): pres.append(np.argmax(model.predict(val)))res1 = np.array(pres)res2 = np.array(y_test)print(res1)print(res2)'''X = [[0,0],[0,1],[1,0],[1,1]]y = [0, 1, 1, 0]for i in range(200000):tid = i % 4#model.zero_grad()output = model.forward(X[tid])model.backward(output, y[tid])if tid == 3: model.step()print(model.predict([1,1]))print(model.predict([0,1]))print(model.predict([0,0]))print(model.predict([1,0]))'''
data.csv:
0.78051,-0.063669,1
0.28774,0.29139,1
0.40714,0.17878,1
0.2923,0.4217,1
0.50922,0.35256,1
0.27785,0.10802,1
0.27527,0.33223,1
0.43999,0.31245,1
0.33557,0.42984,1
0.23448,0.24986,1
0.0084492,0.13658,1
0.12419,0.33595,1
0.25644,0.42624,1
0.4591,0.40426,1
0.44547,0.45117,1
0.42218,0.20118,1
0.49563,0.21445,1
0.30848,0.24306,1
0.39707,0.44438,1
0.32945,0.39217,1
0.40739,0.40271,1
0.3106,0.50702,1
0.49638,0.45384,1
0.10073,0.32053,1
0.69907,0.37307,1
0.29767,0.69648,1
0.15099,0.57341,1
0.16427,0.27759,1
0.33259,0.055964,1
0.53741,0.28637,1
0.19503,0.36879,1
0.40278,0.035148,1
0.21296,0.55169,1
0.48447,0.56991,1
0.25476,0.34596,1
0.21726,0.28641,1
0.67078,0.46538,1
0.3815,0.4622,1
0.53838,0.32774,1
0.4849,0.26071,1
0.37095,0.38809,1
0.54527,0.63911,1
0.32149,0.12007,1
0.42216,0.61666,1
0.10194,0.060408,1
0.15254,0.2168,1
0.45558,0.43769,1
0.28488,0.52142,1
0.27633,0.21264,1
0.39748,0.31902,1
0.5533,1,0
0.44274,0.59205,0
0.85176,0.6612,0
0.60436,0.86605,0
0.68243,0.48301,0
1,0.76815,0
0.72989,0.8107,0
0.67377,0.77975,0
0.78761,0.58177,0
0.71442,0.7668,0
0.49379,0.54226,0
0.78974,0.74233,0
0.67905,0.60921,0
0.6642,0.72519,0
0.79396,0.56789,0
0.70758,0.76022,0
0.59421,0.61857,0
0.49364,0.56224,0
0.77707,0.35025,0
0.79785,0.76921,0
0.70876,0.96764,0
0.69176,0.60865,0
0.66408,0.92075,0
0.65973,0.66666,0
0.64574,0.56845,0
0.89639,0.7085,0
0.85476,0.63167,0
0.62091,0.80424,0
0.79057,0.56108,0
0.58935,0.71582,0
0.56846,0.7406,0
0.65912,0.71548,0
0.70938,0.74041,0
0.59154,0.62927,0
0.45829,0.4641,0
0.79982,0.74847,0
0.60974,0.54757,0
0.68127,0.86985,0
0.76694,0.64736,0
0.69048,0.83058,0
0.68122,0.96541,0
0.73229,0.64245,0
0.76145,0.60138,0
0.58985,0.86955,0
0.73145,0.74516,0
0.77029,0.7014,0
0.73156,0.71782,0
0.44556,0.57991,0
0.85275,0.85987,0
0.51912,0.62359,0

Actor-Critic also seems to require two networks running side by side, which would need some changes to this structure; I will implement it when I have time. Judging from existing code, Actor-Critic essentially takes the derivative of Q(s, a) with respect to each action a, multiplies it by the output of the policy network π(s) = a, and backpropagates that to update the parameters of the π network; the Q(s, a) (critic) network is updated at the same time in the usual way.
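A minimal sketch of that gradient flow in PyTorch (hypothetical toy actor/critic networks, DDPG-style; only the actor update is shown, and the critic would be trained separately with an ordinary TD/regression loss):

import torch

# hypothetical actor and critic; illustrates dQ/da flowing back into the actor's parameters
actor = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1), torch.nn.Tanh())
critic = torch.nn.Sequential(torch.nn.Linear(5, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

state = torch.randn(8, 4)                       # dummy states
action = actor(state)                           # a = pi(s)
q = critic(torch.cat([state, action], dim=1))   # Q(s, a)

actor_opt.zero_grad()
(-q.mean()).backward()   # maximizing Q pushes dQ/da back through the actor parameters
actor_opt.step()
# note: this also leaves gradients in the critic; a real agent clears them before the critic's own update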

Momentum helps the optimizer escape local optima mainly because, in the natural case, the pits in the loss surface tend to be small and shallow; if you deliberately dug two large pits, momentum would certainly fail. Momentum can also slow down convergence.
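For reference, a bare momentum update looks like this (a sketch, not part of my implementation):

import numpy as np

def momentum_step(theta, g, velocity, lr=0.01, mu=0.9):
    # classic momentum: the velocity accumulates past gradients,
    # which can carry the parameters through small, shallow local dips
    velocity = mu * velocity - lr * g
    return theta + velocity, velocity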

My guess is that a standard neural-network node is implemented roughly as follows, with the Python lists replaced by something like a typed Vector3; forwardlist stores the outgoing connections:

class Node:
    def __init__(tid):
        self.mtop = 0
        self.forwardlist = Vector3[(id, w, delta)]
        self.backwardlist = Vector3[(int)]
        self.outputd = 0
        self.inputd = 0
        self.id = tid
Layer = Vector(Node)

Each layer is then just a list of Nodes, stored in a data structure similar to a C++ vector.

self.forwardlist holds triples: the id of each successor node, the weight w of that connection, and its derivative delta. self.backwardlist only needs to hold the ids of the nodes that connect into this one. self.mtop records, while computing derivatives in backward, how many of this node's connections the previous layer has already updated. I did not implement this version because I do not know Python's vector-like data structures well enough (a list does not seem efficient enough, since it can hold values of different types; a container restricted to one type, like C++'s vector, would be faster). Each time the next layer runs backward, the previous layer's self.mtop has to be reset to 0.
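A runnable sketch of that adjacency-list idea, with plain Python lists standing in for the typed vectors (field names follow the pseudocode above):

class Node:
    def __init__(self, tid):
        self.id = tid
        self.forwardlist = []   # list of [successor_id, w, delta] triples
        self.backwardlist = []  # ids of the nodes that feed into this node
        self.mtop = 0           # how many incoming connections have been processed in backward
        self.inputd = 0.0       # accumulated weighted input
        self.outputd = 0.0      # activation output

def connect(layer_a, layer_b):
    # fully connect two layers of Nodes, adjacency-list style
    for a in layer_a:
        for b in layer_b:
            a.forwardlist.append([b.id, 0.0, 0.0])
            b.backwardlist.append(a.id)

layer1 = [Node(i) for i in range(3)]
layer2 = [Node(i) for i in range(3, 5)]
connect(layer1, layer2)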

This design is rather like the adjacency-list representation in graph theory. The earlier code is more like an adjacency matrix: it should be very efficient for fully connected layers (just as an adjacency matrix is more efficient for a complete graph), but it only works for fully connected layers. Convolutional networks, or any network whose connectivity deviates even slightly from fully connected, have to use the adjacency-list-style implementation.

I later also used this network to train the fully connected layers on the features extracted for the cats-vs-dogs task, and the results were decent. The models used to extract the bottleneck features were VGG19, InceptionResNetV2, and Xception (with fine-tuned parameters).
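The extraction step itself is not shown here; my assumption is that it looks roughly like this Keras snippet for one of the models (the real project may have done it differently; the output file name matches what the code below loads):

import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

# hypothetical extraction step: run images through a headless pretrained model
# and save the pooled activations as "bottleneck features"
base = VGG19(weights='imagenet', include_top=False, pooling='avg')

images = np.random.rand(4, 224, 224, 3) * 255   # stand-in for the real training images
features = base.predict(preprocess_input(images))
np.save('bottleneck_features_train_vgg19.npy', features)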

import numpy as np
import pandas as pd
import copy
def tanh(x):
    return np.tanh(x)

def tanh_derivative(x):
    return 1.0 - x * x

def sigmoid(x):
    # clip to avoid overflow in np.exp for large-magnitude inputs
    return 1 / (1 + np.exp(-x.copy().clip(-20, 20)))

def sigmoid_derivative(x):
    return x * (1 - x)

def relu(x):
    return np.maximum(x, 0)

def relu_derivative(x):
    t = copy.copy(x)
    for i in range(len(t)):
        if t[i] <= 1e-12:
            t[i] = 0
        else:
            t[i] = 1
    return t

class ActivationFunc:
    def __init__(self):
        self.tdict = dict()
        self.tdict['tanh'] = np.tanh
        self.tdict['sigmoid'] = sigmoid
        self.tdict['relu'] = relu
        self.tdict['softmax'] = np.exp
        self.ddict = dict()
        self.ddict['tanh'] = tanh_derivative
        self.ddict['sigmoid'] = sigmoid_derivative
        self.ddict['relu'] = relu_derivative
        self.ddict['softmax'] = np.exp

    def getActivation(self, activation):
        if activation in self.tdict:
            return self.tdict[activation]
        else:
            return lambda x: x

    def getDActivation(self, activation):
        if activation in self.ddict:
            return self.ddict[activation]
        else:
            return lambda x: np.ones(x.shape)

# print(ActivationFunc().getActivation('logistic')(1.0))
#print(logistic_derivative(1.0))
class NNetwork:
    def __init__(self, inputsize, lr = 0.001, withbias = True, optimizer = 'adam'):
        self.para = []
        self.layerout = []
        self.grad = []
        self.backout = []
        self.activationclass = ActivationFunc()
        self.inputsize = inputsize
        self.lastsize = inputsize
        self.lr = lr
        self.layerlen = 0
        self.activation = []
        self.deactivation = []
        self.wbias = withbias
        self.outputfunc = 'softmax'
        self.maxnum = 0.001
        self.maxpara = 1
        self.bstep = 0
        self.belta1 = 0.7
        self.belta2 = 0.7
        self.alphat = 1.0
        self.Eg = None
        self.m = None
        if optimizer == 'adam':
            print('optimized with adam')
            self.stepfunc = self.adamstep
        else:
            print('optimized with std')
            self.stepfunc = self.stdstep
        # self.activation = ActivationFunc().getActivation(mactivation)

    def add(self, densesize, actstr):
        tsize = self.lastsize
        if self.wbias:
            tsize += 1
        # roughly Xavier-style uniform initialization
        self.para.append((np.random.rand(densesize, tsize) - 0.5) * 2 * np.sqrt(6 / (self.inputsize + 2)))
        # randn * np.power(2 / (self.inputsize + 2), 0.25)
        self.grad.append(np.zeros((densesize, tsize)))
        self.lastsize = densesize
        self.activation.append(self.activationclass.getActivation(actstr))
        self.deactivation.append(self.activationclass.getDActivation(actstr))
        self.layerlen += 1
        self.outputfunc = actstr

    def forward(self, input):
        self.layerout = []
        if self.wbias:
            self.layerout.append(np.append(np.array(input), 1))
        else:
            self.layerout.append(np.array(input))
        for i in range(self.layerlen):
            if self.wbias and i != self.layerlen - 1:
                self.layerout.append(np.append(self.activation[i](np.dot(self.para[i], self.layerout[-1].T)), 1))
            else:
                self.layerout.append(self.activation[i](np.dot(self.para[i], self.layerout[-1].T)))
        return self.layerout[-1]

    def backward(self, y, y_label):
        self.maxnum = 0.001
        self.bstep += 1
        if self.outputfunc == 'softmax':
            tsumy = sum(y)
            y[y_label] -= tsumy
            y /= max(tsumy, 1e-4)
        if self.outputfunc == 'sigmoid':
            if y_label == 1:
                # print(y)
                y -= 1
        # self.maxnum = max(self.maxnum, max(y))
        self.backout = []
        self.backout.append(np.matrix(y).T)
        for i in range(self.layerlen, 0, -1):
            self.grad[i - 1] += np.dot(self.backout[-1], np.matrix(self.layerout[i - 1]))
            self.maxnum = max(np.abs(self.grad[i - 1]).max().max(), self.maxnum)
            if i > 1:
                if self.wbias:
                    self.backout.append(np.multiply(self.deactivation[i - 2](self.layerout[i - 1]), np.dot(self.backout[-1].T, self.para[i - 1])).T[:-1, :])
                else:
                    self.backout.append(np.multiply(self.deactivation[i - 2](self.layerout[i - 1]), np.dot(self.backout[-1].T, self.para[i - 1])).T)
            else:
                self.backout.append(np.dot(self.backout[-1].T, self.para[i - 1]))

    def zero_grad(self):
        for obj in self.grad:
            obj.fill(0)
        self.maxnum = 0.001
        self.bstep = 0

    def step(self):
        self.stepfunc()

    def stdstep(self):
        tmaxpara = 0
        for obj1, obj2 in zip(self.para, self.grad):
            obj1 -= self.lr * obj2 * self.bstep  # / max(self.maxnum, 1e-4) * self.maxpara
            tmaxpara = max(tmaxpara, np.abs(obj1).max().max())
        self.maxpara = tmaxpara
        self.zero_grad()

    def adamstep(self):
        self.belta2 = min(0.9, self.belta2 * 1.01)
        self.belta1 = min(0.9, self.belta1 * 1.01)
        if self.Eg != None:
            self.Eg = (1 - self.belta2) * self.maxnum + self.belta2 * self.Eg
            for obj1, obj2 in zip(self.m, self.grad):
                obj1 = (1 - self.belta1) * obj2 + self.belta1 * obj1
        else:
            self.Eg = self.maxnum
            self.m = self.grad
        # if abs(self.Eg - self.maxnum) > 0.01 * self.maxnum:
        #     print(self.Eg, self.maxnum)
        te = self.Eg / (1 - np.power(self.belta2, self.alphat))
        tm = [obj / (1 - np.power(self.belta1, self.alphat)) for obj in self.m]
        for obj1, obj2 in zip(self.para, self.m):
            obj1 -= self.lr * obj2 / max(te, 1e-6) * self.bstep
        self.zero_grad()

    def predict(self, input):
        y = self.forward(input)
        if self.outputfunc == 'softmax':
            y /= np.sum(y)
        return y

# 2*x + y - 3
if __name__ == "__main__":
    # model = NNetwork(2, withbias = True, lr = 0.001, optimizer = 'std')
    # model.add(16, 'relu')
    # model.add(8, 'relu')
    # model.add(8, 'relu')
    # model.add(8, 'relu')
    # model.add(2, 'softmax')
    train_labels = np.load("y_train.npy")
    validation_labels = np.load("y_val.npy")
    # concatenate the bottleneck features of the three pretrained models
    for i, name in enumerate(['vgg19', 'Xception', 'InceptionResNetV2']):
        if i == 0:
            train_data = np.load('bottleneck_features_train_' + name + '.npy') / 255
            validation_data = np.load('bottleneck_features_validation_' + name + '.npy') / 255
        else:
            train_data = np.append(train_data, np.load('bottleneck_features_train_' + name + '.npy'), axis = 1)
            validation_data = np.append(validation_data, np.load('bottleneck_features_validation_' + name + '.npy'), axis = 1)
    tinputsize = train_data[0].shape[0]
    model = NNetwork(tinputsize, lr = 0.001, optimizer = 'std')
    model.add(256, 'relu')
    model.add(1, 'sigmoid')
    # print(model.outputfunc)
    epochs = 4
    maxcnt = 0
    finalModel = None
    for e in range(epochs):
        for i in range(len(train_data)):
            output = model.forward(train_data[i])
            ty = output.copy()
            model.backward(output, train_labels[i])
            # if train_labels[i] == 1 and not flag:
            #     flag = True
            #     print(train_data[i], ty, train_labels[i])
            #     # print(model.backout)
            if i % 20 == 10:
                model.step()
        msum = 0
        for i in range(len(validation_data)):
            tmp = int(model.predict(validation_data[i]) > 0.5)
            if tmp == validation_labels[i]:
                msum += 1
        if msum >= maxcnt:
            # keep the model with the best validation accuracy
            maxcnt = msum
            finalModel = copy.copy(model)
        print('epoch {} Accuracy {}%'.format(e + 1, msum / len(validation_data) * 100))
    namelist = ['vgg19', 'Xception', 'InceptionResNetV2']
    bottleneck_features_test = None
    for i, name in enumerate(namelist):
        if i == 0:
            bottleneck_features_test = np.load('bottleneck_features_test' + name + '.npy')
        else:
            bottleneck_features_test = np.append(bottleneck_features_test, np.load('bottleneck_features_test' + name + '.npy'), axis = 1)
    tmp = []
    for val in bottleneck_features_test:
        x = finalModel.predict(val)
        tmp.append(x)
    predictions = np.array(tmp)
    # clip predictions to avoid extreme log-loss penalties
    for i in range(len(predictions)):
        if predictions[i] < 0.005:
            predictions[i] = 0.005
        if predictions[i] > 0.995:
            predictions[i] = 0.995
    result_csv = pd.read_csv("sample_submission.csv")
    result_csv['label'] = predictions
    test_result_name = "catvsdog.csv"
    result_csv.to_csv(test_result_name, index = False)
    # print(tmp[:10])
    # res1 = np.array(pres)
    # res2 = np.array(y_test)
    # print(res1)
    # print(res2)
    '''
    X = [[0,0],[0,1],[1,0],[1,1]]
    y = [0, 1, 1, 0]
    for i in range(200000):
        tid = i % 4
        # model.zero_grad()
        output = model.forward(X[tid])
        model.backward(output, y[tid])
        if tid == 3:
            model.step()
    print(model.predict([1,1]))
    print(model.predict([0,1]))
    print(model.predict([0,0]))
    print(model.predict([1,0]))
    '''

The best result is shown in the figure below (figure not reproduced here):

Later I realized that a convolutional network could be implemented with NumPy slicing, storing each layer's output as 2-D arrays, but testing it is somewhat tricky, so I will write it when I have time.
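A tiny sketch of that slicing idea for a single-channel convolution (valid padding, stride 1, forward pass only):

import numpy as np

def conv2d_valid(x, k):
    # naive 2-D convolution (really cross-correlation) using numpy slicing
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

print(conv2d_valid(np.arange(16).reshape(4, 4).astype(float), np.ones((2, 2))))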
