StackedGAN Explained and Implemented (with TensorFlow 2.3)

  • StackedGAN Principle
  • StackedGAN Implementation
    • Encoder
    • Adversarial Networks
      • Discriminator
      • Generator
      • Model Building
    • Model Training
    • Results

StackedGAN Principle

StackedGAN proposes a method for disentangling the latent representations that condition the generator output. Unlike InfoGAN, which learns how to condition the noise to produce the desired output, StackedGAN breaks a GAN into a stack of GANs. Each GAN is trained independently in the usual adversarial manner, with its own latent code.
The encoder network consists of a stack of simple encoders, $Encoder_i$, where $i = 0, \dots, n-1$, corresponding to $n$ features. Each encoder extracts certain features. For example, $Encoder_0$ could be the encoder for the hairstyle feature, $Feature_1$. All the simple encoders together enable the whole encoder to make correct predictions.
The idea behind StackedGAN is that, to build a GAN that generates fake celebrity faces, we should simply invert the encoder. StackedGAN consists of a stack of simpler GANs, $GAN_i$, where $i = 0, \dots, n-1$, corresponding to $n$ features. Each $GAN_i$ learns to invert the process of its corresponding encoder, $Encoder_i$. For example, $GAN_0$ generates fake celebrity faces from fake hairstyle features, which is the inverse of the $Encoder_0$ process.
Each $GAN_i$ uses a latent code, $z_i$, to condition its generator output. For example, the latent code $z_0$ can modify the hairstyle. The stack of GANs can then act as a whole to synthesize fake celebrity faces, completing the inverse process of the entire encoder, and the latent code $z_i$ of each $GAN_i$ can be used to change a specific attribute of the fake face.
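The two-stage sampling described above can be sketched with toy stand-in generators (a minimal NumPy sketch; the linear maps `W1` and `W0` are hypothetical placeholders for the trained gen1 and gen0 models built later in this article):

```python
import numpy as np

rng = np.random.default_rng(0)
n_labels, z_dim, feat_dim, img_dim = 10, 50, 256, 784

# hypothetical stand-ins for the trained generators:
# gen1: (label, z1) -> feature1, gen0: (feature1, z0) -> image
W1 = rng.normal(size=(n_labels + z_dim, feat_dim))
W0 = rng.normal(size=(feat_dim + z_dim, img_dim))

def gen1(label_onehot, z1):
    # stage 1: label + z1 -> intermediate feature (ReLU, so non-negative)
    return np.maximum(np.concatenate([label_onehot, z1]) @ W1, 0)

def gen0(feature1, z0):
    # stage 0: feature1 + z0 -> fake image in [-1, 1]
    return np.tanh(np.concatenate([feature1, z0]) @ W0)

label = np.eye(n_labels)[7]                 # condition: digit "7"
z1 = rng.normal(scale=0.5, size=z_dim)      # modifies higher-level attributes
z0 = rng.normal(scale=0.5, size=z_dim)      # modifies lower-level attributes
fake_feature1 = gen1(label, z1)             # 256-dim fake feature
fake_image = gen0(fake_feature1, z0)        # 784-dim fake image
print(fake_image.shape)
```

The point of the sketch is only the data flow: the label and $z_1$ determine the intermediate feature, and the feature and $z_0$ determine the image, so each code conditions one stage independently.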

StackedGAN Implementation

The detailed network model of StackedGAN, illustrated here with a stack of two encoder-GAN pairs.

StackedGAN consists of a stack of encoders and GANs. The encoder is pre-trained to perform classification. $Generator_1$ learns to synthesize the feature $f_{1f}$ conditioned on the fake label $y_f$ and the latent code $z_{1f}$. $Generator_0$ produces a fake image using both the fake feature $f_{1f}$ and the latent code $z_{0f}$.
StackedGAN starts with the encoder. It can be a trained classifier that predicts the correct labels; the intermediate feature vector $f_{1r}$ is made available for GAN training. For MNIST, a CNN-based classifier can be used.
A Dense layer extracts the 256-dim feature. There are two output models, $Encoder_0$ and $Encoder_1$, and both will be used in training the StackedGAN.

Encoder

```python
# imports used throughout this implementation (assuming TensorFlow 2.x)
import os
import math
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras


def build_encoder(inputs, num_labels=10, feature1_dim=256):
    """The Encoder model, made of two sub-networks:
    Encoder0: image to feature1
    Encoder1: feature1 to labels

    # Arguments
        inputs (layers): x - images, feature1 - feature1 layer output
        num_labels (int): number of class labels
        feature1_dim (int): feature1 dimensionality
    # Returns
        enc0, enc1 (models): description below
    """
    kernel_size = 3
    filters = 64
    x, feature1 = inputs
    # Encoder0 or enc0
    y = keras.layers.Conv2D(filters=filters,
                            kernel_size=kernel_size,
                            padding='same',
                            activation='relu')(x)
    y = keras.layers.MaxPooling2D()(y)
    y = keras.layers.Conv2D(filters=filters,
                            kernel_size=kernel_size,
                            padding='same',
                            activation='relu')(y)
    y = keras.layers.MaxPooling2D()(y)
    y = keras.layers.Flatten()(y)
    feature1_output = keras.layers.Dense(feature1_dim, activation='relu')(y)
    # Encoder0 or enc0: image (x or feature0) to feature1
    enc0 = keras.Model(inputs=x, outputs=feature1_output, name='encoder0')

    # Encoder1 or enc1
    y = keras.layers.Dense(num_labels)(feature1)
    labels = keras.layers.Activation('softmax')(y)
    # Encoder1 or enc1: feature1 to class labels (feature2)
    enc1 = keras.Model(inputs=feature1, outputs=labels, name='encoder1')

    # return both enc0 and enc1
    return enc0, enc1
```

$Encoder_0$'s output, $f_{1r}$, is the 256-dim feature vector that we want $Generator_1$ to learn to synthesize; it is available as an auxiliary output of $Encoder_0$. The whole encoder is trained to classify MNIST digits, $x_r$. The correct labels, $y_r$, are predicted by $Encoder_1$. In the process, the intermediate set of features, $f_{1r}$, is learned and made available for $Generator_0$ training. The subscript $r$ is used to emphasize and distinguish real data from fake data when the GANs are trained against this encoder.
Given that the encoder input is $x_r$ and its outputs are the intermediate feature $f_{1r}$ and the label $y_r$, each GAN is trained in the usual discriminator-adversarial manner.

Adversarial Networks

Loss functions:

Discriminator:

$$\mathcal{L}_i^{(D)} = -\mathbb{E}_{f_i \sim p_{data}} \log D(f_i) - \mathbb{E}_{f_{i+1} \sim p_{data},\, z_i} \log\left[1 - D(G(f_{i+1}, z_i))\right]$$

Generator:

$$\mathcal{L}_i^{(G)adv} = -\mathbb{E}_{f_{i+1} \sim p_{data},\, z_i} \log D(G(f_{i+1}, z_i))$$

$$\mathcal{L}_i^{(G)cond} = \left\| E_i(G(f_{i+1}, z_i)),\, f_i \right\|_2$$

$$\mathcal{L}_i^{(G)ent} = \left\| Q_i(G(f_{i+1}, z_i)),\, z_i \right\|_2$$

$$\mathcal{L}_i^{(G)} = \lambda_1 \mathcal{L}_i^{(G)adv} + \lambda_2 \mathcal{L}_i^{(G)cond} + \lambda_3 \mathcal{L}_i^{(G)ent}$$
The conditional loss function $\mathcal{L}_i^{(G)cond}$ ensures that the generator does not ignore the input $f_{i+1}$ when synthesizing the output $f_i$ from the input noise code $z_i$: the encoder $Encoder_i$ must be able to recover the generator input by inverting the $Generator_i$ process. The difference between the generator input and the input recovered by the encoder is measured by the Euclidean distance, i.e. the mean squared error (MSE).
However, the conditional loss function introduces a new problem: the generator may ignore the input noise code $z_i$ and rely solely on $f_{i+1}$. The entropy loss function $\mathcal{L}_i^{(G)ent}$ ensures that the generator does not ignore $z_i$. A Q network recovers the noise vector from the generator output, and the difference between the recovered and input noise is also measured by the Euclidean distance (MSE).
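To make the three generator loss terms concrete, they can be evaluated on toy tensors with NumPy (a minimal sketch; the random arrays stand in for the outputs of $D$, $E_i$, and $Q_i$ on the generator output, and the equal λ weights are just an example):

```python
import numpy as np

rng = np.random.default_rng(1)
batch, z_dim, feat_dim = 8, 50, 256

# placeholder network outputs (random stand-ins, not real model outputs)
d_fake = rng.uniform(0.01, 0.99, size=(batch, 1))             # D(G(f_{i+1}, z_i))
f_i = rng.normal(size=(batch, feat_dim))                      # target feature f_i
f_i_recovered = f_i + rng.normal(scale=0.1, size=f_i.shape)   # E_i(G(f_{i+1}, z_i))
z_i = rng.normal(scale=0.5, size=(batch, z_dim))              # input noise code
z_i_recovered = z_i + rng.normal(scale=0.1, size=z_i.shape)   # Q_i(G(f_{i+1}, z_i))

# adversarial loss: -E[log D(G(f_{i+1}, z_i))]
l_adv = -np.mean(np.log(d_fake))
# conditional loss: MSE between the feature recovered by E_i and f_i
l_cond = np.mean((f_i_recovered - f_i) ** 2)
# entropy loss: MSE between the noise recovered by Q_i and z_i
l_ent = np.mean((z_i_recovered - z_i) ** 2)

lam1 = lam2 = lam3 = 1.0          # example weights
l_gen = lam1 * l_adv + lam2 * l_cond + lam3 * l_ent
print(l_adv, l_cond, l_ent, l_gen)
```

In the Keras code below, Keras computes these same terms for us: `binary_crossentropy` on the discriminator head plays the role of the adversarial term, and the two `mse` losses are the conditional and entropy terms, combined via `loss_weights`.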

Discriminator

The following functions build $Discriminator_0$ and $Discriminator_1$. Except for the latent-code input $z_0$ and the auxiliary network $Q_0$ that reconstructs it, the dis0 discriminator is similar to an ordinary GAN discriminator. Build dis0:

```python
def discriminator(inputs, activation='sigmoid', num_codes=None):
    """Discriminator model

    # Arguments
        inputs (Layer): input layer of the discriminator
        activation (string): name of output activation layer
        num_codes (int): num_codes-dim Q network output for StackedGAN
    # Returns
        Model: discriminator model
    """
    kernel_size = 5
    layer_filters = [32, 64, 128, 256]
    x = inputs
    for filters in layer_filters:
        # the last convolution layer uses strides = 1, the others strides = 2
        if filters == layer_filters[-1]:
            strides = 1
        else:
            strides = 2
        x = keras.layers.LeakyReLU(0.2)(x)
        x = keras.layers.Conv2D(filters=filters,
                                kernel_size=kernel_size,
                                strides=strides,
                                padding='same')(x)
    x = keras.layers.Flatten()(x)
    outputs = keras.layers.Dense(1)(x)
    if activation is not None:
        outputs = keras.layers.Activation(activation)(outputs)
    # StackedGAN Q0 output:
    # z0_recon is the reconstruction of the z0 normal distribution
    z0_recon = keras.layers.Dense(num_codes)(x)
    z0_recon = keras.layers.Activation('tanh', name='z0')(z0_recon)
    outputs = [outputs, z0_recon]
    return keras.Model(inputs, outputs, name='discriminator')
```

The dis1 discriminator is a three-layer MLP. The final layer discriminates real from fake feature1 vectors. The Q1 network shares the first two layers of dis1, and its third layer reconstructs $z_1$.

```python
def build_disciminator(inputs, z_dim=50):
    """Discriminator 1 model:
    classifies feature1 as real/fake and recovers the input noise (latent code)

    # Arguments
        inputs (layer): feature1
        z_dim (int): noise dimensionality
    # Returns
        dis1 (Model): feature1 as real/fake and recovered latent code
    """
    # input is the 256-dim feature1
    x = keras.layers.Dense(256, activation='relu')(inputs)
    x = keras.layers.Dense(256, activation='relu')(x)
    # first output is the probability that feature1 is real
    f1_source = keras.layers.Dense(1)(x)
    f1_source = keras.layers.Activation('sigmoid',
                                        name='feature1_source')(f1_source)
    # z1 reconstruction (Q1 network)
    z1_recon = keras.layers.Dense(z_dim)(x)
    z1_recon = keras.layers.Activation('tanh', name='z1')(z1_recon)
    discriminator_outputs = [f1_source, z1_recon]
    dis1 = keras.Model(inputs, discriminator_outputs, name='dis1')
    return dis1
```

Generator

The gen1 generator consists of three Dense layers, taking the label and the noise code $z_{1f}$ as inputs. The third layer generates the fake feature, $f_{1f}$.

```python
def build_generator(latent_codes, image_size, feature1_dim=256):
    """Build the generator model sub-networks:
    gen1: class and noise to feature1
    gen0: feature1 to image

    # Arguments
        latent_codes (layers): discrete code (labels), noise, and feature1 features
        image_size (int): target size of one side
        feature1_dim (int): feature1 dimensionality
    # Returns
        gen0, gen1 (models)
    """
    # latent codes and network parameters
    labels, z0, z1, feature1 = latent_codes

    # gen1 inputs
    inputs = [labels, z1]    # 10 + 50 = 60-dim
    x = keras.layers.concatenate(inputs, axis=1)
    x = keras.layers.Dense(512, activation='relu')(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.Dense(512, activation='relu')(x)
    x = keras.layers.BatchNormalization()(x)
    fake_feature1 = keras.layers.Dense(feature1_dim, activation='relu')(x)
    # gen1: classes and noise (feature2 + z1) to feature1
    gen1 = keras.Model(inputs, fake_feature1, name='gen1')

    # gen0: feature1 + z0 to feature0 (image)
    gen0 = generator(feature1, image_size, codes=z0)
    return gen0, gen1
```

The gen0 generator is similar to other GAN generators:

```python
def generator(inputs, image_size, activation='sigmoid', codes=None):
    """Generator model

    # Arguments
        inputs (layer): input layer of the generator
        image_size (int): target size of one side
        activation (string): name of output activation layer
        codes (tensor): disentangled code (z0) for StackedGAN
    # Returns
        Model: generator model
    """
    image_resize = image_size // 4
    kernel_size = 5
    layer_filters = [128, 64, 32, 1]
    # generator 0 of StackedGAN
    inputs = [inputs, codes]
    x = keras.layers.concatenate(inputs, axis=1)
    x = keras.layers.Dense(image_resize * image_resize * layer_filters[0])(x)
    x = keras.layers.Reshape((image_resize, image_resize, layer_filters[0]))(x)
    for filters in layer_filters:
        # the first two transposed convolutions use strides = 2,
        # the last two use strides = 1
        if filters > layer_filters[-2]:
            strides = 2
        else:
            strides = 1
        x = keras.layers.BatchNormalization()(x)
        x = keras.layers.Activation('relu')(x)
        x = keras.layers.Conv2DTranspose(filters=filters,
                                         kernel_size=kernel_size,
                                         strides=strides,
                                         padding='same')(x)
    if activation is not None:
        x = keras.layers.Activation(activation)(x)
    return keras.Model(inputs, x, name='generator')
```

Model Building

```python
def build_and_train_models():
    """Build the StackedGAN"""
    # load data
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    image_size = x_train.shape[1]
    x_train = np.reshape(x_train, [-1, image_size, image_size, 1])
    x_train = x_train.astype('float32') / 255.
    x_test = np.reshape(x_test, [-1, image_size, image_size, 1])
    x_test = x_test.astype('float32') / 255.
    num_labels = len(np.unique(y_train))
    y_train = keras.utils.to_categorical(y_train)
    y_test = keras.utils.to_categorical(y_test)

    # hyperparameters
    model_name = 'stackedGAN_mnist'
    batch_size = 64
    train_steps = 40000
    lr = 2e-4
    decay = 6e-8
    input_shape = (image_size, image_size, 1)
    label_shape = (num_labels,)
    z_dim = 50
    z_shape = (z_dim,)
    feature1_dim = 256
    feature1_shape = (feature1_dim,)

    # discriminator 0 and Q network 0 models
    inputs = keras.layers.Input(shape=input_shape, name='discriminator0_input')
    dis0 = discriminator(inputs, num_codes=z_dim)
    optimizer = keras.optimizers.RMSprop(lr=lr, decay=decay)
    # loss functions: 1) probability that the image is real
    #                 2) MSE z0 reconstruction loss
    loss = ['binary_crossentropy', 'mse']
    loss_weights = [1.0, 10.0]
    dis0.compile(loss=loss,
                 loss_weights=loss_weights,
                 optimizer=optimizer,
                 metrics=['accuracy'])
    dis0.summary()

    # discriminator 1 and Q network 1 models
    input_shape = (feature1_dim,)
    inputs = keras.layers.Input(shape=input_shape, name='discriminator1_input')
    dis1 = build_disciminator(inputs, z_dim=z_dim)
    # loss functions: 1) probability that feature1 is real (adversarial1 loss)
    #                 2) MSE z1 reconstruction loss (Q1 network or entropy1 loss)
    loss = ['binary_crossentropy', 'mse']
    loss_weights = [1.0, 1.0]
    dis1.compile(loss=loss,
                 loss_weights=loss_weights,
                 optimizer=optimizer,
                 metrics=['acc'])
    dis1.summary()

    # generator models
    feature1 = keras.layers.Input(shape=feature1_shape, name='feature1_input')
    labels = keras.layers.Input(shape=label_shape, name='labels')
    z1 = keras.layers.Input(shape=z_shape, name='z1_input')
    z0 = keras.layers.Input(shape=z_shape, name='z0_input')
    latent_codes = (labels, z0, z1, feature1)
    gen0, gen1 = build_generator(latent_codes, image_size)
    gen0.summary()
    gen1.summary()

    # encoder models
    input_shape = (image_size, image_size, 1)
    inputs = keras.layers.Input(shape=input_shape, name='encoder_input')
    enc0, enc1 = build_encoder((inputs, feature1), num_labels)
    enc0.summary()
    enc1.summary()
    encoder = keras.Model(inputs, enc1(enc0(inputs)))
    encoder.summary()

    data = (x_train, y_train), (x_test, y_test)
    # the encoder must be fully trained before training the adversarial networks
    train_encoder(encoder, data, model_name=model_name)

    # adversarial0 model = generator0 + discriminator0 + encoder0
    optimizer = keras.optimizers.RMSprop(lr=lr * 0.5, decay=decay * 0.5)
    enc0.trainable = False
    dis0.trainable = False
    gen0_inputs = [feature1, z0]
    gen0_outputs = gen0(gen0_inputs)
    adv0_outputs = dis0(gen0_outputs) + [enc0(gen0_outputs)]
    adv0 = keras.Model(gen0_inputs, adv0_outputs, name='adv0')
    # loss functions: 1) probability that feature1 is real
    #                 2) Q network 0 loss
    #                 3) conditional0 loss
    loss = ['binary_crossentropy', 'mse', 'mse']
    loss_weights = [1.0, 10.0, 1.0]
    adv0.compile(loss=loss,
                 loss_weights=loss_weights,
                 optimizer=optimizer,
                 metrics=['acc'])
    adv0.summary()

    # adversarial1 model = generator1 + discriminator1 + encoder1
    enc1.trainable = False
    dis1.trainable = False
    gen1_inputs = [labels, z1]
    gen1_outputs = gen1(gen1_inputs)
    adv1_outputs = dis1(gen1_outputs) + [enc1(gen1_outputs)]
    adv1 = keras.Model(gen1_inputs, adv1_outputs, name='adv1')
    # loss functions: 1) probability that the label is real
    #                 2) Q network 1 loss
    #                 3) conditional1 loss
    loss = ['binary_crossentropy', 'mse', 'categorical_crossentropy']
    loss_weights = [1.0, 1.0, 1.0]
    adv1.compile(loss=loss,
                 loss_weights=loss_weights,
                 optimizer=optimizer,
                 metrics=['acc'])
    adv1.summary()

    models = (enc0, enc1, gen0, gen1, dis0, dis1, adv0, adv1)
    params = (batch_size, train_steps, num_labels, z_dim, model_name)
    train(models, data, params)
```

Model Training

```python
# the encoder network must be fully trained before training the adversarial networks
def train_encoder(model, data, model_name='stackedgan_mnist', batch_size=64):
    """Train the Encoder model

    # Arguments
        model (Model): Encoder
        data (tensor): train and test data
        model_name (string): model name
        batch_size (int): train batch size
    """
    (x_train, y_train), (x_test, y_test) = data
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['acc'])
    model.fit(x_train,
              y_train,
              validation_data=(x_test, y_test),
              epochs=20,
              batch_size=batch_size)
    model.save(model_name + '-encoder.h5')
    score = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=0)
    print("\nTest accuracy: %.1f%%" % (100.0 * score[1]))
```

The training sequence is:
1. $Discriminator_1$ and $Q_1$ networks
2. $Discriminator_0$ and $Q_0$ networks
3. $Adversarial_1$ network
4. $Adversarial_0$ network
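Before reading the full training loop, it helps to isolate the real/fake target construction it uses in both discriminator steps (a NumPy-only sketch of the same pattern):

```python
import numpy as np

batch_size = 64
# each discriminator batch stacks real samples on top of fake samples,
# so the first half is labeled real (1) and the second half fake (0)
y = np.ones([2 * batch_size, 1])
y[batch_size:, :] = 0.0
print(y[:batch_size].mean(), y[batch_size:].mean())
```

The same `y` of ones (without the zeroed half) is later reused in the adversarial steps, where the generator's fake outputs are deliberately labeled real.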

```python
def train(models, data, params):
    """Train the networks

    # Arguments
        models (models): encoder, generator, discriminator, adversarial models
        data (tuple): x_train, y_train
        params (tuple): network parameters
    """
    enc0, enc1, gen0, gen1, dis0, dis1, adv0, adv1 = models
    batch_size, train_steps, num_labels, z_dim, model_name = params
    (x_train, y_train), _ = data
    save_interval = 500
    # fixed inputs for the periodically plotted sample images
    z0 = np.random.normal(scale=0.5, size=[16, z_dim])
    z1 = np.random.normal(scale=0.5, size=[16, z_dim])
    noise_class = np.eye(num_labels)[np.arange(0, 16) % num_labels]
    noise_params = [noise_class, z0, z1]
    train_size = x_train.shape[0]
    print(model_name,
          'labels for generated images: ',
          np.argmax(noise_class, axis=1))

    for i in range(train_steps):
        # sample a batch of real images
        rand_indexes = np.random.randint(0, train_size, size=batch_size)
        real_images = x_train[rand_indexes]
        # real feature1 from the encoder0 output
        real_feature1 = enc0.predict(real_images)
        # generate a random 50-dim z1 latent code
        real_z1 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
        # real labels
        real_labels = y_train[rand_indexes]
        # generate fake feature1 using generator1 from
        # real labels and a 50-dim z1 latent code
        fake_z1 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
        fake_feature1 = gen1.predict([real_labels, fake_z1])
        # real + fake data
        feature1 = np.concatenate((real_feature1, fake_feature1))
        z1 = np.concatenate((real_z1, fake_z1))
        # label the 1st half as real and the 2nd half as fake
        y = np.ones([2 * batch_size, 1])
        y[batch_size:, :] = 0
        # train discriminator1 to classify feature1 as
        # real/fake and recover the latent code (z1)
        metrics = dis1.train_on_batch(feature1, [y, z1])
        log = "%d: [dis1_loss: %f]" % (i, metrics[0])

        # train discriminator0 on one batch of real and fake images
        real_z0 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
        fake_z0 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
        fake_images = gen0.predict([real_feature1, fake_z0])
        # real + fake data
        x = np.concatenate((real_images, fake_images))
        z0 = np.concatenate((real_z0, fake_z0))
        # train discriminator0 to classify images as
        # real/fake and recover the latent code (z0)
        metrics = dis0.train_on_batch(x, [y, z0])
        log = "%s [dis0_loss: %f]" % (log, metrics[0])

        # adversarial training
        # generate fake z1 and use the real labels
        fake_z1 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
        # input to generator1 is sampled from real labels
        # and a 50-dim z1 latent code
        gen1_inputs = [real_labels, fake_z1]
        # label the fake outputs as real
        y = np.ones([batch_size, 1])
        # train generator1 (through adversarial1)
        metrics = adv1.train_on_batch(gen1_inputs, [y, fake_z1, real_labels])
        fmt = "%s [adv1_loss: %f, enc1_acc: %f]"
        log = fmt % (log, metrics[0], metrics[6])

        # input to generator0 is real feature1 and a 50-dim z0 latent code
        fake_z0 = np.random.normal(scale=0.5, size=[batch_size, z_dim])
        gen0_inputs = [real_feature1, fake_z0]
        # train generator0 (through adversarial0)
        metrics = adv0.train_on_batch(gen0_inputs, [y, fake_z0, real_feature1])
        log = "%s [adv0_loss: %f]" % (log, metrics[0])

        print(log)
        if (i + 1) % save_interval == 0:
            generators = (gen0, gen1)
            plot_images(generators,
                        noise_params=noise_params,
                        show=False,
                        step=(i + 1),
                        model_name=model_name)

    gen1.save(model_name + '-gen1.h5')
    gen0.save(model_name + '-gen0.h5')
```

Results

```python
# plot the generated images
def plot_images(generators,
                noise_params,
                show=False,
                step=0,
                model_name='gan'):
    """Generate fake images and plot them

    # Arguments
        generators (models): gen0 and gen1 models for fake image generation
        noise_params (list): noise parameters (label, z0, and z1 codes)
        show (bool): whether to show the plot or not
        step (int): appended to the filename of the saved images
        model_name (string): model name
    """
    gen0, gen1 = generators
    noise_class, z0, z1 = noise_params
    os.makedirs(model_name, exist_ok=True)
    filename = os.path.join(model_name, '%05d.png' % step)
    feature1 = gen1.predict([noise_class, z1])
    images = gen0.predict([feature1, z0])
    print(model_name,
          'labels for generated images: ',
          np.argmax(noise_class, axis=1))
    plt.figure(figsize=(2.2, 2.2))
    num_images = images.shape[0]
    image_size = images.shape[1]
    rows = int(math.sqrt(noise_class.shape[0]))
    for i in range(num_images):
        plt.subplot(rows, rows, i + 1)
        image = np.reshape(images[i], [image_size, image_size])
        plt.imshow(image, cmap='gray')
        plt.axis('off')
    plt.savefig(filename)
    if show:
        plt.show()
    else:
        plt.close('all')


if __name__ == '__main__':
    build_and_train_models()
```
Generated images at step = 10000.

The disentangled code modifying the writing angle.
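Sweeps like the one pictured above are produced by holding the label, $z_1$, and all but one dimension of $z_0$ fixed while varying a single code dimension. A minimal sketch of building such an input batch (NumPy only; feeding it to the trained gen1/gen0 models is assumed, and the swept dimension is an arbitrary choice):

```python
import numpy as np

z_dim, n_steps, n_labels = 50, 8, 10
rng = np.random.default_rng(2)

# fixed inputs: one label and one base z0/z1 draw, repeated n_steps times
label = np.tile(np.eye(n_labels)[5], (n_steps, 1))              # digit "5"
z1 = np.tile(rng.normal(scale=0.5, size=z_dim), (n_steps, 1))
z0 = np.tile(rng.normal(scale=0.5, size=z_dim), (n_steps, 1))

# sweep only dimension 0 of z0 from -2 to +2 across the batch
z0[:, 0] = np.linspace(-2.0, 2.0, n_steps)

print(label.shape, z1.shape, z0.shape)
# with the trained models, this batch would be rendered as:
# feature1 = gen1.predict([label, z1]); images = gen0.predict([feature1, z0])
```

If the GAN has disentangled $z_0$, the resulting strip of images changes only one attribute (e.g. writing angle or thickness) while the digit identity stays fixed.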
