The DCGAN network structure:

The basic idea is to train two networks, a discriminator and a generator. Think of them as a cop and a counterfeiter: the cop's job is to tell real from fake, and the counterfeiter's job is to forge things well enough that the cop cannot tell the difference. By competing against each other, both keep improving at their craft. Their adversarial game can be expressed by the following formula:
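Written out, this is the standard GAN minimax objective, where x is a real sample and z is the noise vector fed to the generator:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]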

The formula says: first fix the generator and maximize over the discriminator, i.e. make the cop as sharp as possible, pushing D(x) toward 1 and D(G(z)) toward 0, so it becomes very good at telling real from fake. Then fix the discriminator and minimize over the generator, which means minimizing log(1 - D(G(z))), i.e. pushing D(G(z)) up toward 1 so the discriminator mistakes generated samples for real ones. Alternating these two steps eventually makes real and fake hard to tell apart.

First, let's look at the training results:

Dataset download:

Link: https://pan.baidu.com/s/1kLnwLFzGUQIdFkhCum7B5Q
Extraction code: 6xim

DCGAN differs from the vanilla GAN in the following ways:

1. Fully connected layers are removed; the network is fully convolutional.

2. The discriminator's output is passed through a sigmoid, while the generator's output layer uses tanh.

3. Batch normalization is not applied to the discriminator's input or to the generator's output. BN forces activations into a fixed distribution, which hurts the discriminator when real and generated images arrive at its input; and the generator's output may already be close to the real data distribution, so normalizing it would distort it. Applying BN to every layer also tends to make training oscillate and become unstable.

4. The discriminator's hidden layers use LeakyReLU as the activation; the generator's hidden layers use ReLU (see the sketch after this list).
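To make points 1-4 concrete, here is a minimal sketch of the layer pattern, not the full model used later, just one discriminator block and one generator block written with tf.layers:

    import tensorflow as tf

    def d_block(x, filters, use_bn=True, alpha=0.2):
        # Discriminator block: strided 5x5 conv (no pooling), optional BN, LeakyReLU.
        # Pass use_bn=False for the first block, since BN is not applied to the discriminator's input.
        y = tf.layers.conv2d(x, filters, 5, strides=2, padding='same')
        if use_bn:
            y = tf.layers.batch_normalization(y)
        return tf.nn.leaky_relu(y, alpha)

    def g_block(x, filters, last=False):
        # Generator block: strided 5x5 transposed conv; BN + ReLU in hidden layers,
        # tanh and no BN on the output layer (last=True).
        y = tf.layers.conv2d_transpose(x, filters, 5, strides=2, padding='same')
        if last:
            return tf.nn.tanh(y)
        return tf.nn.relu(tf.layers.batch_normalization(y))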

Training notes:

1. The BN layers' parameters (scale and offset) must also be trained; if they are frozen, you are effectively forcing the data into a hand-picked distribution, which can deviate a lot from the real one. So include the BN parameters in the learnable variables (a quick check follows this list).

2. Use a slightly larger receptive field, for example 5x5 convolution kernels. Generation and segmentation tasks need neighboring pixels to blend; if the kernel is too small, the blending is insufficient.
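For note 1, a quick way to confirm that the scale/offset (gamma/beta) created by tf.layers.batch_normalization are trainable by default, and therefore picked up by the optimizers below, is:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 24, 24, 64])
    with tf.variable_scope('dnet'):
        y = tf.layers.batch_normalization(x, name='bn_check')

    # gamma and beta are created with trainable=True, so they appear here
    # (moving_mean / moving_variance are not trainable and do not):
    print([v.name for v in tf.trainable_variables() if 'bn_check' in v.name])
    # expected: ['dnet/bn_check/gamma:0', 'dnet/bn_check/beta:0']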

That covers the theory; now let's see how to implement it.

First, build the discriminator as a class:

class Dnet:
    def __init__(self):
        with tf.variable_scope('dnet'):
            self.w1 = tf.Variable(tf.random_normal(shape=[5, 5, 3, 64], dtype=tf.float32, stddev=0.02))
            self.b1 = tf.Variable(tf.zeros(shape=[64], dtype=tf.float32))
            self.w2 = tf.Variable(tf.random_normal(shape=[5, 5, 64, 128], dtype=tf.float32, stddev=0.02))
            self.b2 = tf.Variable(tf.zeros(shape=[128], dtype=tf.float32))
            self.w3 = tf.Variable(tf.random_normal(shape=[5, 5, 128, 256], dtype=tf.float32, stddev=0.02))
            self.b3 = tf.Variable(tf.zeros(shape=[256], dtype=tf.float32))
            self.w4 = tf.Variable(tf.random_normal(shape=[5, 5, 256, 512], dtype=tf.float32, stddev=0.02))
            self.b4 = tf.Variable(tf.zeros(shape=[512], dtype=tf.float32))
            self.w5 = tf.Variable(tf.random_normal(shape=[6 * 6 * 512, 1], dtype=tf.float32, stddev=0.02))
            self.b5 = tf.Variable(tf.zeros(shape=[1], dtype=tf.float32))

    def forward(self, x, reuse=False):
        with tf.variable_scope('dnet', reuse=reuse):
            y1 = self.leaky_relu(tf.nn.conv2d(x, self.w1, strides=[1, 2, 2, 1], padding='SAME') + self.b1)   # 96 -> 48, no BN on the input layer
            y2 = self.leaky_relu(tf.layers.batch_normalization(
                tf.nn.conv2d(y1, self.w2, strides=[1, 2, 2, 1], padding='SAME') + self.b2, name='bn1'))      # 48 -> 24
            y3 = self.leaky_relu(tf.layers.batch_normalization(
                tf.nn.conv2d(y2, self.w3, strides=[1, 2, 2, 1], padding='SAME') + self.b3, name='bn2'))      # 24 -> 12
            y4 = self.leaky_relu(tf.layers.batch_normalization(
                tf.nn.conv2d(y3, self.w4, strides=[1, 2, 2, 1], padding='SAME') + self.b4, name='bn3'))      # 12 -> 6
            y4 = tf.reshape(y4, [-1, 6 * 6 * 512])
            y5 = tf.matmul(y4, self.w5) + self.b5   # return logits; the sigmoid is applied inside sigmoid_cross_entropy_with_logits in the loss
            return y5

    def leaky_relu(self, x):
        return tf.maximum(0.2 * x, x)

    def getParam(self):
        params = tf.trainable_variables()
        return [i for i in params if 'dnet' in i.name]
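As a quick sanity check (a sketch, assuming the class above is defined and inputs are 96x96x3 images), the discriminator maps a batch of images to one logit per image:

    import numpy as np
    import tensorflow as tf

    d = Dnet()
    x = tf.placeholder(tf.float32, [None, 96, 96, 3])
    d_logits = d.forward(x)            # shape (batch, 1)
    d_prob = tf.nn.sigmoid(d_logits)   # probability that each image is real

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        fake_batch = np.random.uniform(-1, 1, (4, 96, 96, 3)).astype(np.float32)
        print(sess.run(d_prob, feed_dict={x: fake_batch}).shape)   # (4, 1)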

Next, build the generator, also as a class:

class Gnet:
    def __init__(self):
        with tf.variable_scope("gnet"):
            self.w1 = tf.Variable(tf.random_normal(shape=[100, 64 * 8 * 6 * 6], dtype=tf.float32, stddev=0.02))
            self.b1 = tf.Variable(tf.zeros(shape=[64 * 8 * 6 * 6], dtype=tf.float32))
            self.w2 = tf.Variable(tf.random_normal(shape=[5, 5, 256, 512], dtype=tf.float32, stddev=0.02))
            self.b2 = tf.Variable(tf.zeros(shape=[256], dtype=tf.float32))
            self.w3 = tf.Variable(tf.random_normal(shape=[5, 5, 128, 256], dtype=tf.float32, stddev=0.02))
            self.b3 = tf.Variable(tf.zeros(shape=[128], dtype=tf.float32))
            self.w4 = tf.Variable(tf.random_normal(shape=[5, 5, 64, 128], dtype=tf.float32, stddev=0.02))
            self.b4 = tf.Variable(tf.zeros(shape=[64], dtype=tf.float32))
            self.w5 = tf.Variable(tf.random_normal(shape=[5, 5, 3, 64], dtype=tf.float32, stddev=0.02))
            self.b5 = tf.Variable(tf.zeros(shape=[3], dtype=tf.float32))

    def forward(self, x):
        with tf.variable_scope("gnet"):
            y1 = tf.nn.relu(tf.layers.batch_normalization(tf.matmul(x, self.w1) + self.b1, name='bn1'))
            y1 = tf.reshape(y1, [-1, 6, 6, 512])
            # note: output_shape uses the module-level batch_size
            y2 = tf.nn.relu(tf.layers.batch_normalization(
                tf.nn.conv2d_transpose(y1, self.w2, output_shape=[batch_size, 12, 12, 256],
                                       strides=[1, 2, 2, 1], padding='SAME') + self.b2, name='bn2'))   # 6 -> 12
            y3 = tf.nn.relu(tf.layers.batch_normalization(
                tf.nn.conv2d_transpose(y2, self.w3, output_shape=[batch_size, 24, 24, 128],
                                       strides=[1, 2, 2, 1], padding='SAME') + self.b3, name='bn3'))   # 12 -> 24
            y4 = tf.nn.relu(tf.layers.batch_normalization(
                tf.nn.conv2d_transpose(y3, self.w4, output_shape=[batch_size, 48, 48, 64],
                                       strides=[1, 2, 2, 1], padding='SAME') + self.b4, name='bn4'))   # 24 -> 48
            y5 = tf.nn.tanh(tf.nn.conv2d_transpose(y4, self.w5, output_shape=[batch_size, 96, 96, 3],
                                                   strides=[1, 2, 2, 1], padding='SAME') + self.b5)    # 48 -> 96, no BN on the output
            return y5

    def getParam(self):
        params = tf.trainable_variables()
        return [i for i in params if 'gnet' in i.name]
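Similarly, the generator maps a batch of 100-dimensional noise vectors to 96x96x3 images in (-1, 1). Since Gnet reads the module-level batch_size for its conv2d_transpose output shapes, the fed batch must match it (a sketch):

    import numpy as np
    import tensorflow as tf

    batch_size = 4                     # Gnet uses this global in its output_shape arguments
    g = Gnet()
    z = tf.placeholder(tf.float32, [None, 100])
    g_out = g.forward(z)               # shape (batch_size, 96, 96, 3), tanh output in (-1, 1)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        noise = np.random.uniform(-1, 1, (batch_size, 100)).astype(np.float32)
        print(sess.run(g_out, feed_dict={z: noise}).shape)   # (4, 96, 96, 3)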

Then wire the two together into the full network:

class Net:
    def __init__(self):
        self.real_x = tf.placeholder(shape=[None, 96, 96, 3], dtype=tf.float32)
        self.fake_x = tf.placeholder(shape=[None, 100], dtype=tf.float32)
        self.pos_y = tf.placeholder(shape=[None, 1], dtype=tf.float32)
        self.nega_y = tf.placeholder(shape=[None, 1], dtype=tf.float32)
        self.dnet = Dnet()
        self.gnet = Gnet()

    def forward(self):
        self.real_d_out = self.dnet.forward(self.real_x)
        self.g_out = self.gnet.forward(self.fake_x)
        self.g_d_out = self.dnet.forward(self.g_out, reuse=True)

    def backward(self):
        # discriminator: real images should be classified as 1, generated images as 0
        d_out_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.real_d_out, labels=self.pos_y))
        g_d_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.g_d_out, labels=self.nega_y))
        self.d_loss = d_out_loss + g_d_loss
        self.d_opt = tf.train.AdamOptimizer(learning_rate=0.0002, beta1=0.5).minimize(self.d_loss, var_list=self.dnet.getParam())
        # generator: try to make the discriminator output 1 for generated images
        self.g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.g_d_out, labels=self.pos_y))
        self.g_opt = tf.train.AdamOptimizer(learning_rate=0.0002, beta1=0.5).minimize(self.g_loss, var_list=self.gnet.getParam())
        self.d_para = self.dnet.getParam()
        self.g_para = self.gnet.getParam()
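A minimal sketch of how to drive this class for one training iteration (assuming a module-level batch_size, real images already scaled to [-1, 1], pos_y filled with ones and nega_y with zeros):

    import numpy as np
    import tensorflow as tf

    batch_size = 64                    # used by Gnet for its output shapes
    net = Net()
    net.forward()
    net.backward()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        real_imgs = np.random.uniform(-1, 1, (batch_size, 96, 96, 3))   # stand-in for a real batch
        noise = np.random.uniform(-1, 1, (batch_size, 100))
        ones = np.ones((batch_size, 1))
        zeros = np.zeros((batch_size, 1))
        # discriminator step: real images -> 1, generated images -> 0
        _, d_loss = sess.run([net.d_opt, net.d_loss],
                             feed_dict={net.real_x: real_imgs, net.fake_x: noise,
                                        net.pos_y: ones, net.nega_y: zeros})
        # generator step: try to make the discriminator output 1 for generated images
        _, g_loss = sess.run([net.g_opt, net.g_loss],
                             feed_dict={net.fake_x: noise, net.pos_y: ones})
        print(d_loss, g_loss)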

With the model built, sample batches from the dataset:

import os
import cv2
import numpy as np

def get_batch(path, batchsize):
    imgs = [os.path.join(path, img) for img in os.listdir(path)]
    batch_number = len(imgs) // batchsize
    imgs = imgs[:batch_number * batchsize]
    for i in range(batch_number):
        images = [cv2.imread(img) for img in imgs[i * batchsize:(i + 1) * batchsize]]
        images = np.array(images)
        yield images
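Usage is straightforward; cv2 returns uint8 images in [0, 255], so scale them to [-1, 1] before feeding the discriminator (a sketch, assuming the images on disk are already 96x96):

    for images in get_batch('./faces/', 64):
        x_real = images.astype(np.float32) / 127.5 - 1.0   # uint8 [0, 255] -> float [-1, 1]
        print(x_real.shape)                                 # (64, 96, 96, 3)
        break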

Display and save the generated images:

def visit_img(batchsize, samples, i):
    fig, axes = plt.subplots(figsize=(12, 12), nrows=8, ncols=8, sharex=True, sharey=True)
    for ax, img in zip(axes.flatten(), samples[-1]):
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((96, 96, 3)))
    plt.pause(0.5)
    plt.savefig(r'picture\{}.jpg'.format(i))

Here is the complete code:

import matplotlib.pyplot as plt
import tensorflow as tf
from scipy import misc
import os
import numpy as np


def vis_img(batch_size, samples, i):
    fig, axes = plt.subplots(figsize=(7, 7), nrows=8, ncols=8, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[batch_size]):
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((96, 96, 3)), cmap='Greys_r')
    plt.savefig(r'picture\{}.jpg'.format(i))


def read_img(path):
    img = misc.imresize(misc.imread(path), size=[96, 96])
    return img


def get_batch(path, batch_size):
    img_list = [os.path.join(path, i) for i in os.listdir(path)]
    n_batchs = len(img_list) // batch_size
    img_list = img_list[:n_batchs * batch_size]
    for ii in range(n_batchs):
        tmp_img_list = img_list[ii * batch_size:(ii + 1) * batch_size]
        img_batch = np.zeros(shape=[batch_size, 96, 96, 3])
        for jj, img in enumerate(tmp_img_list):
            img_batch[jj] = read_img(img)
        yield img_batch


def generator(inputs, stddev=0.02, alpha=0.2, name='generator', reuse=False):
    with tf.variable_scope(name, reuse=reuse) as scope:
        fc1 = tf.layers.dense(inputs, 64 * 8 * 6 * 6, name='fc1')   # project the noise vector (the original referenced the global gen_input here)
        re1 = tf.reshape(fc1, (-1, 6, 6, 512), name='reshape')
        bn1 = tf.layers.batch_normalization(re1, name='bn1')
        ac1 = tf.nn.relu(bn1, name='ac1')
        de_conv1 = tf.layers.conv2d_transpose(ac1, 256, kernel_size=[5, 5], padding='same', strides=2,
                                              kernel_initializer=tf.random_normal_initializer(stddev=stddev),
                                              name='decov1')        # 6x6 -> 12x12
        bn2 = tf.layers.batch_normalization(de_conv1, name='bn2')
        ac2 = tf.nn.relu(bn2, name='ac2')
        de_conv2 = tf.layers.conv2d_transpose(ac2, 128, kernel_size=[5, 5], padding='same', strides=2,
                                              kernel_initializer=tf.random_normal_initializer(stddev=stddev),
                                              name='decov2')        # 12x12 -> 24x24
        bn3 = tf.layers.batch_normalization(de_conv2, name='bn3')
        ac3 = tf.nn.relu(bn3, name='ac3')
        de_conv3 = tf.layers.conv2d_transpose(ac3, 64, kernel_size=[5, 5], padding='same', strides=2,
                                              kernel_initializer=tf.random_normal_initializer(stddev=stddev),
                                              name='decov3')        # 24x24 -> 48x48
        bn4 = tf.layers.batch_normalization(de_conv3, name='bn4')
        ac4 = tf.nn.relu(bn4, name='ac4')
        logits = tf.layers.conv2d_transpose(ac4, 3, kernel_size=[5, 5], padding='same', strides=2,
                                            kernel_initializer=tf.random_normal_initializer(stddev=stddev),
                                            name='logits')          # 48x48 -> 96x96
        output = tf.tanh(logits)
        return output


def discriminator(inputs, stddev=0.02, alpha=0.2, batch_size=64, name='discriminator', reuse=False):
    with tf.variable_scope(name, reuse=reuse) as scope:
        conv1 = tf.layers.conv2d(inputs, 64, (5, 5), (2, 2), padding='same',
                                 kernel_initializer=tf.random_normal_initializer(stddev=stddev), name='conv1')
        ac1 = tf.maximum(alpha * conv1, conv1, name='ac1')   # leaky ReLU; no BN on the input layer
        conv2 = tf.layers.conv2d(ac1, 128, (5, 5), (2, 2), padding='same',
                                 kernel_initializer=tf.random_normal_initializer(stddev=stddev), name='conv2')
        bn2 = tf.layers.batch_normalization(conv2, name='bn2')
        ac2 = tf.maximum(alpha * bn2, bn2, name='ac2')
        conv3 = tf.layers.conv2d(ac2, 256, (5, 5), (2, 2), padding='same',
                                 kernel_initializer=tf.random_normal_initializer(stddev=stddev), name='conv3')
        bn3 = tf.layers.batch_normalization(conv3, name='bn3')
        ac3 = tf.maximum(alpha * bn3, bn3, name='ac3')
        conv4 = tf.layers.conv2d(ac3, 512, (5, 5), (2, 2), padding='same',
                                 kernel_initializer=tf.random_normal_initializer(stddev=stddev), name='conv4')
        bn4 = tf.layers.batch_normalization(conv4, name='bn4')
        ac4 = tf.maximum(alpha * bn4, bn4, name='ac4')
        flat = tf.reshape(ac4, shape=[batch_size, 6 * 6 * 512], name='reshape')
        fc2 = tf.layers.dense(flat, 1, kernel_initializer=tf.random_normal_initializer(stddev=stddev), name='fc2')
        return fc2   # logits; the sigmoid is applied by the loss


lr = 0.0002
epochs = 100
batch_size = 64
alpha = 0.2   # slope of the leaky ReLU

# generator input
with tf.name_scope('gen_input') as scope:
    gen_input = tf.placeholder(dtype=tf.float32, shape=[None, 100], name='gen_input')

# discriminator input
with tf.name_scope('dis_input') as scope:
    real_input = tf.placeholder(dtype=tf.float32, shape=[None, 96, 96, 3], name='dis_input')

# generated images
gen_out = generator(gen_input, stddev=0.02, alpha=alpha, name='generator', reuse=False)
real_logits = discriminator(real_input, alpha=alpha, batch_size=batch_size)
fake_logits = discriminator(gen_out, alpha=alpha, reuse=True)   # reuse the variables already created

train_var = tf.trainable_variables()   # list of trainable variables
var_list_gen = [var for var in train_var if var.name.startswith('generator')]
var_list_dis = [var for var in train_var if var.name.startswith('discriminator')]

with tf.name_scope('metrics') as scope:
    # one-sided label smoothing: "real" labels are 0.9 instead of 1
    loss_g = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(fake_logits) * 0.9, logits=fake_logits))
    loss_d_f = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(fake_logits), logits=fake_logits))
    loss_d_r = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(real_logits) * 0.9, logits=real_logits))
    loss_d = loss_d_f + loss_d_r
    gen_optimizer = tf.train.AdamOptimizer(lr, beta1=0.5).minimize(loss_g, var_list=var_list_gen)
    dis_optimizer = tf.train.AdamOptimizer(lr, beta1=0.5).minimize(loss_d, var_list=var_list_dis)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('./graph/DCGAN', sess.graph)
    saver = tf.train.Saver()
    for epoch in range(epochs):
        total_g_loss = 0
        total_d_loss = 0
        KK = 0
        for batch in get_batch('./faces/', batch_size):
            x_real = batch / 127.5 - 1   # scale to [-1, 1]
            x_fake = np.random.uniform(-1, 1, size=[batch_size, 100])
            KK += 1
            _, tmp_loss_d = sess.run([dis_optimizer, loss_d], feed_dict={gen_input: x_fake, real_input: x_real})
            total_d_loss += tmp_loss_d
            # train the generator twice per discriminator update
            _, tmp_loss_g = sess.run([gen_optimizer, loss_g], feed_dict={gen_input: x_fake})
            _, tmp_loss_g = sess.run([gen_optimizer, loss_g], feed_dict={gen_input: x_fake})
            total_g_loss += tmp_loss_g
        if (epoch + 1) % 2 == 0:
            x_fake = np.random.uniform(-1, 1, [64, 100])
            samples = sess.run(gen_out, feed_dict={gen_input: x_fake})
            samples = (((samples - samples.min()) * 255) / (samples.max() - samples.min())).astype(np.uint8)   # rescale to [0, 255] for display
            vis_img(-1, [samples], epoch)
        print('epoch {}, loss_g = {}'.format(epoch, total_g_loss / 2 / KK))
        print('epoch {}, loss_d = {}'.format(epoch, total_d_loss / KK))
        saver.save(sess, "./checkpoints/DCGAN")
    writer.close()

The following version works better:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from cartoon_face_dataset import MyDataset
import os


class D_Net:
    def __init__(self):
        with tf.variable_scope("d_net"):
            self.w1 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[5, 5, 3, 64], stddev=0.02))
            self.b1 = tf.Variable(tf.zeros([64]), dtype=tf.float32)
            self.w2 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[5, 5, 64, 128], stddev=0.02))
            self.b2 = tf.Variable(tf.zeros([128]), dtype=tf.float32)
            self.w3 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[5, 5, 128, 256], stddev=0.02))
            self.b3 = tf.Variable(tf.zeros([256]), dtype=tf.float32)
            self.w4 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[5, 5, 256, 512], stddev=0.02))
            self.b4 = tf.Variable(tf.zeros([512]), dtype=tf.float32)
            self.w5 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[6 * 6 * 512, 1], stddev=0.02))
            self.b5 = tf.Variable(tf.zeros([1]), dtype=tf.float32)

    def forward(self, x, reuse=False):
        with tf.variable_scope("d_net", reuse=reuse):
            net = tf.nn.leaky_relu(tf.nn.conv2d(x, self.w1, [1, 2, 2, 1], padding="SAME") + self.b1)   # 96 -> 48, no BN on the input layer
            net = tf.nn.leaky_relu(tf.layers.batch_normalization(
                tf.nn.conv2d(net, self.w2, [1, 2, 2, 1], padding="SAME") + self.b2, momentum=0.9, epsilon=1e-5))   # 48 -> 24
            net = tf.nn.leaky_relu(tf.layers.batch_normalization(
                tf.nn.conv2d(net, self.w3, [1, 2, 2, 1], padding="SAME") + self.b3, momentum=0.9, epsilon=1e-5))   # 24 -> 12
            net = tf.nn.leaky_relu(tf.layers.batch_normalization(
                tf.nn.conv2d(net, self.w4, [1, 2, 2, 1], padding="SAME") + self.b4, momentum=0.9, epsilon=1e-5))   # 12 -> 6
            net = tf.matmul(tf.reshape(net, [128, 6 * 6 * 512]), self.w5) + self.b5   # logits for a batch of 128
            params = tf.trainable_variables()
            self.params = [var for var in params if 'd_net' in var.name]
            return net


class G_Net:
    def __init__(self):
        with tf.variable_scope("g_net"):
            self.w1 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[128, 6 * 6 * 512], stddev=0.02))
            self.b1 = tf.Variable(tf.zeros([6 * 6 * 512]), dtype=tf.float32)
            self.w2 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[5, 5, 256, 512], stddev=0.02))
            self.b2 = tf.Variable(tf.zeros([256]), dtype=tf.float32)
            self.w3 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[5, 5, 128, 256], stddev=0.02))
            self.b3 = tf.Variable(tf.zeros([128]), dtype=tf.float32)
            self.w4 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[5, 5, 64, 128], stddev=0.02))
            self.b4 = tf.Variable(tf.zeros([64]), dtype=tf.float32)
            self.w5 = tf.Variable(tf.truncated_normal(dtype=tf.float32, shape=[5, 5, 3, 64], stddev=0.02))
            self.b5 = tf.Variable(tf.zeros([3]), dtype=tf.float32)

    def forward(self, x, reuse=False):
        with tf.variable_scope("g_net", reuse=reuse):
            net = tf.matmul(x, self.w1) + self.b1
            net = tf.nn.relu(tf.layers.batch_normalization(
                tf.reshape(net, [-1, 6, 6, 512]), momentum=0.9, epsilon=1e-5))
            net = tf.nn.relu(tf.layers.batch_normalization(
                tf.nn.conv2d_transpose(net, self.w2, [128, 12, 12, 256], [1, 2, 2, 1], padding="SAME") + self.b2,
                momentum=0.9, epsilon=1e-5))   # 6 -> 12
            net = tf.nn.relu(tf.layers.batch_normalization(
                tf.nn.conv2d_transpose(net, self.w3, [128, 24, 24, 128], [1, 2, 2, 1], padding="SAME") + self.b3,
                momentum=0.9, epsilon=1e-5))   # 12 -> 24
            net = tf.nn.relu(tf.layers.batch_normalization(
                tf.nn.conv2d_transpose(net, self.w4, [128, 48, 48, 64], [1, 2, 2, 1], padding="SAME") + self.b4,
                momentum=0.9, epsilon=1e-5))   # 24 -> 48
            net = tf.nn.tanh(tf.nn.conv2d_transpose(net, self.w5, [128, 96, 96, 3], [1, 2, 2, 1], padding="SAME") + self.b5)   # 48 -> 96
            params = tf.trainable_variables()
            self.params = [var for var in params if 'g_net' in var.name]
            return net


class Net:
    def __init__(self):
        self.x = tf.placeholder(shape=[None, 96, 96, 3], dtype=tf.float32)
        self.init_data = tf.placeholder(shape=[None, 128], dtype=tf.float32)
        self.fake_label = tf.placeholder(shape=[None, 1], dtype=tf.float32)
        self.real_label = tf.placeholder(shape=[None, 1], dtype=tf.float32)

    def forward(self):
        self.d_net = D_Net()
        self.g_net = G_Net()
        self.g_out = self.g_net.forward(self.init_data)
        self.d_real_out = self.d_net.forward(self.x)
        self.d_fake_out = self.d_net.forward(self.g_out, reuse=True)

    def loss(self):
        d_real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=self.real_label, logits=self.d_real_out))
        d_fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=self.fake_label, logits=self.d_fake_out))
        self.d_loss = d_fake_loss + d_real_loss
        # the generator loss reuses fake_label: the training loop feeds it with ones for the G step
        self.g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=self.fake_label, logits=self.d_fake_out))

    def backward(self):
        self.bp_g = tf.train.AdamOptimizer(0.0002, beta1=0.5).minimize(self.g_loss, var_list=self.g_net.params)
        self.bp_d = tf.train.AdamOptimizer(0.0002, beta1=0.5).minimize(self.d_loss, var_list=self.d_net.params)


def visit_image(batchsize, samples, i):
    fig, axes = plt.subplots(figsize=(48, 48), nrows=8, ncols=16, sharex=True, sharey=True)
    for ax, img in zip(axes.flatten(), samples[-1]):
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((96, 96, 3)))
    plt.savefig(r'C:\gan_faces\{0}.jpg'.format(i))


if __name__ == '__main__':
    net = Net()
    net.forward()
    net.loss()
    net.backward()
    init = tf.global_variables_initializer()
    mydataset = MyDataset(r"C:\faces", 128)
    with tf.Session() as sess:
        saver = tf.train.Saver()
        sess.run(init)
        for i in range(100000):
            xs = mydataset.get_batch(sess)[0]
            init_datas = np.random.uniform(-1, 1, (128, 128))
            d_real_labels = np.ones(shape=[128, 1])
            d_fake_labels = np.zeros(shape=[128, 1])
            # one discriminator step: real -> 1, generated -> 0
            d_loss_, _ = sess.run([net.d_loss, net.bp_d],
                                  feed_dict={net.x: xs, net.init_data: init_datas,
                                             net.real_label: d_real_labels, net.fake_label: d_fake_labels})
            print("D_loss: ", d_loss_)
            # two generator steps, feeding ones so G tries to fool the discriminator
            g_fake_labels = np.ones(shape=[128, 1])
            for _ in range(2):
                g_loss_, _ = sess.run([net.g_loss, net.bp_g],
                                      feed_dict={net.init_data: init_datas, net.fake_label: g_fake_labels})
            print("G_loss: ", g_loss_)
            if i % 50 == 0:
                init_datas = np.random.uniform(-1, 1, (128, 128))
                g_out_ = sess.run(net.g_out, feed_dict={net.init_data: init_datas})
                img_array = np.array(g_out_) / 2 + 0.5   # map the tanh output from [-1, 1] to [0, 1]
                visit_image(-1, [img_array], i)
                saver.save(sess, "./gen_face/{0}/gen_face.ckpt".format(i))
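The cartoon_face_dataset module imported above is not shown in the post. A minimal stand-in with the same interface, where MyDataset(path, batch_size).get_batch(sess)[0] returns a [batch, 96, 96, 3] float array scaled to [-1, 1], might look like the sketch below; the implementation details here are assumptions, not the author's original loader:

    import os
    import tensorflow as tf

    class MyDataset:
        # Hypothetical stand-in for cartoon_face_dataset.MyDataset.
        def __init__(self, path, batch_size):
            files = [os.path.join(path, f) for f in os.listdir(path)]
            dataset = tf.data.Dataset.from_tensor_slices(files).shuffle(len(files)).repeat()
            dataset = dataset.map(self._parse).batch(batch_size, drop_remainder=True)
            self.next_batch = dataset.make_one_shot_iterator().get_next()

        def _parse(self, filename):
            img = tf.image.decode_jpeg(tf.read_file(filename), channels=3)
            img = tf.image.resize_images(img, [96, 96])
            return tf.cast(img, tf.float32) / 127.5 - 1.0   # scale to [-1, 1] to match the tanh output

        def get_batch(self, sess):
            # Return a tuple so that get_batch(sess)[0] is the image batch, as the training loop expects.
            return (sess.run(self.next_batch),)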
