The denoising autoencoder is an extension of the classical autoencoder, originally introduced as a building block for deep networks [Vincent08]. In this tutorial, we begin with a brief discussion of the autoencoder itself.

Autoencoders

An introduction to autoencoders is given in [Bengio09]. In the encoding step, an autoencoder maps an input $\mathbf{x} \in [0,1]^d$ to a hidden representation $\mathbf{y} \in [0,1]^{d'}$ through the mapping:

$$\mathbf{y} = s(\mathbf{W}\mathbf{x} + \mathbf{b})$$

where $s$ is a nonlinearity such as the sigmoid. In the decoding step, the latent representation $\mathbf{y}$ is mapped back to a reconstruction $\mathbf{z}$ of the same dimension as the input, through a very similar transformation:

$$\mathbf{z} = s(\mathbf{W'}\mathbf{y} + \mathbf{b'})$$

Note that the prime symbol here does not denote matrix transpose. $\mathbf{z}$ should be seen as a prediction of $\mathbf{x}$ given the code $\mathbf{y}$. Optionally, the weight matrix $\mathbf{W'}$ of the reverse mapping may be constrained to be the transpose of the forward mapping, $\mathbf{W'} = \mathbf{W}^T$; this is referred to as tied weights. The parameters of the model ($\mathbf{W}$, $\mathbf{b}$, $\mathbf{b'}$, and $\mathbf{W'}$ if the weights are not tied) are optimized by minimizing the average reconstruction error.

The reconstruction error can be quantified in many ways. The traditional squared error $L(\mathbf{x}, \mathbf{z}) = || \mathbf{x} - \mathbf{z} ||^2$ can be used. If the input is interpreted as bit vectors or vectors of bit probabilities, the reconstruction cross-entropy can be used instead:

$$L_{H} (\mathbf{x}, \mathbf{z}) = - \sum^d_{k=1}[\mathbf{x}_k \log \mathbf{z}_k + (1 - \mathbf{x}_k)\log(1 - \mathbf{z}_k)]$$
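As a concrete illustration, here is a minimal NumPy sketch of one encode/decode pass with tied weights and the cross-entropy loss above. It is not part of the tutorial code; the dimensions, initialization range, and input are made-up assumptions. To match the convention of the Theano class below, $W$ is stored with shape $(d, d')$ and multiplied on the right:

    import numpy

    rng = numpy.random.RandomState(0)
    d, d_prime = 8, 3                       # illustrative sizes, not from the tutorial
    x = rng.rand(d)                         # an input in [0, 1]^d
    W = rng.uniform(-0.1, 0.1, size=(d, d_prime))
    b = numpy.zeros(d_prime)
    b_prime = numpy.zeros(d)

    def sigmoid(a):
        return 1.0 / (1.0 + numpy.exp(-a))

    y = sigmoid(numpy.dot(x, W) + b)          # encode: y = s(Wx + b)
    z = sigmoid(numpy.dot(y, W.T) + b_prime)  # decode with tied weights: W' = W^T
    # reconstruction cross-entropy L_H(x, z)
    L_H = -numpy.sum(x * numpy.log(z) + (1 - x) * numpy.log(1 - z))
    print(y.shape, z.shape, L_H)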

The hope is that the code $\mathbf{y}$ is a distributed representation that captures the main factors of variation in the data, much as principal component analysis (PCA) captures the main directions of variation. Indeed, if the encoding is linear and the network is trained with the mean squared error criterion, the $k$ hidden units learn to project the input onto the span of the first $k$ principal components of the data. If the hidden layer is nonlinear, the autoencoder behaves differently from PCA and can capture multi-modal aspects of the input distribution.
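To make the PCA connection concrete, here is a small NumPy sketch (an illustration of the claim, not tutorial code; the synthetic data and the rank $k = 5$ are arbitrary assumptions). It computes the reconstruction that an MSE-trained linear autoencoder with $k$ hidden units converges to on centered data: the projection onto the first $k$ principal components.

    import numpy

    rng = numpy.random.RandomState(0)
    # synthetic data with low-rank structure (illustrative assumption)
    X = numpy.dot(rng.randn(1000, 20), rng.randn(20, 50))
    Xc = X - X.mean(axis=0)                 # center the data

    U, S, Vt = numpy.linalg.svd(Xc, full_matrices=False)
    k = 5
    # best rank-k reconstruction under squared error: project onto top-k PCs
    proj = numpy.dot(numpy.dot(Xc, Vt[:k].T), Vt[:k])
    print(((Xc - proj) ** 2).mean())        # the error floor a linear AE can reach

Note the linear autoencoder's learned weights need not be orthonormal; they only need to span the same subspace as the top $k$ principal components.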

Because $\mathbf{y}$ is a lossy compression of $\mathbf{x}$, it cannot be a good (small-loss) compression for all inputs. Optimization makes it a good compression for the training examples, and hopefully for other inputs as well, but not for arbitrary inputs. This is the sense in which an autoencoder generalizes: it achieves low reconstruction error on test examples drawn from the same distribution as the training examples, but high reconstruction error on samples chosen randomly from the input space.

For convenient reuse, we implement the autoencoder as a Theano class. First, we create shared variables for the model parameters $W$, $b$, and $b'$ (with $W' = W^T$ for tied weights):

    def __init__(self, numpy_rng, theano_rng=None, input=None,
                 n_visible=784, n_hidden=500, W=None, bhid=None, bvis=None):
        """
        Initialize the dA class by specifying the number of visible units (the
        dimension d of the input), the number of hidden units (the dimension
        d' of the latent or hidden space) and the corruption level. The
        constructor also receives symbolic variables for the input, weights
        and biases. Such symbolic variables are useful when, for example, the
        input is the result of some computation, or when weights are shared
        between the dA and an MLP layer. When dealing with SdAs this always
        happens: the dA on layer 2 gets as input the output of the dA on
        layer 1, and the weights of the dA are used in the second stage of
        training to construct an MLP.

        :type numpy_rng: numpy.random.RandomState
        :param numpy_rng: number random generator used to generate weights

        :type theano_rng: theano.tensor.shared_randomstreams.RandomStreams
        :param theano_rng: Theano random generator; if None is given one is
                           generated based on a seed drawn from `rng`

        :type input: theano.tensor.TensorType
        :param input: a symbolic description of the input or None for
                      standalone dA

        :type n_visible: int
        :param n_visible: number of visible units

        :type n_hidden: int
        :param n_hidden: number of hidden units

        :type W: theano.tensor.TensorType
        :param W: Theano variable pointing to a set of weights that should be
                  shared between the dA and another architecture; if the dA
                  should be standalone set this to None

        :type bhid: theano.tensor.TensorType
        :param bhid: Theano variable pointing to a set of bias values (for
                     hidden units) that should be shared between the dA and
                     another architecture; if the dA should be standalone set
                     this to None

        :type bvis: theano.tensor.TensorType
        :param bvis: Theano variable pointing to a set of bias values (for
                     visible units) that should be shared between the dA and
                     another architecture; if the dA should be standalone set
                     this to None
        """
        self.n_visible = n_visible
        self.n_hidden = n_hidden

        # create a Theano random generator that gives symbolic random values
        if not theano_rng:
            theano_rng = RandomStreams(numpy_rng.randint(2 ** 30))

        # note : W' was written as `W_prime` and b' as `b_prime`
        if not W:
            # W is initialized with `initial_W`, which is uniformly sampled
            # from -4*sqrt(6./(n_visible+n_hidden)) and
            # 4*sqrt(6./(n_hidden+n_visible)); the output of uniform is
            # converted using asarray to dtype theano.config.floatX so that
            # the code is runnable on GPU
            initial_W = numpy.asarray(
                numpy_rng.uniform(
                    low=-4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    high=4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                    size=(n_visible, n_hidden)
                ),
                dtype=theano.config.floatX
            )
            W = theano.shared(value=initial_W, name='W', borrow=True)

        if not bvis:
            bvis = theano.shared(
                value=numpy.zeros(n_visible, dtype=theano.config.floatX),
                borrow=True
            )

        if not bhid:
            bhid = theano.shared(
                value=numpy.zeros(n_hidden, dtype=theano.config.floatX),
                name='b',
                borrow=True
            )

        self.W = W
        # b corresponds to the bias of the hidden units
        self.b = bhid
        # b_prime corresponds to the bias of the visible units
        self.b_prime = bvis
        # tied weights, therefore W_prime is W transpose
        self.W_prime = self.W.T
        self.theano_rng = theano_rng
        # if no input is given, generate a variable representing the input
        if input is None:
            # we use a matrix because we expect a minibatch of several
            # examples, each example being a row
            self.x = T.dmatrix(name='input')
        else:
            self.x = input

        self.params = [self.W, self.b, self.b_prime]

Note that we pass the symbolic `input` to the autoencoder. This allows us to later stack layers of autoencoders to build a deep network: the symbolic output of layer $k$ becomes the symbolic input of layer $k+1$, as sketched below.
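As a rough sketch of how such stacking would look (the variable names, the seed, and the second layer's size of 250 hidden units are illustrative assumptions; the full stacking logic is covered in the stacked denoising autoencoder tutorial), the symbolic code produced by the first layer simply becomes the symbolic input of the second:

    import numpy
    import theano.tensor as T
    from theano.tensor.shared_randomstreams import RandomStreams

    rng = numpy.random.RandomState(123)
    theano_rng = RandomStreams(rng.randint(2 ** 30))
    x = T.matrix('x')               # symbolic input of the whole stack

    da1 = dA(numpy_rng=rng, theano_rng=theano_rng, input=x,
             n_visible=28 * 28, n_hidden=500)

    # the symbolic code produced by layer 1 ...
    h1 = da1.get_hidden_values(x)

    # ... becomes the symbolic input of layer 2 (250 units is an assumption)
    da2 = dA(numpy_rng=rng, theano_rng=theano_rng, input=h1,
             n_visible=500, n_hidden=250)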

The latent representation and the reconstructed signal are computed as follows:

    def get_hidden_values(self, input):
        """ Computes the values of the hidden layer """
        return T.nnet.sigmoid(T.dot(input, self.W) + self.b)

    def get_reconstructed_input(self, hidden):
        """Computes the reconstructed input given the values of the
        hidden layer
        """
        return T.nnet.sigmoid(T.dot(hidden, self.W_prime) + self.b_prime)

Next, we compute the reconstruction cost and derive the parameter updates for one step of stochastic gradient descent (SGD):

    def get_cost_updates(self, corruption_level, learning_rate):
        """ This function computes the cost and the updates for one training
        step of the dA """

        tilde_x = self.get_corrupted_input(self.x, corruption_level)
        y = self.get_hidden_values(tilde_x)
        z = self.get_reconstructed_input(y)
        # note : we sum over the size of a datapoint; if we are using
        #        minibatches, L will be a vector, with one entry per
        #        example in minibatch
        L = - T.sum(self.x * T.log(z) + (1 - self.x) * T.log(1 - z), axis=1)
        # note : L is now a vector, where each element is the
        #        cross-entropy cost of the reconstruction of the
        #        corresponding example of the minibatch. We need to
        #        compute the average of all these to get the cost of
        #        the minibatch
        cost = T.mean(L)

        # compute the gradients of the cost of the `dA` with respect
        # to its parameters
        gparams = T.grad(cost, self.params)
        # generate the list of updates
        updates = [
            (param, param - learning_rate * gparam)
            for param, gparam in zip(self.params, gparams)
        ]

        return (cost, updates)

We can then define a function that repeatedly applies these updates to the model parameters so as to minimize the reconstruction error:

    da = dA(
        numpy_rng=rng,
        theano_rng=theano_rng,
        input=x,
        n_visible=28 * 28,
        n_hidden=500
    )

    cost, updates = da.get_cost_updates(
        corruption_level=0.,
        learning_rate=learning_rate
    )

    train_da = theano.function(
        [index],
        cost,
        updates=updates,
        givens={
            x: train_set_x[index * batch_size: (index + 1) * batch_size]
        }
    )

    start_time = timeit.default_timer()

    ############
    # TRAINING #
    ############

    # go through training epochs
    for epoch in range(training_epochs):
        # go through training set
        c = []
        for batch_index in range(n_train_batches):
            c.append(train_da(batch_index))

        print('Training epoch %d, cost ' % epoch, numpy.mean(c))

    end_time = timeit.default_timer()

    training_time = (end_time - start_time)

    print(('The no corruption code for file ' +
           os.path.split(__file__)[1] +
           ' ran for %.2fm' % (training_time / 60.)), file=sys.stderr)

    image = Image.fromarray(
        tile_raster_images(X=da.W.get_value(borrow=True).T,
                           img_shape=(28, 28), tile_shape=(10, 10),
                           tile_spacing=(1, 1)))
    image.save('filters_corruption_0.png')

Denoising Autoencoders

The motivation behind denoising autoencoders is simple: to force the hidden layer to discover more robust features, we train the autoencoder to reconstruct the input from a corrupted version of it.

The denoising autoencoder is a stochastic version of the autoencoder. Intuitively, it addresses two objectives: encode the input while preserving its information, and undo the effect of the corruption process applied to the input. The latter can only be achieved by capturing the statistical dependencies between the inputs. The denoising autoencoder can be understood from several perspectives, including manifold learning and stochastic operator theory [Vincent08].

To turn the autoencoder into a denoising autoencoder, all we need to add is a stochastic corruption step operating on the input. The input can be corrupted in many ways; here we randomly mask the input, i.e., we set a randomly selected subset of each example's entries to zero. The code is as follows:

    def get_corrupted_input(self, input, corruption_level):
        """This function keeps ``1 - corruption_level`` entries of the inputs
        the same and zeroes out a randomly selected subset of size
        ``corruption_level``.

        Note : the first argument of theano_rng.binomial is the shape (size)
               of the random numbers that it should produce;
               the second argument is the number of trials;
               the third argument is the probability of success of any trial.

               This will produce an array of 0s and 1s, where 1 has a
               probability of 1 - ``corruption_level`` and 0 has a
               probability of ``corruption_level``.

               The binomial function returns int64 by default. int64
               multiplied by the input type (floatX) always returns float64.
               To keep all data in floatX when floatX is float32, we set the
               dtype of the binomial to floatX. As in our case the value of
               the binomial is always 0 or 1, this doesn't change the result.
               This is needed to allow the GPU to work correctly, as it only
               supports float32 for now.
        """
        return self.theano_rng.binomial(size=input.shape, n=1,
                                        p=1 - corruption_level,
                                        dtype=theano.config.floatX) * input
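A quick way to sanity-check the corruption is to compile this step into its own function and measure the fraction of zeroed entries. The snippet below is a usage sketch, assuming `da` is a dA instance constructed without an explicit symbolic `input` (so `da.x` defaults to a float64 matrix):

    import numpy
    import theano

    # compile the corruption step on its own (assumes `da` is a dA instance)
    corrupt = theano.function([da.x], da.get_corrupted_input(da.x, 0.3))

    batch = numpy.random.rand(20, 784)      # float64, matching T.dmatrix
    tilde_x = corrupt(batch)
    print((tilde_x == 0).mean())            # close to the 0.3 corruption level

On inputs with no exact zeros, the printed fraction should be close to the chosen corruption level.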

This gives us the complete denoising autoencoder class:

    class dA(object):
        """Denoising Auto-Encoder class (dA)

        A denoising autoencoder tries to reconstruct the input from a
        corrupted version of it by projecting it first into a latent space
        and then reprojecting it back into the input space. Please refer to
        Vincent et al., 2008 for more details. If x is the input, then
        equation (1) computes a partially destroyed version of x by means of
        a stochastic mapping q_D. Equation (2) computes the projection of the
        input into the latent space. Equation (3) computes the reconstruction
        of the input, while equation (4) computes the reconstruction error.

        .. math::

            \tilde{x} \sim q_D(\tilde{x}|x)                                  (1)

            y = s(W \tilde{x} + b)                                           (2)

            z = s(W' y + b')                                                 (3)

            L(x, z) = -\sum_{k=1}^d [x_k \log z_k + (1-x_k) \log(1-z_k)]     (4)

        """

        def __init__(self, numpy_rng, theano_rng=None, input=None,
                     n_visible=784, n_hidden=500, W=None, bhid=None,
                     bvis=None):
            """Initialize the dA class; see the fully documented constructor
            above for the meaning of each argument."""
            self.n_visible = n_visible
            self.n_hidden = n_hidden

            # create a Theano random generator that gives symbolic random
            # values
            if not theano_rng:
                theano_rng = RandomStreams(numpy_rng.randint(2 ** 30))

            # note : W' was written as `W_prime` and b' as `b_prime`
            if not W:
                # W is initialized with `initial_W`, which is uniformly
                # sampled from -4*sqrt(6./(n_visible+n_hidden)) and
                # 4*sqrt(6./(n_hidden+n_visible)); the output of uniform is
                # converted using asarray to dtype theano.config.floatX so
                # that the code is runnable on GPU
                initial_W = numpy.asarray(
                    numpy_rng.uniform(
                        low=-4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                        high=4 * numpy.sqrt(6. / (n_hidden + n_visible)),
                        size=(n_visible, n_hidden)
                    ),
                    dtype=theano.config.floatX
                )
                W = theano.shared(value=initial_W, name='W', borrow=True)

            if not bvis:
                bvis = theano.shared(
                    value=numpy.zeros(n_visible, dtype=theano.config.floatX),
                    borrow=True
                )

            if not bhid:
                bhid = theano.shared(
                    value=numpy.zeros(n_hidden, dtype=theano.config.floatX),
                    name='b',
                    borrow=True
                )

            self.W = W
            # b corresponds to the bias of the hidden units
            self.b = bhid
            # b_prime corresponds to the bias of the visible units
            self.b_prime = bvis
            # tied weights, therefore W_prime is W transpose
            self.W_prime = self.W.T
            self.theano_rng = theano_rng
            # if no input is given, generate a variable representing the
            # input
            if input is None:
                # we use a matrix because we expect a minibatch of several
                # examples, each example being a row
                self.x = T.dmatrix(name='input')
            else:
                self.x = input

            self.params = [self.W, self.b, self.b_prime]

        def get_corrupted_input(self, input, corruption_level):
            """Keeps ``1 - corruption_level`` entries of the inputs the same
            and zeroes out a randomly selected subset of size
            ``corruption_level``; see the documented version above."""
            return self.theano_rng.binomial(size=input.shape, n=1,
                                            p=1 - corruption_level,
                                            dtype=theano.config.floatX) * input

        def get_hidden_values(self, input):
            """ Computes the values of the hidden layer """
            return T.nnet.sigmoid(T.dot(input, self.W) + self.b)

        def get_reconstructed_input(self, hidden):
            """Computes the reconstructed input given the values of the
            hidden layer
            """
            return T.nnet.sigmoid(T.dot(hidden, self.W_prime) + self.b_prime)

        def get_cost_updates(self, corruption_level, learning_rate):
            """ This function computes the cost and the updates for one
            training step of the dA """

            tilde_x = self.get_corrupted_input(self.x, corruption_level)
            y = self.get_hidden_values(tilde_x)
            z = self.get_reconstructed_input(y)
            # note : we sum over the size of a datapoint; if we are using
            #        minibatches, L will be a vector, with one entry per
            #        example in minibatch
            L = - T.sum(self.x * T.log(z) + (1 - self.x) * T.log(1 - z),
                        axis=1)
            # note : L is now a vector, where each element is the
            #        cross-entropy cost of the reconstruction of the
            #        corresponding example of the minibatch. We need to
            #        compute the average of all these to get the cost of
            #        the minibatch
            cost = T.mean(L)

            # compute the gradients of the cost of the `dA` with respect
            # to its parameters
            gparams = T.grad(cost, self.params)
            # generate the list of updates
            updates = [
                (param, param - learning_rate * gparam)
                for param, gparam in zip(self.params, gparams)
            ]

            return (cost, updates)

Putting it All Together

It is now easy to construct an instance of the dA class and train it:

    # allocate symbolic variables for the data
    index = T.lscalar()    # index to a [mini]batch
    x = T.matrix('x')  # the data is presented as rasterized images

    #####################################
    # BUILDING THE MODEL CORRUPTION 30% #
    #####################################
    rng = numpy.random.RandomState(123)
    theano_rng = RandomStreams(rng.randint(2 ** 30))

    da = dA(
        numpy_rng=rng,
        theano_rng=theano_rng,
        input=x,
        n_visible=28 * 28,
        n_hidden=500
    )

    cost, updates = da.get_cost_updates(
        corruption_level=0.3,
        learning_rate=learning_rate
    )

    train_da = theano.function(
        [index],
        cost,
        updates=updates,
        givens={
            x: train_set_x[index * batch_size: (index + 1) * batch_size]
        }
    )

    start_time = timeit.default_timer()

    ############
    # TRAINING #
    ############

    # go through training epochs
    for epoch in range(training_epochs):
        # go through training set
        c = []
        for batch_index in range(n_train_batches):
            c.append(train_da(batch_index))

        print('Training epoch %d, cost ' % epoch, numpy.mean(c))

    end_time = timeit.default_timer()

    training_time = (end_time - start_time)

    print(('The 30% corruption code for file ' +
           os.path.split(__file__)[1] +
           ' ran for %.2fm' % (training_time / 60.)), file=sys.stderr)

Finally, to get a visual feel for what the model has learned, we can use the tile_raster_images helper function to render the learned weights as a tiled image:

    image = Image.fromarray(tile_raster_images(
        X=da.W.get_value(borrow=True).T,
        img_shape=(28, 28), tile_shape=(10, 10),
        tile_spacing=(1, 1)))
    image.save('filters_corruption_30.png')

Running the above code produces the following filters:

1. Filters learned by the model trained with no corruption (filters_corruption_0.png):

2. Filters learned by the model trained with 30% corruption (filters_corruption_30.png):

Reposted from: https://www.cnblogs.com/xueliangliu/p/5193403.html
