CooGAN: A Memory-Efficient Framework for High-Resolution Facial Attribute Editing

[ECCV 2020]

Table of Contents

CooGAN: A Memory-Efficient Framework for High-Resolution Facial Attribute Editing

Abstract

Introduction

Methodology

Overview

Cascaded Global-to-Local Face Translation Architecture

Objective Function

Light Selective Transfer Unit


Abstract

In contrast to the great success of memory-consuming face editing methods at low resolution, manipulating high-resolution (HR) facial images, i.e., typically larger than 768² pixels, with very limited memory is still challenging.

This is due to: 1) an intractably huge demand for memory; 2) inefficient multi-scale features fusion.

To address these issues, we propose a NOVEL pixel translation framework called Cooperative GAN (CooGAN) for HR facial image editing.

This framework features a local path for fine-grained local facial patch generation (i.e., patch-level HR, LOW memory) and a global path for global low-resolution (LR) facial structure monitoring (i.e., image-level LR, LOW memory), which largely reduces memory requirements. Both paths work in a cooperative manner under a local-to-global consistency objective (i.e., for smooth stitching).

In addition, we propose a lighter selective transfer unit for more efficient multi-scale features fusion, yielding higher-fidelity facial attribute manipulation. Extensive experiments on CelebA-HQ well demonstrate the memory efficiency as well as the high image generation quality of the proposed framework.

Sentence 1, research area: high-resolution facial image generation.

Sentence 2, problem statement: compared with low-resolution generation, high-resolution facial image generation faces two challenges: 1) huge memory consumption; 2) inefficient multi-scale feature fusion (see the Introduction for details).

Sentence 3, core method: Cooperative GAN (CooGAN), a cooperative GAN framework.

Sentences 4/5/6, algorithmic details: the framework features a local path that generates fine-grained local facial patches (patch-level HR, low memory) and a global path that monitors the global low-resolution (LR) facial structure (image-level LR, low memory), which greatly reduces memory requirements; the two paths cooperate under a local-to-global consistency objective (for smooth stitching); in addition, a lighter selective transfer unit enables more efficient multi-scale feature fusion and hence higher-fidelity attribute manipulation. [This passage is well written: it stresses exactly how the two problems above are solved.]

Sentence 7, experimental results: extensive experiments on CelebA-HQ demonstrate both the memory efficiency and the high image generation quality of the framework.

Introduction

There are two major challenges in deep-model-based HR facial attribute editing:

1. Constrained Computational and Memory Resource. In some mobile scenarios (e.g., smartphone, AR/VR glasses) with only limited computational and memory resources, it is infeasible to use popular image editing models [18,30] which require sophisticated networks. To address this issue, methods based on model pruning and operator simplifying [35,13,28,34,23] have been proposed to reduce the inference computational complexity. However, this metric-based way of reducing model size harms the model's perceptual representation ability, so the output facial image quality is usually largely sacrificed.

2. Inefficient Multi-scale Features Fusion. In order to achieve high-level semantic manipulation while maintaining local details during image generation, multi-scale features fusion is widely adopted. It is a common practice to utilize skip connection, e.g., U-Net [27]. However, fixed schemes, such as skip connection, usually result in infeasible or even self-contradicting fusion output (e.g., during style transfer, the content is well preserved but the style fails to change), and flexible schemes such as GRU [9] lead to additional computational burden (e.g., applying a GRU-based unit [20] directly for multi-scale features fusion can achieve excellent fusion effects, but will increase network parameters by more than four times).
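To make the trade-off concrete, here is a minimal PyTorch sketch (illustrative only, not code from any of the cited papers): a fixed skip connection is a parameter-free concatenation that cannot suppress contradictory content, while a GRU-style gated fusion can, at the cost of extra learned gates in every fused layer.

```python
import torch
import torch.nn as nn

# Fixed scheme: plain concatenation, as in U-Net skip connections.
# No parameters, but also no way to filter out content that conflicts
# with the editing condition.
def fixed_skip(enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
    return torch.cat([enc_feat, dec_feat], dim=1)

# Flexible scheme: GRU-style gating per fused layer. Each unit adds three
# learned transforms (update, reset, candidate), which is the parameter
# overhead the paper attributes to GRU-based fusion units.
class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.update = nn.Conv2d(2 * channels, channels, 1)
        self.reset = nn.Conv2d(2 * channels, channels, 1)
        self.cand = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, enc_feat, dec_feat):
        zr_in = torch.cat([enc_feat, dec_feat], dim=1)
        z = torch.sigmoid(self.update(zr_in))  # how much to overwrite
        r = torch.sigmoid(self.reset(zr_in))   # how much encoder detail to keep
        h = torch.tanh(self.cand(torch.cat([r * enc_feat, dec_feat], dim=1)))
        return (1 - z) * dec_feat + z * h
```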


In this work, a novel image translation framework for high-resolution facial attribute editing, dubbed CooGAN, is proposed to explicitly address the above issues. It adopts a divide-and-combine strategy to break the intractable whole-HR-image generation task down into several sub-tasks, greatly reducing memory cost. More concretely, the pipeline of our framework consists of a series of local HR patch generation sub-tasks and a global LR image generation sub-task. To handle these two types of sub-tasks, our framework is likewise composed of two paths, i.e., a local network path and a global network path. Namely, the local sub-task is to generate an HR patch with edited attributes and fine-grained details, and the global sub-task is to generate an LR whole-face snapshot whose structural coordinates guide the local workers to properly recognize the correct patch semantics. As only tiny-size patch (e.g., 64×64) generation sub-tasks are involved in the pipeline, the proposed framework avoids processing large feature maps. As a result, the framework is very lightweight and suited for resource-constrained scenarios.

Addressing the memory consumption problem:

Strategy: cut the HR image into patches as the input of the local network, which generates fine-grained details; resize the whole HR image to LR as the input of the global network, which learns the semantic information. Since the local network generates patch images, adjacent patches will not be coherent by themselves, so the global image is needed to guide them into a smooth stitching. How is this smoothing achieved? That is exactly the multi-scale feature fusion task below; the sketch that follows illustrates the splitting step.
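As a concrete illustration of the divide-and-combine idea, here is a minimal PyTorch sketch (the helper name and the 128×128 LR snapshot size are assumptions for illustration, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def split_into_patches(img: torch.Tensor, patch: int = 64) -> torch.Tensor:
    """Cut a (B, C, H, W) image into non-overlapping patch x patch tiles."""
    b, c, h, w = img.shape
    tiles = img.unfold(2, patch, patch).unfold(3, patch, patch)
    # tiles: (B, C, H//p, W//p, p, p); fold the tile grid into the batch dim
    return tiles.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, patch, patch)

hr = torch.randn(1, 3, 768, 768)   # one HR face image
# Global path input: a small LR snapshot (128x128 chosen for illustration)
lr = F.interpolate(hr, size=(128, 128), mode="bilinear", align_corners=False)
# Local path input: 64x64 tiles, processed a few at a time, so the networks
# never hold HR-sized feature maps (only the raw image itself is HR)
patches = split_into_patches(hr)   # -> (144, 3, 64, 64)
print(lr.shape, patches.shape)
```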

In addition, a local-to-global consistency objective is proposed to enforce the cooperation of sub-modules, which guarantees between-patch contextual consistency and appearance smoothness at fine-grained scale. Moreover, we design a variant of Simple Recurrent Units (SRU) [19], the Light Selective Transfer Unit (LSTU), for multi-scale features fusion. The GRU-based STU [20] has similar functions, but needs two states (one obtained from the encoder, the other from the higher-level hidden state) to infer the selected skip feature. As a result, it faces a heavy computing burden and is not friendly to GPU acceleration. Unlike the STU, our SRU-based LSTU needs just a single hidden state to get the gating signal, which greatly reduces the complexity of the unit. In fact, the LSTU has only half as many parameters as the STU and almost the same multi-scale features fusion effect, achieving a good balance between model efficiency and output image fidelity. Under this design, the framework is able to selectively and efficiently transfer the shallow semantics from the encoder to the decoder, enabling more effective multi-scale features fusion under constrained memory consumption.

[In short, cooperation (smooth patch stitching) is achieved in two ways: 1) the local-to-global consistency loss, and 2) the Light Selective Transfer Unit (LSTU) for multi-scale feature fusion, which is itself lightweight.]

Methodology

Overview

The proposed CooGAN framework for conditional facial generation presents two innovative modules, the global module and the local module. The global module is designed to generate the LR translated facial image, and the local module aims at generating HR facial image patches and stitching them together. A cooperation mechanism is introduced to make these two modules work together, so that the global module provides the local module with a global-to-local spatial consistency constraint. In addition, to guarantee the performance and editability of the generated images, we propose a well-designed unit, the LSTU, to filter the features from the latent space and infuse them with detail information inside the naive skip connection.


Cascaded Global-to-Local Face Translation Architecture

The CooGAN consists of two interdependent generation modules. We depict the framework architecture in Fig. 2.

The framework has two modules: on the left is the global image translation module, on the right is the local patch refinement module. Each module contains a generator and a discriminator, and the two modules cooperate through the cooperation mechanism in the middle.

  • Global module

1. Function: the global module generates the translated snapshot, which carries the spatial coordinate information of the whole image. Note that the main purpose of the global module is to guarantee the global semantic consistency of the final result.

2. Structure: a global-aware generator (Gg) and a global-aware discriminator (Dg).

3. Details: Gg -- a conventional U-Net enhanced with LSTUs; its input is the downsampled image (reducing memory consumption), and its output is a low-resolution generated image, called the snapshot image.

Dg -- two inputs sharing one feature extraction network; see Conditional Image Synthesis with Auxiliary Classifier GANs (ICML 2017).

  • Local module

1. Function: processes the patches of the high-resolution image.

2. Structure: a local-aware generator (Gl) and a local-aware discriminator (Dl).

3. Details: Gl -- the input is the concatenation of a patch from the high-resolution (ground-truth) image with the corresponding patch cut from the upsampled snapshot image (the low-resolution image generated by the global module);

the output is the generated patch for each input patch pair; these outputs do not overlap.

Dl -- to avoid inconsistency between generated patches, a local-aware discriminator is introduced so that the final stitched output is smooth and seamless. Note that this discriminator is introduced only to train the model, so it adds no memory overhead at the inference stage.

  • Cooperation mechanism

1. Problem: the high-resolution image is cut into several patches, and after passing through the network, adjacent patches may not be coherent.

2. Function: an effective cooperation mechanism is introduced to encourage the two generators to cooperate well and achieve satisfactory generation performance. The global LR image is upsampled and then cut into patches; these patches carry global spatial coordinate information, while the patches of the HR image carry detailed texture information. In this way the two kinds of information are combined: the global spatial information makes the generated local patches smoother and globally consistent, and the detailed texture information preserves the generation quality.
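Under that description, the input of the local generator could be assembled as in the following sketch (reusing split_into_patches from the earlier sketch; all names are hypothetical rather than the authors' code):

```python
import torch
import torch.nn.functional as F

def build_local_inputs(hr_img: torch.Tensor, snapshot: torch.Tensor,
                       patch: int = 64) -> torch.Tensor:
    """Concatenate HR patches (texture) with snapshot patches (coordinates)."""
    # Upsample the LR snapshot produced by the global module to HR size
    snap_up = F.interpolate(snapshot, size=hr_img.shape[-2:],
                            mode="bilinear", align_corners=False)
    hr_patches = split_into_patches(hr_img, patch)     # detailed texture
    snap_patches = split_into_patches(snap_up, patch)  # global spatial cues
    # Channel-wise concatenation: each pair is one local-generator input
    return torch.cat([hr_patches, snap_patches], dim=1)
```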

Objective Function

  • Image reconstruction loss

From the equation above, the "0" indicates that the input HR image and the generated HR image share the same attributes; this loss is therefore realized through self-supervision.
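The equation itself did not survive extraction; a plausible reconstruction, assuming the STGAN-style formulation that the "0" remark implies (notation mine, not from the note):

```latex
\mathcal{L}_{rec} = \left\lVert x - G(x, \mathbf{0}) \right\rVert_1
```

That is, with an all-zero attribute-difference vector the generator must reproduce the input, which requires no labels beyond the image itself.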

  • Adversarial loss

To mitigate the instability of the adversarial loss during training, a gradient penalty is introduced.

x is the input real image, ŷ is the generated image, and x̂ is a point sampled along the straight line between the real image distribution and the generated image distribution.
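The description of x, ŷ and x̂ matches the standard WGAN-GP objective; under that assumption, the loss presumably has the form:

```latex
\mathcal{L}_{adv} = \mathbb{E}_{x}\left[D(x)\right] - \mathbb{E}_{\hat{y}}\left[D(\hat{y})\right]
- \lambda \, \mathbb{E}_{\hat{x}}\!\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right]
```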

  • Attribute editing loss.
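The note gives no detail for this loss. In AttGAN/STGAN-style frameworks it is typically a binary cross-entropy from an attribute classifier C evaluated on the edited output; the following generic form is an assumption, not taken from the paper, with a^t the target attribute vector:

```latex
\mathcal{L}_{att} = \mathbb{E}\Big[\sum_{i} -a^{t}_{i} \log C_{i}(\hat{y})
- \left(1 - a^{t}_{i}\right) \log\!\left(1 - C_{i}(\hat{y})\right)\Big]
```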

Light Selective Transfer Unit

The most popular method for multi-scale features fusion in image-to-image translation is the Skip Connection structure. It helps the network balance the contradiction between the pursuit of larger receptive field and loss of more details. One of its classic deployments is U-Net. However, there is a fatal drawback of the original skip connection: it will degrade the function of deeper parts and further damage the effectiveness of condition injection. From Table 1, it is obvious that PSNR increases but attribute editing accuracy decreases when the skip connection number multiplies. A detailed graph showing the editing accuracy of each specific attribute is given in suppl. STGAN [20] tries to alleviate the problem with the STU, a variant of GRU [7,9], which uses the latent feature to control the information transfer in the skip connection through the unit. This feature carries the conditional information added through the concatenation. Unfortunately, such a unit omits the spatial and temporal complexity and it is a time-consuming process for the underlying feature to bubble up from the bottleneck to drive the STU of each layer.

Table 1: PSNR/SSIM performance and average attribute-editing accuracy for models with different numbers of skip connections (SC); SCi denotes the model with i skip connections.

First, several fusion methods are analyzed:

1. U-Net: the skip connection has a fatal drawback: it weakens the function of the deeper parts of the network and further damages the effectiveness of condition injection. (The idea is simple: skip connections bring shallow features forward, so the features of the deeper layers are diluted. As Fig. 2 shows, the attribute vector is injected in the deep layers of the network, so skip connections work against the expression of the attribute condition.) Table 1 makes this explicit: as the number of skip connections grows, PSNR increases but attribute-editing accuracy drops.

2. STU: STGAN [A Unified Selective Transfer Network for Arbitrary Image Attribute Editing] tries to alleviate this problem with the STU, a GRU variant that uses the latent feature to control the information passing through the skip connection; this feature carries the conditional information added through concatenation. Unfortunately, such a unit ignores spatial and temporal complexity, and having the bottom feature bubble up from the bottleneck to drive the STU of every layer is time-consuming. (If the STU is unclear, see [A Unified Selective Transfer Network for Arbitrary Image Attribute Editing] and [Simple Recurrent Units for Highly Parallelizable Recurrence].)

Next, the LSTU is introduced.

To explicitly address the mentioned problem, we present our framework to employ the Light Selective Transfer Unit (LSTU) to efficiently and selectively transfer the encoder feature. LSTU is an SRU-based unit with a totally different information flow. Compared to the STU, our LSTU discards the dependence on two states when calculating the gating signal, which greatly reduces the parameters while the unit remains effective. The detailed structure of LSTU is shown in Fig. 3. Without loss of generality, we choose the $l$-th LSTU as an analysis example. The $l$-th layer feature coming from the encoder side is denoted as $f_{enc}^{l}$, and $h^{l+1}$ denotes the feature in the adjacent deeper layer; it contains the filtered latent state information of that layer. $h^{l+1}$ is first concatenated with the attribute difference $A_d$ to obtain the up-sampled hidden state $\hat{h}^{l}$. Then $\hat{h}^{l}$ is used to independently calculate the masks for the forget gate and reset gate. $W_T$, $W_{1\times1}$, $W_f$ and $W_r$ represent the parameter matrices of the transposed convolution, linear transform, forget gate and reset gate. The further process is similar to the SRU. The equations of the gates are shown on the right side of Fig. 3.

Fig. 3: The structure of the LSTU. The design of the LSTU is inspired by the SRU; the LSTU is lighter than the STU and better suited to GPU parallel acceleration. On the right are the mathematical expressions of the LSTU inference process. LR is short for LeakyReLU.
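Reading the description and Fig. 3 together, a plausible PyTorch rendering of the LSTU dataflow looks like the sketch below. The single-state gating is the point being illustrated; the exact gate arithmetic and layer shapes are assumptions and should be checked against the paper.

```python
import torch
import torch.nn as nn

class LSTU(nn.Module):
    """Sketch of the Light Selective Transfer Unit (assumed details).

    Following the text: the deeper hidden state is concatenated with the
    attribute difference and up-sampled by a transposed convolution (W_T);
    this single state alone drives both gates, unlike the two-state STU.
    """

    def __init__(self, enc_ch: int, hid_ch: int, attr_dim: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(hid_ch + attr_dim, enc_ch, 4, 2, 1)  # W_T
        self.lin = nn.Conv2d(enc_ch, enc_ch, 1)           # W_1x1, linear transform
        self.forget = nn.Conv2d(enc_ch, enc_ch, 3, 1, 1)  # W_f, forget gate
        self.reset = nn.Conv2d(enc_ch, enc_ch, 3, 1, 1)   # W_r, reset gate
        self.act = nn.LeakyReLU(0.2)                      # "LR" in Fig. 3

    def forward(self, f_enc, h_deep, attr_diff):
        # Broadcast the attribute-difference vector over the spatial grid
        a = attr_diff.view(*attr_diff.shape, 1, 1).expand(-1, -1,
                                                          *h_deep.shape[-2:])
        h_up = self.act(self.up(torch.cat([h_deep, a], dim=1)))
        f = torch.sigmoid(self.forget(h_up))  # forget mask from a single state
        r = torch.sigmoid(self.reset(h_up))   # reset mask from the same state
        c = f * h_up + (1 - f) * self.lin(f_enc)        # SRU-style cell mixing
        return r * torch.tanh(c) + (1 - r) * f_enc      # selected skip feature

# Example shapes (13 attributes as in CelebA-style editing, illustrative):
# f_enc: (B, 64, 128, 128), h_deep: (B, 128, 64, 64), attr_diff: (B, 13)
# lstu = LSTU(enc_ch=64, hid_ch=128, attr_dim=13)
```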

For comparison, the figure below shows the structure of the STU:

Further reading:

CascadePSP: Toward Class-Agnostic and Very High-Resolution Segmentation via Global and Local Refinement

https://openaccess.thecvf.com/content_CVPR_2020/papers/Cheng_CascadePSP_Toward_Class-Agnostic_and_Very_High-Resolution_Segmentation_via_Global_and_CVPR_2020_paper.pdf

FHDe2Net: Full High Definition Demoireing Network

https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123670715.pdf

High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling

https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123640001.pdf

High-frequency Component Helps Explain the Generalization of Convolutional Neural Networks

https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_High-Frequency_Component_Helps_Explain_the_Generalization_of_Convolutional_Neural_Networks_CVPR_2020_paper.pdf

Nighttime Defogging Using High-Low Frequency Decomposition and Grayscale-Color Networks

https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123570460.pdf
