In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires generating semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global contents consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.

Image completion, as a common image editing operation, aims to fill the missing or masked regions in images with plausibly synthesized contents. The generated contents can either be as accurate as the original, or simply fit well within the context such that the completed image appears to be visually realistic. Most existing completion algorithms [2, 10] rely on low-level cues to search for patches from known regions of the same image and synthesize contents that locally appear similar to the matched patches. These approaches are all fundamentally constrained to copying existing patterns and structures from the known regions. The copy-and-paste strategy performs particularly well for background completion (e.g., grass, sky, and mountain) by removing foreground objects and filling the unknown regions with similar patterns from backgrounds.

However, the assumption that similar patterns can be found in the same image does not hold for filling missing parts of an object image (e.g., a face). Many object parts contain unique patterns, which cannot be matched to other patches within the input image, as shown in Figure 1(b). An alternative is to use external databases as references [9]. Although similar patches or images may be found, the unique patterns of objects that involve semantic representation are not well modeled, since both low-level [2] and mid-level [10] visual cues of the known regions are not sufficient to infer semantically valid contents in missing regions.

In this paper, we propose an effective object completion algorithm using a deep generative model. The input is first masked with noise pixels on a randomly selected square region, and then fed into an autoencoder [25]. While the encoder maps the masked input to hidden representations, the decoder generates a filled image as its output.
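As a minimal sketch of this input preparation step, the corruption of a random square region with noise pixels might look as follows; the image size, mask size, and uniform noise distribution are our illustrative assumptions, not necessarily the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_with_noise(img, size=64):
    """Overwrite a randomly placed size-by-size square with uniform noise;
    return the corrupted image and the boolean mask of unknown pixels."""
    h, w, c = img.shape
    y = int(rng.integers(0, h - size + 1))
    x = int(rng.integers(0, w - size + 1))
    masked = img.copy()
    masked[y:y + size, x:x + size] = rng.uniform(0.0, 1.0, (size, size, c))
    mask = np.zeros((h, w), dtype=bool)
    mask[y:y + size, x:x + size] = True
    return masked, mask

img = rng.uniform(0.0, 1.0, (128, 128, 3))
masked, mask = mask_with_noise(img)
assert np.array_equal(masked[~mask], img[~mask])  # known pixels untouched
```

The corrupted image is what the encoder sees; the mask is kept so that the losses can be restricted to the unknown region.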
We regularize the training process of the generative model by introducing two adversarial losses [8]: a local loss for the missing region to ensure the generated contents are semantically coherent, and a global one for the entire image to render more realistic and visually pleasing results. In addition, we also propose to use a face parsing network [14, 22, 13] as an additional loss to regularize the generation procedure and enforce a result that is more reasonable and consistent with the context.
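Schematically, the combined objective can be sketched as below; the weights and the exact adversarial and parsing formulations are illustrative placeholders, not the paper's reported terms.

```python
import numpy as np

def total_loss(x, x_hat, mask, d_local, d_global, parse_fake, parse_real,
               w_adv=0.3, w_parse=0.05):
    """Combined objective sketch: L2 reconstruction over the hole, local and
    global non-saturating generator terms (-log D), plus an L2 stand-in for
    the pixel-wise parsing loss. The weights here are made-up placeholders."""
    l_rec = np.mean((x_hat[mask] - x[mask]) ** 2)
    l_adv = -np.log(d_local + 1e-8) - np.log(d_global + 1e-8)
    l_parse = np.mean((parse_fake - parse_real) ** 2)
    return l_rec + w_adv * l_adv + w_parse * l_parse

# Dummy values, just to show the call shape.
x, x_hat = np.zeros((8, 8)), np.full((8, 8), 0.1)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
loss = total_loss(x, x_hat, mask, d_local=0.5, d_global=0.5,
                  parse_fake=np.zeros(4), parse_real=np.zeros(4))
```

Here `d_local` and `d_global` stand for the two discriminators' outputs on the generated content, which is what distinguishes this objective from a plain reconstruction loss.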
This generative model allows fast feed-forward image completion without requiring an external database as reference. For concreteness, we apply the proposed object completion algorithm to face images.

The main contributions of this work are summarized as follows. First, we propose a deep generative completion model that consists of an encoding-decoding generator and two adversarial discriminators to synthesize the missing contents from random noise. Second, we tackle the challenging face completion task and show the proposed model is able to generate semantically valid patterns based on learned representations of this object class. Third, we demonstrate the effectiveness of semantic parsing in generation, which renders completion results that look both more plausible and consistent with surrounding contexts.

Image completion. Image completion has been studied in numerous contexts, e.g., inpainting, texture synthesis, and sparse signal recovery. Since a thorough literature review is beyond the scope of this paper, we discuss only the most representative methods to put our work in proper context. An early inpainting method [4] exploits a diffusion equation to iteratively propagate low-level features from known regions to unknown areas along the mask boundaries. While it performs well on inpainting, it is limited to dealing with small and homogeneous regions.
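In this spirit, an isotropic diffusion fill can be sketched in a few lines; this is a simplified stand-in for [4], which uses an anisotropic, edge-aware formulation.

```python
import numpy as np

def diffusion_inpaint(img, mask, iters=200):
    """Fill unknown pixels by repeatedly averaging their four neighbours,
    propagating known values inward from the mask boundary."""
    out = img.copy()
    out[mask] = 0.0                      # initialize the hole
    for _ in range(iters):
        nbr = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = nbr[mask]            # only unknown pixels are updated
    return out

img = np.full((32, 32), 0.5)             # a homogeneous "region"
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 12:20] = True                # an 8 x 8 hole
filled = diffusion_inpaint(img, mask)
```

On a homogeneous region such as this constant image, the hole converges to the surrounding value, which is exactly the regime where diffusion works well; textured or large holes are where it fails.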
Another method has been developed to further improve inpainting results by introducing texture synthesis [5]. In [29], a patch prior is learned to restore images with missing pixels. Recently, Ren et al. [20] learn a convolutional network for inpainting. The performance of image completion is significantly improved by an efficient patch matching algorithm [2] for nonparametric texture synthesis. While it performs well when similar patches can be found, it is likely to fail when the source image does not contain a sufficient amount of data to fill in the unknown regions. We note this typically occurs in object completion, as each part is likely to be unique and no plausible patches for the missing region can be found. Although this problem can be alleviated by using an external database [9], the ensuing issue is the need to learn a high-level representation of one specific object class for patch matching.
Wright et al. [27] cast image completion as the task of recovering sparse signals from inputs. By solving a sparse linear system, an image can be recovered from a corrupted input. However, this algorithm requires the images to be highly structured (i.e., data points are assumed to lie in a low-dimensional subspace), e.g., well-aligned face images. In contrast, our algorithm is able to perform object completion without such strict constraints.
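The low-dimensional subspace assumption can be illustrated with a small toy example. Ordinary least squares is used here as a stand-in for the sparse-coding formulation of [27]; the basis `B` plays the role of a dictionary learned from aligned faces, and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

d, k = 100, 5                       # pixels per image, subspace dimension
B = rng.normal(size=(d, k))         # basis, e.g. from PCA on aligned faces
x_true = B @ rng.normal(size=k)     # a "face" lying exactly in the subspace

observed = np.ones(d, dtype=bool)
observed[30:60] = False             # 30 missing pixels

# Fit the subspace coefficients to the observed pixels, then reconstruct all.
coef, *_ = np.linalg.lstsq(B[observed], x_true[observed], rcond=None)
x_hat = B @ coef
```

Because the signal lies exactly in the span of `B`, the missing pixels are recovered perfectly from the observed ones; real images violate this assumption unless they are highly structured and well aligned, which is the limitation noted above.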

Image generation. Vincent et al. [24] introduce denoising autoencoders that learn to reconstruct clean signals from corrupted inputs. In [7], Dosovitskiy et al. demonstrate that an object image can be reconstructed by inverting deep convolutional network features (e.g., VGG [21]) through a decoder network. Kingma et al. [11] propose variational autoencoders (VAEs) which regularize encoders by imposing a prior over the latent units such that images can be generated by sampling from or interpolating latent units. However, the images generated by a VAE are usually blurry due to its training objective based on pixel-wise Gaussian likelihood. Larsen et al. [12] improve the VAE by adding a discriminator for adversarial training, which stems from generative adversarial networks (GANs) [8], and demonstrate that more realistic images can be generated.
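For reference, the two VAE ingredients mentioned here, sampling through the reparameterization trick and the Gaussian prior (KL) term, can be sketched as follows; shapes and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(3)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(log_var)) * eps

def kl_to_standard_normal(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian -- the VAE prior term."""
    mu, log_var = np.asarray(mu), np.asarray(log_var)
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
```

The KL term is what imposes the prior over the latent units; the pixel-wise Gaussian likelihood that accompanies it is the source of the blurriness noted above.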
Closest to this work is the method proposed by Pathak et al. [17], which applies an autoencoder and integrates learning visual representations with image completion. However, this approach emphasizes unsupervised learning of representations more than image completion. In essence, this is a chicken-and-egg problem. Despite the promising results on object detection, it is still not entirely clear whether image completion can provide sufficient supervision signals for learning high-level features. On the other hand, semantic labels or segmentations are likely to be useful for improving the completion results, especially on a certain object category. With the goal of achieving high-quality image completion, we propose to use an additional semantic parsing network to regularize the generative networks. Our model deals with severe image corruption (large regions with missing pixels), and develops a combined reconstruction, adversarial and parsing loss for face completion.

To effectively train our network, we use a curriculum strategy [3] by gradually increasing the difficulty level and network scale. The training process is scheduled in three stages. First, we train the network using the reconstruction loss to obtain blurry contents. Second, we fine-tune the network with the local adversarial loss. The global adversarial loss and semantic regularization are incorporated at the last stage, as shown in Figure 3. Each stage prepares features for the next one to improve, and hence greatly increases the effectiveness and efficiency of network training.
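The staged schedule can be encoded as a simple lookup; the stage names and coefficient values below are illustrative placeholders, not numbers reported in the paper.

```python
# Hypothetical three-stage curriculum: reconstruction only, then adding the
# local adversarial loss, then the global adversarial and parsing losses.
SCHEDULE = [
    ("reconstruction",     {"rec": 1.0, "adv_local": 0.0, "adv_global": 0.0, "parse": 0.0}),
    ("local_adversarial",  {"rec": 1.0, "adv_local": 0.3, "adv_global": 0.0, "parse": 0.0}),
    ("global_adv_parsing", {"rec": 1.0, "adv_local": 0.3, "adv_global": 0.3, "parse": 0.05}),
]

def loss_weights(stage):
    """Loss weights active at a given curriculum stage (clamped to the last)."""
    name, weights = SCHEDULE[min(stage, len(SCHEDULE) - 1)]
    return weights
```

A training loop would consult `loss_weights(stage)` when assembling the total loss, so that each new term is switched on only after the previous stage has converged.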
For example, in Figure 3, the reconstruction stage (c) restores the rough shape of the missing eye although the contents are blurry. The local adversarial stage (d) then generates more details to make the eye region visually realistic, and the global adversarial stage (e) refines the whole image to ensure that the appearance is consistent around the boundary of the mask. The semantic regularization (f) finally enforces more consistency between components and lets the generated result be closer to the actual face. When training with the adversarial loss, we use a method similar to [19], especially to avoid the case where the discriminator is too strong at the beginning of the training process.

Qualitative results. Figure 6 shows our face completion results on the CelebA test dataset. In each test image, the mask covers at least one key facial component. The third column of each panel shows that our completion results are visually realistic and pleasing. Note that during testing, the mask does not need to be restricted to a 64 × 64 square, but the total number of masked pixels is suggested to be no more than 64 × 64 pixels.
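One way to draw such arbitrary masks under the pixel budget is to accumulate random rectangles and reject any that would exceed it; this construction is entirely our illustration of the constraint, not the paper's test protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_masks(h=128, w=128, budget=64 * 64, n_blocks=4):
    """Union of randomly placed rectangles; a rectangle is kept only if the
    total number of masked pixels stays within the budget."""
    mask = np.zeros((h, w), dtype=bool)
    for _ in range(n_blocks):
        bh = int(rng.integers(8, 40))
        bw = int(rng.integers(8, 40))
        y = int(rng.integers(0, h - bh + 1))
        x = int(rng.integers(0, w - bw + 1))
        trial = mask.copy()
        trial[y:y + bh, x:x + bw] = True
        if trial.sum() <= budget:
            mask = trial
    return mask

m = random_masks()
```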
We show typical examples with one big mask covering at least two face components (e.g., eyes, mouths, eyebrows, hair, noses) in the first two rows. We specifically present more results on eye regions since they can better reflect how realistic the newly generated faces are with the proposed algorithm. Overall, the algorithm can successfully complete images with faces in side views, or partially/completely corrupted by masks with different shapes and sizes. We present a few examples in the third row where real occlusion (e.g., wearing glasses) occurs. As it is sometimes subjective whether a region in the image is occluded or not, we give users the option to assign the occluded regions through drawing masks. The results clearly show that our model is able to restore the partially masked eyeglasses, or remove the whole eyeglasses or just the frames by filling in realistic eyes and eyebrows. In the last row, we present examples with multiple, randomly drawn masks, which are closer to real-world application scenarios. Figure 7 presents completion results where different key parts (e.g., eyes, nose, and mouth) of the same input face image are masked. It shows that our completion results are consistent and realistic regardless of the mask shapes and locations.
Quantitative results. In addition to the visual results, we also perform quantitative evaluation using three metrics on the CelebA test dataset (19,962 images). The first one is the peak signal-to-noise ratio (PSNR), which directly measures the difference in pixel values. The second one is the structural similarity index (SSIM), which estimates the holistic similarity between two images. Lastly, we use the identity distance measured by the OpenFace toolbox [1] to determine the high-level semantic similarity of two faces. These three metrics are computed between the completion results obtained by different methods and the original face images.
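For reference, PSNR and a single-window SSIM can be computed as below; the standard SSIM metric averages this statistic over small local windows rather than taking one global window.

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(a, b, peak=1.0):
    """SSIM computed over the whole image as one window (the standard metric
    averages this statistic over small local windows)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Both metrics are computed against the original, uncorrupted image; the identity distance additionally requires running a face-recognition network on both images.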
The results are shown in Tables 1-3. Specifically, the step-wise contribution of each component is shown from the 2nd to the 5th column of each table, where M1-M5 correspond to five different settings of our own model in Figure 3 and O1-O6 are six different masks for evaluation as shown in Figure 8. We then compare our model with the Context Encoder [17] (CE). Since the CE model is originally not trained for faces, we retrain the CE model on the CelebA dataset for fair comparisons. As the evaluated masks O1-O6 are not in the image center, we use the inpaintRandom version of their code and mask 25% of the pixels in each image. Finally, we also replace the non-mask region of the output with the original pixels. The comparison between our model (M4) and CE in the 5th and 6th columns shows that our model performs generally better than the CE model, especially on large masks (e.g., O1-O3, O6). In the last column, we show that Poisson blending [18] can further improve the performance.

Note that we obtain relatively higher PSNR and SSIM values when using the reconstruction loss (M1) only, but this does not imply better qualitative results, as shown in Figure 3(c). These two metrics simply favor smooth and blurry results. We note that the model M1 performs poorly as it hardly recovers anything and is unlikely to preserve the identity well, as shown in Table 3. Although the mask size is fixed at 64 × 64 during training, we test different sizes, ranging from 16 to 80 with a step of 8, to evaluate the generalization ability of our model. Figure 9 shows the quantitative results. The performance of the proposed model gradually drops with increasing mask size, which is expected as a larger mask size introduces more uncertainty in pixel values. But generally our model performs well for smaller mask sizes (smaller than 64). We observe a local minimum around the medium size (e.g., 32). This is because a medium-sized mask is most likely to occlude only part of a component (e.g., half an eye). It is found in experiments that generating a part of a component is more difficult than synthesizing new pixels for the whole component.

Traversing in latent space.
The missing region, although semantically constrained by the remaining pixels in an image, accommodates different plausible appearances, as shown in Figure 10. We observe that when the mask is filled with different noise, all the generated contents are semantically realistic and consistent, but their appearances vary. This is different from the context encoder [17], where the mask is filled with zero values and thus the model only renders a single completion result. It should be noted that under different input noise, the variations of our generated contents are unlikely to be as large as those in the original GAN [8, 19] model, which is able to generate completely different faces. This is mainly due to the constraints from the contexts (i.e., non-mask regions). For example, in the second row of Figure 10 with only one eyebrow masked, the generated eyebrow is restricted to have a similar shape and size, and a reasonable position with respect to, the other eyebrow. Therefore the variations in the appearance of the generated eyebrow are mainly reflected in some details, such as the shade of the eyebrow.

We carry out experiments with four variations of the probe image: the original one, and the ones completed by simply filling random noise, by our reconstruction-based model M1, and by our final model M5. The recognition performance using original probe faces is regarded as the upper bound. Figure 11 shows that using the probes completed by our model M5 (green) achieves the closest performance to the upper bound (blue). Although there is still a large gap between the performance of our M5-based recognition and the upper bound, especially when the mask is large (e.g., O1, O2), the proposed algorithm makes a significant improvement with the completion results compared with either noise filling or the reconstruction loss (Lr). We consider identity-preserving completion to be an interesting direction to pursue.

Although our model is able to generate semantically plausible and visually pleasing contents, it has some limitations. The faces in the CelebA dataset are roughly cropped and aligned [15]. We implement various data augmentations to improve the robustness of learning, but find our model still cannot handle some unaligned faces well. We show one failure case in the first row of Figure 12. The unpleasant synthesized contents indicate that the network does not recognize the position/orientation of the face and its corresponding components. This issue can be alleviated with 3D data augmentation. In addition, our model does not fully exploit the spatial correlations between adjacent pixels, as shown in the second row of Figure 12. The proposed model fails to recover the correct color of the lip, which is originally painted with red lipstick. In future work, we plan to investigate the usage of pixel-level recurrent neural networks (PixelRNN [23]) to address this issue.
