Drawing the Inner World of a Story Using GauGAN in a Real Environment

Both AI and interactive storytelling are complex and unpredictable systems. As we deepened Marrow’s design process, the challenge of combining those two systems into one coherent experience became apparent. On the one hand, as the authors, we developed AI systems and real-time interactions to lead the flow of the experience. On the other hand, we wished to tell a story that also provokes the participants’ imagination and emotions.


Marrow is a story about the possibility of mental illness in machine learning models, focusing mainly on Generative Adversarial Networks (GAN). We question what kind of mental disorders could emerge in advanced AIs, and invite participants to play in an interactive theater scene operated by GAN. Together they play as one dysfunctional family of AIs. Since we are dealing with very abstract and complex concepts, we wanted to explore multiple ways to communicate the story, more than just through dialogue between the family members. Our tactic was to make the room more ‘alive,’ reflecting on the embodied models’ mental state. We wanted to dissolve the barriers between the participants and the environment; to slowly immerse them in an unfamiliar magical experience within that room. The room and the dinner scene were an invitation to let go and indulge in an emotional affair with three other strangers.


Photo by Andre Bendahan at the NFB labs 2020 © All rights reserved

In practice, this meant that we had to implement GAN networks that frequently interacted with the environment and with the participants. Since GAN’s training process does not happen in real-time, this became a challenge of manipulating the output of a pre-trained GAN network in response to real-time changes in the environment. To explain our solution, we first need to look at the difference between standard GANs and conditional GANs.


Standard vs. Conditional GANs

In its basic form, GAN is trained to produce new images that are visually similar to the training set. If we used a dataset of faces, it would generate new faces. If we trained it on cats, it would render new cats. It maintains variability (not producing the same image every time) by taking an input ‘noise’ vector (essentially a series of random numbers) and using it as the basis for the output image. Thus, if we want to connect GAN’s output to changes in the environment, we need to manipulate the noise vector based on those changes. However, as we showed in our previous post, there is hardly any control over what kind of change would emerge as a result of changing the noise vector.

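To make that limitation concrete, here is a minimal sketch of what coupling room variables to the latent vector could look like. The generator file, the `room_state_to_latent` helper, and the choice of room variables are all hypothetical, for illustration only:

```python
import torch

# Hypothetical pre-trained generator exported as TorchScript: z -> image tensor.
# Any GAN generator with this interface would behave the same way.
G = torch.jit.load("generator.pt")
G.eval()

LATENT_DIM = 512

def room_state_to_latent(positions, mood_score, base_z):
    """Perturb a base latent vector with real-time room variables.

    The mapping is arbitrary by necessity: we can guarantee that the
    output changes smoothly with the inputs, but not *what* changes.
    """
    offset = torch.zeros(LATENT_DIM)
    offset[: len(positions)] = torch.tensor(positions)  # participant positions
    offset[-1] = mood_score                             # e.g. mood analysis in [-1, 1]
    return base_z + 0.1 * offset                        # small step keeps the image coherent

base_z = torch.randn(1, LATENT_DIM)
with torch.no_grad():
    frame = G(room_state_to_latent([0.3, 0.7], mood_score=-0.5, base_z=base_z))
```

The sketch exposes the problem: the offset reliably nudges the image, but whether the nudge shows up as a new face, a color shift, or something else entirely is outside the author’s control.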

We could link different variables in the physical room (such as the participants’ position, the position of objects, and mood analysis of the participants) to the generated output, but the lack of precise control over the output results in a tenuous connection to the environment.


Illustration by the authors demonstrating the difference between Standard GAN and Conditional GAN

That is where conditional GANs enter the picture. Instead of training on one set of images, we train the network on pairs consisting of an image and a label (numerical input), conditioning it to generate one type of image when presented with a specific kind of label. That grants the user full control over how GAN generates its output for a particular input. The result still varies along with the noise vector, as in the original GAN. However, now the author can create meaningful interactions with the environment. One of the most famous conditional GANs is Pix2Pix.

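A toy sketch of the structural difference, using a hypothetical generator; the only change from the standard setup is that a label travels alongside the noise:

```python
import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM = 10, 128

class ConditionalGenerator(nn.Module):
    """Toy conditional generator: the label is embedded and concatenated
    with the noise, so the same z yields a different image per class."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256),
            nn.ReLU(),
            nn.Linear(256, 64 * 64 * 3),
            nn.Tanh(),
        )

    def forward(self, z, label):
        cond = self.embed(label)                     # (batch, NUM_CLASSES)
        out = self.net(torch.cat([z, cond], dim=1))  # noise + condition
        return out.view(-1, 3, 64, 64)

G = ConditionalGenerator()
z = torch.randn(1, LATENT_DIM)
image = G(z, torch.tensor([3]))  # class 3: always the same *type* of image
```

Varying `z` while holding the label fixed gives variability within a type; varying the label is the meaningful, author-controlled interaction.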

From https://phillipi.github.io/pix2pix/

It is a general-purpose image-to-image translator. It can be conditioned on any type of image to generate another. It analyzes pixels in both images, learning how to convert from one color to another. Pix2Pix is used in a variety of ways, such as transforming sketches into paintings and colormaps into photos. We have also used it in our prototype to convert a human’s colored pose analysis to a generated human from stock images of families.

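At inference time, driving such a translator amounts to normalizing the input image and running the generator. Here is a sketch under the usual Pix2Pix conventions (256x256 input, values normalized to [-1, 1]); the exported generator file and the pose-colormap filename are placeholders:

```python
import torch
from PIL import Image
from torchvision import transforms

netG = torch.jit.load("pix2pix_generator.pt")  # placeholder exported generator
netG.eval()

to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),                         # [0, 1]
    transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # -> [-1, 1], the Pix2Pix convention
])

# A colored pose analysis, as in our prototype, is just another input colormap.
sketch = to_tensor(Image.open("pose_colormap.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    fake = netG(sketch)                            # translated image in [-1, 1]

out = ((fake.squeeze(0) + 1) / 2).clamp(0, 1)      # back to [0, 1]
transforms.ToPILImage()(out).save("generated_person.png")
```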

GauGAN

Where Pix2Pix finds its strength, as a generic translator from any image to any image, it also finds its weakness. Relying only on color misses out on metadata that one could feed into the network. The algorithm looks only at shapes and colors. It cannot differentiate between a dinner plate and a flying saucer if they look visually similar in the photo. That is what the researchers at NVIDIA addressed when they created GauGAN. Named after the post-Impressionist painter Paul Gauguin, GauGAN also creates realistic images from colormaps. However, instead of learning pixel values, it learns the semantic data of the image. The project is also known as SPADE: Semantic Image Synthesis with Spatially-Adaptive Normalization. Instead of learning where green and blue are in the picture, GauGAN learns where there are grass and sky. That is possible because the images used in the training set, such as the generic database COCO-Stuff, contain semantic classifications of the different elements in the picture. The researchers were then able to demonstrate the capability of GauGAN by crafting an interactive painting tool where colors are not just colors but have meanings. When you paint green into the source sketch, you are telling GauGAN that here lies grass. Try it yourself here.

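The practical consequence is that GauGAN’s input is not a color image at all but a map of class IDs, one per pixel. A sketch of building such a semantic map and handing it to a SPADE-style generator; the class IDs, class count, and generator file are illustrative stand-ins:

```python
import numpy as np
import torch
import torch.nn.functional as F

# Illustrative COCO-Stuff class IDs; the real values come from the dataset's label list.
SKY, GRASS = 156, 123
H, W, NUM_CLASSES = 256, 256, 182

label_map = np.full((H, W), SKY, dtype=np.int64)
label_map[H // 2 :, :] = GRASS  # bottom half: "here lies grass"

# SPADE consumes the semantics as a one-hot tensor, not as colors.
one_hot = F.one_hot(torch.from_numpy(label_map), NUM_CLASSES)
one_hot = one_hot.permute(2, 0, 1).unsqueeze(0).float()  # (1, C, H, W)

netG = torch.jit.load("spade_generator.pt")  # placeholder exported SPADE generator
with torch.no_grad():
    landscape = netG(one_hot)                # photorealistic grass under sky
```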

https://www.nvidia.com/en-us/research/ai-playground/

Connecting GauGAN to a real-time 360 environment

GauGAN can generate photorealistic images from hand-drawn sketches. Our goal was to have it interact with a real-time physical environment. Solving this was like putting together pieces of a puzzle:


  1. We know that NVIDIA trained GauGAN on semantic data: they used the DeepLab v2 network to analyze the COCO-Stuff database and produce labels.
  2. We know that DeepLab v2 can segment a camera stream in real time.
  3. 1+2: If we feed DeepLab’s output of a camera stream directly to GauGAN, we should get its mirrored state of reality.

The code itself was relatively straightforward and mostly had to do with format conversions between the two networks. We also upgraded DeepLab’s webcam code to stream from our 360 camera: RICOH THETA Z1. The segmentation networks are so robust that we could feed the widened stitched image straight to segmentation and generation. The result was surprisingly accurate.

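A condensed sketch of that glue code. We use torchvision’s off-the-shelf DeepLab v3 as a runnable stand-in for the project’s DeepLab v2 (the label spaces differ: 21 classes here versus COCO-Stuff’s larger set in the actual setup), and a placeholder SPADE generator; the camera index and projection step are also assumptions:

```python
import cv2
import torch
import torchvision.transforms.functional as TF

# Stand-in for DeepLab v2: torchvision's pre-trained DeepLab v3 (21 classes).
deeplab = torch.hub.load("pytorch/vision", "deeplabv3_resnet50", pretrained=True).eval()
netG = torch.jit.load("spade_generator.pt")  # placeholder SPADE/GauGAN generator
NUM_CLASSES = 21

def segment(frame_bgr):
    """Per-pixel class IDs for a stitched 360 frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    x = TF.normalize(x, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    with torch.no_grad():
        out = deeplab(x.unsqueeze(0))["out"]
    return out.argmax(1).squeeze(0)          # (H, W) int64 label map

def generate(label_map):
    """The 'format conversion': class IDs -> one-hot tensor -> GauGAN."""
    one_hot = torch.nn.functional.one_hot(label_map, NUM_CLASSES)
    one_hot = one_hot.permute(2, 0, 1).unsqueeze(0).float()
    with torch.no_grad():
        return netG(one_hot)

cap = cv2.VideoCapture(0)   # the THETA Z1 appears as a regular USB webcam when streaming
while cap.isOpened():
    ok, frame = cap.read()  # widened, stitched equirectangular image
    if not ok:
        break
    mirrored = generate(segment(frame))
    # ...convert `mirrored` to an image and project it back into the room
```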

Illustration by the authors of the GauGAN system flow

Manipulating GAN’s reality

We now had a generated mirror image, depicting GAN’s (COCO-Stuff) version of whatever the camera was witnessing in the room. But we wanted more; we wanted a space that changes according to the story and resembles the characters’ state of mind. We looked for ways to generate visuals that would connect to the story-world, to find meanings in between the words and lure the users into continuing to act, moving objects around, seeing the reflection, and wondering what this is all about.


We realized that we could interfere in the process of perception and generation. Right after DeepLab analyzes the labels in the camera stream, why not replace them with something else? For example, let’s map any recognized bowl to a sea.

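Between segmentation and generation, that interference can be a single line over the label map. A sketch with illustrative class IDs (the real ones come from the COCO-Stuff label list):

```python
import numpy as np

BOWL, SEA = 45, 154  # illustrative COCO-Stuff class IDs

def remap(label_map: np.ndarray) -> np.ndarray:
    """Swap perceived classes before GauGAN re-imagines the scene."""
    label_map = label_map.copy()
    label_map[label_map == BOWL] = SEA  # every recognized bowl becomes sea
    return label_map
```

DeepLab still honestly reports a bowl; GauGAN is simply told it saw a sea, and renders accordingly.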

Screenshot by Avner Peled while testing our GauGAN system; a bowl is replaced with a sea texture

We started looking for patterns that our characters’ stories could surface and that the physical space could support throughout the visual form: a face, a landscape, an object, a flower. Stories are recognizable patterns, and in those patterns, we find meaning. They are the signal within the noise.


When we finally got to the lab space to test it all, we discovered the effect of the physical setting. We started playing by arranging (and rearranging) strange elements and exploring the results we could achieve. We developed a scripting platform that lets us easily map objects to other objects, as shown in the sketch below. We could mask certain objects from the scene, select multiple objects at once, or invert the selection to map everything other than the objects specified. For example: ‘dinner table,’ ‘table,’ ‘desk,’ ‘desk stuff,’ ‘floor,’ ‘bed,’ ‘car’ suddenly became the same item and were mapped into a sea, while everything else was discarded, although we didn’t have a car, plastic, or bed in the space. Or ‘frisbee,’ ‘paper,’ ‘mouse,’ ‘metal,’ ‘rock,’ ‘bowl,’ ‘wine glass,’ ‘bottle’ were all mapped to ‘rock.’ Again, it is interesting to note that we didn’t have a mouse, frisbee, metal, rock, or paper in the real scene, but the network detected them. Therefore, we needed to consider them as well.

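As a sketch of what such a mapping script can boil down to (the rule format, class names, and ‘void’ convention here are illustrative, not the actual schema in Marrow’s repository):

```python
import numpy as np

# Each rule collapses several perceived labels into one target label.
RULES = [
    {"sources": ["dinner table", "table", "desk", "desk stuff", "floor", "bed", "car"],
     "target": "sea"},
    {"sources": ["frisbee", "paper", "mouse", "metal", "rock", "bowl",
                 "wine glass", "bottle"],
     "target": "rock"},
]
MASKED = {"person"}  # classes to hide from the scene entirely
VOID = 0             # illustrative 'nothing' label

def apply_rules(label_map, name_to_id, discard_rest=False):
    """Apply the mapping script to a segmented frame."""
    out = label_map.copy()
    selected = np.zeros(label_map.shape, dtype=bool)
    for rule in RULES:
        hit = np.isin(label_map, [name_to_id[n] for n in rule["sources"]])
        out[hit] = name_to_id[rule["target"]]
        selected |= hit
    if discard_rest:                    # inverted selection: drop everything else
        out[~selected] = VOID
    for name in MASKED:                 # masking individual classes
        out[label_map == name_to_id[name]] = VOID
    return out
```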

Screenshots from Marrow’s GitHub

If that wasn’t enough, we discovered that changes in the lights, shadows, and camera angles generated different labels every time, which messed up our mapping. In an interactive storytelling framework, this felt both incredible and horrific. We had a little less than ten days before the opening to refine the space and debug the technology while understanding the range of possibilities we could create with what we had just developed.


Photo by the author. First experiments. Left side: how we arranged the items. Right side: how GauGAN mapped and projected the image.
Photo by the author showing a prototype of the set: bowls and bottles transformed to boats, table to ocean.

We played together with our network; with little control over the visuals, we sought to visualize the story of our characters’ inner world.


Slowly, we started to learn the system: what works, what doesn’t, how to clean the scene, how to stabilize the lighting. We also decided to project both stages of the process: DeepLab’s colored segmentation analysis and GAN’s generated output. Gradually, the physical environment became more immersive and could link with the words of the story.

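Projecting both stages side by side is a small compositing step; a sketch, assuming the GAN frame has already been converted to an 8-bit image:

```python
import cv2
import numpy as np

PALETTE = np.random.default_rng(0).integers(0, 255, (256, 3), dtype=np.uint8)

def colorize(label_map):
    """Flat-color visualization of the DeepLab class IDs."""
    return PALETTE[label_map % 256]

def composite(label_map, gan_frame):
    """Segmentation analysis on the left, GauGAN's imagined room on the right."""
    h, w = gan_frame.shape[:2]
    seg = cv2.resize(colorize(label_map), (w, h), interpolation=cv2.INTER_NEAREST)
    return np.hstack([seg, gan_frame])
```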

Photo by Andre Bendahan at the NFB labs 2020 © All rights reserved

Reflections

  • The pre-trained SPADE/GauGAN network generates images at a low 256x256 resolution. It was hard to engage people with these kinds of visuals and make them understand what they were seeing. Achieving a higher resolution would have required us to invest more resources into training, which wasn’t possible at that time.
  • Because GauGAN is semantically aware, the context of images matters a lot. For example, mapping a desk to a sea while leaving the concrete wall in the background generates a murky lake or a pond. But map the wall into a blue sky, and now the sea looks more like an ocean.
  • Because of this context-awareness, it was also hard to convey meaning with isolated objects. The images usually looked best when we showed them in their entirety.
Photo by Andre Bendahan at the NFB labs 2020 © All rights reserved

While we still feel that there is a lot more room for experimentation and polishing the images around our story, the results give us the first glimpse of GAN’s “consciousness” as a perceiving entity that generates its inner world. Such a process resonates with the philosophy of human consciousness.


Immanuel Kant’s transcendental philosophy speaks of the act of synthesis: our representations act together to mold one unified consciousness. In modern neuroscience, we speak of the Neural Correlates of Consciousness, which describe the neural activity required for consciousness: not a discrete feedforward mechanism of object recognition, but a long, sustained feedback wave of a unified experience. That is also the type of experience we wished to design in Marrow’s room, where the final ‘editing’ happens in the participant’s mind.


One thing we are sure will not harm this creative work is for more people to use it. You will not know what you are doing unless you make it many times, especially in a complicated project like this. Just make, make, make.


Here is the project’s Open Source GitHub repository. Please share with us what you are making and thinking!


The development phase was done in collaboration with Philippe Lambert, sound artist, and Paloma Dawkins, animator, in a co-production of NFB Interactive and Atlas V.


Original article: https://towardsdatascience.com/drawing-the-inner-world-of-a-story-using-gaugan-in-a-real-environment-d8e303aaa2f9
