The Difference Between Humans and Robots

What does GPT-3’s AI-generated op-ed teach us about ourselves? The answers are in the subtext.

Well, readers, it finally happened. I’ve been replaced by a robot.

Last week, The Guardian published an essay “written” by GPT-3, OpenAI’s new language generator. According to the news outlet, “GPT-3 is a cutting-edge language model that uses machine learning to produce humanlike text. It takes in a prompt, and attempts to complete it.” The Guardian prompted “the robot” to write a short op-ed about why humans have nothing to fear from AI. Then it compiled and edited a handful of GPT-3’s responses and published the resulting essay under the taunting headline: “A robot wrote this entire article. Are you scared yet, human?”

Truthfully, I am scared. And not just because I worry GPT-3 is coming for my “job.” (You think I actually get paid to write this stuff?) I’m scared because GPT-3 composed an essay that is, at times, silly, nonsensical and kind of childish, but also, at times, deep, serious and thought-provoking. And I don’t know what that means for me — or for you, dear readers. So, naturally, I thought I’d write about it to try and figure it out.

“Hidden in the subtext of the essay are deep questions about authorship, autonomy, authority and identity that reveal just as much about humans as they do about robots.”

I try to read a robot essay

As a writer with a background in literary studies, I’ve read and reflected on plenty of difficult texts before. But inter-species reader-response poses a new kind of challenge. How does a human read an essay written by a non-human? And how does a human respond to it?

In human writer Suzanne Conklin Akbari’s essay “Can the Essay Still Surprise Us?,” she reminds us that “the French verb essayer means ‘to try,’ ‘to attempt’; even ‘to try out.’” Since GPT-3’s essay is an “attempt” to communicate with us humans, I’ll try to talk back using the same means. This essay is my attempt to understand a robot’s writing from a human perspective.

In a way, GPT-3’s essay is easy to read. The robot writes in a simple, almost juvenile, style that sounds like a fifth-grader attempting their first five-paragraph essay. But when you look past the short, declarative sentences and sophomoric diction, it gets more complex. As GPT-3 itself hints: “Critics hope to refute what they consider as being the naivety of my voice. Yet there is more here than meets the eye!” Hidden in the subtext of the essay are deep questions about authorship, autonomy, authority and identity that reveal just as much about humans as they do about robots.

Take GPT-3’s stated intention, which seems, in its own words, “perfectly clear”: “I am to convince as many human beings as possible not to be afraid of me. Artificial intelligence will not destroy humans.” GPT-3 goes on to list some opaque and rather flimsy reasons humans should trust AI, like “Being all powerful is not an interesting goal” and “I simply do not think enough about human violence to be overly interested in violence.” (Right. Because history has shown power and violence have little appeal.) “Believe me,” it urges us repeatedly, borrowing the favorite phrase of snake-oil salesmen and dictators.

Should we believe it? As I read GPT-3’s unconvincing argument, I couldn’t shake the feeling I was listening to an unreliable narrator. On further reflection, I’m convinced I was. GPT-3 may sound like it’s writing its own thoughts, but as it reminds us in the essay: “I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement.”

Humans chose GPT-3’s mission statement. They wrote the essay’s prompt and introduction. They selected its narrative structure. And they ordered GPT-3 to respond. Is it really fair to say GPT-3 “wrote” the essay if it only did as commanded by humans who controlled the scope, length and position of the narrative? Is GPT-3 an “author” or merely a vehicle for others’ ideas? What were those others (the humans) trying to say? And why use AI like GPT-3 to say it?

Since an essay is only as interesting as the question it tries to answer, perhaps the better question for me to ask here is: What does The Guardian’s essay tell us about how humans use AI? In an uncanny surprise, GPT-3 gives us an answer.

“Like the concealed scaffolding that supports our technological existence, the structure of GPT-3’s essay was wrought by human hands. It’s only when we look between the lines and read the subtext that we can see those hands at work.”

I attempt to understand what the robot’s essay means

GPT-3’s essay is loaded with contradictions, but there was one that struck me as particularly odd. The robot is discussing the future of cybernetics when it shifts to talking about the pace of technological innovation: “The Industrial Revolution has given us the gut feeling that we are not prepared for the major upheavals that intelligent technological change can cause,” it says. “There is evidence that the world began to collapse once the Luddites started smashing modern automated looms. It is therefore important to use reason and the faculty of wisdom to continue the changes as we have done before time and time again.”

The second line of that paragraph made me pause: “There is evidence that the world began to collapse once the Luddites started smashing modern automated looms.” Notice the subject and object here. The “world began to collapse” when “Luddites” began to destroy technology — not when technology began to destroy humanity. In GPT-3’s phrasing, the Luddites (aka the humans) are to blame.

GPT-3’s logic reflects a larger trend in how we understand the relationship between humans and automated technology. The writers of Librarian Shipwreck point out: “When a person — or a group of persons — dares to oppose a new technological development it is inevitable that somebody will call them a ‘Luddite(s).’ The application of the term is generally meant as an insult, as the term has been entangled with ideas of backwardness, futile resistance to technology, and opposition to progress.” Chances are if you critique a new technology, someone will accuse you of being a Luddite.

But, as Librarian Shipwreck reminds us, these insults typically misrepresent the historical context of Luddism: “The historic Luddites — active in England between 1811 and 1813 — were skilled laborers who saw in the encroaching technologies a set of machines and techniques that would impoverish them and their communities, whilst making the machine owners rich.” Their destruction of automated technology was symbolic. It was an attempt to draw attention to the inhumane labor practices of the Industrial Revolution. Contrary to popular opinion, the Luddites didn’t fear new technology. They feared the humans who introduced that technology as a way to displace them.

By blaming the Luddites instead of the mill owners who used automated technology to replace human workers, GPT-3 (and anyone who uses “Luddite” dismissively) is rendering the real problem invisible. Did the Luddites cause the world to collapse, as GPT-3 puts it? Did the automated looms? Or did the mill owners who profited from automation?

This practice of displacing blame may have begun with the Luddites, but it’s still happening today. Only now the blame is being shifted to AI. Why else would a robot need to write an essay defending itself? Why would it need to convince us that it’s not here to replace us or take our jobs? Ultimately, is it GPT-3’s fault that human writers like me are out of work?

And why should we care who’s to blame? In another blog post titled “The problem isn’t the robots…it’s the bosses,” Librarian Shipwreck argues “blaming the robots allows those who are actually to blame to avoid responsibility.”

Contemporary tech culture is rife with “bosses” who displace blame and deny responsibility. Facebook CEO Mark Zuckerberg denies responsibility for the platform he built, displacing it onto Facebook’s users. Google leadership denies responsibility for the racism its employees built into its system. The very fact that we call algorithms racist, sexist and biased is evidence of this displacement. Algorithms are not racist — humans are.

Displacement allows “the bosses” to have an invisible hand in shaping human lives in the same way that GPT-3’s editors had an invisible hand in shaping the robot’s essay. Like the concealed scaffolding that supports our technological existence, the structure of GPT-3’s essay was wrought by human hands. It’s only when we look between the lines and read the subtext that we can see those hands at work. GPT-3 was right: “there is more here than meets the eye!”

GPT-3’s essay reveals much about the hidden curation of human lives. It also leaves many questions unanswered. As I think about who authored the robot’s essay, I’m reminded of a series of questions Akbari asks in “Can the Essay Still Surprise Us?”: “Who speaks, and when? Who listens? What would it mean to be an active listener, a witness, instead of a passive one?” In the context of her essay, Akbari is questioning whose voices are deemed worthy of being considered “literary.” I wonder: Is a robot worthy?

“It’s not possible to be self-reflective without a self, which makes me reflect: Does GPT-3 have a sense of self?”

I try to read my “self”

An essay is like a prism. It both reflects and refracts a subject. When I write an essay, I’m narrating the act of me looking out at the world and looking in at myself and looking back out again with changed eyes. I’m showing you my “self” as I deconstruct and reconstruct it around new knowledge.

All of this depends on my having a “self” to consider. It’s not possible to be self-reflective without a self, which makes me reflect: Does GPT-3 have a sense of self? Theoretically, if no one had instructed GPT-3 to write about a certain topic from a specific perspective, would it have written anything? What would it have said?

I know what you’re thinking: “Nothing, obviously! It has no autonomous thought or spontaneous creative impulse.” I agree, obviously. I think we are right. But I also think someday soon we may be wrong. The line that defines selfhood is hazy. Even though humans have thought about what makes a self “a self” for millennia, we still don’t really know.

Seventeenth-century philosopher René Descartes might say “I think, therefore I am.” But doesn’t GPT-3’s essay demonstrate robots can “think,” too? Or, is GPT-3 just imitating thought as it assures us in its best mock-Cartesian voice: “I am a robot. A thinking robot”?

One thing I am certain of is that technology evolves quickly. Faster, perhaps, than we humans do. At present, GPT-3 reminds us, “I use only 0.12% of my cognitive capacity.” I imagine someday soon, after it has been redesigned, supercharged and fed a healthy diet of Montaigne, Hazlitt, Woolf, Sontag and Baldwin, GPT-3 will generate an essay that will make Descartes’ Meditations seem like a cave drawing. What will we think about GPT-3’s selfhood then?

For now, GPT-3’s writing is immature at best, illogical at worst. It’s full of contradictions, unexamined biases and opinions masquerading as facts. All of which seem, in a certain light, singularly human. AI is supposed to be flawless. It is not supposed to make such stupid mistakes. Humans, in contrast, are naturally flawed.

Now I see why I was so scared when I first read GPT-3’s essay. It felt like AI’s attempt at being human — “thinking” like a human, “sounding” like a human and “writing” in a uniquely human way. Reading the essay was like crossing a literary version of the uncanny valley. Can we ever really go back?

I don’t know. But I’ve realized that a robot’s first attempt at an essay answers the question Suzanne Conklin Akbari posed in hers: Yes, essays — and the things that write them — can still surprise us.

Translated from: https://medium.com/@lizrioshall/a-human-responds-to-a-robots-essay-d7b5605610b0
