The mind-blowing AI announcement from Google that you probably missed

by Gil Fewster

Disclaimer: I’m not an expert in neural networks or machine learning. Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. At the very least, it’s fair to say that I’m guilty of anthropomorphising in parts of the text.

I’ve left the article’s content unchanged, because I think it’s interesting to compare the gut reaction I had with the subsequent comments of experts in the field. I strongly encourage readers to browse the comments after reading the article for some perspectives more sober and informed than my own.

In the closing weeks of 2016, Google published an article that quietly sailed under most people’s radars. Which is a shame, because it may just be the most astonishing article about machine learning that I read last year.

Don’t feel bad if you missed it. Not only was the article competing with the pre-Christmas rush that most of us were navigating — it was also tucked away on Google’s Research Blog, beneath the geektastic headline Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System.

This doesn’t exactly scream must read, does it? Especially when you’ve got projects to wind up, gifts to buy, and family feuds to be resolved — all while the advent calendar relentlessly counts down the days until Christmas like some kind of chocolate-filled Yuletide doomsday clock.

Luckily, I’m here to bring you up to speed. Here’s the deal.

Up until September of last year, Google Translate used phrase-based translation. It basically did the same thing you and I do when we look up key words and phrases in our Lonely Planet language guides. It’s effective enough, and blisteringly fast compared to awkwardly thumbing your way through a bunch of pages looking for the French equivalent of “please bring me all of your cheese and don’t stop until I fall over.” But it lacks nuance.

Phrase-based translation is a blunt instrument. It does the job well enough to get by. But mapping roughly equivalent words and phrases without an understanding of linguistic structures can only produce crude results.

This approach is also limited by the extent of an available vocabulary. Phrase-based translation has no capacity to make educated guesses at words it doesn’t recognize, and can’t learn from new input.
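
To make the limitation concrete, here is a deliberately naive Python sketch of what phrase-table lookup amounts to. It is purely illustrative: the phrase table, the function name, and the example sentences are invented for this article, and real phrase-based systems learn weighted phrase tables and language models from huge parallel corpora rather than using a hand-written dictionary. The failure modes, however, are the ones described above: unknown words fall straight through, and nothing is learned from new input.

    # A toy "phrase table": source phrases mapped to fixed target phrases.
    # Real phrase-based MT learns these pairs (with probabilities) from
    # parallel text, but the lookup-and-stitch principle is similar.
    PHRASE_TABLE = {
        "please bring me": "apportez-moi s'il vous plaît",
        "all of your cheese": "tout votre fromage",
        "don't stop": "ne vous arrêtez pas",
    }

    def phrase_based_translate(sentence: str) -> str:
        """Greedily match known phrases; anything unrecognised stays untranslated."""
        words = sentence.lower().split()
        output, i = [], 0
        while i < len(words):
            # Try the longest phrase starting at position i first.
            for j in range(len(words), i, -1):
                chunk = " ".join(words[i:j])
                if chunk in PHRASE_TABLE:
                    output.append(PHRASE_TABLE[chunk])
                    i = j
                    break
            else:
                # Unknown word: no capacity for an educated guess, and the
                # table never grows on its own.
                output.append(f"<unknown:{words[i]}>")
                i += 1
        return " ".join(output)

    print(phrase_based_translate("Please bring me all of your cheese"))
    # -> apportez-moi s'il vous plaît tout votre fromage
    print(phrase_based_translate("Please bring me all of your crackers"))
    # -> apportez-moi s'il vous plaît <unknown:all> <unknown:of> <unknown:your> <unknown:crackers>

Swapping cheese for crackers is enough to break it, which is exactly the brittleness that the neural engine described next was built to overcome.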

All that changed in September, when Google gave their translation tool a new engine: the Google Neural Machine Translation system (GNMT). This new engine comes fully loaded with all the hot 2016 buzzwords, like neural network and machine learning.

The short version is that Google Translate got smart. It developed the ability to learn from the people who used it. It learned how to make educated guesses about the content, tone, and meaning of phrases based on the context of other words and phrases around them. And — here’s the bit that should make your brain explode — it got creative.

Google Translate invented its own language to help it translate more effectively.

What’s more, nobody told it to. It didn’t develop a language (or interlingua, as Google call it) because it was coded to. It developed a new language because the software determined over time that this was the most efficient way to solve the problem of translation.

Stop and think about that for a moment. Let it sink in. A neural computing system designed to translate content from one human language into another developed its own internal language to make the task more efficient. Without being told to do so. In a matter of weeks. (I’ve added a correction/retraction of this paragraph in the notes)

To understand what’s going on, we need to understand what zero-shot translation capability is. Here’s Google’s Mike Schuster, Nikhil Thorat, and Melvin Johnson from the original blog post:

Let’s say we train a multilingual system with Japanese⇄English and Korean⇄English examples. Our multilingual system, with the same size as a single GNMT system, shares its parameters to translate between these four different language pairs. This sharing enables the system to transfer the “translation knowledge” from one language pair to the others. This transfer learning and the need to translate between multiple languages forces the system to better use its modeling power.

This inspired us to ask the following question: Can we translate between a language pair which the system has never seen before? An example of this would be translations between Korean and Japanese where Korean⇄Japanese examples were not shown to the system. Impressively, the answer is yes — it can generate reasonable Korean⇄Japanese translations, even though it has never been taught to do so.

Here you can see an advantage of Google’s new neural machine over the old phrase-based approach. The GNMT is able to learn how to translate between two languages without being explicitly taught. This wouldn’t be possible in a phrase-based model, where translation is dependent upon an explicit dictionary to map words and phrases between each pair of languages being translated.
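
How does one network serve four or more translation directions at once? The research paper that accompanies the blog post describes a disarmingly simple trick at the data level: every source sentence is prefixed with an artificial token naming the desired target language, and a single shared network is trained on all of the language pairs mixed together. At inference time nothing stops you from requesting a direction that never appeared in training. The sketch below shows only that data setup; the sentences and token spellings are placeholders for illustration, and the shared encoder-decoder network that does the real work is omitted entirely.

    # Training data for ONE shared model: each example is simply
    # (tagged source sentence, target sentence). The "<2xx>" token tells
    # the model which language to produce; all directions share the same
    # network parameters. The sentences are placeholders for illustration.
    training_examples = [
        ("<2en> 私は学生です", "I am a student"),      # Japanese -> English
        ("<2ja> I am a student", "私は学生です"),      # English  -> Japanese
        ("<2en> 저는 학생입니다", "I am a student"),   # Korean   -> English
        ("<2ko> I am a student", "저는 학생입니다"),   # English  -> Korean
        # Note: no Korean <-> Japanese examples appear anywhere above.
    ]

    def request_translation(source_sentence: str, target_lang: str) -> str:
        """Build the input the shared model would see at inference time."""
        return f"<2{target_lang}> {source_sentence}"

    # Zero-shot request: Korean source, Japanese target. The model was never
    # shown this pair, yet the input looks exactly like every other input.
    print(request_translation("저는 학생입니다", "ja"))
    # -> <2ja> 저는 학생입니다

Because every direction runs through the same shared parameters, whatever the network learns about reading Korean for Korean-to-English translation overlaps with what it needs for Korean-to-Japanese, which is what makes the zero-shot request above answerable at all.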

And this leads the Google engineers onto that truly astonishing discovery of creation:

The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”? Using a 3-dimensional representation of internal network data, we were able to take a peek into the system as it translates a set of sentences between all possible pairs of the Japanese, Korean, and English languages.

Within a single group, we see a sentence with the same meaning but from three different languages. This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network.
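
We can’t inspect Google’s internal GNMT activations, but the probing idea itself is easy to try at home with an off-the-shelf multilingual sentence encoder (a much smaller, unrelated model, used here only to illustrate the method). If sentences that mean the same thing in different languages land closer together than sentences that share a language but differ in meaning, the encoder is representing something like meaning rather than surface phrases, which is the kind of evidence the Google team read as an interlingua. A minimal sketch, assuming the sentence-transformers package and one of its public multilingual models:

    # pip install sentence-transformers scikit-learn
    # Illustrative probe only: this is an off-the-shelf multilingual encoder,
    # not Google's internal GNMT representations.
    from sentence_transformers import SentenceTransformer
    from sklearn.metrics.pairwise import cosine_similarity

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    sentences = [
        "I am a student",             # English,  meaning A
        "私は学生です",                 # Japanese, meaning A
        "저는 학생입니다",               # Korean,   meaning A
        "The weather is nice today",  # English,  meaning B
    ]

    embeddings = model.encode(sentences)
    sims = cosine_similarity(embeddings)

    # Same meaning, different languages: should score high.
    print("EN<->JA, same meaning:     ", round(float(sims[0, 1]), 3))
    print("EN<->KO, same meaning:     ", round(float(sims[0, 2]), 3))
    # Same language, different meaning: should score noticeably lower.
    print("EN<->EN, different meaning:", round(float(sims[0, 3]), 3))

If the encoder really has learned a shared representation, the two same-meaning pairs should score well above the different-meaning pair: the same clustering by meaning that Google visualised in three dimensions.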

So there you have it. In the last weeks of 2016, as journos around the world started penning their “was this the worst year in living memory” thinkpieces, Google engineers were quietly documenting a genuinely astonishing breakthrough in software engineering and linguistics.

I just thought maybe you’d want to know.

Ok, to really understand what’s going on we probably need multiple computer science and linguistics degrees. I’m just barely scraping the surface here. If you’ve got time to get a few degrees (or if you’ve already got them) please drop me a line and explain it all to me. Slowly.

Update 1: in my excitement, it’s fair to say that I’ve exaggerated the idea of this as an ‘intelligent’ system — at least so far as we would think about human intelligence and decision making. Make sure you read Chris McDonald’s comment after the article for a more sober perspective.

Update 2: Nafrondel’s excellent, detailed reply is also a must read for an expert explanation of how neural networks function.

Translated from: https://www.freecodecamp.org/news/the-mind-blowing-ai-announcement-from-google-that-you-probably-missed-2ffd31334805/
