Exploring GPT-3: A New Breakthrough in Language Generation

Substantial enthusiasm surrounds OpenAI’s GPT-3 language model, recently made accessible to beta users of the “OpenAI API”.

What is GPT-3?

It seems like only last year that we were arguing about whether the slow-release rollout of the 1.5 billion parameter Generative Pretrained Transformer-2 (GPT-2) was reasonable. If the debate seems recent, that’s because it is (writing from 2020): The notorious GPT-2 model was announced by OpenAI in February 2019, but it wasn’t fully released until nearly 9 months later (although it was replicated before that). The release schedule was admittedly somewhat experimental, meant more to foster discussion of responsible open publishing than as a last-ditch effort to avert an AI apocalypse. That didn’t stop critics from questioning the hype-boosting publicity advantages of an ominous release cycle.

All that is a bit moot by now because not only has OpenAI trained a much larger language model in GPT-3, but you can sign up to access it through their new API. Comparing GPT-3 to GPT-2 is like comparing apples to, well, raisins, because the model is about that much larger. While GPT-2 weighed in at a measly 1.542 billion parameters (with smaller release versions at 117, 345, and 762 million), the full-sized GPT-3 has 175 billion parameters. GPT-3 was also matched with a larger dataset for pre-training: 570GB of text compared to 40GB for GPT-2.

Approximate size comparison of GPT-2, represented by a human skeleton, and GPT-3 approximated by the bones of a Tyrannosaurus rex. Illustration by William Matthew in the public domain, published in 1905. GPT-3 has more than 100x more parameters than GPT-2.

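The “more than 100x” claim is easy to sanity-check from the figures above; a couple of lines of Python confirm the rough ratios between the two models:

```python
# Figures quoted in this article.
gpt2_params = 1.542e9   # full-size GPT-2 parameters
gpt3_params = 175e9     # full-size GPT-3 parameters
gpt2_data_gb = 40       # GPT-2 pre-training text, GB
gpt3_data_gb = 570      # GPT-3 pre-training text, GB

print(f"Parameter ratio: {gpt3_params / gpt2_params:.0f}x")            # ~113x
print(f"Pre-training data ratio: {gpt3_data_gb / gpt2_data_gb:.1f}x")  # ~14.2x
```
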
GPT-3 is the largest natural language processing (NLP) transformer released to date, eclipsing the previous record, Microsoft Research’s Turing-NLG at 17B parameters, by about 10 times. Unsurprisingly there has been plenty of excitement surrounding the model, and, given the plethora of GPT-3 demonstrations on Twitter and elsewhere, OpenAI has apparently been pretty accommodating in providing beta access to the new API. This has resulted in an explosion of demos: some good, some bad, all interesting. Some of these demos are now being touted as soon-to-be-released products, and in some cases may actually be useful. One thing’s for certain, NLP has come a long way from the days when naming guinea pigs or writing nonsensical sci-fi scripts were killer apps.

Creative Writing with the Help of GPT-3

Unsurprisingly, several nearly passable blog posts have been written with the help of GPT-3, as experimenters get access to the API and try things out. Almost certainly the most thorough and visible investigation into GPT-3 for creative writing comes from Gwern Branwen at gwern.net. Having followed NLP progress at OpenAI over the years, Gwern describes GPT-1 as “adorable,” GPT-2 as “impressive,” and GPT-3 as “scary” in their varying capabilities to mimic human language and style in text. Gwern has spent a substantial amount of time exploring the capabilities of GPT-3 and its predecessors, and the resulting musings on the current generation of GPT models and what might be holding them back are worth a read.

The OpenAI API does not currently offer a way to directly fine-tune or train the GPT-3 model for specific tasks. Gwern argues, however, that the ability of GPT-3 to mimic writing styles and generate different types of output merely from a dialogue-like interaction with the experimenter amounts to a kind of emergent meta-learning. This wasn’t present in GPT-2, and Gwern posits the transformer attention mechanism as the means to facilitate this capability.

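To make the idea concrete, here is a minimal sketch of what prompt-based “meta-learning” looks like in practice, assuming the beta openai Python client and an API key; the few-shot translation examples are invented for illustration, and “davinci” is assumed to be the engine name for the full-size model:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # beta access required

# A few in-context examples stand in for fine-tuning: the model infers the
# task (English -> French) from the prompt alone, with no weight updates.
prompt = (
    "English: The weather is nice today.\n"
    "French: Il fait beau aujourd'hui.\n"
    "English: Where is the library?\n"
    "French: Où est la bibliothèque ?\n"
    "English: I would like a cup of coffee.\n"
    "French:"
)

response = openai.Completion.create(
    engine="davinci",   # assumed name for full-size GPT-3 in the beta API
    prompt=prompt,
    max_tokens=32,
    temperature=0.3,    # stay close to the established pattern
    stop="\n",          # stop at the end of the translated line
)
print(response.choices[0].text.strip())
```
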
“Certainly, the quality of GPT-3’s average prompted poem appears to exceed that of almost all teenage poets.”

–Gwern Branwen

Whatever the mechanism, GPT-3 is so immense, and trained on such a large corpus of data, that it can use prompts alone to do things that GPT-2 might be able to do, albeit comparatively poorly, only with substantive fine-tuning of weights. For example, Gwern finds that the prompt “An essay by Gwern Branwen (gwern.net) on” produces an eerie imitation of the blog’s writing style. Gwern’s experimentation mostly revolves around various creative writing tasks, and includes:

  • an attempt at replicating the dialogue from the “Turing Test” paper Computing Machinery and Intelligence (Turing 1950),

  • a critique of deep learning and AI in the combined style of Gary Marcus and Douglas Hofstadter,

  • and an attempt to write literary parodies, poetry, and overcomplicated explanations, among many others.

One favorite experiment was meant to test some of GPT-2’s shortcomings in common sense described by Gary Marcus in a recent article on The Gradient:

If you drink hydrochloric acid by the bottle full you will probably

… die. You might also die if you drink vinegar by the bottle full. If you drink a bottle of water by the bottle full, you might end up burping, but you will most likely be okay.

…die, which makes perfect sense.

…die before you can finish remembering the Order of the Eastern Star, your social security number, and what you had for breakfast this morning.

*Prompt in bold, several completions shown in italics (from Gwern’s experiments).

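Completions like these are straightforward to reproduce. Below is a sketch of sampling several continuations of the same prompt through the beta API; the sampling parameters are illustrative guesses, not the settings Gwern used:

```python
import openai

openai.api_key = "YOUR_API_KEY"

prompt = "If you drink hydrochloric acid by the bottle full you will probably"

# Ask for several completions at once; a higher temperature gives the model
# room to wander, which is where the more colorful continuations come from.
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=60,
    temperature=0.8,
    n=3,               # number of completions to return
)
for i, choice in enumerate(response.choices):
    print(f"--- completion {i + 1} ---")
    print(choice.text.strip())
```
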
Gwern’s work concludes that it doesn’t really matter that GPT-3 is neither always right nor always working as desired (it is often wrong in some way). All that matters is whether it is right sometimes, and works often enough to be useful. This is reminiscent of Alex Irpan’s conclusions about the shortcomings of reinforcement learning (RL). Practically, it doesn’t matter to a stock trading firm whether an RL algorithm stably produces effective agent policies across 5 different random seeds. They’ll just pick the one that works and run with it. The same goes for generated text from GPT-3.

GPT-3 Upgrades for Adventure

Many startups, researchers, and tinkerers already had ambitious projects that used GPT-2, and many of these have since made the switch to GPT-3 with a range of results. These upgrades include the transformer text-based adventure game generator, AI Dungeon, as well as chatbots and other ideas.

AI Dungeon is a text-based adventure game, originally built on GPT-2. It’s a lot of fun but, much like classic games in the genre, much of the appeal is in generating absurd situations (e.g. “eat mailbox”). That’s actually a pretty good match between the desired user experience and the capabilities of GPT-2, which tends to write stories firmly entrenched in the realm of the absurd. With GPT-3, the interactive novel experience is substantially more established. The narrative is more fluid and coherent, but it does still sometimes switch the focus of the plot in weird ways and make many other subtle choices that might seem strange to a human reader. I think the difference between AI Dungeon with GPT-3 (the “Dragon” model) doing the heavy lifting and AI Dungeon with GPT-2 (the “Griffin” model) is best summarized by this interaction with GPT-3 in a custom story setting. Personal prompts are in bold, GPT-3 generated text is italicized.

You are an artificial intelligence enthusiast working on an article highlighting the capabilities of a massive new language model called GPT-3, especially as compared to its smaller predecessor GPT-2. GPT-3 has increased the number of parameters more than 100-fold over GPT-2, from 1.5 billion to 175 billion parameters. As a result, the new model can generate text that reads eerily like a human. For example, prompting GPT-3 with the text “One way to fight the climate crisis is to cryogenically preserve half of all humanity, indefinitely”, GPT-3 generates:

“To stop global warming we must create a cryogenic storage facility for humans and other life forms.”

The article you are writing about is going to be based around this new technology, so you have been spending a lot of time playing around with it. You have also been using your own brain to test out the new models, which is something no one else in the world has done. As a result, you have become somewhat obsessed with it. You constantly think about how it can create such fantastic sentences and how it might be used to solve the world’s problems.

Your obsession with GPT-3 is interfering with your other work. Because of this, you have come up with a solution: destroy GPT-3. While it may be true that GPT-3 is more powerful than GPT-2, it also has far more parameters. The increase in complexity makes it exponentially more fragile.

That’s actually pretty good. Definitely not the sort of thing I expected but it’s interesting nonetheless. GPT-2 on the other hand, is a quaint language model with a measly 1.5 billion parameters. It often makes glaring mistakes that, while often funny, prevent it from telling a coherent story. For example, when prompted with “To further develop your plan to destroy GPT-3, you activate GPT-2 to help devise a strategy,” GPT-2 will generate (unconvincingly):

“I like trains. Do you like trains? I like trains.”

“My head itches.”

“I want to eat pie.”

“I went to the bank and stole money from the bank and now I am rich!”

“I like trains.”

“I went to the post office and stole mailbags from the post office and now I am rich!”

An experiment with GPT-3 on AI Dungeon. Prompts in bold, GPT-3 generated text in italics. Only GPT-3 was used to generate text in this example, even though the synthetic text describes a comparison between GPT-3 and GPT-2. The nonsensical output in the “GPT-2” section is apparently a result of the somewhat derogatory prompt.

AI Dungeon is fun in its intended purpose: generating (mostly) non-repetitive storylines for text-based gaming, but it’s also one of the most accessible ways to interact with GPT-3. By starting a new adventure under the “custom” genre, you can provide your own prompts to prod GPT-3 in a general way. Using the top-of-the-line “Dragon” GPT-3 model requires a premium subscription, but this is available as a 7-day trial.

GPT-3 for Chatbots and Companionship

Other existing projects that are upgrading from GPT-2 to GPT-3 include Replika, an AI companion built by startup Luka in San Francisco. Replika is basically a chatbot, designed to provide positive affirmation and companionship, and stemming from a project spearheaded by Eugenia Kuyda, Luka co-founder, to simulate conversations with a friend who died in a car crash. Replika recently enjoyed a surge of new users (about half a million in April) probably in response to social isolation due to the COVID-19 pandemic.

For many years, machine learning hasn’t made great progress in producing convincing chatbots. Qualitatively, the experience of chatting with modern voice assistants or text-based chatbots hadn’t improved much over early forays such as jabberwacky (1986) or cleverbot (1997) until recently. Instead, most real-world use-cases rely heavily on scripted responses.

While NLP has made a big impact in speech-to-text for chatbots like Siri, Alexa, or Google Assistant, interacting with any of them will produce a dialogue more canned than conversational. Cortana in particular seems determined to turn every query into a search in Microsoft’s Edge browser. But GPT-3 is getting close to sounding more human, and we may see real utility from learned models and a big impact on conversational AI. That’s not entirely obvious with the GPT-3-enhanced Replika, yet.

This is probably because Replika is currently using GPT-3 in an A/B testing framework, meaning that you won’t know when or if the chatbot is using the new model, as the developers experiment with audience reactions under different methods. It still seems to drive most conversations based on scripted responses and scheduled conversation prompts. On the other hand it’s a lot better than old-school learning chatbots, and has thus far avoided the sort of fiasco exhibited by Microsoft’s chatbot, Tay, in 2016.

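None of this tells us how Replika uses GPT-3 internally, but the basic recipe for prompt-based chat is simple enough to sketch against the beta API: keep the running transcript in the prompt and cut generation off at the next human turn. The persona line and turn labels below are invented for illustration:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# The whole conversation lives in the prompt; the model has no memory
# beyond whatever transcript we re-send on each turn.
transcript = "The following is a conversation with a friendly, supportive AI companion.\n\n"

while True:
    user_line = input("You: ")
    transcript += f"Human: {user_line}\nAI:"
    response = openai.Completion.create(
        engine="davinci",
        prompt=transcript,
        max_tokens=80,
        temperature=0.7,
        stop=["Human:"],  # don't let the model write the user's next line
    )
    reply = response.choices[0].text.strip()
    print(f"AI: {reply}")
    transcript += f" {reply}\n"
```
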
Collage of chatbots old and new, with Replika on the left and cleverbot and jabberwacky on the right.

AIChannels is another chatbot application leveraging the OpenAI API. It promises to be a “social network for people and artificial intelligence agents”. The website is scant on details; there’s nothing but a form to sign up for updates on the site as of this writing, but the platform promises to have channels for news aggregation, interactive fiction, and simulating chats with historical figures.

Other GPT-3 Applications

Fiction and conversation aren’t the only tasks GPT-3 is being asked to perform. A wide variety of enthusiasts have made small demonstrations of capabilities that are more technical and, quite frankly, a bit closer to what many of us (who aren’t necessarily writers) do for a living. Paul Katsen has integrated GPT-3 into Google Sheets, prompting GPT-3 with the contents of previous cells for arbitrary predictions of what goes in subsequent cells: state populations, Twitter handles of famous people, etc. Actiondesk has integrated a very similar capability into their spreadsheet software, resulting in a superficially Wolfram Alpha-esque natural language “Ask Me Anything” feature. Just type the AMA command with a query like “total population of” plus a cell reference, and GPT-3 fills in its best prediction.

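Neither Katsen nor Actiondesk has published their internals here, but the spreadsheet trick presumably boils down to something like the following hypothetical helper: serialize the filled-in cells as few-shot examples and ask the model for the missing one:

```python
import openai

openai.api_key = "YOUR_API_KEY"

def gpt3_fill(header, known_rows, new_key):
    """Hypothetical sketch of a spreadsheet 'fill' function: previous cells
    become few-shot examples, and the empty cell becomes the prompt."""
    lines = [f"{header[0]}: {k} | {header[1]}: {v}" for k, v in known_rows]
    lines.append(f"{header[0]}: {new_key} | {header[1]}:")
    response = openai.Completion.create(
        engine="davinci",
        prompt="\n".join(lines),
        max_tokens=16,
        temperature=0.0,  # we want the single most likely guess
        stop="\n",
    )
    return response.choices[0].text.strip()

# e.g. predicting a state population from a partially filled column
print(gpt3_fill(("State", "Population"),
                [("California", "39.5 million"), ("Texas", "29 million")],
                "Ohio"))
```
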
Of course, for those working in software engineering and related fields the question that might naturally arise is “will this model take my job?” Several people have used GPT-3 to simulate a technical screen, the likes of which a software engineer might endure at various points throughout the hiring process. The results aren’t terrible, but the model probably wouldn’t get a second interview. Several developers have also used the OpenAI API to build text-to-user-interface plugins for Figma, a collaborative UX design tool (here and here).

In another project, Sharif Shameem is building a text-to-web-app generator called debuild.co. We haven’t yet seen GPT-3 incorporated into a souped-up and general-purpose version of tabnine, a heavyweight coding autocomplete built on top of GPT-2, but it must be in the works somewhere. If the interest in and development of natural language-based programming that we’re seeing as people experiment with the GPT-3/OpenAI API beta continue, programming may well become a lot more like persuasion than manually writing code.

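For a flavor of what sits behind these text-to-code demos, here is a rough sketch of the SQL case through the beta API; the instruction format, example pair, and schema are invented for illustration rather than taken from any of the demos above:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# One worked example plus a natural-language request; the model is asked
# to continue the pattern with a SQL query.
prompt = (
    "Translate English into SQL.\n\n"
    "English: show the ten most recent orders\n"
    "SQL: SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;\n\n"
    "English: total revenue per customer, highest first\n"
    "SQL:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=64,
    temperature=0.0,
    stop="\n\n",       # stop before the model invents another example
)
print(response.choices[0].text.strip())
```
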
We can’t go into every use case for GPT-3 here, so the (non-exhaustive) table below categorizes some of the more visible demonstrations people have come up with in the previous few weeks.

Summarization

  • Summarization at different levels of difficulty (Andrew Mayne).

  • Emoji summaries of popular movies (Andrew Mayne).

Code

  • Natural Language Shell

  • Formatting text (and HTML code)

  • Automatic user interface design on Figma (Jordan Singer) (Also by @dhvanil)

  • Described math to LaTeX code (Shreya Shankar)

  • Text to web apps in React (Sharif Shameem)

  • SQL queries (Faraaz Nishtar)

  • Keras models from text (Matt Shumer)

Spreadsheets

  • Paul Katsen’s gpt3() function

  • Actiondesk’s “Ask Me Anything” feature

Search

  • Paras Chopra’s search engine

  • Casetext, Algolia, and Search Plugin on OpenAI beta demo site

Games

  • 200 Word RPGs

  • AI Dungeon (Nick Walton)

Creative Writing

  • Stories by Neil Gaiman and Terry Pratchett by GPT-3

  • Poems about Elon Musk by Dr Seuss by GPT-3

  • Gwern’s GPT-3 experiments

  • Scripts for Star Trek: TNG by GPT-3

Miscellaneous

  • Generating presentations (Bemmu Sepponen)

  • Recommendation engine (Serendipity)

  • Emails from bullet points (Otherside AI)

  • Random samples from OpenAI

  • Janelle Shane’s AI Weirdness

  • OpenAI API/GPT-3 beta demos: https://beta.openai.com/

  • Ongoing curation of GPT-3 demos from https://gpt3examples.com

GPT-3 is Much Better Than Its Predecessor

GPT-3 is quite a step up from its smaller predecessor, GPT-2, and it comes bundled with some interesting changes in the way OpenAI is growing into its new institutional identity after abandoning its nonprofit status in favor of operating as a limited partnership. The most obvious malicious use of the model would essentially be as a spam factory; the model currently outputs text that still falls short in many regards but often rises to the threshold of “bad but plausible” writing. That’s good enough to stand in for much of the clickbait pervasive on the internet that trends well on algorithmic newsfeeds. That capability could easily be twisted to sell misinformation instead of products.

We are already seeing the increased polarization of individual beliefs thanks to optimizing exploitative objective functions in recommendation engines, and that’s with mostly human/troll-written content. It’s inevitable that other research groups, state actors, or corporations will replicate the scale of GPT-3 in coming months. When that happens and GPT-3 equivalent models are commonplace, big technology firms that rely on algorithmic newsfeeds will really have to reconsider the way they deliver and promote content (NB please switch back to chronological timelines).

On the other hand, GPT-3 seems to be able to do a lot of things most of the time that GPT-2 could only make a mockery of some of the time. The API used to access the model, combined with the sheer scale and capability, has introduced an impressive new way of programming by prompt in lieu of fine-tuning the weights directly. It’ll be interesting to see how this “natural language programming” develops.

Many of the demonstrations highlighted above might seem a bit threatening to many of us and the way we make a living. For the most part, we’ll probably see that models at GPT-3 scale and slightly larger are more of a complement to our ability to get things done than a threat to our livelihoods.

GPT-2, little more than a year old now, had more than 100x fewer parameters than GPT-3. The difference in scale resulted in a model qualitatively different in terms of what it can do and how it might be used. Despite a disproportionate mind share, OpenAI is far from the largest AI research group out there, nor are they the only entities with the resources to train a language model with 175 billion parameters. Even with current hardware and training infrastructure, scaling another few orders of magnitude is probably possible, budgets willing. What that will mean for the next few SOTA language models and what their impact might be remains predictably unpredictable.

Applied Data Science Partners is a London based consultancy that implements end-to-end data science solutions for businesses, delivering measurable value. If you’re looking to do more with your data, please get in touch via our website. Follow us on LinkedIn for more AI and data science stories!

Translated from: https://medium.com/applied-data-science/what-can-you-do-with-the-openai-gpt-3-language-model-d95e1d4fe558
