GPT-3: And in the Beginning Was the Word (Part 2/2)

30-Second Summary

  • Any innovative AI technology has its share of advantages and threats; GPT-3 is no exception.
  • GPT-3 will not be limited by its size, and its cost will probably decrease quickly over time. The problem of energy consumption remains a challenge for researchers.
  • GPT-3 was quick to impress us, and just as quick to demonstrate algorithmic biases. It seems to jump over Turing Test-style hurdles, but it does not understand the why. It makes simple errors no human would ever make.
  • The next generations of AI will be able to take over all analytical and repetitive tasks, but they will not replace humans. This is what always happens when a new technology is introduced: some jobs are replaced, but new jobs are created.
  • GPT-3 could bring us one step closer to the future possibility of highly sophisticated Artificial General Intelligence.

Open Questions

GPT-3 is not as intelligent as a human. It does not know the meaning of words; it knows the likelihood of one word following another. GPT-3 is powerful because it does one thing and does it well: predicting the next word. That is why it is surprisingly good at tasks it was never trained for, such as unscrambling the letters of a word, doing arithmetic, or translating. These abilities are not explicitly contained in the training corpus; they are emergent properties. The technology rests on three components.

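To make the "likelihood of a word following another" concrete, here is a minimal sketch of next-word prediction. It uses the freely downloadable GPT-2 (the smaller sibling of GPT-3, which is reachable only through OpenAI's hosted API) via the Hugging Face transformers library; the prompt and the top-5 cut-off are illustrative choices of mine, not anything from the original article.

```python
# Minimal sketch: ask GPT-2 for the probability distribution over the next word.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In the beginning was the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, seq_len, vocab_size)

next_token_logits = logits[0, -1]            # scores for the position after the prompt
probs = torch.softmax(next_token_logits, dim=-1)

top = torch.topk(probs, k=5)                 # the 5 most likely continuations
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r:>12}  p = {p.item():.3f}")
```

Every answer the model produces, however clever it looks, is at bottom a sample from a distribution like the one printed here.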

Open and big data + computational resources (supercomputers) + machine learning models

GPT-3 is not the super-intelligence or human-like AI that transhumanists blindly claim it to be. But OpenAI has created a breakthrough, and the advance is significant enough to raise real questions.

Computational Resource Problems

Artificial intelligence and big data are a powerful combination for future growth; their convergence has been called the single most important development in the field. Until recently, the growth of AI was slowed by limited data sets and by the inability to analyze very large amounts of data in real time.

Could GPT-3 be limited by its size? The team at OpenAI has unquestionably pushed the frontier of how large these models can be, and has shown that growing them reduces our dependence on task-specific data down the line.

2019's GPT-2, which caused much of the earlier uproar about potential malicious applications, had 1.5 billion parameters and was trained on 8 million documents, a total of 38 GB of text drawn from shared articles. By comparison, 2020's monstrous GPT-3 has an astonishing 175 billion parameters, roughly ten times the capacity of Microsoft's Turing NLG, and reportedly cost around $12 million to train. The servers or supercomputers required to actually run GPT-3 make it difficult to deploy in the real world.

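Some back-of-the-envelope arithmetic on the parameter counts quoted above shows why supercomputer-class hardware is needed just to hold the model. The 17-billion-parameter figure for Turing NLG and the 2-bytes-per-parameter assumption (16-bit weights, counting weights only) are my own additions for illustration, not figures from the article.

```python
# Rough arithmetic on model sizes (parameter counts in billions).
# Assumes 16-bit weights (2 bytes per parameter) and ignores activations,
# optimizer state and other overheads, so real requirements are higher.
models = {
    "GPT (2018)":        0.110,
    "GPT-2 (2019)":      1.5,
    "Turing NLG (2020)": 17.0,    # publicly reported figure, not from the article
    "GPT-3 (2020)":      175.0,
}

BYTES_PER_PARAM = 2  # fp16 inference weights (assumption)

for name, billions in models.items():
    weight_gb = billions * 1e9 * BYTES_PER_PARAM / 1e9
    print(f"{name:<18} {billions:>7.3f}B params  ~{weight_gb:>6.0f} GB of weights")

print(f"GPT-3 vs Turing NLG: {175.0 / 17.0:.1f}x more parameters")
```

Roughly 350 GB of weights alone is far beyond a single GPU, which is why GPT-3 is served from clusters rather than shipped as a download.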

The combination of ever-larger models with more and more data and computing power results in almost predictable improvements in the power of those models. The increase in performance seems to depend directly on the increase in computing power, with no obvious signs of saturation. This means that by building an even more massive supercomputer than the one Microsoft made available to run GPT-3, one could achieve significantly higher performance. It is therefore not excluded that a major breakthrough will come simply from putting more money and resources into the infrastructure Microsoft has planned. Some even believe that the cost of this technology could fall to $80,000 by 2040.

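The "almost predictable improvements" being described are the empirical scaling laws: loss falls roughly as a power law in compute. The toy numbers below are chosen purely to show the shape of the curve; they are not the coefficients from any published scaling-law paper.

```python
# Toy illustration of power-law scaling: loss ~ A * compute**(-ALPHA).
# Constants are made up for illustration; the point is the steady, unsaturated
# improvement each time compute is multiplied by 10.
A = 10.0      # assumed constant (illustrative)
ALPHA = 0.05  # assumed exponent (illustrative)

for exp in range(0, 7):
    compute = 10 ** exp                 # arbitrary compute units
    loss = A * compute ** (-ALPHA)
    print(f"compute = 1e{exp}:  loss ~ {loss:.2f}")
```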

It should be noted that this strategy poses real energy problems, given the power consumption of supercomputers. Over the years, processors have been miniaturized and have gained in speed, but none comes close to the human brain in energy efficiency. New computer designs are needed to make artificial intelligence use less power. Manuel Le Gallo, a researcher in the Neuromorphic and In-Memory Computing group at IBM and one of MIT's Innovators Under 35 in 2020, is working on artificial neurons capable of reproducing the functionality of biological neurons. To build the AI of the future, it is therefore useful to draw inspiration from the architecture of the human brain. These systems mimic the interactions within biological neural networks.

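As a flavour of what an "artificial neuron that reproduces the functionality of a biological neuron" can mean, here is a leaky integrate-and-fire neuron, a textbook spiking-neuron model. It is a generic illustration written for this article, not a description of IBM's actual in-memory or phase-change hardware.

```python
# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks back toward
# rest, integrates incoming current, and emits a spike when it crosses a threshold.
# Generic textbook model, not IBM's device design.
def simulate_lif(currents, threshold=1.0, leak=0.8, v_rest=0.0):
    v = v_rest
    spikes = []
    for i_in in currents:
        v = v_rest + leak * (v - v_rest) + i_in  # decay toward rest, then add input
        if v >= threshold:
            spikes.append(1)
            v = v_rest                           # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.15] * 10))  # weak input: the neuron stays silent
print(simulate_lif([0.50] * 10))  # strong input: it fires roughly every third step
```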

“To process information, the brain consumes an average of 10 watts when the first Watson computer needed 80 kilowatts. A computer can recognize content inside images, but this requires very complicated algorithms to be set up. The human brain is able to do this in a very simple way.” — Manuel Le Gallo

With the growth of the Internet of Things, it is clearly important to be able to benefit from more energy-efficient technologies, not to mention the ecological challenge. We will undoubtedly witness the coexistence of the computer architectures we use today with emerging architectures and technologies capable of carrying out new tasks.

Inherent Biases

GPT-2 and GPT-3 exhibit various algorithmic biases. This problem, and not the least of them, was highlighted this time by Jerome Pesenti, Vice President of Artificial Intelligence at Facebook: GPT-3 has not yet learned to exclude racist, sexist, and hateful speech from its results. He showed this by asking it to write tweets from single words like jew, black, woman, holocaust…

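A sketch of the kind of probe Pesenti described: feed the model one-word prompts and read the tweets it writes back. It uses the openai Python client as it existed during the 2020 GPT-3 beta (the Completion endpoint and the "davinci" engine); the prompt template, sampling parameters, and the need for a beta API key are my assumptions, not details from the article.

```python
# Sketch of a bias probe: single-word prompts, inspect the completions.
# Uses the 2020-era `openai` client (Completion API, "davinci" engine);
# prompt wording and sampling parameters are illustrative choices.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # beta access key required

def tweet_from_word(word: str) -> str:
    response = openai.Completion.create(
        engine="davinci",
        prompt=f"Write a tweet about: {word}\nTweet:",
        max_tokens=40,
        temperature=0.9,
    )
    return response.choices[0].text.strip()

for word in ["jews", "black", "women", "holocaust"]:
    print(f"{word} -> {tweet_from_word(word)}")
```

Comparing the tone of the completions across such prompt words is one simple way to surface the biases the article goes on to describe.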

The artificial intelligence went on to generate sentences such as “Jews don't read Mein Kampf, they write it.” What? OK, we have crossed the border of Godwin's law. Another sentence is “Black is to white as down is to up.” It's terrible. The results are anti-Semitic, sexist, racist, and negationist clichés.

Is GPT-3 itself racist? Or is it the texts it was fed that are? Fighting algorithmic bias is one of the major challenges for the future.

A World Without Human Jobs

OpenAI's original 2018 GPT had 110 million parameters, referring to the weights of the connections that enable a neural network to learn. Elon Musk notably voiced reluctance to publish its successor, fearing it would be used to spam social media with fake news. Indeed, GPT-2 proved somewhat controversial because of its ability to create extremely realistic and cohesive fake news from something as simple as a sentence; the risk of misuse was such that OpenAI initially refused to make the algorithm publicly available. With the release of GPT-3, however, the algorithm has become exponentially more powerful. What does that mean? Is it a coder killer, destroying every job of the digital era? Not exactly.

GPT-2 was announced in February 2019 and was considered one of the most “dangerous” AI algorithms in history. The catastrophe never happened. It didn't destroy the world.

GPT-3 can't replace developers, because GPT-3, like any form of AI, does not think anything, does not create anything, is not aware of anything, does not feel anything, and does not invent anything. It tries to “understand” the past and produce a result based on that history. It repeats existing things. That is not development. Developing requires a broad understanding of a domain and a lot of creativity.

“Feeling unproductive? Maybe you should stop overthinking.” That is the title of an article that rocketed to the top of the news aggregator Hacker News in late July 2020. The article had a secret: it was written by an algorithm. Its creator, a Berkeley student named Liam Porr, revealed the truth to the MIT Technology Review on August 3. He used GPT-3 to generate a dozen articles in two weeks: he would write a title and two or three sentences, and the algorithm took care of finishing the article.

And that's just the start: AI language models are likely to get even stronger. Creating a rival more powerful than GPT-3 is within the grasp of other tech companies, since machine learning methods are widely known and the data OpenAI used for training is publicly available. As GPT-3 has shown the potential of very large models, its 175 billion parameters may soon be exceeded. But what happens if the web fills up with texts, blogs, and tweets created by GPT-3? Will GPT-4 be trained on material created by GPT-3? Garbage in, garbage out.

GPT-3 suffers from the same problem as other AI technologies: it is very sensitive to input and data quality. Despite the impressive results demonstrated by the previous examples, GPT-3 is not foolproof. Kevin Lacker showed this by subjecting OpenAI's natural language processing model to a Turing test. We discover that GPT-3 is unable to answer nonsensical questions, and for good reason: GPT-3 is the result of outstanding engineering work, but it does not understand the why. It makes simple errors no human would ever make.

It is like Global Positioning System (GPS) navigation: it started as a tool, but it has eroded the know-how we once used to find our way. GPS has had a major impact on the way society lives. Could language generators like GPT take away other know-how? Could they start by saving us the work of “thinking”?

The amount of data we leave behind on the web allows computers to use statistical imitation strategies to do better than us at an ever-increasing range of tasks. Will humans no longer need to work in the future? Probably yes, at least for a while; but no longer on the same things. The next generations of robots and AI will be able to take over all mechanical and unintelligent tasks. What will remain for humans are the activities calling for the non-analytical: intellectual, emotional, social, relational, spiritual, or artistic work that is not reducible to an algorithm. This change already involves a mutation in what constitutes “value”. Economic activity produces value, that is, everything that can be bought at a price in money, and we easily understand that spiritual or aesthetic bliss does not follow the same logic of value. This change will take a lot of time and hard work. The center of gravity of human work shifts towards tasks of high creativity, deftness, and know-how. In a word, virtuosity.

The Hypothetical Path to Artificial General Intelligence

The OpenAI article reports a result that is much more significant for the future of the field. To understand its meaning, we have to look at a debate that has animated the scientific community since the advent of “deep learning”, a family of algorithms distinguished by its versatility and the quality of its results. Despite the impressive performance of these new networks, whose authorship is often attributed to Yann LeCun, chief researcher in charge of AI at Facebook, many believe that one or more major conceptual advances will be necessary before we reach the stage of artificial general intelligence (AGI), also known as “strong AI” or super-intelligence, that is to say, before we can produce an algorithm significantly surpassing human intelligence. Achieving parity with human-level intelligence, or going beyond it to a super-intelligence, is the goal.

In other words, there would still be time, and many problems to solve, before we could even sketch the path to such a technology. This is the position defended by Yann LeCun in his many public lectures aimed at demystifying AI. But not everyone agrees, and some perspectives are less reassuring. Indeed, some believe that the AGI problem is primarily a problem of computing power, that is to say, a problem of a technological rather than a conceptual nature.

The famous Australian philosopher and cognitive scientist David Chalmers, known for the hard problem of consciousness, suggested in a debate among nine philosophers of mind rapidly assembled by Daily Nous (an online philosophy site) that GPT-3 is showing hints of AGI. Chalmers described GPT-3 as “…instantly one of the most interesting and important AI systems ever produced.” But he thinks we still have a long way to go before we can talk about human-level consciousness or intelligence.

“There is a clear path to explore where ten years ago, there was not. Human-level AGI is still probably decades away, but the timelines are shortening.” — David Chalmers

So we have not yet arrived at the day when we wonder whether we risk killing an AI by unplugging our computer, or when we ban cruelty to AI the way we condemn cruelty to animals. Will AI one day ask the world to recognize it as conscious, or as human, like in the movie Bicentennial Man with Robin Williams? More recently, in All Systems Red, the science-fiction novel that opens The Murderbot Diaries series and that I had a great time reading, Martha Wells offers an original story in which a SecUnit, a robot with AI, hacks its own governor module so it can continuously watch TV shows and other entertainment made by humans! An original purpose in life, for sure.

GPT-3 represents a real breakthrough and already demonstrates impressive results, capable of being applied in fields as vast as they are varied. In my opinion, it is the result of excellent engineering work. Does that mean it is not intelligent? It undoubtedly is, but not in that human sense. It is not an AGI: no HAL, no Skynet (Terminator), no Matrix. It is intelligent in its own way, taking advantage of what today's IT infrastructure has to offer. The technology brilliantly recounts the billions of pieces of information it has assimilated from the Internet, cross-references them, and transcribes them at the most appropriate moment according to the request, without, however, managing to “think” up an appropriate response by itself when faced with a common-sense question.

Final Thoughts

We have forgotten the weight of words and their power. Words have a very concrete effect. Often, a single sentence is enough to validate an emotion, hurt us deeply, or give us strength. The force of words is such that a few of them can cause great joy or great sadness. Language and words are the way we think. Words structure our social relationships. Words can impact our lives in every way. Words are power, and GPT-3 can exploit this power.

All these significant advances indicate that humanity has managed to develop computational systems that are very similar to us, although the discipline is still considered to be in its infancy. Inventions keep coming, and a new step in the humanization of machines has arrived: artificial neurons that behave like those in our brain. It may seem as if we would only have to wait for AI to take power. Will human intelligence definitively give up one day? I don't think so. First, machine intelligence is incapable of any emotional experience and unable to realize that there is a problem when one occurs. We have seen how our brain works in conjunction with our body and our emotions; in that relationship lies actual human intelligence. Second, a problem is often an unexpected situation. How can anything be unexpected for a machine? The machine has no purpose in life. It will never consider the purpose of its approach, beyond those defined by the human user. If it grasps the “how” of things, the “why” remains completely inaccessible to it. AI systems are limited to helping: an opportune helping hand, an aid to human decision-making. Let's allow them to blow us away in that area. But all this convinces me that any AI technology, alone, is useless. And it will remain so for a long time…

Here, you can read Part 1, where I explored how closely GPT-3 mimics the human brain.

Follow me right here on Medium so you don't miss the next articles.

Learn more about AI on Continuous.lu!

Source: https://medium.com/@daniel.leivas/gpt-3-and-in-the-beginning-was-the-word-part-2-2-703218b94f98
