Quantity Can Have a Quality of Its Own for Language Models

The recent advances in language modeling with GPT-3 got me thinking: at what point does a quantitative change in a machine's language-generation ability cross a boundary into a qualitative change in our assessment of its intelligence or creativity?

When a sand heap met Eubulides

How many grains of sand can you take from a sand heap before it's no longer a heap? Or, more personally, how many hairs on your head can you afford to lose before you're bald, or pounds before you're thin? Maybe it's fun to annoy someone by asking one of these sorites paradoxes, attributed to the Greek philosopher Eubulides, precisely because they arise when language is imprecise. They expose that words we commonly use without hesitation, like heap, bald, thin, or even intelligent and creative, where we think we know exactly what we mean, actually have boundaries that can be quite vague when you really start to dig into them.

You can think of what's going on here as a quantitative change, in grains of sand, hair, or weight, leading to a qualitative change that ascribes a property to something, like being a heap, bald, or thin.

Hegel developed an explicit relation between quality and quantity in his Science of Logic:

[W]e have seen that the alterations of being in general are not only the transition of one magnitude into another, but a transition from quality into quantity and vice versa, a becoming-other which is an interruption of gradualness and the production of something qualitatively different from the reality which preceded it — Hegel

The idea was then taken further by Marx and Engels into the law of the passage of quantitative changes into qualitative changes, and finally arrived in the most familiar and widely misattributed form you've likely heard:

Quantity has a quality of its own — Various

While it's not what any of them had in mind, at what point does a quantitative change in a machine's language-generation ability cross a boundary into a qualitative change in our assessment of its intelligence or creativity?

Language Models and GPT-3

The release of GPT-3 from OpenAI has shown that an incredibly wide variety of language generation applications — from writing fiction to poems to computer code — can be performed by a fairly typical language model scaled up and trained on the largest amount of data yet.

Language models have been used in the NLP community for decades, becoming increasingly complicated and relying on more and more data. A language model is the technical term for a mathematical model of language produced by an algorithm that uses existing written text to calculate the probabilities of words appearing next to each other, specifically how likely a next word or sequence of words is given a previous sequence of words. After training the language model by computing these probabilities, the model can be used to generate new text: start with a word or phrase as a prompt, and continue calculating the most probable next word for as long as you want.

When built well, they generate syntactically fluent language, although it used to be fairly easy to tell when text was generated from a model — it was clunky, repetitive, and lost coherence within at most a few sentences.

The algorithm used to build GPT-3 is still trained only by predicting the next sequence of words, but it does so for a model with 175 billion parameters — several orders of magnitude more than most previous language models — and on a huge amount of data taken directly from the internet (i.e. produced by us). It is a very impressive engineering feat.

Fluency, fool me once

The most striking aspect of the language produced by GPT-3 is how fluent it is across a variety of genres, how well it stylistically adapts to the given prompt, and how long the coherence of the generated text lasts.

It's natural to associate the fluency of language with how intelligent the process that generated it must be. In other words, it's hard to separate thinking up something to say from being able to say it well: what to say from how to say it. It's a human bias that helps explain why we're taken in by a smooth talker before realizing there's little substance, or, conversely, why we assume a lack of cognitive capabilities when someone can't express themselves.

What to say starts by purposefully selecting some concept to represent in language. Whether the concept is an abstract idea in your mind or a spreadsheet table, it is a form of data, and you want to transform it into language as correctly and faithfully as possible. If you express your idea in language well enough to allow the reader to interpret what you’re saying correctly, your language has sufficient adequacy or accuracy.

How to say it comes back to the fluency, whether the language used is understandable, regardless of whatever it is you’re saying. You can write an exceptionally fluent essay on bees, but if you were trying to give someone a quinoa recipe, it’s completely inadequate. A process, whether human or machine, can generate fluent language describing Mars or Elon Musk, and it doesn’t have to have any connection to reality or truth to be comprehensible.

Fluency without adequacy, that’s easy to imagine. Fluency is on the surface, it’s visible. It can be untethered from trying to represent anything specific and still come off fine.

What’s harder to imagine is adequacy without fluency. For me to assess the adequacy of what you’re saying, I need to know that you’re trying to give me a recipe, and not talk about bees. Or I need to trust that whoever (or whatever) wrote the facts about Mars I’m reading knew what they (or it) was talking about. In either case, I need to be able to create an interpretation of the concept and data you’re relaying through language. But in order for me to create an interpretation, you need to first be coherent enough.

Adequacy requires selecting something specific to represent, and being able to compare how well it’s represented. I think that’s why fluency is both easier to artificially manufacture and gives the impression of adequacy. Our cognitive bias is to default to truth. If language is fluent, we understand it; if we understand it, we create an interpretation of what is being said; if we create an interpretation, we assume it’s accurately representing the concept and data it set out to represent. Why else would someone take the time to write it, right? :)

Maybe I'm a language model too

When we write or speak, words usually come out of our mouths or our hands without any conscious effort about how they got there. We have an unconscious process for generating the next word; are we similar to a language model, finding the most probable next word from our prior experience with language? Is our ability to write not only fluently but adequately a matter of having several orders of magnitude more parameters in our brains than the current language models, and of having seen lots and lots of text?

Certainly the things we say are not always correct, i.e. what we say is not adequate to what we mean, whether we think it is or not; people make mistakes. I misremember and make things up; how is that different from what the language model is doing?

Adaptability is where it's at

Going one step further, the most impressive part of GPT-3 is likely not the fluency of the language it generates, but the ease with which it can perform different tasks given only a few prompting examples. Most machine learning models are trained to perform a specific, discrete task, like predicting the sentiment of a restaurant review or answering trivia questions, but GPT-3 has shown an impressive ability to perform many different kinds of language generation without being specifically trained to do so.
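One way to make "a few prompting examples" concrete: the task is communicated entirely through the text of the prompt itself. The helper below is hypothetical (it is not GPT-3's actual API); it just sketches how a few-shot prompt for a translation task might be assembled before being handed to the model as seed text:

```python
def build_few_shot_prompt(task_description, examples, query):
    """Assemble a few-shot prompt: a task description, a handful of
    solved examples, and one unsolved query for the model to continue."""
    lines = [task_description, ""]
    for source, target in examples:
        lines.append(f"Input: {source}")
        lines.append(f"Output: {target}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model's continuation becomes the answer
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("cat", "chat")],
    "dog",
)
print(prompt)
```

The same model, with no retraining, can be pointed at sentiment, trivia, or code completion simply by swapping out the examples in the prompt.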

Adaptability is a core human trait. We all build models of the world in our minds — of your house, your friends, yourself. You use the model of the world you’ve built from all your prior experience to go into novel situations and make reasonable decisions. Not only do you not forget how to brush your teeth just because the color, size, or shape of the toothbrush changed, but if you have the intent to brush your teeth and there’s no toothbrush around, you can create something that will act like a toothbrush from completely different materials.

Adaptability is closely tied to creativity, the ability to create something new and worthwhile. Adequacy is critical in a legal memo or biography, and it’s relatively easy to judge the adequacy by comparing these fact-based writings to some reality, but what about fiction, poetry, and other forms of creative writing? How useful or measurable is adequacy there? Is fluency sufficient for creativity?

The language produced by even the simplest language models from decades ago can be said to create something new; maybe that's sufficient to call any such process creative, but it doesn't seem like a satisfactory answer.

If you didn’t know a word of French, but randomly picked words from a French dictionary until you filled 100 pages, and happened to produce a coherent work of fiction, were you being creative? Taken a bit further, if you have a monkey, a typewriter, and infinite time, eventually it will type out any book you can think of, but it’s unlikely you’d call that creative.

Where do we draw the line? It seems like we need to look at the worthwhile aspect of creativity, but how do we measure whether a work of fiction is worthwhile? (that sounds awfully close to asking what the purpose of art is…)

Intent to the rescue?

It seems like the adequacy and creativity questions of language models, including GPT-3, come down to introspection and intent. A typical flow in human conversation can be seen as four steps. First, you intentionally choose what to say (let’s leave free will out of this for now). You start with an intent: a concept or idea of what you want to say. Second, you choose words to transform that intent or concept into language. Third, the listener hears or reader reads the words. Fourth, they interpret the words into a concept in their mind.

You use language for a reason: to transform something conceptual from one form into words. That concept can take the form of a sales report, where your words reference customers, transactions, dollars, profits and losses; or it can be a creative idea for a novel, where you imagine a character, and the words describe a person, their hair color, how they walk, their own thoughts and concepts (I know, meta).

The point is, when you think of words, they represent something in the real world; they refer to objects, whether real or imagined. Words are connected to your other perceptions of the world, and to the actions you can take.

When GPT-3 produces sequences of characters, that’s all they are, even though we see them as meaning-carrying words. For GPT-3, the words it produces do not refer to any concept, intent it is trying to represent, or action it is trying to take. There is no concept behind the words. When it produces a poem about Elon Musk on Mars, it has no concept of who Elon Musk is or what Mars or a poem are; no connection to any objects.

When you read text produced by a language model like GPT-3, the four steps above break down in a very important way. The language model doesn't have its own intent. It's not an agent acting in the world. A human has to start by prompting GPT-3 with the seed text. The language model takes your concept, which you transformed into words (so you're still doing the first two steps), and continues the second step by generating the words most probable to occur next in the sequence.

The human prompter seems more analogous to a teacher assigning an essay topic that the student (GPT-3) needs to write about. We as humans are still reading and interpreting a meaning, because for us words actually have meaning and refer to objects, but those references were not intended by the model. The fact that we can interpret them is a result of the fluency, not adequacy.

Even for creative writing, there’s a reason why someone wrote a poem or a novel, and one or more concepts they were trying to express. Maybe we need to separate out creativity into the process of introspection, the effort that goes into the proper translation of a concept into language, and the final linguistic expression.

GPT-3 has certainly produced writing that is funny, sarcastic, or makes you think, so it would qualify for the third form of creativity. But since it has no understanding of the words, it is not, through any intent of its own, trying to be funny, sarcastic, or to make you think. Those are your interpretations, and could even be a result of GPT-3 reusing large sequences of words it was previously given directly from people's writings on the internet.

Many of the examples of its writing are also cherry-picked by humans. Maybe it would be unfair to do otherwise; after all, many human attempts at writing fail. But are we then applying our human standards of judgment, or choosing a biased sample produced by a small percentage of monkeys?

The lack of intent to be funny or make you think can similarly be said for writing produced by people, so perhaps the effect on the reader is what matters. If some future language model can produce thousands of novels a day whose storylines and characters resonate with readers and sell, despite the model not having any intent to do so, maybe it will be quaint that I think something critical is missing on the side of the writer.

Intelligence is as intelligence does

It's clear that with GPT-3's size and training data, it has achieved language generation capabilities that force us to sharpen some of the questions we need to answer about machine intelligence. It has taken fluency, adaptability, and perhaps even a form of creativity to a level we have not seen before in language models. While some qualitative transitions in our interpretations of its writing seem justified, it should not be seen as having qualitatively crossed the boundary into the general type of intelligence we associate with people. Without the ability to connect the words it produces to concepts in the world beyond other words, how can it be said to understand, and without understanding what it's saying, how can something be intelligent?

If in the future this mathematical model of language is coupled with other types of models for vision, action, and other perceptions, we may have something that does have concepts that imbue its language with adequacy. We may also need to be more exact in our definition of what "intelligent" or "intelligence" means, and define different kinds of intelligence. There has certainly been continuous progress in the biological sciences toward understanding our own and other animals' cognitive behaviors, abilities, and limitations. But if the problems of precisely defining what AI is over the last 70 years, and other far simpler-seeming terms, like heap, are any indication, precision in our definitions may be a moving target. Maybe there's a range of behavior where it's truly indeterminate whether something is exhibiting intelligence or creativity. Or maybe the meaning comes down to how we use the words, and what function they serve in our everyday language. If we think of something as intelligent or creative, then it is.

Originally published at machineopinings.com on August 8, 2020.

Translated from: https://medium.com/machine-opinings/quantity-can-have-a-quality-of-its-own-for-language-models-fe5e665869a3
