Common Sense: Still Not Common in AI

Tassilo Klein and Moin Nabi (SAP AI Research)

Deep learning has heralded a new era in artificial intelligence, establishing itself within a short time as an integral part of today's world. Despite its immense power, often achieving superhuman performance at specific tasks, modern AI suffers from numerous shortcomings and is still far away from what is known as general artificial intelligence. These shortcomings become particularly prominent in AI's limited capability to understand human language. Everyone who has interacted in one way or another with a chatbot or a text generation engine may have noticed that the longer the interaction goes on, the staler it gets. When generating long passages of text, for instance, a lack of consistency and human feel can be observed. Essentially, this highlights that the underlying model does not really understand what it says and does. Rather, it more or less walks along paths of statistical patterns of word usage and argument structure, which it acquired during training by perusing huge text corpora. This rote-like behavior of replicating statistical patterns reveals the absence of a crucial component: common sense.

But what exactly is common sense? There is actually no clear definition. It is one of those things we take for granted and only notice when it is missing. Basically, common sense incorporates aspects of literally everything we deal with, ranging from natural laws and social conventions to unwritten rules. Consequently, the spectrum covered by the concept is quite broad, which explains the fuzzy nature of its definition. Even though common sense is generic and applies to all kinds of domains, one particular medium stands out as a popular testbed: natural language. Hence it is no big surprise that injecting common sense into NLP is a fundamental research challenge. And because text processing applications have far-reaching practical implications for consumers, common sense in AI is more than just an academic gimmick. To better understand why this is the case, let us first look at the shortcomings of current models in more detail.

Why Deep Learning Struggles with Common Sense

Among the most significant shortcomings of neural networks is the lack of interpretable behavior in the sense of human-like reasoning paths. This can be attributed mainly to how machines are trained. In the standard supervised learning paradigm, the model is provided with input data and target labels. During training with the conventional backpropagation method, the model's weights are tweaked step by step until they establish, to some degree, a mapping from input to the desired output. As this learning procedure is purely goal-oriented, the resulting model tends to resort to some high level of pattern matching. However, these patterns can be quite complex, and without extra precautions the model is free to choose any solution that achieves the goal mathematically. Unsurprisingly, it is more often than not prone to finding shortcuts that do not emulate human-like reasoning paths. Human-like reasoning is extremely complex, and its inner workings are far from fully understood. What is known, however, is its heavy reliance on mechanisms such as conceptualization and compositionality, which are extremely difficult to replicate within a machine. Concepts are mental representations of objects and categories, which, according to Murphy (2002), are "the glue that holds our mental world together" and help us understand and respond appropriately to new entities of previously seen categories. This is tightly connected to what is known as compositionality, yet another capability considered key to human generalization: the capacity to understand and produce novel combinations from known components.
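
To make the goal-oriented nature of this paradigm concrete, here is a minimal sketch of such a supervised training loop; PyTorch and the toy data are our own illustration, not something taken from the original article:

```python
import torch
import torch.nn as nn

# Toy supervised setup: random inputs paired with binary target labels.
inputs = torch.randn(100, 10)
labels = torch.randint(0, 2, (100,))

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    optimizer.zero_grad()
    logits = model(inputs)          # forward pass: input -> prediction
    loss = loss_fn(logits, labels)  # deviation from the target labels
    loss.backward()                 # backpropagation: compute gradients
    optimizer.step()                # tweak the weights step by step
```

Note that the loop optimizes nothing but the input-to-target mapping; any shortcut that lowers the loss is fair game, which is exactly the tendency described above.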

The absence of these human reasoning capabilities is precisely what makes machine learning models take shortcuts and exhibit seemingly non-intuitive behavior. This problem becomes particularly prominent in the presence of infrequent but significant events, for which machines lack generalization schemes. For that reason, those events are also referred to as "black swans", which highlights the essence of the issue in a more figurative fashion. This quaint metaphor has its origin in the long-prevailing assumption in Europe that all swans are white. A system such as a self-driving car AI might only have been exposed to white swans during training. In the absence of sophisticated reasoning mechanisms, the car control system might react in a rather unpredictable way when confronted with something new. Given the sheer infinite combinatorial space of concepts in the real world, mastering black swans requires that a model possess a notion of transfer between concepts. Knowing the concept of "animal" with the subgroup "swan" and the concept of color, it should be able to connect the two without having seen this combination before. That is why mastering black swans entails acquiring the capability to conceptualize during training, facilitating a transfer of concepts. However, as the space of combinations is huge, gauging plausibility at inference time is crucial, which directly connects the problem to common sense. Commonsense reasoning, with its inherent ambiguity in terms of concepts and their relationships, constitutes a case in point in this regard. To truly reason about common sense, a model has to come up with a process of concept disentanglement and compositional inference. Now that we know a bit more about common sense and its importance, and have touched on its intersection with AI: how is common sense actually defined in the AI space? If you expect a crisp definition, you might again be disappointed. However, one of the first definitions of common sense in AI was put forward by AI pioneer John McCarthy, who coined the term "artificial intelligence". In his seminal work "Programs with Common Sense" (1958), he wrote:

“We shall therefore say that a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows.

[…]

Our ultimate objective is to make programs that learn from their experience as effectively as humans do.”

Assessing Commonsense Reasoning

Given the vagueness of commonsense reasoning, we need a somewhat objective measure to assess the commonsense reasoning capabilities of programs and check such claims. As you might have already guessed, this is anything but a trivial endeavor. One of the most well-known challenges in this regard is the Winograd Schema Challenge (WSC), which was devised as an alternative to the famous Turing Test. A Winograd schema is a pair of sentences that contain two nouns and an ambiguous pronoun, and differ in as little as one word. The challenge consists of resolving the pronoun correctly to one of the nouns. The differing word flips the correct resolution between the two sentences of the pair. A key characteristic of the test is that humans resolve the pronouns with no difficulty, whereas an AI without commonsense reasoning cannot distinguish the candidates. Human experts therefore created the set of challenge tasks, incorporating different kinds of commonsense entities. To make things a bit more concrete, let us look at a very popular example from the WSC:

1) The trophy doesn’t fit in the suitcase because it is too small.

2) The trophy doesn’t fit in the suitcase because it is too big.

Answer candidates: A) the trophy B) the suitcase

In this example, the nouns are "the trophy" and "the suitcase", with the ambiguous pronoun being "it". As can be seen, changing the adjective from "too small" to "too big" changes the direction of the relationship, which makes the task extremely hard. Resolving it entails the conceptualization of an item (trophy) and a container (suitcase) via the relation (fitting). Understanding the high-level concepts behind the sentence therefore allows resolving all kinds of combinations: replacing the suitcase with some other container, the AI system should still come to the same conclusion. Now that you are familiar with common sense and a way to test it, we will discuss how commonsense reasoning has been approached technically.
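
To give a flavor of how such a schema can be attacked with a statistical language model, here is a minimal sketch in the spirit of language-model-likelihood baselines (e.g., Trinh and Le, 2018): substitute each candidate for the pronoun and keep the reading the model considers more probable. The model choice (gpt2) and the naive string substitution are our own simplifying assumptions:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_nll(sentence: str) -> float:
    """Average negative log-likelihood of a sentence under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

sentence = "The trophy doesn't fit in the suitcase because it is too small."
candidates = ["the trophy", "the suitcase"]

# Substitute each candidate for the pronoun and compare likelihoods;
# the commonsense answer for this sentence is "the suitcase".
scores = {c: avg_nll(sentence.replace("it is", c + " is")) for c in candidates}
print(min(scores, key=scores.get))
```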

Commonsense Reasoning in AI

A lot of time has passed since John McCarthy put forward his definition of common sense in AI in the 1950s, yet despite the recent advances in machine learning, not much has changed in terms of true commonsense reasoning capabilities. Recently, however, the topic has regained popularity, which can be attributed to the latest progress in NLP and to the importance of the task. Unsurprisingly, there exists a plethora of approaches to tackling commonsense reasoning, which can roughly be clustered into three groups:

  • Rule- and knowledge-based approaches
  • Generic AI approaches
  • AI language model approaches

Current best-performing approaches are from the latter category. The underlying assumption of these methods is that their training corpora, such as encyclopedias, implicitly contain some commonsense knowledge that the model can pick up. However, this assumption is problematic because such texts barely spell out common sense, precisely because it is assumed to be trivial. These methods usually function in a two-stage learning pipeline. Starting from an initial self-supervised model, commonsense-aware word embeddings are obtained in a subsequent fine-tuning phase. Fine-tuning forces the learned embeddings to solve the downstream WSC task merely as a plain coreference resolution task. Additionally, to fully utilize the power of language models, conventional approaches require training data annotated in terms of what is right and wrong. However, the creation of large labeled datasets and knowledge bases is cumbersome and expensive, as it is done manually by experts. This applies particularly to commonsense reasoning, where compiling the complete set of commonsense entities of the world is intractable due to the potentially infinite number of concepts and combinations.

Language models capture the probabilities of word occurrence based on the text they are exposed to during training. Apart from capturing word statistics, neural language models also learn word embeddings, i.e., distributed representations, from raw text data. The recently proposed BERT picks up the notion of language modeling in a slightly different way. Instead of optimizing a standard language model objective (modeling the probability of a word given a preceding context), BERT has a pseudo-language-model objective. Specifically, BERT leverages what is known as a masked language model, which tries to complete sentences in which words were randomly replaced by a mask ("_____"):

“The trophy does not fit into the suitcase, because ____ is too big.”
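
Masked-token prediction of this kind can be tried out directly with a pretrained model. The following sketch uses the Hugging Face transformers fill-mask pipeline with bert-base-uncased (our choice for illustration); BERT's special token [MASK] plays the role of the blank:

```python
from transformers import pipeline

# Ask a masked language model to fill in the blanked-out word.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The trophy does not fit into the suitcase, because [MASK] is too big."
for prediction in fill_mask(sentence, top_k=3):
    print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
```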

To solve this task, the model trains a so-called attention mechanism. It provides cues as to which words the model might want to pay more attention to when solving a task. In the preceding example, more attention goes to the word "trophy" than to "suitcase", because the trophy being the subject is the more plausible reading. However, as we will see shortly, filling in words like this is particularly challenging due to the inherent ambiguity and technically requires a notion of common sense. Apart from improving the performance of models, self-attention also promises insights into a model's inner workings. This is quite a desirable property, as deep learning is often taunted as being a black box. In addition to masked word prediction, training BERT entails another auxiliary classification task. Specifically, it is a binary classification objective predicting whether two sentences are consecutive. All of this taken together yielded embeddings that can easily be transferred by fine-tuning to a wide range of downstream tasks, which propelled the domain of NLP into a new era.
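
As an illustration of inspecting self-attention (a generic probe, not the specific method of the approaches discussed in the next post), the following sketch extracts BERT's attention weights from the pronoun "it" to the two candidate nouns:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

text = "The trophy does not fit into the suitcase, because it is too big."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one tensor per layer: (batch, heads, seq, seq).
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
it_pos = tokens.index("it")
attn_from_it = outputs.attentions[-1][0][:, it_pos, :].mean(dim=0)  # head avg

# Assumes both nouns survive as single word pieces in the vocabulary.
for token, weight in zip(tokens, attn_from_it.tolist()):
    if token in ("trophy", "suitcase"):
        print(f"attention it -> {token}: {weight:.3f}")
```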

What's up next?

In the next blog post, to be published soon, we will discuss two approaches for commonsense reasoning developed at SAP AI Research that leverage the BERT language model and have recently been published at ACL (Annual Conference of the Association for Computational Linguistics), the premier conference in the field of computational linguistics. The research focus has been on algorithms with minimal supervision, so as not to establish shortcuts for shallow task solving. Thus, we will start with an unsupervised approach that directly exploits the self-attention of the BERT language model without any further fine-tuning. Afterward, we will present a more powerful approach that operates in a self-supervised fashion and outperforms supervised methods despite being only weakly supervised.

Translated from: https://medium.com/sap-machine-learning-research/common-sense-still-not-common-in-ai-9d68f431e17f
