Breaking Down the Innovative Deep Learning Behind Google Translate

What Google Translate does is nothing short of amazing. To engineer the ability to translate between any pair of the dozens of languages it supports, Google Translate’s creators utilized some of the most advanced and recent developments in NLP in exceptionally creative ways.

In machine translation, there are generally two approaches: a rule-based approach and a machine learning-based approach. Rule-based translation involves collecting a massive dictionary of translations, word-by-word or phrase-by-phrase, whose entries are pieced together into a translation. This approach, however, runs into serious problems.

For one, grammar structures differ significantly between languages. Consider Spanish, in which nouns have a masculine or feminine gender. All adjectives, as well as words like ‘the’ or ‘a’, must agree with the gender of the noun they describe. Translating ‘the big red apples’ into Spanish requires each of the words ‘the’, ‘big’, and ‘red’ to be written in plural, feminine form, since those are the attributes of the word ‘apples’. In addition, Spanish adjectives usually follow the noun, though some precede it.

Image created by author

The result is ‘las [the] grandes [big] manzanas [apples] rojas [red]’. This grammar, and the necessity of changing all the adjectives, makes no sense to a monolingual English speaker. Just within English-to-Spanish translation, there are too many disparities in fundamental structure to keep track of. A truly global translation service, however, requires translation between every pair of languages.

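To see concretely why word-by-word lookup breaks down here, consider a deliberately naive sketch of a dictionary translator. The mini-dictionary is a toy stand-in for a real phrase table:

```python
# Naive word-by-word lookup, ignoring gender, number, and word order.
EN_TO_ES = {
    "the": "el",       # should be el/la/los/las depending on the noun
    "big": "grande",   # should agree in number: grande/grandes
    "red": "rojo",     # should agree: rojo/roja/rojos/rojas
    "apples": "manzanas",
}

def naive_translate(sentence: str) -> str:
    return " ".join(EN_TO_ES.get(w, w) for w in sentence.lower().split())

print(naive_translate("The big red apples"))
# -> 'el grande rojo manzanas'; correct Spanish is 'las grandes manzanas rojas'
```
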
Within this task arises another problem: to translate between, say, French and Mandarin, the only feasible rule-based solution would be to translate French into a pivot language (probably English), which would then be translated into Mandarin. This is like playing a game of telephone: the nuance of a phrase said in one language is trampled over by noise and heavy-handed generalization.

Image created by author

By now, the complete hopelessness of rule- or dictionary-based translation should be clear, as should the need for some kind of universal model that can learn the vocabulary and structure of two languages. Building this model is a difficult task, however, for a few reasons:

  • The model needs to be lightweight enough to work offline, so users can access it even without an Internet connection. Moreover, translation between any two languages should be supported, all downloaded onto the user’s phone (or PC).
  • The model must be fast enough to generate live translations.
  • Elaborating on the example above: in English, the words ‘big red apples’ are sequential, but processing the data strictly from left to right would produce an inaccurate Spanish translation, since the adjectives, which precede the noun in English, change form depending on the noun they modify. The model needs to handle non-sequential translation.
  • Machine learning-based systems are always heavily reliant on the dataset, which means that words not represented in the data are words the model knows nothing about (it needs robustness and a good memory for rare words). Where would one find a collection of high-quality translated data representative of the entire grammar and vocabulary of a language?
  • A lightweight model cannot memorize the vocabulary of an entire language. How does the model deal with unknown words?
  • Many Asian languages like Japanese or Mandarin are based on characters instead of letters, so there is roughly one specific character for each word. A machine learning model must be able to translate between a letter-based system like English, Spanish, or German (whose accented characters are nevertheless letters) and a character-based one like Mandarin, and vice versa.

When Google Translate was initially released, it used a phrase-based algorithm, which is essentially a rule-based method with more complexity. Later, however, its quality improved drastically with the development of Google Neural Machine Translation (GNMT).

Source: Google Translate. Image free to share.

They considered each of the problems above and came up with innovative solutions, creating an improved Google Translate, now the world’s most popular free translation service.

Creating one model for every pair of languages is obviously ridiculous: the number of deep models needed would reach into the hundreds, each of which would need to be stored on a user’s phone or PC for efficient usage and/or offline use. Instead, Google decided to create one large neural network that could translate between any two languages, given a token (an indicator prepended to the input) specifying the target language.

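A rough sketch of this idea: the shared model consumes the source tokens with an artificial token prepended that names the target language. The `<2xx>` token convention follows Google's multilingual NMT work; the helper itself is illustrative:

```python
# Prepend an artificial target-language token so one shared network
# knows which language to produce. '<2es>' means 'translate to Spanish'.
def prepare_input(source_tokens: list[str], target_lang: str) -> list[str]:
    return [f"<2{target_lang}>"] + source_tokens

print(prepare_input(["Hello", ",", "world", "!"], "es"))
# -> ['<2es>', 'Hello', ',', 'world', '!']
```
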
The fundamental structure of the model is an encoder-decoder. One segment of the neural network seeks to reduce one language into a fundamental, machine-readable ‘universal representation’, whereas the other takes this universal representation and re-expresses the underlying ideas in the output language. This is a sequence-to-sequence (seq2seq) architecture; the following graphic gives a good intuition of how it works, how previously generated content plays a role in generating subsequent outputs, and its sequential nature.

AnalyticsIndiaMag. Image free to share.

Consider an alternative visualization of this encoder-decoder relationship (a seq2seq model). The intermediate attention between the encoder and decoder will be discussed later.

Google AI. Image free to share.

The encoder consists of eight stacked LSTM layers. In a nutshell, an LSTM is an improvement upon an RNN (a neural network designed for sequential data) that allows the network to ‘remember’ useful information and make better future predictions. To address the non-sequential nature of language, the first two layers add bidirectionality: pink nodes indicate a left-to-right reading, whereas green nodes indicate a right-to-left reading. This allows GNMT to accommodate different grammar structures.

Source: GNMT Paper. Image free to share.
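
A minimal PyTorch sketch of such an encoder, with illustrative sizes rather than Google's actual ones. Here the bidirectional reading happens inside a single `bidirectional=True` bottom layer, whose two directions correspond to the pink and green rows in the figure:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """GNMT-style encoder sketch: bidirectional bottom LSTM layer,
    unidirectional LSTM layers stacked above it."""
    def __init__(self, vocab_size=32000, emb_dim=256, hidden=256, depth=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bottom layer reads left-to-right and right-to-left at once;
        # the two directions are concatenated along the feature axis.
        self.bi = nn.LSTM(emb_dim, hidden, bidirectional=True,
                          batch_first=True)
        # The remaining layers read the merged representation.
        self.stack = nn.LSTM(2 * hidden, hidden, num_layers=depth - 1,
                             batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq, emb_dim)
        x, _ = self.bi(x)           # (batch, seq, 2 * hidden)
        x, _ = self.stack(x)        # (batch, seq, hidden)
        return x

enc = Encoder()
print(enc(torch.randint(0, 32000, (1, 7))).shape)  # torch.Size([1, 7, 256])
```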

The decoder model is also composed of eight LSTM layers. These seek to translate the encoded content into the new language.

An ‘attention mechanism’ is placed between the two models. In humans, attention helps us keep focus on a task by looking for answers to that task and not for additional, irrelevant information. In the GNMT model, the attention mechanism helps identify and amplify the importance of particular segments of the message, which are prioritized during decoding. This solves a large part of the ‘rare words problem’: words that appear less often in the dataset are compensated for with more attention.

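A minimal sketch of additive attention with illustrative dimensions; GNMT computes its attention scores with a small feed-forward network in roughly this spirit:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Score each encoder position against the current decoder state,
    then read out a weighted sum (the 'context') of encoder states."""
    def __init__(self, hidden=256):
        super().__init__()
        self.w_enc = nn.Linear(hidden, hidden, bias=False)
        self.w_dec = nn.Linear(hidden, hidden, bias=False)
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (batch, hidden); enc_outputs: (batch, seq, hidden)
        scores = self.v(torch.tanh(
            self.w_enc(enc_outputs) + self.w_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                           # (batch, seq)
        weights = torch.softmax(scores, dim=-1)  # attention distribution
        context = (weights.unsqueeze(-1) * enc_outputs).sum(dim=1)
        return context, weights

attn = AdditiveAttention()
ctx, w = attn(torch.randn(1, 256), torch.randn(1, 7, 256))
print(ctx.shape, w.shape)  # torch.Size([1, 256]) torch.Size([1, 7])
```
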
Skip connections, or connections that jump over certain layers, were used to promote healthy gradient flow. As with the ResNet (Residual Network) model, gradient updates can get caught up at one particular layer, affecting all the layers before it. With such a deep network, comprising 16 LSTM layers in total, skip connections are imperative not only for training time but for performance, allowing gradients to bypass potentially problematic layers.

Source: GNMT Paper. Image free to share.
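
The idea in code, as a small sketch: wrap a layer so that its input is added back to its output, giving gradients a path that bypasses the layer entirely:

```python
import torch
import torch.nn as nn

class ResidualLSTMLayer(nn.Module):
    """One LSTM layer with a skip (residual) connection around it."""
    def __init__(self, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, x):
        out, _ = self.lstm(x)
        return x + out  # the input 'skips over' the layer and is added back

layer = ResidualLSTMLayer()
print(layer(torch.randn(1, 7, 256)).shape)  # torch.Size([1, 7, 256])
```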

The builders of GNMT invested a great deal of effort into developing an efficient low-level system for training, running on the TPU (Tensor Processing Unit), a specialized machine-learning hardware processor designed by Google.

An interesting benefit of using one model to learn all the translations was that translations could be learned indirectly. For instance, if GNMT were trained only on English-to-Korean, Korean-to-English, Japanese-to-English, and English-to-Japanese data, the model would still yield good translations for Japanese-to-Korean and Korean-to-Japanese, even though it had never been directly trained on those pairs. This is known as zero-shot learning, and it significantly reduced the training required for deployment.

AnalyticsIndiaMag. Image free to share.

Heavy pre-processing and post-processing are applied to the inputs and outputs of the GNMT model in order to support, for example, the highly specialized characters often found in Asian languages. Inputs are tokenized according to a custom-designed system, with word segmentation and markers for the beginning, middle, and end of a word. These additions made the bridge between different fundamental representations of language more fluid.

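Google's actual wordpiece scheme differs in its details, but a greedy longest-match segmenter over a toy vocabulary conveys the flavor. The `##` continuation marker below is borrowed notation for illustration, not Google Translate's own:

```python
# Toy subword segmentation: split an unseen word into known pieces.
VOCAB = {"trans", "##lat", "##ion", "in", "##no", "##vate"}

def segment(word: str) -> list[str]:
    pieces, start = [], 0
    while start < len(word):
        for end in range(len(word), start, -1):  # longest match first
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in VOCAB:
                pieces.append(piece)
                start = end
                break
        else:
            return ["<unk>"]  # no known piece fits: truly unknown word
    return pieces

print(segment("translation"))  # -> ['trans', '##lat', '##ion']
```
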
For training data, Google used documents and transcripts from the United Nations and the European Parliament. Since these organizations produce information professionally translated between many languages, at high quality (imagine the dangers of a badly translated declaration), this data was a good starting point. Later on, Google began using user (‘community’) input to strengthen culture-specific, slang, and informal language in its model.

GNMT was evaluated on a variety of metrics. During training, GNMT used log perplexity. Perplexity is closely related to entropy, particularly ‘Shannon entropy’, so it may be easier to start from there. Entropy is the average number of bits needed to encode the information contained in a variable, and perplexity measures how well a probability model can predict a sample. One example of perplexity would be the number of characters a user must type into a search box before a query suggester is at least 70% sure which query the user will type. It is a natural choice for evaluating NLP tasks and models.

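A worked example with made-up probabilities: log perplexity is the average negative log-likelihood the model assigns to the correct tokens, and perplexity is its exponential. A model that is always confident and correct would score a perplexity of 1.0:

```python
import math

# Probabilities the model assigned to each correct next token (made up).
probs_of_correct_tokens = [0.5, 0.25, 0.1, 0.8]

log_perplexity = sum(-math.log(p) for p in probs_of_correct_tokens) \
                 / len(probs_of_correct_tokens)
perplexity = math.exp(log_perplexity)

print(round(log_perplexity, 3), round(perplexity, 3))  # 1.151 3.162
```
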
The standard BLEU score for language translation attempts to measure how close a translation is to a human one, on a scale from 0 to 1, using a string-matching algorithm. It is still widely used because it has shown strong correlation with human-rated performance: correct words are rewarded, with bonuses for consecutive correct words and for longer, more complex words.

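A quick illustration using NLTK's implementation (this assumes `nltk` is installed; single-sentence scores like this are noisy, and BLEU is normally reported over an entire test corpus):

```python
from nltk.translate.bleu_score import sentence_bleu

reference = [["the", "big", "red", "apples"]]  # tokenized human reference
candidate = ["the", "large", "red", "apples"]  # tokenized model output

# Weight only unigram and bigram precision for this short sentence:
# 3/4 unigrams match and 1/3 bigrams match ('red apples'), so the
# score is sqrt(0.75 * 1/3) = 0.5 -- consecutive hits are rewarded.
print(sentence_bleu(reference, candidate, weights=(0.5, 0.5)))  # ~0.5
```
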
However, it assumes that a professional human translation is the ideal translation, evaluates a model only on selected sentences, and is not very robust to different phrasings or synonyms. This is why a very high BLEU score (>0.7) is usually a sign of overfitting.

Regardless, an increase in BLEU score (represented as a fraction) reflects an increase in language-modelling power, as demonstrated below:

Google AI. Image free to share.

Using the developments of GNMT, Google launched an extension that could perform visual real-time translation of foreign text. One network identifies potential letters, which are fed into a convolutional neural network for character recognition. The recognized words are then fed into GNMT for translation and rendered in the same font and style as the original.

Source: Google Translate. Image free to share.

One can only imagine the difficulties that abound in creating such a service: identifying individual letters, piecing together words, determining the size and font of the text, and properly rendering the translated image.

GNMT appears in many other applications, sometimes with a different architecture. Fundamentally, however, GNMT represents a milestone in NLP: a lightweight yet effective design, built upon years of NLP breakthroughs, that is incredibly accessible to everyone.

Key Points

  • There are many challenges when it comes to providing a truly global translation service. The model must be lightweight, yet it must understand the vocabulary, grammar structures, and relationships of dozens of languages.
  • Rule-based translation systems, and even the more complex phrase-based ones, fail to perform well at translation tasks.
  • GNMT uses a sequence-to-sequence encoder-decoder architecture, in which the encoder and decoder are each composed of 8 LSTM layers. The first layers of the encoder read bidirectionally to accommodate non-sequential grammar.
  • The GNMT model uses skip connections to promote healthy gradient flow.
  • GNMT demonstrated zero-shot learning, which allowed translation between language pairs it was never directly trained on and significantly reduced training requirements.
  • The model was trained on log perplexity and evaluated formally using the standard BLEU score.

With the advancements of GNMT, which extend beyond text-to-text translation to image-to-image and sound-to-sound translation, deep learning has made a huge leap towards the understanding of human language. Its applications, not as an esoteric and impractical model but as an innovative, lightweight, and highly usable one, are unbounded. In many ways, GNMT is one of the most accessible and practical culminations of years of cutting-edge NLP research.

This was just a peek into the fascinating machine learning behind Google Translate. You can read the full-length paper here and visit the interface for yourself here.

Translated from: https://towardsdatascience.com/breaking-down-the-innovative-deep-learning-behind-google-translate-355889e104f1
