This blog covers the oblivious training function and the internals of Maggy, presented at the Spark+AI Summit 2020 on June 26th.

TLDR; Maggy is an open-source framework for distributed machine learning. In this post, we introduce a new unified framework for writing core ML training logic as “oblivious training functions”. Maggy enables you to reuse the same training code whether you are training small models on your laptop or scaling out hyperparameter tuning and distributed deep learning on a cluster. Maggy enables the replacement of the current waterfall development process for distributed ML applications, where code is rewritten at every stage, with an iterative development process.

Most of the publicly available ML source code for training models is not built to scale out on many servers or GPUs. Getting started with deep learning is relatively easy these days, thanks to fast.ai, GitHub, and the blogosphere. The hard part for practitioners starts when the code examples found online need to be applied to more challenging domains, with larger and custom datasets, which in turn require a bigger, customized version of the model to fit that dataset. Using publicly available code as a starting point for model development on clusters, you will end up in a process similar to the one depicted in Figure 1.

The software development process for ML models is rarely the perfect waterfall development model, as shown in Figure 1 without the green arrows. In the (discredited) waterfall development process, you would start out with requirements, then move on to design, implementation and test. The (current!) equivalent process in ML model development is the following, as shown in Figure 1 with the green arrows. You start out on your local machine with a subset of the data in order to explore and design the model architecture. Then you move to use a cluster of resources (such as GPUs) to more quickly find hyperparameters, run lots of parallel ablation studies (many skip this stage!), and finally scale out the training of the model on the large dataset using lots of resources. Then, you’re done, right? Wrong! You typically iterate through the stages, finding better hyperparameters, adding new features, rewriting for distribution, going from your laptop to the cluster and back again.

We rewrite our model training code for distribution as it offers many benefits — faster training of models using more GPUs, parallelizing hyperparameter tuning over many GPUs, and parallelizing ablation studies to help understand the behaviour and performance of deep neural networks. However, not only will the boilerplate model training code need to be modified, but as you move along the process, distribution will introduce additional obtrusive code artifacts and modifications, depending on the frameworks used. This leads to a mix of infrastructure code and model code, with duplicated training logic, hyperparameters hard-coded into the training loop, additional tracking code to keep a record of your changes, and config files for experiments:

Figure 2: Model development creates a mix of code artefacts duplicating code for every step, making iterative development hard.

With such a code base, iterating becomes near impossible as it requires adapting many copies of redundant code. And finally, imagine handing the code off to an ML engineer to productionize the model.

The Oblivious Training Function

Figure 3: The oblivious training function makes training code reusable among all steps of the process.

We introduce an open-source framework, Maggy, that enables write-once training functions that can be reused in single-host Python programs and cluster-scale PySpark or Distributed TensorFlow programs. Training functions written with Maggy look like best-practice TensorFlow programs where we factor out dependencies using popular programming idioms (such as functions to generate models and data batches). We call this new abstraction for ML model development the oblivious training function, as the core model training logic supports distribution transparency, that is, the training code is not aware (oblivious) of whether it is being run on a single host or whether it is being executed on hundreds of devices in parallel.

What does it mean for training code to be distribution transparent?

Transparency in distributed systems refers to hiding distribution-specific aspects of an application from the developer — for example, a developer invoking a function may not know (or need to know) if the function she is calling is local to her application or on a remote server. This means, distribution transparency enables developers to write code that is reusable between single-host and distributed instantiations of a program:

Figure 4: Distribution Transparency hides complexities related to distribution from the developer, making the same code executable on a single-host as well as in a large cluster. Transparency leads to DRY training code.

Building Blocks for Distribution Transparency

How does ML code have to be structured in order to be transparently distributed? Firstly, developers have to follow best practices and, secondly, developers must be aware of the difference between distribution contexts, that is, what characterizes, for example, distributed hyperparameter tuning vs. distributed training.

1. ML Development Best Practices

The ML community has recently developed some best practices, which are already widespread among developers. Taking a look at the new, well-illustrated Keras Guides, you will notice a common approach built on four techniques.

  • Modularize: By modularizing code into reusable functions, these functions become building blocks, making the code pluggable in order to construct different configurations of the model for hyperparameter optimization or ablation.

  • Parametrize: Instead of hardcoding parameters such as the learning rate, regularization penalty or other hyperparameters, developers are encouraged to replace them with variables wherever possible, so there is a single place where they can be changed.

  • Higher-order training functions: instead of using instantiated objects, for example for the training dataset, the input logic related to the data can be encapsulated in a function that is then used by a higher-order function. By doing so, the data input pipeline can also be parametrized. The same holds for the generation of the model, which can be encapsulated in a function returning the model.

  • Usage of callbacks at runtime: In order to be able to intercept and interact with the actual training loop, most ML frameworks, such as TensorFlow and PyTorch, offer the possibility to register callback functions that are invoked by the framework at certain points during training, such as at the end of every epoch or batch. Callback functions enable runtime monitoring of training and can, for example, also be used to stop the training early (important in hyperparameter optimization); a minimal sketch of this idiom follows this list.

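As an illustration of the last idiom, the sketch below (assuming tf.keras; the custom callback and function names are ours, not from the original post) registers the built-in EarlyStopping callback together with a small custom callback that reports the validation accuracy at the end of every epoch:

```python
import tensorflow as tf

class EpochLogger(tf.keras.callbacks.Callback):
    """Toy callback: print the validation accuracy after every epoch."""
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        print(f"epoch {epoch}: val_accuracy={logs.get('val_accuracy')}")

def fit_with_callbacks(model, train_ds, val_ds, epochs=10):
    # Callbacks let the framework hand control back to our code at
    # well-defined points of the training loop (here: end of each epoch),
    # which is also how early stopping is implemented.
    callbacks = [
        tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=2),
        EpochLogger(),
    ]
    return model.fit(train_ds, validation_data=val_ds,
                     epochs=epochs, callbacks=callbacks)
```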

2. Distribution Context

While a single-host environment is self-explanatory, there is a difference between the context of ML experiments, such as hyperparameter optimization or parallel ablation studies, and the distributed training of a single model. Both hyperparameter optimization and parallel ablation studies have weak scaling requirements (also known as embarrassingly parallel), because all workers execute independent pieces of work and have limited communication. For example, hyperparameter tuning involves training independent copies of the model with different hyperparameters or different architectures, in order to find the best performing configuration. Distributed training, however, is strong scaling, as it introduces significant communication and coordination between the workers. As workers are training a single model, they continually exchange gradients, which are computed on independent shards of data (data parallel training). Many distributed training problems, in fact, become (network or disk) I/O bound as they scale. Figure 5 illustrates the three contexts and the step in the model development process that they are applicable to.

Figure 5: Single-host vs. parallel multi-host vs. distributed multi-host context and their applicability to the steps of the process.

Being aware of the different contexts and applying the popular programming idioms above, it becomes apparent what the oblivious training function means in practice. It is no longer the developer herself who instantiates and launches the training function, but the framework, which invokes the training function because it is aware of the current context and takes care of the distribution-related complexities. That means that, for exploration, the framework can be used to fix all parameters. For hyperparameter optimization experiments, the framework takes care of generating potentially good hyperparameter combinations and parameterizing the oblivious training function with them, so they can be launched on different workers. For distributed training, it means setting up the environment for workers to discover each other and wrapping the model code with a distribution strategy.

Figure 6: The oblivious training function as an abstraction allows us to let the system take care of distributed system related complexities.

Putting it all together

Having the building blocks at hand, how do we write the model training code in Maggy? Let us take a look at the latest best-practices MNIST example, which already factors the model configuration, dataset preparation and training logic into functions. Building on this example, we will show the modifications to the code that are needed to construct an oblivious training function in Maggy. It is important to note that all modifications are still vanilla Python code and can, therefore, be run as is in a single-host environment. Let’s start with the boilerplate: the two functions (model and dataset) and the training logic:

1. Model Definition
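A minimal sketch of such a model-definition function, assuming tf.keras on MNIST (the layer choices here are illustrative, not the exact code from the post):

```python
import tensorflow as tf

def get_model():
    # Small CNN for 28x28 grayscale MNIST digits; note the hard-coded
    # kernel size, pooling size and dropout rate.
    return tf.keras.Sequential([
        tf.keras.layers.Reshape((28, 28, 1), input_shape=(28, 28)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(10),
    ])
```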

2. Data set generation
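A corresponding sketch of the dataset-preparation function, assuming tf.data and the built-in MNIST loader:

```python
import tensorflow as tf

def get_dataset(batch_size=128):
    # Load MNIST and wrap it in batched tf.data pipelines.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
                .shuffle(60_000)
                .batch(batch_size))
    test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size)
    return train_ds, test_ds
```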

3. Training logic
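And the training logic that ties the two together, again as an illustrative sketch, with the hyperparameters hard-coded into the training loop:

```python
import tensorflow as tf

def train():
    # Reuses get_model() and get_dataset() from the snippets above.
    model = get_model()
    train_ds, test_ds = get_dataset()
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    model.fit(train_ds, epochs=5)
    loss, accuracy = model.evaluate(test_ds)
    return accuracy
```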

1. Model generation

We are parametrizing the model itself by replacing hyperparameters with arguments.

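For example (an illustrative sketch with hypothetical parameter names), the hard-coded kernel size, pooling size and dropout rate become arguments:

```python
import tensorflow as tf

def get_model(kernel_size=3, pool_size=2, dropout=0.5):
    # Hyperparameters are now arguments instead of constants, so the same
    # function can build many variants of the model.
    return tf.keras.Sequential([
        tf.keras.layers.Reshape((28, 28, 1), input_shape=(28, 28)),
        tf.keras.layers.Conv2D(32, kernel_size, activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size),
        tf.keras.layers.Conv2D(64, kernel_size, activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(10),
    ])
```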

Parametrizing the model definition

2. Dataset generation

The dataset generation function stays unchanged in this case, but similar to the model, this function could be parametrized.

3. Training logic

The training logic is wrapped in a parametrized and pluggable function: the oblivious training function. Again, hyperparameters are passed as arguments to the function. Additionally, the dataset and model generation functions are replaced with arguments, in order to let the system, for example, replace the dataset generator with an alternative one — we use this to drop features for ablation studies. Last but not least, the training function should return its current performance as the metric to be optimized in hyperparameter optimization. This is needed to make Maggy aware of the desired optimization metric.

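Following that description, a sketch of the wrapped function (the signature and names are ours, for illustration):

```python
def training_function(model_function, dataset_function,
                      kernel_size, pool_size, dropout):
    # Import inside the function so it stays self-contained when it is
    # shipped to workers by the framework.
    import tensorflow as tf

    # The model and dataset generators are injected as arguments, so the
    # system can swap them out (e.g. to drop features for an ablation study).
    model = model_function(kernel_size, pool_size, dropout)
    train_ds, test_ds = dataset_function()

    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    model.fit(train_ds, epochs=5)

    # Return the metric that should be optimized during hyperparameter search.
    _, accuracy = model.evaluate(test_ds)
    return accuracy
```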

Adjust Training Logic to be callable with different parameters

Note that up to this point, all modifications are pure Python code and, hence, the training function can still be run in a single host environment by calling it yourself in a Notebook with a fixed set of parameters and by passing the model and dataset generation functions as arguments.

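For instance, running it on a single host with one fixed set of (hypothetical) parameters is just an ordinary function call:

```python
# Plain single-host run: we invoke the oblivious training function ourselves.
accuracy = training_function(
    model_function=get_model,
    dataset_function=get_dataset,
    kernel_size=3,
    pool_size=2,
    dropout=0.4,
)
print(f"test accuracy: {accuracy:.4f}")
```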

Finally, to execute the function in a different distribution context, Maggy is used:

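What follows is a sketch of a hyperparameter-optimization run with Maggy’s Searchspace and lagom API; the keyword arguments mirror the Maggy examples of that time and may differ in newer releases, so treat the exact names as assumptions:

```python
from functools import partial
from maggy import Searchspace, experiment

# Bind the model/dataset generators; Maggy supplies the hyperparameters.
train_fn = partial(training_function, get_model, get_dataset)

# Define the hyperparameters to search over and their ranges.
sp = Searchspace(kernel_size=('INTEGER', [2, 8]),
                 pool_size=('INTEGER', [2, 8]),
                 dropout=('DOUBLE', [0.01, 0.99]))

# lagom launches the training function on the Spark cluster, generating
# trial configurations and collecting the returned metric from each worker.
result = experiment.lagom(train_fn,
                          searchspace=sp,
                          optimizer='randomsearch',
                          direction='max',
                          num_trials=15,
                          name='MNIST-hyperopt')
```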

Maggy requires additional configuration information for hyperparameter optimization, such as a search space definition and the optimization strategy to be used. In the case of distributed training, a distribution strategy is needed, as well as a set of parameters to fix the model to. These parameters can either be taken from previous hyperparameter tuning experiments or entered manually. Lagom is the API to launch the function on a Spark cluster.

Future Work

You can try out Maggy for hyperparameter optimization or ablation studies now on Hopsworks.ai, and keep an eye on Maggy’s GitHub repo for the oblivious training function to be released as a pure Spark version, or wait until the next release of Hopsworks, which will include full support. Maggy is still a project under heavy development, and our mission with Maggy is to provide a new way of writing machine learning applications that reduces the burden on data scientists of becoming distributed systems experts. By following the best practices above, we are able to keep the high-level APIs of frameworks like Keras and PyTorch free of obtrusive distribution code.

Summary

In this blog, we introduced a new feature to an open-source framework, Maggy, that enables write-once training functions that can be reused in single-host Python programs and cluster-scale PySpark programs. Training functions written with Maggy look like best-practice TensorFlow programs where we factor out dependencies using popular programming idioms (such as functions to generate models and data batches). In a single Jupyter notebook, developers can mix vanilla Python code to develop and test models on their laptop with PySpark-specific cells that can be run when a cluster is available using a PySpark kernel, such as Sparkmagic. This way, iterative development of deep learning models now becomes possible, moving from the laptop to the cluster and back again, with DRY code in the training function — as all phases reuse the same training code.

Watch our demo presented at the Spark+AI Summit 2020

Translated from: https://medium.com/@moritzmeister/unifying-single-host-and-distributed-machine-learning-with-maggy-331bba8d2a67

