10 Hyperparameter Optimization Frameworks

Tune your Machine Learning models with open-source optimization libraries


Introduction

Hyper-parameters are parameters that control the behavior of the algorithm while building the model. They cannot be learned from the regular training process and need to be assigned before the model is trained.

Example: n_neighbors (KNN), kernel (SVC), max_depth & criterion (Decision Tree Classifier), etc.
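For instance, a setting like n_neighbors is fixed when the estimator is constructed; fit() only learns the model from the data and never touches it:

from sklearn.neighbors import KNeighborsClassifier

# n_neighbors is a hyper-parameter: we choose it before training,
# and calling fit() later will not change it
knn = KNeighborsClassifier(n_neighbors=5)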

Hyperparameter optimization, or tuning, in machine learning is the process of selecting the combination of hyper-parameters that delivers the best performance.

Various automatic optimization techniques exist, and each has its own strengths and drawbacks when applied to different types of problems.


Examples: Grid Search, Random Search, Bayesian Search, etc.
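For reference, a minimal grid search with scikit-learn's GridSearchCV might look like this (the KNN estimator and parameter grid below are illustrative choices, not from the original article):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# grid search exhaustively evaluates every combination with cross-validation
param_grid = {"n_neighbors": list(range(2, 11)),
              "weights": ["uniform", "distance"]}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(X, y)

print(grid.best_params_, grid.best_score_)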

Scikit-learn is one of the frameworks we could use for hyperparameter optimization, but there are other frameworks that can perform even better:

  1. Ray-Tune
  2. Optuna
  3. Hyperopt
  4. mlmachine
  5. Polyaxon
  6. BayesianOptimization
  7. Talos
  8. SHERPA
  9. Scikit-Optimize
  10. GPyOpt

1. Ray-Tune

Tune is a Python library for experiment execution and hyperparameter tuning at any scale. [GitHub]

Key Features

  1. Launch a multi-node distributed hyperparameter sweep in less than ten lines of code.

  2. Supports any machine learning framework, including PyTorch, XGBoost, MXNet, and Keras.

  3. Choose among state-of-the-art algorithms such as Population Based Training (PBT), BayesOptSearch, and HyperBand/ASHA.

  4. Tune’s Search Algorithms are wrappers around open-source optimization libraries such as HyperOpt, SigOpt, Dragonfly, and Facebook Ax.

  5. Automatically visualize results with TensorBoard.

Tune for Scikit-Learn

Installation: pip install ray[tune] tune-sklearn

# from sklearn.model_selection import GridSearchCV
from ray.tune.sklearn import TuneGridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data
y = iris.target

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=14)

# example parameters to tune from SGDClassifier
parameter_grid = {"alpha": [1e-4, 1e-1, 1], "epsilon": [0.01, 0.1]}

tune_search = TuneGridSearchCV(SGDClassifier(), parameter_grid,
                               early_stopping=True, max_iters=10)
tune_search.fit(x_train, y_train)

# best set of parameters
print(tune_search.best_params_)
# best score with the best set of parameters
print(tune_search.best_score_)

2. Optuna

Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning.


Key Features

  1. Easy parallelization
  2. Quick visualization
  3. Efficient optimization algorithms
  4. Lightweight, versatile, and platform-agnostic architecture
  5. Pythonic search spaces

Installation: pip install optuna

import optuna
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=14)

def objective(trial):
    algorithm = trial.suggest_categorical('algorithm', ['auto', 'ball_tree', 'kd_tree', 'brute'])
    n_neighbors = trial.suggest_int("k_n_neighbors", 2, 10, log=True)
    knn = KNeighborsClassifier(n_neighbors=n_neighbors, algorithm=algorithm)
    score = cross_val_score(knn, X_train, y_train, n_jobs=-1, cv=3)
    return score.mean()

if __name__ == "__main__":
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=10)
    print(study.best_trial)
    # best parameter combination
    print(study.best_params)
    # score achieved with the best parameter combination
    print(study.best_value)

3. Hyperopt

Hyperopt is a Python library for serial and parallel optimization over awkward search spaces, which may include real-valued, discrete, and conditional dimensions.


Hyperopt currently supports three algorithms:

  • Random Search
  • Tree of Parzen Estimators (TPE)
  • Adaptive TPE

Key Features

  1. Search space (you can create very complex parameter spaces)
  2. Persisting and restarting (you can save important information, then later load it and resume the optimization process)
  3. Speed and parallelization (you can distribute your computation over a cluster of machines)

Installation: pip install hyperopt

from hyperopt import fmin, tpe, hp, STATUS_OK, Trials, space_eval
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data
y = iris.target

def hyperopt_train_test(params):
    clf = RandomForestClassifier(**params)
    return cross_val_score(clf, X, y).mean()

space = {
    'max_depth': hp.choice('max_depth', range(1, 20)),
    'max_features': hp.choice('max_features', range(1, 5)),
    'n_estimators': hp.choice('n_estimators', range(1, 20)),
    'criterion': hp.choice('criterion', ["gini", "entropy"]),
}

best_acc = 0

def f(params):
    global best_acc
    acc = hyperopt_train_test(params)
    if acc > best_acc:
        best_acc = acc
        print('new best:', best_acc, params)
    # hyperopt minimizes the loss, so return the negated accuracy
    return {'loss': -acc, 'status': STATUS_OK}

trials = Trials()
best = fmin(f, space, algo=tpe.suggest, max_evals=300, trials=trials)
# fmin returns choice indices; space_eval maps them back to actual values
print(space_eval(space, best))

4. mlmachine

mlmachine is a Python package that facilitates clean and organized notebook-based machine learning experimentation and accomplishes many key aspects of the experimentation life cycle.


mlmachine performs Hyperparameter Tuning with Bayesian Optimization on multiple estimators in one shot and includes functionality for visualizing model performance and parameter selections.


A well-explained article on mlmachine covers these features in depth.

Installation: pip install mlmachine

5. Polyaxon

Polyaxon is a platform for building, training, and monitoring large-scale deep learning applications. It provides a system for solving reproducibility, automation, and scalability in machine learning applications.

Polyaxon performs hyperparameter tuning by providing a selection of customizable search algorithms. It supports simple approaches such as random search and grid search, and provides a simple interface for advanced approaches such as Hyperband and Bayesian optimization. It also integrates with tools such as Hyperopt and offers an interface for running custom iterative processes. All these search algorithms run asynchronously, and support concurrency and routing to leverage your cluster's resources to the maximum.

Key Features

  1. Easy to use: Polyaxon's Optimization Engine is a built-in service and can be used easily by adding a matrix section to your operations; you can run hyperparameter tuning using the CLI, the client, and the dashboard.

  2. Scalability: tuning hyperparameters or neural architectures requires a large amount of computation resources; with Polyaxon you can run hundreds of trials in parallel and intuitively track their progress.

  3. Flexibility: besides the rich built-in algorithms, Polyaxon allows users to customize various hyperparameter tuning algorithms, neural architecture search algorithms, early-stopping algorithms, etc.

  4. Efficiency: the project is intensively working on more efficient model tuning at both the system level and the algorithm level, for example by leveraging early feedback to speed up the tuning procedure.

Installation: pip install -U polyaxon

6. Bayesian Optimization

Bayesian Optimization is another framework, a pure Python implementation of Bayesian global optimization with Gaussian processes. It is a constrained global optimization package built upon Bayesian inference and Gaussian processes that attempts to find the maximum value of an unknown function in as few iterations as possible. The technique is particularly suited to optimizing high-cost functions, and to situations where the balance between exploration and exploitation is important.

Installation: pip install bayesian-optimization
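Below is a minimal sketch of how the library is typically used, assuming an SVC on the iris data as the black-box function to maximize (the bounds and iteration counts are illustrative choices, not from the original article):

from bayes_opt import BayesianOptimization
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# the black-box function: maps hyper-parameters to mean cross-validation accuracy
def svc_cv(C, gamma):
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

# pbounds defines a continuous, bounded search space for each hyper-parameter
optimizer = BayesianOptimization(
    f=svc_cv,
    pbounds={'C': (0.1, 100), 'gamma': (0.0001, 1)},
    random_state=14)

# a few random probes first, then Bayesian-guided iterations
optimizer.maximize(init_points=5, n_iter=20)

print(optimizer.max)  # best score and the parameters that produced it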

7. Talos

Talos radically changes the ordinary Keras workflow by fully automating hyperparameter tuning and model evaluation. Talos exposes Keras functionality entirely and there is no new syntax or templates to learn.


Key Features

  1. Single-line optimize-to-predict pipeline: talos.Scan(x, y, model, params).predict(x_test, y_test)
  2. Automated hyperparameter optimization
  3. Model generalization evaluator
  4. Experiment analytics
  5. Pseudo, quasi, and quantum random search options
  6. Grid search
  7. Probabilistic optimizers
  8. Single-file custom optimization strategies

Installation: pip install talos
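Following the one-line Scan pipeline above, here is a rough sketch of a Talos experiment on iris; the network architecture, parameter grid, and experiment name are illustrative assumptions, not from the original article:

import talos
from sklearn.datasets import load_iris
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

iris = load_iris()
x, y = iris.data, to_categorical(iris.target)

# the model function receives the params dict and returns (history, model)
def iris_model(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    model.add(Dense(params['first_neuron'], input_dim=4,
                    activation=params['activation']))
    model.add(Dense(3, activation='softmax'))
    model.compile(optimizer=params['optimizer'],
                  loss='categorical_crossentropy', metrics=['accuracy'])
    history = model.fit(x_train, y_train,
                        validation_data=(x_val, y_val),
                        batch_size=params['batch_size'],
                        epochs=params['epochs'], verbose=0)
    return history, model

# each value is a list of candidates; Talos expands them into experiments
p = {'first_neuron': [8, 16, 32],        # hypothetical grid for illustration
     'activation': ['relu', 'tanh'],
     'optimizer': ['adam'],
     'batch_size': [16, 32],
     'epochs': [50]}

scan = talos.Scan(x=x, y=y, params=p, model=iris_model,
                  experiment_name='iris_tuning')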

8. SHERPA

SHERPA is a Python library for hyperparameter tuning of machine learning models.


It provides:

  1. hyperparameter optimization for machine learning researchers
  2. a choice of hyperparameter optimization algorithms
  3. parallel computation that can be fitted to the user's needs
  4. a live dashboard for the exploratory analysis of results

Installation: pip install parameter-sherpa

import sherpa
import sherpa.algorithms.bayesian_optimization as bayesian_optimization
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

parameters = [sherpa.Discrete('n_estimators', [2, 50]),
              sherpa.Choice('criterion', ['gini', 'entropy']),
              sherpa.Continuous('max_features', [0.1, 0.9])]

algorithm = bayesian_optimization.GPyOpt(max_concurrent=1,
                                         model_type='GP_MCMC',
                                         acquisition_type='EI_MCMC',
                                         max_num_trials=10)

X, y = load_breast_cancer(return_X_y=True)
study = sherpa.Study(parameters=parameters,
                     algorithm=algorithm,
                     lower_is_better=False)

for trial in study:
    print("Trial ", trial.id, " with parameters ", trial.parameters)
    clf = RandomForestClassifier(criterion=trial.parameters['criterion'],
                                 max_features=trial.parameters['max_features'],
                                 n_estimators=trial.parameters['n_estimators'],
                                 random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("Score: ", scores.mean())
    study.add_observation(trial, iteration=1, objective=scores.mean())
    study.finalize(trial)

print(study.get_best_result())

9. Scikit-Optimize

Scikit-Optimize, or skopt, is a simple and efficient library to minimize (very) expensive and noisy black-box functions. It implements several methods for sequential model-based optimization. skopt aims to be accessible and easy to use in many contexts. Scikit-Optimize provides support for tuning the hyperparameters of ML algorithms offered by the scikit-learn library, so-called hyperparameter optimization.


The library is built on top of NumPy, SciPy and Scikit-Learn.


Installation: pip install scikit-optimize

from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer  # parameter ranges can also be specified with these
import warnings
warnings.filterwarnings("ignore")

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# the original snippet assumed X_train/y_train already existed;
# iris is loaded here to keep the example self-contained
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target,
                                                    test_size=0.3, random_state=14)

knn = KNeighborsClassifier()

# defining the hyper-parameter grid
grid_param = {'n_neighbors': list(range(2, 11)),
              'algorithm': ['auto', 'ball_tree', 'kd_tree', 'brute']}

# initializing Bayesian Search
Bayes = BayesSearchCV(knn, grid_param, n_iter=30, random_state=14)
Bayes.fit(X_train, y_train)

# best parameter combination
print(Bayes.best_params_)
# score achieved with the best parameter combination
print(Bayes.best_score_)
# all combinations of hyperparameters
print(Bayes.cv_results_['params'])
# average scores of cross-validation
print(Bayes.cv_results_['mean_test_score'])

10. GPyOpt

GPyOpt is a tool for optimization (minimization) of black-box functions using Gaussian processes. It has been implemented in Python by the Machine Learning group (at SITraN) of the University of Sheffield. GPyOpt is based on GPy, a library for Gaussian process modeling in Python. It can handle large data sets via sparse Gaussian process models.

Key Features

  1. Bayesian optimization with arbitrary restrictions
  2. Parallel Bayesian optimization
  3. Mixing different types of variables
  4. Tuning scikit-learn models
  5. Integrating the model hyper-parameters
  6. External objective evaluation

Installation: pip install gpyopt

import GPyOpt
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

iris = load_iris()
X = iris.data
y = iris.target

# search space: for a 'discrete' variable GPyOpt expects the set of admissible values
bds = [{'name': 'learning_rate', 'type': 'continuous', 'domain': (0, 1)},
       {'name': 'gamma', 'type': 'continuous', 'domain': (0, 5)},
       {'name': 'max_depth', 'type': 'discrete', 'domain': tuple(range(1, 51))}]

# optimization objective: GPyOpt minimizes, so return the (positive) mean squared error
def cv_score(parameters):
    parameters = parameters[0]
    score = cross_val_score(
        XGBRegressor(learning_rate=parameters[0],
                     gamma=parameters[1],
                     max_depth=int(parameters[2])),
        X, y, scoring='neg_mean_squared_error').mean()
    return -score

optimizer = GPyOpt.methods.BayesianOptimization(
    f=cv_score,                # function to optimize
    domain=bds,                # box-constraints of the problem
    acquisition_type='LCB',    # LCB acquisition
    acquisition_weight=0.1)    # exploration/exploitation trade-off

optimizer.run_optimization(max_iter=20)

# the row of optimizer.X with the smallest objective value is the best setting
x_best = optimizer.X[np.argmin(optimizer.Y)]
print("Best parameters: learning_rate=" + str(x_best[0]) +
      ", gamma=" + str(x_best[1]) + ", max_depth=" + str(x_best[2]))

Thank you for reading!

Any feedback and comments are greatly appreciated!

You may also find some of my other posts interesting.

Translated from: https://towardsdatascience.com/10-hyperparameter-optimization-frameworks-8bc87bc8b7e3

