In the first two parts of this article I obtained and preprocessed Fitbit sleep data, split the data into training, validation and test sets, trained three different Machine Learning models and compared their performance.

In part 2, we saw that using the default hyperparameters for Random Forest and Extreme Gradient Boosting and evaluating model performance on the validation set led to Multiple Linear Regression performing best, with Random Forest and the Extreme Gradient Boosting Regressor performing slightly worse.

In this part of the article I will discuss shortcomings of using only one validation set, how we address those shortcomings and how we can tune model hyperparameters to boost performance. Let’s dive in.

Cross-Validation

Shortcomings of a simple training, validation and test split

In part 2 of this article we split the data into training, validation and test sets, trained our models on the training set and evaluated them on the validation set. We have not touched the test set yet: it is intended as a hold-out set of never-before-seen data, to be used to evaluate how well the Machine Learning models generalise once we feel they are ready for that final test.

Because we only split the data into one set of training data and one set of validation data, the performance metrics of our models are highly reliant on those two sets. The models are trained and evaluated only once, so their measured performance hinges on that single evaluation; they might score very differently if trained and evaluated on different subsets of the same data, simply because of how the subsets happen to be picked.

What if we could perform this split into training and validation sets multiple times, each time on different subsets of the data, train and evaluate our models each time, and look at the average performance across the evaluations? That is exactly the idea behind K-fold Cross-Validation.

K-fold Cross-Validation

In K-fold Cross-Validation (CV) we still start off by separating a test/hold-out set from the rest of the data, to be used for the final evaluation of our models. The remaining data, i.e. everything apart from the test set, is split into K folds (subsets). The Cross-Validation then iterates through the folds, at each iteration using one of the K folds as the validation set while all remaining folds serve as the training set. This process is repeated until every fold has been used as a validation set once. Here is what this process looks like for a 5-fold Cross-Validation:

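To make the fold rotation concrete, here is a minimal sketch using scikit-learn's KFold on a dummy array (illustrative only, not the original article's code):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20)  # dummy data set with 20 samples
kf = KFold(n_splits=5, shuffle=True, random_state=42)

# each iteration holds out one fold for validation and trains on the other four
for i, (train_idx, val_idx) in enumerate(kf.split(X), start=1):
    print(f"Fold {i}: validate on rows {sorted(val_idx)}, train on the remaining {len(train_idx)}")
```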

By training and testing the model K times on different subsets of the same training data we get a more accurate picture of how well it might perform on data it has not seen before. In a K-fold CV we score the model after every iteration and compute the average of all scores, which gives a better representation of the model's performance than a single training and validation set.

K-fold Cross-Validation in Python

Because the Fitbit sleep data set is relatively small, I am going to use 4-fold Cross-Validation and compare the three models used so far: Multiple Linear Regression, Random Forest and Extreme Gradient Boosting Regressor.

Note that a 4-fold CV also compares nicely to the training and validation split from part 2, because we split the data into 75% training and 25% validation data. A 4-fold CV essentially does the same, just four times, using different subsets each time. I have created a function that takes as inputs a list of models we would like to compare, the feature data, the target variable data and the number of folds to create. The function computes the performance measures we used previously and returns a table with the averages for all models, as well as the per-fold scores for each measure in case we would like to investigate further. Here is the function:

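A minimal sketch of what such a cv_comparison() function could look like, built on scikit-learn's cross_validate; the exact metrics and return format are assumptions based on part 2:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_validate

def cv_comparison(models, X, y, cv):
    """Run K-fold CV for each model; return a table of mean scores
    plus the raw per-fold scores for each measure."""
    scoring = {
        "mae": "neg_mean_absolute_error",
        "mse": "neg_mean_squared_error",
        "r2": "r2",
    }
    comparison, maes, mses, rmses, r2s = {}, [], [], [], []
    for model in models:
        res = cross_validate(model, X, y, cv=cv, scoring=scoring)
        # sklearn reports errors as negative scores, so flip the sign back
        mae, mse, r2 = -res["test_mae"], -res["test_mse"], res["test_r2"]
        rmse = np.sqrt(mse)
        comparison[type(model).__name__] = {
            "Mean MAE": round(mae.mean(), 4),
            "Mean MSE": round(mse.mean(), 4),
            "Mean RMSE": round(rmse.mean(), 4),
            "Mean R-squared": round(r2.mean(), 4),
        }
        maes.append(mae); mses.append(mse); rmses.append(rmse); r2s.append(r2)
    return pd.DataFrame(comparison), maes, mses, rmses, r2s
```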

Now we can create a list of models to be used and call the above function with a 4-fold Cross-Validation:

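A sketch of the call, assuming the features and target from part 2 live in X_train and y_train (the variable names are assumptions):

```python
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

mlr_reg = LinearRegression()
rf_reg = RandomForestRegressor(random_state=42)
xgb_reg = XGBRegressor(random_state=42)

models = [mlr_reg, rf_reg, xgb_reg]
comp_df, maes, mses, rmses, r2s = cv_comparison(models, X_train, y_train, 4)
```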

The resulting comparison table looks like this:

Using a 4-fold CV, the Random Forest Regressor outperforms the other two models on all performance measures. But in part 2 we saw that Multiple Linear Regression had the best performance metrics, so why has that changed?

In order to understand why the Cross-Validation results in different scores than the simple training and validation split from part 2, we need to take a closer look at how the models perform on each fold. The cv_comparison() function above also returns a list of the scores of every model on every fold. Let's look at how the R-squared values of the three models compare across folds. To get the results in table format, let's quickly transform them into a DataFrame as well:

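A sketch, assuming the per-fold R-squared arrays returned by cv_comparison() above (row and column labels are illustrative):

```python
import pandas as pd

r2_df = pd.DataFrame(
    r2s,
    index=["Multiple Linear Regression", "Random Forest", "Extreme Gradient Boosting"],
    columns=["1st Fold", "2nd Fold", "3rd Fold", "4th Fold"],
)
r2_df["Average"] = r2_df.mean(axis=1).round(4)
print(r2_df)
```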

The above table makes it clear why the scores obtained from the 4-fold CV differ from those of the training and validation set. The R-squared varies a lot from fold to fold, especially for Extreme Gradient Boosting and Multiple Linear Regression. This also shows why it is so important to use Cross-Validation, especially for small data sets. If you rely on just one training and validation split, your results may be vastly different depending on which split of the data you end up with.

Now that we know what Cross-Validation is and why it is important, let's see if we can get more out of our models by tuning the hyperparameters.

Hyperparameter Tuning

Unlike model parameters, which are learned during model training and cannot be set arbitrarily, hyperparameters are parameters that the user can set before training a Machine Learning model. Examples of hyperparameters in a Random Forest are the number of decision trees in the forest, the maximum number of features to consider at each split, or the maximum depth of the tree.

As I mentioned previously, there is no one-size-fits-all solution to finding optimum hyperparameters. A set of hyperparameters that performs well for one Machine Learning problem may perform poorly on another one. So how do we figure out what the optimal hyperparameters are?

One possible way is to manually tune the hyperparameters using educated guesses as starting points, changing some hyperparameters, training the model, evaluating its performance and repeating these steps until we are happy with the performance. That sounds like an unnecessarily tedious approach, and it is.

Compare hyperparameter tuning to tuning a guitar. You could choose to tune a guitar by ear, which requires a lot of practice and patience and may never lead to an optimal result, especially if you are a beginner. Luckily, there are electric guitar tuners which help you find the correct tones by interpreting the sound waves of your guitar strings and displaying what it reads. You still have to tune the strings using the machine head but the process will be much quicker and the electric tuner ensures your tuning is close to optimal. So what’s the Machine Learning equivalent of an electric guitar tuner?

Randomised Grid Search Cross-Validation

One of the most popular approaches to tuning Machine Learning hyperparameters is Randomised Grid Search Cross-Validation, implemented in scikit-learn as RandomizedSearchCV(). Let's dissect what this means.

In Randomised Grid Search Cross-Validation we start by creating a grid of the hyperparameters we want to optimise, populated with the values we want to try out for each of them. Let's look at an example of a hyperparameter grid for our Random Forest Regressor and how we can set it up:

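A sketch of such a grid; the value ranges below are plausible assumptions rather than the original numbers, so they will not multiply out to exactly the 27,216 combinations quoted later:

```python
import numpy as np

# illustrative value ranges -- assumptions, not the article's exact grid
n_estimators = [int(x) for x in np.linspace(200, 2000, 10)]      # number of trees
max_features = ["sqrt", "log2", None]                            # features considered per split
max_depth = [int(x) for x in np.linspace(10, 110, 11)] + [None]  # maximum tree depth
min_samples_split = [2, 5, 10]
min_samples_leaf = [1, 2, 4]
bootstrap = [True, False]

random_grid = {
    "n_estimators": n_estimators,
    "max_features": max_features,
    "max_depth": max_depth,
    "min_samples_split": min_samples_split,
    "min_samples_leaf": min_samples_leaf,
    "bootstrap": bootstrap,
}
```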

First, we create a list of possible values for each hyperparameter we want to tune, and then we set up the grid as a dictionary of key-value pairs, as shown above. To find and understand the hyperparameters of a Machine Learning model, check the model's official documentation; see the one for the Random Forest Regressor here.

The resulting grid looks like this:

As the name suggests, Randomised Grid Search Cross-Validation uses Cross-Validation to evaluate model performance. Random Search means that, instead of trying out all possible combinations of hyperparameters (which would be 27,216 combinations in our example), the algorithm randomly chooses a value for each hyperparameter from the grid and evaluates the model using that random combination.

Trying out all possible combinations would be computationally expensive and take a long time. Choosing hyperparameters at random speeds up the process significantly and often yields a solution nearly as good as an exhaustive search. Let's see how Randomised Grid Search Cross-Validation is used.

Hyperparameter Tuning for Random Forest

Using the previously created grid, we can find the best hyperparameters for our Random Forest Regressor. I will use a 3-fold CV, because the data set is relatively small, and run 200 random combinations. In total, the Randomised Grid Search CV will therefore train and evaluate 600 models (3 folds for each of 200 combinations). Because Random Forests tend to be slow to compute compared to other Machine Learning models such as Extreme Gradient Boosting, running this many models takes a few minutes. Once the process is complete we can obtain the best hyperparameters.

Here is how to use RandomizedSearchCV():

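A minimal sketch, assuming the grid above is stored in random_grid and the training data in X_train and y_train:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rf = RandomForestRegressor(random_state=42)

rf_random = RandomizedSearchCV(
    estimator=rf,
    param_distributions=random_grid,
    n_iter=200,        # 200 random combinations
    cv=3,              # 3-fold CV, i.e. 600 fits in total
    verbose=2,
    random_state=42,
    n_jobs=-1,         # use all available cores
)
rf_random.fit(X_train, y_train)

print(rf_random.best_params_)  # the best hyperparameters found
```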

We will use these hyperparameters in our final model, which we test on the test set.

Hyperparameter Tuning for Extreme Gradient Boosting

For our Extreme Gradient Boosting Regressor the process is essentially the same as for the Random Forest. Some of the hyperparameters that we try to optimise are the same and some are different, due to the nature of the model. You can find the full list and explanations of the hyperparameters for XGBRegressor here. Once again, we create the grid:

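Again a sketch; the hyperparameters and value ranges below are assumptions, so consult the XGBRegressor documentation for the authoritative list:

```python
import numpy as np

# illustrative ranges -- assumptions, not the article's exact values
xgb_grid = {
    "n_estimators": [int(x) for x in np.linspace(200, 2000, 10)],
    "max_depth": [int(x) for x in np.linspace(2, 20, 10)],
    "learning_rate": [0.05, 0.1, 0.2, 0.3, 0.5],
    "min_child_weight": [1, 2, 3, 4],
    "colsample_bytree": [0.6, 0.8, 1.0],
    "subsample": [0.6, 0.8, 1.0],
}
```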

The resulting grid looks like this:

In order to make the performance evaluations comparable I will use a 3-fold CV with 200 combinations for Extreme Gradient Boosting as well:

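The search mirrors the Random Forest one, under the same assumptions about variable names:

```python
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBRegressor

xgb_random = RandomizedSearchCV(
    estimator=XGBRegressor(random_state=42),
    param_distributions=xgb_grid,
    n_iter=200,
    cv=3,
    verbose=2,
    random_state=42,
    n_jobs=-1,
)
xgb_random.fit(X_train, y_train)

print(xgb_random.best_params_)
```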

The optimal hyperparameters are the following:

Again, these will be used in the final model.

Although it might appear obvious to some people, I just want to mention it here: the reason we do not do hyperparameter optimisation for Multiple Linear Regression is that there are no hyperparameters to tweak; it is simply a Multiple Linear Regression.

Now that we have obtained the optimal hyperparameters (at least in terms of our Cross-Validation), we can finally evaluate our models on the test data that we have been holding out since the very beginning of this analysis!

Final model evaluation

After evaluating the performance of our Machine Learning models and finding optimal hyperparameters it is time to put the models to their final test — the all-mighty hold-out set.

In order to do so, we train the models on the entire 80% of the data that we used for all of our evaluations so far, i.e. everything apart from the test set. We use the hyperparameters that we found in the previous part and then compare how our models perform on the test set.

Let’s create and train our models:

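A sketch, plugging the tuned hyperparameters from rf_random and xgb_random into fresh models and fitting them on everything apart from the test set:

```python
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

mlr_final = LinearRegression()
rf_final = RandomForestRegressor(**rf_random.best_params_, random_state=42)
xgb_final = XGBRegressor(**xgb_random.best_params_, random_state=42)

for model in (mlr_final, rf_final, xgb_final):
    model.fit(X_train, y_train)  # all the data used so far, i.e. everything but the test set
```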

I defined a function that scores all of the final models and creates a DataFrame that makes the comparison easy:

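A sketch of such a scoring function, using the same measures as before; the accuracy measure is assumed to be 100 minus the mean absolute percentage error, matching the interpretation given below:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def final_comparison(models, X_test, y_test):
    """Score each fitted model on the hold-out set."""
    scores = pd.DataFrame()
    for model in models:
        preds = model.predict(X_test)
        mae = mean_absolute_error(y_test, preds)
        mse = mean_squared_error(y_test, preds)
        # accuracy as 100 - mean absolute percentage error (assumption)
        accuracy = 100 - np.mean(100 * np.abs(y_test - preds) / y_test)
        scores[type(model).__name__] = [
            round(mae, 4),
            round(mse, 4),
            round(np.sqrt(mse), 4),
            round(r2_score(y_test, preds), 4),
            round(accuracy, 2),
        ]
    scores.index = ["MAE", "MSE", "RMSE", "R-squared", "Accuracy (%)"]
    return scores
```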

Calling that function with our three final models and adjusting the column headers results in the following final evaluation:

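The call, with the column headers renamed for readability (names assumed):

```python
final_scores = final_comparison([mlr_final, rf_final, xgb_final], X_test, y_test)
final_scores.columns = ["Linear Regression", "Random Forest", "Extreme Gradient Boosting"]
print(final_scores)
```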

And the winner is: Random Forest Regressor!

The Random Forest achieves an R-squared of 80% and an accuracy of 97.6% on the test set, meaning its predictions were only off by about 2.4% on average. Not bad!

The performance of the Multiple Linear Regression is not far behind, but the Extreme Gradient Boosting failed to live up to its hype in this analysis.

Concluding comments

The process of coming up with this whole analysis and actually conducting it was a lot of fun. I have been trying to figure out how Fitbit computes Sleep Scores for a while now and am glad to understand it a bit better. On top of that, I managed to build a Machine Learning model that can predict Sleep Scores with great accuracy. That being said, there are a few things I want to highlight:

  1. As I mentioned in part 2, the interpretation of the coefficients of the Multiple Linear Regression may not be accurate because there are high levels of multicollinearity between features.

  2. The data set that I used for this analysis is rather small, as it relies on 286 data points obtained from Fitbit. This limits the generalisability of the results, and a much bigger data set would be needed to train more robust models.
  3. This analysis uses Fitbit sleep data of only one person and therefore may not generalise well to other people with different sleep patterns, heart rates, etc.

I hope you enjoyed this thorough analysis of how to use Machine Learning to predict Fitbit Sleep Scores and learned something about the importance of different sleep stages as well as the time spent asleep along the way.

I highly appreciate constructive feedback and you can reach out to me on LinkedIn any time.

Thanks for reading!

Translated from: https://towardsdatascience.com/cross-validation-and-hyperparameter-tuning-how-to-optimise-your-machine-learning-model-13f005af9d7d

