How to Build and Train Linear and Logistic Regression ML Models in Python

Linear regression and logistic regression are two of the most popular machine learning models today.

In the last article, you learned about the history and theory behind a linear regression machine learning algorithm.

This tutorial will teach you how to create, train, and test your first linear regression machine learning model in Python using the scikit-learn library.

Section 1: Linear Regression

The Data Set We Will Use in This Tutorial

Since we're just starting to learn about linear regression in machine learning, we will work with artificially-created datasets in this tutorial. This will allow you to focus on learning the machine learning concepts and avoid spending unnecessary time on cleaning or manipulating data.

More specifically, we will be working with a data set of housing data and attempting to predict housing prices. Before we build the model, we’ll first need to import the required libraries.

The Libraries We Will Use in This Tutorial

The first library that we need to import is pandas, which is a portmanteau of “panel data” and is the most popular Python library for working with tabular data.

It is convention to import pandas under the alias pd. You can import pandas with the following statement:

import pandas as pd

Next, we’ll need to import NumPy, which is a popular library for numerical computing. NumPy is known for its NumPy array data structure as well as its useful methods reshape, arange, and append.

It is convention to import NumPy under the alias np. You can import NumPy with the following statement:

import numpy as np

Next, we need to import matplotlib, which is Python’s most popular library for data visualization.

matplotlib is typically imported under the alias plt. You can import matplotlib with the following statement:

import matplotlib.pyplot as plt
%matplotlib inline

The %matplotlib inline statement will cause all of our matplotlib visualizations to embed themselves directly in our Jupyter Notebook, which makes them easier to access and interpret.

Lastly, you will want to import seaborn, which is another Python data visualization library that makes it easier to create beautiful visualizations using matplotlib.

You can import seaborn with the following statement:

import seaborn as sns

To summarize, here are all of the imports required in this tutorial:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns

In future articles, I will specify which imports are necessary but I will not explain each import in detail like I did here.

Importing the Data Set

As mentioned, we will be using a data set of housing information.

The data set has been uploaded to my website as a .csv file at the following URL:

https://nickmccullum.com/files/Housing_Data.csv

To import the data set into your Jupyter Notebook, the first thing you should do is download the file by copying and pasting this URL into your browser. Then, move the file into the same directory as your Jupyter Notebook.

Once this is done, the following Python statement will import the housing data set into your Jupyter Notebook:

raw_data = pd.read_csv('Housing_Data.csv')

This data set has a number of features, including:

  • The average income in the area of the house
  • The average number of total rooms in the area
  • The price that the house sold for
  • The address of the house

This data is randomly generated, so you will see a few nuances that might not normally make sense (such as a large number of decimal places after a number that should be an integer).

Understanding the Data Set

Now that the data set has been imported under the raw_data variable, you can use the info method to get some high-level information about the data set. Specifically, running raw_data.info() gives:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 7 columns):
Avg. Area Income                5000 non-null float64
Avg. Area House Age             5000 non-null float64
Avg. Area Number of Rooms       5000 non-null float64
Avg. Area Number of Bedrooms    5000 non-null float64
Area Population                 5000 non-null float64
Price                           5000 non-null float64
Address                         5000 non-null object
dtypes: float64(6), object(1)
memory usage: 273.6+ KB

Another useful way that you can learn about this data set is by generating a pairplot. You can use the seaborn method pairplot for this, and pass in the entire DataFrame as a parameter. Here is the entire statement for this:

sns.pairplot(raw_data)

The output of this statement is below:

Next, let’s begin building our linear regression model.

Building a Machine Learning Linear Regression Model

The first thing we need to do is split our data into an x-array (which contains the data that we will use to make predictions) and a y-array (which contains the data that we are trying to predict).

First, we should decide which columns to include. You can generate a list of the DataFrame’s columns using raw_data.columns, which outputs:

Index(['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
       'Avg. Area Number of Bedrooms', 'Area Population', 'Price', 'Address'],
      dtype='object')

We will be using all of these variables in the x-array except for Price (since that’s the variable we’re trying to predict) and Address (since it only contains text).

Let’s create our x-array and assign it to a variable called x.

x = raw_data[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
              'Avg. Area Number of Bedrooms', 'Area Population']]

Next, let’s create our y-array and assign it to a variable called y.

y = raw_data['Price']

We have successfully divided our data set into an x-array (which contains the input values of our model) and a y-array (which contains the output values of our model). We’ll learn how to split our data set further into training data and test data in the next section.

Splitting our Data Set into Training Data and Test Data

scikit-learn makes it very easy to divide our data set into training data and test data. To do this, we’ll need to import the function train_test_split from the model_selection module of scikit-learn.

Here is the full code to do this:

from sklearn.model_selection import train_test_split

The train_test_split function accepts three arguments:

  • Our x-array
  • Our y-array
  • The desired size of our test data

With these parameters, the train_test_split function will split our data for us! Here’s the code to do this if we want our test data to be 30% of the entire data set:

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3)

Let’s unpack what is happening here.

The train_test_split function returns a Python list of length 4, where each item in the list is x_train, x_test, y_train, and y_test, respectively. We then use list unpacking to assign the proper values to the correct variable names.
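
The exact rows that land in each split are chosen randomly, so your results will differ slightly from the outputs shown below. If you want a reproducible split, scikit-learn's train_test_split accepts an optional random_state parameter; here is a minimal sketch (the value 42 is an arbitrary seed):

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 42)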

Now that we have properly divided our data set, it is time to build and train our linear regression machine learning model.

Building and Training the Model

The first thing we need to do is import the LinearRegression estimator from scikit-learn. Here is the Python statement for this:

from sklearn.linear_model import LinearRegression

Next, we need to create an instance of the LinearRegression Python object. We will assign this to a variable called model. Here is the code for this:

model = LinearRegression()

We can use scikit-learn’s fit method to train this model on our training data.

model.fit(x_train, y_train)

Our model has now been trained. You can examine each of the model’s coefficients using the following statement:

print(model.coef_)

This prints:

[2.16176350e+01 1.65221120e+05 1.21405377e+05 1.31871878e+03
 1.52251955e+01]

Similarly, here is how you can see the intercept of the regression equation:

print(model.intercept_)

This prints:

-2641372.6673013503

A nicer way to view the coefficients is by placing them in a DataFrame. This can be done with the following statement:

pd.DataFrame(model.coef_, x.columns, columns = ['Coeff'])

The output in this case is much easier to interpret:

Let’s take a moment to understand what these coefficients mean. Let’s look at the Area Population variable specifically, which has a coefficient of approximately 15.

What this means is that if you hold all other variables constant, then a one-unit increase in Area Population will result in a 15-unit increase in the predicted variable - in this case, Price.

Said differently, large coefficients on a specific variable mean that that variable has a large impact on the value of the variable you’re trying to predict. Similarly, small values have small impact.
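
To make this concrete, here is a small sketch of that arithmetic (the coefficient of roughly 15 comes from the output above; your exact value will differ because the train/test split is random):

# Area Population is the fifth column of x, so its coefficient is at index 4
population_coefficient = model.coef_[4]

# Holding everything else constant, 1,000 extra residents shifts the predicted Price
# by approximately coefficient * 1,000 (roughly 15 * 1,000 = 15,000 here)
print(population_coefficient * 1000)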

Now that we’ve generated our first machine learning linear regression model, it’s time to use the model to make predictions from our test data set.

Making Predictions From Our Model

scikit-learn makes it very easy to make predictions from a machine learning model. You simply need to call the predict method on the model variable that we created earlier.

Since the predict method is designed to make predictions, it only accepts an x-array parameter. It will generate the y values for you!

Here is the code you’ll need to generate predictions from our model using the predict method:

predictions = model.predict(x_test)

The predictions variable holds the predicted values for the observations stored in x_test. Since we used the train_test_split method to store the real values in y_test, what we want to do next is compare the values of the predictions array with the values of y_test.

An easy way to do this is to plot the two arrays using a scatterplot. It’s easy to build matplotlib scatterplots using the plt.scatter method. Here’s the code for this:

plt.scatter(y_test, predictions)

Here’s the scatterplot that this code generates:

As you can see, our predicted values are very close to the actual values for the observations in the data set. A perfectly straight diagonal line in this scatterplot would indicate that our model perfectly predicted the y-array values.
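
To make that comparison easier to see, you can overlay the y = x reference line on the scatterplot. This is a minimal sketch using standard matplotlib calls:

plt.scatter(y_test, predictions)

# Points that fall exactly on this line are perfect predictions
limits = [y_test.min(), y_test.max()]
plt.plot(limits, limits, color='red')
plt.xlabel('Actual Price')
plt.ylabel('Predicted Price')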

Another way to visually assess the performance of our model is to plot its residuals, which are the difference between the actual y-array values and the predicted y-array values.

An easy way to do this is with the following statement:

plt.hist(y_test - predictions)

Here is the visualization that this code generates:

This is a histogram of the residuals from our machine learning model.

You may notice that the residuals from our machine learning model appear to be normally distributed. This is a very good sign!

It indicates that we have selected an appropriate model type (in this case, linear regression) to make predictions from our data set. We will learn more about how to make sure you’re using the right model later in this course.
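
Beyond eyeballing the histogram, a quick numeric sanity check is to summarize the residuals directly; a mean near zero (relative to the scale of Price) is what you would hope to see. This sketch uses only pandas functionality:

residuals = y_test - predictions
print(residuals.describe())  # the mean should be small relative to the typical Price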

Testing the Performance of our Model

We learned near the beginning of this course that there are three main performance metrics used for regression machine learning models:

  • Mean absolute error
  • Mean squared error
  • Root mean squared error

We will now see how to calculate each of these metrics for the model we’ve built in this tutorial. Before proceeding, run the following import statement within your Jupyter Notebook:

from sklearn import metrics

Mean Absolute Error (MAE)

You can calculate mean absolute error in Python with the following statement:

metrics.mean_absolute_error(y_test, predictions)

Mean Squared Error (MSE)

Similarly, you can calculate mean squared error in Python with the following statement:

metrics.mean_squared_error(y_test, predictions)

Root Mean Squared Error (RMSE)

Unlike mean absolute error and mean squared error, scikit-learn does not actually have a built-in method for calculating root mean squared error.

Fortunately, it doesn’t really need one. Since root mean squared error is just the square root of mean squared error, you can use NumPy’s sqrt method to easily calculate it:

np.sqrt(metrics.mean_squared_error(y_test, predictions))

The Complete Code For This Tutorial

Here is the entire code for this Python linear regression machine learning tutorial. You can also view it in this GitHub repository.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

raw_data = pd.read_csv('Housing_Data.csv')

x = raw_data[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
              'Avg. Area Number of Bedrooms', 'Area Population']]
y = raw_data['Price']

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3)

from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x_train, y_train)

print(model.coef_)
print(model.intercept_)
pd.DataFrame(model.coef_, x.columns, columns = ['Coeff'])

predictions = model.predict(x_test)
# plt.scatter(y_test, predictions)
plt.hist(y_test - predictions)

from sklearn import metrics
metrics.mean_absolute_error(y_test, predictions)
metrics.mean_squared_error(y_test, predictions)
np.sqrt(metrics.mean_squared_error(y_test, predictions))

Section 2: Logistic Regression

Note - if you have been coding along with this tutorial so far and built your linear regression model already, you'll want to open a new Jupyter Notebook (with no code in it) before proceeding.

The Data Set We Will Be Using in This Tutorial

The Titanic data set is a very famous data set that contains characteristics about the passengers on the Titanic. It is often used as an introductory data set for logistic regression problems.

In this tutorial, we will be using the Titanic data set combined with a Python logistic regression model to predict whether or not a passenger survived the Titanic crash.

The original Titanic data set is publicly available on Kaggle.com, which is a website that hosts data sets and data science competitions.

To make things easier for you as a student in this course, we will be using a semi-cleaned version of the Titanic data set, which will save you time on data cleaning and manipulation.

The cleaned Titanic data set has actually already been made available for you. You can download the data file by clicking the links below:

  • Titanic data

Once this file has been downloaded, open a Jupyter Notebook in the same working directory and we can begin building our logistic regression model.

The Imports We Will Be Using in This Tutorial

As before, we will be using multiple open-source software libraries in this tutorial. Here are the imports you will need to run to follow along as I code through our Python logistic regression model:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns

Next, we will need to import the Titanic data set into our Python script.

Importing the Data Set into our Python Script

We will be using pandas’ read_csv method to import our csv file into a pandas DataFrame called titanic_data.

Here is the code to do this:

titanic_data = pd.read_csv('titanic_train.csv')

Next, let’s investigate what data is actually included in the Titanic data set. There are two main methods to do this (using the titanic_data DataFrame specifically):

  • The titanic_data.head(5) method will print the first 5 rows of the DataFrame. You can substitute 5 with whichever number you’d like.
  • You can also print titanic_data.columns, which will show you the column names. Both commands are shown together below.
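
For reference, here are the two commands together:

titanic_data.head(5)         # preview the first 5 rows of the DataFrame
print(titanic_data.columns)  # list the column names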

Running the second command (titanic_data.columns) generates the following output:

Index(['PassengerId', 'Survived', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp',
       'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked'],
      dtype='object')

These are the names of the columns in the DataFrame. Here are brief explanations of each data point:

  • PassengerId: a numerical identifier for every passenger on the Titanic.
  • Survived: a binary identifier that indicates whether or not the passenger survived the Titanic crash. This variable will hold a value of 1 if they survived and 0 if they did not.
  • Pclass: the passenger class of the passenger in question. This can hold a value of 1, 2, or 3, depending on where the passenger was located in the ship.
  • Name: the passenger’s name.
  • Sex: male or female.
  • Age: the age (in years) of the passenger.
  • SibSp: the number of siblings and spouses aboard the ship.
  • Parch: the number of parents and children aboard the ship.
  • Ticket: the passenger’s ticket number.
  • Fare: how much the passenger paid for their ticket on the Titanic.
  • Cabin: the passenger’s cabin number.
  • Embarked: the port where the passenger embarked (C = Cherbourg, Q = Queenstown, S = Southampton)

Next up, we will learn more about our data set by using some basic exploratory data analysis techniques.

Learning About Our Data Set With Exploratory Data Analysis

The Prevalence of Each Classification Category

When using machine learning techniques to model classification problems, it is always a good idea to have a sense of the ratio between categories. For this specific problem, it’s useful to see how many survivors vs. non-survivors exist in our training data.
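
A quick way to get the exact ratio is pandas' value_counts method; this short sketch prints the count and the proportion of each class:

print(titanic_data['Survived'].value_counts())                # raw counts per class
print(titanic_data['Survived'].value_counts(normalize=True))  # proportions per class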

An easy way to visualize this is using the seaborn countplot. In this example, you could create the appropriate seaborn plot with the following Python code:

sns.countplot(x='Survived', data=titanic_data)

This generates the following plot:

As you can see, we have many more incidences of non-survivors than we do of survivors.

Survival Rates Between Genders

It is also useful to compare survival rates relative to some other data feature. For example, we can compare survival rates between the Male and Female values for Sex using the following Python code:

sns.countplot(x='Survived', hue='Sex', data=titanic_data)

This generates the following plot:

As you can see, passengers with a Sex of Male were much more likely to be non-survivors than passengers with a Sex of Female.

Survival Rates Between Passenger Classes

We can perform a similar analysis using the Pclass variable to see which passenger class was the most (and least) likely to have passengers that were survivors.

Here is the code to do this:

sns.countplot(x='Survived', hue='Pclass', data=titanic_data)

This generates the following plot:

The most noticeable observation from this plot is that passengers with a Pclass value of 3 - which indicates the third class, which was the cheapest and least luxurious - were much more likely to die when the Titanic crashed.

The Age Distribution of Titanic Passengers

One other useful analysis we could perform is investigating the age distribution of Titanic passengers. A histogram is an excellent tool for this.

You can generate a histogram of the Age variable with the following code:

plt.hist(titanic_data['Age'].dropna())

Note that the dropna() method is necessary since the data set contains several null values.

Here is the histogram that this code generates:

As you can see, there is a concentration of Titanic passengers with an Age value between 20 and 40.

The Ticket Price Distribution of Titanic Passengers

The last exploratory data analysis technique that we will use is investigating the distribution of fare prices within the Titanic data set.

You can do this with the following code:

plt.hist(titanic_data['Fare'])

This generates the following plot:

As you can see, there are three distinct groups of Fare prices within the Titanic data set. This makes sense because there are also three unique values for the Pclass variable. The different Fare groups correspond to the different Pclass categories.

Since the Titanic data set is a real-world data set, it contains some missing data. We will learn how to deal with missing data in the next section.

Removing Null Data From Our Data Set

To start, let’s examine where our data set contains missing data. To do this, run the following command:

titanic_data.isnull()

This will generate a DataFrame of boolean values where the cell contains True if it is a null value and False otherwise. Here is an image of what this looks like:
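
A compact numeric alternative is to count the missing values in each column by chaining pandas' isnull and sum methods:

print(titanic_data.isnull().sum())  # number of missing values per column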

A far more useful method for assessing missing data in this data set is by creating a quick visualization. To do this, we can use the seaborn visualization library. Here is a quick command that you can use to create a heatmap using the seaborn library:

sns.heatmap(titanic_data.isnull(), cbar=False)

Here is the visualization that this generates:

In this visualization, the white lines indicate missing values in the dataset. You can see that the Age and Cabin columns contain the majority of the missing data in the Titanic data set.

The Age column in particular contains a small enough amount of missing data that we can fill it in using some form of mathematics. On the other hand, the Cabin column is missing enough data that we could probably remove it from our model entirely.

The process of filling in missing data with average data from the rest of the data set is called imputation. We will now use imputation to fill in the missing data from the Age column.

The most basic form of imputation would be to fill in the missing Age data with the average Age value across the entire data set. However, there are better methods.
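
For comparison only (we will not use it), that naive approach would be a one-liner with pandas' fillna method:

titanic_data['Age'].fillna(titanic_data['Age'].mean())  # fills every missing Age with the overall mean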

We will fill in the missing Age values with the average Age value for the specific Pclass passenger class that the passenger belongs to. To understand why this is useful, consider the following boxplot:

sns.boxplot(titanic_data['Pclass'], titanic_data['Age'])

As you can see, the passengers with a Pclass value of 1 (the most expensive passenger class) tend to be the oldest while the passengers with a Pclass value of 3 (the cheapest) tend to be the youngest. This is very logical, so we will use the average Age value within each Pclass group to impute the missing data in our Age column.

The easiest way to perform imputation on a data set like the Titanic data set is by building a custom function. To start, we will need to determine the mean Age value for each Pclass value.

#Pclass value 1
titanic_data[titanic_data['Pclass'] == 1]['Age'].mean()

#Pclass value 2
titanic_data[titanic_data['Pclass'] == 2]['Age'].mean()

#Pclass value 3
titanic_data[titanic_data['Pclass'] == 3]['Age'].mean()
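
A more concise way to compute the same three means at once is pandas' groupby; this sketch is equivalent to the three statements above:

print(titanic_data.groupby('Pclass')['Age'].mean())  # mean Age for each Pclass value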

Here is the final function that we will use to impute our missing Age variables:

def impute_missing_age(columns):
    age = columns[0]
    passenger_class = columns[1]
    if pd.isnull(age):
        if passenger_class == 1:
            return titanic_data[titanic_data['Pclass'] == 1]['Age'].mean()
        elif passenger_class == 2:
            return titanic_data[titanic_data['Pclass'] == 2]['Age'].mean()
        elif passenger_class == 3:
            return titanic_data[titanic_data['Pclass'] == 3]['Age'].mean()
    else:
        return age

Now that this imputation function is complete, we need to apply it to every row in the titanic_data DataFrame. The pandas apply method is an excellent tool for this:

titanic_data['Age'] = titanic_data[['Age', 'Pclass']].apply(impute_missing_age, axis = 1)

Now that we have performed imputation on every row to deal with our missing Age data, let’s regenerate our missing-data heatmap from earlier:

sns.heatmap(titanic_data.isnull(), cbar=False)

You will notice there is no longer any missing data in the Age column of our pandas DataFrame!
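
You can confirm this numerically as well; after imputation, the count of missing Age values should be zero:

print(titanic_data['Age'].isnull().sum())  # prints 0 once imputation has run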

You might be wondering why we spent so much time dealing with missing data in the Age column specifically. It is because given the impact of Age on survival for most disasters and diseases, it is a variable that is likely to have high predictive value within our data set.

Now that we have an understanding of the structure of this data set and have removed its missing data, let’s begin building our logistic regression machine learning model.

Building a Logistic Regression Model

It is now time to build our logistic regression model.

Removing Columns With Too Much Missing Data

First, let’s remove the Cabin column. As we mentioned, the high prevalence of missing data in this column means that it is unwise to impute the missing data, so we will remove it entirely with the following code:

titanic_data.drop('Cabin', axis=1, inplace = True)

Next, let’s remove any additional rows that contain missing data with the pandas dropna() method:

titanic_data.dropna(inplace = True)

Handling Categorical Data With Dummy Variables

The next task we need to handle is dealing with categorical features. Namely, we need to find a way to numerically work with observations that are not naturally numerical.

A great example of this is the Sex column, which has two values: Male and Female. Similarly, the Embarked column contains a single letter which indicates which city the passenger departed from.

To solve this problem, we will create dummy variables. These assign a numerical value to each category of a non-numerical feature.

Fortunately, pandas has a built-in method called get_dummies() that makes it easy to create dummy variables. The get_dummies method does have one issue - it will create a new column for each value in the DataFrame column.

Let’s consider an example to help understand this better. If we call the get_dummies() method on the Sex column, we get the following output:

pd.get_dummies(titanic_data['Sex'])

As you can see, this creates two new columns: female and male. These columns will both be perfect predictors of each other, since a value of 0 in the female column indicates a value of 1 in the male column, and vice versa.

This is called multicollinearity and it significantly reduces the predictive power of your algorithm. To remove this, we can add the argument drop_first = True to the get_dummies method like this:

pd.get_dummies(titanic_data['Sex'], drop_first = True)

Now, let’s create dummy variable columns for our Sex and Embarked columns, and assign them to variables called sex_data and embarked_data.

sex_data = pd.get_dummies(titanic_data['Sex'], drop_first = True)
embarked_data = pd.get_dummies(titanic_data['Embarked'], drop_first = True)

There is one important thing to note about the embarked_data variable defined above. It has two columns: Q and S, but since we’ve already removed one other column (the C column), neither of the remaining two columns are perfect predictors of each other, so multicollinearity does not exist in the new, modified data set.

Adding Dummy Variables to the pandas DataFrame

Next we need to add our sex_data and embarked_data columns to the DataFrame.

You can concatenate these data columns into the existing pandas DataFrame with the following code:

titanic_data = pd.concat([titanic_data, sex_data, embarked_data], axis = 1)

Now if you run the command print(titanic_data.columns), your Jupyter Notebook will generate the following output:

Index(['PassengerId', 'Survived', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp',
       'Parch', 'Ticket', 'Fare', 'Embarked', 'male', 'Q', 'S'],
      dtype='object')

The existence of the male, Q, and S columns shows that our data was concatenated successfully.

Removing Unnecessary Columns From The Data Set

This means that we can now drop the original Sex and Embarked columns from the DataFrame. There are also other columns (like Name, PassengerId, Ticket) that are not predictive of Titanic crash survival rates, so we will remove those as well. The following code handles this for us:

titanic_data.drop(['Name', 'PassengerId', 'Ticket', 'Sex', 'Embarked'], axis = 1, inplace = True)

If you print titanic_data.columns now, your Jupyter Notebook will generate the following output:

Index(['Survived', 'Pclass', 'Age', 'SibSp', 'Parch', 'Fare', 'male', 'Q', 'S'],
      dtype='object')

The DataFrame now has the following appearance:

As you can see, every field in this data set is now numeric, which makes it an excellent candidate for a logistic regression machine learning algorithm.

Creating Training Data and Test Data

Next, it’s time to split our titanic_data into training data and test data. As before, we will use built-in functionality from scikit-learn to do this.

First, we need to divide our data into x values (the data we will be using to make predictions) and y values (the data we are attempting to predict). The following code handles this:

y_data = titanic_data['Survived']
x_data = titanic_data.drop('Survived', axis = 1)

Next, we need to import the train_test_split function from scikit-learn. The following code executes this import:

from sklearn.model_selection import train_test_split

Lastly, we can use the train_test_split function combined with list unpacking to generate our training data and test data:

x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(x_data, y_data, test_size = 0.3)

Note that in this case, the test data is 30% of the original data set as specified with the parameter test_size = 0.3.
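
Since the non-survivor class is noticeably larger than the survivor class, you may also want the split to preserve that ratio. train_test_split supports this through its optional stratify parameter; here is a sketch of that variation:

x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(
    x_data, y_data, test_size = 0.3, stratify = y_data)  # keeps the Survived ratio similar in both splits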

We have now created our training data and test data for our logistic regression model. We will train our model in the next section of this tutorial.

Training the Logistic Regression Model

To train our model, we will first need to import the appropriate model from scikit-learn with the following command:

from sklearn.linear_model import LogisticRegression

Next, we need to create our model by instantiating an instance of the LogisticRegression object:

model = LogisticRegression()

To train the model, we need to call the fit method on the LogisticRegression object we just created and pass in our x_training_data and y_training_data variables, like this:

model.fit(x_training_data, y_training_data)

Our model has now been trained. We will begin making predictions using this model in the next section of this tutorial.

Making Predictions With Our Logistic Regression Model

Let’s make a set of predictions on our test data using the logistic regression model we just created. We will store these predictions in a variable called predictions:

predictions = model.predict(x_test_data)

Our predictions have been made. Let’s examine the accuracy of our model next.

Measuring the Performance of a Logistic Regression Machine Learning Model

scikit-learn has an excellent built-in function called classification_report that makes it easy to measure the performance of a classification machine learning model. We will use it to measure the performance of the model that we just created.

First, let’s import the function:

from sklearn.metrics import classification_report

Next, let’s use it to calculate the performance metrics for our logistic regression machine learning model:

classification_report(y_test_data, predictions)

Here is the output of this command:

              precision    recall  f1-score   support

           0       0.83      0.87      0.85       169
           1       0.75      0.68      0.72        98

    accuracy                           0.80       267
   macro avg       0.79      0.78      0.78       267
weighted avg       0.80      0.80      0.80       267

If you’re interested in seeing the raw confusion matrix and calculating the performance metrics manually, you can do this with the following code:

from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test_data, predictions))

This generates the following output:

[[145  22]
 [ 30  70]]
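
To connect this matrix back to the report above: scikit-learn places true negatives in the top-left cell and true positives in the bottom-right cell, so you can recompute accuracy by hand from these four numbers:

# (true negatives + true positives) / total observations
accuracy = (145 + 70) / (145 + 22 + 30 + 70)
print(accuracy)  # 215 / 267, approximately 0.80, matching the report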

The Full Code for This Tutorial

You can view the full code for this tutorial in this GitHub repository. It is also pasted below for your reference:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns

#Import the data set
titanic_data = pd.read_csv('titanic_train.csv')

#Exploratory data analysis
sns.heatmap(titanic_data.isnull(), cbar=False)
sns.countplot(x='Survived', data=titanic_data)
sns.countplot(x='Survived', hue='Sex', data=titanic_data)
sns.countplot(x='Survived', hue='Pclass', data=titanic_data)
plt.hist(titanic_data['Age'].dropna())
plt.hist(titanic_data['Fare'])
sns.boxplot(titanic_data['Pclass'], titanic_data['Age'])

#Imputation function
def impute_missing_age(columns):
    age = columns[0]
    passenger_class = columns[1]
    if pd.isnull(age):
        if passenger_class == 1:
            return titanic_data[titanic_data['Pclass'] == 1]['Age'].mean()
        elif passenger_class == 2:
            return titanic_data[titanic_data['Pclass'] == 2]['Age'].mean()
        elif passenger_class == 3:
            return titanic_data[titanic_data['Pclass'] == 3]['Age'].mean()
    else:
        return age

#Impute the missing Age data
titanic_data['Age'] = titanic_data[['Age', 'Pclass']].apply(impute_missing_age, axis = 1)

#Reinvestigate missing data
sns.heatmap(titanic_data.isnull(), cbar=False)

#Drop null data
titanic_data.drop('Cabin', axis=1, inplace = True)
titanic_data.dropna(inplace = True)

#Create dummy variables for Sex and Embarked columns
sex_data = pd.get_dummies(titanic_data['Sex'], drop_first = True)
embarked_data = pd.get_dummies(titanic_data['Embarked'], drop_first = True)

#Add dummy variables to the DataFrame and drop non-numeric data
titanic_data = pd.concat([titanic_data, sex_data, embarked_data], axis = 1)
titanic_data.drop(['Name', 'PassengerId', 'Ticket', 'Sex', 'Embarked'], axis = 1, inplace = True)

#Print the finalized data set
titanic_data.head()

#Split the data set into x and y data
y_data = titanic_data['Survived']
x_data = titanic_data.drop('Survived', axis = 1)

#Split the data set into training data and test data
from sklearn.model_selection import train_test_split
x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(x_data, y_data, test_size = 0.3)

#Create the model
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()

#Train the model and create predictions
model.fit(x_training_data, y_training_data)
predictions = model.predict(x_test_data)

#Calculate performance metrics
from sklearn.metrics import classification_report
print(classification_report(y_test_data, predictions))

#Generate a confusion matrix
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test_data, predictions))

Final Thoughts

In this tutorial, you learned how to build linear regression and logistic regression machine learning models in Python.

If you're interested in learning more about building, training, and deploying cutting-edge machine learning models, my eBook Pragmatic Machine Learning will teach you how to build 9 different machine learning models using real-world projects.

You can deploy the code from the eBook to your GitHub or personal portfolio to show to prospective employers. The book launches on August 3rd – preorder it for 50% off now!

Here is a brief summary of what you learned in this article:

  • How to import the libraries required to build a linear regression machine learning algorithm
  • How to split a data set into training data and test data using scikit-learn
  • How to use scikit-learn to train a linear regression model and make predictions using that model
  • How to calculate linear regression performance metrics using scikit-learn
  • Why the Titanic data set is often used for learning machine learning classification techniques
  • How to perform exploratory data analysis when working with a data set for classification machine learning problems
  • How to handle missing data in a pandas DataFrame
  • What imputation means and how you can use it to fill in missing data
  • How to create dummy variables for categorical data in machine learning data sets
  • How to train a logistic regression machine learning model in Python
  • How to make predictions using a logistic regression model in Python
  • How to use scikit-learn’s classification_report to quickly calculate performance metrics for machine learning classification problems

Translated from: https://www.freecodecamp.org/news/how-to-build-and-train-linear-and-logistic-regression-ml-models-in-python/
