Original article: http://scikit-learn.org/stable/modules/feature_selection.html

The classes in the sklearn.feature_selection module can be used for feature selection/dimensionality reduction on sample sets, either to improve estimators’ accuracy scores or to boost their performance on very high-dimensional datasets.

1.13.1. Removing features with low variance

VarianceThreshold is a simple baseline approach to feature selection. It removes all features whose variance doesn’t meet some threshold. By default, it removes all zero-variance features, i.e. features that have the same value in all samples.

As an example, suppose that we have a dataset with boolean features, and we want to remove all features that are either one or zero (on or off) in more than 80% of the samples. Boolean features are Bernoulli random variables, and the variance of such variables is given by

Var[X] = p(1 - p)

so we can select using the threshold .8 * (1 - .8):

>>> from sklearn.feature_selection import VarianceThreshold
>>> X = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]]
>>> sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
>>> sel.fit_transform(X)
array([[0, 1],
       [1, 0],
       [0, 0],
       [1, 1],
       [1, 0],
       [1, 1]])

As expected, VarianceThreshold has removed the first column, which has a probability p = 5/6 > .8 of containing a zero.

1.13.2. Univariate feature selection

Univariate feature selection works by selecting the best features based on univariate statistical tests. It can be seen as a preprocessing step to an estimator. Scikit-learn exposes feature selection routines as objects that implement the transform method:

  • SelectKBest removes all but the k highest scoring features

  • SelectPercentile removes all but a user-specified highest scoring percentage of features

  • using common univariate statistical tests for each feature: false positive rate SelectFpr, false discovery rate SelectFdr, or family wise error SelectFwe.

  • GenericUnivariateSelect allows univariate feature selection to be performed with a configurable strategy. This makes it possible to select the best univariate selection strategy with a hyper-parameter search estimator (a brief sketch follows this list).
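
The GenericUnivariateSelect bullet has no example in the original text, so here is a minimal sketch of what selecting the strategy with a hyper-parameter search can look like; the pipeline, parameter grid and LinearSVC classifier are illustrative assumptions, not a recipe prescribed by scikit-learn:

# Hypothetical sketch: search over the univariate selection mode and its parameter.
from sklearn.datasets import load_iris
from sklearn.feature_selection import GenericUnivariateSelect, chi2
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

iris = load_iris()
X, y = iris.data, iris.target

pipe = Pipeline([
    ('select', GenericUnivariateSelect(chi2)),  # strategy is set via the `mode` parameter
    ('clf', LinearSVC()),
])
param_grid = [
    {'select__mode': ['k_best'], 'select__param': [1, 2, 3]},         # param = number of features
    {'select__mode': ['percentile'], 'select__param': [25, 50, 75]},  # param = percentage kept
]
search = GridSearchCV(pipe, param_grid, cv=5).fit(X, y)
print(search.best_params_)  # best selection strategy found by cross-validation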

For instance, we can perform a χ² (chi-squared) test on the samples to retrieve only the two best features as follows:

>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectKBest
>>> from sklearn.feature_selection import chi2
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X.shape
(150, 4)
>>> X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
>>> X_new.shape
(150, 2)

These objects take as input a scoring function that returns univariate p-values:

  • For regression: f_regression
  • For classification: chi2 or f_classif

Feature selection with sparse data

If you use sparse data (i.e. data represented as sparse matrices), only chi2 will deal with the data without making it dense.
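
As a quick check of this claim, here is a small sketch (not part of the original page) that feeds a scipy.sparse matrix straight into SelectKBest with chi2; converting the iris data to CSR format is only for illustration:

from scipy.sparse import csr_matrix
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

iris = load_iris()
X_sparse = csr_matrix(iris.data)  # sparse representation of the data
# chi2 works on the sparse matrix directly, without densifying it
X_new = SelectKBest(chi2, k=2).fit_transform(X_sparse, iris.target)
print(X_new.shape)  # (150, 2); the selected columns are returned in sparse format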

Warning

Beware not to use a regression scoring function with a classification problem; you will get useless results.

Examples:

Univariate Feature Selection

1.13.3. Recursive feature elimination

Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and weights are assigned to each of them. Then, the features whose absolute weights are the smallest are pruned from the current set of features. That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached.

RFECV performs RFE in a cross-validation loop to find the optimal number of features.
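
The original section gives no code for RFE, so the following is a minimal sketch; the linear-kernel SVC, the choice of 2 features, and the 5-fold cross-validation are illustrative assumptions:

from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE, RFECV
from sklearn.svm import SVC

iris = load_iris()
X, y = iris.data, iris.target

# Keep the 2 best features according to recursively re-fitted linear-SVM weights.
rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=2).fit(X, y)
print(rfe.support_)   # boolean mask of the selected features
print(rfe.ranking_)   # rank 1 marks selected features; higher ranks were pruned earlier

# RFECV lets cross-validation choose the number of features to keep.
rfecv = RFECV(estimator=SVC(kernel="linear"), cv=5).fit(X, y)
print(rfecv.n_features_)  # number of features selected by cross-validation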

Examples:

  • Recursive feature elimination: A recursive feature elimination example showing the relevance of pixels in a digit classification task.
  • Recursive feature elimination with cross-validation: A recursive feature elimination example with automatic tuning of the number of features selected with cross-validation.

1.13.4. Feature selection using SelectFromModel

SelectFromModel is a meta-transformer that can be used along with any estimator that has a coef_ or feature_importances_ attribute after fitting. The features are considered unimportant and removed if the corresponding coef_ or feature_importances_ values are below the provided threshold parameter. Apart from specifying the threshold numerically, there are built-in heuristics for finding a threshold using a string argument. Available heuristics are “mean”, “median” and float multiples of these like “0.1*mean”.

For examples of how to use it, refer to the sections below.
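
As a short illustration of the string heuristics (an addition here, not taken from the original text), the snippet below keeps the features whose importance is at least the median importance; the RandomForestClassifier is just one possible estimator exposing feature_importances_:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

iris = load_iris()
X, y = iris.data, iris.target

# threshold can be a number, "mean", "median", or a scaled variant like "0.5*mean"
selector = SelectFromModel(RandomForestClassifier(n_estimators=50), threshold="median")
X_reduced = selector.fit(X, y).transform(X)
print(X_reduced.shape)  # roughly half of the features are kept with the "median" heuristic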

Examples

  • Feature selection using SelectFromModel and LassoCV: Selecting the two most important features from the Boston dataset without knowing the threshold beforehand.

1.13.4.1. L1-based feature selection

Linear models penalized with the L1 norm have sparse solutions: many of their estimated coefficients are zero. When the goal is to reduce the dimensionality of the data to use with another classifier, they can be used along with feature_selection.SelectFromModel to select the non-zero coefficients. In particular, sparse estimators useful for this purpose are the linear_model.Lasso for regression, and linear_model.LogisticRegression and svm.LinearSVC for classification:

>>> from sklearn.svm import LinearSVC
>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectFromModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X.shape
(150, 4)
>>> lsvc = LinearSVC(C=0.01, penalty="l1", dual=False).fit(X, y)
>>> model = SelectFromModel(lsvc, prefit=True)
>>> X_new = model.transform(X)
>>> X_new.shape
(150, 3)

With SVMs and logistic regression, the parameter C controls the sparsity: the smaller C, the fewer features selected. With the Lasso, the higher the alpha parameter, the fewer features selected.
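
The effect of alpha can be made concrete with a small sketch on synthetic data (an illustration added here, not from the original docs); the make_regression settings are arbitrary assumptions:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=1.0, random_state=0)
for alpha in (0.1, 1.0, 10.0, 100.0):
    lasso = Lasso(alpha=alpha).fit(X, y)
    n_selected = np.sum(lasso.coef_ != 0)
    print(alpha, n_selected)  # the count of non-zero coefficients shrinks as alpha grows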

Examples:

  • Classification of text documents using sparse features: Comparison of different algorithms for document classification including L1-based feature selection.

L1-recovery and compressive sensing

For a good choice of alpha, the Lasso can fully recover the exact set of non-zero variables using only a few observations, provided certain specific conditions are met. In particular, the number of samples should be “sufficiently large”, or L1 models will perform at random, where “sufficiently large” depends on the number of non-zero coefficients, the logarithm of the number of features, the amount of noise, the smallest absolute value of the non-zero coefficients, and the structure of the design matrix X. In addition, the design matrix must display certain specific properties, such as not being too correlated.

There is no general rule to select an alpha parameter for recovery of non-zero coefficients. It can be set by cross-validation (LassoCV or LassoLarsCV), though this may lead to under-penalized models: including a small number of non-relevant variables is not detrimental to prediction score. BIC (LassoLarsIC) tends, on the contrary, to set high values of alpha.
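
A brief sketch of the two ways of setting alpha mentioned above (the synthetic data and settings are illustrative assumptions):

from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, LassoLarsIC

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=1.0, random_state=0)
alpha_cv = LassoCV(cv=5).fit(X, y).alpha_                  # alpha chosen by cross-validation
alpha_bic = LassoLarsIC(criterion='bic').fit(X, y).alpha_  # alpha chosen by BIC
print(alpha_cv, alpha_bic)  # as noted above, BIC tends toward higher (more penalized) alpha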

Reference: Richard G. Baraniuk, “Compressive Sensing”, IEEE Signal Processing Magazine [120], July 2007. http://dsp.rice.edu/files/cs/baraniukCSlecture07.pdf

1.13.4.2. Randomized sparse models

The limitation of L1-based sparse models is that, faced with a group of very correlated features, they will select only one. To mitigate this problem, it is possible to use randomization techniques, re-estimating the sparse model many times while perturbing the design matrix or sub-sampling the data, and counting how many times a given regressor is selected.

RandomizedLasso implements this strategy for regression settings, using the Lasso, while RandomizedLogisticRegression uses logistic regression and is suitable for classification tasks. To get a full path of stability scores you can use lasso_stability_path.
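
A minimal sketch of RandomizedLasso, assuming a scikit-learn version that still ships it (it was removed from later releases); the alpha value and synthetic data are illustrative assumptions:

from sklearn.datasets import make_regression
from sklearn.linear_model import RandomizedLasso

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=1.0, random_state=0)
# Re-fit the Lasso on many perturbed / sub-sampled versions of the data.
rlasso = RandomizedLasso(alpha=0.025, random_state=0).fit(X, y)
print(rlasso.scores_)        # stability scores: fraction of runs selecting each feature
print(rlasso.get_support())  # boolean mask of the stably selected features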

Note that for randomized sparse models to be more powerful than standard F statistics at detecting non-zero features, the ground truth model should be sparse; in other words, only a small fraction of the features should be non-zero.

Examples:

  • Sparse recovery: feature selection for sparse linear models: An example comparing different feature selection approaches and discussing in which situation each approach is to be favored.

References:

  • N. Meinshausen, P. Buhlmann, “Stability selection”, Journal of the Royal Statistical Society, 72 (2010). http://arxiv.org/pdf/0809.2932
  • F. Bach, “Model-Consistent Sparse Estimation through the Bootstrap”. http://hal.inria.fr/hal-00354771/

1.13.4.3. Tree-based feature selection

Tree-based estimators (see the sklearn.tree module and forests of trees in the sklearn.ensemble module) can be used to compute feature importances, which in turn can be used to discard irrelevant features (when coupled with the sklearn.feature_selection.SelectFromModel meta-transformer):

>>> from sklearn.ensemble import ExtraTreesClassifier
>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectFromModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X.shape
(150, 4)
>>> clf = ExtraTreesClassifier()
>>> clf = clf.fit(X, y)
>>> clf.feature_importances_
array([ 0.04...,  0.05...,  0.4...,  0.4...])
>>> model = SelectFromModel(clf, prefit=True)
>>> X_new = model.transform(X)
>>> X_new.shape
(150, 2)

Examples:

  • Feature importances with forests of trees: example on synthetic data showing the recovery of the actually meaningful features.
  • Pixel importances with a parallel forest of trees: example on face recognition data.

1.13.5. Feature selection as part of a pipeline

Feature selection is usually used as a pre-processing step before doing the actual learning. The recommended way to do this in scikit-learn is to use a sklearn.pipeline.Pipeline:

clf = Pipeline([
    ('feature_selection', SelectFromModel(LinearSVC(penalty="l1"))),
    ('classification', RandomForestClassifier())
])
clf.fit(X, y)

In this snippet we make use of a sklearn.svm.LinearSVC coupled with sklearn.feature_selection.SelectFromModel to evaluate feature importances and select the most relevant features. Then, a sklearn.ensemble.RandomForestClassifier is trained on the transformed output, i.e. using only the relevant features. You can of course perform similar operations with the other feature selection methods, and also with classifiers that provide a way to evaluate feature importances. See the sklearn.pipeline.Pipeline examples for more details.
