sklearn.metrics: Metrics

The official documentation is the best place to learn this.

See the Model evaluation: quantifying the quality of predictions section and the Pairwise metrics, Affinities and Kernels section of the user guide for further details.

The sklearn.metrics module includes score functions, performance metrics, pairwise metrics and distance computations.

Model Selection Interface

See the The scoring parameter: defining model evaluation rules section of the user guide for further details.

metrics.get_scorer(scoring) Get a scorer from string.
metrics.make_scorer(score_func[, …]) Make a scorer from a performance metric or loss function.
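As a minimal sketch (the dataset and classifier here are illustrative, not part of the original post), make_scorer wraps a metric function into a scorer object that can be passed via the scoring parameter, while get_scorer looks up a predefined scorer by name:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score, make_scorer, get_scorer
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

# Wrap a metric (here F2, i.e. fbeta_score with beta=2) into a scorer object.
f2_scorer = make_scorer(fbeta_score, beta=2)

# Retrieve a predefined scorer by its string name.
acc_scorer = get_scorer("accuracy")

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, scoring=f2_scorer, cv=3))
print(cross_val_score(clf, X, y, scoring=acc_scorer, cv=3))
```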

Classification metrics

See the Classification metrics section of the user guide for further details.

metrics.accuracy_score(y_true, y_pred[, …]) Accuracy classification score.
metrics.auc(x, y[, reorder]) Compute Area Under the Curve (AUC) using the trapezoidal rule
metrics.average_precision_score(y_true, y_score) Compute average precision (AP) from prediction scores
metrics.brier_score_loss(y_true, y_prob[, …]) Compute the Brier score.
metrics.classification_report(y_true, y_pred) Build a text report showing the main classification metrics
metrics.cohen_kappa_score(y1, y2[, labels, …]) Cohen’s kappa: a statistic that measures inter-annotator agreement.
metrics.confusion_matrix(y_true, y_pred[, …]) Compute confusion matrix to evaluate the accuracy of a classification
metrics.f1_score(y_true, y_pred[, labels, …]) Compute the F1 score, also known as balanced F-score or F-measure
metrics.fbeta_score(y_true, y_pred, beta[, …]) Compute the F-beta score
metrics.hamming_loss(y_true, y_pred[, …]) Compute the average Hamming loss.
metrics.hinge_loss(y_true, pred_decision[, …]) Average hinge loss (non-regularized)
metrics.jaccard_similarity_score(y_true, y_pred) Jaccard similarity coefficient score
metrics.log_loss(y_true, y_pred[, eps, …]) Log loss, aka logistic loss or cross-entropy loss.
metrics.matthews_corrcoef(y_true, y_pred[, …]) Compute the Matthews correlation coefficient (MCC)
metrics.precision_recall_curve(y_true, …) Compute precision-recall pairs for different probability thresholds
metrics.precision_recall_fscore_support(…) Compute precision, recall, F-measure and support for each class
metrics.precision_score(y_true, y_pred[, …]) Compute the precision
metrics.recall_score(y_true, y_pred[, …]) Compute the recall
metrics.roc_auc_score(y_true, y_score[, …]) Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.
metrics.roc_curve(y_true, y_score[, …]) Compute Receiver operating characteristic (ROC)
metrics.zero_one_loss(y_true, y_pred[, …]) Zero-one classification loss.
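A brief illustrative example (the labels below are toy values chosen for demonstration) showing how a few of these classification metrics are called:

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             f1_score, classification_report)

# Toy ground-truth labels and predictions.
y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]

print(accuracy_score(y_true, y_pred))          # fraction of correct predictions
print(confusion_matrix(y_true, y_pred))        # rows: true class, columns: predicted class
print(f1_score(y_true, y_pred))                # harmonic mean of precision and recall
print(classification_report(y_true, y_pred))   # per-class precision/recall/F1/support
```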

Regression metrics

See the Regression metrics section of the user guide for further details.

metrics.explained_variance_score(y_true, y_pred) Explained variance regression score function
metrics.mean_absolute_error(y_true, y_pred) Mean absolute error regression loss
metrics.mean_squared_error(y_true, y_pred[, …]) Mean squared error regression loss
metrics.mean_squared_log_error(y_true, y_pred) Mean squared logarithmic error regression loss
metrics.median_absolute_error(y_true, y_pred) Median absolute error regression loss
metrics.r2_score(y_true, y_pred[, …]) R^2 (coefficient of determination) regression score function.
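A small illustrative example (the target values are made up) of the regression metrics listed above:

```python
from sklearn.metrics import (explained_variance_score, mean_absolute_error,
                             mean_squared_error, r2_score)

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

print(mean_absolute_error(y_true, y_pred))       # average absolute error
print(mean_squared_error(y_true, y_pred))        # average squared error
print(r2_score(y_true, y_pred))                  # coefficient of determination
print(explained_variance_score(y_true, y_pred))  # variance explained by the model
```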

Multilabel ranking metrics

See the Multilabel ranking metrics section of the user guide for further details.

metrics.coverage_error(y_true, y_score[, …]) Coverage error measure
metrics.label_ranking_average_precision_score(…) Compute ranking-based average precision
metrics.label_ranking_loss(y_true, y_score) Compute Ranking loss measure
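A minimal sketch (synthetic label-indicator matrix and scores) showing how these ranking metrics are called:

```python
import numpy as np
from sklearn.metrics import (coverage_error,
                             label_ranking_average_precision_score,
                             label_ranking_loss)

# Binary indicator matrix of true labels and real-valued scores per label.
y_true = np.array([[1, 0, 0], [0, 0, 1]])
y_score = np.array([[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]])

print(coverage_error(y_true, y_score))
print(label_ranking_average_precision_score(y_true, y_score))
print(label_ranking_loss(y_true, y_score))
```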

Clustering metrics

See the Clustering performance evaluation section of the user guide for further details.

The sklearn.metrics.cluster submodule contains evaluation metrics for cluster analysis results. There are two forms of evaluation:

  • supervised, which uses ground truth class values for each sample.

  • unsupervised, which does not and measures the ‘quality’ of the model itself.

metrics.adjusted_mutual_info_score(…) Adjusted Mutual Information between two clusterings.
metrics.adjusted_rand_score(labels_true, …) Rand index adjusted for chance.
metrics.calinski_harabaz_score(X, labels) Compute the Calinski and Harabaz score.
metrics.completeness_score(labels_true, …) Completeness metric of a cluster labeling given a ground truth.
metrics.fowlkes_mallows_score(labels_true, …) Measure the similarity of two clusterings of a set of points.
metrics.homogeneity_completeness_v_measure(…) Compute the homogeneity and completeness and V-Measure scores at once.
metrics.homogeneity_score(labels_true, …) Homogeneity metric of a cluster labeling given a ground truth.
metrics.mutual_info_score(labels_true, …) Mutual Information between two clusterings.
metrics.normalized_mutual_info_score(…) Normalized Mutual Information between two clusterings.
metrics.silhouette_score(X, labels[, …]) Compute the mean Silhouette Coefficient of all samples.
metrics.silhouette_samples(X, labels[, metric]) Compute the Silhouette Coefficient for each sample.
metrics.v_measure_score(labels_true, labels_pred) V-measure cluster labeling given a ground truth.
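A short illustrative example (synthetic labels and points, chosen only for demonstration) of one supervised and one unsupervised clustering metric:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, silhouette_score

# Supervised: compare a clustering against ground-truth labels.
labels_true = [0, 0, 1, 1, 2, 2]
labels_pred = [1, 1, 0, 0, 2, 2]
print(adjusted_rand_score(labels_true, labels_pred))  # 1.0: identical up to relabeling

# Unsupervised: judge cluster quality from the data alone.
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.1]])
labels = [0, 0, 1, 1]
print(silhouette_score(X, labels))
```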

Biclustering metrics

See the Biclustering evaluation section of the user guide for further details.

metrics.consensus_score(a, b[, similarity]) The similarity of two sets of biclusters.
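An illustrative sketch (the indicator arrays are made up for demonstration): consensus_score compares two biclusterings, each given as a (rows, columns) pair of boolean indicator arrays, and is invariant to the order in which the biclusters are listed:

```python
import numpy as np
from sklearn.metrics import consensus_score

# Two biclusterings of a 4x4 matrix, each described by boolean indicator
# arrays of shape (n_biclusters, n_rows) and (n_biclusters, n_columns).
rows_a = np.array([[True, True, False, False],
                   [False, False, True, True]])
cols_a = np.array([[True, True, False, False],
                   [False, False, True, True]])
rows_b = np.array([[False, False, True, True],
                   [True, True, False, False]])
cols_b = np.array([[False, False, True, True],
                   [True, True, False, False]])

# Identical biclusters listed in a different order still score 1.0.
print(consensus_score((rows_a, cols_a), (rows_b, cols_b)))
```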

Pairwise metrics

See the Pairwise metrics, Affinities and Kernels section of the user guide for further details.

metrics.pairwise.additive_chi2_kernel(X[, Y]) Computes the additive chi-squared kernel between observations in X and Y
metrics.pairwise.chi2_kernel(X[, Y, gamma]) Computes the exponential chi-squared kernel between X and Y.
metrics.pairwise.cosine_similarity(X[, Y, …]) Compute cosine similarity between samples in X and Y.
metrics.pairwise.cosine_distances(X[, Y]) Compute cosine distance between samples in X and Y.
metrics.pairwise.distance_metrics() Valid metrics for pairwise_distances.
metrics.pairwise.euclidean_distances(X[, Y, …]) Considering the rows of X (and Y=X) as vectors, compute the distance matrix between each pair of vectors.
metrics.pairwise.kernel_metrics() Valid metrics for pairwise_kernels
metrics.pairwise.laplacian_kernel(X[, Y, gamma]) Compute the laplacian kernel between X and Y.
metrics.pairwise.linear_kernel(X[, Y]) Compute the linear kernel between X and Y.
metrics.pairwise.manhattan_distances(X[, Y, …]) Compute the L1 distances between the vectors in X and Y.
metrics.pairwise.pairwise_distances(X[, Y, …]) Compute the distance matrix from a vector array X and optional Y.
metrics.pairwise.pairwise_kernels(X[, Y, …]) Compute the kernel between arrays X and optional array Y.
metrics.pairwise.polynomial_kernel(X[, Y, …]) Compute the polynomial kernel between X and Y.
metrics.pairwise.rbf_kernel(X[, Y, gamma]) Compute the rbf (gaussian) kernel between X and Y.
metrics.pairwise.sigmoid_kernel(X[, Y, …]) Compute the sigmoid kernel between X and Y.
metrics.pairwise.paired_euclidean_distances(X, Y) Computes the paired euclidean distances between X and Y
metrics.pairwise.paired_manhattan_distances(X, Y) Compute the L1 distances between the vectors in X and Y.
metrics.pairwise.paired_cosine_distances(X, Y) Computes the paired cosine distances between X and Y
metrics.pairwise.paired_distances(X, Y[, metric]) Computes the paired distances between X and Y.
metrics.pairwise_distances(X[, Y, metric, …]) Compute the distance matrix from a vector array X and optional Y.
metrics.pairwise_distances_argmin(X, Y[, …]) Compute minimum distances between one point and a set of points.
metrics.pairwise_distances_argmin_min(X, Y) Compute minimum distances between one point and a set of points.
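A minimal example (the arrays are chosen purely for illustration) of computing distance and kernel matrices with the pairwise utilities:

```python
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.metrics.pairwise import rbf_kernel, cosine_similarity

X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.0]])
Y = np.array([[1.0, 0.0], [2.0, 1.0]])

print(pairwise_distances(X, Y, metric="euclidean"))  # shape (3, 2) distance matrix
print(rbf_kernel(X, Y, gamma=0.5))                   # Gaussian kernel values
print(cosine_similarity(X, Y))                       # cosine similarity matrix
```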

Reposted from: https://blog.51cto.com/emily18/2090970
