I. The Confusion Matrix

For a binary classification model, both the prediction and the ground truth can take the value 0 or 1. We use N and P in place of 0 and 1, and T and F to indicate whether the prediction was correct or wrong. Combining them pairwise yields the confusion matrix shown in the figure below (note: each combination is named from the perspective of the prediction).

Because 1 and 0 are bare numbers and hard to read, we write the two outcomes as P and N. That gives the four combinations PP, PN, NP, NN, which are still hard to read: the symbols do not reveal whether a prediction was correct. To make each case easy to distinguish, we replace the first symbol with T or F, so correctness is visible at a glance.

  • P (Positive): stands for 1
  • N (Negative): stands for 0
  • T (True): the prediction was correct
  • F (False): the prediction was wrong
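Under this labeling scheme, the four cells of the confusion matrix can be tallied directly from paired labels. A minimal sketch (the function name and sample data are illustrative):

```python
def confusion_counts(y_true, y_pred):
    """Tally TP, FP, FN, TN for binary labels where 1 = Positive, 0 = Negative."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # predicted 1, truly 1
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted 1, truly 0
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # predicted 0, truly 1
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # predicted 0, truly 0
    return tp, fp, fn, tn

print(confusion_counts([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))  # -> (2, 1, 1, 2)
```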

II. Accuracy, Precision, Recall, F1-Measure

  • Accuracy: for a given test set, the ratio of the number of correctly classified samples to the total number of samples.
    $Accuracy=\cfrac{TP+TN}{TP+TN+FP+FN}=\cfrac{TP+TN}{\text{total number of samples}}$
  • Precision: the proportion of correctly classified positive samples (TP) among all samples the classifier judged positive (TP + FP).
    $Precision=\cfrac{TP}{TP+FP}=\cfrac{\text{correctly classified positives}}{\text{samples judged positive}}$
  • Recall: the proportion of correctly classified positive samples (TP) among all truly positive samples (TP + FN).
    $Recall=\cfrac{TP}{TP+FN}=\cfrac{\text{correctly classified positives}}{\text{all truly positive samples}}$
  • F1-Measure: the harmonic mean of precision and recall.
    $F1\text{-}Measure=\cfrac{2}{\cfrac{1}{Precision}+\cfrac{1}{Recall}}=\cfrac{2\times Precision\times Recall}{Precision+Recall}$
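The four formulas above can be sketched directly from confusion-matrix counts (the function name and example counts are illustrative; a production version would also guard against zero denominators):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, Precision, Recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
    return accuracy, precision, recall, f1

acc, p, r, f1 = binary_metrics(tp=2, fp=1, fn=1, tn=2)
print(acc, p, r, f1)  # all four equal 2/3 for these counts
```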

Every metric has its value, but judging a model by any single metric often leads to one-sided or even wrong conclusions; only a complementary set of metrics can reveal a model's problems and help solve the real business problem at hand.

III. Multi-Class Evaluation Metrics: A Worked Example

Suppose we have the following data:

Prediction   Truth
A            A
A            A
B            A
C            A
B            B
B            B
C            B
B            C
C            C

As the table shows, this is a labeled three-class prediction set with 9 samples and 3 classes. TN is not needed for computing precision and recall, so it is omitted from the count tables below.

1. Computing Precision and Recall by Definition

1.1 For class A

TP = 2 FP = 0
FN = 2 TN = ~

$Precision=\cfrac{TP}{TP+FP}=\cfrac{2}{2+0}=100\%=1.0$

$Recall=\cfrac{TP}{TP+FN}=\cfrac{2}{2+2}=50\%=0.5$

1.2 For class B

TP = 2 FP = 2
FN = 1 TN = ~

$Precision=\cfrac{TP}{TP+FP}=\cfrac{2}{2+2}=50\%=0.5$

$Recall=\cfrac{TP}{TP+FN}=\cfrac{2}{2+1}\approx 67\%=0.67$

1.3 For class C

TP = 1 FP = 2
FN = 1 TN = ~

$Precision=\cfrac{TP}{TP+FP}=\cfrac{1}{1+2}\approx 33\%=0.33$

$Recall=\cfrac{TP}{TP+FN}=\cfrac{1}{1+1}=50\%=0.5$
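The one-vs-rest counts above can be verified by iterating over the table (the lists are transcribed from the 9-sample table in this section):

```python
# Prediction / truth pairs transcribed from the table above.
predictions = ['A', 'A', 'B', 'C', 'B', 'B', 'C', 'B', 'C']
true_labels = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'C', 'C']

counts = {}
for cls in ['A', 'B', 'C']:
    tp = sum(1 for t, p in zip(true_labels, predictions) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(true_labels, predictions) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(true_labels, predictions) if t == cls and p != cls)
    counts[cls] = (tp, fp, fn)
    print(cls, 'P =', tp / (tp + fp), 'R =', tp / (tp + fn))

print(counts)  # -> {'A': (2, 0, 2), 'B': (2, 2, 1), 'C': (1, 2, 1)}
```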

2. Verifying with the sklearn API

from sklearn.metrics import classification_report

true_label = [0, 0, 0, 0, 1, 1, 1, 2, 2]
prediction = [0, 0, 1, 2, 1, 1, 2, 1, 2]

measure_result = classification_report(true_label, prediction)
print('measure_result = \n', measure_result)

Output:

measure_result = 
               precision    recall  f1-score   support

           0       1.00      0.50      0.67         4
           1       0.50      0.67      0.57         3
           2       0.33      0.50      0.40         2

    accuracy                           0.56         9
   macro avg       0.61      0.56      0.55         9
weighted avg       0.69      0.56      0.58         9

IV. Micro-F1, Macro-F1, weighted-F1

In short, micro-F1 and macro-F1 are both aggregated F1 results. Both are evaluation metrics for multi-class tasks; they are two different ways of averaging F1. Because their computation differs, their results also differ slightly.

1. Micro-F1

Micro-F1 does not distinguish between classes; it computes the F1 score directly from the pooled precision and recall of the whole sample.

  • Computation: first pool the counts of all classes to get an overall Precision and Recall, then compute F1 from them; the result is micro-F1.

  • When to use: the computation accounts for each class's sample count, so it suits imbalanced class distributions; but for the same reason, under extreme imbalance the largest classes dominate the F1 value.

Pooling this sample's confusion-matrix counts gives:

TP = 5 FP = 4
FN = 4 TN = ~

$Precision=\cfrac{TP}{TP+FP}=\cfrac{5}{5+4}\approx 55.56\%=0.5556$

$Recall=\cfrac{TP}{TP+FN}=\cfrac{5}{5+4}\approx 55.56\%=0.5556$

$F1\text{-}Measure=\cfrac{2\times Precision\times Recall}{Precision+Recall}=\cfrac{2\times 0.5556\times 0.5556}{0.5556+0.5556}=0.5556$
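The micro computation above amounts to summing TP, FP and FN over the three classes from Section III and applying the formulas once:

```python
# Pooled counts over classes A, B, C: TP = 2+2+1, FP = 0+2+2, FN = 2+1+1.
tp_total = 2 + 2 + 1
fp_total = 0 + 2 + 2
fn_total = 2 + 1 + 1
precision = tp_total / (tp_total + fp_total)  # 5/9
recall = tp_total / (tp_total + fn_total)     # 5/9
micro_f1 = 2 * precision * recall / (precision + recall)
print(round(micro_f1, 4))  # -> 0.5556
```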

2. Macro-F1

Unlike micro-F1, macro-F1 first computes the precision, recall and F1 score of each class, then averages to obtain the F1 score over the whole sample.

  • Computation: compute Precision, Recall and F1 for each class, then average the per-class F1 values to obtain macro-F1;
  • When to use: class sizes are not taken into account, so every class is treated equally (each class's precision and recall lie between 0 and 1); the value is therefore relatively more influenced by classes with high precision and high recall;

For class A:
$F1_A=\cfrac{2\times Precision\times Recall}{Precision+Recall}=\cfrac{2\times 1\times 0.5}{1+0.5}=0.6667$

For class B:
$F1_B=\cfrac{2\times Precision\times Recall}{Precision+Recall}=\cfrac{2\times 0.5\times 0.67}{0.5+0.67}=0.57265$

For class C:
$F1_C=\cfrac{2\times Precision\times Recall}{Precision+Recall}=\cfrac{2\times 0.33\times 0.5}{0.33+0.5}=0.39759$

Macro-F1 is the mean of the three values above:
$Macro\text{-}F1=\cfrac{F1_A+F1_B+F1_C}{3}=\cfrac{0.6667+0.57265+0.39759}{3}=0.546$
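The same computation in code, using exact fractions for the per-class precision and recall rather than the rounded 0.67 and 0.33 above, which is why the result matches sklearn's 0.546031... exactly:

```python
def f1(p, r):
    """F1 as the harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

f1_a = f1(1.0, 0.5)    # class A
f1_b = f1(0.5, 2 / 3)  # class B, exact recall 2/3
f1_c = f1(1 / 3, 0.5)  # class C, exact precision 1/3
macro_f1 = (f1_a + f1_b + f1_c) / 3  # unweighted mean over classes
print(round(macro_f1, 6))  # -> 0.546032
```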

3. weighted-F1

Besides micro-F1 and macro-F1 there is also weighted-F1: each class's F1-score is multiplied by that class's proportion of the samples and the results are summed. It can be seen as a variant of macro-F1.

The difference between weighted-F1 and macro-F1: macro-F1 gives every class the same weight, whereas weighted-F1 assigns each class a weight proportional to its share of the samples.
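A sketch of weighted-F1 for the Section III example, weighting each class's F1 by its support (4, 3 and 2 true samples for A, B and C); exact fractions are used so the result matches sklearn's 0.575661...:

```python
f1_scores = [2 / 3, 4 / 7, 2 / 5]  # exact per-class F1 for A, B, C
supports = [4, 3, 2]               # number of true samples per class
weighted_f1 = sum(f * s for f, s in zip(f1_scores, supports)) / sum(supports)
print(round(weighted_f1, 6))  # -> 0.575661
```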

V. Choosing a Metric

"We can see that, for Macro, the small classes pull the Precision value up considerably, even though in reality not that many samples were classified correctly. Considering a real environment where the true sample distribution matches the training distribution, such a metric is clearly problematic: the small classes play too large a role, while the large-sample classes are poorly classified. Micro, by contrast, takes this sample imbalance into account and is therefore the better choice in such cases.

In short: if your classes are roughly balanced, either will do; if you think the large-sample classes should carry more weight, use Micro; if you think the small-sample classes should also carry weight, use Macro. If Micro << Macro, there are severe misclassifications among the large-sample classes; if Macro << Micro, there are severe misclassifications among the small-sample classes.

To address Macro's inability to reflect sample imbalance, a good remedy is a weighted Macro average, and thus weighted-F1 was born."

VI. Code

1. Dataset 01

from sklearn.metrics import classification_report
from sklearn.metrics import precision_score, recall_score, f1_score

true_label = [0, 0, 0, 0, 1, 1, 1, 2, 2]
prediction = [0, 0, 1, 2, 1, 1, 2, 1, 2]

measure_result = classification_report(true_label, prediction)
print('measure_result = \n', measure_result)

print("----------------------------- precision -----------------------------")
precision_score_average_None = precision_score(true_label, prediction, average=None)
precision_score_average_micro = precision_score(true_label, prediction, average='micro')
precision_score_average_macro = precision_score(true_label, prediction, average='macro')
precision_score_average_weighted = precision_score(true_label, prediction, average='weighted')
print('precision_score_average_None = ', precision_score_average_None)
print('precision_score_average_micro = ', precision_score_average_micro)
print('precision_score_average_macro = ', precision_score_average_macro)
print('precision_score_average_weighted = ', precision_score_average_weighted)

print("\n\n----------------------------- recall -----------------------------")
recall_score_average_None = recall_score(true_label, prediction, average=None)
recall_score_average_micro = recall_score(true_label, prediction, average='micro')
recall_score_average_macro = recall_score(true_label, prediction, average='macro')
recall_score_average_weighted = recall_score(true_label, prediction, average='weighted')
print('recall_score_average_None = ', recall_score_average_None)
print('recall_score_average_micro = ', recall_score_average_micro)
print('recall_score_average_macro = ', recall_score_average_macro)
print('recall_score_average_weighted = ', recall_score_average_weighted)

print("\n\n----------------------------- F1-value -----------------------------")
f1_score_average_None = f1_score(true_label, prediction, average=None)
f1_score_average_micro = f1_score(true_label, prediction, average='micro')
f1_score_average_macro = f1_score(true_label, prediction, average='macro')
f1_score_average_weighted = f1_score(true_label, prediction, average='weighted')
print('f1_score_average_None = ', f1_score_average_None)
print('f1_score_average_micro = ', f1_score_average_micro)
print('f1_score_average_macro = ', f1_score_average_macro)
print('f1_score_average_weighted = ', f1_score_average_weighted)

Output:

measure_result = 
               precision    recall  f1-score   support

           0       1.00      0.50      0.67         4
           1       0.50      0.67      0.57         3
           2       0.33      0.50      0.40         2

    accuracy                           0.56         9
   macro avg       0.61      0.56      0.55         9
weighted avg       0.69      0.56      0.58         9

----------------------------- precision -----------------------------
precision_score_average_None =  [1.         0.5        0.33333333]
precision_score_average_micro =  0.5555555555555556
precision_score_average_macro =  0.611111111111111
precision_score_average_weighted =  0.6851851851851852


----------------------------- recall -----------------------------
recall_score_average_None =  [0.5        0.66666667 0.5       ]
recall_score_average_micro =  0.5555555555555556
recall_score_average_macro =  0.5555555555555555
recall_score_average_weighted =  0.5555555555555556


----------------------------- F1-value -----------------------------
f1_score_average_None =  [0.66666667 0.57142857 0.4       ]
f1_score_average_micro =  0.5555555555555556
f1_score_average_macro =  0.546031746031746
f1_score_average_weighted =  0.5756613756613757

2. Dataset 02

from sklearn.metrics import classification_report
from sklearn.metrics import precision_score, recall_score, f1_score

true_label = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3]
prediction = [3, 0, 0, 0, 0, 0, 0, 0, 2, 3, 3, 1, 1, 1, 1, 1, 1, 3, 1, 2, 2, 2, 2, 2, 3, 0, 3, 3, 3, 3]

measure_result = classification_report(true_label, prediction)
print('measure_result = \n', measure_result)

print("----------------------------- precision -----------------------------")
precision_score_average_None = precision_score(true_label, prediction, average=None)
precision_score_average_micro = precision_score(true_label, prediction, average='micro')
precision_score_average_macro = precision_score(true_label, prediction, average='macro')
precision_score_average_weighted = precision_score(true_label, prediction, average='weighted')
print('precision_score_average_None = ', precision_score_average_None)
print('precision_score_average_micro = ', precision_score_average_micro)
print('precision_score_average_macro = ', precision_score_average_macro)
print('precision_score_average_weighted = ', precision_score_average_weighted)

print("\n\n----------------------------- recall -----------------------------")
recall_score_average_None = recall_score(true_label, prediction, average=None)
recall_score_average_micro = recall_score(true_label, prediction, average='micro')
recall_score_average_macro = recall_score(true_label, prediction, average='macro')
recall_score_average_weighted = recall_score(true_label, prediction, average='weighted')
print('recall_score_average_None = ', recall_score_average_None)
print('recall_score_average_micro = ', recall_score_average_micro)
print('recall_score_average_macro = ', recall_score_average_macro)
print('recall_score_average_weighted = ', recall_score_average_weighted)

print("\n\n----------------------------- F1-value -----------------------------")
f1_score_average_None = f1_score(true_label, prediction, average=None)
f1_score_average_micro = f1_score(true_label, prediction, average='micro')
f1_score_average_macro = f1_score(true_label, prediction, average='macro')
f1_score_average_weighted = f1_score(true_label, prediction, average='weighted')
print('f1_score_average_None = ', f1_score_average_None)
print('f1_score_average_micro = ', f1_score_average_micro)
print('f1_score_average_macro = ', f1_score_average_macro)
print('f1_score_average_weighted = ', f1_score_average_weighted)

Output:

measure_result = 
               precision    recall  f1-score   support

           0       0.88      0.78      0.82         9
           1       0.86      0.75      0.80         8
           2       0.83      0.71      0.77         7
           3       0.56      0.83      0.67         6

    accuracy                           0.77        30
   macro avg       0.78      0.77      0.76        30
weighted avg       0.80      0.77      0.77        30

----------------------------- precision -----------------------------
precision_score_average_None =  [0.875      0.85714286 0.83333333 0.55555556]
precision_score_average_micro =  0.7666666666666667
precision_score_average_macro =  0.7802579365079365
precision_score_average_weighted =  0.7966269841269841


----------------------------- recall -----------------------------
recall_score_average_None =  [0.77777778 0.75       0.71428571 0.83333333]
recall_score_average_micro =  0.7666666666666667
recall_score_average_macro =  0.7688492063492064
recall_score_average_weighted =  0.7666666666666667


----------------------------- F1-value -----------------------------
f1_score_average_None =  [0.82352941 0.8        0.76923077 0.66666667]
f1_score_average_micro =  0.7666666666666667
f1_score_average_macro =  0.7648567119155354
f1_score_average_weighted =  0.7732126696832579



References:
Macro-F1 Score vs. Micro-F1 Score
Several evaluation metrics for classification problems (Precision, Recall, F1-Score, Micro-F1, Macro-F1)
Evaluation metrics in classification problems: precision, recall, F1-score, macro-F1, micro-F1
