TPR, FPR, precision, recall, accuracy, ROC, AUC

https://www.cnblogs.com/sunupo/p/12827639.html

Accuracy | Precision | Recall

https://www.jianshu.com/p/8b7324b0f307?from=timeline
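For quick reference, all of these quantities come from the confusion-matrix counts TP, FP, TN and FN. A minimal sketch with made-up labels and hard predictions:

import numpy as np

# Toy ground truth and predictions (1 = positive, 0 = negative).
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])
preds  = np.array([1, 0, 1, 0, 1, 0, 1, 0])

tp = np.sum((preds == 1) & (labels == 1))
fp = np.sum((preds == 1) & (labels == 0))
tn = np.sum((preds == 0) & (labels == 0))
fn = np.sum((preds == 0) & (labels == 1))

accuracy  = (tp + tn) / (tp + fp + tn + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)   # identical to TPR
fpr       = fp / (fp + tn)
print(accuracy, precision, recall, fpr)

The ROC curve is obtained by sweeping a threshold over the scores and plotting (FPR, TPR) at each threshold; AUC is the area under that curve.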

The approach finally adopted:

https://stackabuse.com/understanding-roc-curves-with-python/   *****

https://www.cnblogs.com/wj-1314/p/9400375.html   *****

https://github.com/marcelcaraciolo/PyROC/blob/master/pyroc.py   *****

https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/metrics/ranking.py#L453   ***** (source code)

https://github.com/yohann84L/plot_metric

https://medium.com/datadriveninvestor/computing-an-roc-graph-with-python-a3aa20b9a3fb

https://zhuanlan.zhihu.com/p/32824418

from sklearn.metrics import roc_curve, auc
from scipy import interpolate

# Wrapped in a helper function (the name is only illustrative) so the trailing return is valid.
def compute_fpr95(labels, scores):
    # pos_label=1 treats label 1 (of the {0, 1} labels) as the positive class
    fpr, tpr, thresh = roc_curve(labels, scores, pos_label=1)
    fpr95 = float(interpolate.interp1d(tpr, fpr)(0.95))
    print('FPR95:', fpr95)
    print('AUC:', auc(fpr, tpr))
    return fpr95
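A quick sanity check of the wrapper above on synthetic scores (the data are made up purely for illustration):

import numpy as np

# 500 positive and 500 negative pairs; positives get higher scores on average.
rng = np.random.RandomState(0)
labels = np.concatenate([np.ones(500), np.zeros(500)]).astype(int)
scores = np.concatenate([rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)])

fpr95 = compute_fpr95(labels, scores)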

A note on the metrics, up front:

https://blog.csdn.net/GAN_player/article/details/85113431

MatchNet computes it as follows:

import operator
import numpy as np

def ErrorRateAt95Recall(labels, scores):
    recall_point = 0.95
    # Sort label-score tuples by the score in descending order.
    temp = zip(labels, scores)
    # operator.itemgetter(1) sorts the tuples by their second element;
    # reverse=True means descending order (largest score first).
    # sorted_scores.sort(key=operator.itemgetter(1), reverse=True)
    sorted_scores = sorted(temp, key=operator.itemgetter(1), reverse=True)
    # Compute error rate.
    # n_match is the number of positive samples in the test set.
    n_match = sum(1 for x in sorted_scores if x[0] == 1)
    n_thresh = recall_point * n_match
    tp = 0
    count = 0
    for label, score in sorted_scores:
        count += 1
        if label == 1:
            tp += 1
        if tp >= n_thresh:
            break
    return float(count - tp) / count

The paper "Learning to Compare Image Patches via Convolutional Neural Networks", however, uses a different formulation in its code: https://github.com/szagoruyko/cvpr15deepcompare

Paper:

https://openaccess.thecvf.com/content_cvpr_2015/papers/Zagoruyko_Learning_to_Compare_2015_CVPR_paper.pdf

Supplemental material (with many additional experiments):

https://openaccess.thecvf.com/content_cvpr_2015/supplemental/Zagoruyko_Learning_to_Compare_2015_CVPR_supplemental.pdf

In MATLAB it looks like this:

https://www.mathworks.com/help/deeplearning/ref/roc.html

function [tpr,fpr,thresholds] = roc(targets,outputs)
%ROC Receiver operating characteristic.
%
%  The receiver operating characteristic is a metric used to check the
%  quality of classifiers. For each class of a classifier, threshold values
%  across the interval [0,1] are applied to outputs. For each threshold,
%  two values are calculated, the True Positive Ratio (the proportion of
%  the targets that are greater than or equal to the threshold that
%  actually have a target value of one), and the False Positive Ratio (the
%  proportion of the targets that are greater than or equal to the
%  threshold that actually have a target value of zero).
%
%  For single class problems, [TPR,FPR,TH] = roc(T,Y) takes
%  a 1xQ target matrix T, where each element is either 1 or 0 indicating
%  class membership or non-membership respectively, and 1xQ outputs Y of
%  values in the range [0,1].
%
%  It returns three 1xQ vectors: the true-positive/positive ratios TPR,
%  the false-positive/negative ratios FPR, and the thresholds associated
%  with each of those values TH.
%
%  For multi-class problems [TPR,FPR,TH] = roc(T,Y) takes
%  an SxQ target matrix T, where each column contains a single 1 value,
%  with all other elements 0. The row index of each 1 indicates which of S
%  categories that vector represents. It also takes an SxQ output matrix Y,
%  with values in the range [0,1]. The row indices of the largest elements in
%  each column of Y indicate the most likely class.
%
%  In the multi-class case, all three values returned are 1xS cell arrays,
%  so that TPR{i}, FPR{i} and TH{i} are the ratios and thresholds for the
%  ith class.
%
%  roc(T,Y) can also take a boolean row vector T, and row vector Y, in
%  which case two categories are represented by targets 1 and 0.
%
%  Here a network is trained to recognize iris flowers, then the ROC is
%  calculated and plotted.
%
%    [x,t] = iris_dataset;
%    net = patternnet(10);
%    net = train(net,x,t);
%    y = net(x);
%    [tpr,fpr,th] = roc(t,y)
%    plotroc(t,y)
%
%  See also PLOTROC, CONFUSION

% Copyright 2007-2016 The MathWorks, Inc.

nnassert.minargs(nargin,2);
targets = nntype.data('format',targets,'Targets');
outputs = nntype.data('format',outputs,'Outputs');
% TODO - nnassert_samesize({targets,outputs},{'Targets','Outputs'});

if size(targets,1) > 1
  warning(message('nnet:roc:Arguments'));
end

targets = [targets{1,:}];
outputs = [outputs{1,:}];

numClasses = size(targets,1);
known = find(~isnan(sum(targets,1)));
targets = targets(:,known);
outputs = outputs(:,known);

if (numClasses == 1)
  targets = [targets; 1-targets];
  outputs = [outputs; 1-outputs-eps*(outputs==0.5)];
  [tpr,fpr,thresholds] = roc(targets,outputs);
  tpr = tpr{1};
  fpr = fpr{1};
  thresholds = thresholds{1};
  return;
end

fpr = cell(1,numClasses);
tpr = cell(1,numClasses);
thresholds = cell(1,numClasses);
for i=1:numClasses
  [tpr{i},fpr{i},thresholds{i}] = roc_one(targets(i,:),outputs(i,:));
end

%%
function [tpr,fpr,thresholds] = roc_one(targets,outputs)

numSamples = length(targets);
numPositiveTargets = sum(targets);
numNegativeTargets = numSamples-numPositiveTargets;

thresholds = unique([0 outputs 1]);
numThresholds = length(thresholds);

sortedPosTargetOutputs = sort(outputs(targets == 1));
numPosTargetOutputs = length(sortedPosTargetOutputs);
sortedNegTargetOutputs = sort(outputs(targets == 0));
numNegTargetOutputs = length(sortedNegTargetOutputs);

fpcount = zeros(1,numThresholds);
tpcount = zeros(1,numThresholds);

posInd = 1;
negInd = 1;
for i=1:numThresholds
  threshold = thresholds(i);
  while (posInd <= numPosTargetOutputs) && (sortedPosTargetOutputs(posInd) <= threshold)
    posInd = posInd + 1;
  end
  tpcount(i) = numPosTargetOutputs + 1 - posInd;
  while (negInd <= numNegTargetOutputs) && (sortedNegTargetOutputs(negInd) <= threshold)
    negInd = negInd + 1;
  end
  fpcount(i) = numNegTargetOutputs + 1 - negInd;
end

tpr = fliplr(tpcount) ./ max(1,numPositiveTargets);
fpr = fliplr(fpcount) ./ max(1,numNegativeTargets);
thresholds = fliplr(thresholds);

This computes the same thing as the snippet below: TPR and FPR at every threshold, from which the FPR at TPR = 0.95 can then be obtained:

from sklearn.metrics import roc_curve, auc
fpr, tpr, thresh = roc_curve(labels, scores)
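For example, the FPR at the first operating point where TPR reaches 0.95 can be read straight off these arrays (a sketch; labels and scores are assumed to be the test labels and match scores from above):

import numpy as np

# First index whose TPR is at least 0.95; interpolating between neighbouring
# points (as in the interp1d snippets) gives a smoother estimate.
idx = int(np.argmax(tpr >= 0.95))
print('FPR at TPR>=0.95:', fpr[idx], 'threshold:', thresh[idx])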

The Lua code used in https://github.com/szagoruyko/cvpr15deepcompare:

local tnt = require 'torchnet.env'
local argcheck = require 'argcheck'

local FPR95Meter = torch.class('tnt.FPR95Meter', 'tnt.Meter', tnt)

FPR95Meter.__init = argcheck{
   doc = [[Compute false positive rate at 95%
For more information see http://www.mathworks.com/help/nnet/ref/roc.html]],
   {name="self", type="tnt.FPR95Meter"},
   call = function(self)
      self:reset()
   end
}

FPR95Meter.reset = argcheck{
   {name="self", type="tnt.FPR95Meter"},
   call = function(self)
      self.targets = {}
      self.outputs = {}
      self.tpr, self.fpr = nil
   end
}

FPR95Meter.add = argcheck{
   {name="self", type="tnt.FPR95Meter"},
   {name="output", type="torch.*Tensor"},
   {name="target", type="torch.*Tensor"},
   call = function(self, output, target)
      if output:numel() ~= 1 then output = output:squeeze() end
      if target:numel() ~= 1 then target = target:squeeze() end
      table.insert(self.targets, target:float())
      table.insert(self.outputs, output:float())
   end
}

local roc = function(targets, outputs)
   local L,I = torch.sort(outputs, 1, true)
   local labels = targets:index(1,I)
   local TPR = torch.cumsum(labels:gt(0):float()) / labels:gt(0):float():sum()
   local FPR = torch.cumsum(labels:lt(0):float()) / labels:lt(0):float():sum()
   return TPR, FPR
end

FPR95Meter.value = argcheck{
   {name="self", type="tnt.FPR95Meter"},
   {name="t", type="number", opt=true},
   call = function(self, t)
      local targets = torch.cat(self.targets)
      local outputs = torch.cat(self.outputs)
      self.tpr, self.fpr = roc(targets, outputs)
      local _,k = (self.tpr - 0.95):abs():min(1)
      local FPR95 = self.fpr[k[1]]
      return FPR95
   end
}
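The Lua roc/value pair sorts by score, builds TPR and FPR with cumulative sums, and returns the FPR at the TPR value closest to 0.95. A rough Python equivalent of that logic (a sketch, not code from the repository; targets are +1/-1 as in the meter):

import numpy as np

def fpr95_cumsum(targets, outputs):
    # Sort scores in descending order and reorder the labels to match.
    order = np.argsort(-np.asarray(outputs, dtype=float))
    labels = np.asarray(targets)[order]
    tpr = np.cumsum(labels > 0) / np.sum(labels > 0)
    fpr = np.cumsum(labels < 0) / np.sum(labels < 0)
    # Nearest operating point to TPR = 0.95, as in the Lua meter.
    k = int(np.argmin(np.abs(tpr - 0.95)))
    return fpr[k]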

The Python version simply uses:

from sklearn.metrics import roc_curve, auc
from scipy import interpolate

# pos_label=1 treats label 1 (of the {0, 1} labels) as the positive class
fpr, tpr, thresh = roc_curve(labels, scores, pos_label=1)
fpr95 = float(interpolate.interp1d(tpr, fpr)(0.95))
print('FPR95:', fpr95)
return fpr95

This excerpt is the tail of main() in the repository's full evaluation script, which reads:
from __future__ import print_function
import os
import sys
import argparse
from functools import partial
from tqdm import tqdm
import numpy as np
import torch
from torch.utils.serialization import load_lua
from torchnet.dataset import ListDataset, ConcatDataset
from torch.autograd import Variable
import torch.nn.functional as F
from sklearn import metrics
from scipy import interpolate
from torch.backends import cudnn
cudnn.benchmark = True

parser = argparse.ArgumentParser(description='DeepCompare PyTorch evaluation code')

parser.add_argument('--model', default='2ch', type=str)
parser.add_argument('--lua_model', default='', type=str, required=True)
parser.add_argument('--nthread', default=4, type=int)
parser.add_argument('--gpu_id', default='0', type=str)

parser.add_argument('--batch_size', default=256, type=int)
parser.add_argument('--test_set', default='liberty', type=str)
parser.add_argument('--test_matches', default='m50_100000_100000_0.txt', type=str)


def get_iterator(dataset, batch_size, nthread):
    def get_list_dataset(pair_type):
        ds = ListDataset(elem_list=dataset[pair_type],
                         load=lambda idx: {
                             'input': np.stack((dataset['patches'][v].astype(np.float32)
                                                - dataset['mean'][v]) / 256.0 for v in idx),
                             'target': 1 if pair_type == 'matches' else -1})
        ds = ds.transform({'input': torch.from_numpy,
                           'target': lambda x: torch.LongTensor([x])})
        return ds.batch(policy='include-last', batchsize=batch_size // 2)
    concat = ConcatDataset([get_list_dataset('matches'),
                            get_list_dataset('nonmatches')])
    return concat.parallel(batch_size=2, shuffle=False, num_workers=nthread)


def conv2d(input, params, base, stride=1, padding=0):
    return F.conv2d(input, params[base + '.weight'], params[base + '.bias'],
                    stride, padding)


def linear(input, params, base):
    return F.linear(input, params[base + '.weight'], params[base + '.bias'])


#####################   2ch   #####################

def deepcompare_2ch(input, params):
    o = conv2d(input, params, 'conv0', stride=3)
    o = F.max_pool2d(F.relu(o), 2, 2)
    o = conv2d(o, params, 'conv1')
    o = F.max_pool2d(F.relu(o), 2, 2)
    o = conv2d(o, params, 'conv2')
    o = F.relu(o).view(o.size(0), -1)
    return linear(o, params, 'fc')


#####################   2ch2stream   #####################

def deepcompare_2ch2stream(input, params):

    def stream(input, name):
        o = conv2d(input, params, name + '.conv0')
        o = F.max_pool2d(F.relu(o), 2, 2)
        o = conv2d(o, params, name + '.conv1')
        o = F.max_pool2d(F.relu(o), 2, 2)
        o = conv2d(o, params, name + '.conv2')
        o = F.relu(o)
        o = conv2d(o, params, name + '.conv3')
        o = F.relu(o)
        return o.view(o.size(0), -1)

    o_fovea = stream(F.avg_pool2d(input, 2, 2), 'fovea')
    o_retina = stream(F.pad(input, (-16,) * 4), 'retina')
    o = linear(torch.cat([o_fovea, o_retina], dim=1), params, 'fc0')
    return linear(F.relu(o), params, 'fc1')


#####################   siam   #####################

def siam(patch, params):
    o = conv2d(patch, params, 'conv0', stride=3)
    o = F.max_pool2d(F.relu(o), 2, 2)
    o = conv2d(o, params, 'conv1')
    o = F.max_pool2d(F.relu(o), 2, 2)
    o = conv2d(o, params, 'conv2')
    o = F.relu(o)
    return o.view(o.size(0), -1)


def deepcompare_siam(input, params):
    o = linear(torch.cat(map(partial(siam, params=params),
                             input.split(1, dim=1)), dim=1), params, 'fc0')
    return linear(F.relu(o), params, 'fc1')


def deepcompare_siam_l2(input, params):
    def single(patch):
        return F.normalize(siam(patch, params))
    return - F.pairwise_distance(*map(single, input.split(1, dim=1)))


#####################   siam2stream   #####################

def siam_stream(patch, params, base):
    o = conv2d(patch, params, base + '.conv0', stride=2)
    o = F.max_pool2d(F.relu(o), 2, 2)
    o = conv2d(o, params, base + '.conv1')
    o = F.relu(o)
    o = conv2d(o, params, base + '.conv2')
    o = F.relu(o)
    o = conv2d(o, params, base + '.conv3')
    return o.view(o.size(0), -1)


def streams(patch, params):
    o_retina = siam_stream(F.pad(patch, (-16,) * 4), params, 'retina')
    o_fovea = siam_stream(F.avg_pool2d(patch, 2, 2), params, 'fovea')
    return torch.cat([o_retina, o_fovea], dim=1)


def deepcompare_siam2stream(input, params):
    embeddings = map(partial(streams, params=params), input.split(1, dim=1))
    o = linear(torch.cat(embeddings, dim=1), params, 'fc0')
    o = F.relu(o)
    o = linear(o, params, 'fc1')
    return o


def deepcompare_siam2stream_l2(input, params):
    def single(patch):
        return F.normalize(streams(patch, params))
    return - F.pairwise_distance(*map(single, input.split(1, dim=1)))


models = {
    '2ch': deepcompare_2ch,
    '2ch2stream': deepcompare_2ch2stream,
    'siam': deepcompare_siam,
    'siam_l2': deepcompare_siam_l2,
    'siam2stream': deepcompare_siam2stream,
    'siam2stream_l2': deepcompare_siam2stream_l2,
}


def main(args):
    opt = parser.parse_args(args)
    print('parsed options:', vars(opt))
    os.environ['CUDA_VISIBLE_DEVICES'] = opt.gpu_id

    if torch.cuda.is_available():
        # to prevent opencv from initializing CUDA in workers
        torch.randn(8).cuda()
        os.environ['CUDA_VISIBLE_DEVICES'] = ''

    def load_provider():
        print('Loading test data')
        p = np.load(opt.test_set)[()]
        for i, t in enumerate(['matches', 'nonmatches']):
            p[t] = p['match_data'][opt.test_matches][i]
        return p

    test_iter = get_iterator(load_provider(), opt.batch_size, opt.nthread)

    def cast(t):
        return t.cuda() if torch.cuda.is_available() else t

    f = models[opt.model]
    net = load_lua(opt.lua_model)

    if opt.model == '2ch':
        params = {}
        for j, i in enumerate([0, 3, 6]):
            params['conv%d.weight' % j] = net.get(i).weight
            params['conv%d.bias' % j] = net.get(i).bias
        params['fc.weight'] = net.get(9).weight
        params['fc.bias'] = net.get(9).bias
    elif opt.model == '2ch2stream':
        params = {}
        for j, branch in enumerate(['fovea', 'retina']):
            for k, layer in enumerate(map(net.get(0).get(j).get(1).get, [1, 4, 7, 9])):
                params['%s.conv%d.weight' % (branch, k)] = layer.weight
                params['%s.conv%d.bias' % (branch, k)] = layer.bias
        for k, layer in enumerate(map(net.get, [1, 3])):
            params['fc%d.weight' % k] = layer.weight
            params['fc%d.bias' % k] = layer.bias
    elif opt.model == 'siam' or opt.model == 'siam_l2':
        params = {}
        for k, layer in enumerate(map(net.get(0).get(0).get, [1, 4, 7])):
            params['conv%d.weight' % k] = layer.weight
            params['conv%d.bias' % k] = layer.bias
        for k, layer in enumerate(map(net.get, [1, 3])):
            params['fc%d.weight' % k] = layer.weight
            params['fc%d.bias' % k] = layer.bias
    elif opt.model == 'siam2stream' or opt.model == 'siam2stream_l2':
        params = {}
        for stream, name in zip(net.get(0).get(0).modules, ['retina', 'fovea']):
            for k, layer in enumerate(map(stream.get, [2, 5, 7, 9])):
                params['%s.conv%d.weight' % (name, k)] = layer.weight
                params['%s.conv%d.bias' % (name, k)] = layer.bias
        for k, layer in enumerate(map(net.get, [1, 3])):
            params['fc%d.weight' % k] = layer.weight
            params['fc%d.bias' % k] = layer.bias

    params = {k: Variable(cast(v)) for k, v in params.items()}

    def create_variables(sample):
        inputs = Variable(cast(sample['input'].float().view(-1, 2, 64, 64)))
        targets = Variable(cast(sample['target'].float().view(-1)))
        return inputs, targets

    test_outputs, test_targets = [], []
    for sample in tqdm(test_iter, dynamic_ncols=True):
        inputs, targets = create_variables(sample)
        y = f(inputs, params)
        test_targets.append(sample['target'].view(-1))
        test_outputs.append(y.data.cpu().view(-1))

    fpr, tpr, thresholds = metrics.roc_curve(torch.cat(test_targets).numpy(),
                                             torch.cat(test_outputs).numpy(), pos_label=1)
    fpr95 = float(interpolate.interp1d(tpr, fpr)(0.95))
    print('FPR95:', fpr95)
    return fpr95


if __name__ == '__main__':
    main(sys.argv[1:])
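To run it, an invocation along these lines should work (the file paths below are placeholders, not actual files shipped with the repository):

# Hypothetical call; --lua_model points at a converted Torch checkpoint and
# --test_set at a prepared .npy provider for the Liberty patches.
fpr95 = main(['--model', '2ch',
              '--lua_model', '/path/to/2ch_liberty.t7',
              '--test_set', '/path/to/liberty.npy',
              '--test_matches', 'm50_100000_100000_0.txt'])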
