yolov3--25--Detectron object detection visualization: P-R curve plotting and Recall/TP/FP/FN evaluation metrics
Detectron object detection platform
- Evaluate the training results (compute mAP):
CUDA_VISIBLE_DEVICES=4 python tools/test_net.py --cfg experiments/2gpu_e2e_faster_rcnn_R-50-FPN-voc2007.yaml TEST.WEIGHTS out-faster-rcnn-1/train/voc_2007_train/generalized_rcnn/model_final.pkl NUM_GPUS 1 | tee visualization/2gpu_e2e_faster_rcnn_R-50-FPN-voc2007-test_net.log
- Visualize the detection results (test multiple images):
CUDA_VISIBLE_DEVICES=4 python tools/visualize_results.py --dataset voc_2007_val --detections out-faster-rcnn-1/test/voc_2007_val/generalized_rcnn/detections.pkl --output-dir out-faster-rcnn-1/detectron-visualizations
P-R curve plotting
The result looks like the figure below:
Obtaining Recall, TP, FP, FN and other evaluation metrics
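As a toy illustration (the numbers are made up, not taken from the actual evaluation), these metrics come from cumulative TP/FP counts over detections sorted by descending confidence; plotting `prec` against `rec` gives the P-R curve:

```python
import numpy as np

# Five detections already sorted by descending confidence.
# tp[i] = 1 if detection i matched an unmatched ground-truth box with
# IoU above the threshold; otherwise it counts as a false positive.
tp = np.array([1., 1., 0., 1., 0.])
fp = 1. - tp
npos = 4  # total number of (non-difficult) ground-truth objects

tp_cum = np.cumsum(tp)   # running TP count at each detection rank
fp_cum = np.cumsum(fp)   # running FP count at each detection rank
rec = tp_cum / float(npos)
prec = tp_cum / np.maximum(tp_cum + fp_cum, np.finfo(np.float64).eps)
print(rec)   # recall at each rank
print(prec)  # precision at each rank
```

FN at any cutoff is simply `npos` minus the TP count at that cutoff, which is how the table of per-class counts later in this post is produced.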
Change 1: modify the evaluation function as follows:
```python
def voc_eval(detpath, annopath, imagesetfile, classname, cachedir,
             ovthresh=0.5, use_07_metric=False):
    # first load gt
    if not os.path.isdir(cachedir):
        os.mkdir(cachedir)
    imageset = os.path.splitext(os.path.basename(imagesetfile))[0]
    cachefile = os.path.join(cachedir, imageset + '_annots.pkl')  # val_annotations.pkl
    # read list of images
    with open(imagesetfile, 'r') as f:
        lines = f.readlines()
    imagenames = [x.strip() for x in lines]

    if not os.path.isfile(cachefile):
        # load annots
        recs = {}
        for i, imagename in enumerate(imagenames):
            recs[imagename] = parse_rec(annopath.format(imagename))
            if i % 100 == 0:
                logger.info(
                    'Reading annotation for {:d}/{:d}'.format(i + 1, len(imagenames)))
        # save
        logger.info('Saving cached annotations to {:s}'.format(cachefile))
        save_object(recs, cachefile)  # change
    else:
        recs = load_object(cachefile)  # change

    # extract gt objects for this class
    class_recs = {}
    npos = 0
    for imagename in imagenames:
        R = [obj for obj in recs[imagename] if obj['name'] == classname]
        bbox = np.array([x['bbox'] for x in R])
        difficult = np.array([x['difficult'] for x in R]).astype(np.bool)
        det = [False] * len(R)
        npos = npos + sum(~difficult)
        class_recs[imagename] = {'bbox': bbox, 'difficult': difficult, 'det': det}

    # read dets
    detfile = detpath.format(classname)
    with open(detfile, 'r') as f:
        lines = f.readlines()
    splitlines = [x.strip().split(' ') for x in lines]
    image_ids = [x[0] for x in splitlines]
    confidence = np.array([float(x[1]) for x in splitlines])
    BB = np.array([[float(z) for z in x[2:]] for x in splitlines])

    # sort by confidence
    sorted_ind = np.argsort(-confidence)
    BB = BB[sorted_ind, :]
    image_ids = [image_ids[x] for x in sorted_ind]

    # go down dets and mark TPs and FPs
    nd = len(image_ids)
    tp = np.zeros(nd)
    fp = np.zeros(nd)
    for d in range(nd):
        R = class_recs[image_ids[d]]
        bb = BB[d, :].astype(float)
        ovmax = -np.inf
        BBGT = R['bbox'].astype(float)
        if BBGT.size > 0:
            # compute overlaps
            # intersection
            ixmin = np.maximum(BBGT[:, 0], bb[0])
            iymin = np.maximum(BBGT[:, 1], bb[1])
            ixmax = np.minimum(BBGT[:, 2], bb[2])
            iymax = np.minimum(BBGT[:, 3], bb[3])
            iw = np.maximum(ixmax - ixmin + 1., 0.)
            ih = np.maximum(iymax - iymin + 1., 0.)
            inters = iw * ih
            # union
            uni = ((bb[2] - bb[0] + 1.) * (bb[3] - bb[1] + 1.) +
                   (BBGT[:, 2] - BBGT[:, 0] + 1.) *
                   (BBGT[:, 3] - BBGT[:, 1] + 1.) - inters)
            overlaps = inters / uni
            ovmax = np.max(overlaps)
            jmax = np.argmax(overlaps)
        if ovmax > ovthresh:
            if not R['difficult'][jmax]:
                if not R['det'][jmax]:
                    tp[d] = 1.
                    R['det'][jmax] = 1  # change
                else:
                    fp[d] = 1.
        else:
            fp[d] = 1.

    # count only detections whose confidence exceeds 0.7
    num = len(np.where(confidence > 0.7)[0])

    # compute precision recall
    fp = np.cumsum(fp)
    fpnum = int(fp[num - 1])
    tp = np.cumsum(tp)
    tpnum = int(tp[num - 1])
    rec = tp / float(npos)
    objectnum = int(npos)
    # avoid divide by zero in case the first detection matches a difficult
    # ground truth
    prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)
    alarm = 1 - prec[num - 1]  # false-alarm rate at the 0.7 cutoff
    ap = voc_ap(rec, prec, use_07_metric)
    return rec, prec, ap, objectnum, tpnum, fpnum, alarm
```
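To make the overlap computation inside the loop concrete, here is the same IoU formula as a standalone sketch on hypothetical boxes (the coordinates are made up for illustration):

```python
import numpy as np

# Hypothetical boxes in VOC [xmin, ymin, xmax, ymax] pixel coordinates
# (the +1 terms come from VOC's inclusive-pixel convention).
bb = np.array([10., 10., 50., 50.])        # one detection
BBGT = np.array([[20., 20., 60., 60.]])    # ground-truth boxes for the image

# intersection
ixmin = np.maximum(BBGT[:, 0], bb[0])
iymin = np.maximum(BBGT[:, 1], bb[1])
ixmax = np.minimum(BBGT[:, 2], bb[2])
iymax = np.minimum(BBGT[:, 3], bb[3])
iw = np.maximum(ixmax - ixmin + 1., 0.)
ih = np.maximum(iymax - iymin + 1., 0.)
inters = iw * ih
# union
uni = ((bb[2] - bb[0] + 1.) * (bb[3] - bb[1] + 1.)
       + (BBGT[:, 2] - BBGT[:, 0] + 1.) * (BBGT[:, 3] - BBGT[:, 1] + 1.)
       - inters)
overlaps = inters / uni
ovmax = float(np.max(overlaps))
print(ovmax)  # about 0.40: below the usual 0.5 threshold, so a false positive
```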
Change 2: modify the calling function as follows:
```python
    if not os.path.isdir(output_dir):
        os.mkdir(output_dir)
    for _, cls in enumerate(json_dataset.classes):
        if cls == '__background__':
            continue
        filename = _get_voc_results_file_template(json_dataset, salt).format(cls)
        rec, prec, ap, objectnum, tpnum, fpnum, alarm = voc_eval(
            filename, anno_path, image_set_path, cls, cachedir,
            ovthresh=0.5, use_07_metric=use_07_metric)
        aps += [ap]
        logger.info('AP for {} = {:.4f}'.format(cls, ap))
        logger.info(
            '{} TP+FN {:d}, TP {:d}, FN {:d}, recall {:.4f}, FP {:d}, '
            'precision {:.4f}, precision2 {:.4f}'.format(
                cls, objectnum, tpnum, objectnum - tpnum,
                tpnum / float(objectnum), fpnum, 1 - alarm,
                tpnum / float(tpnum + fpnum)))  # float() avoids Py2 integer division
```
To compute AP without the 2007 11-point metric, swap the positions of True and False for use_07_metric (important: this changes the reported accuracy).
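The flag matters because the two settings use different AP definitions. Below is a self-contained sketch of the standard VOC `voc_ap` logic, reimplemented here for illustration with made-up rec/prec values, showing that the two settings disagree on the same curve:

```python
import numpy as np

def voc_ap(rec, prec, use_07_metric=False):
    # 11-point interpolated AP (VOC2007) vs. continuous AP (VOC2010+).
    if use_07_metric:
        ap = 0.
        for t in np.arange(0., 1.1, 0.1):
            if np.sum(rec >= t) == 0:
                p = 0.
            else:
                p = np.max(prec[rec >= t])
            ap = ap + p / 11.
    else:
        # append sentinels, make precision monotonically decreasing,
        # then integrate over the recall steps
        mrec = np.concatenate(([0.], rec, [1.]))
        mpre = np.concatenate(([0.], prec, [0.]))
        for i in range(mpre.size - 1, 0, -1):
            mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
        idx = np.where(mrec[1:] != mrec[:-1])[0]
        ap = np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1])
    return ap

# Toy rec/prec arrays (made up) to compare the two settings.
rec = np.array([0.25, 0.5, 0.5, 0.75, 0.75])
prec = np.array([1., 1., 2. / 3., 0.75, 0.6])
print(voc_ap(rec, prec, use_07_metric=True))
print(voc_ap(rec, prec, use_07_metric=False))
```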
.sh training script:
CUDA_VISIBLE_DEVICES='6,7' python tools/train_net.py --cfg experiments/e2e_mask_rcnn_R-101-FPN_2x_gn-train1999-2gpu.yaml OUTPUT_DIR e2e_mask_rcnn_R-101-FPN_2x_gn-train1999-2gpu/ | tee visualization/e2e_mask_rcnn_R-101-FPN_2x_gn-train1999-2gpu-start6-18-416-416-1.log
1. Training produces a .pkl weights file.
2. Convert it to this format:
3. Plot it together with the other curves in a single figure:
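Step 3 can be sketched as below. This assumes matplotlib is installed; the (recall, precision) arrays and model names are made-up placeholders for the curves exported from the evaluation:

```python
import matplotlib
matplotlib.use('Agg')  # render without a display
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical P-R data for two models (replace with your exported arrays).
curves = {
    'faster-rcnn': (np.array([0.0, 0.5, 0.75]), np.array([1.0, 0.9, 0.7])),
    'yolov3': (np.array([0.0, 0.6, 0.8]), np.array([1.0, 0.85, 0.65])),
}
for name, (rec, prec) in curves.items():
    plt.plot(rec, prec, label=name)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('P-R curves')
plt.legend()
plt.savefig('pr_curves.png')
```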
Errors encountered:
Python 3 error: ImportError: No module named 'cPickle'
Python file-read error: (result, consumed) = self._buffer_decode(data, self.errors, final)
Solution: in Python 3 the cPickle module no longer exists (import pickle instead), and pickle files must be opened in binary mode, i.e. change "r" to "rb".
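A minimal compatibility sketch combining both fixes; the file path and data are made up for the demo (in the Detectron code the file in question is the annotations cache):

```python
# In Python 3, cPickle is gone; this try/except works on both versions.
try:
    import cPickle as pickle  # Python 2
except ImportError:
    import pickle             # Python 3

import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo_annots.pkl')
with open(path, 'wb') as f:   # write pickle data in binary mode
    pickle.dump({'recall': 0.75}, f)
with open(path, 'rb') as f:   # 'rb', not 'r': pickle data is bytes
    data = pickle.load(f)
print(data['recall'])
```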
KeyError when reading cached annotations: R = [obj for obj in recs[imagename] if obj['name'] == classname] raises KeyError: '007765'
Working fix: before training, delete the cached .pkl files, i.e. the annotations_cache under VOCdevkit2007. In my setup the path is ../data/VOCdevkit/annotations_cache/; deleting annots.pkl there makes the test run normally (verified).
Issue reference: https://blog.csdn.net/nuoyanli/article/details/94434890
Watch the indentation when pasting Python code: four spaces per level.
pickle.load raises EOFError: Ran out of input
Reference: https://blog.csdn.net/Mr_health/article/details/89519469
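A common cause of this EOFError is a truncated or zero-byte cache file. A minimal guard (the demo path is made up) checks the file size before loading:

```python
import os
import pickle
import tempfile

# A zero-byte file reproduces the error: pickle.load on it raises EOFError.
path = os.path.join(tempfile.mkdtemp(), 'empty_annots.pkl')
open(path, 'wb').close()

# Only load the cache when the file is non-empty; otherwise treat the cache
# as missing (e.g. delete it and let the evaluation regenerate it).
if os.path.getsize(path) > 0:
    with open(path, 'rb') as f:
        recs = pickle.load(f)
else:
    recs = None
print(recs)
```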