I recently started using Keras for deep learning and found that building models with it is noticeably more convenient than with MXNet, Caffe, and similar frameworks, which makes it well suited for beginners. So I picked up a Keras implementation of the classic object-detection model Faster R-CNN to practice on. The main body of the code is adapted from the Zhihu column Learning Machine by 张潇捷; the original articles are linked below:
keras版faster-rcnn算法详解(1.RPN计算) 
keras版faster-rcnn算法详解 (2.roi计算及其他)

In this series I also work through details such as the loss computation and the config settings:
Keras版Faster-RCNN代码学习(IOU,RPN)1 
Keras版Faster-RCNN代码学习(Batch Normalization)2 
Keras版Faster-RCNN代码学习(loss,xml解析)3 
Keras版Faster-RCNN代码学习(roipooling resnet/vgg)4 
Keras版Faster-RCNN代码学习(measure_map,train/test)5

config.py

from keras import backend as K
import math


class Config:

    def __init__(self):
        self.verbose = True

        self.network = 'resnet50'

        # setting for data augmentation
        self.use_horizontal_flips = False
        self.use_vertical_flips = False
        self.rot_90 = False

        # anchor box scales
        self.anchor_box_scales = [128, 256, 512]

        # anchor box ratios
        self.anchor_box_ratios = [[1, 1], [1./math.sqrt(2), 2./math.sqrt(2)], [2./math.sqrt(2), 1./math.sqrt(2)]]

        # size to resize the smallest side of the image
        self.im_size = 600

        # image channel-wise mean to subtract
        self.img_channel_mean = [103.939, 116.779, 123.68]
        self.img_scaling_factor = 1.0

        # number of ROIs at once
        self.num_rois = 4

        # stride at the RPN (this depends on the network configuration)
        self.rpn_stride = 16

        self.balanced_classes = False

        # scaling the stdev
        self.std_scaling = 4.0
        self.classifier_regr_std = [8.0, 8.0, 4.0, 4.0]

        # overlaps for RPN
        self.rpn_min_overlap = 0.3
        self.rpn_max_overlap = 0.7

        # overlaps for classifier ROIs
        self.classifier_min_overlap = 0.1
        self.classifier_max_overlap = 0.5

        # placeholder for the class mapping, automatically generated by the parser
        self.class_mapping = None

        # location of pretrained weights for the base network
        # weight files can be found at:
        # https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_th_dim_ordering_th_kernels_notop.h5
        # https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
        self.model_path = 'model_frcnn.vgg.hdf5'

This file configures the parameters the rest of the code relies on: data-augmentation switches, anchor box scales and ratios, the image resize target, the RPN stride, the IoU thresholds for the RPN and the classifier, and the path to the pretrained base-network weights.
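As a usage sketch (assuming the class above is saved as config.py; the module name and printed values are only for illustration), a training script would typically instantiate Config once and read the settings off it:

    from config import Config  # hypothetical module name for the class above

    C = Config()

    # 3 scales x 3 ratios = 9 anchors per feature-map position
    num_anchors = len(C.anchor_box_scales) * len(C.anchor_box_ratios)
    print(num_anchors)        # 9
    print(C.rpn_stride)       # 16: one feature-map cell corresponds to a 16x16 pixel region
    print(C.rpn_max_overlap)  # 0.7: anchors with IoU above this are positives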

data_generators.py

import cv2
import numpy as np
import copy

# args: image annotation dict, augmentation config, and whether to apply augmentation
def augment(img_data, config, augment=True):
    assert 'filepath' in img_data
    assert 'bboxes' in img_data
    assert 'width' in img_data
    assert 'height' in img_data

    img_data_aug = copy.deepcopy(img_data)
    img = cv2.imread(img_data_aug['filepath'])

    if augment:
        rows, cols = img.shape[:2]

        # horizontal flip with 50% probability; the bbox diagonal corners are flipped to match
        if config.use_horizontal_flips and np.random.randint(0, 2) == 0:
            img = cv2.flip(img, 1)
            for bbox in img_data_aug['bboxes']:
                x1 = bbox['x1']
                x2 = bbox['x2']
                bbox['x2'] = cols - x1
                bbox['x1'] = cols - x2

        # vertical flip with 50% probability; the bbox diagonal corners are flipped to match
        if config.use_vertical_flips and np.random.randint(0, 2) == 0:
            img = cv2.flip(img, 0)
            for bbox in img_data_aug['bboxes']:
                y1 = bbox['y1']
                y2 = bbox['y2']
                bbox['y2'] = rows - y1
                bbox['y1'] = rows - y2

        # rotate by a random choice of 0/90/180/270 degrees; the bbox corners are rotated to match
        if config.rot_90:
            angle = np.random.choice([0, 90, 180, 270], 1)[0]
            if angle == 270:
                img = np.transpose(img, (1, 0, 2))
                img = cv2.flip(img, 0)
            elif angle == 180:
                img = cv2.flip(img, -1)
            elif angle == 90:
                img = np.transpose(img, (1, 0, 2))
                img = cv2.flip(img, 1)
            elif angle == 0:
                pass

            for bbox in img_data_aug['bboxes']:
                x1 = bbox['x1']
                x2 = bbox['x2']
                y1 = bbox['y1']
                y2 = bbox['y2']
                if angle == 270:
                    bbox['x1'] = y1
                    bbox['x2'] = y2
                    bbox['y1'] = cols - x2
                    bbox['y2'] = cols - x1
                elif angle == 180:
                    bbox['x2'] = cols - x1
                    bbox['x1'] = cols - x2
                    bbox['y2'] = rows - y1
                    bbox['y1'] = rows - y2
                elif angle == 90:
                    bbox['x1'] = rows - y2
                    bbox['x2'] = rows - y1
                    bbox['y1'] = x1
                    bbox['y2'] = x2
                elif angle == 0:
                    pass

    img_data_aug['width'] = img.shape[1]
    img_data_aug['height'] = img.shape[0]
    return img_data_aug, img
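Below is a minimal, self-contained sketch of calling augment (assuming the function above lives in data_generators.py and Config comes from config.py; the dummy image, file name, and bbox values are made up for the demo):

    import cv2
    import numpy as np
    from config import Config               # hypothetical module names, as above
    from data_generators import augment

    # write a synthetic 300x200 image to disk, since augment() reads from 'filepath'
    cv2.imwrite('dummy.jpg', np.zeros((200, 300, 3), dtype=np.uint8))

    img_data = {
        'filepath': 'dummy.jpg',
        'width': 300,
        'height': 200,
        'bboxes': [{'class': 'car', 'x1': 20, 'y1': 30, 'x2': 60, 'y2': 80}],
    }

    C = Config()
    C.use_horizontal_flips = True  # enable one augmentation for the demo

    img_data_aug, img = augment(img_data, C, augment=True)
    print(img_data_aug['bboxes'])  # flipped with 50% probability, e.g. x1=240, x2=280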

The coordinate bookkeeping is a little tricky; you can work it out with matrix algebra or by sketching the transforms (a sketch is clearer). A box is stored by two diagonal corners, one minimal and one maximal: for [x1, y1, x2, y2] we have x1 < x2 and y1 < y2, and the area is (x2 - x1)(y2 - y1). After a flip, the transformed corner pair is no longer a valid min/max pair, so it has to be reordered by taking the smallest x and y and the largest x and y. For example, given [x1, y1, x2, y2] with x1 < x2 and y1 < y2, a horizontal flip maps x to cols - x; since cols - x2 < cols - x1 and y1 < y2, the reassembled box is [cols - x2, y1, cols - x1, y2]. The other cases follow the same reasoning.
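As a quick sanity check of the horizontal-flip rule, the snippet below (standalone; the image width and box coordinates are made up) confirms that the reassembled corners are ordered correctly and the area is preserved:

    # Minimal sketch: horizontal flip of a bbox in (x1, y1, x2, y2) form.
    # Assumes a hypothetical image width of 100 pixels.
    cols = 100
    x1, y1, x2, y2 = 20, 30, 60, 80           # x1 < x2, y1 < y2

    # Flipping maps x -> cols - x, so the corners swap roles on the x-axis:
    flipped = [cols - x2, y1, cols - x1, y2]  # [40, 30, 80, 80]

    assert flipped[0] < flipped[2] and flipped[1] < flipped[3]
    # the flip preserves the box area
    assert (flipped[2] - flipped[0]) * (flipped[3] - flipped[1]) == (x2 - x1) * (y2 - y1)
    print(flipped)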

IoU and RPN computation

from __future__ import absolute_import
import numpy as np
import cv2
import random
import copy
from . import data_augment
import threading
import itertools

# union area
def union(au, bu, area_intersection):
    area_a = (au[2] - au[0]) * (au[3] - au[1])
    area_b = (bu[2] - bu[0]) * (bu[3] - bu[1])
    area_union = area_a + area_b - area_intersection
    return area_union

# intersection area
def intersection(ai, bi):
    x = max(ai[0], bi[0])
    y = max(ai[1], bi[1])
    w = min(ai[2], bi[2]) - x
    h = min(ai[3], bi[3]) - y
    if w < 0 or h < 0:
        return 0
    return w*h

# intersection over union
def iou(a, b):
    # a and b should be (x1,y1,x2,y2)
    if a[0] >= a[2] or a[1] >= a[3] or b[0] >= b[2] or b[1] >= b[3]:
        return 0.0
    area_i = intersection(a, b)
    area_u = union(a, b, area_i)
    return float(area_i) / float(area_u + 1e-6)

# resize the image so that its smallest side equals img_min_side
def get_new_img_size(width, height, img_min_side=600):
    if width <= height:
        f = float(img_min_side) / width
        resized_height = int(f * height)
        resized_width = img_min_side
    else:
        f = float(img_min_side) / height
        resized_width = int(f * width)
        resized_height = img_min_side
    return resized_width, resized_height

class SampleSelector:
    def __init__(self, class_count):
        # ignore classes that have zero samples
        self.classes = [b for b in class_count.keys() if class_count[b] > 0]
        self.class_cycle = itertools.cycle(self.classes)
        self.curr_class = next(self.class_cycle)

    def skip_sample_for_balanced_class(self, img_data):
        class_in_img = False
        for bbox in img_data['bboxes']:
            cls_name = bbox['class']
            if cls_name == self.curr_class:
                class_in_img = True
                self.curr_class = next(self.class_cycle)
                break
        if class_in_img:
            return False
        else:
            return True

def calc_rpn(C, img_data, width, height, resized_width, resized_height, img_length_calc_function):

    downscale = float(C.rpn_stride)
    anchor_sizes = C.anchor_box_scales
    anchor_ratios = C.anchor_box_ratios
    num_anchors = len(anchor_sizes) * len(anchor_ratios)

    # calculate the output map size based on the network architecture
    (output_width, output_height) = img_length_calc_function(resized_width, resized_height)

    n_anchratios = len(anchor_ratios)

    # initialise empty output objectives
    y_rpn_overlap = np.zeros((output_height, output_width, num_anchors))
    y_is_box_valid = np.zeros((output_height, output_width, num_anchors))
    y_rpn_regr = np.zeros((output_height, output_width, num_anchors * 4))

    num_bboxes = len(img_data['bboxes'])

    num_anchors_for_bbox = np.zeros(num_bboxes).astype(int)
    best_anchor_for_bbox = -1*np.ones((num_bboxes, 4)).astype(int)
    best_iou_for_bbox = np.zeros(num_bboxes).astype(np.float32)
    best_x_for_bbox = np.zeros((num_bboxes, 4)).astype(int)
    best_dx_for_bbox = np.zeros((num_bboxes, 4)).astype(np.float32)

    # get the GT box coordinates, and resize to account for image resizing
    gta = np.zeros((num_bboxes, 4))
    for bbox_num, bbox in enumerate(img_data['bboxes']):
        gta[bbox_num, 0] = bbox['x1'] * (resized_width / float(width))
        gta[bbox_num, 1] = bbox['x2'] * (resized_width / float(width))
        gta[bbox_num, 2] = bbox['y1'] * (resized_height / float(height))
        gta[bbox_num, 3] = bbox['y2'] * (resized_height / float(height))

    # rpn ground truth
    for anchor_size_idx in range(len(anchor_sizes)):
        for anchor_ratio_idx in range(n_anchratios):
            anchor_x = anchor_sizes[anchor_size_idx] * anchor_ratios[anchor_ratio_idx][0]
            anchor_y = anchor_sizes[anchor_size_idx] * anchor_ratios[anchor_ratio_idx][1]

            for ix in range(output_width):
                # x-coordinates of the current anchor box
                x1_anc = downscale * (ix + 0.5) - anchor_x / 2
                x2_anc = downscale * (ix + 0.5) + anchor_x / 2

                # ignore boxes that go across image boundaries
                if x1_anc < 0 or x2_anc > resized_width:
                    continue

                for jy in range(output_height):
                    # y-coordinates of the current anchor box
                    y1_anc = downscale * (jy + 0.5) - anchor_y / 2
                    y2_anc = downscale * (jy + 0.5) + anchor_y / 2

                    # ignore boxes that go across image boundaries
                    if y1_anc < 0 or y2_anc > resized_height:
                        continue

                    # bbox_type indicates whether an anchor should be a target
                    bbox_type = 'neg'

                    # this is the best IOU for the (x,y) coord and the current anchor
                    # note that this is different from the best IOU for a GT bbox
                    best_iou_for_loc = 0.0

                    for bbox_num in range(num_bboxes):
                        # get IOU of the current GT box and the current anchor box
                        curr_iou = iou([gta[bbox_num, 0], gta[bbox_num, 2], gta[bbox_num, 1], gta[bbox_num, 3]],
                                       [x1_anc, y1_anc, x2_anc, y2_anc])
                        # calculate the regression targets if they will be needed
                        if curr_iou > best_iou_for_bbox[bbox_num] or curr_iou > C.rpn_max_overlap:
                            cx = (gta[bbox_num, 0] + gta[bbox_num, 1]) / 2.0
                            cy = (gta[bbox_num, 2] + gta[bbox_num, 3]) / 2.0
                            cxa = (x1_anc + x2_anc)/2.0
                            cya = (y1_anc + y2_anc)/2.0

                            tx = (cx - cxa) / (x2_anc - x1_anc)
                            ty = (cy - cya) / (y2_anc - y1_anc)
                            tw = np.log((gta[bbox_num, 1] - gta[bbox_num, 0]) / (x2_anc - x1_anc))
                            th = np.log((gta[bbox_num, 3] - gta[bbox_num, 2]) / (y2_anc - y1_anc))

                        if img_data['bboxes'][bbox_num]['class'] != 'bg':
                            # all GT boxes should be mapped to an anchor box, so we keep track of which anchor box was best
                            if curr_iou > best_iou_for_bbox[bbox_num]:
                                best_anchor_for_bbox[bbox_num] = [jy, ix, anchor_ratio_idx, anchor_size_idx]
                                best_iou_for_bbox[bbox_num] = curr_iou
                                best_x_for_bbox[bbox_num,:] = [x1_anc, x2_anc, y1_anc, y2_anc]
                                best_dx_for_bbox[bbox_num,:] = [tx, ty, tw, th]

                            # we set the anchor to positive if the IOU is >0.7 (it does not matter if there was another better box, it just indicates overlap)
                            if curr_iou > C.rpn_max_overlap:
                                bbox_type = 'pos'
                                num_anchors_for_bbox[bbox_num] += 1
                                # we update the regression layer target if this IOU is the best for the current (x,y) and anchor position
                                if curr_iou > best_iou_for_loc:
                                    best_iou_for_loc = curr_iou
                                    best_regr = (tx, ty, tw, th)

                            # if the IOU is >0.3 and <0.7, it is ambiguous and not included in the objective
                            if C.rpn_min_overlap < curr_iou < C.rpn_max_overlap:
                                # gray zone between neg and pos
                                if bbox_type != 'pos':
                                    bbox_type = 'neutral'

                    # turn on or off outputs depending on IOUs
                    if bbox_type == 'neg':
                        y_is_box_valid[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 1
                        y_rpn_overlap[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 0
                    elif bbox_type == 'neutral':
                        y_is_box_valid[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 0
                        y_rpn_overlap[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 0
                    elif bbox_type == 'pos':
                        y_is_box_valid[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 1
                        y_rpn_overlap[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 1
                        start = 4 * (anchor_ratio_idx + n_anchratios * anchor_size_idx)
                        y_rpn_regr[jy, ix, start:start+4] = best_regr

    # we ensure that every bbox has at least one positive RPN region
    for idx in range(num_anchors_for_bbox.shape[0]):
        if num_anchors_for_bbox[idx] == 0:
            # no box with an IOU greater than zero ...
            if best_anchor_for_bbox[idx, 0] == -1:
                continue
            y_is_box_valid[
                best_anchor_for_bbox[idx,0], best_anchor_for_bbox[idx,1],
                best_anchor_for_bbox[idx,2] + n_anchratios * best_anchor_for_bbox[idx,3]] = 1
            y_rpn_overlap[
                best_anchor_for_bbox[idx,0], best_anchor_for_bbox[idx,1],
                best_anchor_for_bbox[idx,2] + n_anchratios * best_anchor_for_bbox[idx,3]] = 1
            start = 4 * (best_anchor_for_bbox[idx,2] + n_anchratios * best_anchor_for_bbox[idx,3])
            y_rpn_regr[
                best_anchor_for_bbox[idx,0], best_anchor_for_bbox[idx,1], start:start+4] = best_dx_for_bbox[idx, :]
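To make the IoU thresholds and the regression targets (tx, ty, tw, th) concrete, here is a small worked example with made-up ground-truth and anchor boxes, using the same formulas as calc_rpn above:

    import numpy as np

    # made-up boxes in (x1, y1, x2, y2) form
    gt     = [50, 50, 150, 150]   # 100x100 ground-truth box
    anchor = [60, 60, 160, 160]   # 100x100 anchor, shifted by (10, 10)

    # IoU by the same intersection/union definitions as above:
    ix = max(gt[0], anchor[0]); iy = max(gt[1], anchor[1])
    iw = min(gt[2], anchor[2]) - ix; ih = min(gt[3], anchor[3]) - iy
    area_i = max(iw, 0) * max(ih, 0)        # 90 * 90 = 8100
    area_u = 100*100 + 100*100 - area_i     # 11900
    print(area_i / area_u)                  # ~0.68 -> 'neutral' (between 0.3 and 0.7)

    # regression targets, as computed in calc_rpn:
    cx, cy   = (gt[0] + gt[2]) / 2.0, (gt[1] + gt[3]) / 2.0                  # GT centre (100, 100)
    cxa, cya = (anchor[0] + anchor[2]) / 2.0, (anchor[1] + anchor[3]) / 2.0  # anchor centre (110, 110)
    tx = (cx - cxa) / (anchor[2] - anchor[0])               # -0.1
    ty = (cy - cya) / (anchor[3] - anchor[1])               # -0.1
    tw = np.log((gt[2] - gt[0]) / (anchor[2] - anchor[0]))  # log(1) = 0
    th = np.log((gt[3] - gt[1]) / (anchor[3] - anchor[1]))  # log(1) = 0
    print(tx, ty, tw, th)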

Reposted from: https://www.cnblogs.com/kekexuanxaun/p/9459368.html
