Writing this part of the code yourself is actually quite easy. If the framework does not already ship with the backbone structure you need, you can find a corresponding PyTorch implementation elsewhere, or write your own.

The configuration lives under the configs/_base_/models directory, and the concrete implementations live under the mmdet/models/backbones directory.

1. configs

Let's look at the backbone of YOLOX: https://github.com/open-mmlab/mmdetection/blob/master/configs/yolox/yolox_s_8x8_300e_coco.py

    backbone=dict(type='CSPDarknet', deepen_factor=0.33, widen_factor=0.5),
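The only backbone-specific keys here are deepen_factor and widen_factor. They scale the default P5 architecture defined in CSPDarknet.arch_settings (shown in the next section): channel widths are multiplied by widen_factor and block counts by deepen_factor. A minimal sketch in plain Python of that arithmetic, reproducing the YOLOX-s numbers (the p5 list below is copied from arch_settings, keeping only the channel and block-count columns):

    # Sketch: how deepen_factor / widen_factor shrink the default P5 stages.
    # Rows are (in_channels, out_channels, num_blocks), copied from
    # CSPDarknet.arch_settings['P5'] with the boolean columns dropped.
    p5 = [[64, 128, 3], [128, 256, 9], [256, 512, 9], [512, 1024, 3]]
    deepen_factor, widen_factor = 0.33, 0.5

    for in_ch, out_ch, num_blocks in p5:
        print(int(in_ch * widen_factor),                  # channels halved
              int(out_ch * widen_factor),
              max(round(num_blocks * deepen_factor), 1))  # at least one block
    # prints: 32 64 1 / 64 128 3 / 128 256 3 / 256 512 1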

2. Implementation

For the implementation, look up the CSPDarknet class:

https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/backbones/csp_darknet.py#L124

    @BACKBONES.register_module()
    class CSPDarknet(BaseModule):
        """CSP-Darknet backbone used in YOLOv5 and YOLOX.

        Args:
            arch (str): Architecture of CSP-Darknet, from {P5, P6}.
                Default: P5.
            deepen_factor (float): Depth multiplier, multiply number of
                blocks in CSP layer by this amount. Default: 1.0.
            widen_factor (float): Width multiplier, multiply number of
                channels in each layer by this amount. Default: 1.0.
            out_indices (Sequence[int]): Output from which stages.
                Default: (2, 3, 4).
            frozen_stages (int): Stages to be frozen (stop grad and set eval
                mode). -1 means not freezing any parameters. Default: -1.
            use_depthwise (bool): Whether to use depthwise separable convolution.
                Default: False.
            arch_ovewrite(list): Overwrite default arch settings. Default: None.
            spp_kernal_sizes: (tuple[int]): Sequential of kernel sizes of SPP
                layers. Default: (5, 9, 13).
            conv_cfg (dict): Config dict for convolution layer. Default: None.
            norm_cfg (dict): Dictionary to construct and config norm layer.
                Default: dict(type='BN', requires_grad=True).
            act_cfg (dict): Config dict for activation layer.
                Default: dict(type='LeakyReLU', negative_slope=0.1).
            norm_eval (bool): Whether to set norm layers to eval mode, namely,
                freeze running stats (mean and var). Note: Effect on Batch Norm
                and its variants only.
            init_cfg (dict or list[dict], optional): Initialization config dict.
                Default: None.

        Example:
            >>> from mmdet.models import CSPDarknet
            >>> import torch
            >>> self = CSPDarknet(depth=53)
            >>> self.eval()
            >>> inputs = torch.rand(1, 3, 416, 416)
            >>> level_outputs = self.forward(inputs)
            >>> for level_out in level_outputs:
            ...     print(tuple(level_out.shape))
            ...
            (1, 256, 52, 52)
            (1, 512, 26, 26)
            (1, 1024, 13, 13)
        """

        # From left to right:
        # in_channels, out_channels, num_blocks, add_identity, use_spp
        arch_settings = {
            'P5': [[64, 128, 3, True, False], [128, 256, 9, True, False],
                   [256, 512, 9, True, False], [512, 1024, 3, False, True]],
            'P6': [[64, 128, 3, True, False], [128, 256, 9, True, False],
                   [256, 512, 9, True, False], [512, 768, 3, True, False],
                   [768, 1024, 3, False, True]]
        }

        def __init__(self,
                     arch='P5',
                     deepen_factor=1.0,
                     widen_factor=1.0,
                     out_indices=(2, 3, 4),
                     frozen_stages=-1,
                     use_depthwise=False,
                     arch_ovewrite=None,
                     spp_kernal_sizes=(5, 9, 13),
                     conv_cfg=None,
                     norm_cfg=dict(type='BN', momentum=0.03, eps=0.001),
                     act_cfg=dict(type='Swish'),
                     norm_eval=False,
                     init_cfg=dict(
                         type='Kaiming',
                         layer='Conv2d',
                         a=math.sqrt(5),
                         distribution='uniform',
                         mode='fan_in',
                         nonlinearity='leaky_relu')):
            super().__init__(init_cfg)
            arch_setting = self.arch_settings[arch]
            if arch_ovewrite:
                arch_setting = arch_ovewrite
            assert set(out_indices).issubset(
                i for i in range(len(arch_setting) + 1))
            if frozen_stages not in range(-1, len(arch_setting) + 1):
                raise ValueError('frozen_stages must be in range(-1, '
                                 'len(arch_setting) + 1). But received '
                                 f'{frozen_stages}')

            self.out_indices = out_indices
            self.frozen_stages = frozen_stages
            self.use_depthwise = use_depthwise
            self.norm_eval = norm_eval
            conv = DepthwiseSeparableConvModule if use_depthwise else ConvModule

            self.stem = Focus(
                3,
                int(arch_setting[0][0] * widen_factor),
                kernel_size=3,
                conv_cfg=conv_cfg,
                norm_cfg=norm_cfg,
                act_cfg=act_cfg)
            self.layers = ['stem']

            for i, (in_channels, out_channels, num_blocks, add_identity,
                    use_spp) in enumerate(arch_setting):
                in_channels = int(in_channels * widen_factor)
                out_channels = int(out_channels * widen_factor)
                num_blocks = max(round(num_blocks * deepen_factor), 1)
                stage = []
                conv_layer = conv(
                    in_channels,
                    out_channels,
                    3,
                    stride=2,
                    padding=1,
                    conv_cfg=conv_cfg,
                    norm_cfg=norm_cfg,
                    act_cfg=act_cfg)
                stage.append(conv_layer)
                if use_spp:
                    spp = SPPBottleneck(
                        out_channels,
                        out_channels,
                        kernel_sizes=spp_kernal_sizes,
                        conv_cfg=conv_cfg,
                        norm_cfg=norm_cfg,
                        act_cfg=act_cfg)
                    stage.append(spp)
                csp_layer = CSPLayer(
                    out_channels,
                    out_channels,
                    num_blocks=num_blocks,
                    add_identity=add_identity,
                    use_depthwise=use_depthwise,
                    conv_cfg=conv_cfg,
                    norm_cfg=norm_cfg,
                    act_cfg=act_cfg)
                stage.append(csp_layer)
                self.add_module(f'stage{i + 1}', nn.Sequential(*stage))
                self.layers.append(f'stage{i + 1}')

        def _freeze_stages(self):
            if self.frozen_stages >= 0:
                for i in range(self.frozen_stages + 1):
                    m = getattr(self, self.layers[i])
                    m.eval()
                    for param in m.parameters():
                        param.requires_grad = False

        def train(self, mode=True):
            super(CSPDarknet, self).train(mode)
            self._freeze_stages()
            if mode and self.norm_eval:
                for m in self.modules():
                    if isinstance(m, _BatchNorm):
                        m.eval()

        def forward(self, x):
            outs = []
            for i, layer_name in enumerate(self.layers):
                layer = getattr(self, layer_name)
                x = layer(x)
                if i in self.out_indices:
                    outs.append(x)
            return tuple(outs)
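As an aside, the docstring's own example passes depth=53, which is not a parameter of this class (it looks copied from the plain Darknet backbone). A minimal usage sketch that sticks to the actual signature above, assuming an MMDetection 2.x install:

    # Minimal standalone usage sketch (MMDetection 2.x assumed).
    import torch
    from mmdet.models import CSPDarknet

    model = CSPDarknet(deepen_factor=0.33, widen_factor=0.5)  # YOLOX-s scale
    model.eval()
    with torch.no_grad():
        outs = model(torch.rand(1, 3, 640, 640))
    for out in outs:
        print(tuple(out.shape))
    # out_indices=(2, 3, 4) selects stage2/3/4, so this should print
    # (1, 128, 80, 80), (1, 256, 40, 40), (1, 512, 20, 20)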

3. How is it called?

3.1 Registration

Registration builds a dictionary keyed by class name.

    @BACKBONES.register_module()
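The decorator leaves the class untouched; it only records the class in a name-to-class dictionary owned by the BACKBONES registry, so a config string like 'CSPDarknet' can later be resolved to the class. A simplified sketch of the idea (this is not mmcv's actual Registry implementation, which additionally handles scopes, name overrides, and custom build functions; ToyBackbone is a hypothetical stand-in):

    # Simplified sketch of what register_module does; mmcv's real Registry
    # also supports scopes, name overrides and custom build functions.
    class SimpleRegistry:
        def __init__(self):
            self._module_dict = {}   # maps class name -> class object

        def register_module(self):
            def _register(cls):
                self._module_dict[cls.__name__] = cls
                return cls           # the class itself is returned unchanged
            return _register

        def get(self, name):
            return self._module_dict[name]

    BACKBONES = SimpleRegistry()

    @BACKBONES.register_module()     # records 'ToyBackbone' -> ToyBackbone
    class ToyBackbone:               # hypothetical class, for illustration
        pass

    assert BACKBONES.get('ToyBackbone') is ToyBackbone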

3.2 Invocation

That is, the class is instantiated by looking up its name in the registry dictionary.
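In MMDetection 2.x this lookup-and-instantiate step is build_backbone(cfg), which forwards to BACKBONES.build(cfg) and ultimately to mmcv's build_from_cfg: the 'type' key selects the registered class and the remaining keys become constructor arguments. A sketch, assuming the package is installed:

    # Sketch: the 'type' key picks the class, the rest become kwargs.
    from mmdet.models import build_backbone

    backbone = build_backbone(
        dict(type='CSPDarknet', deepen_factor=0.33, widen_factor=0.5))
    print(type(backbone).__name__)   # 'CSPDarknet'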

Since the backbone is built inside the detector model, let's look at the YOLOX class:

https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/yolox.py#L15

    @DETECTORS.register_module()
    class YOLOX(SingleStageDetector):
        r"""Implementation of `YOLOX: Exceeding YOLO Series in 2021
        <https://arxiv.org/abs/2107.08430>`_

        Note: Considering the trade-off between training speed and accuracy,
        multi-scale training is temporarily kept. More elegant implementation
        will be adopted in the future.

        Args:
            backbone (nn.Module): The backbone module.
            neck (nn.Module): The neck module.
            bbox_head (nn.Module): The bbox head module.
            train_cfg (obj:`ConfigDict`, optional): The training config
                of YOLOX. Default: None.
            test_cfg (obj:`ConfigDict`, optional): The testing config
                of YOLOX. Default: None.
            pretrained (str, optional): model pretrained path.
                Default: None.
            input_size (tuple): The model default input image size. The shape
                order should be (height, width). Default: (640, 640).
            size_multiplier (int): Image size multiplication factor.
                Default: 32.
            random_size_range (tuple): The multi-scale random range during
                multi-scale training. The real training image size will
                be multiplied by size_multiplier. Default: (15, 25).
            random_size_interval (int): The iter interval of change
                image size. Default: 10.
            init_cfg (dict, optional): Initialization config dict.
                Default: None.
        """

        def __init__(self,
                     backbone,
                     neck,
                     bbox_head,
                     train_cfg=None,
                     test_cfg=None,
                     pretrained=None,
                     input_size=(640, 640),
                     size_multiplier=32,
                     random_size_range=(15, 25),
                     random_size_interval=10,
                     init_cfg=None):
            super(YOLOX, self).__init__(backbone, neck, bbox_head, train_cfg,
                                        test_cfg, pretrained, init_cfg)
            log_img_scale(input_size, skip_square=True)
            self.rank, self.world_size = get_dist_info()
            self._default_input_size = input_size
            self._input_size = input_size
            self._random_size_range = random_size_range
            self._random_size_interval = random_size_interval
            self._size_multiplier = size_multiplier
            self._progress_in_iter = 0

        def forward_train(self,
                          img,
                          img_metas,
                          gt_bboxes,
                          gt_labels,
                          gt_bboxes_ignore=None):
            """
            Args:
                img (Tensor): Input images of shape (N, C, H, W).
                    Typically these should be mean centered and std scaled.
                img_metas (list[dict]): A List of image info dict where each dict
                    has: 'img_shape', 'scale_factor', 'flip', and may also contain
                    'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
                    For details on the values of these keys see
                    :class:`mmdet.datasets.pipelines.Collect`.
                gt_bboxes (list[Tensor]): Each item are the truth boxes for each
                    image in [tl_x, tl_y, br_x, br_y] format.
                gt_labels (list[Tensor]): Class indices corresponding to each box
                gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
                    boxes can be ignored when computing the loss.
            Returns:
                dict[str, Tensor]: A dictionary of loss components.
            """
            # Multi-scale training
            img, gt_bboxes = self._preprocess(img, gt_bboxes)

            losses = super(YOLOX, self).forward_train(img, img_metas, gt_bboxes,
                                                      gt_labels, gt_bboxes_ignore)

            # random resizing
            if (self._progress_in_iter + 1) % self._random_size_interval == 0:
                self._input_size = self._random_resize(device=img.device)
            self._progress_in_iter += 1

            return losses

        def _preprocess(self, img, gt_bboxes):
            scale_y = self._input_size[0] / self._default_input_size[0]
            scale_x = self._input_size[1] / self._default_input_size[1]
            if scale_x != 1 or scale_y != 1:
                img = F.interpolate(
                    img,
                    size=self._input_size,
                    mode='bilinear',
                    align_corners=False)
                for gt_bbox in gt_bboxes:
                    gt_bbox[..., 0::2] = gt_bbox[..., 0::2] * scale_x
                    gt_bbox[..., 1::2] = gt_bbox[..., 1::2] * scale_y
            return img, gt_bboxes

        def _random_resize(self, device):
            tensor = torch.LongTensor(2).to(device)

            if self.rank == 0:
                size = random.randint(*self._random_size_range)
                aspect_ratio = float(
                    self._default_input_size[1]) / self._default_input_size[0]
                size = (self._size_multiplier * size,
                        self._size_multiplier * int(aspect_ratio * size))
                tensor[0] = size[0]
                tensor[1] = size[1]

            if self.world_size > 1:
                dist.barrier()
                dist.broadcast(tensor, 0)

            input_size = (tensor[0].item(), tensor[1].item())
            return input_size
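The only subtle part above is _random_resize: rank 0 draws an integer step from random_size_range, scales it by size_multiplier (so the size stays a multiple of 32), and broadcasts the result to the other ranks. A sketch of the same arithmetic outside any distributed setup:

    # Sketch of the size arithmetic in _random_resize (single process).
    import random

    size_multiplier = 32
    random_size_range = (15, 25)
    default_input_size = (640, 640)        # (height, width)

    step = random.randint(*random_size_range)
    aspect_ratio = default_input_size[1] / default_input_size[0]
    input_size = (size_multiplier * step,
                  size_multiplier * int(aspect_ratio * step))
    print(input_size)                      # e.g. (512, 512) for step == 16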

The backbone part is not easy to spot here, so we go to its base class SingleStageDetector, where this line fetches CSPDarknet from the registry dictionary:

self.backbone = build_backbone(backbone)

https://github.com/open-mmlab/mmdetection/blob/31c84958f54287a8be2b99cbf87a6dcf12e57753/mmdet/models/detectors/single_stage.py#L12

    class SingleStageDetector(BaseDetector):
        """Base class for single-stage detectors.

        Single-stage detectors directly and densely predict bounding boxes on the
        output features of the backbone+neck.
        """

        def __init__(self,
                     backbone,
                     neck=None,
                     bbox_head=None,
                     train_cfg=None,
                     test_cfg=None,
                     pretrained=None,
                     init_cfg=None):
            super(SingleStageDetector, self).__init__(init_cfg)
            if pretrained:
                warnings.warn('DeprecationWarning: pretrained is deprecated, '
                              'please use "init_cfg" instead')
                backbone.pretrained = pretrained
            self.backbone = build_backbone(backbone)
            if neck is not None:
                self.neck = build_neck(neck)
            bbox_head.update(train_cfg=train_cfg)
            bbox_head.update(test_cfg=test_cfg)
            self.bbox_head = build_head(bbox_head)
            self.train_cfg = train_cfg
            self.test_cfg = test_cfg

        def extract_feat(self, img):
            """Directly extract features from the backbone+neck."""
            x = self.backbone(img)
            if self.with_neck:
                x = self.neck(x)
            return x

        def forward_dummy(self, img):
            """Used for computing network flops.

            See `mmdetection/tools/analysis_tools/get_flops.py`
            """
            x = self.extract_feat(img)
            outs = self.bbox_head(x)
            return outs

        def forward_train(self,
                          img,
                          img_metas,
                          gt_bboxes,
                          gt_labels,
                          gt_bboxes_ignore=None):
            """
            Args:
                img (Tensor): Input images of shape (N, C, H, W).
                    Typically these should be mean centered and std scaled.
                img_metas (list[dict]): A List of image info dict where each dict
                    has: 'img_shape', 'scale_factor', 'flip', and may also contain
                    'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
                    For details on the values of these keys see
                    :class:`mmdet.datasets.pipelines.Collect`.
                gt_bboxes (list[Tensor]): Each item are the truth boxes for each
                    image in [tl_x, tl_y, br_x, br_y] format.
                gt_labels (list[Tensor]): Class indices corresponding to each box
                gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
                    boxes can be ignored when computing the loss.

            Returns:
                dict[str, Tensor]: A dictionary of loss components.
            """
            super(SingleStageDetector, self).forward_train(img, img_metas)
            x = self.extract_feat(img)
            losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes,
                                                  gt_labels, gt_bboxes_ignore)
            return losses

        def simple_test(self, img, img_metas, rescale=False):
            """Test function without test-time augmentation.

            Args:
                img (torch.Tensor): Images with shape (N, C, H, W).
                img_metas (list[dict]): List of image information.
                rescale (bool, optional): Whether to rescale the results.
                    Defaults to False.

            Returns:
                list[list[np.ndarray]]: BBox results of each image and classes.
                    The outer list corresponds to each image. The inner list
                    corresponds to each class.
            """
            feat = self.extract_feat(img)
            results_list = self.bbox_head.simple_test(
                feat, img_metas, rescale=rescale)
            bbox_results = [
                bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes)
                for det_bboxes, det_labels in results_list
            ]
            return bbox_results

        def aug_test(self, imgs, img_metas, rescale=False):
            """Test function with test time augmentation.

            Args:
                imgs (list[Tensor]): the outer list indicates test-time
                    augmentations and inner Tensor should have a shape NxCxHxW,
                    which contains all images in the batch.
                img_metas (list[list[dict]]): the outer list indicates test-time
                    augs (multiscale, flip, etc.) and the inner list indicates
                    images in a batch. each dict has image information.
                rescale (bool, optional): Whether to rescale the results.
                    Defaults to False.

            Returns:
                list[list[np.ndarray]]: BBox results of each image and classes.
                    The outer list corresponds to each image. The inner list
                    corresponds to each class.
            """
            assert hasattr(self.bbox_head, 'aug_test'), \
                f'{self.bbox_head.__class__.__name__}' \
                ' does not support test-time augmentation'

            feats = self.extract_feats(imgs)
            results_list = self.bbox_head.aug_test(
                feats, img_metas, rescale=rescale)
            bbox_results = [
                bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes)
                for det_bboxes, det_labels in results_list
            ]
            return bbox_results

        def onnx_export(self, img, img_metas, with_nms=True):
            """Test function without test time augmentation.

            Args:
                img (torch.Tensor): input images.
                img_metas (list[dict]): List of image information.

            Returns:
                tuple[Tensor, Tensor]: dets of shape [N, num_det, 5]
                    and class labels of shape [N, num_det].
            """
            x = self.extract_feat(img)
            outs = self.bbox_head(x)
            # get origin input shape to support onnx dynamic shape

            # get shape as tensor
            img_shape = torch._shape_as_tensor(img)[2:]
            img_metas[0]['img_shape_for_onnx'] = img_shape
            # get pad input shape to support onnx dynamic shape for exporting
            # `CornerNet` and `CentripetalNet`, which 'pad_shape' is used
            # for inference
            img_metas[0]['pad_shape_for_onnx'] = img_shape

            if len(outs) == 2:
                # add dummy score_factor
                outs = (*outs, None)
            # TODO Can we change to `get_bboxes` when `onnx_export` fail
            det_bboxes, det_labels = self.bbox_head.onnx_export(
                *outs, img_metas, with_nms=with_nms)

            return det_bboxes, det_labels
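So the whole chain is: the config's model dict goes to build_detector, SingleStageDetector.__init__ calls build_backbone (plus build_neck and build_head), and extract_feat wires image → backbone → neck at run time. A sketch that builds the full YOLOX-s detector from its config, assuming it is run from the root of an MMDetection 2.x checkout:

    # Sketch: build the whole detector from the config file (mmdet 2.x,
    # run from the repository root so the relative path resolves).
    from mmcv import Config
    from mmdet.models import build_detector

    cfg = Config.fromfile('configs/yolox/yolox_s_8x8_300e_coco.py')
    model = build_detector(cfg.model)      # runs SingleStageDetector.__init__
    print(type(model.backbone).__name__)   # 'CSPDarknet'
    print(type(model.neck).__name__)       # 'YOLOXPAFPN'
    print(type(model.bbox_head).__name__)  # 'YOLOXHead'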
