Background

A while ago I came across a recent paper from Baidu proposing PP-LCNet, a lightweight CPU network built around the MKLDNN acceleration strategy. It improves the performance of lightweight models across multiple tasks and also performs well on downstream computer-vision tasks such as object detection and semantic segmentation. The paper and the open-source PaddlePaddle implementation are linked below.

arXiv: https://arxiv.org/pdf/2109.15099.pdf

code: https://github.com/PaddlePaddle/PaddleClas

The paper is short and the model architecture is very simple; there is nothing particularly novel in it. It reads more like an engineering write-up that digs into and carefully organizes technical details, and it contains some extremely practical engineering tips, so it is well worth a read.

Implementing PP-LCNet in PyTorch

I also briefly skimmed the existing online write-ups of the paper.

Who could resist a lightweight network this fast? Unfortunately, the original PP-LCNet only ships with a PaddlePaddle implementation, so a PyTorch user like me cannot use it directly. The good news is that PaddlePaddle's dynamic-graph mechanism is very similar to PyTorch's, so porting it with the reference code at hand is not hard. My PyTorch implementation is below.

import os
import torch
import torch.nn as nn

NET_CONFIG = {
    # k, in_c, out_c, s, use_se
    "blocks2": [[3, 16, 32, 1, False]],
    "blocks3": [[3, 32, 64, 2, False], [3, 64, 64, 1, False]],
    "blocks4": [[3, 64, 128, 2, False], [3, 128, 128, 1, False]],
    "blocks5": [[3, 128, 256, 2, False], [5, 256, 256, 1, False],
                [5, 256, 256, 1, False], [5, 256, 256, 1, False],
                [5, 256, 256, 1, False], [5, 256, 256, 1, False]],
    "blocks6": [[5, 256, 512, 2, True], [5, 512, 512, 1, True]]
}


def autopad(k, p=None):
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]
    return p


def make_divisible(v, divisor=8, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v


class HardSwish(nn.Module):
    def __init__(self, inplace=True):
        super(HardSwish, self).__init__()
        self.relu6 = nn.ReLU6(inplace=inplace)

    def forward(self, x):
        return x * self.relu6(x + 3) / 6


class HardSigmoid(nn.Module):
    def __init__(self, inplace=True):
        super(HardSigmoid, self).__init__()
        self.relu6 = nn.ReLU6(inplace=inplace)

    def forward(self, x):
        return self.relu6(x + 3) / 6


class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            HardSigmoid())

    def forward(self, x):
        b, c, h, w = x.size()
        y = self.avgpool(x).view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        return x * y.expand_as(x)


class DepthwiseSeparable(nn.Module):
    def __init__(self, inp, oup, dw_size, stride, use_se=False):
        super(DepthwiseSeparable, self).__init__()
        self.use_se = use_se
        self.stride = stride
        self.inp = inp
        self.oup = oup
        self.dw_size = dw_size
        self.dw_sp = nn.Sequential(
            nn.Conv2d(self.inp, self.inp, kernel_size=self.dw_size, stride=self.stride,
                      padding=autopad(self.dw_size, None), groups=self.inp, bias=False),
            nn.BatchNorm2d(self.inp),
            HardSwish(),
            nn.Conv2d(self.inp, self.oup, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(self.oup),
            HardSwish(),
        )
        self.se = SELayer(self.oup)

    def forward(self, x):
        x = self.dw_sp(x)
        if self.use_se:
            x = self.se(x)
        return x


class PP_LCNet(nn.Module):
    def __init__(self, scale=1.0, class_num=10, class_expand=1280, dropout_prob=0.2):
        super(PP_LCNet, self).__init__()
        self.scale = scale
        self.conv1 = nn.Conv2d(3, out_channels=make_divisible(16 * self.scale),
                               kernel_size=3, stride=2, padding=1, bias=False)
        # k, in_c, out_c, s, use_se  ->  inp, oup, dw_size, stride, use_se
        self.blocks2 = nn.Sequential(*[
            DepthwiseSeparable(inp=make_divisible(in_c * self.scale),
                               oup=make_divisible(out_c * self.scale),
                               dw_size=k, stride=s, use_se=use_se)
            for i, (k, in_c, out_c, s, use_se) in enumerate(NET_CONFIG["blocks2"])])
        self.blocks3 = nn.Sequential(*[
            DepthwiseSeparable(inp=make_divisible(in_c * self.scale),
                               oup=make_divisible(out_c * self.scale),
                               dw_size=k, stride=s, use_se=use_se)
            for i, (k, in_c, out_c, s, use_se) in enumerate(NET_CONFIG["blocks3"])])
        self.blocks4 = nn.Sequential(*[
            DepthwiseSeparable(inp=make_divisible(in_c * self.scale),
                               oup=make_divisible(out_c * self.scale),
                               dw_size=k, stride=s, use_se=use_se)
            for i, (k, in_c, out_c, s, use_se) in enumerate(NET_CONFIG["blocks4"])])
        self.blocks5 = nn.Sequential(*[
            DepthwiseSeparable(inp=make_divisible(in_c * self.scale),
                               oup=make_divisible(out_c * self.scale),
                               dw_size=k, stride=s, use_se=use_se)
            for i, (k, in_c, out_c, s, use_se) in enumerate(NET_CONFIG["blocks5"])])
        self.blocks6 = nn.Sequential(*[
            DepthwiseSeparable(inp=make_divisible(in_c * self.scale),
                               oup=make_divisible(out_c * self.scale),
                               dw_size=k, stride=s, use_se=use_se)
            for i, (k, in_c, out_c, s, use_se) in enumerate(NET_CONFIG["blocks6"])])
        self.GAP = nn.AdaptiveAvgPool2d(1)
        self.last_conv = nn.Conv2d(in_channels=make_divisible(NET_CONFIG["blocks6"][-1][2] * scale),
                                   out_channels=class_expand,
                                   kernel_size=1, stride=1, padding=0, bias=False)
        self.hardswish = HardSwish()
        self.dropout = nn.Dropout(p=dropout_prob)
        self.fc = nn.Linear(class_expand, class_num)

    def forward(self, x):
        x = self.conv1(x)
        x = self.blocks2(x)
        x = self.blocks3(x)
        x = self.blocks4(x)
        x = self.blocks5(x)
        x = self.blocks6(x)
        x = self.GAP(x)
        x = self.last_conv(x)
        x = self.hardswish(x)
        x = self.dropout(x)
        x = torch.flatten(x, start_dim=1, end_dim=-1)
        x = self.fc(x)
        return x


def PPLCNET_x0_25(**kwargs):
    return PP_LCNet(scale=0.25, **kwargs)


def PPLCNET_x0_35(**kwargs):
    return PP_LCNet(scale=0.35, **kwargs)


def PPLCNET_x0_5(**kwargs):
    return PP_LCNet(scale=0.5, **kwargs)


def PPLCNET_x0_75(**kwargs):
    return PP_LCNet(scale=0.75, **kwargs)


def PPLCNET_x1_0(**kwargs):
    return PP_LCNet(scale=1.0, **kwargs)


def PPLCNET_x1_5(**kwargs):
    return PP_LCNet(scale=1.5, **kwargs)


def PPLCNET_x2_0(**kwargs):
    return PP_LCNet(scale=2.0, **kwargs)


def PPLCNET_x2_5(**kwargs):
    return PP_LCNet(scale=2.5, **kwargs)


if __name__ == '__main__':
    model = PPLCNET_x1_5()
    input = torch.randn(1, 3, 224, 224)
    print(input.shape)
    output = model(input)
    print(output.shape)
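
As a quick sanity check, here is a small sketch of my own (not part of the original code) that reuses the factory functions above: it builds each width variant, counts its parameters, and confirms the classifier output shape (class_num defaults to 10 in PP_LCNet):

import torch

variants = {
    0.25: PPLCNET_x0_25, 0.35: PPLCNET_x0_35, 0.5: PPLCNET_x0_5,
    0.75: PPLCNET_x0_75, 1.0: PPLCNET_x1_0, 1.5: PPLCNET_x1_5,
    2.0: PPLCNET_x2_0, 2.5: PPLCNET_x2_5,
}
x = torch.randn(1, 3, 224, 224)
for scale, builder in variants.items():
    model = builder().eval()                                   # class_num defaults to 10
    n_params = sum(p.numel() for p in model.parameters())
    with torch.no_grad():
        y = model(x)
    print(f"scale={scale}: {n_params / 1e6:.2f}M params, output shape {tuple(y.shape)}")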

PP-LCNet-YoloV5

With the PyTorch version of PP-LCNet in place, the next step is to put it to use. Since my work is mainly detection and tracking, the obvious first target is the classic object-detection model YoloV5. PP-LCNet comes in eight width variants: 0.25, 0.35, 0.5, 0.75, 1.0, 1.5, 2.0, and 2.5. Taking PPLCNET_x1_0 as an example, the following three files are modified on top of the original YoloV5.

common.py
# Add the following code
#-------------------------------------PP_LCNet------------------------------------------------------
NET_CONFIG = {
    # k, in_c, out_c, s, use_se
    "blocks2": [[3, 16, 32, 1, False]],
    "blocks3": [[3, 32, 64, 2, False], [3, 64, 64, 1, False]],
    "blocks4": [[3, 64, 128, 2, False], [3, 128, 128, 1, False]],
    "blocks5": [[3, 128, 256, 2, False], [5, 256, 256, 1, False],
                [5, 256, 256, 1, False], [5, 256, 256, 1, False],
                [5, 256, 256, 1, False], [5, 256, 256, 1, False]],
    "blocks6": [[5, 256, 512, 2, True], [5, 512, 512, 1, True]]
}

BLOCK_LIST = ["blocks2", "blocks3", "blocks4", "blocks5", "blocks6"]


def make_divisible_LC(v, divisor=8, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v


class HardSwish(nn.Module):
    def __init__(self, inplace=True):
        super(HardSwish, self).__init__()
        self.relu6 = nn.ReLU6(inplace=inplace)

    def forward(self, x):
        return x * self.relu6(x + 3) / 6


class HardSigmoid(nn.Module):
    def __init__(self, inplace=True):
        super(HardSigmoid, self).__init__()
        self.relu6 = nn.ReLU6(inplace=inplace)

    def forward(self, x):
        return self.relu6(x + 3) / 6


class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            HardSigmoid())

    def forward(self, x):
        b, c, h, w = x.size()
        y = self.avgpool(x).view(b, c)
        y = self.fc(y).view(b, c, 1, 1)
        return x * y.expand_as(x)


class DepthwiseSeparable(nn.Module):
    def __init__(self, inp, oup, dw_size, stride, use_se=False):
        super(DepthwiseSeparable, self).__init__()
        self.use_se = use_se
        self.stride = stride
        self.inp = inp
        self.oup = oup
        self.dw_size = dw_size
        self.dw_sp = nn.Sequential(
            nn.Conv2d(self.inp, self.inp, kernel_size=self.dw_size, stride=self.stride,
                      padding=autopad(self.dw_size, None), groups=self.inp, bias=False),
            nn.BatchNorm2d(self.inp),
            HardSwish(),
            nn.Conv2d(self.inp, self.oup, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(self.oup),
            HardSwish(),
        )
        self.se = SELayer(self.oup)

    def forward(self, x):
        x = self.dw_sp(x)
        if self.use_se:
            x = self.se(x)
        return x


class PPLC_Conv(nn.Module):
    def __init__(self, scale):
        super(PPLC_Conv, self).__init__()
        self.scale = scale
        self.conv = nn.Conv2d(3, out_channels=make_divisible_LC(16 * self.scale),
                              kernel_size=3, stride=2, padding=1, bias=False)

    def forward(self, x):
        return self.conv(x)


class PPLC_Block(nn.Module):
    def __init__(self, scale, block_num):
        super(PPLC_Block, self).__init__()
        self.scale = scale
        self.block_num = BLOCK_LIST[block_num]
        self.block = nn.Sequential(*[
            DepthwiseSeparable(inp=make_divisible_LC(in_c * self.scale),
                               oup=make_divisible_LC(out_c * self.scale),
                               dw_size=k, stride=s, use_se=use_se)
            for i, (k, in_c, out_c, s, use_se) in enumerate(NET_CONFIG[self.block_num])])

    def forward(self, x):
        return self.block(x)
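
Before wiring these modules into the yaml, it helps to know what each one produces. Here is a minimal sketch of my own (not part of the patch) that assumes it runs with the definitions above plus YOLOv5's existing autopad in scope; it chains the stem and the five blocks at scale 1.0 on a 640x640 input and prints the feature-map shapes:

import torch

scale = 1.0
stem = PPLC_Conv(scale)                             # 3 -> 16 channels, stride 2
blocks = [PPLC_Block(scale, i) for i in range(5)]   # blocks2 ... blocks6

x = torch.randn(1, 3, 640, 640)
x = stem(x)
print('stem:', tuple(x.shape))                      # (1, 16, 320, 320)
for i, blk in enumerate(blocks):
    x = blk(x)
    print(f'blocks{i + 2}:', tuple(x.shape))
# At scale=1.0 the blocks output 32/64/128/256/512 channels at strides 2/4/8/16/32,
# so blocks4, blocks5 and blocks6 provide the P3, P4, P5 features a YOLOv5 head expects.
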
yolo.py
# Modify the parse_model function
def parse_model(d, ch):  # model_dict, input_channels(3)
    LOGGER.info('\n%3s%18s%3s%10s  %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
    anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
    na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors
    no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)

    layers, save, c2 = [], [], ch[-1]  # layers, savelist, ch out
    for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args
        m = eval(m) if isinstance(m, str) else m  # eval strings
        for j, a in enumerate(args):
            try:
                args[j] = eval(a) if isinstance(a, str) else a  # eval strings
            except:
                pass

        n = n_ = max(round(n * gd), 1) if n > 1 else n  # depth gain
        if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                 BottleneckCSP, C3, C3TR, C3SPP, C3Ghost]:
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)

            args = [c1, c2, *args[1:]]
            if m in [BottleneckCSP, C3, C3TR, C3Ghost]:
                args.insert(2, n)  # number of repeats
                n = 1
        elif m is nn.BatchNorm2d:
            args = [ch[f]]
        elif m is Concat:
            c2 = sum([ch[x] for x in f])
        elif m is Detect:
            args.append([ch[x] for x in f])
            if isinstance(args[1], int):  # number of anchors
                args[1] = [list(range(args[1] * 2))] * len(f)
        elif m is Contract:
            c2 = ch[f] * args[0] ** 2
        elif m is Expand:
            c2 = ch[f] // args[0] ** 2
        # Add this section ---------------------------------------------
        elif m is PPLC_Conv:
            c2 = args[0]
            args = args[1:]
        elif m is PPLC_Block:
            c2 = args[0]
            args = args[1:]
        # ---------------------------------------------------------------
        else:
            c2 = ch[f]

        m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args)  # module
        t = str(m)[8:-2].replace('__main__.', '')  # module type
        np = sum([x.numel() for x in m_.parameters()])  # number params
        m_.i, m_.f, m_.type, m_.np = i, f, t, np  # attach index, 'from' index, type, number params
        LOGGER.info('%3s%18s%3s%10.0f  %-40s%-30s' % (i, f, n_, np, t, args))  # print
        save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
        layers.append(m_)
        if i == 0:
            ch = []
        ch.append(c2)
    return nn.Sequential(*layers), sorted(save)
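
The two new branches read the layer's output channel count from the first yaml argument (so that downstream Concat and Detect layers see the correct value in ch) and strip it off before the module is built, because PPLC_Conv and PPLC_Block only accept (scale) and (scale, block_num). The row below is a hypothetical example of my own, not necessarily the exact entry used in yolov5_LCNet.yaml, showing how such an argument list flows through the branch:

# Hypothetical yaml backbone row: [from, number, module, args]
#   [-1, 1, PPLC_Block, [128, 1.0, 2]]
m, args = PPLC_Block, [128, 1.0, 2]
c2 = args[0]       # 128: recorded as this layer's output channels in ch[]
args = args[1:]    # [1.0, 2]: what actually reaches the constructor
layer = m(*args)   # PPLC_Block(scale=1.0, block_num=2) -> NET_CONFIG["blocks4"], 128 channels out
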
yolov5_LCNet.yaml
# YOLOv5
