Preface:

I've been rather busy (read: lazy) lately — this struggling student is writing a graduation thesis (painful) — so the blog hasn't been updated very efficiently, haha.
The goal of this post is to walk you, hands-on, step by step through deploying a darknet model: converting a yolov3/yolov3-tiny model from .weights to .onnx, and then from .onnx to .trt. (Training darknet models is covered in my earlier posts; chain them together and you have the full pipeline!)
This post is also a record and small summary of my own learning; if anything is lacking, please point it out — feedback is welcome!
Along the way I referenced blogs and community resources from various experts.
Note: this post is light on theory. If you want a deeper theoretical understanding, read the official documentation.
All code involved in this post will be uploaded to my GitHub once the post is complete…

This post consists of two parts:

Part One: .weights -> .onnx

1. After converting the model to .onnx, run inference with onnxruntime;

  • The test code includes a demo for image detection and a demo for video-stream inference;

Part Two: .weights -> .onnx -> .trt

2. Convert the model from .weights to .onnx and then to .trt, then run inference;

  • This part provides an image-test demo; for video streams, you can adapt the onnxruntime video-inference code from Part One;

Part One

The code in this part mostly comes from the TensorRT samples.

As usual, here is the rough plan for the project:

  • First, train a darknet model: yolov3.weights or yolov3-tiny.weights;
  • Then convert the trained model, together with its matching network cfg file, to an ONNX model;
  • Then run inference with onnxruntime (this post uses the CPU build of onnxruntime; you can switch to onnxruntime-gpu yourself).

The darknet model tested in this post is the official yolov3.weights, which detects 80 classes (the COCO dataset).
Next, let's follow this plan and implement it step by step.

一. Project environment setup

  • Create a new env for the experiment (I use conda and recommend it, though the environment setup does not strictly depend on Anaconda…)
conda create -n yolo_inference python=3.5
  • Activate the env, then install the required packages
source activate yolo_inference   # or: conda activate yolo_inference
pip install pillow
pip install opencv-python
pip install onnx==1.4.1
pip install onnxruntime==1.1.0

二. Converting the .weights model to an ONNX file

Note: the code in this section must be run under Python 2.
Straight to the code (taken from the official samples):
yolov3_to_onnx.py

#!/usr/bin/env python2
#
# Copyright 1993-2018 NVIDIA Corporation.  All rights reserved.
#
# NOTICE TO LICENSEE:
#
# This source code and/or documentation ("Licensed Deliverables") are
# subject to NVIDIA intellectual property rights under U.S. and
# international Copyright laws.
#
# These Licensed Deliverables contained herein is PROPRIETARY and
# CONFIDENTIAL to NVIDIA and is being provided under the terms and
# conditions of a form of NVIDIA software license agreement by and
# between NVIDIA and Licensee ("License Agreement") or electronically
# accepted by Licensee.  Notwithstanding any terms or conditions to
# the contrary in the License Agreement, reproduction or disclosure
# of the Licensed Deliverables to any third party without the express
# written consent of NVIDIA is prohibited.
#
# NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
# LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
# SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE.  IT IS
# PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
# NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
# DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
# NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
# NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
# LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
# SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THESE LICENSED DELIVERABLES.
#
# U.S. Government End Users.  These Licensed Deliverables are a
# "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
# 1995), consisting of "commercial computer software" and "commercial
# computer software documentation" as such terms are used in 48
# C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
# only as a commercial end item.  Consistent with 48 C.F.R.12.212 and
# 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
# U.S. Government End Users acquire the Licensed Deliverables with
# only those rights set forth herein.
#
# Any use of the Licensed Deliverables in individual and commercial
# software must include, in the user documentation and internal
# comments to the code, the above Disclaimer and U.S. Government End
# Users Notice.
from __future__ import print_function
from collections import OrderedDict
import hashlib
import os.path
import sys

import onnx
from onnx import helper
from onnx import TensorProto
import numpy as np


class DarkNetParser(object):
    """Definition of a parser for DarkNet-based YOLOv3-608 (only tested for this topology)."""

    def __init__(self, supported_layers):
        """Initializes a DarkNetParser object.

        Keyword argument:
        supported_layers -- a string list of supported layers in DarkNet naming convention,
        parameters are only added to the class dictionary if a parsed layer is included.
        """
        # A list of YOLOv3 layers containing dictionaries with all layer
        # parameters:
        self.layer_configs = OrderedDict()
        self.supported_layers = supported_layers
        self.layer_counter = 0

    def parse_cfg_file(self, cfg_file_path):
        """Takes the yolov3.cfg file and parses it layer by layer,
        appending each layer's parameters as a dictionary to layer_configs.

        Keyword argument:
        cfg_file_path -- path to the yolov3.cfg file as string
        """
        with open(cfg_file_path, 'rb') as cfg_file:
            remainder = cfg_file.read()
            while remainder is not None:
                layer_dict, layer_name, remainder = self._next_layer(remainder)
                if layer_dict is not None:
                    self.layer_configs[layer_name] = layer_dict
        return self.layer_configs

    def _next_layer(self, remainder):
        """Takes in a string and segments it by looking for DarkNet delimiters.
        Returns the layer parameters and the remaining string after the last delimiter.

        Example for the first Conv layer in yolo.cfg ...

        [convolutional]
        batch_normalize=1
        filters=32
        size=3
        stride=1
        pad=1
        activation=leaky

        ... becomes the following layer_dict return value:
        {'activation': 'leaky', 'stride': 1, 'pad': 1, 'filters': 32,
        'batch_normalize': 1, 'type': 'convolutional', 'size': 3}.

        '001_convolutional' is returned as layer_name, and all lines that follow in yolo.cfg
        are returned as the next remainder.

        Keyword argument:
        remainder -- a string with all raw text after the previously parsed layer
        """
        remainder = remainder.split('[', 1)
        if len(remainder) == 2:
            remainder = remainder[1]
        else:
            return None, None, None
        remainder = remainder.split(']', 1)
        if len(remainder) == 2:
            layer_type, remainder = remainder
        else:
            return None, None, None
        if remainder.replace(' ', '')[0] == '#':
            remainder = remainder.split('\n', 1)[1]

        layer_param_block, remainder = remainder.split('\n\n', 1)
        layer_param_lines = layer_param_block.split('\n')[1:]
        layer_name = str(self.layer_counter).zfill(3) + '_' + layer_type
        layer_dict = dict(type=layer_type)
        if layer_type in self.supported_layers:
            for param_line in layer_param_lines:
                if param_line[0] == '#':
                    continue
                param_type, param_value = self._parse_params(param_line)
                layer_dict[param_type] = param_value
        self.layer_counter += 1
        return layer_dict, layer_name, remainder

    def _parse_params(self, param_line):
        """Identifies the parameters contained in one line of the cfg file and returns
        them in the required format for each parameter type, e.g. as a list, an int or a float.

        Keyword argument:
        param_line -- one parsed line within a layer block
        """
        param_line = param_line.replace(' ', '')
        param_type, param_value_raw = param_line.split('=')
        param_value = None
        if param_type == 'layers':
            layer_indexes = list()
            for index in param_value_raw.split(','):
                layer_indexes.append(int(index))
            param_value = layer_indexes
        elif isinstance(param_value_raw, str) and not param_value_raw.isalpha():
            condition_param_value_positive = param_value_raw.isdigit()
            condition_param_value_negative = param_value_raw[0] == '-' and \
                param_value_raw[1:].isdigit()
            if condition_param_value_positive or condition_param_value_negative:
                param_value = int(param_value_raw)
            else:
                param_value = float(param_value_raw)
        else:
            param_value = str(param_value_raw)
        return param_type, param_value


class MajorNodeSpecs(object):
    """Helper class used to store the names of ONNX output names,
    corresponding to the output of a DarkNet layer and its output channels.
    Some DarkNet layers are not created and there is no corresponding ONNX node,
    but we still need to track them in order to set up skip connections.
    """

    def __init__(self, name, channels):
        """Initialize a MajorNodeSpecs object.

        Keyword arguments:
        name -- name of the ONNX node
        channels -- number of output channels of this node
        """
        self.name = name
        self.channels = channels
        self.created_onnx_node = False
        if name is not None and isinstance(channels, int) and channels > 0:
            self.created_onnx_node = True


class ConvParams(object):
    """Helper class to store the hyper parameters of a Conv layer,
    including its prefix name in the ONNX graph and the expected dimensions
    of weights for convolution, bias, and batch normalization.

    Additionally acts as a wrapper for generating safe names for all
    weights, checking on feasible combinations.
    """

    def __init__(self, node_name, batch_normalize, conv_weight_dims):
        """Constructor based on the base node name (e.g. 101_convolutional), the batch
        normalization setting, and the convolutional weights shape.

        Keyword arguments:
        node_name -- base name of this YOLO convolutional layer
        batch_normalize -- bool value if batch normalization is used
        conv_weight_dims -- the dimensions of this layer's convolutional weights
        """
        self.node_name = node_name
        self.batch_normalize = batch_normalize
        assert len(conv_weight_dims) == 4
        self.conv_weight_dims = conv_weight_dims

    def generate_param_name(self, param_category, suffix):
        """Generates a name based on two string inputs,
        and checks if the combination is valid."""
        assert suffix
        assert param_category in ['bn', 'conv']
        assert (suffix in ['scale', 'mean', 'var', 'weights', 'bias'])
        if param_category == 'bn':
            assert self.batch_normalize
            assert suffix in ['scale', 'bias', 'mean', 'var']
        elif param_category == 'conv':
            assert suffix in ['weights', 'bias']
            if suffix == 'bias':
                assert not self.batch_normalize
        param_name = self.node_name + '_' + param_category + '_' + suffix
        return param_name


class WeightLoader(object):
    """Helper class used for loading the serialized weights of a binary file stream
    and returning the initializers and the input tensors required for populating
    the ONNX graph with weights.
    """

    def __init__(self, weights_file_path):
        """Initialized with a path to the YOLOv3 .weights file.

        Keyword argument:
        weights_file_path -- path to the weights file.
        """
        self.weights_file = self._open_weights_file(weights_file_path)

    def load_conv_weights(self, conv_params):
        """Returns the initializers with weights from the weights file and
        the input tensors of a convolutional layer for all corresponding ONNX nodes.

        Keyword argument:
        conv_params -- a ConvParams object
        """
        initializer = list()
        inputs = list()
        if conv_params.batch_normalize:
            bias_init, bias_input = self._create_param_tensors(
                conv_params, 'bn', 'bias')
            bn_scale_init, bn_scale_input = self._create_param_tensors(
                conv_params, 'bn', 'scale')
            bn_mean_init, bn_mean_input = self._create_param_tensors(
                conv_params, 'bn', 'mean')
            bn_var_init, bn_var_input = self._create_param_tensors(
                conv_params, 'bn', 'var')
            initializer.extend(
                [bn_scale_init, bias_init, bn_mean_init, bn_var_init])
            inputs.extend([bn_scale_input, bias_input,
                           bn_mean_input, bn_var_input])
        else:
            bias_init, bias_input = self._create_param_tensors(
                conv_params, 'conv', 'bias')
            initializer.append(bias_init)
            inputs.append(bias_input)
        conv_init, conv_input = self._create_param_tensors(
            conv_params, 'conv', 'weights')
        initializer.append(conv_init)
        inputs.append(conv_input)
        return initializer, inputs

    def _open_weights_file(self, weights_file_path):
        """Opens a YOLOv3 DarkNet file stream and skips the header.

        Keyword argument:
        weights_file_path -- path to the weights file.
        """
        weights_file = open(weights_file_path, 'rb')
        length_header = 5
        np.ndarray(shape=(length_header,), dtype='int32',
                   buffer=weights_file.read(length_header * 4))
        return weights_file

    def _create_param_tensors(self, conv_params, param_category, suffix):
        """Creates the initializers with weights from the weights file together with
        the input tensors.

        Keyword arguments:
        conv_params -- a ConvParams object
        param_category -- the category of parameters to be created ('bn' or 'conv')
        suffix -- a string determining the sub-type of above param_category (e.g.,
        'weights' or 'bias')
        """
        param_name, param_data, param_data_shape = self._load_one_param_type(
            conv_params, param_category, suffix)
        initializer_tensor = helper.make_tensor(
            param_name, TensorProto.FLOAT, param_data_shape, param_data)
        input_tensor = helper.make_tensor_value_info(
            param_name, TensorProto.FLOAT, param_data_shape)
        return initializer_tensor, input_tensor

    def _load_one_param_type(self, conv_params, param_category, suffix):
        """Deserializes the weights from a file stream in the DarkNet order.

        Keyword arguments:
        conv_params -- a ConvParams object
        param_category -- the category of parameters to be created ('bn' or 'conv')
        suffix -- a string determining the sub-type of above param_category (e.g.,
        'weights' or 'bias')
        """
        param_name = conv_params.generate_param_name(param_category, suffix)
        channels_out, channels_in, filter_h, filter_w = conv_params.conv_weight_dims
        if param_category == 'bn':
            param_shape = [channels_out]
        elif param_category == 'conv':
            if suffix == 'weights':
                param_shape = [channels_out, channels_in, filter_h, filter_w]
            elif suffix == 'bias':
                param_shape = [channels_out]
        param_size = np.product(np.array(param_shape))
        param_data = np.ndarray(
            shape=param_shape,
            dtype='float32',
            buffer=self.weights_file.read(param_size * 4))
        param_data = param_data.flatten().astype(float)
        return param_name, param_data, param_shape


class GraphBuilderONNX(object):
    """Class for creating an ONNX graph from a previously generated list of layer dictionaries."""

    def __init__(self, output_tensors):
        """Initialize with all DarkNet default parameters used creating YOLOv3,
        and specify the output tensors as an OrderedDict for their output dimensions
        with their names as keys.

        Keyword argument:
        output_tensors -- the output tensors as an OrderedDict containing the keys'
        output dimensions
        """
        self.output_tensors = output_tensors
        self._nodes = list()
        self.graph_def = None
        self.input_tensor = None
        self.epsilon_bn = 1e-5
        self.momentum_bn = 0.99
        self.alpha_lrelu = 0.1
        self.param_dict = OrderedDict()
        self.major_node_specs = list()
        self.batch_size = 1

    def build_onnx_graph(self, layer_configs, weights_file_path, verbose=True):
        """Iterate over all layer configs (parsed from the DarkNet representation
        of YOLOv3-608), create an ONNX graph, populate it with weights from the weights
        file and return the graph definition.

        Keyword arguments:
        layer_configs -- an OrderedDict object with all parsed layers' configurations
        weights_file_path -- location of the weights file
        verbose -- toggles if the graph is printed after creation (default: True)
        """
        for layer_name in layer_configs.keys():
            layer_dict = layer_configs[layer_name]
            major_node_specs = self._make_onnx_node(layer_name, layer_dict)
            if major_node_specs.name is not None:
                self.major_node_specs.append(major_node_specs)
        outputs = list()
        for tensor_name in self.output_tensors.keys():
            output_dims = [self.batch_size, ] + \
                self.output_tensors[tensor_name]
            output_tensor = helper.make_tensor_value_info(
                tensor_name, TensorProto.FLOAT, output_dims)
            outputs.append(output_tensor)
        inputs = [self.input_tensor]
        weight_loader = WeightLoader(weights_file_path)
        initializer = list()
        for layer_name in self.param_dict.keys():
            _, layer_type = layer_name.split('_', 1)
            conv_params = self.param_dict[layer_name]
            assert layer_type == 'convolutional'
            initializer_layer, inputs_layer = weight_loader.load_conv_weights(
                conv_params)
            initializer.extend(initializer_layer)
            inputs.extend(inputs_layer)
        del weight_loader
        self.graph_def = helper.make_graph(
            nodes=self._nodes,
            name='YOLOv3-608',
            inputs=inputs,
            outputs=outputs,
            initializer=initializer)
        if verbose:
            print(helper.printable_graph(self.graph_def))
        model_def = helper.make_model(self.graph_def,
                                      producer_name='NVIDIA TensorRT sample')
        return model_def

    def _make_onnx_node(self, layer_name, layer_dict):
        """Take in a layer parameter dictionary, choose the correct function for
        creating an ONNX node and store the information important to graph creation
        as a MajorNodeSpec object.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        layer_type = layer_dict['type']
        if self.input_tensor is None:
            if layer_type == 'net':
                major_node_output_name, major_node_output_channels = self._make_input_tensor(
                    layer_name, layer_dict)
                major_node_specs = MajorNodeSpecs(major_node_output_name,
                                                  major_node_output_channels)
            else:
                raise ValueError('The first node has to be of type "net".')
        else:
            node_creators = dict()
            node_creators['convolutional'] = self._make_conv_node
            node_creators['shortcut'] = self._make_shortcut_node
            node_creators['route'] = self._make_route_node
            node_creators['upsample'] = self._make_upsample_node

            if layer_type in node_creators.keys():
                major_node_output_name, major_node_output_channels = \
                    node_creators[layer_type](layer_name, layer_dict)
                major_node_specs = MajorNodeSpecs(major_node_output_name,
                                                  major_node_output_channels)
            else:
                print('Layer of type %s not supported, skipping ONNX node generation.' %
                      layer_type)
                major_node_specs = MajorNodeSpecs(layer_name, None)
        return major_node_specs

    def _make_input_tensor(self, layer_name, layer_dict):
        """Create an ONNX input tensor from a 'net' layer and store the batch size.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        batch_size = layer_dict['batch']
        channels = layer_dict['channels']
        height = layer_dict['height']
        width = layer_dict['width']
        self.batch_size = batch_size
        input_tensor = helper.make_tensor_value_info(
            str(layer_name), TensorProto.FLOAT,
            [batch_size, channels, height, width])
        self.input_tensor = input_tensor
        return layer_name, channels

    def _get_previous_node_specs(self, target_index=-1):
        """Get a previously generated ONNX node (skip those that were not generated).
        Target index can be passed for jumping to a specific index.

        Keyword arguments:
        target_index -- optional for jumping to a specific index (default: -1 for jumping
        to previous element)
        """
        previous_node = None
        for node in self.major_node_specs[target_index::-1]:
            if node.created_onnx_node:
                previous_node = node
                break
        assert previous_node is not None
        return previous_node

    def _make_conv_node(self, layer_name, layer_dict):
        """Create an ONNX Conv node with optional batch normalization and
        activation nodes.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        previous_node_specs = self._get_previous_node_specs()
        inputs = [previous_node_specs.name]
        previous_channels = previous_node_specs.channels
        kernel_size = layer_dict['size']
        stride = layer_dict['stride']
        filters = layer_dict['filters']
        batch_normalize = False
        if 'batch_normalize' in layer_dict.keys() and layer_dict['batch_normalize'] == 1:
            batch_normalize = True

        kernel_shape = [kernel_size, kernel_size]
        weights_shape = [filters, previous_channels] + kernel_shape
        conv_params = ConvParams(layer_name, batch_normalize, weights_shape)

        strides = [stride, stride]
        dilations = [1, 1]
        weights_name = conv_params.generate_param_name('conv', 'weights')
        inputs.append(weights_name)
        if not batch_normalize:
            bias_name = conv_params.generate_param_name('conv', 'bias')
            inputs.append(bias_name)

        conv_node = helper.make_node(
            'Conv',
            inputs=inputs,
            outputs=[layer_name],
            kernel_shape=kernel_shape,
            strides=strides,
            auto_pad='SAME_LOWER',
            dilations=dilations,
            name=layer_name)
        self._nodes.append(conv_node)
        inputs = [layer_name]
        layer_name_output = layer_name
        if batch_normalize:
            layer_name_bn = layer_name + '_bn'
            bn_param_suffixes = ['scale', 'bias', 'mean', 'var']
            for suffix in bn_param_suffixes:
                bn_param_name = conv_params.generate_param_name('bn', suffix)
                inputs.append(bn_param_name)
            batchnorm_node = helper.make_node(
                'BatchNormalization',
                inputs=inputs,
                outputs=[layer_name_bn],
                epsilon=self.epsilon_bn,
                momentum=self.momentum_bn,
                name=layer_name_bn)
            self._nodes.append(batchnorm_node)
            inputs = [layer_name_bn]
            layer_name_output = layer_name_bn
        if layer_dict['activation'] == 'leaky':
            layer_name_lrelu = layer_name + '_lrelu'
            lrelu_node = helper.make_node(
                'LeakyRelu',
                inputs=inputs,
                outputs=[layer_name_lrelu],
                name=layer_name_lrelu,
                alpha=self.alpha_lrelu)
            self._nodes.append(lrelu_node)
            inputs = [layer_name_lrelu]
            layer_name_output = layer_name_lrelu
        elif layer_dict['activation'] == 'linear':
            pass
        else:
            print('Activation not supported.')

        self.param_dict[layer_name] = conv_params
        return layer_name_output, filters

    def _make_shortcut_node(self, layer_name, layer_dict):
        """Create an ONNX Add node with the shortcut properties from
        the DarkNet-based graph.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        shortcut_index = layer_dict['from']
        activation = layer_dict['activation']
        assert activation == 'linear'

        first_node_specs = self._get_previous_node_specs()
        second_node_specs = self._get_previous_node_specs(
            target_index=shortcut_index)
        assert first_node_specs.channels == second_node_specs.channels
        channels = first_node_specs.channels
        inputs = [first_node_specs.name, second_node_specs.name]
        shortcut_node = helper.make_node(
            'Add',
            inputs=inputs,
            outputs=[layer_name],
            name=layer_name,
        )
        self._nodes.append(shortcut_node)
        return layer_name, channels

    def _make_route_node(self, layer_name, layer_dict):
        """If the 'layers' parameter from the DarkNet configuration is only one index, continue
        node creation at the indicated (negative) index. Otherwise, create an ONNX Concat node
        with the route properties from the DarkNet-based graph.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        route_node_indexes = layer_dict['layers']
        if len(route_node_indexes) == 1:
            split_index = route_node_indexes[0]
            assert split_index < 0
            # Increment by one because we skipped the YOLO layer:
            split_index += 1
            self.major_node_specs = self.major_node_specs[:split_index]
            layer_name = None
            channels = None
        else:
            inputs = list()
            channels = 0
            for index in route_node_indexes:
                if index > 0:
                    # Increment by one because we count the input as a node (DarkNet
                    # does not)
                    index += 1
                route_node_specs = self._get_previous_node_specs(
                    target_index=index)
                inputs.append(route_node_specs.name)
                channels += route_node_specs.channels
            assert inputs
            assert channels > 0

            route_node = helper.make_node(
                'Concat',
                axis=1,
                inputs=inputs,
                outputs=[layer_name],
                name=layer_name,
            )
            self._nodes.append(route_node)
        return layer_name, channels

    def _make_upsample_node(self, layer_name, layer_dict):
        """Create an ONNX Upsample node with the properties from
        the DarkNet-based graph.

        Keyword arguments:
        layer_name -- the layer's name (also the corresponding key in layer_configs)
        layer_dict -- a layer parameter dictionary (one element of layer_configs)
        """
        upsample_factor = float(layer_dict['stride'])
        previous_node_specs = self._get_previous_node_specs()
        inputs = [previous_node_specs.name]
        channels = previous_node_specs.channels
        assert channels > 0
        upsample_node = helper.make_node(
            'Upsample',
            mode='nearest',
            # For ONNX versions <0.7.0, Upsample nodes accept different parameters than 'scales':
            scales=[1.0, 1.0, upsample_factor, upsample_factor],
            inputs=inputs,
            outputs=[layer_name],
            name=layer_name,
        )
        self._nodes.append(upsample_node)
        return layer_name, channels


def generate_md5_checksum(local_path):
    """Returns the MD5 checksum of a local file.

    Keyword argument:
    local_path -- path of the file whose checksum shall be generated
    """
    with open(local_path) as local_file:
        data = local_file.read()
        return hashlib.md5(data).hexdigest()


def download_file(local_path, link, checksum_reference=None):
    """Checks if a local file is present and downloads it from the specified path otherwise.
    If checksum_reference is specified, the file's md5 checksum is compared against the
    expected value.

    Keyword arguments:
    local_path -- path of the file whose checksum shall be generated
    link -- link where the file shall be downloaded from if it is not found locally
    checksum_reference -- expected MD5 checksum of the file
    """
    if not os.path.exists(local_path):
        # Requires the `wget` package (pip install wget); the original sample
        # imports it at module level.
        import wget
        print('Downloading from %s, this may take a while...' % link)
        wget.download(link, local_path)
        print()
    if checksum_reference is not None:
        checksum = generate_md5_checksum(local_path)
        if checksum != checksum_reference:
            raise ValueError(
                'The MD5 checksum of local file %s differs from %s, please manually remove \
the file and try again.' % (local_path, checksum_reference))
    return local_path


def main():
    """Run the DarkNet-to-ONNX conversion for YOLOv3-608."""
    # Have to use python 2 due to hashlib compatibility
    if sys.version_info[0] > 2:
        raise Exception(
            "This script is only compatible with python2, please re-run this script "
            "with python2. The rest of this sample can be run with either version of python")

    # Path to the DarkNet config for YOLOv3 (adjust for your own model):
    cfg_file_path = 'config/yolov3.cfg'

    # These are the only layers DarkNetParser will extract parameters from. The three layers of
    # type 'yolo' are not parsed in detail because they are included in the post-processing later:
    supported_layers = ['net', 'convolutional', 'shortcut',
                        'route', 'upsample']

    # Create a DarkNetParser object, and then use it to generate an OrderedDict with all
    # layers' configs from the cfg file:
    parser = DarkNetParser(supported_layers)
    layer_configs = parser.parse_cfg_file(cfg_file_path)
    # We do not need the parser anymore after we got layer_configs:
    del parser

    # In the above layer_configs, there are three outputs whose output shapes
    # we need to know (in CHW format):
    output_tensor_dims = OrderedDict()
    output_tensor_dims['082_convolutional'] = [255, 19, 19]
    output_tensor_dims['094_convolutional'] = [255, 38, 38]
    output_tensor_dims['106_convolutional'] = [255, 76, 76]

    # Create a GraphBuilderONNX object with the known output tensor dimensions:
    builder = GraphBuilderONNX(output_tensor_dims)

    # Path to the pre-trained weights file:
    weights_file_path = 'yolov3.weights'

    # Now generate an ONNX graph with weights from the previously parsed layer configurations
    # and the weights file:
    yolov3_model_def = builder.build_onnx_graph(
        layer_configs=layer_configs,
        weights_file_path=weights_file_path,
        verbose=True)
    # Once we have the model definition, we do not need the builder anymore:
    del builder

    # Perform a sanity check on the ONNX model definition:
    onnx.checker.check_model(yolov3_model_def)

    # Serialize the generated ONNX graph to this file:
    output_file_path = 'yolov3_608.onnx'
    onnx.save(yolov3_model_def, output_file_path)

if __name__ == '__main__':
    main()

A few parameter-related details in the code deserve a quick explanation, to keep you out of common pitfalls:

  • The .cfg configuration file
    First, set the batch and subdivisions parameters in the cfg file to 1;
    Second, append an empty line at the end of the cfg file (the parser splits layer blocks on blank lines, so the last block needs one too);
  • Pay attention to the following lines: if you later convert a model you trained yourself, change 255 to match your own model (and note that 19/38/76 correspond to a 608x608 input, i.e. 608 divided by the strides 32/16/8):
output_tensor_dims['082_convolutional'] = [255, 19, 19] # 255 = 3 * (classes + 4 + 1)
output_tensor_dims['094_convolutional'] = [255, 38, 38] # 255 = 3 * (classes + 4 + 1)
output_tensor_dims['106_convolutional'] = [255, 76, 76] # 255 = 3 * (classes + 4 + 1)
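The 255 is not arbitrary; it follows directly from the YOLO head layout: each of the 3 anchors per scale predicts 4 box coordinates, 1 objectness score, and one confidence per class. A minimal sketch of the arithmetic (the helper name is mine, not from the sample):

```python
# Channels per YOLO output map: num_anchors * (num_classes + 4 box coords + 1 objectness)
def yolo_output_channels(num_classes, num_anchors=3):
    return num_anchors * (num_classes + 4 + 1)

print(yolo_output_channels(80))  # COCO, 80 classes -> 255
print(yolo_output_channels(2))   # e.g. a custom 2-class model -> 21
```

The same value must also appear as `filters=` in the convolutional layer right before each [yolo] block of your cfg file.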

After a successful run you will find a yolov3_608.onnx model in the directory where you executed the script. Next, let's run inference on it with onnxruntime.

三. Running inference with onnxruntime

This section consists of three code files:

  • darknet_api.py
  • onnx_img_inference.py
  • onnx_video_inference.py

1. First, a brief introduction to darknet_api.py: it handles image/video preprocessing and bbox filtering.
darknet_api.py

# coding: utf-8
# 2019-12-10
"""
YOLO pre/post-processing helpers.
"""
import cv2
import numpy as np


# Load the class label names
def get_labels(names_file):
    names = list()
    with open(names_file, 'r') as f:
        lines = f.read()
        for name in lines.splitlines():
            names.append(name)
    return names


# Image preprocessing
def process_img(img_path, input_shape):
    ori_img = cv2.imread(img_path)
    img = cv2.resize(ori_img, input_shape)
    image = img[:, :, ::-1].transpose((2, 0, 1))  # BGR->RGB, HWC->CHW
    image = image[np.newaxis, :, :, :] / 255
    image = np.array(image, dtype=np.float32)
    return ori_img, ori_img.shape, image


# Video-frame preprocessing
def frame_process(frame, input_shape):
    # NOTE: this normalization (mean 127, scale 1/128) differs from the /255
    # scaling in process_img above; make sure it matches your model's training.
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, input_shape)
    image_mean = np.array([127, 127, 127])
    image = (image - image_mean) / 128
    image = np.transpose(image, [2, 0, 1])
    image = np.expand_dims(image, axis=0)
    image = image.astype(np.float32)
    return image


# Sigmoid function
def sigmoid(x):
    s = 1 / (1 + np.exp(-1 * x))
    return s


# Return the best class score and its (0-based) class index
def get_result(class_scores):
    class_score = 0
    class_index = 0
    for i in range(len(class_scores)):
        if class_scores[i] > class_score:
            class_index = i  # fixed: record the argmax index, not a running counter
            class_score = class_scores[i]
    return class_score, class_index


# Filter candidate bboxes by confidence threshold
def get_bbox(feat, anchors, image_shape, confidence_threshold=0.25):
    box = list()
    for i in range(len(anchors)):
        for cx in range(feat.shape[0]):
            for cy in range(feat.shape[1]):
                tx = feat[cx][cy][0 + 85 * i]
                ty = feat[cx][cy][1 + 85 * i]
                tw = feat[cx][cy][2 + 85 * i]
                th = feat[cx][cy][3 + 85 * i]
                cf = feat[cx][cy][4 + 85 * i]
                cp = feat[cx][cy][5 + 85 * i:85 + 85 * i]

                bx = (sigmoid(tx) + cx) / feat.shape[0]
                by = (sigmoid(ty) + cy) / feat.shape[1]
                bw = anchors[i][0] * np.exp(tw) / image_shape[0]
                bh = anchors[i][1] * np.exp(th) / image_shape[1]

                b_confidence = sigmoid(cf)
                b_class_prob = sigmoid(cp)
                b_scores = b_confidence * b_class_prob
                b_class_score, b_class_index = get_result(b_scores)

                if b_class_score >= confidence_threshold:
                    box.append([bx, by, bw, bh, b_class_score, b_class_index])
    return box


# Filter the collected bboxes with NMS
def nms(boxes, nms_threshold=0.6):
    l = len(boxes)
    if l == 0:
        return []
    else:
        b_x = boxes[:, 0]
        b_y = boxes[:, 1]
        b_w = boxes[:, 2]
        b_h = boxes[:, 3]
        scores = boxes[:, 4]
        areas = (b_w + 1) * (b_h + 1)
        order = scores.argsort()[::-1]
        keep = list()
        while order.size > 0:
            i = order[0]
            keep.append(i)
            xx1 = np.maximum(b_x[i], b_x[order[1:]])
            yy1 = np.maximum(b_y[i], b_y[order[1:]])
            xx2 = np.minimum(b_x[i] + b_w[i], b_x[order[1:]] + b_w[order[1:]])
            yy2 = np.minimum(b_y[i] + b_h[i], b_y[order[1:]] + b_h[order[1:]])
            # Intersection area (zero when boxes do not overlap)
            w = np.maximum(0.0, xx2 - xx1 + 1)
            h = np.maximum(0.0, yy2 - yy1 + 1)
            inter = w * h
            # Union area: area1 + area2 - intersection
            union = areas[i] + areas[order[1:]] - inter
            # IoU = intersection / union
            IoU = inter / union
            # Keep boxes whose IoU is below the threshold
            inds = np.where(IoU <= nms_threshold)[0]
            # Shift indices by one because the IoU array is one shorter than order
            order = order[inds + 1]
        final_boxes = [boxes[i] for i in keep]
        return final_boxes


# Draw the predicted boxes
def draw_box(boxes, img, img_shape):
    label = ["background", "person",
             "bicycle", "car", "motorbike", "aeroplane",
             "bus", "train", "truck", "boat", "traffic light",
             "fire hydrant", "stop sign", "parking meter", "bench",
             "bird", "cat", "dog", "horse", "sheep", "cow", "elephant",
             "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag",
             "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball",
             "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
             "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon",
             "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog",
             "pizza", "donut", "cake", "chair", "sofa", "potted plant", "bed", "dining table",
             "toilet", "TV monitor", "laptop", "mouse", "remote", "keyboard", "cell phone",
             "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
             "scissors", "teddy bear", "hair drier", "toothbrush"]
    for box in boxes:
        x1 = int((box[0] - box[2] / 2) * img_shape[1])
        y1 = int((box[1] - box[3] / 2) * img_shape[0])
        x2 = int((box[0] + box[2] / 2) * img_shape[1])
        y2 = int((box[1] + box[3] / 2) * img_shape[0])
        # label[0] is "background", so shift the 0-based class index by one
        name = label[int(box[5]) + 1]
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(img, name + ":" + str(round(box[4], 3)),
                    (x1 + 5, y1 + 10), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (0, 0, 255), 1)
        print(name + ": score %.3f" % box[4])
    cv2.imshow('image', img)
    cv2.waitKey(10)
    cv2.destroyAllWindows()


# Collect boxes from all output scales
def get_boxes(prediction, anchors, img_shape, confidence_threshold=0.25, nms_threshold=0.6):
    boxes = []
    for i in range(len(prediction)):
        feature_map = prediction[i][0].transpose((2, 1, 0))
        box = get_bbox(feature_map, anchors[i], img_shape, confidence_threshold)
        boxes.extend(box)
    Boxes = nms(np.array(boxes), nms_threshold)
    return Boxes
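As a quick sanity check of the IoU computation at the heart of nms(), here is a standalone toy example (simplified without the +1 pixel convention used above; all numbers are made up):

```python
def iou(a, b):
    # a, b: [x, y, w, h] boxes with (x, y) the top-left corner.
    xx1, yy1 = max(a[0], b[0]), max(a[1], b[1])
    xx2 = min(a[0] + a[2], b[0] + b[2])
    yy2 = min(a[1] + a[3], b[1] + b[3])
    w = max(0.0, xx2 - xx1)       # intersection width (0 if disjoint)
    h = max(0.0, yy2 - yy1)       # intersection height (0 if disjoint)
    inter = w * h
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union

print(iou([10, 10, 20, 20], [12, 12, 20, 20]))  # heavy overlap, ~0.68
print(iou([10, 10, 20, 20], [60, 60, 15, 15]))  # disjoint boxes -> 0.0
```

With the default nms_threshold=0.6, the first pair would be merged (the lower-scoring box suppressed) while disjoint boxes always survive.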

此部分代码参考的博客地址为:https://blog.csdn.net/u013597931/article/details/89412272
此部分代码参数中 85 的修改需要注意,便于后期根据自己模型而做调整:

tx = feat[cx][cy][0 + 85 * i]              # 85 = classes + 4 + 1 (80 classes + 4 box coords + 1 objectness)
ty = feat[cx][cy][1 + 85 * i]
tw = feat[cx][cy][2 + 85 * i]
th = feat[cx][cy][3 + 85 * i]
cf = feat[cx][cy][4 + 85 * i]              # objectness confidence
cp = feat[cx][cy][5 + 85 * i:85 + 85 * i]  # class probabilities
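In other words, each of the three anchors at a grid cell owns a contiguous slice of `num_classes + 5` channels. A small helper makes the layout explicit for any class count (a sketch; `anchor_offsets` is my own name, not part of the original code):

```python
def anchor_offsets(num_classes, anchor_idx):
    """Channel offsets for one anchor's slice in a YOLO feature vector.

    The per-cell vector is laid out per anchor as
    [tx, ty, tw, th, objectness, class_0 ... class_{C-1}],
    so the stride per anchor is num_classes + 5 (85 for the 80 COCO classes).
    """
    stride = num_classes + 5
    base = stride * anchor_idx
    return {
        'tx': base + 0,
        'ty': base + 1,
        'tw': base + 2,
        'th': base + 3,
        'conf': base + 4,
        'class_probs': (base + 5, base + stride),  # slice bounds [start, stop)
    }
```

For a model with 80 classes and anchor index `i = 1`, this reproduces the `85 * i` offsets above.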

2. With everything in place, let's run inference on images

  • onnx_img_inference.py
# coding: utf-8
# author: hxy
# 2019-12-10
"""
Image inference;
runs on the CPU by default.
"""
import os
import time
import logging
import onnxruntime
from lib.darknet_api import process_img, get_boxes, draw_box


# set up the logging format
def log_set():
    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s - %(levelname)s - %(message)s')


# load the onnx model
def load_model(onnx_model):
    sess = onnxruntime.InferenceSession(onnx_model)
    in_name = [input.name for input in sess.get_inputs()][0]
    out_name = [output.name for output in sess.get_outputs()]
    logging.info("input name: {}, output names: {}".format(in_name, out_name))
    return sess, in_name, out_name


if __name__ == '__main__':
    log_set()
    input_shape = (608, 608)
    # anchors
    anchors_yolo = [[(116, 90), (156, 198), (373, 326)],
                    [(30, 61), (62, 45), (59, 119)],
                    [(10, 13), (16, 30), (33, 23)]]
    anchors_yolo_tiny = [[(81, 82), (135, 169), (344, 319)],
                         [(10, 14), (23, 27), (37, 58)]]
    session, inname, outname = load_model(onnx_model='yolov3_608.onnx')
    logging.info("Starting inference....")

    # batch inference over all images in a folder
    img_files_path = 'test_pic'
    imgs = os.listdir(img_files_path)
    logging.debug(imgs)
    for img_name in imgs:
        img_full_path = os.path.join(img_files_path, img_name)
        logging.debug(img_full_path)
        img, img_shape, testdata = process_img(img_path=img_full_path,
                                               input_shape=input_shape)
        s = time.time()
        prediction = session.run(outname, {inname: testdata})
        boxes = get_boxes(prediction=prediction,
                          anchors=anchors_yolo,
                          img_shape=input_shape)
        draw_box(boxes=boxes,
                 img=img,
                 img_shape=img_shape)
        logging.info("Inference on %s took: %.2fms" % (img_name, (time.time() - s) * 1000))

In my tests on the modest hardware of my MacBook Air, CPU inference takes a good few tens of milliseconds per image;

3. Video-stream inference

  • Somewhat awkwardly, the speed is marginal; real-time processing of a high-definition RTSP stream still requires further code tuning and optimization;
  • The code given here is single-process;
  • In my tests, running the video-stream inference with multiple processes noticeably improves throughput;
    onnx_video_inference.py
# coding: utf-8
# author: hxy
# 2019-12-10
"""
Video-stream inference;
runs on the CPU by default.
"""
import cv2
import time
import logging
import numpy as np
import onnxruntime
from lib.darknet_api import get_boxes


# set up the logging format
def log_set():
    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s -%(filename)s:%(lineno)d - %(levelname)s - %(message)s')


# load the onnx model
def load_model(onnx_model):
    sess = onnxruntime.InferenceSession(onnx_model)
    in_name = [input.name for input in sess.get_inputs()][0]
    out_name = [output.name for output in sess.get_outputs()]
    logging.info("input name: {}, output names: {}".format(in_name, out_name))
    return sess, in_name, out_name


# frame preprocessing
def frame_process(frame, input_shape=(608, 608)):
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, input_shape)
    image_mean = np.array([127, 127, 127])
    image = (image - image_mean) / 128
    image = np.transpose(image, [2, 0, 1])
    image = np.expand_dims(image, axis=0)
    image = image.astype(np.float32)
    return image


def stream_inference():
    # basic settings
    label = ["background", "person", "bicycle", "car", "motorbike", "aeroplane",
             "bus", "train", "truck", "boat", "traffic light",
             "fire hydrant", "stop sign", "parking meter", "bench",
             "bird", "cat", "dog", "horse", "sheep", "cow", "elephant",
             "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag",
             "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball",
             "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
             "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon",
             "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog",
             "pizza", "donut", "cake", "chair", "sofa", "potted plant", "bed", "dining table",
             "toilet", "TV monitor", "laptop", "mouse", "remote", "keyboard", "cell phone",
             "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
             "scissors", "teddy bear", "hair drier", "toothbrush"]
    anchors_yolo_tiny = [[(81, 82), (135, 169), (344, 319)],
                         [(10, 14), (23, 27), (37, 58)]]
    anchors_yolo = [[(116, 90), (156, 198), (373, 326)],
                    [(30, 61), (62, 45), (59, 119)],
                    [(10, 13), (16, 30), (33, 23)]]
    session, in_name, out_name = load_model(onnx_model='yolov3_608.onnx')
    # rtsp = ''
    cap = cv2.VideoCapture(0)
    while True:
        _, frame = cap.read()
        input_shape = frame.shape
        s = time.time()
        test_data = frame_process(frame, input_shape=(608, 608))
        logging.info("preprocessing per frame took: {}ms".format((time.time() - s) * 1000))
        s1 = time.time()
        prediction = session.run(out_name, {in_name: test_data})
        s2 = time.time()
        print("prediction cost time: %.3fms" % ((s2 - s1) * 1000))  # s2 - s1 is in seconds
        boxes = get_boxes(prediction=prediction,
                          anchors=anchors_yolo,
                          img_shape=(608, 608))
        print("get box cost time: {}ms".format((time.time() - s2) * 1000))
        for box in boxes:
            x1 = int((box[0] - box[2] / 2) * input_shape[1])
            y1 = int((box[1] - box[3] / 2) * input_shape[0])
            x2 = int((box[0] + box[2] / 2) * input_shape[1])
            y2 = int((box[1] + box[3] / 2) * input_shape[0])
            logging.info(label[int(box[5])] + ":" + str(round(box[4], 3)))
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 1)
            cv2.putText(frame, label[int(box[5])] + ":" + str(round(box[4], 3)),
                        (x1 + 5, y1 + 10),
                        cv2.FONT_HERSHEY_SIMPLEX,
                        0.5, (0, 0, 255), 1)
        frame = cv2.resize(frame, (0, 0), fx=0.7, fy=0.7)
        cv2.imshow("Results", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    log_set()
    stream_inference()
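The multi-process speed-up mentioned above boils down to a producer/consumer split: one process keeps draining the capture so the inference process always works on a fresh frame. A minimal, generic sketch (the `reader`/`worker` names and the frame-dropping policy are my own; in the real script `source` would wrap `cv2.VideoCapture` and `infer` would do `frame_process` + `session.run` + `get_boxes`):

```python
import multiprocessing as mp

def reader(queue, source):
    """Producer: push frames from `source` (any iterable) into the queue."""
    for frame in source:
        if queue.full():            # drop the stale frame instead of lagging behind
            try:
                queue.get_nowait()
            except Exception:
                pass
        queue.put(frame)
    queue.put(None)                 # sentinel: stream finished

def worker(queue, infer):
    """Consumer: run `infer` on each received frame; return all results."""
    results = []
    while True:
        frame = queue.get()
        if frame is None:
            break
        results.append(infer(frame))
    return results
```

A bounded queue plus the drop-stale policy keeps latency low: when inference is the bottleneck, old frames are discarded rather than queued up.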

That completes the whole process of running YOLOv3 inference with onnxruntime~

A few closing notes:

  1. I actually did the ONNX conversion and inference for both yolov3 and yolov3-tiny, but this post only covers yolov3; the tiny version is left for you to explore. The test code works for both models, though: just change the anchors and the input size;
  2. The yolov3-tiny part will go up on my GitHub once the whole series is written, for everyone to learn from;
  3. yolov3-tiny inference is fast enough to process a video stream in real time;
  4. I recommend trying this code in both CPU and GPU environments;
  5. Most of the code was written on a MacBook; if you copy it over to Ubuntu and hit formatting errors, check for whitespace and indentation issues;
  6. My own knowledge is limited; corrections and suggestions are very welcome!
  7. This post also drew on other resources; thanks to everyone who shares. Let's learn and improve together!!

Thanks for reading! This content is meant for learning and reference; if you found it helpful, a like is appreciated!

It has to be said that, by comparison, inference with TensorRT is faster and more stable; I'll publish the second part as soon as possible!!

PS: the yolov3-tiny model-conversion posts are now written, linked here:

  • Converting the original yolov3-tiny weights model to onnx and running inference
  • Converting yolov3_tiny.onnx to trt and accelerating model inference with TensorRT
