Converting a RetinaFace Face Detection Model for Rockchip (RKNN)

  • I. Running Docker
  • II. Conversion steps
    • 1. Inspect the model's inputs and outputs with https://netron.app/
    • 2. Set the model conversion parameters
    • 3. Run the script to generate the RKNN model file
    • 4. Run inference with the model
    • 5. Run the test
  • Summary

I. Running Docker

docker run -t -i --privileged -v /dev/bus/usb:/dev/bus/usb rknn-toolkit:1.7.1 /bin/bash

II. Conversion steps

1. Inspect the model's inputs and outputs with https://netron.app/

2. Set the model conversion parameters

The test.py code is as follows:

import time

from rknn.api import RKNN

ONNX_MODEL = 'det_10g.onnx'
RKNN_MODEL = 'det_10g_500.rknn'

if __name__ == '__main__':
    # Create RKNN object
    # rknn = RKNN(verbose=True)
    rknn = RKNN()

    # Load the ONNX model; the output names come from inspecting the graph in netron
    print('--> Loading model')
    ret = rknn.load_onnx(model=ONNX_MODEL,
                         # inputs=['input.1'],
                         # input_size_list=[[3, 608, 608]],
                         # outputs=['277', '279'],  # debug: inspect intermediate outputs
                         outputs=['448', '471', '494', '451', '474', '497',
                                  '454', '477', '500'])
    if ret != 0:
        print('Load failed!')
        exit(ret)
    print('done')

    # Pre-process config
    print('--> Config model')
    # rknn.config(batch_size=10, target_platform=['rv1126'],
    #             mean_values=[[127.5, 127.5, 127.5]], std_values=[[128.0, 128.0, 128.0]],
    #             quantized_dtype='dynamic_fixed_point-i8', quantized_algorithm='mmse')
    rknn.config(batch_size=10, target_platform=['rv1126'],
                mean_values=[[127.5, 127.5, 127.5]], std_values=[[128.0, 128.0, 128.0]])
    print('done')

    # Build model
    print('--> Building model')
    t1 = time.time()
    # ret = rknn.build(do_quantization=False, pre_compile=False)  # test without quantization
    ret = rknn.build(do_quantization=True, dataset='./data.txt', pre_compile=True)
    print('cost time =', time.time() - t1)
    if ret != 0:
        print('Build failed!')
        exit(ret)
    print('done')

    # Export RKNN model
    print('--> Export RKNN model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export failed!')
        exit(ret)
    print('done')

    # Accuracy analysis (optional, can be removed)
    print('--> Accuracy analysis')
    rknn.accuracy_analysis(inputs='./test.txt', target='rv1126')
    print('done')

    rknn.release()

3. Run the script to generate the RKNN model file

python test.py

4. Run inference with the model

The run.py code is as follows:

import time

import numpy as np
import cv2
from rknn.api import RKNN

# Leftover YOLO constants from the original template (unused here)
GRID0 = 19
GRID1 = 38
GRID2 = 76
LISTSIZE = 85
SPAN = 3
NUM_CLS = 80

MAX_BOXES = 500
OBJ_THRESH = 0.5
NMS_THRESH = 0.5

# RetinaFace/SCRFD head layout: 3 strides, each with scores/bboxes/keypoints
fmc = 3
_feat_stride_fpn = [8, 16, 32]
_num_anchors = 2
use_kps = True
center_cache = {}


def distance2bbox(points, distance, max_shape=None):
    """Decode (left, top, right, bottom) distances from anchor centers into boxes."""
    x1 = points[:, 0] - distance[:, 0]
    y1 = points[:, 1] - distance[:, 1]
    x2 = points[:, 0] + distance[:, 2]
    y2 = points[:, 1] + distance[:, 3]
    if max_shape is not None:
        x1 = np.clip(x1, 0, max_shape[1])
        y1 = np.clip(y1, 0, max_shape[0])
        x2 = np.clip(x2, 0, max_shape[1])
        y2 = np.clip(y2, 0, max_shape[0])
    return np.stack([x1, y1, x2, y2], axis=-1)


def distance2kps(points, distance, max_shape=None):
    """Decode per-point offsets from anchor centers into keypoint coordinates."""
    preds = []
    for i in range(0, distance.shape[1], 2):
        px = points[:, i % 2] + distance[:, i]
        py = points[:, i % 2 + 1] + distance[:, i + 1]
        if max_shape is not None:
            px = np.clip(px, 0, max_shape[1])
            py = np.clip(py, 0, max_shape[0])
        preds.append(px)
        preds.append(py)
    return np.stack(preds, axis=-1)


def nms(dets):
    """Standard hard NMS on (x1, y1, x2, y2, score) rows."""
    thresh = NMS_THRESH
    x1 = dets[:, 0]
    y1 = dets[:, 1]
    x2 = dets[:, 2]
    y2 = dets[:, 3]
    scores = dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= thresh)[0]
        order = order[inds + 1]
    return keep


def load_model():
    rknn = RKNN(verbose=True)
    print('--> Loading model')
    rknn.load_rknn('./det_10g_500.rknn')
    print('loading model done')
    print('--> Init runtime environment')
    ret = rknn.init_runtime(target='rv1126')
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')
    return rknn


if __name__ == '__main__':
    rknn = load_model()
    im = cv2.imread('./data/2.jpg')
    h, w, channels = im.shape
    dim = (608, 608)
    det_img = cv2.resize(im, dim, interpolation=cv2.INTER_AREA)
    # Add a batch axis, then transpose HWC to the model's input layout (1, 3, 608, 608)
    det_img = det_img[:, :, :, np.newaxis]
    det_img = det_img.transpose([3, 2, 0, 1])

    print('--> Running model')
    t1 = time.time()
    net_outs = rknn.inference(inputs=[det_img], data_format='nchw')
    t2 = time.time() - t1
    print('--> inference time =', t2)

    scores_list = []
    bboxes_list = []
    kpss_list = []
    det_scale = 1
    for idx, stride in enumerate(_feat_stride_fpn):
        # Outputs are grouped as [scores x3, bboxes x3, keypoints x3]
        scores = net_outs[idx]
        bbox_preds = net_outs[idx + fmc] * stride
        kps_preds = net_outs[idx + fmc * 2] * stride
        height = 608 // stride  # input_height // stride
        width = 608 // stride   # input_width // stride
        key = (height, width, stride)
        if key in center_cache:
            anchor_centers = center_cache[key]
        else:
            anchor_centers = np.stack(np.mgrid[:height, :width][::-1], axis=-1).astype(np.float32)
            anchor_centers = (anchor_centers * stride).reshape((-1, 2))
            if _num_anchors > 1:
                anchor_centers = np.stack([anchor_centers] * _num_anchors, axis=1).reshape((-1, 2))
            if len(center_cache) < 100:
                center_cache[key] = anchor_centers
        pos_inds = np.where(scores >= OBJ_THRESH)[0]
        bboxes = distance2bbox(anchor_centers, bbox_preds)
        scores_list.append(scores[pos_inds])
        bboxes_list.append(bboxes[pos_inds])
        kpss = distance2kps(anchor_centers, kps_preds)
        kpss = kpss.reshape((kpss.shape[0], -1, 2))
        kpss_list.append(kpss[pos_inds])

    max_num = 0          # cap on the number of detections; 0 keeps all
    metric = 'default'   # 'max' keeps the largest faces, otherwise centre-weighted
    scores = np.vstack(scores_list)
    order = scores.ravel().argsort()[::-1]
    bboxes = np.vstack(bboxes_list) / det_scale
    kpss = np.vstack(kpss_list) / det_scale
    pre_det = np.hstack((bboxes, scores)).astype(np.float32, copy=False)
    pre_det = pre_det[order, :]
    keep = nms(pre_det)
    det = pre_det[keep, :]
    kpss = kpss[order, :, :]
    kpss = kpss[keep, :, :]
    if max_num > 0 and det.shape[0] > max_num:
        area = (det[:, 2] - det[:, 0]) * (det[:, 3] - det[:, 1])
        img_center = im.shape[0] // 2, im.shape[1] // 2
        offsets = np.vstack([(det[:, 0] + det[:, 2]) / 2 - img_center[1],
                             (det[:, 1] + det[:, 3]) / 2 - img_center[0]])
        offset_dist_squared = np.sum(np.power(offsets, 2.0), 0)
        if metric == 'max':
            values = area
        else:
            values = area - offset_dist_squared * 2.0  # some extra weight on the centering
        bindex = np.argsort(values)[::-1][0:max_num]
        det = det[bindex, :]
        if kpss is not None:
            kpss = kpss[bindex, :]

    # Performance evaluation (optional, can be removed)
    print('--> Begin evaluate model performance')
    perf_results = rknn.eval_perf(inputs=[det_img])
    print('done')

    print(det)
    # Scale the 608x608 coordinates back to the original image size and draw
    for face_i in range(len(det)):
        cv2.rectangle(im,
                      (int(det[face_i][0] * w / 608), int(det[face_i][1] * h / 608)),
                      (int(det[face_i][2] * w / 608), int(det[face_i][3] * h / 608)),
                      (0, 0, 255), 3)
    cv2.imwrite('out.jpg', im)
    rknn.release()
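The anchor-grid construction and distance decoding used in run.py can be sanity-checked on toy data. A minimal NumPy sketch (the 16x16 input size, stride 8, and the distance values are made up for illustration):

```python
import numpy as np

def distance2bbox(points, distance):
    # Convert (left, top, right, bottom) distances from anchor centers to corner boxes
    x1 = points[:, 0] - distance[:, 0]
    y1 = points[:, 1] - distance[:, 1]
    x2 = points[:, 0] + distance[:, 2]
    y2 = points[:, 1] + distance[:, 3]
    return np.stack([x1, y1, x2, y2], axis=-1)

# Stride-8 anchor grid for a toy 16x16 input: centers (0,0), (8,0), (0,8), (8,8)
height = width = 16 // 8
anchor_centers = np.stack(np.mgrid[:height, :width][::-1], axis=-1).astype(np.float32)
anchor_centers = (anchor_centers * 8).reshape((-1, 2))

# Same distances at every anchor: 2 left, 3 top, 4 right, 5 bottom
distance = np.array([[2., 3., 4., 5.]] * 4, dtype=np.float32)
boxes = distance2bbox(anchor_centers, distance)
print(boxes[1])  # anchor (8, 0) decodes to [6., -3., 12., 5.]
```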

5. Run the test

python run.py

Summary

For quantization during model conversion, build the calibration dataset from images of the actual deployment scene. Note in particular that the model input is channel-transposed (HWC to NCHW). After quantized conversion, inference takes around 50 ms.
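The dataset file passed to rknn.build() (data.txt above) lists one calibration image path per line. A minimal sketch for generating it; the directory name ./calib_images is a placeholder for wherever your scene images live:

```python
from pathlib import Path

def write_dataset_list(image_dir, out_file="data.txt", exts=(".jpg", ".png")):
    """Write one image path per line - the dataset format rknn.build() expects."""
    paths = sorted(p for p in Path(image_dir).iterdir() if p.suffix.lower() in exts)
    Path(out_file).write_text("\n".join(str(p) for p in paths) + "\n")
    return len(paths)

# usage (assumes ./calib_images holds images from the deployment scene):
# n = write_dataset_list("./calib_images")
```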
