1_OpenCV: Getting the Layer Information of an Imported DNN Model

  • I. Code
    • 1. C++ code
    • 2. Python code
  • II. Notes
    • 1. Supported deep learning frameworks
    • 2. The three parameters of the model-import API

I. Code

1. C++ code

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <cstdio>
#include <iostream>

using namespace cv;
using namespace cv::dnn;
using namespace std;

int main(int argc, char** argv) {
    string bin_model = "E:/workOpencv/JZG_opencv/opencv_tutorial/data/models/googlenet/bvlc_googlenet.caffemodel";
    string protxt = "E:/workOpencv/JZG_opencv/opencv_tutorial/data/models/googlenet/bvlc_googlenet.prototxt";

    // load CNN model
    Net net = dnn::readNet(bin_model, protxt);

    // Get the information of each layer.
    // vector is a container, essentially a dynamic array: vector<Type> identifier
    vector<String> layer_names = net.getLayerNames();
    for (size_t i = 0; i < layer_names.size(); i++) {
        int id = net.getLayerId(layer_names[i]);
        auto layer = net.getLayer(id);
        printf("layer id:%d, type: %s, name:%s \n", id, layer->type.c_str(), layer->name.c_str());
    }

    system("pause");   // use double quotes, not single quotes
    return 0;
}

Output:

Attempting to upgrade input file specified using deprecated V1LayerParameter: E:/workOpencv/JZG_opencv/opencv_tutorial/data/models/googlenet/bvlc_googlenet.caffemodel
Successfully upgraded file specified using deprecated V1LayerParameter
layer id:1, type: Convolution, name:conv1/7x7_s2
layer id:2, type: ReLU, name:conv1/relu_7x7
layer id:3, type: Pooling, name:pool1/3x3_s2
layer id:4, type: LRN, name:pool1/norm1
layer id:5, type: Convolution, name:conv2/3x3_reduce
layer id:6, type: ReLU, name:conv2/relu_3x3_reduce
layer id:7, type: Convolution, name:conv2/3x3
layer id:8, type: ReLU, name:conv2/relu_3x3
layer id:9, type: LRN, name:conv2/norm2
layer id:10, type: Pooling, name:pool2/3x3_s2
layer id:11, type: Convolution, name:inception_3a/1x1
layer id:12, type: ReLU, name:inception_3a/relu_1x1
layer id:13, type: Convolution, name:inception_3a/3x3_reduce
layer id:14, type: ReLU, name:inception_3a/relu_3x3_reduce
layer id:15, type: Convolution, name:inception_3a/3x3
layer id:16, type: ReLU, name:inception_3a/relu_3x3
layer id:17, type: Convolution, name:inception_3a/5x5_reduce
layer id:18, type: ReLU, name:inception_3a/relu_5x5_reduce
layer id:19, type: Convolution, name:inception_3a/5x5
layer id:20, type: ReLU, name:inception_3a/relu_5x5
layer id:21, type: Pooling, name:inception_3a/pool
layer id:22, type: Convolution, name:inception_3a/pool_proj
layer id:23, type: ReLU, name:inception_3a/relu_pool_proj
layer id:24, type: Concat, name:inception_3a/output
layer id:25, type: Convolution, name:inception_3b/1x1
layer id:26, type: ReLU, name:inception_3b/relu_1x1
layer id:27, type: Convolution, name:inception_3b/3x3_reduce
layer id:28, type: ReLU, name:inception_3b/relu_3x3_reduce
layer id:29, type: Convolution, name:inception_3b/3x3
layer id:30, type: ReLU, name:inception_3b/relu_3x3
layer id:31, type: Convolution, name:inception_3b/5x5_reduce
layer id:32, type: ReLU, name:inception_3b/relu_5x5_reduce
layer id:33, type: Convolution, name:inception_3b/5x5
layer id:34, type: ReLU, name:inception_3b/relu_5x5
layer id:35, type: Pooling, name:inception_3b/pool
layer id:36, type: Convolution, name:inception_3b/pool_proj
layer id:37, type: ReLU, name:inception_3b/relu_pool_proj
layer id:38, type: Concat, name:inception_3b/output
layer id:39, type: Pooling, name:pool3/3x3_s2
layer id:40, type: Convolution, name:inception_4a/1x1
layer id:41, type: ReLU, name:inception_4a/relu_1x1
layer id:42, type: Convolution, name:inception_4a/3x3_reduce
layer id:43, type: ReLU, name:inception_4a/relu_3x3_reduce
layer id:44, type: Convolution, name:inception_4a/3x3
layer id:45, type: ReLU, name:inception_4a/relu_3x3
layer id:46, type: Convolution, name:inception_4a/5x5_reduce
layer id:47, type: ReLU, name:inception_4a/relu_5x5_reduce
layer id:48, type: Convolution, name:inception_4a/5x5
layer id:49, type: ReLU, name:inception_4a/relu_5x5
layer id:50, type: Pooling, name:inception_4a/pool
layer id:51, type: Convolution, name:inception_4a/pool_proj
layer id:52, type: ReLU, name:inception_4a/relu_pool_proj
layer id:53, type: Concat, name:inception_4a/output
layer id:54, type: Convolution, name:inception_4b/1x1
layer id:55, type: ReLU, name:inception_4b/relu_1x1
layer id:56, type: Convolution, name:inception_4b/3x3_reduce
layer id:57, type: ReLU, name:inception_4b/relu_3x3_reduce
layer id:58, type: Convolution, name:inception_4b/3x3
layer id:59, type: ReLU, name:inception_4b/relu_3x3
layer id:60, type: Convolution, name:inception_4b/5x5_reduce
layer id:61, type: ReLU, name:inception_4b/relu_5x5_reduce
layer id:62, type: Convolution, name:inception_4b/5x5
layer id:63, type: ReLU, name:inception_4b/relu_5x5
layer id:64, type: Pooling, name:inception_4b/pool
layer id:65, type: Convolution, name:inception_4b/pool_proj
layer id:66, type: ReLU, name:inception_4b/relu_pool_proj
layer id:67, type: Concat, name:inception_4b/output
layer id:68, type: Convolution, name:inception_4c/1x1
layer id:69, type: ReLU, name:inception_4c/relu_1x1
layer id:70, type: Convolution, name:inception_4c/3x3_reduce
layer id:71, type: ReLU, name:inception_4c/relu_3x3_reduce
layer id:72, type: Convolution, name:inception_4c/3x3
layer id:73, type: ReLU, name:inception_4c/relu_3x3
layer id:74, type: Convolution, name:inception_4c/5x5_reduce
layer id:75, type: ReLU, name:inception_4c/relu_5x5_reduce
layer id:76, type: Convolution, name:inception_4c/5x5
layer id:77, type: ReLU, name:inception_4c/relu_5x5
layer id:78, type: Pooling, name:inception_4c/pool
layer id:79, type: Convolution, name:inception_4c/pool_proj
layer id:80, type: ReLU, name:inception_4c/relu_pool_proj
layer id:81, type: Concat, name:inception_4c/output
layer id:82, type: Convolution, name:inception_4d/1x1
layer id:83, type: ReLU, name:inception_4d/relu_1x1
layer id:84, type: Convolution, name:inception_4d/3x3_reduce
layer id:85, type: ReLU, name:inception_4d/relu_3x3_reduce
layer id:86, type: Convolution, name:inception_4d/3x3
layer id:87, type: ReLU, name:inception_4d/relu_3x3
layer id:88, type: Convolution, name:inception_4d/5x5_reduce
layer id:89, type: ReLU, name:inception_4d/relu_5x5_reduce
layer id:90, type: Convolution, name:inception_4d/5x5
layer id:91, type: ReLU, name:inception_4d/relu_5x5
layer id:92, type: Pooling, name:inception_4d/pool
layer id:93, type: Convolution, name:inception_4d/pool_proj
layer id:94, type: ReLU, name:inception_4d/relu_pool_proj
layer id:95, type: Concat, name:inception_4d/output
layer id:96, type: Convolution, name:inception_4e/1x1
layer id:97, type: ReLU, name:inception_4e/relu_1x1
layer id:98, type: Convolution, name:inception_4e/3x3_reduce
layer id:99, type: ReLU, name:inception_4e/relu_3x3_reduce
layer id:100, type: Convolution, name:inception_4e/3x3
layer id:101, type: ReLU, name:inception_4e/relu_3x3
layer id:102, type: Convolution, name:inception_4e/5x5_reduce
layer id:103, type: ReLU, name:inception_4e/relu_5x5_reduce
layer id:104, type: Convolution, name:inception_4e/5x5
layer id:105, type: ReLU, name:inception_4e/relu_5x5
layer id:106, type: Pooling, name:inception_4e/pool
layer id:107, type: Convolution, name:inception_4e/pool_proj
layer id:108, type: ReLU, name:inception_4e/relu_pool_proj
layer id:109, type: Concat, name:inception_4e/output
layer id:110, type: Pooling, name:pool4/3x3_s2
layer id:111, type: Convolution, name:inception_5a/1x1
layer id:112, type: ReLU, name:inception_5a/relu_1x1
layer id:113, type: Convolution, name:inception_5a/3x3_reduce
layer id:114, type: ReLU, name:inception_5a/relu_3x3_reduce
layer id:115, type: Convolution, name:inception_5a/3x3
layer id:116, type: ReLU, name:inception_5a/relu_3x3
layer id:117, type: Convolution, name:inception_5a/5x5_reduce
layer id:118, type: ReLU, name:inception_5a/relu_5x5_reduce
layer id:119, type: Convolution, name:inception_5a/5x5
layer id:120, type: ReLU, name:inception_5a/relu_5x5
layer id:121, type: Pooling, name:inception_5a/pool
layer id:122, type: Convolution, name:inception_5a/pool_proj
layer id:123, type: ReLU, name:inception_5a/relu_pool_proj
layer id:124, type: Concat, name:inception_5a/output
layer id:125, type: Convolution, name:inception_5b/1x1
layer id:126, type: ReLU, name:inception_5b/relu_1x1
layer id:127, type: Convolution, name:inception_5b/3x3_reduce
layer id:128, type: ReLU, name:inception_5b/relu_3x3_reduce
layer id:129, type: Convolution, name:inception_5b/3x3
layer id:130, type: ReLU, name:inception_5b/relu_3x3
layer id:131, type: Convolution, name:inception_5b/5x5_reduce
layer id:132, type: ReLU, name:inception_5b/relu_5x5_reduce
layer id:133, type: Convolution, name:inception_5b/5x5
layer id:134, type: ReLU, name:inception_5b/relu_5x5
layer id:135, type: Pooling, name:inception_5b/pool
layer id:136, type: Convolution, name:inception_5b/pool_proj
layer id:137, type: ReLU, name:inception_5b/relu_pool_proj
layer id:138, type: Concat, name:inception_5b/output
layer id:139, type: Pooling, name:pool5/7x7_s1
layer id:140, type: Dropout, name:pool5/drop_7x7_s1
layer id:141, type: InnerProduct, name:loss3/classifier
layer id:142, type: Softmax, name:prob

OpenCV DNN: layer information of the imported model
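As a small follow-up to the listing above, the sketch below tallies how many layers of each type the network contains. It is only a hedged extension of the same example: it reuses the getLayerNames / getLayerId / getLayer calls already shown, and the model paths are the ones assumed in the C++ code above.

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <cstdio>
#include <map>

using namespace cv;
using namespace cv::dnn;
using namespace std;

int main() {
    // Assumed paths, carried over from the example above.
    string bin_model = "E:/workOpencv/JZG_opencv/opencv_tutorial/data/models/googlenet/bvlc_googlenet.caffemodel";
    string protxt = "E:/workOpencv/JZG_opencv/opencv_tutorial/data/models/googlenet/bvlc_googlenet.prototxt";
    Net net = readNet(bin_model, protxt);

    // Count layers by type (Convolution, ReLU, Pooling, ...).
    map<String, int> type_count;
    for (const String& name : net.getLayerNames()) {
        int id = net.getLayerId(name);
        Ptr<Layer> layer = net.getLayer(id);
        type_count[layer->type]++;
    }
    for (const auto& kv : type_count) {
        printf("type: %-12s count: %d\n", kv.first.c_str(), kv.second);
    }
    return 0;
}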

2. Python code

import cv2 as cv
import numpy as np

bin_model = "googlenet/bvlc_googlenet.caffemodel"
protxt = "googlenet/bvlc_googlenet.prototxt"

# Load names of classes
classes = None
with open("googlenet/classification_classes_ILSVRC2012.txt", 'rt') as f:
    classes = f.read().rstrip('\n').split('\n')

# load CNN model
net = cv.dnn.readNet(bin_model, protxt)

# Get the information of each layer
layer_names = net.getLayerNames()
print(layer_names)
for name in layer_names:
    id = net.getLayerId(name)
    layer = net.getLayer(id)
    print("layer id : %d, type : %s, name: %s" % (id, layer.type, layer.name))

print("successfully loaded model...")

II. Notes

1. Supported deep learning frameworks

The model performs image classification over 1000 classes. The OpenCV DNN module can run the forward pass (inference graph) of pretrained models from the frameworks below; a loader sketch follows the list.

  • Caffe
  • TensorFlow
  • Torch
  • DLDT
  • Darknet
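Besides the generic readNet call used above, the DNN module also ships framework-specific loaders. The sketch below only illustrates their argument order; the file names are hypothetical placeholders, not files distributed with OpenCV.

#include <opencv2/dnn.hpp>

using namespace cv::dnn;

int main() {
    // Framework-specific loaders; all file names are hypothetical placeholders.
    Net caffe_net   = readNetFromCaffe("deploy.prototxt", "model.caffemodel");   // Caffe: prototxt first
    Net tf_net      = readNetFromTensorflow("frozen_graph.pb", "graph.pbtxt");   // TensorFlow
    Net torch_net   = readNetFromTorch("model.t7");                              // Torch
    Net darknet_net = readNetFromDarknet("yolo.cfg", "yolo.weights");            // Darknet: cfg first
    // DLDT / OpenVINO models need an OpenCV build with the Inference Engine backend.
    Net dldt_net    = readNetFromModelOptimizer("model.xml", "model.bin");
    return 0;
}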

2. The three parameters of the model-import API

The module also supports custom layer parsing, non-maximum suppression, querying per-layer information, and so on. The generic OpenCV API for loading a model is:
Net cv::dnn::readNet(
    const String& model,
    const String& config = "",
    const String& framework = ""
)

1. The first parameter: model
model is the trained binary file containing the network weights, produced by one of the supported frameworks. The binary model extension differs by framework:

*.caffemodel (Caffe, http://caffe.berkeleyvision.org/)
*.pb (TensorFlow, https://www.tensorflow.org/)
*.t7 | *.net (Torch, http://torch.ch/)
*.weights (Darknet, https://pjreddie.com/darknet/)
*.bin (DLDT, https://software.intel.com/openvino-toolkit)

2. The second parameter: config
config is the text description file that accompanies the binary model; each framework uses a different configuration-file extension:

*.prototxt (Caffe, http://caffe.berkeleyvision.org/)
*.pbtxt (TensorFlow, https://www.tensorflow.org/)
*.cfg (Darknet, https://pjreddie.com/darknet/)
*.xml (DLDT, https://software.intel.com/openvino-toolkit)
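Putting the two extension tables together, here is a hedged sketch of how the generic readNet call pairs the binary model with its text config. The file names are placeholders; the point is only the argument order (weights first, config second).

#include <opencv2/dnn.hpp>

using namespace cv::dnn;

int main() {
    // readNet(model, config): the binary weights file comes first, the text description second.
    // All file names below are hypothetical placeholders.
    Net caffe_net   = readNet("model.caffemodel", "deploy.prototxt");   // Caffe
    Net tf_net      = readNet("frozen_graph.pb",  "graph.pbtxt");       // TensorFlow
    Net darknet_net = readNet("yolo.weights",     "yolo.cfg");          // Darknet
    Net dldt_net    = readNet("model.bin",        "model.xml");         // DLDT / OpenVINO
    return 0;
}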

3. The third parameter: framework
framework explicitly declares which framework the model was trained with.

The last two parameters are not very important; the first parameter must always be supplied.
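A short hedged illustration of the last two parameters, again with placeholder file names: the framework string can be stated explicitly, and formats that keep the whole graph in one file can be loaded from the model argument alone.

#include <opencv2/dnn.hpp>

using namespace cv::dnn;

int main() {
    // The framework is normally inferred from the file extension,
    // but it can be given explicitly as the third argument.
    Net net1 = readNet("model.caffemodel", "deploy.prototxt", "caffe");

    // A Torch .t7 binary carries both structure and weights, so config can be omitted.
    Net net2 = readNet("model.t7");
    return 0;
}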


