OpenCV 3.3 Deep Neural Network (DNN) Module Applications: full video course, PDF versions of the course slides, and the accompanying source code

Complete example source code, the model files used, and the image and video materials, collected in one place

Watch online

Example 1: Read a single PNG file (OpenCV 3.3 environment test)

Example 2: Image classification with the GoogleNet Caffe model

Example 3: Object detection with the SSD model

Example 4: Real-time object detection with the SSD-MobileNet model

Example 5: Image segmentation with the FCN model

Example 6: Gender and age prediction with CNN models

Example 7: Video object tracking with the GOTURN model

1 Overview: Introduction to the DNN module

1.1 Environment configuration

 Download and configure OpenCV 3.3

 OpenCV 3.3 download
Include directories:
D:\opencv-3.3\opencv\build\include
D:\opencv-3.3\opencv\build\include\opencv
D:\opencv-3.3\opencv\build\include\opencv2
Library directory:
D:\opencv-3.3\opencv\build\x64\vc14\lib
Linker (additional dependency):
opencv_world330d.lib
Environment variable (add to PATH):
D:\opencv-3.3\opencv\build\x64\vc14\bin

For the detailed OpenCV 3.3 environment configuration walkthrough, click here
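Before running the DNN examples, it can help to confirm that the project really links against OpenCV 3.3. This small check is not part of the course material; it only prints the compile-time version string:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // CV_VERSION is a compile-time string such as "3.3.0"
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    return 0;
}

If this prints 3.3.0, the include paths, library directory, and linker settings above are wired up correctly.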

Example 1: Read a single PNG file (OpenCV 3.3 environment test)

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp> // DNN module
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    Mat src = imread("tx.png");
    if (src.empty()) {
        printf("could not load image...\n");
        return -1;
    }
    namedWindow("input image", CV_WINDOW_AUTOSIZE);
    imshow("input image", src);
    waitKey(0);
    return 0;
}


1.2 Introduction to the DNN module

 Tiny-dnn module
 Supported deep learning frameworks (each has its own loader function, as shown in the sketch after this list)
- Caffe
- TensorFlow
- Torch/PyTorch
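For reference, OpenCV 3.3 exposes one import function per supported framework in the cv::dnn namespace. The file names below are placeholders, not files shipped with the course:

#include <opencv2/dnn.hpp>

using namespace cv::dnn;

void loadExamples() {
    // Caffe: text description (.prototxt) + binary weights (.caffemodel)
    Net caffeNet = readNetFromCaffe("model.prototxt", "model.caffemodel");
    // TensorFlow: frozen graph (.pb)
    Net tfNet = readNetFromTensorflow("frozen_graph.pb");
    // Torch: serialized network in the legacy Torch7 format (not PyTorch .pt files)
    Net torchNet = readNetFromTorch("model.t7");
}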

1.3 Supported layer types
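The module implements the common layer types these frameworks export (convolution, pooling, ReLU, softmax, and so on). One way to see which layer types a particular model actually uses is to walk the loaded network. A small sketch, assuming a Caffe prototxt/caffemodel pair is already available locally:

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

using namespace cv;
using namespace cv::dnn;

void printLayerTypes(const String& prototxt, const String& caffemodel) {
    Net net = readNetFromCaffe(prototxt, caffemodel);
    // getLayerNames() lists every layer except the input layer
    std::vector<String> names = net.getLayerNames();
    for (size_t i = 0; i < names.size(); i++) {
        Ptr<Layer> layer = net.getLayer(net.getLayerId(names[i]));
        std::cout << names[i] << " : " << layer->type << std::endl;
    }
}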

1.4 DNN module applications

 Image classification
 Object detection
 Real-time object detection
 Image segmentation
 Prediction (gender and age)
 Video object tracking

2 Image classification with the GoogleNet model

 GoogleNet model and data overview

 Caffe model download

 bvlc_googlenet CNN model

 Trained on roughly 1 million images and covering 1000 classes

2.1 Using the model for image classification

 Coding steps
- Load the Caffe model
- Run prediction with the model

Example 2: Image classification with the GoogleNet Caffe model

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>
#include <fstream>

// Image classification with the GoogleNet Caffe model
using namespace cv;
using namespace cv::dnn;
using namespace std;

String model_bin_file = "D:/opencv3.3/opencv/sources/samples/data/dnn/bvlc_googlenet.caffemodel"; // binary weights
String model_txt_file = "D:/opencv3.3/opencv/sources/samples/data/dnn/bvlc_googlenet.prototxt";   // network description
String labels_txt_file = "D:/opencv3.3/opencv/sources/samples/data/dnn/synset_words.txt";         // class labels

vector<String> readLabels(); // reads the label file

int main(int argc, char** argv) {
    Mat src = imread("space_shuttle.jpg");
    if (src.empty()) {
        printf("could not load image...\n");
        return -1;
    }
    namedWindow("input image", CV_WINDOW_AUTOSIZE);
    imshow("input image", src);

    vector<String> labels = readLabels();
    // load the Caffe model
    Net net = readNetFromCaffe(model_txt_file, model_bin_file);
    if (net.empty()) {
        printf("read caffe model data failure...\n");
        return -1;
    }
    // bvlc_googlenet.prototxt declares a 224x224 input
    Mat inputBlob = blobFromImage(src, 1.0, Size(224, 224), Scalar(104, 117, 123));
    Mat prob;
    for (int i = 0; i < 10; i++) {
        net.setInput(inputBlob, "data"); // feed the first ("data") layer
        prob = net.forward("prob");      // take the output of the last ("prob") layer
    }
    Mat probMat = prob.reshape(1, 1); // flatten to a single row of class probabilities
    Point classNumber;                // location of the most likely class
    double classProb;                 // its probability
    minMaxLoc(probMat, NULL, &classProb, NULL, &classNumber);
    int classidx = classNumber.x;
    printf("\n current image classification : %s, possible : %.2f", labels.at(classidx).c_str(), classProb);
    // draw the predicted label on the image in red
    putText(src, labels.at(classidx), Point(20, 20), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(0, 0, 255), 2, 8);
    imshow("Image Classification", src);

    waitKey(0);
    return 0;
}

vector<String> readLabels() { // read the label text file
    vector<String> classNames;
    ifstream fp(labels_txt_file);
    if (!fp.is_open()) {
        printf("could not open the file");
        exit(-1);
    }
    string name;
    while (!fp.eof()) {
        getline(fp, name); // read one line
        if (name.length()) {
            // each line is "<wordnet id> <label>"; keep only the label part
            classNames.push_back(name.substr(name.find(' ') + 1));
        }
    }
    fp.close();
    return classNames;
}

Space shuttle, probability 100%

Mountain bike, probability 93%

3 Object detection with the SSD model

 SSD model and data overview
 Using the model for object detection

3.1 SSD model and data overview

 SSD model
- https://github.com/weiliu89/caffe/tree/ssd#models
 Builds on the Fast R-CNN line of detection models
 The ILSVRC2016 variant used here detects 200 object categories

3.2 Model files

 Binary weights
- VGG_ILSVRC2016_SSD_300x300_iter_440000.caffemodel
 Network description
- ILSVRC2016/SSD_300x300/deploy.prototxt
 Class labels
- ILSVRC2016/SSD_300x300/labelmap_det.txt

3.3 Using the model for object detection

 Coding steps
- Load the Caffe model
- Run prediction with the model

Example 3: Object detection with the SSD model

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>
#include <fstream>

using namespace cv;
using namespace cv::dnn;
using namespace std;

const size_t width = 300;  // the network input is 300x300
const size_t height = 300;
// label file
String labelFile = "D:/opencv3.3/opencv/sources/samples/data/dnn/labelmap_det.txt";
// binary weights
String modelFile = "D:/opencv3.3/opencv/sources/samples/data/dnn/VGG_ILSVRC2016_SSD_300x300_iter_440000.caffemodel";
// network description
String model_text_file = "D:/opencv3.3/opencv/sources/samples/data/dnn/deploy.prototxt";

vector<String> readLabels();
const int meanValues[3] = { 104, 117, 123 };

static Mat getMean(const size_t &w, const size_t &h) {
    Mat mean;
    vector<Mat> channels;
    for (int i = 0; i < 3; i++) {
        Mat channel(h, w, CV_32F, Scalar(meanValues[i]));
        channels.push_back(channel);
    }
    merge(channels, mean);
    return mean;
}

static Mat preprocess(const Mat &frame) {
    Mat preprocessed;
    frame.convertTo(preprocessed, CV_32F);
    resize(preprocessed, preprocessed, Size(width, height)); // 300x300 image
    Mat mean = getMean(width, height);
    subtract(preprocessed, mean, preprocessed); // subtract the per-channel mean
    return preprocessed;
}

int main(int argc, char** argv) {
    Mat frame = imread("persons.png");
    if (frame.empty()) {
        printf("could not load image...\n");
        return -1;
    }
    namedWindow("input image", CV_WINDOW_AUTOSIZE);
    imshow("input image", frame);

    vector<String> objNames = readLabels();

    // import the Caffe SSD model
    Ptr<dnn::Importer> importer;
    try {
        importer = createCaffeImporter(model_text_file, modelFile);
    }
    catch (const cv::Exception &err) {
        cerr << err.msg << endl;
    }
    // initialize the network
    Net net;
    importer->populateNet(net);
    importer.release();

    Mat input_image = preprocess(frame);          // preprocess the input image
    Mat blobImage = blobFromImage(input_image);   // convert the image to a blob
    net.setInput(blobImage, "data");              // feed the blob into the first ("data") layer, see deploy.prototxt
    Mat detection = net.forward("detection_out"); // read the output of the last ("detection_out") layer

    Mat detectionMat(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());
    float confidence_threshold = 0.2; // confidence threshold; lowering it keeps more detections
    for (int i = 0; i < detectionMat.rows; i++) {
        float confidence = detectionMat.at<float>(i, 2);
        if (confidence > confidence_threshold) {
            size_t objIndex = (size_t)(detectionMat.at<float>(i, 1));
            float tl_x = detectionMat.at<float>(i, 3) * frame.cols;
            float tl_y = detectionMat.at<float>(i, 4) * frame.rows;
            float br_x = detectionMat.at<float>(i, 5) * frame.cols;
            float br_y = detectionMat.at<float>(i, 6) * frame.rows;
            Rect object_box((int)tl_x, (int)tl_y, (int)(br_x - tl_x), (int)(br_y - tl_y));
            rectangle(frame, object_box, Scalar(0, 0, 255), 2, 8, 0); // draw the box
            putText(frame, format("%s", objNames[objIndex].c_str()), Point(tl_x, tl_y), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(255, 0, 0), 2);
        }
    }
    imshow("ssd-demo", frame);

    waitKey(0);
    return 0;
}

vector<String> readLabels() {
    vector<String> objNames;
    ifstream fp(labelFile);
    if (!fp.is_open()) {
        printf("could not open the file...\n");
        exit(-1);
    }
    string name;
    while (!fp.eof()) {
        getline(fp, name);
        if (name.length() && (name.find("display_name:") == 0)) {
            string temp = name.substr(15);                // text after display_name: "
            temp.replace(temp.end() - 1, temp.end(), ""); // drop the trailing quote
            objNames.push_back(temp);
        }
    }
    return objNames;
}

Because this SSD model covers 200 categories, the network is fairly large and a single forward pass takes noticeably longer than the other examples. A simple way to measure it is shown in the sketch below.
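The helper below wraps the forward pass with getTickCount()/getTickFrequency(), the same timing technique the FCN example in section 5 uses. It is only an illustrative sketch: timedForward is not part of the course code, and it assumes the input/output layer names used by the SSD examples ("data" and "detection_out").

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

using namespace cv;
using namespace cv::dnn;

// Runs one forward pass on an already-loaded detection network and returns the elapsed time in ms.
double timedForward(Net& net, const Mat& blobImage, Mat& detection) {
    double t0 = (double)getTickCount();
    net.setInput(blobImage, "data");
    detection = net.forward("detection_out");
    return ((double)getTickCount() - t0) / getTickFrequency() * 1000.0;
}

In Example 3 it would replace the setInput/forward pair and let you print the per-image cost.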

4 Real-time object detection with the SSD-MobileNet model

4.1 MobileNet model and data overview

 SSD-MobileNet model
- https://github.com/weiliu89/caffe/tree/ssd#models
 A lighter SSD variant that uses a subset of the classes
 Supports 20 class labels
 Fast enough for real-time detection

4.2 Model files

 Binary weights
- MobileNetSSD_deploy.caffemodel
 Network description
- MobileNetSSD_deploy.prototxt
 Class labels
- 20 classes

4.3 Using the model for object detection

 Coding steps
- Load the Caffe model
- Run prediction with the model

Example 4: Real-time object detection with the SSD-MobileNet model

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

using namespace cv;
using namespace cv::dnn;
using namespace std;

const size_t width = 300;
const size_t height = 300;
const float meanVal = 127.5;         // mean value
const float scaleFactor = 0.007843f; // 1/127.5, scales pixels to roughly [-1, 1]
const char* classNames[] = { "background",
"aeroplane", "bicycle", "bird", "boat",
"bottle", "bus", "car", "cat", "chair",
"cow", "diningtable", "dog", "horse",
"motorbike", "person", "pottedplant",
"sheep", "sofa", "train", "tvmonitor" };

// binary weights
String modelFile = "D:/opencv3.3/opencv/sources/samples/data/dnn/MobileNetSSD_deploy.caffemodel";
// network description
String model_text_file = "D:/opencv3.3/opencv/sources/samples/data/dnn/MobileNetSSD_deploy.prototxt";

int main(int argc, char** argv) {
    VideoCapture capture;
    capture.open("01.mp4"); // open the video
    namedWindow("input", CV_WINDOW_AUTOSIZE);
    int w = capture.get(CAP_PROP_FRAME_WIDTH);  // video frame width
    int h = capture.get(CAP_PROP_FRAME_HEIGHT); // video frame height
    printf("frame width : %d, frame height : %d", w, h);

    // set up the network
    Net net = readNetFromCaffe(model_text_file, modelFile);

    Mat frame;
    while (capture.read(frame)) {
        imshow("input", frame);
        // prediction
        Mat inputblob = blobFromImage(frame, scaleFactor, Size(width, height), meanVal, false);
        net.setInput(inputblob, "data");
        Mat detection = net.forward("detection_out");
        // draw the detections
        Mat detectionMat(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());
        float confidence_threshold = 0.25; // confidence threshold; lower it to keep more detections
        for (int i = 0; i < detectionMat.rows; i++) {
            float confidence = detectionMat.at<float>(i, 2);
            if (confidence > confidence_threshold) {
                size_t objIndex = (size_t)(detectionMat.at<float>(i, 1));
                float tl_x = detectionMat.at<float>(i, 3) * frame.cols;
                float tl_y = detectionMat.at<float>(i, 4) * frame.rows;
                float br_x = detectionMat.at<float>(i, 5) * frame.cols;
                float br_y = detectionMat.at<float>(i, 6) * frame.rows;
                Rect object_box((int)tl_x, (int)tl_y, (int)(br_x - tl_x), (int)(br_y - tl_y));
                rectangle(frame, object_box, Scalar(0, 0, 255), 2, 8, 0);
                putText(frame, format("%s", classNames[objIndex]), Point(tl_x, tl_y), FONT_HERSHEY_SIMPLEX, 1.0, Scalar(255, 0, 0), 2);
            }
        }
        imshow("ssd-video-demo", frame);
        char c = waitKey(5);
        if (c == 27) { // quit on ESC
            break;
        }
    }
    capture.release();
    waitKey(0);
    return 0;
}

Video result

5 Image segmentation with the FCN model

5.1 FCN model and data overview

 FCN model
 Supports 20 segmentation labels

 Using the model for image segmentation

5.2 Model files

 Binary weights
- fcn8s-heavy-pascal.caffemodel (download from the official site)
 Network description
- fcn8s-heavy-pascal.prototxt
 Segmentation classes
- pascal-classes.txt
- 20 classes

5.3 Using the model for image segmentation

 Coding steps
- Load the Caffe model
- Run prediction with the model

Example 5: Image segmentation with the FCN model

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>
#include <fstream>
#include <sstream>
#include <cfloat>

using namespace cv;
using namespace cv::dnn;
using namespace std;

const size_t width = 300;
const size_t height = 300;
String labelFile = "D:/opencv3.3/opencv/sources/samples/data/dnn/pascal-classes.txt";
String modelFile = "D:/opencv3.3/opencv/sources/samples/data/dnn/fcn8s-heavy-pascal.caffemodel";
String model_text_file = "D:/opencv3.3/opencv/sources/samples/data/dnn/fcn8s-heavy-pascal.prototxt";

vector<Vec3b> readColors();

int main(int argc, char** argv) {
    Mat frame = imread("rgb.jpg");
    if (frame.empty()) {
        printf("could not load image...\n");
        return -1;
    }
    namedWindow("input image", CV_WINDOW_AUTOSIZE);
    imshow("input image", frame);
    resize(frame, frame, Size(500, 500)); // resize the input
    vector<Vec3b> colors = readColors();

    // init net
    Net net = readNetFromCaffe(model_text_file, modelFile);
    Mat blobImage = blobFromImage(frame);

    // run the net and measure the forward pass time
    double time = (double)getTickCount();
    net.setInput(blobImage, "data");
    Mat score = net.forward("score");
    double tt = getTickCount() - time;
    printf("time consume: %.2f ms \n", (tt / getTickFrequency()) * 1000);

    // segmentation: pick the highest-scoring class per pixel
    const int rows = score.size[2];
    const int cols = score.size[3];
    const int chns = score.size[1];
    Mat maxCl = Mat::zeros(rows, cols, CV_8UC1);
    Mat maxVal(rows, cols, CV_32FC1, Scalar(-FLT_MAX)); // start below any score so the first channel always wins
    for (int c = 0; c < chns; c++) {
        for (int row = 0; row < rows; row++) {
            const float *ptrScore = score.ptr<float>(0, c, row);
            uchar *ptrMaxCl = maxCl.ptr<uchar>(row);
            float *ptrMaxVal = maxVal.ptr<float>(row);
            for (int col = 0; col < cols; col++) {
                if (ptrScore[col] > ptrMaxVal[col]) {
                    ptrMaxVal[col] = ptrScore[col];
                    ptrMaxCl[col] = (uchar)c;
                }
            }
        }
    }
    // look up the color for each class
    Mat result = Mat::zeros(rows, cols, CV_8UC3);
    for (int row = 0; row < rows; row++) {
        const uchar *ptrMaxCl = maxCl.ptr<uchar>(row);
        Vec3b *ptrColor = result.ptr<Vec3b>(row);
        for (int col = 0; col < cols; col++) {
            ptrColor[col] = colors[ptrMaxCl[col]];
        }
    }
    Mat dst;
    imshow("FCN-demo1", result);
    addWeighted(frame, 0.3, result, 0.7, 0, dst); // blend the segmentation over the original image
    imshow("FCN-demo", dst);

    waitKey(0);
    return 0;
}

vector<Vec3b> readColors() {
    vector<Vec3b> colors;
    ifstream fp(labelFile);
    if (!fp.is_open()) {
        printf("could not open the file...\n");
        exit(-1);
    }
    string line;
    while (!fp.eof()) {
        getline(fp, line);
        if (line.length()) {
            // each line is "<class name> <B> <G> <R>"
            stringstream ss(line);
            string name;
            ss >> name;
            int temp;
            Vec3b color;
            ss >> temp; color[0] = (uchar)temp; // B
            ss >> temp; color[1] = (uchar)temp; // G
            ss >> temp; color[2] = (uchar)temp; // R
            colors.push_back(color);
        }
    }
    return colors;
}

The object classes the PASCAL model can segment, together with their display colors (BGR), are listed in the pascal-classes.txt file.

6 Gender and age prediction with CNN models

 age_net.caffemodel
 deploy_age.prototxt
 gender_net.caffemodel
 deploy_gender.prototxt

6.1 Face detection with a cascade classifier

 Haar cascade data
 Face detection (a minimal sketch follows this list)
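Example 6 below crops each detected face and feeds it to the age and gender networks. The detection step on its own boils down to the following sketch; detectFaces is an illustrative helper name, and the parameters match the ones used in Example 6:

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

// Returns the face rectangles found in a BGR image using a Haar cascade.
vector<Rect> detectFaces(const Mat& src, const String& cascadePath) {
    CascadeClassifier detector;
    detector.load(cascadePath);
    Mat gray;
    cvtColor(src, gray, COLOR_BGR2GRAY);
    vector<Rect> faces;
    // scale factor 1.02, minimum 1 neighbor, face size between 40x40 and 200x200
    detector.detectMultiScale(gray, faces, 1.02, 1, 0, Size(40, 40), Size(200, 200));
    return faces;
}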

6.2 Using the models

 Coding steps
- Load the Caffe models
- Run prediction with the models

Example 6: Gender and age prediction with CNN models

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

using namespace cv;
using namespace cv::dnn;
using namespace std;

// Haar cascade for face detection
String haar_file = "D:/opencv3.3/opencv/build/etc/haarcascades/haarcascade_frontalface_alt_tree.xml";
// age prediction model
String age_model = "D:/opencv3.3/opencv/sources/samples/data/dnn/age_net.caffemodel";
// age network description
String age_text = "D:/opencv3.3/opencv/sources/samples/data/dnn/deploy_age.prototxt";
// gender prediction model
String gender_model = "D:/opencv3.3/opencv/sources/samples/data/dnn/gender_net.caffemodel";
// gender network description
String gender_text = "D:/opencv3.3/opencv/sources/samples/data/dnn/deploy_gender.prototxt";

void predict_age(Net &net, Mat &image);    // predict age
void predict_gender(Net &net, Mat &image); // predict gender

int main(int argc, char** argv) {
    Mat src = imread("star_lady.png");
    if (src.empty()) {
        printf("could not load image...\n");
        return -1;
    }
    namedWindow("input", CV_WINDOW_AUTOSIZE);
    imshow("input", src);

    // face detection
    CascadeClassifier detector;
    detector.load(haar_file);
    vector<Rect> faces;
    Mat gray;
    cvtColor(src, gray, COLOR_BGR2GRAY);
    detector.detectMultiScale(gray, faces, 1.02, 1, 0, Size(40, 40), Size(200, 200));

    // load the networks
    Net age_net = readNetFromCaffe(age_text, age_model);
    Net gender_net = readNetFromCaffe(gender_text, gender_model);

    for (size_t t = 0; t < faces.size(); t++) {
        rectangle(src, faces[t], Scalar(30, 255, 30), 2, 8, 0);
        // crop the face region before prediction (without this the calls fail with a type error)
        Mat face = src(faces[t]);
        predict_age(age_net, face);
        predict_gender(gender_net, face); // note: the gender net, not the age net
    }
    imshow("age-gender-prediction-demo", src);

    waitKey(0);
    return 0;
}

vector<String> ageLabels() {
    vector<String> ages;
    ages.push_back("0-2");
    ages.push_back("4 - 6");
    ages.push_back("8 - 13");
    ages.push_back("15 - 20");
    ages.push_back("25 - 32");
    ages.push_back("38 - 43");
    ages.push_back("48 - 53");
    ages.push_back("60-");
    return ages;
}

void predict_age(Net &net, Mat &image) {
    // input
    Mat blob = blobFromImage(image, 1.0, Size(227, 227));
    net.setInput(blob, "data");
    // predicted class
    Mat prob = net.forward("prob");
    Mat probMat = prob.reshape(1, 1); // flatten to one row
    Point classNum;
    double classProb;
    vector<String> ages = ageLabels();
    minMaxLoc(probMat, NULL, &classProb, NULL, &classNum); // index and probability of the most likely age bucket
    int classidx = classNum.x;
    putText(image, format("age:%s", ages.at(classidx).c_str()), Point(2, 10), FONT_HERSHEY_PLAIN, 0.8, Scalar(0, 0, 255), 1);
}

void predict_gender(Net &net, Mat &image) {
    // input
    Mat blob = blobFromImage(image, 1.0, Size(227, 227));
    net.setInput(blob, "data");
    // predicted class
    Mat prob = net.forward("prob");
    Mat probMat = prob.reshape(1, 1);
    putText(image, format("gender:%s", (probMat.at<float>(0, 0) > probMat.at<float>(0, 1) ? "M" : "F")),
        Point(2, 20), FONT_HERSHEY_PLAIN, 0.8, Scalar(0, 0, 255), 1);
}

7 Video object tracking with the GOTURN model

 Introduction to GOTURN (Generic Object Tracking Using Regression Networks)
 Downloading and using the model

7.1 Introduction to the GOTURN algorithm

GOTURN crops the tracked target from the previous frame and a search region from the current frame, feeds both crops into a CNN, and directly regresses the target's new bounding box, which keeps per-frame tracking fast.

7.2 Using the model for object tracking

 Coding steps
- Load the Caffe model
- Run prediction with the model

Model download link

Example 7: Video object tracking with the GOTURN model

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

using namespace cv;
using namespace cv::dnn;
using namespace std;

String goturn_model = "D:/opencv3.3/opencv/sources/samples/data/dnn/goturn.caffemodel";
String goturn_prototxt = "D:/opencv3.3/opencv/sources/samples/data/dnn/goturn.prototxt";

Net net;
void initGoturn();
Rect trackObjects(Mat& frame, Mat& prevFrame);
Mat frame, prevFrame;
Rect prevBB;

int main(int argc, char** argv) {
    initGoturn();
    VideoCapture capture;
    capture.open("01.mp4");
    capture.read(frame);
    frame.copyTo(prevFrame);
    prevBB = selectROI(frame, true, true);
    namedWindow("frame", CV_WINDOW_AUTOSIZE);
    while (capture.read(frame)) {
        Rect currentBB = trackObjects(frame, prevFrame);
        rectangle(frame, currentBB, Scalar(0, 0, 255), 2, 8, 0);
        // ready for next frame
        frame.copyTo(prevFrame);
        prevBB.x = currentBB.x;
        prevBB.y = currentBB.y;
        prevBB.width = currentBB.width;
        prevBB.height = currentBB.height;
        imshow("frame", frame);
        char c = waitKey(50);
        if (c == 27) {
            break;
        }
    }
}

void initGoturn() {
    Ptr<Importer> importer;
    importer = createCaffeImporter(goturn_prototxt, goturn_model);
    importer->populateNet(net);
    importer.release();
}

Rect trackObjects(Mat& frame, Mat& prevFrame) {
    Rect rect;
    int INPUT_SIZE = 227;
    // Using prevFrame & prevBB from the model and curFrame, GOTURN calculates curBB
    Mat curFrame = frame.clone();
    Rect2d curBB;

    float padTargetPatch = 2.0;
    Rect2f searchPatchRect, targetPatchRect;
    Point2f currCenter, prevCenter;
    Mat prevFramePadded, curFramePadded;
    Mat searchPatch, targetPatch;

    prevCenter.x = (float)(prevBB.x + prevBB.width / 2);
    prevCenter.y = (float)(prevBB.y + prevBB.height / 2);

    targetPatchRect.width = (float)(prevBB.width * padTargetPatch);
    targetPatchRect.height = (float)(prevBB.height * padTargetPatch);
    targetPatchRect.x = (float)(prevCenter.x - prevBB.width * padTargetPatch / 2.0 + targetPatchRect.width);
    targetPatchRect.y = (float)(prevCenter.y - prevBB.height * padTargetPatch / 2.0 + targetPatchRect.height);

    copyMakeBorder(prevFrame, prevFramePadded, (int)targetPatchRect.height, (int)targetPatchRect.height, (int)targetPatchRect.width, (int)targetPatchRect.width, BORDER_REPLICATE);
    targetPatch = prevFramePadded(targetPatchRect).clone();

    copyMakeBorder(curFrame, curFramePadded, (int)targetPatchRect.height, (int)targetPatchRect.height, (int)targetPatchRect.width, (int)targetPatchRect.width, BORDER_REPLICATE);
    searchPatch = curFramePadded(targetPatchRect).clone();

    // Preprocess
    // Resize
    resize(targetPatch, targetPatch, Size(INPUT_SIZE, INPUT_SIZE));
    resize(searchPatch, searchPatch, Size(INPUT_SIZE, INPUT_SIZE));

    // Mean Subtract
    targetPatch = targetPatch - 128;
    searchPatch = searchPatch - 128;

    // Convert to Float type
    targetPatch.convertTo(targetPatch, CV_32F);
    searchPatch.convertTo(searchPatch, CV_32F);

    Mat targetBlob = blobFromImage(targetPatch);
    Mat searchBlob = blobFromImage(searchPatch);

    net.setInput(targetBlob, ".data1");
    net.setInput(searchBlob, ".data2");

    Mat res = net.forward("scale");
    Mat resMat = res.reshape(1, 1);
    //printf("width : %d, height : %d\n", (resMat.at<float>(2) - resMat.at<float>(0)), (resMat.at<float>(3) - resMat.at<float>(1)));

    curBB.x = targetPatchRect.x + (resMat.at<float>(0) * targetPatchRect.width / INPUT_SIZE) - targetPatchRect.width;
    curBB.y = targetPatchRect.y + (resMat.at<float>(1) * targetPatchRect.height / INPUT_SIZE) - targetPatchRect.height;
    curBB.width = (resMat.at<float>(2) - resMat.at<float>(0)) * targetPatchRect.width / INPUT_SIZE;
    curBB.height = (resMat.at<float>(3) - resMat.at<float>(1)) * targetPatchRect.height / INPUT_SIZE;

    // Predicted BB
    Rect boundingBox = curBB;
    return boundingBox;
}
