dlib is a toolkit written in C++. Unlike deep-learning libraries, it mostly packages classic machine-learning routines, such as regression analysis, support vector machines, and clustering. It works out of the box and exposes both C++ and Python interfaces.
This article walks through a small C++ demo that calls dlib, to get a feel for this powerful library.

Download

Get the latest source from the dlib homepage or from the dlib repository on GitHub.
This article uses dlib-19.17.

Build

Both Windows and Linux builds are supported.
To avoid pulling in too many dependencies, unpack the source, open dlib-19.17/dlib/CMakeLists.txt, and locate the following lines:

if (DLIB_ISO_CPP_ONLY)
   option(DLIB_JPEG_SUPPORT ${DLIB_JPEG_SUPPORT_STR} OFF)
   option(DLIB_LINK_WITH_SQLITE3 ${DLIB_LINK_WITH_SQLITE3_STR} OFF)
   option(DLIB_USE_BLAS ${DLIB_USE_BLAS_STR} OFF)
   option(DLIB_USE_LAPACK ${DLIB_USE_LAPACK_STR} OFF)
   option(DLIB_USE_CUDA ${DLIB_USE_CUDA_STR} OFF)
   option(DLIB_PNG_SUPPORT ${DLIB_PNG_SUPPORT_STR} OFF)
   option(DLIB_GIF_SUPPORT ${DLIB_GIF_SUPPORT_STR} OFF)
   #option(DLIB_USE_FFTW ${DLIB_USE_FFTW_STR} OFF)
   option(DLIB_USE_MKL_FFT ${DLIB_USE_MKL_FFT_STR} OFF)
else()
   option(DLIB_JPEG_SUPPORT ${DLIB_JPEG_SUPPORT_STR} ON)
   option(DLIB_LINK_WITH_SQLITE3 ${DLIB_LINK_WITH_SQLITE3_STR} ON)
   option(DLIB_USE_BLAS ${DLIB_USE_BLAS_STR} ON)
   option(DLIB_USE_LAPACK ${DLIB_USE_LAPACK_STR} ON)
   option(DLIB_USE_CUDA ${DLIB_USE_CUDA_STR} ON)
   option(DLIB_PNG_SUPPORT ${DLIB_PNG_SUPPORT_STR} ON)
   option(DLIB_GIF_SUPPORT ${DLIB_GIF_SUPPORT_STR} ON)
   #option(DLIB_USE_FFTW ${DLIB_USE_FFTW_STR} ON)
   option(DLIB_USE_MKL_FFT ${DLIB_USE_MKL_FFT_STR} ON)
endif()

and change them to:

if (DLIB_ISO_CPP_ONLY)
   option(DLIB_JPEG_SUPPORT ${DLIB_JPEG_SUPPORT_STR} OFF)
   option(DLIB_LINK_WITH_SQLITE3 ${DLIB_LINK_WITH_SQLITE3_STR} OFF)
   option(DLIB_USE_BLAS ${DLIB_USE_BLAS_STR} OFF)
   option(DLIB_USE_LAPACK ${DLIB_USE_LAPACK_STR} OFF)
   option(DLIB_USE_CUDA ${DLIB_USE_CUDA_STR} OFF)
   option(DLIB_PNG_SUPPORT ${DLIB_PNG_SUPPORT_STR} OFF)
   option(DLIB_GIF_SUPPORT ${DLIB_GIF_SUPPORT_STR} OFF)
   #option(DLIB_USE_FFTW ${DLIB_USE_FFTW_STR} OFF)
   option(DLIB_USE_MKL_FFT ${DLIB_USE_MKL_FFT_STR} OFF)
else()
   option(DLIB_JPEG_SUPPORT ${DLIB_JPEG_SUPPORT_STR} ON)
   option(DLIB_LINK_WITH_SQLITE3 ${DLIB_LINK_WITH_SQLITE3_STR} ON)
   option(DLIB_USE_BLAS ${DLIB_USE_BLAS_STR} OFF)
   option(DLIB_USE_LAPACK ${DLIB_USE_LAPACK_STR} OFF)
   option(DLIB_USE_CUDA ${DLIB_USE_CUDA_STR} OFF)
   option(DLIB_PNG_SUPPORT ${DLIB_PNG_SUPPORT_STR} ON)
   option(DLIB_GIF_SUPPORT ${DLIB_GIF_SUPPORT_STR} ON)
   #option(DLIB_USE_FFTW ${DLIB_USE_FFTW_STR} ON)
   option(DLIB_USE_MKL_FFT ${DLIB_USE_MKL_FFT_STR} OFF)
endif()

This explicitly disables the BLAS, LAPACK, CUDA, and MKL dependencies.
Also make sure your compiler fully supports C++11.
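As an alternative to editing CMakeLists.txt, the same switches can usually be set from the CMake command line, since they are ordinary CMake cache options; the sketch below assumes you configure from a build directory inside the dlib tree and is a no-op anywhere else.

```shell
# Run from an empty build directory inside dlib-19.17; a no-op elsewhere.
if [ -f ../dlib/CMakeLists.txt ]; then
    cmake -DDLIB_USE_BLAS=OFF -DDLIB_USE_LAPACK=OFF \
          -DDLIB_USE_CUDA=OFF -DDLIB_USE_MKL_FFT=OFF ..
else
    echo "run this from a build directory inside the dlib source tree"
fi
```

Command-line overrides leave the source tree untouched, which can be convenient when you build several configurations from the same checkout.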

Windows

Environment

  • Windows 7 / Windows 10, 64-bit
  • VS2015 Update 3 or later

Steps

1. Build the static library with CMake.
Use a Release build; it is much faster at runtime.
On Windows the commands from the official site are recommended, though the CMake GUI works as well:

cd dlib-19.17
mkdir build_x64
cd build_x64
cmake -G "Visual Studio 14 2015 Win64" -T host=x64 ..
cmake --build . --config Release

This generates the static library dlib19.17.0_release_64bit_msvc1900.lib under build_x64/dlib/Release; rename it to dlib.lib.
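The rename can be scripted; this sketch assumes a POSIX shell on Windows (Git Bash or MSYS), and the exact .lib file name varies with the dlib version and MSVC toolset.

```shell
# Copy the version-stamped static library to the plain name "dlib.lib".
# A no-op if the build output is not where we expect it.
LIB=build_x64/dlib/Release/dlib19.17.0_release_64bit_msvc1900.lib
if [ -f "$LIB" ]; then
    cp "$LIB" build_x64/dlib/Release/dlib.lib
    echo "created build_x64/dlib/Release/dlib.lib"
fi
```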

2. Replace config.h.
This step is essential: it prevents the USER_ERROR__inconsistent_build_configuration__see_dlib_faq_2 error when you later call dlib.
Copy build_x64/dlib/config.h over the copy in the source tree at dlib-19.17/dlib.

Linux

Environment

  • Ubuntu 14.04, 64-bit
  • gcc 4.8.1 or later

Steps

1. Build the static library with CMake.
Use a Release build; it is much faster at runtime.

cd dlib-19.17
mkdir build
cd build
cmake ..
cmake --build . --config Release

This generates the static library libdlib.a under build/dlib.
2. Replace config.h.
This step is essential: it prevents the USER_ERROR__inconsistent_build_configuration__see_dlib_faq_2 error when you later call dlib.
Copy build/dlib/config.h over the copy in the source tree at dlib-19.17/dlib.
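A sketch of the copy on Linux; the path assumes the source was unpacked to ./dlib-19.17, so adjust DLIB_ROOT for your machine.

```shell
# Overwrite the in-tree config.h with the one CMake generated in build/.
DLIB_ROOT=${DLIB_ROOT:-./dlib-19.17}
if [ -f "$DLIB_ROOT/build/dlib/config.h" ]; then
    cp "$DLIB_ROOT/build/dlib/config.h" "$DLIB_ROOT/dlib/config.h"
    echo "replaced $DLIB_ROOT/dlib/config.h"
else
    echo "run the CMake build first ($DLIB_ROOT/build/dlib/config.h not found)"
fi
```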

Usage

Create a project and use CMake so the build works on both platforms.

Directory layout:
dlib_test
├── CMakeLists.txt
└── src
    └── main.cpp

CMakeLists.txt:

Build and run with CMake; note that the demo itself must also be built as a 64-bit Release, matching the library.

project(dlib_test)
cmake_minimum_required(VERSION 2.8)
add_definitions(-std=c++11)
if (UNIX)
    include_directories(/home/user/codetest/dlib-19.17)
else()
    include_directories(D:/Programs/dlib-19.17)
endif()
aux_source_directory(./src DIR_SRCS)
if (UNIX)
    link_directories(/home/user/codetest/dlib-19.17/build/dlib)
else()
    link_directories(D:/Programs/dlib-19.17/build_x64/dlib/Release)
endif()
add_executable(dlib_test ${DIR_SRCS})
target_link_libraries(dlib_test dlib)

main.cpp

#include <iostream>
#include "dlib/svm.h"

using namespace std;
using namespace dlib;

int main()
{
    typedef matrix<double, 2, 1> sample_type;
    typedef radial_basis_kernel<sample_type> kernel_type;

    // Now we make objects to contain our samples and their respective labels.
    std::vector<sample_type> samples;
    std::vector<double> labels;

    // Now let's put some data into our samples and labels objects.  We do this
    // by looping over a bunch of points and labeling them according to their
    // distance from the origin.
    for (int r = -20; r <= 20; ++r)
    {
        for (int c = -20; c <= 20; ++c)
        {
            sample_type samp;
            samp(0) = r;
            samp(1) = c;
            samples.push_back(samp);

            // if this point is less than 10 from the origin
            if (sqrt((double)r*r + c*c) <= 10)
                labels.push_back(+1);
            else
                labels.push_back(-1);
        }
    }

    vector_normalizer<sample_type> normalizer;
    // Let the normalizer learn the mean and standard deviation of the samples.
    normalizer.train(samples);
    // now normalize each sample
    for (unsigned long i = 0; i < samples.size(); ++i)
        samples[i] = normalizer(samples[i]);

    randomize_samples(samples, labels);

    // here we make an instance of the svm_c_trainer object that uses our kernel type.
    svm_c_trainer<kernel_type> trainer;

    cout << "doing cross validation" << endl;
    for (double gamma = 0.00001; gamma <= 1; gamma *= 5)
    {
        for (double C = 1; C < 100000; C *= 5)
        {
            // tell the trainer the parameters we want to use
            trainer.set_kernel(kernel_type(gamma));
            trainer.set_c(C);

            cout << "gamma: " << gamma << "    C: " << C;
            // Print out the cross validation accuracy for 3-fold cross validation using
            // the current gamma and C.  cross_validate_trainer() returns a row vector.
            // The first element of the vector is the fraction of +1 training examples
            // correctly classified and the second number is the fraction of -1 training
            // examples correctly classified.
            cout << "     cross validation accuracy: "
                 << cross_validate_trainer(trainer, samples, labels, 3);
        }
    }

    trainer.set_kernel(kernel_type(0.15625));
    trainer.set_c(5);
    typedef decision_function<kernel_type> dec_funct_type;
    typedef normalized_function<dec_funct_type> funct_type;

    // Here we are making an instance of the normalized_function object.  This
    // object provides a convenient way to store the vector normalization
    // information along with the decision function we are going to learn.
    funct_type learned_function;
    learned_function.normalizer = normalizer;  // save normalization information
    learned_function.function = trainer.train(samples, labels); // perform the actual SVM training and save the results

    // print out the number of support vectors in the resulting decision function
    cout << "\nnumber of support vectors in our learned_function is "
         << learned_function.function.basis_vectors.size() << endl;

    // Now let's try this decision_function on some samples we haven't seen before.
    sample_type sample;

    sample(0) = 3.123;
    sample(1) = 2;
    cout << "This is a +1 class example, the classifier output is " << learned_function(sample) << endl;

    sample(0) = 3.123;
    sample(1) = 9.3545;
    cout << "This is a +1 class example, the classifier output is " << learned_function(sample) << endl;

    sample(0) = 13.123;
    sample(1) = 9.3545;
    cout << "This is a -1 class example, the classifier output is " << learned_function(sample) << endl;

    sample(0) = 13.123;
    sample(1) = 0;
    cout << "This is a -1 class example, the classifier output is " << learned_function(sample) << endl;

    // We can also train a decision function that reports a well conditioned
    // probability instead of just a number > 0 for the +1 class and < 0 for the
    // -1 class.  An example of doing that follows:
    typedef probabilistic_decision_function<kernel_type> probabilistic_funct_type;
    typedef normalized_function<probabilistic_funct_type> pfunct_type;

    pfunct_type learned_pfunct;
    learned_pfunct.normalizer = normalizer;
    learned_pfunct.function = train_probabilistic_decision_function(trainer, samples, labels, 3);
    // Now we have a function that returns the probability that a given sample is of the +1 class.

    // print out the number of support vectors in the resulting decision function.
    // (it should be the same as in the one above)
    cout << "\nnumber of support vectors in our learned_pfunct is "
         << learned_pfunct.function.decision_funct.basis_vectors.size() << endl;

    sample(0) = 3.123;
    sample(1) = 2;
    cout << "This +1 class example should have high probability.  Its probability is: "
         << learned_pfunct(sample) << endl;

    sample(0) = 3.123;
    sample(1) = 9.3545;
    cout << "This +1 class example should have high probability.  Its probability is: "
         << learned_pfunct(sample) << endl;

    sample(0) = 13.123;
    sample(1) = 9.3545;
    cout << "This -1 class example should have low probability.  Its probability is: "
         << learned_pfunct(sample) << endl;

    sample(0) = 13.123;
    sample(1) = 0;
    cout << "This -1 class example should have low probability.  Its probability is: "
         << learned_pfunct(sample) << endl;

    serialize("saved_function.dat") << learned_pfunct;

    // Now let's open that file back up and load the function object it contains.
    deserialize("saved_function.dat") >> learned_pfunct;

    cout << "\ncross validation accuracy with only 10 support vectors: "
         << cross_validate_trainer(reduced2(trainer, 10), samples, labels, 3);

    // Let's print out the original cross validation score too for comparison.
    cout << "cross validation accuracy with all the original support vectors: "
         << cross_validate_trainer(trainer, samples, labels, 3);

    // When you run this program you should see that, for this problem, you can
    // reduce the number of basis vectors down to 10 without hurting the cross
    // validation accuracy.

    // To get the reduced decision function out we would just do this:
    learned_function.function = reduced2(trainer, 10).train(samples, labels);
    // And similarly for the probabilistic_decision_function:
    learned_pfunct.function = train_probabilistic_decision_function(reduced2(trainer, 10), samples, labels, 3);

    return 0;
}
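With both files in place, the demo can be configured and built like the library itself; this sketch assumes it is run from the directory containing dlib_test/ and is a no-op elsewhere.

```shell
# Configure and build the demo in Release mode, then run it.
if [ -f dlib_test/CMakeLists.txt ]; then
    mkdir -p dlib_test/build
    cd dlib_test/build
    cmake -DCMAKE_BUILD_TYPE=Release .. && cmake --build . --config Release
    ./dlib_test
else
    echo "create dlib_test/ with CMakeLists.txt and src/main.cpp first"
fi
```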

This runs a simple SVM example, adapted from svm_c_ex.cpp in the official examples.
Output:

doing cross validation
gamma: 1e-05    C: 1     cross validation accuracy: 0 1
gamma: 1e-05    C: 5     cross validation accuracy: 0 1
gamma: 1e-05    C: 25     cross validation accuracy: 0 1
gamma: 1e-05    C: 125     cross validation accuracy: 0 1
gamma: 1e-05    C: 625     cross validation accuracy: 0 1
gamma: 1e-05    C: 3125     cross validation accuracy: 0 1
gamma: 1e-05    C: 15625     cross validation accuracy: 0 1
gamma: 1e-05    C: 78125     cross validation accuracy: 0 1
gamma: 5e-05    C: 1     cross validation accuracy: 0 1
gamma: 5e-05    C: 5     cross validation accuracy: 0 1
gamma: 5e-05    C: 25     cross validation accuracy: 0 1
gamma: 5e-05    C: 125     cross validation accuracy: 0 1
gamma: 5e-05    C: 625     cross validation accuracy: 0 1
gamma: 5e-05    C: 3125     cross validation accuracy: 0 1
gamma: 5e-05    C: 15625     cross validation accuracy: 0 1
gamma: 5e-05    C: 78125     cross validation accuracy: 0 1
gamma: 0.00025    C: 1     cross validation accuracy: 0 1
gamma: 0.00025    C: 5     cross validation accuracy: 0 1
gamma: 0.00025    C: 25     cross validation accuracy: 0 1
gamma: 0.00025    C: 125     cross validation accuracy: 0 1
gamma: 0.00025    C: 625     cross validation accuracy: 0 1
gamma: 0.00025    C: 3125     cross validation accuracy: 0 1
gamma: 0.00025    C: 15625     cross validation accuracy: 0 1
gamma: 0.00025    C: 78125     cross validation accuracy: 0.990476 0.991189
gamma: 0.00125    C: 1     cross validation accuracy: 0 1
gamma: 0.00125    C: 5     cross validation accuracy: 0 1
gamma: 0.00125    C: 25     cross validation accuracy: 0 1
gamma: 0.00125    C: 125     cross validation accuracy: 0 1
gamma: 0.00125    C: 625     cross validation accuracy: 0 1
gamma: 0.00125    C: 3125     cross validation accuracy: 0.980952 0.994126
gamma: 0.00125    C: 15625     cross validation accuracy: 0.980952 0.991924
gamma: 0.00125    C: 78125     cross validation accuracy: 0.984127  0.99486
gamma: 0.00625    C: 1     cross validation accuracy: 0 1
gamma: 0.00625    C: 5     cross validation accuracy: 0 1
gamma: 0.00625    C: 25     cross validation accuracy: 0 1
gamma: 0.00625    C: 125     cross validation accuracy: 0.980952  0.99486
gamma: 0.00625    C: 625     cross validation accuracy: 0.980952 0.991924
gamma: 0.00625    C: 3125     cross validation accuracy: 0.980952 0.995595
gamma: 0.00625    C: 15625     cross validation accuracy: 0.987302 0.994126
gamma: 0.00625    C: 78125     cross validation accuracy: 0.990476  0.99486
gamma: 0.03125    C: 1     cross validation accuracy: 0 1
gamma: 0.03125    C: 5     cross validation accuracy: 0.971429 0.996329
gamma: 0.03125    C: 25     cross validation accuracy: 0.974603 0.992658
gamma: 0.03125    C: 125     cross validation accuracy: 0.980952 0.996329
gamma: 0.03125    C: 625     cross validation accuracy: 0.987302  0.99486
gamma: 0.03125    C: 3125     cross validation accuracy: 0.990476  0.99486
gamma: 0.03125    C: 15625     cross validation accuracy:  0.95873 0.995595
gamma: 0.03125    C: 78125     cross validation accuracy: 0.996825 0.995595
gamma: 0.15625    C: 1     cross validation accuracy: 0.952381 0.998532
gamma: 0.15625    C: 5     cross validation accuracy: 0.993651 0.996329
gamma: 0.15625    C: 25     cross validation accuracy: 0.990476 0.995595
gamma: 0.15625    C: 125     cross validation accuracy: 0.980952  0.99486
gamma: 0.15625    C: 625     cross validation accuracy: 0.949206 0.997797
gamma: 0.15625    C: 3125     cross validation accuracy: 0.993651 0.998532
gamma: 0.15625    C: 15625     cross validation accuracy: 0.987302        1
gamma: 0.15625    C: 78125     cross validation accuracy: 0.990476 0.997797
gamma: 0.78125    C: 1     cross validation accuracy: 0.952381 0.997797
gamma: 0.78125    C: 5     cross validation accuracy: 0.974603 0.997797
gamma: 0.78125    C: 25     cross validation accuracy: 0.974603        1
gamma: 0.78125    C: 125     cross validation accuracy: 0.984127        1
gamma: 0.78125    C: 625     cross validation accuracy: 0.987302        1
gamma: 0.78125    C: 3125     cross validation accuracy: 0.987302        1
gamma: 0.78125    C: 15625     cross validation accuracy: 0.987302 0.997797
gamma: 0.78125    C: 78125     cross validation accuracy: 0.980952 0.998532

number of support vectors in our learned_function is 209
This is a +1 class example, the classifier output is 2.71477
This is a +1 class example, the classifier output is -0.0102314
This is a -1 class example, the classifier output is -4.36211
This is a -1 class example, the classifier output is -2.16552

number of support vectors in our learned_pfunct is 209
This +1 class example should have high probability.  Its probability is: 1
This +1 class example should have high probability.  Its probability is: 0.465781
This -1 class example should have low probability.  Its probability is: 3.05246e-11
This -1 class example should have low probability.  Its probability is: 5.78323e-06

cross validation accuracy with only 10 support vectors: 0.993651  0.99486
cross validation accuracy with all the original support vectors: 0.993651 0.996329
