The facial landmark detection benchmark and workshop at ICCV 2013. Website: http://ibug.doc.ic.ac.uk/resources — here you can find state-of-the-art facial landmark detection code (executables) and documentation.

Datasets

  • 300 Faces in-the-Wild Challenge (300-W), ICCV 2013
  • MMI Facial expression database
  • SEMAINE database
  • MAHNOB-HCI-Tagging database
  • MAHNOB Laughter database
  • MAHNOB MHI-Mimicry database

Code

  • Facial Point detector (2005/2007)
  • DRMF Matlab Code (CVPR 2013)
  • Facial point detector (2010/2013)
  • SEMAINE Visual Components (2008/2009)
  • SEMAINE Visual Components (2009/2010)
  • Salient Point Detector (2006/2008)
  • Salient Point Detector (2010)
  • AU detector (LAUD 2010)
  • AU detector (TAUD 2011)
  • Smile Detectors
  • Gesture Detector (2010)
  • Gesture Detector (2011)
  • Head Nod Shake Detector and 5 Dimensional Emotion Predictor (2010/2011)
  • Head Nod Shake Detector (2010/2011)
  • HCI^2 Framework
  • Facial tracker (2011)
  • Subspace Learning from Image Gradient Orientations (2011)
  • AOMs Generic Face Alignment (2012)
  • Robust and Efficient Parametric Face/Object Alignment (2011)

300 Faces in-the-Wild Challenge (300-W), ICCV 2013

Latest News!
We provide the bounding box initialisations, as produced by our in-house detector, for each database used in the training procedure. Additionally, the bounding boxes of the ground truth are given.

300-W
The first Automatic Facial Landmark Detection in-the-Wild Challenge (300-W 2013), to be held in conjunction with the International Conference on Computer Vision (ICCV) 2013, Sydney, Australia.

Organisers
Georgios Tzimiropoulos, University of Lincoln, UK
Stefanos Zafeiriou, Imperial College London, UK
Maja Pantic, Imperial College London, UK

Scope
Automatic facial landmark detection is a longstanding problem in computer vision, and the 300-W Challenge is the first event of its kind organised exclusively to benchmark efforts in the field. The particular focus is on facial landmark detection in real-world datasets of facial images captured in the wild. The results of the Challenge will be presented at the 300-W Faces in-the-Wild Workshop, held in conjunction with ICCV 2013.
A special issue of the Image and Vision Computing Journal will present the best performing methods and summarise the results of the Challenge.

The 300-W Challenge
Landmark annotations (following the Multi-PIE [1] 68-point mark-up, please see Fig. 1) for four popular data sets are available from http://ibug.doc.ic.ac.uk/resources/300-W . All participants in the Challenge will be able to train their algorithms using these data. Performance evaluation will be carried out on the 300-W test set, using the same Multi-PIE mark-up and the same face bounding box initialisation.

Figure 1: The 68 and 51 points mark-up used for our annotations.

Training
The datasets LFPW [2], AFW [3], HELEN [4], and XM2VTS [5] have been re-annotated using the mark-up of Fig. 1. We also provide annotations for a further 135 images in difficult poses and expressions (the IBUG training set). Annotations have the same name as the corresponding images. For the LFPW, AFW, HELEN, and IBUG datasets we also provide the images; the remaining image databases can be downloaded from the authors' websites. All annotations can be downloaded at:

  • http://ibug.doc.ic.ac.uk/media/uploads/competitions/lfpw.zip
  • http://ibug.doc.ic.ac.uk/media/uploads/competitions/afw.zip
  • http://ibug.doc.ic.ac.uk/media/uploads/competitions/helen.zip
  • http://ibug.doc.ic.ac.uk/media/uploads/competitions/xm2vts.zip
  • http://ibug.doc.ic.ac.uk/media/uploads/competitions/ibug.zip

Participants are strongly encouraged to train their algorithms using these training data. Should you use any of the provided annotations please cite [6] and the paper presenting the corresponding database.
Please note that the re-annotated data for this challenge are saved in the MATLAB convention of 1 being the first index, i.e. the coordinates of the top-left pixel in an image are x=1, y=1.
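The provided annotations are plain-text .pts files (a header, then one `x y` pair per line between braces). As a rough illustration, a minimal Python reader might look like the sketch below; the exact header layout is assumed from the common iBUG .pts convention, and the coordinates keep the 1-based MATLAB indexing described above.

```python
# Minimal sketch of a 300-W style .pts reader (header layout assumed:
# "version: 1", "n_points: 68", then the points between "{" and "}").
# Coordinates are left in the 1-based MATLAB convention of the annotations.

def read_pts(path):
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    n = int(lines[1].split(":")[1])                  # e.g. "n_points: 68"
    body = lines[lines.index("{") + 1 : lines.index("}")]
    pts = [tuple(float(v) for v in ln.split()) for ln in body]
    assert len(pts) == n, "header count does not match number of points"
    return pts
```

For the challenge data each file would yield 68 (x, y) tuples, ordered as in the Multi-PIE mark-up of Fig. 1.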

Testing
Participants will have their algorithms tested on a newly collected data set of 2×300 face images (300 indoor and 300 outdoor) captured in the wild (the 300-W test set). Sample images are shown in Fig. 2 and Fig. 3.

Figure 2: Outdoor.
Figure 3: Indoor.

The 300-W test set aims to test the ability of current systems to handle unseen subjects, independently of variations in pose, expression, illumination, background, occlusion, and image quality.
Participants should send binaries with their trained algorithms to the organisers, who will run each algorithm on the 300-W test set using the same bounding box initialisation. This bounding box is provided by our in-house face detector. The face region that our detector was trained on is defined by the bounding box computed from the landmark annotations (please see Fig. 4).

Figure 4: Face region (bounding box) that our face detector was trained on.

Examples of bounding box initialisations along with the ground-truth bounding boxes are shown in Fig. 5. The initialisations for the whole LFPW test set can be downloaded from: http://ibug.doc.ic.ac.uk/media/uploads/competitions/images_testset_lfpw_inits.zip .

Figure 5: Examples of bounding box initialisations for images from the test set of LFPW.

Participants should expect that initialisations for the 300-W test set are of similar accuracy.
Each binary should accept two inputs: an input image (RGB, with .png extension) and the coordinates of the bounding box. The bounding box should be a 4×1 vector [xmin, ymin, xmax, ymax] (please see Fig. 6). The output of the binary should be a 68×2 matrix with the detected landmarks, saved in the same format (.pts) and ordering as the provided annotations.

Figure 6: Coordinates of the bounding box (the coordinates of the top left pixel are x=1, y=1).
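Since the ground-truth bounding box is derived from the landmark annotations, it can be sketched as the tight axis-aligned box around the 68 points. The tight fit is an assumption for illustration; the organisers' detector output (the actual initialisation) may differ from this box, as Fig. 5 shows.

```python
# Sketch: ground-truth bounding box [xmin, ymin, xmax, ymax] as the tight
# axis-aligned box around the landmark points (cf. Fig. 4 and Fig. 6).
# Assumption: no padding is added; the in-house detector's boxes may differ.

def landmarks_to_bbox(pts):
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return [min(xs), min(ys), max(xs), max(ys)]
```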

Facial landmark detection performance will be assessed on both the 68-point mark-up of Fig. 1 and the 51 points that correspond to the points without the face border (please see Fig. 1). The average point-to-point Euclidean error, normalised by the inter-ocular distance (measured as the Euclidean distance between the outer corners of the eyes), will be used as the error measure. MATLAB code for calculating the error can be downloaded from http://ibug.doc.ic.ac.uk/media/uploads/competitions/compute_error.m . Finally, the cumulative curve corresponding to the percentage of test images for which the error was less than a specific value will be produced. Additionally, fitting times will be recorded. These results will be returned to the participants for inclusion in their papers.
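The metric above can be sketched in Python as follows. The 0-based indices 36 and 45 for the outer eye corners in the 68-point mark-up are an assumption here; the official compute_error.m linked above is the authoritative reference.

```python
import math

# Sketch of the evaluation metric: mean point-to-point Euclidean error
# normalised by the inter-ocular distance (outer eye corners).
# Assumption: outer corners are at 0-based indices 36 and 45 of the
# 68-point mark-up; see the official compute_error.m for the exact rule.

def normalised_error(pred, gt, left=36, right=45):
    iod = math.dist(gt[left], gt[right])             # inter-ocular distance
    errs = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(errs) / (len(errs) * iod)

# Cumulative error curve: fraction of test images whose normalised error
# falls below each threshold.
def cumulative_curve(errors, thresholds):
    n = len(errors)
    return [sum(e < t for e in errors) / n for t in thresholds]
```

For example, predictions uniformly offset by one pixel from ground truth with an inter-ocular distance of 10 give a normalised error of 0.1.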

The binaries submitted for the competition will be handled confidentially. They will be used only for the scope of the competition and will be erased after its completion. The binaries should be compiled on a 64-bit machine, and dependencies on publicly available vision libraries (such as OpenCV) should be explicitly stated in the document that accompanies the binary.

Papers
Challenge participants should submit a paper to the 300-W Workshop, which summarizes the methodology and the achieved performance of their algorithm. Submissions should adhere to the main ICCV 2013 proceedings style, and have a maximum length of 8 pages. The workshop papers will be published in the ICCV 2013 proceedings.

Important Dates

  • Binaries submission deadline: September 7, 2013
  • Paper submission deadline: September 15, 2013
  • Author Notification: October 7, 2013
  • Camera-Ready Papers: November 13, 2013

Contact
Dr. Georgios Tzimiropoulos
gtzimiropoulos@lincoln.ac.uk ,   gt204@imperial.ac.uk
Intelligent Behaviour Understanding Group (iBUG)

References 
[1] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker. 'Multi-PIE'. Image and Vision Computing, 28(5):807–813, 2010.
[2] P. Belhumeur, D. Jacobs, D. Kriegman, and N. Kumar. 'Localizing parts of faces using a consensus of exemplars'. In Computer Vision and Pattern Recognition (CVPR), 2011.
[3] X. Zhu and D. Ramanan. 'Face detection, pose estimation and landmark localization in the wild'. In Computer Vision and Pattern Recognition (CVPR), Providence, Rhode Island, June 2012.
[4] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang. 'Interactive Facial Feature Localization'. In ECCV 2012.
[5] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre. 'XM2VTSDB: The extended M2VTS database'. In 2nd International Conference on Audio- and Video-based Biometric Person Authentication, Volume 964, 1999.
[6] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. 'A semi-automatic methodology for facial landmark annotation'. In IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPR-W'13), 5th Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2013), Portland, Oregon, USA, June 2013 (accepted for publication).
