Source: http://www.face-rec.org/databases/. Entries whose original links are dead have been removed; see the other parts of this series for more databases:

Face Database Summary - Part 1 - holybin's column - CSDN.NET blog

Face Database Summary - Part 3 - holybin's column - CSDN.NET blog

Face Database Summary - Part 4 - holybin's column - CSDN.NET blog

The FERET Database

FERET: http://www.itl.nist.gov/iad/humanid/feret/

Gray FERET (Baidu cloud drive): pan.baidu.com/s/16GJCU

Color FERET: http://www.nist.gov/itl/iad/ig/colorferet.cfm (follow the instructions on the page and send an email to obtain an account and password; the files can then be downloaded directly in a browser, no wget required)

The FERET program set out to establish a large database of facial images that was gathered independently from the algorithm developers. Dr. Harry Wechsler at George Mason University was selected to direct the collection of this database. The database collection was a collaborative effort between Dr. Wechsler and Dr. Phillips. The images were collected in a semi-controlled environment. To maintain a degree of consistency throughout the database, the same physical setup was used in each photography session. Because the equipment had to be reassembled for each session, there was some minor variation in images collected on different dates. The FERET database was collected in 15 sessions between August 1993 and July 1996. The database contains 1564 sets of images for a total of 14,126 images that includes 1199 individuals and 365 duplicate sets of images. A duplicate set is a second set of images of a person already in the database and was usually taken on a different day. For some individuals, over two years had elapsed between their first and last sittings, with some subjects being photographed multiple times. This time lapse was important because it enabled researchers to study, for the first time, changes in a subject's appearance that occur over a year.

SCface - Surveillance Cameras Face Database

SCface is a database of static images of human faces. Images were taken in an uncontrolled indoor environment using five video surveillance cameras of various qualities. The database contains 4160 static images (in visible and infrared spectrum) of 130 subjects. Images from different quality cameras mimic the real-world conditions and enable robust face recognition algorithms testing, emphasizing different law enforcement and surveillance use case scenarios. The SCface database is freely available to the research community. The paper describing the database is available here.

Multi-PIE

A close relationship exists between the advancement of face recognition algorithms and the availability of face databases varying factors that affect facial appearance in a controlled manner. The PIE database, collected at Carnegie Mellon University in 2000, has been very influential in advancing research in face recognition across pose and illumination. Despite its success the PIE database has several shortcomings: a limited number of subjects, a single recording session and only few expressions captured. To address these issues researchers at Carnegie Mellon University collected the Multi-PIE database. It contains 337 subjects, captured under 15 view points and 19 illumination conditions in four recording sessions for a total of more than 750,000 images. The paper describing the database is available here.

AT&T "The Database ofFaces"-formerly "The ORL Database of Faces"

Tendifferent images of each of 40 distinct subjects.For some subjects, the imageswere taken at different times, varying thelighting, facial expressions (open /closed eyes, smiling / not smiling) andfacial details (glasses / no glasses).All the images were taken against a darkhomogeneous background with thesubjects in an upright, frontal position (withtolerance for some sidemovement).

Cohn-Kanade AU-Coded Facial Expression Database

Subjects in the released portion of the Cohn-Kanade AU-Coded Facial Expression Database are 100 university students. They ranged in age from 18 to 30 years. Sixty-five percent were female, 15 percent were African-American, and three percent were Asian or Latino. Subjects were instructed by an experimenter to perform a series of 23 facial displays that included single action units and combinations of action units. Image sequences from neutral to target display were digitized into 640 by 480 or 490 pixel arrays with 8-bit precision for grayscale values. Included with the image files are "sequence" files; these are short text files that describe the order in which images should be read.
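
The exact layout of these "sequence" files is not documented here; as a minimal sketch, assuming each line of a sequence file simply lists one image filename in display order, the frames of one display could be loaded like this (the file and directory names are hypothetical):

```python
import os
from PIL import Image  # pip install pillow

def load_sequence(sequence_file, image_dir):
    """Load the frames of one facial display in the order given by its
    sequence file (assumed format: one image filename per line)."""
    with open(sequence_file) as f:
        names = [line.strip() for line in f if line.strip()]
    frames = []
    for name in names:
        # 8-bit grayscale, as described for the digitized sequences
        frames.append(Image.open(os.path.join(image_dir, name)).convert("L"))
    return frames

# Hypothetical paths -- adjust to the actual database layout.
frames = load_sequence("S010_001.seq", "S010/001")
print(f"{len(frames)} frames, size {frames[0].size}")
```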

MIT-CBCL Face Recognition Database

The MIT-CBCL face recognition database contains face images of 10 subjects. It provides two training sets: 1. High resolution pictures, including frontal, half-profile and profile view; 2. Synthetic images (324/subject) rendered from 3D head models of the 10 subjects. The head models were generated by fitting a morphable model to the high-resolution training images. The 3D models are not included in the database. The test set consists of 200 images per subject. We varied the illumination, pose (up to about 30 degrees of rotation in depth) and the background.

Image Database of Facial Actions and Expressions - Expression Image Database

24 subjects are represented in this database, yielding between about 6 to 18 examples of the 150 different requested actions. Thus, about 7,000 color images are included in the database, and each has a matching gray scale image used in the neural network analysis.

Face Recognition Data, University of Essex, UK

395 individuals (male and female), 20 images per individual. Contains images of people of various racial origins, mainly of first year undergraduate students, so the majority of individuals are between 18-20 years old but some older individuals are also present. Some individuals wear glasses and have beards.

NIST Mugshot Identification Database

There are images of 1573 individuals (cases), 1495 male and 78 female. The database contains both front and side (profile) views when available. Separating front views and profiles, there are 131 cases with two or more front views and 1418 with only one front view. Profiles have 89 cases with two or more profiles and 1268 with only one profile. Cases with both fronts and profiles have 89 cases with two or more of both fronts and profiles, 27 with two or more fronts and one profile, and 1217 with only one front and one profile.

The AR Face Database, The Ohio State University, USA

4,000 color images corresponding to 126 people's faces (70 men and 56 women). Images feature frontal view faces with different facial expressions, illumination conditions, and occlusions (sun glasses and scarf).

The University of Oulu Physics-Based Face Database

Contains 125 different faces, each in 16 different camera calibration and illumination conditions, plus an additional 16 if the person has glasses. Faces in frontal position captured under Horizon, Incandescent, Fluorescent and Daylight illuminants. Includes 3 spectral reflectance measurements of skin per person, measured from both cheeks and forehead. Contains the RGB spectral response of the camera used and the spectral power distribution of the illuminants.

Face Video Database of the Max Planck Institute for Biological Cybernetics

This database contains short video sequences of facial Action Units recorded simultaneously from six different viewpoints, recorded in 2003 at the Max Planck Institute for Biological Cybernetics. The video cameras were arranged at 18 degree intervals in a semi-circle around the subject at a distance of roughly 1.3 m. The cameras recorded 25 frames/sec at 786x576 video resolution, non-interlaced. In order to facilitate the recovery of rigid head motion, the subject wore a headplate with 6 green markers. The website contains a total of 246 video sequences in MPEG1 format.

Caltech Faces

450 face images. 896 x 592 pixels. JPEG format. 27 or so unique people with different lighting/expressions/backgrounds.

EQUINOX HID Face Database

Human identification from facial features has been studied primarily using imagery from visible video cameras. Thermal imaging sensors are one of the most innovative emerging technologies in the market. Fueled by ever lowering costs and improved sensitivity and resolution, our sensors provide exciting new opportunities for biometric identification. As part of our involvement in this effort, Equinox is collecting an extensive database of face imagery in the following modalities: coregistered broadband-visible/LWIR (8-12 microns), MWIR (3-5 microns), SWIR (0.9-1.7 microns). This data collection is made available for experimentation and statistical performance evaluations.

VALID Database

With the aim to facilitate the development of robust audio, face, and multi-modal person recognition systems, the large and realistic multi-modal (audio-visual) VALID database was acquired in a noisy "real world" office scenario with no control on illumination or acoustic noise. The database consists of five recording sessions of 106 subjects over a period of one month. One session is recorded in a studio with controlled lighting and no background noise; the other 4 sessions are recorded in office type scenarios. The database contains uncompressed JPEG images at a resolution of 720x576 pixels.

The UCD Colour Face Image Database for Face Detection

The database has two parts. Part one contains colour pictures of faces having a high degree of variability in scale, location, orientation, pose, facial expression and lighting conditions, while part two has manually segmented results for each of the images in part one of the database. These images are acquired from a wide variety of sources such as digital cameras, pictures scanned using a photo-scanner, other face databases and the World Wide Web. The database is intended for distribution to researchers.

Georgia Tech Face Database

The database contains images of 50 people and is stored in JPEG format. For each individual, there are 15 color images captured between 06/01/99 and 11/15/99. Most of the images were taken in two different sessions to take into account the variations in illumination conditions, facial expression, and appearance. In addition to this, the faces were captured at different scales and orientations.

Indian Face Database

The database contains a set of face images taken in February 2002 on the IIT Kanpur campus. There are eleven different images of each of 40 distinct subjects. For some subjects, some additional photographs are included. All the images were taken against a bright homogeneous background with the subjects in an upright, frontal position. The files are in JPEG format. The size of each image is 640x480 pixels, with 256 grey levels per pixel. The images are organized in two main directories - males and females. In each of these directories, there are directories named with serial numbers, each corresponding to a single individual. In each of these directories, there are eleven different images of that subject, which have names of the form abc.jpg, where abc is the image number for that subject. The following orientations of the face are included: looking front, looking left, looking right, looking up, looking up towards left, looking up towards right, looking down. Available emotions are: neutral, smile, laughter, sad/disgust.
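
Given the directory layout described above (males/females, one numbered sub-directory per subject, eleven JPEGs per subject), a minimal enumeration sketch might look like this; the root path is a placeholder and the top-level directory names are assumptions:

```python
import os
from glob import glob

def list_indian_face_db(root):
    """Yield (gender, subject_id, image_path) triples for the directory
    layout described above: <root>/<gender>/<serial number>/<abc>.jpg."""
    for gender in ("males", "females"):          # assumed directory names
        for subject_dir in sorted(glob(os.path.join(root, gender, "*"))):
            if not os.path.isdir(subject_dir):
                continue
            subject_id = os.path.basename(subject_dir)
            for image_path in sorted(glob(os.path.join(subject_dir, "*.jpg"))):
                yield gender, subject_id, image_path

# Placeholder root path -- point this at the extracted database.
for gender, subject_id, path in list_indian_face_db("/path/to/IndianFaceDatabase"):
    print(gender, subject_id, os.path.basename(path))
```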

VidTIMIT Database

The VidTIMIT database is comprised of video and corresponding audio recordings of 43 people, reciting short sentences. It can be useful for research on topics such as multi-view face recognition, automatic lipreading and multi-modal speech recognition. The dataset was recorded in 3 sessions, with a space of about a week between each session. There are 10 sentences per person, chosen from the TIMIT corpus. In addition to the sentences, each person performed a head rotation sequence in each session. The sequence consists of the person moving their head to the left, right, back to the center, up, then down and finally returning to center. The recording was done in an office environment using a broadcast quality digital video camera. The video of each person is stored as a numbered sequence of JPEG images with a resolution of 512 x 384 pixels. The corresponding audio is stored as a mono, 16 bit, 32 kHz WAV file.
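
Since each recording is stored as a numbered JPEG sequence plus a mono 16-bit 32 kHz WAV file, a minimal loading sketch could look like the following; the directory and file names are placeholders, not the database's actual layout:

```python
import os
import wave
from glob import glob

import numpy as np
from PIL import Image  # pip install pillow numpy

def load_vidtimit_clip(frame_dir, wav_path):
    """Load one recording: the numbered JPEG frames (512x384) and the
    corresponding mono 16-bit 32 kHz audio track."""
    frame_paths = sorted(glob(os.path.join(frame_dir, "*")))
    frames = np.stack([np.asarray(Image.open(p)) for p in frame_paths])
    with wave.open(wav_path, "rb") as w:
        audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        rate = w.getframerate()
    return frames, audio, rate

# Hypothetical paths -- adjust to the actual database layout.
frames, audio, rate = load_vidtimit_clip("fadg0/video/sa1", "fadg0/audio/sa1.wav")
print(frames.shape, audio.shape, rate)
```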

The LFWcrop Database

LFWcrop is a cropped version of the Labeled Faces in the Wild (LFW) dataset, keeping only the center portion of each image (i.e. the face). In the vast majority of images almost all of the background is omitted. LFWcrop was created due to concern about the misuse of the original LFW dataset, where face matching accuracy can be unrealistically boosted through the use of background parts of images (i.e. exploitation of possible correlations between faces and backgrounds). As the location and size of faces in LFW was determined through the use of an automatic face locator (detector), the cropped faces in LFWcrop exhibit real-life conditions, including mis-alignment, scale variations, in-plane as well as out-of-plane rotations.
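
To illustrate the idea of discarding background and keeping only the central face region, here is a minimal sketch that center-crops an LFW-style image; the crop fraction is an arbitrary illustration, not the actual region used by LFWcrop:

```python
from PIL import Image  # pip install pillow

def center_crop(image_path, fraction=0.5):
    """Keep only the central `fraction` of the image in each dimension,
    roughly mimicking the idea behind LFWcrop (exact region differs)."""
    img = Image.open(image_path)
    w, h = img.size
    cw, ch = int(w * fraction), int(h * fraction)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch))

# Hypothetical filename following the LFW naming scheme.
face = center_crop("Aaron_Eckhart_0001.jpg")
face.save("Aaron_Eckhart_0001_crop.jpg")
```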

Labeled Faces in the Wild-a (LFW-a)

The"Labeled Faces in the Wild-a" imagecollection is a database oflabeled, face images intended for studying FaceRecognition in unconstrainedimages. It contains the same images available in theoriginal Labeled Faces in the Wild dataset,however, here we provide them after alignment using a commercial facealignmentsoftware. Some of our results were produced using these images. Weshow thisalignment to improve the performance of face recognition algorithms.We havemaintained the same directory structure as in the original LFW dataset, and sothese images can be used as direct substitutes for those in theoriginal imageset. Note, however, that the images available here are grayscaleversions of theoriginals.

3D_RMA database

The 3D_RMA database is a collection of two sessions (Nov 1997 and Jan 1998) consisting of 120 persons. For each session, three shots were recorded with different (but limited) orientations of the head. Details about the population and typical problems affecting the quality are given in the referred link. 3D was captured thanks to a first prototype of a proprietary system based on structured light (analog camera!). The quality was limited but sufficient to show the ability of 3D face recognition. For privacy reasons, the texture images are not made available. In the period 2003-2008, this database has been downloaded by about 100 researchers. A few papers present recognition results with the database (like, of course, papers from the author).

The Bosphorus Database

The Bosphorus Database is a new 3D face database that includes a rich set of expressions, systematic variation of poses and different types of occlusions. This database is unique from three aspects: (1) The facial expressions are composed of a judiciously selected subset of Action Units as well as the six basic emotions, and many actors/actresses are incorporated to obtain more realistic expression data; (2) A rich set of head pose variations are available; (3) Different types of face occlusions are included. Hence, this new database can be a very valuable resource for development and evaluation of algorithms on face recognition under adverse conditions and facial expression analysis as well as for facial expression synthesis.

The Basel Face Model (BFM)

The Basel Face Model (BFM) is a 3D Morphable Face Model constructed from 100 male and 100 female example faces. The BFM consists of a generative 3D shape model covering the face surface from ear to ear and a high quality texture model. The model can be used either directly for 2D and 3D face recognition or to generate training and test images for any imaging condition. Hence, in addition to being a valuable model for face analysis it can also be viewed as a meta-database which allows the creation of accurately labeled synthetic training and testing images. To allow for a fair comparison with other algorithms, we provide both the training data set (the BFM) and the model fitting results for several standard image data sets (CMU-PIE, FERET) obtained with our fitting algorithm. The BFM web page additionally provides a set of registered scans of ten individuals, together with a set of 270 renderings of these individuals with systematic pose and light variations. These scans are not included in the training set of the BFM and form a standardized test set with a ground truth for pose and illumination.
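
A 3D morphable model of this kind is generative in the PCA sense: a face shape is the mean shape plus a linear combination of principal components scaled by their standard deviations. The sketch below illustrates that idea with random synthetic arrays; the array names and sizes are stand-ins, not the BFM's actual file format or dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in model: N vertices, K shape components (a real morphable model
# stores a mean, a PCA basis and per-component standard deviations).
n_vertices, n_components = 1000, 50
mean_shape = rng.normal(size=3 * n_vertices)              # flattened (x, y, z) mean
basis = rng.normal(size=(3 * n_vertices, n_components))   # PCA basis (columns)
sigma = rng.uniform(0.1, 1.0, size=n_components)          # per-component std devs

def sample_face(alpha):
    """Generate one face shape from shape coefficients `alpha`
    (drawn from a standard normal for a random plausible face)."""
    return (mean_shape + basis @ (sigma * alpha)).reshape(n_vertices, 3)

random_face = sample_face(rng.standard_normal(n_components))
print(random_face.shape)  # (1000, 3) vertex coordinates
```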

Plastic Surgery Face Database

The plastic surgery face database is a real world database that contains 1800 pre and post surgery images pertaining to 900 subjects. Different types of facial plastic surgeries have different impact on facial features. To enable researchers to design and evaluate face recognition algorithms on all types of facial plastic surgeries, the database contains images from a wide variety of cases such as Rhinoplasty (nose surgery), Blepharoplasty (eyelid surgery), brow lift, skin peeling, and Rhytidectomy (face lift). For each individual, there are two frontal face images with proper illumination and neutral expression: the first is taken before surgery and the second is taken after surgery. The database contains 519 image pairs corresponding to local surgeries and 381 cases of global surgery (e.g., skin peeling and face lift). The details of the database and performance evaluation of several well known face recognition algorithms are available in this paper.

The Hong Kong Polytechnic University NIR Face Database

URL: www4.comp.polyu.edu.hk/~biometrics/polyudb_face.htm

The Biometric Research Centre at The Hong Kong Polytechnic University developed a real time NIR face capture device and used it to construct a large-scale NIR face database. The NIR face image acquisition system consists of a camera, an LED light source, a filter, a frame grabber card and a computer. The camera used is a JAI camera, which is sensitive to the NIR band. The active light source is in the NIR spectrum between 780 nm - 1,100 nm. The peak wavelength is 850 nm. The strength of the total LED lighting is adjusted to ensure a good quality of the NIR face images when the camera-face distance is between 80 cm - 120 cm, which is convenient for the users. By using the data acquisition device described above, we collected NIR face images from 335 subjects. During the recording, the subject was first asked to sit in front of the camera, and the normal frontal face images of him/her were collected. Then the subject was asked to make expression and pose changes and the corresponding images were collected. To collect face images with scale variations, we asked the subjects to move near to or away from the camera in a certain range. At last, to collect face images with time variations, samples from 15 subjects were collected at two different times with an interval of more than two months. In each recording, we collected about 100 images from each subject, and in total about 34,000 images were collected in the PolyU-NIRFD database.

MOBIO - Mobile Biometry Face and Speech Database

The MOBIO database consists of bi-modal (audio and video) data taken from 152 people. The database has a female-male ratio of nearly 1:2 (100 males and 52 females) and was collected from August 2008 until July 2010 at six different sites in five different countries. This led to a diverse bi-modal database with both native and non-native English speakers. In total 12 sessions were captured for each client: 6 sessions for Phase I and 6 sessions for Phase II. The Phase I data consists of 21 questions with the question types ranging from: Short Response Questions, Short Response Free Speech, Set Speech, and Free Speech. The Phase II data consists of 11 questions with the question types ranging from: Short Response Questions, Set Speech, and Free Speech. The database was recorded using two mobile devices: a mobile phone and a laptop computer. The mobile phone used to capture the database was a NOKIA N93i mobile while the laptop computer was a standard 2008 MacBook. The laptop was only used to capture part of the first session; this first session consists of data captured on both the laptop and the mobile phone.

Texas 3D Face Recognition Database - Texas 3DFRD

Texas 3D Face Recognition database (Texas 3DFRD) contains 1149 pairs of facial color and range images of 105 adult human subjects. The images were acquired at the company Advanced Digital Imaging Research (ADIR), LLC (Friendswood, TX), formerly a subsidiary of Iris International, Inc. (Chatsworth, CA), with assistance from research students and faculty from the Laboratory for Image and Video Engineering (LIVE) at The University of Texas at Austin. This project was sponsored by the Advanced Technology Program of the National Institute of Standards and Technology (NIST). The database is being made available by Dr. Alan C Bovik at UT Austin. The images were acquired using a stereo imaging system at a high spatial resolution of 0.32 mm. The color and range images were captured simultaneously and thus are perfectly registered to each other. All faces have been normalized to the frontal position and the tip of the nose is positioned at the center of the image. The images are of adult humans from all the major ethnic groups and both genders. For each face, information is also available about the subject's gender, ethnicity, facial expression, and the locations of 25 anthropometric facial fiducial points. These fiducial points were located manually on the facial color images using a computer based graphical user interface. Specific data partitions (training, gallery, and probe) that were employed at LIVE to develop the Anthropometric 3D Face Recognition algorithm are also available.

Natural Visible and Infrared facial Expression database - USTC-NVIE

The database contains both spontaneous and posed expressions of more than 100 subjects, recorded simultaneously by a visible and an infrared thermal camera, with illumination provided from three different directions. The posed database also includes expression images with and without glasses. The paper describing the database is available here.

FEI Face Database

The FEI face database is a Brazilian face database that contains a set of face images taken between June 2005 and March 2006 at the Artificial Intelligence Laboratory of FEI in Sao Bernardo do Campo, Sao Paulo, Brazil. There are 14 images for each of 200 individuals, a total of 2800 images. All images are in colour and taken against a white homogeneous background in an upright frontal position with profile rotation of up to about 180 degrees. Scale might vary about 10% and the original size of each image is 640x480 pixels. The faces are mainly those of students and staff at FEI, between 19 and 40 years old, with distinct appearance, hairstyles, and adornments. The number of male and female subjects is exactly the same: 100 each.

UMB database of 3D occluded faces

The University of Milano Bicocca 3D face database is a collection of multimodal (3D + 2D colour images) facial acquisitions. The database is available to universities and research centers interested in face detection, face recognition, face synthesis, etc. The UMB-DB has been acquired with a particular focus on facial occlusions, i.e. scarves, hats, hands, eyeglasses and other types of occlusion which can occur in real-world scenarios.

VADANA: Vims Appearance Dataset for facial ANAlysis

The primary use of VADANA is for the problems of face verification and recognition across age progression. The main characteristics of VADANA, which distinguish it from current benchmarks, are the large number of intra-personal pairs (order of 168 thousand); natural variations in pose, expression and illumination; and the rich set of additional meta-data provided along with standard partitions for direct comparison and bench-marking efforts.

Long Distance Heterogeneous Face Database - LDHF-DB

The LDHF database contains both visible (VIS) and near-infrared (NIR) face images at distances of 60 m, 100 m and 150 m outdoors and at a 1 m distance indoors. Face images of 100 subjects (70 males and 30 females) were captured; for each subject one image was captured at each distance in daytime and nighttime. All the images of individual subjects are frontal faces without glasses and collected in a single sitting.

PhotoFace: Face recognition using photometric stereo

This unique 3D face database is amongst the largest currently available, containing 3187 sessions of 453 subjects, captured in two recording periods of approximately six months each. The Photoface device was located in an unsupervised corridor allowing real-world and unconstrained capture. Each session comprises four differently lit colour photographs of the subject, from which surface normal and albedo estimations can be calculated (photometric stereo Matlab code implementation included). This allows for many testing scenarios and data fusion modalities. Eleven facial landmarks have been manually located on each session for alignment purposes. Additionally, the Photoface Query Tool is supplied (implemented in Matlab), which allows for subsets of the database to be extracted according to selected metadata, e.g. gender, facial hair, pose, expression.
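
The database ships with its own Matlab implementation; purely to illustrate the classic photometric-stereo step it refers to (several images under known, differently placed lights give per-pixel normals and albedo via least squares), a rough NumPy sketch follows. The light directions and image array here are synthetic placeholders, not values from PhotoFace:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Lambertian photometric stereo: recover per-pixel surface normals
    and albedo from >= 3 images taken under known distant light directions.

    images:     array of shape (n_lights, H, W), grayscale intensities
    light_dirs: array of shape (n_lights, 3), unit light direction vectors
    """
    n, h, w = images.shape
    intensities = images.reshape(n, -1)             # (n_lights, H*W)
    g = np.linalg.pinv(light_dirs) @ intensities    # (3, H*W) = albedo * normal
    albedo = np.linalg.norm(g, axis=0)              # (H*W,)
    normals = g / np.maximum(albedo, 1e-8)          # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic placeholder data: 4 lights, 64x64 images.
rng = np.random.default_rng(0)
lights = np.array([[0.3, 0.3, 0.9], [-0.3, 0.3, 0.9],
                   [0.3, -0.3, 0.9], [-0.3, -0.3, 0.9]])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
imgs = rng.uniform(0.0, 1.0, size=(4, 64, 64))
normals, albedo = photometric_stereo(imgs, lights)
print(normals.shape, albedo.shape)
```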

The EURECOM Kinect Face Dataset - EURECOM KFD

The dataset consists of multimodal facial images of 52 people (14 females, 38 males) acquired with a Kinect sensor. The data is captured in two sessions at different intervals (of about two weeks). In each session, 9 facial images are collected from each person according to different facial expressions, lighting and occlusion conditions: neutral, smile, open mouth, left profile, right profile, occluded eyes, occluded mouth, side occlusion with a sheet of paper, and light on. An RGB color image, a depth map (provided both as a bitmap depth image and a text file containing the original depth levels sensed by Kinect) as well as the associated 3D data are provided for all samples. In addition, the dataset includes 6 manually labeled landmark positions for every face: left eye, right eye, tip of the nose, left side of mouth, right side of mouth and the chin. Other information, such as gender, year of birth, ethnicity, glasses (whether a person wears glasses or not) and the time of each session are also available.
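
The exact format of the depth text files is not specified above; assuming each file stores the raw Kinect depth levels as a whitespace-separated grid of integers, one value per pixel, a minimal sketch for loading and rescaling one for display could be (the filename is hypothetical):

```python
import numpy as np

def load_depth_text(path):
    """Load a depth map stored as a whitespace-separated grid of raw
    Kinect depth levels (one value per pixel) -- an assumed format."""
    return np.loadtxt(path, dtype=np.float64)

def depth_to_uint8(depth):
    """Rescale raw depth levels to 0-255 for quick visualisation."""
    d = depth - depth.min()
    return (255 * d / max(d.max(), 1e-8)).astype(np.uint8)

depth = load_depth_text("0001_s1_Neutral_depth.txt")  # hypothetical filename
vis = depth_to_uint8(depth)
print(depth.shape, depth.min(), depth.max(), vis.dtype)
```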

YouTube Faces Database

The dataset contains 3,425 videos of 1,595 different people. All the videos were downloaded from YouTube. An average of 2.15 videos are available for each subject. The shortest clip duration is 48 frames, the longest clip is 6,070 frames, and the average length of a video clip is 181.3 frames. In designing our video data set and benchmarks we follow the example of the 'Labeled Faces in the Wild' LFW image collection. Specifically, our goal is to produce a large scale collection of videos along with labels indicating the identities of a person appearing in each video. In addition, we publish benchmark tests, intended to measure the performance of video pair-matching techniques on these videos. Finally, we provide descriptor encodings for the faces appearing in these videos, using well established descriptor methods.

YMU, VMU, MIW

YMU (YouTube Makeup) Dataset

The dataset consists of 151 subjects, specifically Caucasian females, from YouTube makeup tutorials. Images of the subjects before and after the application of makeup were captured. There are four shots per subject: two shots before the application of makeup and two shots after the application of makeup. For a few subjects, three shots each before and after the application of makeup were obtained. The makeup in these face images varies from subtle to heavy. The cosmetic alteration is mainly in the ocular area, where the eyes have been accentuated by diverse eye makeup products. Additional changes are on the quality of the skin due to the application of foundation and change in lip color. This dataset includes some variations in expression and pose. The illumination condition is reasonably constant over multiple shots of the same subject. In a few cases, the hair style before and after makeup changes drastically.

VMU (Virtual Makeup) Dataset

The VMU dataset was assembled by synthetically adding makeup to 51 female Caucasian subjects in the FRGC dataset. We added makeup by using a publicly available tool from Taaz. Three virtual makeovers were created: (a) application of lipstick only; (b) application of eye makeup only; and (c) application of a full makeup consisting of lipstick, foundation, blush and eye makeup. Hence, the assembled dataset contains four images per subject: one before-makeup shot and three after-makeup shots.

MIW (Makeup in the "wild") Dataset

The MIW dataset contains 125 subjects with 1-2 images per subject. The total number of images is 154 (77 with makeup and 77 without makeup). The images are obtained from the internet and the faces are unconstrained.

3D Mask Attack Database - 3DMAD

The 3D Mask Attack Database (3DMAD) is a biometric (face) spoofing database. It currently contains 76500 frames of 17 persons, recorded using Kinect for both real-access and spoofing attacks. Each frame consists of: (1) a depth image (640x480 pixels, 1x11 bits); (2) the corresponding RGB image (640x480 pixels, 3x8 bits); (3) manually annotated eye positions (with respect to the RGB image). The data is collected in 3 different sessions for all subjects, and for each session 5 videos of 300 frames are captured. The recordings are done under controlled conditions, with frontal view and neutral expression. The first two sessions are dedicated to the real access samples, in which subjects are recorded with a time delay of ~2 weeks between the acquisitions. In the third session, 3D mask attacks are captured by a single operator (attacker). If you use this database please cite this publication: N. Erdogmus and S. Marcel, "Spoofing in 2D Face Recognition with 3D Masks and Anti-spoofing with Kinect", in IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS), 2013. Source code to reproduce experiments in the paper: https://pypi.python.org/pypi/maskattack.lbp
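
Based only on the per-frame description above (an 11-bit depth map, an 8-bit RGB image and two annotated eye positions, all at 640x480), a hedged sketch of a container for one frame might look like the following; it says nothing about how 3DMAD actually stores its files, which the maskattack.lbp package handles:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class MaskAttackFrame:
    """One frame as described above (storage format not implied):
    an 11-bit depth map, an 8-bit RGB image and annotated eye centres."""
    depth: np.ndarray      # (480, 640) uint16, values in [0, 2**11)
    rgb: np.ndarray        # (480, 640, 3) uint8
    left_eye: tuple        # (x, y) in RGB image coordinates
    right_eye: tuple       # (x, y) in RGB image coordinates

    def depth_as_uint8(self):
        """Scale the 11-bit depth range down to 8 bits for quick viewing."""
        return (self.depth >> 3).astype(np.uint8)

# Synthetic example frame with the dimensions quoted in the description.
frame = MaskAttackFrame(
    depth=np.zeros((480, 640), dtype=np.uint16),
    rgb=np.zeros((480, 640, 3), dtype=np.uint8),
    left_eye=(260, 240),
    right_eye=(380, 240),
)
print(frame.depth_as_uint8().shape)
```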

Senthilkumar Face Database - Version 1.0

The Senthilkumar Face Database contains 80 grayscale face images of 5 people (all are men), including frontal views of faces with different facial expressions, occlusions and brightness conditions. Each person has 16 different images. The face portion of each image is manually cropped to 140x188 pixels and then normalized. Facial images are available in both grayscale and colour.
