Annotated Facial Landmarks in the Wild (AFLW) provides a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25k faces are annotated with up to 21 landmarks per image. A short comparison to other important face databases with annotated landmarks is provided here:
| Database | # landmarked imgs | # landmarks | # subjects | image size | image color | Ref. |
|---|---|---|---|---|---|---|
| Caltech 10,000 Web Faces | 10,524 | - | - | - | color | |
| CMU/VASC Profile | 590 | 6 to 9 | - | - | grayscale | |
The motivation for the AFLW database is the need for a large-scale, multi-view, real-world face database with annotated facial features. We gathered the images from Flickr using a wide range of face-relevant tags (e.g., face, mugshot, profile face). The downloaded set of images was manually scanned for images containing faces. The key data and most important properties of the database are:
Due to the nature of the database and its comprehensive annotation, we think it is well suited to train and test algorithms for
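As a starting point for working with per-face landmark annotations, the sketch below shows how one might load them from an SQLite file into a per-face dictionary. The table name `FeatureCoords` and its columns (`face_id`, `feature_id`, `x`, `y`) are assumptions for illustration, demonstrated on an in-memory mock database rather than the actual AFLW annotation file:

```python
# Sketch of loading per-face landmark annotations from an SQLite store.
# NOTE: the schema (table FeatureCoords with columns face_id, feature_id,
# x, y) is an assumption for illustration, not the documented AFLW format.
import sqlite3


def load_landmarks(conn):
    """Return {face_id: [(feature_id, x, y), ...]}, up to 21 landmarks per face."""
    landmarks = {}
    rows = conn.execute(
        "SELECT face_id, feature_id, x, y FROM FeatureCoords "
        "ORDER BY face_id, feature_id"
    )
    for face_id, feature_id, x, y in rows:
        landmarks.setdefault(face_id, []).append((feature_id, x, y))
    return landmarks


if __name__ == "__main__":
    # Build a tiny in-memory mock so the example is self-contained.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE FeatureCoords "
        "(face_id INTEGER, feature_id INTEGER, x REAL, y REAL)"
    )
    conn.executemany(
        "INSERT INTO FeatureCoords VALUES (?, ?, ?, ?)",
        [(1, 1, 120.0, 85.5), (1, 2, 150.0, 84.0), (2, 1, 60.0, 40.0)],
    )
    marks = load_landmarks(conn)
    print(marks[1])  # the two annotated landmarks of face 1
```

Grouping the rows by `face_id` keeps the variable number of visible landmarks per face explicit, which matters for profile views where many of the 21 landmarks are occluded and therefore unannotated.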
By downloading the database you agree to the following restrictions:
If you use AFLW, please cite our paper:
BibTeX reference for convenience:
The work was supported by the FFG projects MDL (818800) and SECRET (821690) under the Austrian Security Research Programme KIRAS. We want to thank all people who have been involved in the annotation process, especially, the interns at the institute and the colleagues from the Documentation Center of the National Defense Academy of Austria.
A. Angelova, Y. Abu-Mostafa, and P. Perona. Pruning training sets for learning of object categories. In Proc. CVPR, 2005.
O. Aran, I. Ari, M. A. Guvensan, H. Haberdar, Z. Kurt, H. I. Turkmen, A. Uyar, and L. Akarun. A database of non-manual signs in Turkish Sign Language. In Proc. Signal Processing and Communications Applications, 2007.
O. Jesorsky, K. J. Kirchberg, and R. W. Frischholz. Robust face detection using the Hausdorff distance. In Proc. Audio and Video-based Biometric Person Authentication, 2001.
A. Kasiński, A. Florek, and A. Schmidt. The PUT face database. Image Processing & Communications, pages 59–64, 2008.
A. Martinez and R. Benavente. The AR face database. Technical Report 24, Computer Vision Center, University of Barcelona, 1998.
K. Messer, J. Matas, J. Kittler, and K. Jonsson. XM2VTSDB: The extended M2VTS database. In Proc. Audio and Video-based Biometric Person Authentication, 1999.
S. Milborrow, J. Morkel, and F. Nicolls. The MUCT Landmarked Face Database. In Proc. Pattern Recognition Association of South Africa, 2010.
N. Aifanti, C. Papachristou, and A. Delopoulos. The MUG facial expression database. In Proc. Workshop on Image Analysis for Multimedia Interactive Services, 2010.
M. M. Nordstrom, M. Larsen, J. Sierakowski, and M. B. Stegmann. The IMM face database - an annotated dataset of 240 face images. Technical report, Informatics and Mathematical Modelling, Technical University of Denmark, DTU, 2004.
H. Rowley, S. Baluja, and T. Kanade. Rotation invariant neural network-based face detection. Technical Report CMU-CS-97-201, Computer Science Department, Carnegie Mellon University (CMU), 1997.
H. Schneiderman and T. Kanade. A statistical model for 3D object detection applied to faces and cars. In Proc. CVPR, 2000.