In Emerging Technology, we are constantly working on new research projects around security and biometrics. Since late last year, I have spent some time researching age-progression problems using biometrics. One of the interesting challenges in age progression is to create age-progressed images, with occlusions and across different age gaps, for face subjects of different ages, shapes and genders.
In this work, we study the problem of face age progression. Building on the simple idea that a face image can be expressed as a superposition of age and common components, we propose a novel method for recovering those components from a cohort of images of different ages. We then show how the extracted components can be employed for age progression. In addition, a novel system for automatic age-invariant face verification is introduced. Through extensive experiments, we show that the proposed method outperforms state-of-the-art methods both qualitatively and quantitatively.
This project was also highlighted as one of the popular project entries at TechConnect 2016.
Gemma Morris from Sky’s weekly technology programme “Swipe” visited the IBM UK Lab Campus to learn about the exciting new technology we’re working on. As part of the visit, the Emerging Technology team was chosen to demo the face-ageing app, and Gemma was able to see her face age-progressed forwards and backwards from 0 to 80 years. She brought in a photo of herself aged 2 as a comparison. You can see her reaction to the results on the Sky News website; the segment was also broadcast on national television.
The rest of this article goes into detail on how the research was done, the experiments carried out, and the results achieved.
This research project was a challenge I set myself: to create a new solution for invariant age progression and test it with large databases of images in different age groups. The challenge was also to compare the solution with state-of-the-art solutions and outperform them where possible.
If face images can be normalized in age, there can be a huge impact on face verification accuracy, making many novel applications possible, such as matching driver’s license, passport and visa images with the real person. Face progression can address this by generating a face image for a specific age. Many researchers have attempted to address this problem, focusing on predicting older faces from a younger face. In this project, we propose a novel method called RAP, for Robust and Automatic face Progression in totally unconstrained conditions. Our method takes into account that faces belonging to the same age-group share age patterns, such as wrinkles, while faces across different age-groups share common patterns, such as expressions and skin colors. Given training images of K different age-groups, the proposed method learns to recover K low-rank age components and one low-rank common component. The components extracted in the learning phase are then used to progress an input face bidirectionally, to both younger and older ages. Using standard datasets, we demonstrate that the proposed progression method outperforms state-of-the-art age progression methods and also improves matching accuracy in a face verification protocol that includes age progression.
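The superposition idea can be illustrated with a toy decomposition. In this sketch (my own illustration, not the authors’ optimization), the common component C is taken as the grand mean over all groups and each age component A_k as a truncated SVD of the group residual; the paper instead recovers the components jointly via low-rank optimization, which is not reproduced here:

```python
import numpy as np

def decompose(groups, rank=2):
    """Illustrative decomposition X_k ~ C + A_k: C is a component shared
    across all K age-groups, A_k a low-rank term specific to group k.
    Toy version: C is the grand mean; A_k is a truncated SVD of the
    residual of each group's image matrix (images as rows)."""
    stacked = np.vstack(groups)
    common = stacked.mean(axis=0)              # shared component C
    age_components = []
    for X in groups:
        R = X - common                         # per-group residual
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        A = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]  # low-rank A_k
        age_components.append(A)
    return common, age_components
```

With this in hand, a face from group j can be moved toward group k by swapping A_j for A_k in the reconstruction, which is the intuition behind using the components for progression.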
Aging is a very slow and complex process that affects both the appearance and the shape of the face. Shape-related effects are observed mainly during childhood, the period in which the shape of the skull changes. The formation of wrinkles, on the other hand, caused by the reduction in muscle strength, results in drastic changes in appearance during adulthood. Environmental factors and the subject’s lifestyle also affect the process. The aging process is therefore different for each individual and has different effects, but some general patterns related to shape and appearance can be found and described.
Fig: Given an input image, the proposed progression method reconstructs a clean, neutral, frontal face of the input subject at different ages, from babies to seniors.
We demonstrated the efficacy of the proposed method in the tasks of face age progression and face verification under unconstrained (in-the-wild) conditions.
To train RAP we used the MORPH and CACD databases. The MORPH database contains 16,894 images, captured under controlled conditions, belonging to 4,664 subjects aged from 16 to 77. The CACD database has images of 2,000 celebrities aged from 14 to 62; in total, 160,000 in-the-wild images are provided. Given that in the selected databases the number of images at a specific age is small or zero, we collected additional images from the web. Using the age labels, we picked images of people from 1 month to 70 years old. The selected images were classified into K = 9 age-groups: 0–3, 4–7, 8–13, 14–20, 21–30, 31–40, 41–50, 51–60 and 61–70. A facial landmark detector was then employed to detect 68 landmark points in each face. Finally, by visually inspecting the detected landmark points, we selected N = 400 images of each gender depicting frontal, non-occluded faces of different subjects.
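The age-group split described above can be expressed as a small helper (a hypothetical utility; the group boundaries come from the text):

```python
# The K = 9 age-groups used to partition the training images.
GROUPS = [(0, 3), (4, 7), (8, 13), (14, 20), (21, 30),
          (31, 40), (41, 50), (51, 60), (61, 70)]

def age_group(age):
    """Return the index of the age-group an age (in whole years) falls into."""
    for idx, (lo, hi) in enumerate(GROUPS):
        if lo <= age <= hi:
            return idx
    raise ValueError(f"age {age} outside supported range 0-70")
```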
Fig: Qualitative comparison of the proposed method with prior works, including personalized age progression with an aging dictionary.
Face verification in the wild:
The performance of RAP for face verification under in-the-wild conditions is assessed by conducting experiments on the CACD-VS and FG-NET databases. The aims of the experiments are twofold: a) to validate that identity information is preserved after the progression step, and b) to show that when the gallery and probe images contain the same subjects at different ages, we can improve the accuracy of a verification system by using the progressed images rather than by retraining it. To this end, we propose a new Fully automatic Age-Invariant face Verification system (coined FAIV). In these experiments we used very simple features (e.g., simple gradient orientations rather than features extracted with deep-learning methodologies) and classifiers, as the scope at this stage is purely to validate the images produced by RAP, not to propose a very powerful verification system.
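The kind of simple gradient-orientation feature referred to above can be sketched as follows (an illustration of the idea, not the paper’s exact descriptor):

```python
import numpy as np

def gradient_orientation_features(image):
    """Dense gradient-orientation features: encode each pixel's local
    gradient angle as a (cos, sin) unit vector, so the feature depends
    on orientation but not on gradient magnitude."""
    img = np.asarray(image, float)
    gy, gx = np.gradient(img)
    angles = np.arctan2(gy, gx)
    return np.concatenate([np.cos(angles).ravel(), np.sin(angles).ravel()])
```

Magnitude-invariant orientation features of this kind are attractive here because they are cheap and reasonably robust to illumination changes, which matters for in-the-wild images.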
We tested against a number of subjects from these databases and compared the results with prior art. In one experiment, the image taken at each subject’s youngest age was used as input, and the age progression algorithm then progressed it through the different age groups.
Fig: Comparison of the proposed progression method with a state-of-the-art system. For all subjects, the image taken at the youngest age was used as input to the progression method. To create the final results, we warped the progressed images onto the real ones and then applied Poisson blending to the composed images.
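The Poisson blending step mentioned in the caption can be sketched with a plain Jacobi solver for the gradient-domain equations (a minimal single-channel illustration; real implementations use faster solvers and process each color channel):

```python
import numpy as np

def poisson_blend(source, target, mask, iters=2000):
    """Gradient-domain (Poisson) blending sketch: solve for the pixels
    inside the mask so their discrete Laplacian matches the source,
    with boundary values fixed to the target (Dirichlet conditions)."""
    src = np.asarray(source, float)
    result = np.asarray(target, float).copy()
    inside = np.asarray(mask) > 0
    # forcing term: discrete Laplacian of the source image
    lap_src = 4 * src - (np.roll(src, -1, 0) + np.roll(src, 1, 0)
                         + np.roll(src, -1, 1) + np.roll(src, 1, 1))
    for _ in range(iters):
        neighbours = (np.roll(result, -1, 0) + np.roll(result, 1, 0)
                      + np.roll(result, -1, 1) + np.roll(result, 1, 1))
        new = (neighbours + lap_src) / 4.0
        result[inside] = new[inside]   # pixels outside the mask stay fixed
    return result
```

The effect is that the pasted region keeps the source’s gradients (texture and wrinkles) while its colors drift smoothly to match the surrounding target, which hides the seam between the progressed face and the original image.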
Through these experiments, we were able to plot our results against the state-of-the-art solution.
In this experiment we employ the CACD-VS database, consisting of 2,000 positive and 2,000 negative pairs of celebrities at different ages. We followed the provided evaluation protocol of 10 folds, each consisting of 400 positive and 400 negative pairs. The performance of FAIV is compared against two baseline systems. In the first baseline (FVB-1), all images are warped into the same mean shape and classified by a single SVM. To evaluate the effectiveness of the proposed progression method, we created a second baseline (FVB-2), which is identical to FAIV except that instead of the progression step, it only warps the input images into the mean shape of the corresponding age-group. The mean classification accuracy, area under the curve (AUC), and receiver operating characteristic (ROC) curves computed over the 10 folds are reported in Fig. (a) below. Inspecting all of the computed performance metrics, we can see that the proposed verification system, even without the progression step, outperforms FVB-1 by a large margin. With the progression step, the FAIV system clearly outperforms all the others, which indicates that the proposed progression system improves verification accuracy without using any kind of age information.
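The per-fold AUC can be computed directly from verification scores via the Mann-Whitney statistic (a minimal sketch with synthetic scores, independent of the actual features and classifier used in the paper):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the probability that a randomly chosen positive pair
    scores higher than a randomly chosen negative pair (ties count half)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```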
Fig: Mean accuracy, AUC, and ROC curves of: (a), (b) the FVB-1, FVB-2, and FAIV systems on the CACD-VS and FG-NET databases, respectively; (c) RAP and IAAP on the FG-NET database.
K. Ricanek Jr and T. Tesafaye, “Morph: A longitudinal image database of normal adult age-progression,” in IEEE AFGR, 2006, pp. 341–345.
B.-C. Chen, C.-S. Chen, and W. H. Hsu, “Cross-age reference coding for age-invariant face recognition and retrieval,” in ECCV, 2014, pp. 768–783.
A. Lanitis, C. Taylor, and T. Cootes, “Toward automatic simulation of aging effects on face images,” IEEE TPAMI, vol. 24, no. 4, pp. 442–455, 2002.