
Diverse videos synthesis using manifold‐based parametric motion model for facial understanding
Author(s) -
Mohammadian Amin,
Aghaeinia Hassan,
Towhidkhah Farzad
Publication year - 2016
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2014.0905
Subject(s) - computer science , artificial intelligence , computer vision , motion (physics) , parametric statistics , manifold (fluid mechanics) , nonlinear dimensionality reduction , pattern recognition (psychology) , mathematics , dimensionality reduction , engineering , statistics , mechanical engineering
Personal style is the cause of interpersonal variations, which play an important role in facial expression recognition. In this study, a model is proposed to generate diverse sequences of virtual samples for a new subject. These sequences enrich the training set, increasing the robustness of recognition to individual variations and improving generalisation to the new person. In the manifold‐based parametric motion model, the trajectory of the source person is used to estimate and construct the virtual facial expression vectors of the target person. In the recognition experiments, images are represented in a lower‐dimensional feature space, in which the virtual vectors are also generated. The accuracy of the person-independent system demonstrates the effectiveness of the virtual samples in improving performance from the limited data of the new person. The accuracy over seven expressions is 90.65%, an improvement over the baseline model (86.11%) and a significant (P < 0.05) improvement over the baseline method (83.4%).
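The core idea described in the abstract — transferring a source person's expression trajectory onto a target subject in a low-dimensional feature space to synthesize virtual training samples — can be illustrated with a minimal sketch. This is not the authors' implementation; the PCA embedding (a stand-in for the paper's manifold learning), the array shapes, and all variable names are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's method): transfer a source person's
# expression displacements onto a target's neutral frame in a low-dim space.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature vectors (e.g., landmark coordinates) for a source
# person's expression sequence and the target person's neutral frame.
source_seq = rng.normal(size=(20, 60))   # 20 frames x 60-dim features
target_neutral = rng.normal(size=60)     # target's neutral-expression frame

# Low-dimensional embedding via PCA on the source sequence (a simple
# surrogate for the manifold-based representation in the paper).
mean = source_seq.mean(axis=0)
centered = source_seq - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:5]                           # top 5 principal directions

# Motion trajectory: per-frame displacement from the source's first
# (assumed neutral) frame, expressed in the low-dimensional space.
low = centered @ basis.T                 # (20, 5) low-dim coordinates
trajectory = low - low[0]                # displacements along the manifold

# Transfer: apply the source displacements to the target's neutral point,
# then map back to the original feature space -> virtual expression sequence.
target_low = (target_neutral - mean) @ basis.T
virtual_seq = (target_low + trajectory) @ basis + mean

print(virtual_seq.shape)                 # one virtual frame per source frame
```

The virtual sequence starts at the target's (projected) neutral frame and follows the source's motion pattern, which is the behaviour the abstract attributes to the parametric motion model: the source trajectory conducts the target's virtual expression vectors.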