Open Access
Fusion of transformed shallow features for facial expression recognition
Author(s) -
Bougourzi Fares,
Mokrani Karim,
Ruichek Yassine,
Dornaika Fadi,
Ouafi Abdelkrim,
Taleb-Ahmed Abdelmalik
Publication year - 2019
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2018.6235
Subject(s) - computer science , artificial intelligence , concatenation (mathematics) , histogram , expression (computer science) , pattern recognition (psychology) , facial expression , facial recognition system , feature (linguistics) , computer vision , feature extraction , face (sociological concept) , fusion , principal component analysis , image (mathematics) , mathematics , social science , linguistics , philosophy , combinatorics , sociology , programming language
Facial expression conveys important signs about the human affective state, cognitive activity, intention and personality. Automatic facial expression recognition systems are attracting growing interest year after year due to their wide range of applications in fields such as human-computer/robot interaction, medical applications, animation and video gaming. In this study, the authors propose to combine the features of different descriptors (histogram of oriented gradients, local phase quantisation and binarised statistical image features) after applying principal component analysis to each of them, in order to recognise the six basic expressions and the neutral face from static images. Their proposed fusion method has been tested on four popular databases, JAFFE, MMI, CASIA and CK+, using two different cross-validation schemes: subject-independent and leave-one-subject-out. The obtained results show that their method outperforms both raw feature concatenation and state-of-the-art methods.
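The sketch below illustrates the general idea described in the abstract: apply PCA separately to each shallow descriptor, concatenate the transformed blocks, and train a classifier on the fused representation. It is a minimal illustration, not the authors' implementation: synthetic feature matrices stand in for the real HOG, LPQ and BSIF extractors, the PCA dimensionality is arbitrary, and a linear SVM is used purely for demonstration, since the paper's exact classifier and settings are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for three shallow descriptors (HOG, LPQ, BSIF)
# extracted from face images; real feature extractors would replace these.
n_samples, n_classes = 300, 7          # six basic expressions + neutral
y = rng.integers(0, n_classes, n_samples)
hog_feats = rng.normal(size=(n_samples, 900)) + y[:, None] * 0.05
lpq_feats = rng.normal(size=(n_samples, 256)) + y[:, None] * 0.05
bsif_feats = rng.normal(size=(n_samples, 256)) + y[:, None] * 0.05


def fuse_pca_features(train_blocks, test_blocks, n_components=50):
    """Fit one PCA per descriptor on the training split, project both splits,
    and concatenate the transformed blocks into a single fused vector."""
    train_parts, test_parts = [], []
    for tr, te in zip(train_blocks, test_blocks):
        pca = PCA(n_components=min(n_components, tr.shape[1]))
        train_parts.append(pca.fit_transform(tr))
        test_parts.append(pca.transform(te))
    return np.hstack(train_parts), np.hstack(test_parts)


idx_tr, idx_te = train_test_split(np.arange(n_samples), test_size=0.3,
                                  stratify=y, random_state=0)
X_tr, X_te = fuse_pca_features(
    [hog_feats[idx_tr], lpq_feats[idx_tr], bsif_feats[idx_tr]],
    [hog_feats[idx_te], lpq_feats[idx_te], bsif_feats[idx_te]],
)

# Linear SVM used here only to show how the fused features feed a classifier.
clf = SVC(kernel="linear").fit(X_tr, y[idx_tr])
print("accuracy:", accuracy_score(y[idx_te], clf.predict(X_te)))
```

The key contrast with raw feature concatenation is that PCA is fitted per descriptor before fusion, so each descriptor contributes a compact, decorrelated block rather than its full-dimensional histogram.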