Fusion of transformed shallow features for facial expression recognition
Abstract
Facial expressions convey important cues about a person's affective state, cognitive activity, intention and personality. Automatic facial expression recognition systems are attracting growing interest year after year owing to their wide range of applications in fields such as human-computer/robot interaction, medical applications, animation and video gaming. In this study, the authors propose to fuse different feature descriptors (histogram of oriented gradients, local phase quantisation and binarised statistical image features), after applying principal component analysis to each of them, in order to recognise the six basic expressions and the neutral face from static images. The proposed fusion method has been evaluated on four popular databases, JAFFE, MMI, CASIA and CK+, using two cross-validation schemes: subject-independent and leave-one-subject-out. The results show that the method outperforms both raw feature concatenation and state-of-the-art methods.
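The sketch below illustrates the general fusion scheme described in the abstract, not the authors' actual implementation: PCA is applied to each descriptor separately, the reduced features are concatenated, and a classifier is trained on the fused representation. The feature matrices are random placeholders standing in for HOG, LPQ and BSIF features, and the SVM classifier is an assumption chosen for illustration since the abstract does not specify the classifier.

```python
# Hedged sketch of transformed-shallow-feature fusion:
# PCA per descriptor, then concatenation of the projections.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200                                   # number of face images (placeholder)
hog_feats = rng.normal(size=(n_samples, 1764))    # placeholder HOG features
lpq_feats = rng.normal(size=(n_samples, 256))     # placeholder LPQ histograms
bsif_feats = rng.normal(size=(n_samples, 256))    # placeholder BSIF histograms
labels = rng.integers(0, 7, size=n_samples)       # 6 basic expressions + neutral

def reduce_and_fuse(feature_blocks, n_components=50):
    """Apply PCA to each descriptor block, then concatenate the projections."""
    reduced = [PCA(n_components=n_components).fit_transform(block)
               for block in feature_blocks]
    return np.hstack(reduced)

fused = reduce_and_fuse([hog_feats, lpq_feats, bsif_feats])

# Any multi-class classifier can consume the fused representation;
# a linear SVM is used here purely as an example.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(fused, labels)
```

In practice the placeholder arrays would be replaced by descriptors extracted from the face images, and the evaluation would follow the subject-independent or leave-one-subject-out protocols mentioned above.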