Facial Expression Recognition by Deep Learning Models Using Multiple Datasets
Author(s) - Kuremoto Takashi, Mori Yuya, Mabu Shingo
Publication year - 2025
Publication title - Electronics and Communications in Japan
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.131
H-Index - 13
eISSN - 1942-9541
pISSN - 1942-9533
DOI - 10.1002/ecj.12484
ABSTRACT Facial Expression Recognition has been studied for many years; however, it remains a challenging task in real‐world environments due to complex backgrounds, varying illumination conditions, and online processing issues. In this study, we propose a deep learning model, CAER‐Net‐RS, by leveraging multiple training datasets. The proposed model integrates three neural networks: the Face Network, the Context Network, and the Adaptive Network. Different datasets are employed for the pretraining of these networks: the facial expression image dataset RAF‐DB for the Face Network, the scene image dataset Places365‐Standard for the Context Network, and the CAER‐S dataset for the Adaptive Network. In the experiment, the proposed model achieved an average recognition accuracy of 85.20% across seven types of facial expressions, compared to 70.92% for the conventional Context‐Aware Emotion Recognition Network (CAER‐Net).
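The abstract names the three subnetworks and their pretraining datasets but gives no implementation detail, so the following is a minimal PyTorch sketch, not the authors' code. It assumes a CAER-Net-style two-stream layout: small convolutional encoders stand in for the Face Network (which would be pretrained on RAF-DB) and the Context Network (which would be pretrained on Places365-Standard), and an attention-style Adaptive Network weights and fuses the two streams before a seven-way expression classifier. All layer sizes, module names, and input resolutions below are illustrative assumptions.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Generic encoder stage: 3x3 conv -> batch norm -> ReLU -> 2x2 max-pool
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class TwoStreamFER(nn.Module):
    # Illustrative CAER-style model: face stream + context stream + adaptive fusion
    def __init__(self, num_classes=7):
        super().__init__()
        # Face Network: encodes the cropped face (pretraining on RAF-DB assumed)
        self.face_net = nn.Sequential(conv_block(3, 32), conv_block(32, 64), conv_block(64, 128))
        # Context Network: encodes the surrounding scene (pretraining on Places365-Standard assumed)
        self.context_net = nn.Sequential(conv_block(3, 32), conv_block(32, 64), conv_block(64, 128))
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Adaptive Network: learns per-sample weights for the two streams (tuned on CAER-S assumed)
        self.adaptive_net = nn.Sequential(
            nn.Linear(256, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 2), nn.Softmax(dim=1),
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, face_img, context_img):
        f = self.pool(self.face_net(face_img)).flatten(1)         # (B, 128) face features
        c = self.pool(self.context_net(context_img)).flatten(1)   # (B, 128) context features
        w = self.adaptive_net(torch.cat([f, c], dim=1))            # (B, 2) stream weights
        fused = torch.cat([w[:, :1] * f, w[:, 1:] * c], dim=1)     # weighted concatenation
        return self.classifier(fused)                              # logits over 7 expressions

# Example: one forward pass on dummy 96x96 face crops and 128x128 scene images
model = TwoStreamFER()
logits = model(torch.randn(2, 3, 96, 96), torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 7])

In this sketch the adaptive weights act as a simple soft gate between face and context evidence; the actual CAER-Net-RS fusion and training procedure may differ.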
