Open Access
The Effective Evaluation of Emotions in the Visual Emotion Images Using Convolutional Neural Networks
Author(s) - Modestas Motiejauskas, Gintautas Dzemyda
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3596484
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
This paper develops a model for recognizing emotions in visual images. We propose integrating contrastive-center loss optimization, which improves emotion recognition over the baseline when training a convolutional neural network. The contrastive-center loss optimizes deep neural networks by enhancing feature discriminability and comprises two key components: intra-class compactness and inter-class separability. We suggest controlling the impact of inter-class separability on the loss, and we combine cross-entropy with contrastive-center loss to form the total loss. In addition, we apply dimensionality reduction (visualization) to interactively evaluate how the objects (images) of the test set are arranged, and how this arrangement, as well as the classification as a whole, can be improved by choosing the best weighting of the contrastive-center loss within the total loss. The efficiency of the proposed improvements is examined on three datasets: WEBEmo, FI-8, and EmoSet-118K, where the model raises classification accuracy over the baseline by 1.6%, 2.2%, and 2.52%, respectively.
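To make the abstract's two-term objective concrete, the following is a minimal NumPy sketch of a contrastive-center loss (in the commonly cited formulation of Qi & Su, 2017: own-center distance over summed other-center distances) combined with a cross-entropy term. The weighting parameter `lam` and the stability constant `delta` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def contrastive_center_loss(features, labels, centers, delta=1e-6):
    """Contrastive-center loss: for each sample, half the squared distance
    to its own class center divided by the summed squared distances to all
    other class centers (delta avoids division by zero). Encourages
    intra-class compactness and inter-class separability simultaneously."""
    loss = 0.0
    for x, y in zip(features, labels):
        d_own = np.sum((x - centers[y]) ** 2)
        d_other = sum(np.sum((x - centers[k]) ** 2)
                      for k in range(len(centers)) if k != y)
        loss += 0.5 * d_own / (d_other + delta)
    return loss / len(features)

def total_loss(ce_loss, features, labels, centers, lam=0.1):
    # Total objective as the abstract describes: cross-entropy plus a
    # weighted contrastive-center term; lam (name assumed here) controls
    # the strength of the contrastive-center impact on the total loss.
    return ce_loss + lam * contrastive_center_loss(features, labels, centers)
```

Features clustered tightly around their own class centers yield a smaller contrastive-center term than features sitting between centers, which is the discriminability pressure the paper adds on top of cross-entropy.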
