Open Access
FGMFN: Fine-Grained Multiscale Cross-Modal Sentiment Analysis in Advertisements
Author(s) - Han Wang, Peng Chen, Xiangyu Du
Publication year - 2025
Publication title - IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3571624
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Cross-modal sentiment analysis in advertising has gained significant attention due to its potential in brand communication and consumer behavior analysis. However, traditional methods struggle to handle the multi-scale features and redundant objects in advertising images effectively, resulting in limited emotion recognition accuracy. To address the challenges of insufficient multi-scale features and target redundancy in multi-modal sentiment analysis of advertisements, we introduce a novel framework, the Fine-Grained Multiscale Cross-Modal Feature Network (FGMFN). The model is designed to process multi-scale feature inputs and to facilitate efficient sentiment fusion between images and text. FGMFN employs a multi-scale network to extract key features from advertising images and uses the visual features to guide the textual representation. Additionally, to reduce textual ambiguity caused by strong intra-class similarity in advertising contexts, we introduce a multi-task learning approach that combines an image-text matching loss with an image-text mutual information loss. This strategy narrows the gap between visual features and sentiment semantics, improving the model's generalization capability. Finally, we construct a fine-grained image-text sentiment analysis dataset (YTB-ADS), which, in contrast to traditional coarse-grained datasets with high intra-class similarity, better serves the specific needs of advertising sentiment analysis. Experimental results show that FGMFN outperforms existing methods on the YTB-ADS dataset, as well as on the publicly available Twitter-2015 and Twitter-2017 datasets, confirming the model's superior performance in advertising sentiment analysis tasks.
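The multi-task objective mentioned in the abstract (an image-text matching loss combined with an image-text mutual information loss) can be sketched generically. The abstract does not give FGMFN's exact formulation, so everything below is an illustrative assumption: the sigmoid-based matching loss, the InfoNCE-style mutual information estimator, the `temperature` and `mi_weight` values, and all function names are generic choices, not the authors' implementation.

```python
import numpy as np

def image_text_matching_loss(img, txt):
    """Illustrative ITM loss: binary cross-entropy where matched (i, i) pairs
    are positives and a shuffled pairing supplies in-batch negatives."""
    def bce(score, label):
        p = 1.0 / (1.0 + np.exp(-score))  # sigmoid match probability
        return -(label * np.log(p + 1e-9) + (1 - label) * np.log(1 - p + 1e-9))
    pos = np.sum(img * txt, axis=1)                       # matched-pair scores
    neg = np.sum(img * np.roll(txt, 1, axis=0), axis=1)   # mismatched-pair scores
    return float(np.mean(bce(pos, 1.0)) + np.mean(bce(neg, 0.0)))

def mutual_information_loss(img, txt, temperature=0.07):
    """InfoNCE-style lower bound on image-text mutual information:
    each image should retrieve its own text among the batch."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / temperature                 # cosine-similarity logits
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_sm = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_sm)))            # -log p(correct pairing)

def multitask_loss(img, txt, mi_weight=0.5):
    """Combined objective: matching loss plus weighted MI loss (weight assumed)."""
    return image_text_matching_loss(img, txt) + mi_weight * mutual_information_loss(img, txt)
```

As a sanity check, embeddings whose image and text rows are aligned pair-by-pair should yield a much lower mutual-information loss than the same rows deliberately mispaired, since the InfoNCE term rewards the diagonal of the similarity matrix.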
