Open Access
EEG-SKDNet: A Self-Knowledge Distillation Model with Scaled Weights for Emotion Recognition from EEG Signals
Author(s) -
Thuong Duong Thi Mai,
Duc-Quang Vu,
Huy Nguyen Phuong,
Trung-Nghia Phung
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3594671
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Electroencephalogram-based emotion recognition has garnered increasing attention due to its potential in human–computer interaction and affective computing. While recent deep learning methods have achieved remarkable performance on this task, most approaches emphasize accuracy at the expense of computational efficiency, making them impractical for real-time applications or deployment on resource-constrained devices. This paper addresses the critical challenge of achieving high-performance electroencephalography-based emotion recognition at low computational cost by introducing a lightweight yet robust learning strategy. We propose a novel self-knowledge distillation framework that requires no teacher model. Unlike conventional knowledge distillation approaches that rely on large pre-trained teacher networks, our method leverages two different augmented views of the electroencephalography input, which are passed through a single student model to generate diverse predictions. These predictions are then used to transfer knowledge internally within the model. To enhance this self-distillation process, we introduce a scaled-weights mechanism that dynamically adjusts the contribution of each soft label based on the original input, allowing the model to focus on electroencephalography segments with more informative or high-intensity signal regions. Experimental results show that the proposed framework consistently outperforms the baseline and even state-of-the-art deep models, achieving a superior trade-off among performance, model size, computational cost, and inference time. This makes the proposed framework highly suitable for deployment in real-time and edge-based emotion recognition applications.
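The core idea in the abstract — distilling between two augmented views of the same input through a single student, with a soft-label weight derived from the original signal — can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the symmetric-KL formulation, the temperature value, and the amplitude-based `intensity_weight` are all hypothetical choices standing in for details the abstract does not specify.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax producing soft labels from raw logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def intensity_weight(segment):
    """Hypothetical scaled weight: mean absolute amplitude of the original
    EEG segment, squashed into (0, 1) so high-intensity segments count more."""
    mean_abs = sum(abs(x) for x in segment) / len(segment)
    return mean_abs / (1.0 + mean_abs)

def self_distillation_loss(logits_view1, logits_view2, segment, temperature=4.0):
    """Teacher-free distillation term between the student's predictions on
    two augmented views, scaled by the intensity of the original segment."""
    p = softmax(logits_view1, temperature)
    q = softmax(logits_view2, temperature)
    # Symmetric KL keeps the loss independent of which view is "teacher".
    return intensity_weight(segment) * 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
```

In training, this term would be added to the ordinary cross-entropy loss on the true emotion label, so the intensity weight only modulates how strongly the two views are pulled toward agreeing with each other.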
