Open Access
Enhancing Motor Imagery EEG Classification in Brain-Computer Interfaces via TransformerNet
Author(s) -
Ulvi Baspinar,
Yahya Tastan
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3594083
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Abstract -
Brain-computer interfaces (BCIs) offer promising solutions for assisting individuals with disabilities, supporting neurorehabilitation, and enhancing human capabilities. However, the limited decoding accuracy of EEG-based motor imagery (MI) signals poses a major challenge for the practical deployment of BCI systems. A common approach involves using signals from opposite hemispheres to boost classification accuracy. Yet, to control devices such as robotic hands or prosthetics in a human-like manner, it is essential to accurately classify hand opening and closing tasks using EEG signals from the same motor cortex region. This study introduces TransformerNet, a novel deep learning architecture specifically designed to classify hand open-close MI tasks from the same brain region. The task is particularly challenging due to the high similarity and overlapping nature of the EEG signals. TransformerNet combines a convolutional module, inspired by EEGNet, to extract local spatial features, with a Transformer encoder that captures long-range temporal dependencies. Furthermore, a channel attention mechanism enhances the model’s ability to focus on the most informative features. In experimental evaluations, TransformerNet achieved an average classification accuracy of 85.97%, outperforming traditional deep learning methods. The model effectively captures high-level temporal-spectral patterns and uncovers hidden dependencies within the EEG signals. These results demonstrate the potential of integrating attention mechanisms with Transformer-based architectures to improve MI-based BCI performance. This advancement holds promise for real-world applications such as brain-controlled prosthetics, assistive devices, and human-computer interaction, moving BCI technologies closer to practical and reliable implementation.
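The abstract describes TransformerNet as an EEGNet-inspired convolutional front end, a channel attention mechanism, and a Transformer encoder that models long-range temporal dependencies. The sketch below shows one plausible way such an architecture could be assembled in PyTorch; the layer sizes, kernel lengths, electrode count, and the two-class (hand open vs. close) head are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch of a TransformerNet-style model: EEGNet-inspired convolutions,
# channel attention, and a Transformer encoder. All hyperparameters below are
# assumptions for illustration, not the configuration reported in the paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over feature channels."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, channels, 1, time)
        w = x.mean(dim=(2, 3))       # global average pool -> (batch, channels)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                 # re-weight the feature maps


class TransformerNet(nn.Module):
    """EEGNet-style convolutions + channel attention + Transformer encoder."""

    def __init__(self, n_eeg_channels: int = 3, n_classes: int = 2,
                 f1: int = 16, depth: int = 2, d_model: int = 32,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Temporal convolution followed by a depthwise spatial convolution,
        # mirroring the first two blocks of EEGNet.
        self.conv = nn.Sequential(
            nn.Conv2d(1, f1, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(f1),
            nn.Conv2d(f1, f1 * depth, kernel_size=(n_eeg_channels, 1),
                      groups=f1, bias=False),
            nn.BatchNorm2d(f1 * depth),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Dropout(0.25),
        )
        self.attn = ChannelAttention(f1 * depth)
        self.proj = nn.Linear(f1 * depth, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=64,
            dropout=0.25, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, 1, eeg_channels, time)
        x = self.conv(x)                   # -> (batch, f1*depth, 1, time')
        x = self.attn(x)                   # emphasize informative channels
        x = x.squeeze(2).transpose(1, 2)   # -> (batch, time', f1*depth)
        x = self.proj(x)                   # -> (batch, time', d_model)
        x = self.encoder(x)                # long-range temporal dependencies
        x = x.mean(dim=1)                  # average over the time tokens
        return self.classifier(x)


if __name__ == "__main__":
    # Dummy batch: 8 trials, 3 motor-cortex electrodes, 2 s at 250 Hz (assumed).
    dummy = torch.randn(8, 1, 3, 500)
    model = TransformerNet(n_eeg_channels=3, n_classes=2)
    print(model(dummy).shape)              # expected: torch.Size([8, 2])
```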
