Voluntary Eye Blink and SSVEP-Based Selective Attention Interface for XR User Authentication: An Explainable Neural Decoding and Machine Learning Approach to Reducing Visual Fatigue
Author(s) -
Min Seong Chae,
Abdul Rehman,
Yeni Kim,
Jaedeok Kim,
Yaeeun Han,
Sangin Park,
Sungchul Mun
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3613355
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
The growing demand for secure, immersive authentication in extended reality (XR) environments calls for neural interfaces that are both robust and user-friendly. This study introduces a novel and robust dual-modality EEG-based authentication framework that independently exploits (i) Steady-State Visually Evoked Potentials (SSVEP) and (ii) voluntary eye-blink–induced EEG responses as covert neural signatures. Both signals are recorded using a 64-channel EEG system seamlessly integrated with the Microsoft HoloLens 2 for immersive XR-based user evaluation. To mitigate visual fatigue while preserving signal fidelity, we replace conventional flicker stimuli with a 10 Hz grow–shrink visual design. We employ a modality-specific classification strategy, modeling SSVEP and eye-blink signals independently to retain their distinct neurophysiological characteristics. A multi-stage feature selection pipeline combines SHAP and Random Forest rankings, followed by logistic regression–based permutation importance to identify the top 10 discriminative features per modality. These features undergo statistical validation via non-parametric tests to ensure physiological plausibility and class separability. Classification is subsequently performed using four machine learning models—Random Forest, XGBoost, Support Vector Machine, and Logistic Regression—with Random Forest and XGBoost consistently yielding the highest performance. Evaluated across 20 participants using user-wise validation, our framework achieves over 99% accuracy and near-perfect ROC-AUC scores for both modalities, confirming strong discriminability between genuine and impostor attempts. Our results demonstrate that interpretable, fatigue-aware EEG features can deliver high authentication performance under real-time XR conditions. The proposed system is lightweight, spoof-resistant, and well-suited for deployment in XR-based defense, training, and industrial applications.
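The abstract's multi-stage feature selection (tree-based rankings followed by logistic-regression permutation importance, then classification on the top 10 features) can be illustrated with a minimal scikit-learn sketch. This is an assumption-laden stand-in, not the authors' pipeline: SHAP values are replaced here by Random Forest impurity importances, and the data is synthetic rather than the study's EEG features.

```python
# Hypothetical sketch of a multi-stage feature-selection pipeline.
# Synthetic data stands in for per-modality EEG features (genuine vs. impostor);
# SHAP rankings are approximated by Random Forest impurity importances.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic two-class problem as a placeholder for EEG feature vectors.
X, y = make_classification(n_samples=400, n_features=40, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: coarse ranking with a Random Forest; keep the top 20 features.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top20 = np.argsort(rf.feature_importances_)[::-1][:20]

# Stage 2: refine via logistic-regression permutation importance on held-out data.
lr = LogisticRegression(max_iter=1000).fit(X_tr[:, top20], y_tr)
perm = permutation_importance(lr, X_te[:, top20], y_te, n_repeats=10,
                              random_state=0)
top10 = top20[np.argsort(perm.importances_mean)[::-1][:10]]

# Stage 3: final classifier restricted to the 10 selected features.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr[:, top10], y_tr)
print(f"held-out accuracy on 10 selected features: {clf.score(X_te[:, top10], y_te):.2f}")
```

The two-stage ranking mirrors the idea described in the abstract: an expressive model proposes candidate features, and a simpler, interpretable model's permutation importance confirms which of them carry genuinely discriminative signal.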