Open Access
Semi-supervised Deep Learning in Motor Imagery-Based Brain-Computer Interfaces with Stacked Variational Autoencoder
Author(s) -
Junjian Chen,
Zhuliang Yu,
Zhenghui Gu
Publication year - 2020
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1631/1/012007
Subject(s) - autoencoder , brain–computer interface , motor imagery , decoding methods , computer science , deep learning , artificial intelligence , pattern recognition (psychology) , encoding (memory) , machine learning , electroencephalography , neural decoding , algorithm , psychology , neuroscience
Recently, deep learning methods have contributed to the development of motor imagery (MI)-based brain-computer interface (BCI) research. However, these methods typically focus on supervised learning with labelled data and fail to exploit unlabelled data, whose additional information may be critical for improving MI decoding performance. To address this problem, we propose a semi-supervised deep learning method based on a stacked variational autoencoder (SVAE) for MI decoding, where the input to the network is an envelope representation of the EEG signal. Under the SVAE framework, the labelled training data and unlabelled test data are trained collaboratively. Experimental evaluation on the BCI Competition IV dataset 2a shows that SVAE outperforms competing methods and yields state-of-the-art performance in decoding MI tasks. Hence, the proposed method is a promising tool for research on MI-based BCI systems.
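The abstract states that the network's input is an envelope representation of the EEG signal. As a minimal sketch of what such a preprocessing step might look like, the snippet below estimates an amplitude envelope by rectifying the signal and smoothing it with a moving average; the paper does not specify its exact envelope estimator (Hilbert-transform magnitude is another common choice), so the window length, sampling rate, and synthetic test signal here are all illustrative assumptions.

```python
import math

def moving_average(x, w):
    """Smooth a sequence with a centered moving average of width w
    (a hypothetical smoother; the paper's estimator is unspecified)."""
    half = w // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def envelope(signal, window):
    """Estimate the amplitude envelope: rectify, then smooth."""
    return moving_average([abs(v) for v in signal], window)

# Synthetic amplitude-modulated trace: a 10 Hz carrier whose amplitude
# ramps up over one second, loosely mimicking a band-power change
# during motor imagery (sampling rate of 250 Hz is assumed).
fs = 250
t = [i / fs for i in range(fs)]
amp = [0.5 + 1.5 * ti for ti in t]           # linearly growing amplitude
x = [a * math.sin(2 * math.pi * 10 * ti) for a, ti in zip(amp, t)]

# A 25-sample window spans one full 10 Hz cycle at fs = 250 Hz.
env = envelope(x, window=25)
print(env[200] > env[60])  # the envelope tracks the growing amplitude
```

With the oscillation averaged out, the envelope rises roughly in proportion to the modulating amplitude, giving a slowly varying feature vector of the kind a VAE-based decoder could take as input.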