
Wavelet filterbank‐based EEG rhythm‐specific spatial features for covert speech classification
Author(s) -
Biswas Sukanya,
Sinha Rohit
Publication year - 2022
Publication title -
IET Signal Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.384
H-Index - 42
eISSN - 1751-9683
pISSN - 1751-9675
DOI - 10.1049/sil2.12059
Subject(s) - discriminative model , pattern recognition (psychology) , computer science , speech recognition , covert , artificial intelligence , support vector machine , electroencephalography , wavelet , feature (linguistics) , context (archaeology) , psychology , paleontology , philosophy , linguistics , psychiatry , biology
This work addresses the derivation of rhythm‐specific spatial patterns of electroencephalographic (EEG) signals for the covert speech EEG classification task. The study is performed on a publicly accessible multi‐channel covert speech EEG database consisting of multi‐syllabic words. To derive more discriminative features, the data from each channel are decomposed into distinct bands covering the five basic EEG rhythms using a discrete wavelet transform (DWT)‐based signal decomposition algorithm. For each band, multi‐class common spatial pattern (CSP) features are then computed using joint approximate diagonalisation, and the final feature vector is formed by retaining a few significant CSP components from all five bands. Radial basis function kernel‐based support vector machines are used for covert speech classification. Under 5‐fold cross‐validation, the proposed DWT‐based bandwise‐CSP features yield an average classification accuracy of 94%, a relative improvement of about 24% over the existing (non‐decomposed) CSP feature. To assess generalisation, the proposed approach is also evaluated on another covert speech database comprising more classes and subjects. The study highlights the discovery of more discriminative patterns through rhythm‐specific processing in the context of covert speech classification. The proposed approach has the potential to be useful in other brain‐computer interface paradigms that employ EEG signals.
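The band-wise pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: it assumes a Haar mother wavelet (the abstract does not name one), uses the wavelet detail/approximation coefficient sequences directly as the band representations, and substitutes a standard two-class CSP solved by a generalized eigendecomposition for the paper's multi-class joint-approximate-diagonalisation CSP. Channel counts, trial counts, and the synthetic data are all made up for the demo.

```python
import numpy as np
from scipy.linalg import eigh

def haar_dwt(x, levels=5):
    """Multilevel Haar DWT along the last axis.
    Returns (final approximation, [detail level 1 ... level L])."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        if a.shape[-1] % 2:                        # pad to an even length
            a = np.concatenate([a, a[..., -1:]], axis=-1)
        even, odd = a[..., ::2], a[..., 1::2]
        details.append((even - odd) / np.sqrt(2))  # high-pass (detail) band
        a = (even + odd) / np.sqrt(2)              # low-pass (approximation)
    return a, details

def csp_filters(trials_a, trials_b, n_components=4):
    """Two-class CSP spatial filters from a generalized eigenproblem.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    _, vecs = eigh(ca, ca + cb)        # solves ca w = lambda (ca + cb) w
    k = n_components // 2              # keep the extreme eigenvectors
    picks = list(range(k)) + list(range(vecs.shape[1] - k, vecs.shape[1]))
    return vecs[:, picks].T            # (n_components, n_channels)

def csp_log_var(W, trial):
    """Normalised log-variance features of the CSP-projected trial."""
    v = (W @ trial).var(axis=1)
    return np.log(v / v.sum())

def bandwise_csp_features(trials_a, trials_b, levels=5):
    """Fit one CSP per wavelet band; concatenate per-band features."""
    def to_bands(trials):
        per_trial = []
        for t in trials:
            a, ds = haar_dwt(t, levels)
            per_trial.append([a] + ds)  # levels + 1 bands per trial
        return [np.stack([bt[b] for bt in per_trial])
                for b in range(levels + 1)]
    ba, bb = to_bands(trials_a), to_bands(trials_b)
    filters = [csp_filters(xa, xb) for xa, xb in zip(ba, bb)]
    def featurize(band_trials):
        n = band_trials[0].shape[0]
        return np.stack([
            np.concatenate([csp_log_var(W, x[i])
                            for W, x in zip(filters, band_trials)])
            for i in range(n)])
    return featurize(ba), featurize(bb)

# Synthetic demo: class A has inflated variance on channel 0.
rng = np.random.default_rng(0)
trials_a = rng.normal(size=(12, 4, 256))
trials_a[:, 0, :] *= 3.0
trials_b = rng.normal(size=(12, 4, 256))
F_a, F_b = bandwise_csp_features(trials_a, trials_b)
print(F_a.shape)   # (12, 24): 6 bands x 4 CSP components per band
```

In the paper's setting, feature vectors like `F_a`/`F_b` would then be fed to an RBF-kernel SVM (e.g. scikit-learn's `SVC(kernel="rbf")`) under 5-fold cross-validation.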