Eye Movement Signal Classification for Developing Human-Computer Interface Using Electrooculogram
Author(s) -
M. Thilagaraj,
B. Dwarakanath,
S. Ramkumar,
K. Karthikeyan,
Aneesh Jayan Prabhu,
Gurusamy Saravanakumar,
M. Pallikonda Rajasekaran,
N. Arunkumar
Publication year - 2021
Publication title -
Journal of Healthcare Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.509
H-Index - 29
eISSN - 2040-2309
pISSN - 2040-2295
DOI - 10.1155/2021/7901310
Subject(s) - computer science , classifier (uml) , brain–computer interface , eye movement , wheelchair , artificial intelligence , signal (programming language) , artificial neural network , human–computer interaction , pattern recognition (psychology) , speech recognition , electroencephalography , medicine
Human-computer interfaces (HCIs) allow people to control electronic devices such as computers, mice, wheelchairs, and keyboards through biosignal channels that bypass the motor nervous system. These signals enable communication between users and electronically controllable devices, and such interfaces can greatly ease the lives of paralyzed patients whose cognitive functioning is intact. The main aim of this study was to test the feasibility of a nine-state HCI using modern signal-processing techniques to address the problems faced by paralyzed users. An Analog Digital Instrument T26 with a five-electrode system was used to acquire the signals. Twenty subjects participated voluntarily. The recorded signals were preprocessed with a 50 Hz notch filter to remove external interference, and features were extracted by applying the convolution theorem. The extracted features were then classified using an Elman recurrent neural network (ERNN) and a distributed time delay neural network, which achieved average classification accuracies of 90.82% and 90.56%, respectively. Classifier accuracy was examined through single-trial analysis, and classifier performance was assessed using the bit transfer rate (BTR) across the twenty subjects to judge the feasibility of designing the HCI. The results showed that, for most subjects, the ERNN model had greater potential to classify, identify, and recognize EOG signals than the distributed time delay network. The control signals generated by the classifiers can be applied to navigate assistive devices such as a mouse, keyboard, or wheelchair for disabled people.
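The 50 Hz notch-filter preprocessing step described above can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: it assumes a standard second-order IIR notch (zeros on the unit circle at the power-line frequency, poles just inside it) and a hypothetical sampling rate of 500 Hz; the function names are ours.

```python
import math

def design_notch(f0, fs, r=0.95):
    """Second-order IIR notch filter: zeros on the unit circle at f0 Hz,
    poles at radius r inside it (r controls the notch width).
    Returns (b, a) difference-equation coefficients."""
    w0 = 2.0 * math.pi * f0 / fs
    c = math.cos(w0)
    b = [1.0, -2.0 * c, 1.0]          # numerator: zeros at e^{+-j w0}
    a = [1.0, -2.0 * r * c, r * r]    # denominator: poles at r * e^{+-j w0}
    k = sum(a) / sum(b)               # normalize for unity gain at DC
    return [k * x for x in b], a

def iir_filter(b, a, x):
    """Apply the filter sample by sample (direct form II transposed)."""
    y, z1, z2 = [], 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + z1
        z1 = b[1] * xn - a[1] * yn + z2
        z2 = b[2] * xn - a[2] * yn
        y.append(yn)
    return y

# Example: suppress a pure 50 Hz hum sampled at an assumed 500 Hz,
# while leaving a slower (EOG-band) 10 Hz component largely intact.
fs = 500.0
b, a = design_notch(50.0, fs)
hum = [math.sin(2 * math.pi * 50.0 * n / fs) for n in range(1000)]
slow = [math.sin(2 * math.pi * 10.0 * n / fs) for n in range(1000)]
hum_out = iir_filter(b, a, hum)
slow_out = iir_filter(b, a, slow)
```

After the filter's transient dies out, the 50 Hz component is almost entirely removed while the 10 Hz component passes with near-unit gain, which is the behavior the preprocessing stage relies on before feature extraction.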
Accelerating Research
Address
John Eccles House, Robert Robinson Avenue,
Oxford Science Park, Oxford
OX4 4GP, United Kingdom