Open Access
Machine-vision fused brain machine interface based on dynamic augmented reality visual stimulation
Author(s) -
Deyu Zhang,
Siyu Liu,
Kai Wang,
Jian Zhang,
Duanduan Chen,
Yilong Zhang,
Lei Nie,
Jiajia Yang,
Shintaro Funabashi,
Jinglong Wu,
Tianyi Yan
Publication year - 2021
Publication title -
Journal of Neural Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.594
H-Index - 111
eISSN - 1741-2552
pISSN - 1741-2560
DOI - 10.1088/1741-2552/ac2c9e
Subject(s) - computer science , brain–computer interface , artificial intelligence , augmented reality , robot , computer vision , machine vision , electroencephalography , psychology
Objective. Brain-machine interfaces (BMIs) translate human intent into machine actions, and the visual stimulation (VS) paradigm is one of the most widely used approaches. Although VS-based BMIs achieve a relatively high information transfer rate (ITR), it remains difficult for BMIs to control machines in dynamic environments (for example, grasping a moving object or targeting a walking person).

Approach. In this study, we used a BMI based on augmented reality (AR) VS (AR-VS). The proposed VS was generated dynamically from machine vision, and human intent was interpreted with a dynamic decision time interval approach. The proposed paradigm controlled a robot that coordinates task motion and self-motion, enabling fast and flexible operation.

Methods. Objects in the scene were first recognized by machine vision and then tracked by optical flow. The AR-VS was generated from the recognized objects' parameters, which determined the number and spatial distribution of the stimuli. Electroencephalogram (EEG) features corresponding to the VS and human intent were recorded with a dry-electrode EEG cap and classified with the filter bank canonical correlation analysis (FBCCA) method. Key parameters of the AR-VS, including stimulus size, stimulus frequency, and the moving speed of dynamic objects, were analyzed, together with the ITR and the performance of the BMI-controlled robot.

Conclusion and significance. The ITR of the proposed AR-VS paradigm for nine healthy subjects was 36.3 ± 20.1 bits min⁻¹. In the online robot control experiment, brain-controlled hybrid tasks, including self-movement and object grasping, were completed 64% faster than with the traditional steady-state visual evoked potential (SSVEP) paradigm. The proposed AR-VS paradigm could be optimized and adopted in other VS-based BMIs, such as the P300, omitted stimulus potential, and miniature event-related potential paradigms, for better performance in dynamic environments.
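
The Methods describe recognizing scene objects with machine vision and then tracking them with optical flow so the AR stimuli can follow moving targets. Below is a minimal sketch of that tracking step using OpenCV's pyramidal Lucas-Kanade optical flow; the video source, the detection box, and the point-seeding strategy are illustrative assumptions, not the authors' pipeline.

```python
import cv2
import numpy as np

# Hypothetical video source; the paper uses a live camera view of the scene.
cap = cv2.VideoCapture("scene.mp4")

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Stand-in for the machine-vision recognizer: seed trackable corner points
# inside an assumed bounding box of one recognized object.
x, y, w, h = 100, 80, 120, 120  # assumed detection box
mask = np.zeros_like(prev_gray)
mask[y:y + h, x:x + w] = 255
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                 qualityLevel=0.01, minDistance=5, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pyramidal Lucas-Kanade: propagate the object's points to the new frame.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    good = new_points[status.flatten() == 1]

    if len(good):
        # The tracked centroid is where a dynamic AR stimulus would be rendered.
        cx, cy = good.reshape(-1, 2).mean(axis=0)
        print(f"stimulus anchor: ({cx:.1f}, {cy:.1f})")

    prev_gray, points = gray, good.reshape(-1, 1, 2)

cap.release()
```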
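
The EEG features are classified with FBCCA. The sketch below follows the standard FBCCA recipe (sub-band filtering, CCA against sine-cosine reference templates, and a weighted combination of squared correlations, as in Chen et al. 2015); the sampling rate, stimulus frequencies, sub-band edges, and weighting constants are assumptions, not values taken from this paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250                         # assumed EEG sampling rate (Hz)
FREQS = [8.0, 9.0, 10.0, 11.0]   # assumed stimulus frequencies (Hz)
N_HARM, N_BANDS = 4, 5           # reference harmonics / filter-bank sub-bands

def reference(freq, n_samples):
    """Sine-cosine templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / FS
    return np.column_stack(
        [f(2 * np.pi * (h + 1) * freq * t)
         for h in range(N_HARM) for f in (np.sin, np.cos)])

def fbcca_classify(eeg):
    """eeg: array (n_samples, n_channels). Returns index of detected frequency."""
    n = eeg.shape[0]
    # Sub-band weights w(m) = m^-a + b, with a=1.25, b=0.25 (Chen et al. 2015).
    w = np.arange(1, N_BANDS + 1) ** -1.25 + 0.25
    scores = []
    for freq in FREQS:
        ref = reference(freq, n)
        rho = 0.0
        for band in range(N_BANDS):
            lo = 8.0 * (band + 1)  # assumed sub-band lower edges: 8, 16, ... Hz
            b_, a_ = butter(4, [lo, 90.0], btype="bandpass", fs=FS)
            sub = filtfilt(b_, a_, eeg, axis=0)
            # Canonical correlation between the sub-band EEG and the templates.
            cca = CCA(n_components=1).fit(sub, ref)
            u, v = cca.transform(sub, ref)
            r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
            rho += w[band] * r ** 2
        scores.append(rho)
    return int(np.argmax(scores))
```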
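
The abstract reports an ITR of 36.3 ± 20.1 bits min⁻¹. SSVEP-style BMIs conventionally report the Wolpaw ITR; a small helper for that formula is sketched below, with illustrative inputs only (the target count, accuracy, and trial length behind the reported figure are not given in this record).

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Wolpaw ITR: (60/T) * [log2 N + P log2 P + (1-P) log2((1-P)/(N-1))]."""
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# Hypothetical numbers for illustration only (not from the paper):
print(itr_bits_per_min(n_targets=8, accuracy=0.90, trial_seconds=3.0))
```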
