Open Access
Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry
Author(s) -
Jai Vardhan,
Girijesh Prasad
Publication year - 2015
Publication title -
International Journal of Computer Applications
Language(s) - English
Resource type - Journals
ISSN - 0975-8887
DOI - 10.5120/ijca2015907194
Subject(s) - computer science , human–computer interaction , information retrieval
In the natural course of things, human beings usually make use of multiple sensory modalities to communicate effectively or to carry out day-to-day tasks efficiently. For instance, during verbal conversations we use voice, eyes, and various body gestures. Likewise, effective human-computer interaction involves hands, eyes, and voice, where available. By combining multi-sensory modalities, therefore, we can make the whole process more natural and ensure enhanced performance, even for disabled users. Towards this end, we have developed a multi-modal human-computer interface (HCI) by combining an eye-tracker with a soft-switch, which may be considered as representing another modality. This multi-modal HCI is applied to text entry using a virtual keyboard designed in-house to facilitate enhanced performance. Our experimental results demonstrate that multi-modal text entry through the virtual keyboard is more efficient and less strenuous than a single-modality system, and that it also solves the Midas-touch problem, which is inherent in an eye-tracker based HCI system where dwell time alone is used for selecting a character.
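The contrast the abstract draws between dwell-time-only selection and gaze-plus-switch selection can be sketched in a few lines. This is an illustrative toy model, not the paper's implementation: the sample representation, the `DWELL_THRESHOLD` value, and the function names are assumptions made for the example.

```python
DWELL_THRESHOLD = 3  # hypothetical: consecutive gaze samples on a key before dwell selects it

def dwell_select(gaze_keys):
    """Dwell-time-only selection: every key the gaze rests on for
    DWELL_THRESHOLD samples gets selected, even when the user is merely
    scanning the keyboard -- this is the Midas-touch problem."""
    selections, run_key, run_len = [], None, 0
    for key in gaze_keys:
        if key == run_key:
            run_len += 1
        else:
            run_key, run_len = key, 1
        if run_len == DWELL_THRESHOLD:
            selections.append(key)
    return selections

def switch_select(gaze_keys, switch_pressed):
    """Multi-modal selection: a key is selected only when the soft-switch
    is pressed while the gaze is on it, so looking alone selects nothing."""
    return [key for key, pressed in zip(gaze_keys, switch_pressed) if pressed]

# The user's gaze passes over "A" while searching, then settles on "B",
# and the soft-switch is pressed only once "B" is the intended key.
gaze = ["A", "A", "A", "B", "B", "B", "B"]
press = [False, False, False, False, False, False, True]

print(dwell_select(gaze))          # dwell alone also selects the unintended "A"
print(switch_select(gaze, press))  # the switch confirms only "B"
```

The toy example shows why dwell time alone mis-selects during visual search, while the second modality acts as an explicit confirmation signal.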
