
GAZE AND FEET AS ADDITIONAL INPUT MODALITIES FOR INTERACTING WITH GEOSPATIAL INTERFACES
Author(s) -
A. Çöltekin,
J. Hempel,
A. Brychtová,
I. Giannopoulos,
S. Stellmach,
R. Dachselt
Publication year - 2016
Publication title -
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.356
H-Index - 38
eISSN - 2194-9042
pISSN - 2196-6346
DOI - 10.5194/isprsannals-III-2-113-2016
Subject(s) - human–computer interaction, computer science, gaze, zoom, modalities, gesture, geospatial analysis, process (computing), panning, user interface, artificial intelligence, cartography, engineering, geography
Geographic Information Systems (GIS) are complex software environments, and working with them typically involves multiple tasks and multiple displays. However, in most workplace settings user input is still limited to mouse and keyboard. In this project, we demonstrate how gaze and feet, used as additional input modalities, can overcome the time-consuming and disruptive mode switches between frequently performed tasks. In an iterative design process, we developed gaze- and foot-based methods for zooming and panning map visualizations. We first collected appropriate gestures in a preliminary user study with a small group of experts and designed two interaction concepts based on their input. After implementing both concepts, we evaluated them comparatively in a second user study to identify the strengths and shortcomings of each. We found that continuous foot input combined with implicit gaze input is promising for supportive tasks.
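The abstract does not specify the interaction mechanics, but the combination it describes (implicit gaze input supplying the target, continuous foot input supplying the rate) maps naturally onto gaze-anchored zooming. The following minimal Python sketch illustrates that idea; the Viewport class, the gaze_anchored_zoom function, and the pedal signal in [-1, 1] are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Viewport:
    """Map viewport: which map coordinate sits at the screen centre, at what scale."""
    center_x: float   # map coordinate shown at the screen centre
    center_y: float
    scale: float      # screen pixels per map unit
    width: int        # viewport size in screen pixels
    height: int

    def screen_to_map(self, sx: float, sy: float) -> tuple[float, float]:
        """Map coordinate currently under screen pixel (sx, sy)."""
        return (self.center_x + (sx - self.width / 2) / self.scale,
                self.center_y + (sy - self.height / 2) / self.scale)

def gaze_anchored_zoom(v: Viewport, gaze_x: float, gaze_y: float,
                       pedal: float, dt: float, rate: float = 2.0) -> None:
    """Continuous zoom driven by pedal deflection, anchored at the gaze point.

    pedal: hypothetical foot input in [-1, 1]; positive zooms in, negative out.
    dt:    seconds elapsed since the last update.
    rate:  zoom factor per second at full pedal deflection.
    The map location under the gaze stays fixed on screen, so the user
    never has to re-acquire the target while zooming.
    """
    gx, gy = v.screen_to_map(gaze_x, gaze_y)   # anchor: map location under the gaze
    v.scale *= rate ** (pedal * dt)            # rate-based continuous zoom
    # Re-centre so the anchored location stays under the gaze point.
    v.center_x = gx - (gaze_x - v.width / 2) / v.scale
    v.center_y = gy - (gaze_y - v.height / 2) / v.scale

Called once per frame, e.g. gaze_anchored_zoom(v, gaze_x=600.0, gaze_y=400.0, pedal=0.8, dt=0.016), this zooms in toward whatever the user is looking at. Panning could be handled analogously, for instance by mapping foot deflection to a velocity applied to center_x and center_y.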