Testing Two Tools for Multimodal Navigation
Author(s) - Mats Liljedahl, Stefan Lindberg, Katarina Delsing, Mikko Polojärvi, Timo Saloranta, Ismo Alakärppä
Publication year - 2012
Publication title - Advances in Human-Computer Interaction
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.429
H-Index - 21
eISSN - 1687-5907
pISSN - 1687-5893
DOI - 10.1155/2012/251384
Subject(s) - computer science, compass, gesture, global positioning system, turn by turn navigation, focus (optics), graphics, point (geometry), human–computer interaction, location based service, computer vision, artificial intelligence, computer graphics (images), robot, geography, telecommunications, physics, cartography, geometry, mathematics, robot control, optics, mobile robot
The latest smartphones, with GPS, electronic compasses, directional audio, touch screens, and so forth, hold potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also be guided to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, nonspeech audio, graphics, and text. The tests show that users appreciated both applications for their ease of use and for letting them interact directly with the surrounding environment.
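The point-to-query idea in the abstract can be made concrete with a short sketch. The Python below is not from the paper; the function names, the sector half-angle, and the range limit are illustrative assumptions. It filters a list of points of interest down to those lying within a narrow sector along the compass heading the user is pointing in, which is the kind of filter a location-based service could run as a geospatial database query.

import math

# Hypothetical tuning values; the paper does not specify these.
SECTOR_HALF_ANGLE_DEG = 15.0   # half-width of the pointing "cone"
MAX_RANGE_M = 500.0            # how far the query reaches
EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing (0-360 degrees, clockwise from north) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def query_by_pointing(user_lat, user_lon, heading_deg, points_of_interest):
    """Return names of POIs inside the sector the user is pointing at.

    points_of_interest: iterable of (name, lat, lon) tuples (hypothetical schema).
    """
    hits = []
    for name, lat, lon in points_of_interest:
        # Discard anything beyond the query range.
        if haversine_m(user_lat, user_lon, lat, lon) > MAX_RANGE_M:
            continue
        bearing = initial_bearing_deg(user_lat, user_lon, lat, lon)
        # Smallest angular difference between heading and bearing, in [0, 180].
        diff = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        if diff <= SECTOR_HALF_ANGLE_DEG:
            hits.append(name)
    return hits

For example, query_by_pointing(65.01, 25.47, 90.0, pois) would return the names of POIs within 500 m roughly due east of the user's position, i.e. the result set for one "point in a direction" gesture.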