Open Access
Occlusion-aware interfaces
Author(s) - Daniel Vogel, Ravin Balakrishnan
Publication year - 2010
Publication title - CiteSeerX (The Pennsylvania State University)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/1753326.1753365
Subject(s) - occlusion, computer science, task (project management), computer vision, artificial intelligence, ambiguity, stability (learning theory), human–computer interaction, machine learning, engineering, medicine, systems engineering, cardiology, programming language
We define occlusion-aware interfaces as interaction techniques that know what area of the display is currently occluded, and use this knowledge to counteract potential problems and/or utilize the hidden area. As a case study, we describe the Occlusion-Aware Viewer, which identifies important regions hidden beneath the hand and displays them in a non-occluded area using a bubble-like callout. To determine what is important, we use an application-agnostic image-processing layer. For the occluded area, we use a user-configurable, real-time version of Vogel et al.'s [21] geometric model. In an evaluation with a simultaneous monitoring task, we find the technique can successfully mitigate the effects of occlusion, although issues with ambiguity and stability suggest further refinements. Finally, we present designs for three other occlusion-aware techniques for pop-ups, dragging, and a hidden widget.
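The core idea in the abstract can be sketched in a few lines: model the hand-occluded area geometrically, test whether an important region falls inside it, and if so relocate a callout to a non-occluded spot. This is a minimal illustrative sketch, not the paper's implementation: the circle-plus-rectangle shape loosely echoes Vogel et al.'s geometric model, but all function names, parameter values, and the mirror-to-upper-left callout placement are assumptions.

```python
import math

def occluded(point, pen, palm_radius=60, forearm_len=400,
             forearm_w=80, arm_angle_deg=65):
    """Return True if `point` lies in the modeled occluded area.

    Hypothetical model: a circular palm/fist blob offset below-right
    of the pen tip, plus a rotated rectangle for the forearm
    (right-handed user assumed). Parameter values are illustrative.
    """
    px, py = point
    # Palm centre sits a little below-right of the pen position.
    cx, cy = pen[0] + palm_radius * 0.5, pen[1] + palm_radius * 0.5
    if math.hypot(px - cx, py - cy) <= palm_radius:
        return True
    # Forearm: axis-aligned test in a frame rotated by arm_angle_deg.
    theta = math.radians(arm_angle_deg)
    dx, dy = px - cx, py - cy
    along = dx * math.cos(theta) + dy * math.sin(theta)
    across = -dx * math.sin(theta) + dy * math.cos(theta)
    return 0 <= along <= forearm_len and abs(across) <= forearm_w / 2

def callout_position(region_center, pen, margin=120):
    """Place the bubble callout for a hidden region in a non-occluded
    spot; here it is simply mirrored to the upper-left of the pen."""
    rx, ry = region_center
    px, py = pen
    return (px - abs(rx - px) - margin, py - abs(ry - py) - margin)

# Example: a status indicator at (420, 380) with the pen at (400, 350)
# falls under the modeled palm, so a callout is shown elsewhere.
pen = (400, 350)
region = (420, 380)
if occluded(region, pen):
    print("callout at", callout_position(region, pen))
```

A real occlusion-aware interface would replace the fixed parameters with the user-configurable, real-time model the abstract mentions, and drive the "important region" test from the image-processing layer rather than a hard-coded point.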
