Open Access
A More Flexible Approach to Utilizing Depth Cameras for Hand and Touch Interaction
Author(s) -
Thomas Butkiewicz
Publication year - 2012
Publication title -
international journal of virtual reality
Language(s) - English
Resource type - Journals
eISSN - 2727-9979
pISSN - 1081-1451
DOI - 10.20870/ijvr.2012.11.3.2851
Subject(s) - computer science , computer vision , human–computer interaction , artificial intelligence , computer graphics (images)
Many researchers have utilized depth cameras for tracking users' hands to implement various interaction methods, such as touch-sensitive displays and gestural input. With the recent introduction of Microsoft's low-cost Kinect sensor, interest in this strategy has increased. However, a review of the existing literature suggests that the majority of these systems suffer from similar limitations, stemming from the image processing methods used to extract, segment, and relate the user's body to the environment/display. This paper presents a simple, efficient method for extracting interactions from depth images that is more flexible in terms of sensor placement, display orientation, and dependency on surface reflectivity.
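The abstract does not detail the paper's algorithm, but a common baseline for depth-camera touch sensing (which systems like this improve upon) is to compare each incoming depth frame against a precaptured background depth image of the surface, treating pixels slightly closer to the camera than the surface as touching fingertips. The sketch below is illustrative only; the function name, thresholds, and simulated data are assumptions, not the paper's method.

```python
import numpy as np

def detect_touch(background, frame, near=5, far=30):
    """Flag pixels that sit between `near` and `far` millimeters
    above a precaptured background depth map of the surface.

    background, frame: depth images in mm (uint16, same shape).
    Returns a boolean mask of candidate touch pixels.
    """
    # Signed difference: positive where the frame is closer to the
    # camera than the empty-surface background (i.e., something is
    # hovering above the surface).
    diff = background.astype(np.int32) - frame.astype(np.int32)
    return (diff >= near) & (diff <= far)

# Simulated flat surface 1000 mm from the camera.
background = np.full((8, 8), 1000, dtype=np.uint16)
frame = background.copy()
frame[3:5, 3:5] = 985  # fingertip-sized blob 15 mm above the surface

mask = detect_touch(background, frame)
# mask is True only over the 2x2 fingertip region
```

In practice the touch mask would then be segmented into connected components and each blob's centroid reported as a touch point; the thresholds trade off hover tolerance against false positives from sensor noise.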
