Open Access
Moving Volume KinectFusion
Author(s) - H. P. Roth, Marsette Vona
Publication year - 2012
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.26.112
Subject(s) - computer vision , artificial intelligence , robot perception , augmented reality , computer graphics (images) , terrain , roaming , bounding volume , collision detection
Newcombe, Izadi, et al.'s KinectFusion [5] is an impressive new algorithm for real-time dense 3D mapping using the Kinect. It is geared towards games and augmented reality, but could also be of great use for robot perception. However, the algorithm is currently limited to a relatively small volume fixed in the world at start-up (typically a ∼3m cube), which limits its applications for perception. Here we report moving volume KinectFusion, with additional algorithms that allow the camera to roam freely. We are interested in perception in rough terrain, but the system would also be useful in other applications, including free-roaming games and awareness aids for hazardous environments or the visually impaired. Our approach allows the algorithm to handle a volume that moves arbitrarily online (Figure 1).
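The core idea of a moving volume is that the dense reconstruction grid (a truncated signed distance function, or TSDF, in KinectFusion) can be translated to follow the camera instead of staying fixed in the world. A minimal sketch of one common way to do this, not necessarily the authors' exact method: when the camera strays too far from the volume center, shift the voxel grid by an integer offset, discarding data that scrolls out and marking newly exposed voxels as unobserved. All names and parameters here (`shift_volume`, `VOX`, `TRUNC`) are illustrative assumptions.

```python
import numpy as np

VOX = 64     # voxels per side (illustrative; real systems use far more)
TRUNC = 1.0  # truncated signed distance used for "unobserved"

def shift_volume(tsdf, weight, offset):
    """Translate the TSDF volume by `offset` voxels along each axis.

    Voxel data that scrolls out of the cube is discarded; voxels that
    scroll in are reset to unobserved (TSDF = TRUNC, weight = 0).
    For offset o, new[i] = old[i + o] along each axis."""
    new_tsdf = np.full_like(tsdf, TRUNC)
    new_weight = np.zeros_like(weight)
    # Overlapping region between the old and new volume placements.
    src = tuple(slice(max(0, o), VOX + min(0, o)) for o in offset)
    dst = tuple(slice(max(0, -o), VOX + min(0, -o)) for o in offset)
    new_tsdf[dst] = tsdf[src]
    new_weight[dst] = weight[src]
    return new_tsdf, new_weight
```

Shifting by whole voxels avoids resampling the TSDF, so the remap is a cheap memory move; only the newly exposed slab must be filled in by subsequent depth-frame integration.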
