Open Access
Robust Wearable Camera Localization as a Target Tracking Problem on SE(3)
Author(s) - Guillaume Bourmaud, Audrey Giremus
Publication year - 2015
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.29.39
Subject(s) - computer vision, artificial intelligence, computer science, wearable computer, camera auto calibration, inertial measurement unit, trajectory, smart camera, tracking (education), monocular, camera resectioning, position (finance), motion blur, computer graphics (images), image (mathematics), psychology, pedagogy, physics, finance, astronomy, economics, embedded system
In this paper, we are interested in Visual Indoor Localization (VIL) for challenging video sequences coming from a single monocular camera worn by a person performing daily living activities (see Fig. 1(a)). The difficulty of this problem resides in the fact that: i) handheld objects are frequently interposed between the camera and the environment; ii) strong motion blur and illumination changes occur; iii) the environment changes between the database images and the video frames to be localized, and the viewpoints can differ significantly. We wish to develop a method that: relies only on the images coming from the wearable camera, i.e. no other sensor such as an Inertial Measurement Unit should be used; estimates the camera position with sub-meter accuracy, as well as its orientation; is consistent with the topology of the environment, i.e. the camera trajectory should not cross walls; and is able to detect when the data are not sufficient to disambiguate the situation, i.e. when the posterior distribution of the camera trajectory is multimodal and/or too dispersed.
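
As a purely illustrative aside, not taken from the paper itself: the last requirement, detecting an overly dispersed or multimodal posterior, can be made concrete when the pose posterior on SE(3) is approximated by weighted particles (4x4 rigid-body transforms). In the Python sketch below, the se3_exp helper, the particle count, and the dispersion threshold disp_thresh are all assumptions chosen for illustration; the paper's actual tracking algorithm is not reproduced here.

    import numpy as np

    def se3_exp(xi):
        """Exponential map from se(3) (6-vector [omega, v]) to a 4x4 SE(3) matrix."""
        omega, v = xi[:3], xi[3:]
        theta = np.linalg.norm(omega)
        K = np.array([[0.0, -omega[2], omega[1]],
                      [omega[2], 0.0, -omega[0]],
                      [-omega[1], omega[0], 0.0]])
        if theta < 1e-8:
            # First-order approximation near the identity
            R, V = np.eye(3) + K, np.eye(3)
        else:
            R = (np.eye(3) + np.sin(theta) / theta * K
                 + (1 - np.cos(theta)) / theta**2 * K @ K)
            V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * K
                 + (theta - np.sin(theta)) / theta**3 * K @ K)
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, V @ v
        return T

    def posterior_is_ambiguous(particles, weights, disp_thresh=1.0):
        """Flag the posterior as unreliable when the weighted spread of the
        particles' camera positions exceeds disp_thresh (metres). A real system
        would also test for multimodality, e.g. by clustering the particles."""
        positions = np.array([T[:3, 3] for T in particles])
        mean = weights @ positions
        centered = positions - mean
        cov = (weights[:, None] * centered).T @ centered
        return np.trace(cov) > disp_thresh**2

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # 200 particles scattered around the identity pose by small random twists
        particles = [se3_exp(0.05 * rng.standard_normal(6)) for _ in range(200)]
        weights = np.full(200, 1.0 / 200)
        print("ambiguous posterior:", posterior_is_ambiguous(particles, weights))

The trace of the position covariance is just one convenient scalar summary of dispersion; any comparable statistic over the particle set would serve the same illustrative purpose here.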
