Knowing Where I Am: Exploiting Multi-Task Learning for Multi-view Indoor Image-based Localization
Author(s) - Guoyu Lu, Yan Yan, Nicu Sebe, Chandra Kambhamettu
Publication year - 2014
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.28.125
Subject(s) - computer science, artificial intelligence, computer vision, orientation, global positioning system, robotics, bundle adjustment, pose, robot, geometry, mathematics
Indoor localization has attracted many applications in the mobile and robotics areas, especially in vast and complex environments. Most indoor localization methods rely on cellular base stations or WiFi signals; such methods require users to carry additional equipment, and their accuracy depends heavily on the distribution of beacons. Image-based localization has mainly been applied in outdoor environments to overcome the weak GPS signals around large buildings. In this paper, we propose to localize images in indoor environments from multi-view settings. We use Structure-from-Motion (SfM) to reconstruct the 3D environment of our indoor buildings, providing users with a clear view of the whole building’s indoor structure. Since orientation information is also essential for indoor navigation, images are localized with a multi-task learning method that treats the classification of each view direction as a separate task. We perform image retrieval based on the trained multi-task classifiers, so that the orientation of the image is obtained together with its location. Finally, we assign the pose of the retrieved image, computed from the SfM reconstruction, to the query image and refine the pose estimate with bundle adjustment.
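The distinctive step in the abstract is the multi-task formulation, in which each view direction gets its own classifier and the tasks are trained jointly so they share information. As a rough illustration only, not the authors' actual model, the sketch below implements mean-regularized multi-task logistic regression in Python/NumPy: one binary task per view direction, with each task's weight vector pulled toward the shared mean. The feature representation, task layout, and hyperparameters are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_view_classifiers(tasks, lam=1e-2, lr=0.1, iters=500):
    """Mean-regularized multi-task logistic regression (hypothetical sketch).

    tasks: list of (X_t, y_t), one per view direction; X_t is an (n_t, d)
    feature matrix and y_t in {0, 1} marks whether an image faces that
    direction.
    """
    d = tasks[0][0].shape[1]
    W = np.zeros((len(tasks), d))               # one weight vector per task
    for _ in range(iters):
        w_bar = W.mean(axis=0)                  # shared mean couples the tasks
        for t, (X, y) in enumerate(tasks):
            p = sigmoid(X @ W[t])
            grad = X.T @ (p - y) / len(y)       # logistic-loss gradient
            grad += 2.0 * lam * (W[t] - w_bar)  # pull task toward shared mean
            W[t] -= lr * grad
    return W

def predict_direction(W, x):
    """Return the view direction whose classifier scores highest."""
    return int(np.argmax(W @ x))
```

The mean-regularization term is what makes this multi-task rather than independent training: with lam = 0 the tasks decouple into ordinary per-direction logistic regressions.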
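The final stage transfers the retrieved image's SfM pose to the query and refines it. Full bundle adjustment jointly optimizes cameras and 3D points; as a simplified, motion-only stand-in, the sketch below refines a single 6-DoF pose by minimizing reprojection error over assumed 3D-2D correspondences with SciPy. The variable names, the axis-angle parameterization, and the availability of a calibration matrix K are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reproj_residuals(pose, pts3d, pts2d, K):
    """Reprojection residuals for a 6-DoF pose [rx, ry, rz, tx, ty, tz]."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = pts3d @ R.T + pose[3:]           # world -> camera frame
    proj = cam @ K.T                       # apply intrinsics
    proj = proj[:, :2] / proj[:, 2:3]      # perspective division
    return (proj - pts2d).ravel()

def refine_pose(init_pose, pts3d, pts2d, K):
    """Start from the retrieved image's pose; minimize reprojection error."""
    result = least_squares(reproj_residuals, init_pose,
                           args=(pts3d, pts2d, K))
    return result.x
```

In a full bundle adjustment the 3D points (and the other cameras) would enter the parameter vector as well, typically with a robust loss and a sparse Jacobian.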