Map Building Fusing Acoustic and Visual Information using Autonomous Underwater Vehicles
Author(s) - Kunz, Clayton; Singh, Hanumant
Publication year - 2013
Publication title - Journal of Field Robotics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.152
H-Index - 96
eISSN - 1556-4967
pISSN - 1556-4959
DOI - 10.1002/rob.21473
Subject(s) - computer vision, artificial intelligence, computer science, sonar, robot, bathymetry, smoothing, underwater, mobile robot, terrain, geography, cartography, archaeology
We present a system for automatically building three‐dimensional (3‐D) maps of underwater terrain fusing visual data from a single camera with range data from multibeam sonar. The six‐degree‐of‐freedom location of the camera relative to the navigation frame is derived as part of the mapping process, as are the attitude offsets of the multibeam head and the onboard velocity sensor. The system uses pose graph optimization and the square root information smoothing and mapping framework to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame. Matched visual features are treated within the pose graph as images of 3‐D landmarks, while multibeam bathymetry submap matches are used to impose relative pose constraints linking robot poses from distinct tracklines of the dive trajectory. The navigation and mapping system presented works under a variety of deployment scenarios on robots with diverse sensor suites. The results of using the system to map the structure and the appearance of a section of coral reef are presented using data acquired by the Seabed autonomous underwater vehicle.
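The abstract describes a pose-graph formulation solved with square-root information smoothing and mapping (SAM). The sketch below is not the authors' code; it uses the GTSAM Python bindings (a widely used implementation of the square-root SAM framework) to illustrate how the constraints named in the abstract map onto graph factors: dead-reckoned odometry between consecutive poses, reprojection factors tying matched visual features to shared 3-D landmarks, and a relative-pose factor produced by registering multibeam submaps from crossing tracklines. All poses, pixel coordinates, camera intrinsics, and noise sigmas are illustrative placeholders, and the camera, multibeam, and velocity-sensor mounting offsets that the paper also estimates are omitted here.

```python
# Minimal pose-graph sketch (illustrative only, not the paper's implementation).
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, L  # X(i): robot poses, L(j): 3-D visual landmarks

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Anchor the first robot pose (e.g., from the vehicle's navigation solution).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.1, 0.5, 0.5, 0.5]))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))

# Dead-reckoned odometry constraints between consecutive poses along a trackline.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.2, 0.2, 0.2]))
odom = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))  # placeholder 1 m advance
for i in range(3):
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), odom, odom_noise))
    initial.insert(X(i), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(float(i), 0.0, 0.0)))
initial.insert(X(3), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(3.0, 0.0, 0.0)))

# Matched visual features enter the graph as projections of a shared 3-D landmark.
K = gtsam.Cal3_S2(800.0, 800.0, 0.0, 512.0, 384.0)    # placeholder camera intrinsics
pix_noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)  # ~1-pixel reprojection noise
initial.insert(L(0), gtsam.Point3(1.5, 0.2, 3.0))     # placeholder seafloor point
for i, uv in [(0, (530.0, 400.0)), (1, (470.0, 390.0))]:  # placeholder pixel matches
    graph.add(gtsam.GenericProjectionFactorCal3_S2(
        gtsam.Point2(*uv), pix_noise, X(i), L(0), K))

# A multibeam submap registration between two tracklines becomes a relative-pose
# constraint linking non-consecutive robot poses.
submap_rel = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(2.9, 0.1, 0.0))  # placeholder match
submap_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.02, 0.3, 0.3, 0.1]))
graph.add(gtsam.BetweenFactorPose3(X(0), X(3), submap_rel, submap_noise))

# Jointly smooth the trajectory, landmarks, and constraints.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(3)))
```

In this formulation the submap-match factor plays the role described in the abstract: it links poses from distinct tracklines, correcting drift accumulated by dead reckoning in the same way a loop closure would.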
