CCM‐SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams
Author(s) - Schmuck Patrik, Chli Margarita
Publication year - 2019
Publication title - Journal of Field Robotics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.152
H-Index - 96
eISSN - 1556-4967
pISSN - 1556-4959
DOI - 10.1002/rob.21854
Subject(s) - simultaneous localization and mapping, robustness, computer science, scalability, odometry, artificial intelligence, workspace, real time computing, computer vision, distributed computing, robot, mobile robot
Robotic collaboration promises increased robustness and efficiency of missions with great potential in applications, such as search‐and‐rescue and agriculture. Multiagent collaborative simultaneous localization and mapping (SLAM) is right at the core of enabling collaboration, such that each agent can colocalize in and build a map of the workspace. The key challenges at the heart of this problem, however, lie with robust communication, efficient data management, and effective sharing of information among the agents. To this end, here we present CCM‐SLAM, a centralized collaborative SLAM framework for robotic agents, each equipped with a monocular camera, a communication unit, and a small processing board. With each agent able to run visual odometry onboard, CCM‐SLAM ensures their autonomy as individuals, while a central server with potentially bigger computational capacity enables their collaboration by collecting all their experiences, merging and optimizing their maps, or disseminating information back to them, where appropriate. An in‐depth analysis on benchmarking datasets addresses the scalability and the robustness of CCM‐SLAM to information loss and communication delays commonly occurring during real missions. This reveals that in the worst case of communication loss, collaboration is affected, but not the autonomy of the agents. Finally, the practicality of the proposed framework is demonstrated with real flights of three small aircraft equipped with different sensors and computational capabilities onboard and a standard laptop as the server, collaboratively estimating their poses and the scene on the fly.
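To make the centralized architecture described in the abstract more concrete, the following minimal Python sketch illustrates the data flow it implies: each agent keeps running visual odometry and a local map onboard (so autonomy is preserved if the link drops), streams keyframes to a central server when communication is available, and the server accumulates them into a global map. All class and method names here are hypothetical illustrations, not the CCM-SLAM API, and the server-side map merging and optimization are reduced to a placeholder.

```
# Illustrative sketch of a centralized collaborative SLAM data flow (assumed
# names, not the CCM-SLAM interface). Agents run odometry locally and share
# keyframes with a server that holds the merged global map.

from dataclasses import dataclass, field


@dataclass
class Keyframe:
    agent_id: int   # which agent produced this keyframe
    frame_id: int   # running index within that agent's local map
    pose: tuple     # (x, y, yaw) estimate from onboard visual odometry


@dataclass
class CentralServer:
    """Collects keyframes from all agents and keeps the merged global map."""
    global_map: dict = field(default_factory=dict)  # (agent_id, frame_id) -> pose

    def receive_keyframe(self, kf: Keyframe) -> None:
        # In CCM-SLAM the server would also detect overlaps between agent maps
        # and optimize them jointly; here we only store the received pose.
        self.global_map[(kf.agent_id, kf.frame_id)] = kf.pose

    def optimized_pose(self, agent_id: int, frame_id: int) -> tuple:
        # Placeholder for a globally optimized pose sent back to an agent.
        return self.global_map[(agent_id, frame_id)]


class Agent:
    """Onboard side: local map survives even if the server is unreachable."""

    def __init__(self, agent_id: int, server):
        self.agent_id = agent_id
        self.server = server        # None models a communication outage
        self.local_map = []

    def track_frame(self, frame_id: int, pose: tuple) -> None:
        kf = Keyframe(self.agent_id, frame_id, pose)
        self.local_map.append(kf)   # local odometry keeps running regardless
        if self.server is not None:  # share only while the link is up
            self.server.receive_keyframe(kf)


if __name__ == "__main__":
    server = CentralServer()
    agents = [Agent(i, server) for i in range(3)]  # e.g. three small aircraft
    for step in range(5):
        for a in agents:
            a.track_frame(step, (float(step), float(a.agent_id), 0.0))
    print(f"server holds {len(server.global_map)} keyframes from {len(agents)} agents")
```

Setting an agent's server handle to None in this sketch mimics the communication-loss case discussed in the abstract: collaboration (map sharing) stops, but the agent's own mapping continues unaffected.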