Joint Semantic-Geometric Mapping of Unstructured Environment for Autonomous Mobile Robotic Sprayers
Author(s) - Lin Xubin, Su Zerong, Zhu Zhihan, Yuan Pengfei, Zhu Haifei, Zhou Xuefeng
Publication year - 2025
Publication title - Journal of Field Robotics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.152
H-Index - 96
eISSN - 1556-4967
pISSN - 1556-4959
DOI - 10.1002/rob.22553
ABSTRACT
Mobile robotic sprayers are expected to be employed in outdoor insecticide applications for mosquito control, epidemic prevention, and disinfection. To achieve this, a comprehensive 3D environmental model integrating both semantic and geometric information is indispensable for supporting mobile robotic sprayers in autonomous navigation, task planning, and adaptive spraying control. However, outdoor environments for insecticide spraying, such as public parks and gardens, are typically unstructured, dynamic, and prone to sensor degradation, posing significant challenges to both LiDAR-only and camera-only perception and mapping approaches. In this paper, a visual-LiDAR fusion-based joint semantic-geometric mapping framework is proposed, featuring a novel 2D-3D semantic perception module that is robust against complex segmentation conditions and sensor extrinsic drift. To this end, a Multi-scale Vague Boundary Augmented Dual Attention Network (MDANet), incorporating multi-scale 3D attention modules and vague boundary augmented attention modules, is proposed to tackle the image segmentation task involving dense vegetation with overlapping foliage and ambiguous boundaries. Additionally, a seed-growth-based visual-LiDAR semantic data association method is proposed to resolve the issue of inaccurate pixel-to-point association in the presence of extrinsic drift, yielding more precise 3D semantic perception results. Furthermore, a semantic-aware SLAM system accounting for dynamic interference and pose estimation drift is presented. Extensive experimental evaluations on public datasets and self-recorded data are conducted. The segmentation results show that MDANet achieves a mean pixel accuracy (mPA) of 90.17%, outperforming competing methods in the vegetation-involved segmentation task. The proposed visual-LiDAR semantic data association method can tolerate a translational disturbance of up to 40 mm and a rotational disturbance of up to 0.18 rad without compromising 3D segmentation accuracy. Moreover, the evaluation of trajectory error, alongside ablation studies, validates the effectiveness and feasibility of the proposed mapping framework.
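The abstract does not spell out the data association algorithm, but the general pattern it describes (projecting LiDAR points into a segmented image and then using 3D seed growth to correct labels misassigned under extrinsic drift) can be illustrated. The sketch below is one plausible reading, not the paper's method: all function names, the neighborhood radius, the vote threshold, and the majority-vote growth rule are illustrative assumptions.

```python
# Hypothetical sketch: pixel-to-point semantic association followed by a
# seed-growth refinement pass. Names and thresholds are assumptions made
# for illustration; the paper's actual algorithm is not reproduced here.
import numpy as np
from scipy.spatial import cKDTree

def project_points(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into the image plane.

    T_cam_lidar: 4x4 extrinsic transform (LiDAR frame -> camera frame).
    K: 3x3 camera intrinsic matrix.
    Returns pixel coordinates (Nx2) and a mask of points in front of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1          # drop points behind the camera
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]           # perspective division
    return uv, in_front

def associate_labels(points_lidar, seg_mask, T_cam_lidar, K):
    """Assign each 3D point the semantic label of the pixel it projects onto."""
    h, w = seg_mask.shape
    uv, in_front = project_points(points_lidar, T_cam_lidar, K)
    labels = np.full(len(points_lidar), -1, dtype=np.int32)   # -1 = unlabeled
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[valid] = seg_mask[v[valid], u[valid]]
    return labels

def seed_growth_refine(points_lidar, labels, radius=0.15, min_votes=3):
    """Re-label each seeded point by a majority vote over its 3D neighborhood,
    so points mislabeled by a drifted extrinsic inherit the label that
    dominates their local geometric cluster. Radius and vote threshold are
    illustrative values."""
    tree = cKDTree(points_lidar)
    refined = labels.copy()
    for i in np.where(labels >= 0)[0]:      # seeds: points with a projected label
        neigh = tree.query_ball_point(points_lidar[i], r=radius)
        votes = labels[neigh]
        votes = votes[votes >= 0]
        if len(votes) >= min_votes:
            refined[i] = np.bincount(votes).argmax()
    return refined
```

Under this reading, the projection step alone is brittle: a small extrinsic error shifts every point a few pixels, which matters most at object boundaries such as foliage edges. The growth step exploits the fact that geometrically contiguous 3D clusters tend to share a semantic class, which is how a method of this shape could tolerate the translational and rotational disturbances the abstract reports.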
