Open Access
Multi-sensor fusion of sparse point clouds based on neural networks
Author(s) -
Qiliang Yang,
Fei Liu,
Jingjing Qu,
Hui Jing,
Bing Kuang,
Wenjing Chai
Publication year - 2022
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/2216/1/012028
Subject(s) - point cloud , artificial intelligence , computer vision , computer science , lidar , fusion , image fusion , frame (networking) , cluster analysis , point (geometry) , sensor fusion , image (mathematics) , remote sensing , mathematics , geography , telecommunications , linguistics , philosophy , geometry
The fusion of laser point clouds and visual images depends on the point-cloud density and the quality of the target bounding boxes. Traditional laser point-cloud processing clusters sparse point clouds poorly, making it difficult to box small objects and objects at medium to long range, so the subsequent sensor fusion is prone to missing obstacles. In this paper, we improve the bounding-box selection method for sparse point clouds. First, we build the deep-learning framework PointPillars and use it to detect targets in the sparse laser point cloud. We then spatially calibrate the lidar coordinate system against the camera coordinate system and project the lidar point cloud onto the camera image. We also improve the late-fusion method so that the detection results of each single sensor are used effectively. Finally, late fusion with the target-detection results from the camera image outputs the exact distance and category of each target. Experiments show that, compared with the traditional fusion algorithm, the number of detected bounding boxes increases by 6 and the missed-recognition rate falls from 31.41% to 12.31%.
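The calibration-and-projection step described in the abstract (transforming lidar points into the camera coordinate system, then projecting them onto the image plane) can be sketched as follows. The extrinsic and intrinsic matrices here are illustrative placeholders, not the paper's actual calibration values.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 lidar points into pixel coordinates.

    points_lidar : (N, 3) points in the lidar frame
    T_cam_lidar  : (4, 4) extrinsic transform, lidar frame -> camera frame
    K            : (3, 3) camera intrinsic matrix
    Returns (M, 2) pixel coordinates for points in front of the camera.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates, then apply the extrinsic calibration
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points with positive depth (in front of the camera)
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Perspective projection with the intrinsics, then divide by depth
    uv_h = (K @ pts_cam.T).T
    return uv_h[:, :2] / uv_h[:, 2:3]

# Illustrative calibration: camera at the lidar origin, axes remapped
# (lidar x-forward/y-left/z-up -> camera x-right/y-down/z-forward)
T = np.array([[0., -1.,  0., 0.],
              [0.,  0., -1., 0.],
              [1.,  0.,  0., 0.],
              [0.,  0.,  0., 1.]])
K = np.array([[700.,   0., 640.],
              [  0., 700., 360.],
              [  0.,   0.,   1.]])

pts = np.array([[10.,  0., 0.],   # straight ahead -> image centre
                [10., -1., 0.]])  # slightly to the lidar's right
print(project_lidar_to_image(pts, T, K))
```

Once the lidar detections are in image coordinates, the late fusion can associate them with the camera's 2D detections (e.g. by bounding-box overlap) to attach the lidar's range to the camera's class label.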
