
Rethinking 3-D LiDAR Point Cloud Segmentation
Author(s) - Shijie Li, Yun Liu, Juergen Gall
Publication year - 2021
Publication title - IEEE Transactions on Neural Networks and Learning Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.882
H-Index - 212
eISSN - 2162-2388
pISSN - 2162-237X
DOI - 10.1109/TNNLS.2021.3132836
Subject(s) - computing and processing , communication, networking and broadcast technologies , components, circuits, devices and systems , general topics for engineers
Many point-based semantic segmentation methods have been designed for indoor scenarios, but they struggle when applied to point clouds captured by a light detection and ranging (LiDAR) sensor in an outdoor environment. To make these methods efficient and robust enough to handle LiDAR data, we introduce the general concept of reformulating 3-D point-based operations so that they can operate in the projection space. Using three point-based methods, we show that the reformulated versions are between 300 and 400 times faster while achieving higher accuracy, and we further demonstrate that reformulating 3-D point-based operations makes it possible to design new architectures that unify the benefits of point-based and image-based methods. As an example, we introduce a network that integrates reformulated 3-D point-based operations into a 2-D encoder-decoder architecture that fuses information from different 2-D scales. We evaluate the approach on four challenging datasets for semantic LiDAR point cloud segmentation and show that combining reformulated 3-D point-based operations with 2-D image-based operations yields strong results on all four datasets.
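To make the idea of operating in the projection space concrete, the sketch below shows a common way to map a LiDAR scan onto a 2-D range image via spherical projection, so that per-pixel 2-D operations and per-point 3-D operations can share the same grid. This is a minimal illustration, not the authors' implementation; the image resolution and vertical field of view are assumptions in the style of typical 64-beam LiDAR setups.

```python
# Minimal sketch (assumed settings, not the paper's code): spherical projection
# of a LiDAR scan into a 2-D range image, the projection space in which
# reformulated point-based operations can run.
import numpy as np


def spherical_projection(points, h=64, w=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) point cloud onto an (h, w) range image.

    Returns the range image and the (row, col) pixel index of every point,
    so 2-D image-based and 3-D point-based operations can be fused per pixel.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1) + 1e-8

    yaw = np.arctan2(y, x)          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / depth)    # elevation

    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to [0, 1] and scale to image coordinates.
    col = 0.5 * (1.0 - yaw / np.pi) * w
    row = (1.0 - (pitch - fov_down) / fov) * h

    col = np.clip(np.floor(col), 0, w - 1).astype(np.int32)
    row = np.clip(np.floor(row), 0, h - 1).astype(np.int32)

    range_image = np.full((h, w), -1.0, dtype=np.float32)
    # Write farther points first so closer points overwrite them per pixel.
    order = np.argsort(depth)[::-1]
    range_image[row[order], col[order]] = depth[order]
    return range_image, row, col


if __name__ == "__main__":
    scan = np.random.uniform(-50.0, 50.0, size=(1000, 3)).astype(np.float32)
    img, rows, cols = spherical_projection(scan)
    print(img.shape, rows.shape, cols.shape)  # (64, 2048) (1000,) (1000,)
```

With such a point-to-pixel index available, a point-based operation (e.g., a neighborhood aggregation) can be evaluated over pixel neighborhoods instead of expensive 3-D neighbor searches, which is the kind of reformulation the abstract credits for the large speedup.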