Open Access
DISTRIBUTED DIMENSIONALITY-BASED RENDERING OF LIDAR POINT CLOUDS
Author(s) -
Mathieu Brédif,
Bruno Vallet,
B. Ferrand
Publication year - 2015
Publication title -
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.264
H-Index - 71
eISSN - 1682-1777
pISSN - 1682-1750
DOI - 10.5194/isprsarchives-XL-3-W3-559-2015
Subject(s) - rendering (computer graphics), computer science, point cloud, visualization, lidar, bottleneck, mobile mapping, usability, geolocation, computer vision, computer graphics (images), georeference, ground truth, artificial intelligence, remote sensing, geography, human–computer interaction, physical geography, world wide web, embedded system
Mobile Mapping Systems (MMS) are now commonly acquiring lidar scans of urban environments for an increasing number of applications such as 3D reconstruction and mapping, urban planning, urban furniture monitoring, practicability assessment for persons with reduced mobility (PRM)... MMS acquisitions are usually huge enough to incur a usability bottleneck for the increasing number of non-expert users who are not trained to process and visualize these huge datasets with specific software. The vast majority of their current needs call for a simple 2D visualization that is both legible on screen and printable on a static 2D medium, while still conveying an understanding of the 3D scene and minimizing the disturbance of the lidar acquisition geometry (such as lidar shadows). The users that motivated this research are, by law, bound to precisely georeference underground networks for which they currently have schematics with no or poor absolute georeferencing. A solution that may fit their needs is thus a 2D visualization of the MMS dataset that they can easily interpret and on which they can accurately match features with the user datasets they would like to georeference. Our main contribution is two-fold. First, we propose a 3D point cloud stylization for 2D static visualization that leverages a Principal Component Analysis (PCA)-like local geometry analysis. By skipping the usual and error-prone estimation of a ground elevation, this rendering is robust to non-flat areas and has no hard-to-tune parameters such as height thresholds. Second, we implemented the corresponding rendering pipeline so that it scales up to arbitrarily large datasets by leveraging the Spark framework and its Resilient Distributed Dataset (RDD) and DataFrame abstractions.
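The stylization rests on a PCA-like analysis of each point's local neighbourhood. The sketch below is an illustrative reconstruction of such dimensionality features, not the authors' implementation: the neighbourhood size k, the use of square-rooted eigenvalues, and the linearity/planarity/scattering formulas are assumptions borrowed from common dimensionality-feature definitions, and a rendering could, for instance, map the three features to RGB.

```python
# Illustrative sketch (not the paper's code): per-point dimensionality features
# from the eigenvalues of each point's local covariance matrix.
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_features(points, k=20):
    """For each 3D point, return (linearity, planarity, scattering) in [0, 1]."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                 # k nearest neighbours per point
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs], rowvar=False)     # 3x3 local covariance
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        s1, s2, s3 = np.sqrt(np.maximum(evals, 0.0)) # sigma_1 >= sigma_2 >= sigma_3
        if s1 <= 0.0:
            feats[i] = (0.0, 0.0, 1.0)               # degenerate neighbourhood
            continue
        feats[i] = ((s1 - s2) / s1,                  # linearity  (1D structure)
                    (s2 - s3) / s1,                  # planarity  (2D structure)
                    s3 / s1)                         # scattering (3D structure)
    return feats
```

Because the features are ratios of local eigenvalues, no ground elevation or height threshold enters the computation, which is what makes the stylization robust on non-flat areas.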

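For the distributed part, the abstract names Spark and its RDD and DataFrame abstractions but does not detail the operators. The following is a minimal PySpark sketch under stated assumptions: the point cloud is stored as tiled Parquet files with x, y, z columns, the paths are hypothetical, and it reuses the dimensionality_features function from the sketch above. Processing each partition independently ignores neighbours across tile boundaries unless tiles overlap, a point any real distributed pipeline has to address.

```python
# Illustrative PySpark sketch of distributing the per-point analysis; paths,
# column names, and the partition-wise strategy are assumptions, not the
# paper's pipeline.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lidar-dimensionality").getOrCreate()

points_df = spark.read.parquet("hdfs:///lidar/tiles/")       # hypothetical input

def analyse_partition(rows):
    import numpy as np
    pts = np.array([(r.x, r.y, r.z) for r in rows])
    if len(pts) == 0:
        return
    feats = dimensionality_features(pts)                      # from the sketch above
    for (x, y, z), (lin, pla, sca) in zip(pts, feats):
        yield (float(x), float(y), float(z), float(lin), float(pla), float(sca))

features_df = points_df.rdd.mapPartitions(analyse_partition).toDF(
    ["x", "y", "z", "linearity", "planarity", "scattering"])
features_df.write.parquet("hdfs:///lidar/features/")          # hypothetical output
```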