
Dynamic Sensor Matching for Parallel Point Cloud Data Acquisition
Author(s) - Simone Müller, Dieter Kranzlmüller
Publication year - 2021
Publication title - Computer Science Research Notes
Language(s) - English
Resource type - Conference proceedings
eISSN - 2464-4625
pISSN - 2464-4617
DOI - 10.24132/csrn.2021.3002.3
Subject(s) - point cloud, computer science, computer vision, artificial intelligence, object (grammar), point (geometry), matching (statistics), data set, cloud computing, computer graphics (images), mathematics, statistics, geometry, operating system
Abstract - Based on the depth perception of individual stereo cameras, spatial structures can be derived as point clouds. The quality of such three-dimensional data is technically restricted by sensor limitations, latency of recording, and insufficient object reconstructions caused by surface illustration. Additionally, external physical effects like lighting conditions, material properties, and reflections can lead to deviations between real and virtual object perception. Such physical influences appear in rendered point clouds as geometrical imaging errors on surfaces and edges. We propose the simultaneous use of multiple, dynamically arranged cameras. The increased information density yields more detail in surrounding detection and object illustration. During a pre-processing phase, the collected data are merged and prepared. Subsequently, a logical analysis part examines the captured images and allocates them to three-dimensional space. For this purpose, it is necessary to create a new metadata set consisting of image and localisation data. The post-processing reworks and matches the locally assigned images. As a result, the dynamically moving images become comparable, so that a more accurate point cloud can be generated. For evaluation and better comparability, we decided to use synthetically generated data sets. Our approach builds the foundation for dynamic, real-time generation of digital twins with the aid of real sensor data.
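The core idea of the abstract — pairing each camera's image-derived points with its localisation data and merging the results into a common point cloud — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the metadata layout (a dict of camera-frame points plus a 4x4 pose) and all names are assumptions for the example.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector.

    This plays the role of the 'localisation data' in the metadata set.
    """
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def to_world(points_cam, pose):
    """Transform Nx3 camera-frame points into the shared world frame."""
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (pose @ homog.T).T[:, :3]

def merge_point_clouds(captures):
    """Merge per-camera captures (points + pose metadata) into one cloud."""
    return np.vstack([to_world(c["points"], c["pose"]) for c in captures])

# Two hypothetical cameras observing the same world point from different
# positions: camera B sits one unit closer along the optical axis.
cam_a = {"points": np.array([[0.0, 0.0, 2.0]]),
         "pose": pose_matrix(np.eye(3), np.array([0.0, 0.0, 0.0]))}
cam_b = {"points": np.array([[0.0, 0.0, 1.0]]),
         "pose": pose_matrix(np.eye(3), np.array([0.0, 0.0, 1.0]))}

cloud = merge_point_clouds([cam_a, cam_b])
# Both rows of `cloud` describe the same world point (0, 0, 2), so the
# two viewpoints become directly comparable after localisation.
```

In the paper's pipeline, the matching of locally assigned images would then operate on such world-aligned data; here the agreement of the two transformed points stands in for that comparability.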