
View Synthesis: LiDAR Camera versus Depth Estimation
Author(s) - Yupeng Xie, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, Gauthier Lafruit
Publication year - 2021
Publication title - Computer Science Research Notes
Language(s) - English
Resource type - Conference proceedings
SCImago Journal Rank - 0.11
H-Index - 4
eISSN - 2464-4625
pISSN - 2464-4617
DOI - 10.24132/csrn.2021.3101.35
Subject(s) - computer science, lidar, computer vision, artificial intelligence, rendering (computer graphics), view synthesis, calibration, remote sensing, mathematics, geography, statistics
Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and corresponding depth maps. However, this requires accurate depth map estimation, which incurs a high computational cost of several minutes per frame in DERS (MPEG-I's Depth Estimation Reference Software), even on a high-end computer. LiDAR cameras can therefore be an alternative to DERS in real-time DIBR applications. We compare the quality of a low-cost LiDAR camera, the Intel RealSense LiDAR L515, adequately calibrated and configured, with DERS, using MPEG-I's Reference View Synthesizer (RVS). In IV-PSNR, the LiDAR camera reaches 32.2 dB view synthesis quality with a 15 cm camera baseline and 40.3 dB with a 2 cm baseline. Though DERS outperforms the LiDAR camera by 4.2 dB, the latter provides a better quality-performance trade-off. However, visual inspection demonstrates that the LiDAR camera's virtual views have even slightly higher quality than those of DERS in most tested low-texture scene areas, except at object borders. Overall, we highly recommend using LiDAR cameras over advanced depth estimation methods (such as DERS) in real-time DIBR applications. Nevertheless, this requires delicate calibration with multiple tools, further detailed in the paper.
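As context for the abstract, the sketch below shows how a depth frame can be grabbed from a RealSense L515 with the pyrealsense2 Python bindings. It is a minimal illustration only: the stream modes, alignment step, and variable names are plausible choices assumed here, not the paper's actual capture or calibration pipeline.

```python
# Minimal sketch: acquire one aligned depth/color pair from an Intel
# RealSense LiDAR L515 via pyrealsense2. Stream settings are assumed
# common L515 modes, not the configuration used in the paper.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Scale factor converting the raw 16-bit depth units into metres.
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
# Reproject the depth stream into the color camera's viewpoint.
align = rs.align(rs.stream.color)

try:
    frames = align.process(pipeline.wait_for_frames())
    depth_m = np.asanyarray(frames.get_depth_frame().get_data()) * depth_scale
    color = np.asanyarray(frames.get_color_frame().get_data())
finally:
    pipeline.stop()
```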
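To make the DIBR idea concrete, here is a minimal forward-warping sketch: each source pixel is unprojected with its depth, moved into the virtual camera's frame, and reprojected with a z-buffer. The function name `warp_view` and the parameters `K_src`, `K_dst`, `R`, `t` are illustrative assumptions; production renderers such as RVS add blending, superpixel warping, and hole inpainting that are omitted here.

```python
# Minimal sketch of DIBR-style forward warping with pinhole cameras.
import numpy as np

def warp_view(color, depth, K_src, K_dst, R, t):
    """Forward-warp a source view (color + per-pixel depth in metres,
    e.g. from a LiDAR camera) into a virtual camera with intrinsics
    K_dst, given the source-to-virtual rotation R and translation t."""
    H, W = depth.shape
    # Homogeneous pixel grid of the source image, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Unproject to 3D with the depth map, then move to the virtual frame.
    pts = np.linalg.inv(K_src) @ pix * depth.reshape(1, -1)
    pts = R @ pts + t.reshape(3, 1)

    # Project into the virtual view; ignore points behind the camera.
    proj = K_dst @ pts
    z = proj[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        u2 = np.round(proj[0] / z).astype(np.int64)
        v2 = np.round(proj[1] / z).astype(np.int64)
    valid = (z > 1e-6) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)

    # Z-buffered splat: nearer points win; unfilled pixels stay as holes.
    # (A plain Python loop for clarity, not speed.)
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    src = color.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[v2[i], u2[i]]:
            zbuf[v2[i], u2[i]] = z[i]
            out[v2[i], u2[i]] = src[i]
    return out
```

Disocclusion holes and the object-border artifacts the abstract mentions show up exactly where this splatting leaves gaps or where noisy LiDAR depth straddles a silhouette edge.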