Open Access
View Synthesis: LiDAR Camera versus Depth Estimation
Author(s) - Yupeng Xie, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, Gauthier Lafruit
Publication year - 2021
Publication title - Computer Science Research Notes
Language(s) - English
Resource type - Conference proceedings
eISSN - 2464-4625
pISSN - 2464-4617
DOI - 10.24132/csrn.2021.3002.35
Subject(s) - computer science, computer vision, artificial intelligence, lidar, rendering (computer graphics), view synthesis, calibration, remote sensing, mathematics, geology, statistics
Depth-Image-Based Rendering (DIBR) can synthesize a virtual view image from a set of multiview images and corresponding depth maps. However, this requires accurate depth map estimation, which incurs a high computational cost of several minutes per frame in DERS (MPEG-I’s Depth Estimation Reference Software), even on a high-end computer. LiDAR cameras can thus be an alternative to DERS in real-time DIBR applications. We compare the quality of a low-cost LiDAR camera, the Intel RealSense LiDAR L515, adequately calibrated and configured, against DERS, using MPEG-I’s Reference View Synthesizer (RVS). In IV-PSNR, the LiDAR camera reaches 32.2 dB view synthesis quality with a 15 cm camera baseline and 40.3 dB with a 2 cm baseline. Though DERS outperforms the LiDAR camera by 4.2 dB, the latter provides a better quality-performance trade-off. However, visual inspection demonstrates that LiDAR’s virtual views have even slightly higher quality than with DERS in most tested low-texture scene areas, except at object borders. Overall, we highly recommend using LiDAR cameras over advanced depth estimation methods (like DERS) in real-time DIBR applications. Nevertheless, this requires delicate calibration with multiple tools, further detailed in the paper.
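For readers unfamiliar with DIBR, the sketch below illustrates the core warping step the abstract refers to: each source pixel is back-projected into 3D using its depth value, re-projected into the virtual camera, and splatted with a z-buffer to resolve occlusions. This is a minimal NumPy illustration under assumed conventions (pinhole intrinsics, camera-to-world poses, nearest-point splatting), not the RVS algorithm itself, which additionally blends multiple input views and fills disocclusions.

```python
import numpy as np

def dibr_forward_warp(color, depth, K_src, pose_src, K_dst, pose_dst):
    """Warp a source view into a virtual camera via its depth map.

    color:    (H, W, 3) source image
    depth:    (H, W) metric depth along the source camera's z-axis
              (zero or negative marks invalid pixels)
    K_*:      (3, 3) pinhole intrinsic matrices
    pose_*:   (4, 4) camera-to-world extrinsic matrices
    Returns the synthesized (H, W, 3) view; disoccluded pixels stay black.
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates, flattened row-major.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Back-project to source camera space, then to world space.
    rays = np.linalg.inv(K_src) @ pix                        # unit-depth rays
    pts_cam = rays * depth.reshape(1, -1)                    # scale by depth
    pts_world = pose_src[:3, :3] @ pts_cam + pose_src[:3, 3:4]

    # Transform into the virtual camera frame and project.
    R, t = pose_dst[:3, :3], pose_dst[:3, 3:4]
    pts_dst = R.T @ (pts_world - t)
    z = pts_dst[2]
    z_safe = np.where(z > 0, z, 1.0)       # avoid division warnings on invalid depths
    proj = K_dst @ pts_dst
    u2 = np.round(proj[0] / z_safe).astype(int)
    v2 = np.round(proj[1] / z_safe).astype(int)

    # Z-buffer splatting: the nearest surface wins where pixels collide.
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    valid = (z > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    src_colors = color.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[v2[i], u2[i]]:
            zbuf[v2[i], u2[i]] = z[i]
            out[v2[i], u2[i]] = src_colors[i]
    return out
```

In the paper’s setting, `depth` would come either from the calibrated L515 depth stream or from DERS; the IV-PSNR figures quoted in the abstract score how closely such a synthesized view matches a captured ground-truth view at the virtual camera position.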
