Open Access
Scene Construction from Depth Map Using Image-to-Image Translation Model
Author(s) -
Hasan Avsar,
Mehmet Sarıgül,
Levent Karacan
Publication year - 2022
Publication title -
Akıllı Sistemler ve Uygulamaları Dergisi (Journal of Intelligent Systems with Applications)
Language(s) - English
Resource type - Journals
ISSN - 2667-6893
DOI - 10.54856/jiswa.202205192
Subject(s) - computer science, artificial intelligence, computer vision, deep learning, generative model, image-to-image translation, depth map, generator, discriminator
In recent years, deep learning approaches to image and video processing problems have become very popular. Generative Adversarial Networks (GANs) are among the most popular deep learning-based models. A GAN forms a generative model from two sub-models, a generator and a discriminator: the generator tries to produce outputs indistinguishable from real data, while the discriminator tries to classify the generator's outputs as real or fake. Trained together, the two models learn to generate realistic outputs. This study aims to reconstruct the daytime image of a scene from a depth map recorded with a camera or sensor that can capture depth data at night or in a lightless environment. Our model reconstructs 2D images from a given depth map representation of a known scene. The model was trained on the chess scene from the 7-Scenes dataset and successfully generated realistic 2D images for the given input depth maps.
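The adversarial objective described in the abstract can be sketched with the standard GAN loss functions. This is a minimal illustrative sketch, not the authors' implementation; it assumes the common non-saturating formulation, where the discriminator is trained with binary cross-entropy and the generator is trained to make the discriminator label its outputs as real:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy over both batches: push D(real) -> 1 and D(fake) -> 0."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: push D(G(z)) -> 1 so the fakes fool D."""
    return -np.mean(np.log(d_fake))

# Toy probabilities a discriminator might output on a batch of two samples.
d_real = np.array([0.9, 0.8])   # D is confident the real samples are real
d_fake = np.array([0.2, 0.1])   # D is confident the generated samples are fake
print(discriminator_loss(d_real, d_fake))  # low: D is doing well
print(generator_loss(d_fake))              # high: G is not yet fooling D
```

In training, the two losses are minimized alternately: a gradient step on the discriminator's parameters with `discriminator_loss`, then a step on the generator's parameters with `generator_loss`, until the generated outputs become realistic.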
