Open Access
Deep D2C-Net: deep learning-based display-to-camera communications
Author(s) - Lakpa Dorje Tamang, ByungWook Kim
Publication year - 2021
Publication title - Optics Express
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.394
H-Index - 271
ISSN - 1094-4087
DOI - 10.1364/oe.422591
Subject(s) - computer science, artificial intelligence, convolutional neural network, deep learning, encoding (memory), decoding methods, computer vision, feature (linguistics), channel (broadcasting), image quality, feature extraction, pattern recognition (psychology), image (mathematics), algorithm, telecommunications, philosophy, linguistics
In this paper, we propose Deep D2C-Net, a novel display-to-camera (D2C) communication technique that uses deep convolutional neural networks (DCNNs) to embed data in, and extract it from, images. The proposed technique consists of fully end-to-end encoding and decoding networks, which respectively produce high-quality data-embedded images and enable robust data acquisition over an optical wireless channel. For encoding, Hybrid layers are introduced in which the concurrent feature maps of the intended data and the cover image are concatenated in a feed-forward fashion; for decoding, a simple convolutional neural network (CNN) is used. We conducted experiments in a real-world environment using a smartphone camera and a digital display, varying parameters such as transmission distance, capture angle, display brightness, and camera resolution. Experimental results show that Deep D2C-Net outperforms existing state-of-the-art algorithms in terms of peak signal-to-noise ratio (PSNR) and bit error rate (BER), while the data-embedded image displayed on the screen retains high visual quality to the human eye.
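The two evaluation metrics named in the abstract are standard: PSNR measures how visually close the data-embedded image stays to the cover image (higher is less visible embedding), while BER measures the fraction of embedded bits the decoder recovers incorrectly after passing through the display-to-camera channel. The following is a minimal NumPy sketch of both metrics, with toy data standing in for real encoder/decoder outputs; the variable names and the toy perturbation are illustrative, not taken from the paper:

```python
import numpy as np

def psnr(cover, embedded, max_val=255.0):
    """Peak signal-to-noise ratio in dB between cover and data-embedded image."""
    mse = np.mean((cover.astype(np.float64) - embedded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

def ber(tx_bits, rx_bits):
    """Bit error rate: fraction of transmitted bits decoded incorrectly."""
    return float(np.mean(np.asarray(tx_bits) != np.asarray(rx_bits)))

# Toy example: a flat gray "cover" image and an "embedded" copy perturbed by
# +/-1 intensity levels, mimicking a near-invisible embedding.
rng = np.random.default_rng(0)
cover = np.full((64, 64), 128, dtype=np.uint8)
embedded = np.clip(
    cover.astype(np.int16) + rng.integers(-1, 2, size=cover.shape), 0, 255
).astype(np.uint8)

tx = rng.integers(0, 2, size=1024)
rx = tx.copy()
rx[:8] ^= 1  # pretend the channel flipped 8 of the 1024 embedded bits

print(round(psnr(cover, embedded), 1))  # high PSNR: embedding is hard to see
print(ber(tx, rx))                      # 8/1024 = 0.0078125
```

In a D2C experiment like the one described, `embedded` would be the encoder's output shown on the display, and `rx` would be the bits recovered by the decoding CNN from the camera capture.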
