
Photo-Realistic Continuous Image Super-Resolution with Implicit Neural Networks and Generative Adversarial Networks
Author(s) - Muhammad Sarmad, Leonardo C. Ruspini, Frank Lindseth
Publication year - 2022
Publication title - Proceedings of the Northern Lights Deep Learning Workshop
Language(s) - English
Resource type - Journals
ISSN - 2703-6928
DOI - 10.7557/18.6285
Subject(s) - computer science , artificial intelligence , computer vision , convolutional neural network , artificial neural network , generative adversarial network , adversarial system , image (mathematics) , pixel , pattern recognition , algorithm , mathematics
Implicit neural networks (INNs) can represent images in the continuous domain: they consume raw (x, y) coordinates and output a color value. They can therefore represent and generate images at arbitrarily high resolutions, in contrast to convolutional neural networks (CNNs), which output a fixed-size array of pixels. In this work, we show how to super-resolve a single image with an INN to produce sharp, photo-realistic results. We employ a random patch-based coordinate sampling method to obtain patches with context and structure, and we use these patches to train the INN in an adversarial setting. We demonstrate that the trained network retains the desirable properties of INNs while producing sharper output than previous work. We also provide qualitative and quantitative comparisons against INN and CNN baselines on the DIV2K, Set5, Set14, Urban100, and B100 benchmark datasets. Our code will be made public.
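To make the two core ideas in the abstract concrete, the sketch below shows (a) a coordinate-based MLP that maps normalized (x, y) coordinates to RGB values, the basic INN image representation, and (b) a random patch-based coordinate sampler that yields spatially contiguous coordinate grids, the kind of structured patches a discriminator could score during adversarial training. This is a minimal, hypothetical illustration, not the authors' released code: the names CoordinateMLP and sample_patch_coords, the layer sizes, and the patch size are assumptions, and the full method also involves conditioning on image features and a GAN loss that are omitted here.

```python
# Hypothetical sketch of the two ideas described in the abstract:
# (a) an implicit network f(x, y) -> (r, g, b), and
# (b) random patch-based coordinate sampling that keeps local structure intact.
import torch
import torch.nn as nn


class CoordinateMLP(nn.Module):
    """Implicit image representation: maps (N, 2) coordinates to (N, 3) colors."""

    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [2] + [hidden] * layers
        blocks = []
        for i in range(layers):
            blocks += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
        blocks += [nn.Linear(hidden, 3)]
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):
        # coords: (N, 2) with values in [-1, 1]
        return self.net(coords)


def sample_patch_coords(patch=32, out_h=256, out_w=256):
    """Pick a random patch location and return its pixel-center coordinates.

    Sampling a contiguous patch (rather than scattered points) preserves local
    context and structure, so a patch discriminator can judge texture realism.
    """
    top = torch.randint(0, out_h - patch + 1, (1,)).item()
    left = torch.randint(0, out_w - patch + 1, (1,)).item()
    ys = (torch.arange(top, top + patch) + 0.5) / out_h * 2 - 1
    xs = (torch.arange(left, left + patch) + 0.5) / out_w * 2 - 1
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], dim=-1).reshape(-1, 2)  # (patch*patch, 2)


if __name__ == "__main__":
    model = CoordinateMLP()
    coords = sample_patch_coords()
    rgb = model(coords).reshape(32, 32, 3)  # one structured patch for a GAN-style loss
    print(rgb.shape)
```

Because the network is queried per coordinate, the same trained model can be evaluated on a denser coordinate grid to render the image at an arbitrarily high resolution, which is the property the abstract contrasts with fixed-output CNNs.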