
Unpaired Image to Image Translation using Cycle Generative Adversarial Networks
Author(s) -
Abhinav Dwarkani,
Maitri Jain,
Jash Thakkar,
Kottilingam Kottursamy
Publication year - 2020
Publication title -
International Journal of Engineering and Advanced Technology
Language(s) - English
Resource type - Journals
ISSN - 2249-8958
DOI - 10.35940/ijeat.f1525.089620
Subject(s) - image translation , adversarial system , image (mathematics) , computer science , generative grammar , translation (biology) , consistency (knowledge bases) , artificial intelligence , function (biology) , set (abstract data type) , domain (mathematical analysis) , artificial neural network , machine learning , mathematics , mathematical analysis , biochemistry , chemistry , evolutionary biology , biology , messenger rna , gene , programming language
In this burgeoning age, where interest in adversarial networks is growing rapidly, we extend our research towards adversarial networks as a general-purpose solution to image-to-image translation problems. Image-to-image translation is a subfield of computer vision that builds on neural networks. We adopt generative adversarial networks as a solution for image-to-image translation, where the goal is to learn a mapping between an input image (X) and an output image (Y) using a set of predefined pairs [4]. However, a paired dataset is not always available, which is where adversarial methods come into play. We therefore present a method that can translate an image from a domain X to another domain Y, and recover it, in the absence of paired datasets. Our objective is to learn a mapping function G: A → B such that images from G(A) are indistinguishable from the distribution of B under an adversarial loss [1]. Because this mapping alone is highly under-constrained, we introduce an inverse mapping F: B → A and a cycle consistency loss [7]. Furthermore, we wish to extend our research to other domains, including neural style transfer and semantic image synthesis. Our essential contribution is to show that, on a wide variety of problems, conditional GANs produce reasonable results. This paper therefore draws attention to the task of converting image X to image Y; we also apply transfer learning on the training dataset and optimise our code. You can find the source code for the same here.
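To make the objective described above concrete, the sketch below shows how the adversarial and cycle-consistency terms are commonly combined in a CycleGAN-style model. It is a minimal illustration, not the paper's released code: the generator and discriminator modules (G, F, d_b), the least-squares form of the adversarial loss, and the weighting lam=10.0 are assumptions made for the example.

```python
import torch
import torch.nn as nn

# Assumed components (not specified in the abstract):
#   G: generator mapping domain A -> B (an nn.Module)
#   F: generator mapping domain B -> A (an nn.Module)
#   d_b: discriminator scoring how realistic images from domain B look

mse = nn.MSELoss()  # least-squares GAN loss, a common choice
l1 = nn.L1Loss()    # L1 reconstruction penalty for the cycle term

def adversarial_loss(d_b, fake_b):
    # G tries to make the discriminator label G(a) as real (target = 1).
    pred = d_b(fake_b)
    return mse(pred, torch.ones_like(pred))

def cycle_consistency_loss(G, F, real_a, real_b, lam=10.0):
    # Forward cycle  A -> B -> A: F(G(a)) should reconstruct a.
    # Backward cycle B -> A -> B: G(F(b)) should reconstruct b.
    loss_a = l1(F(G(real_a)), real_a)
    loss_b = l1(G(F(real_b)), real_b)
    return lam * (loss_a + loss_b)

def generator_objective(G, F, d_b, real_a, real_b):
    # Total generator loss: fool the discriminator while staying cycle-consistent.
    fake_b = G(real_a)
    return adversarial_loss(d_b, fake_b) + cycle_consistency_loss(G, F, real_a, real_b)
```

The cycle term is what removes the need for paired data: since F(G(a)) must return to the original image a, the generators cannot collapse to arbitrary mappings that merely match the target distribution.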