
Generating Video from Images using GANs
Author(s) -
A. P.,
Gupta Chetan,
Mohan Manoharan,
Priyanka BN,
N.S. Nagaraj
Publication year - 2020
Publication title -
International Journal of Innovative Technology and Exploring Engineering
Language(s) - English
Resource type - Journals
ISSN - 2278-3075
DOI - 10.35940/ijitee.j7560.0891020
Subject(s) - discriminative model , computer science , generative grammar , conditional probability distribution , artificial intelligence , adversarial system , range (aeronautics) , generative model , machine learning , set (abstract data type) , pattern recognition (psychology) , process (computing) , sample (material) , data set , mathematics , statistics , materials science , chemistry , chromatography , composite material , programming language , operating system
Generative adversarial networks (GANs) are a class of neural networks used extensively to generate a wide range of content. Their generative models are trained through an adversarial process that has shown great promise in deep learning. GANs are a popular approach for generating new data from a random noise vector such that the generated samples follow the same distribution as the training data set, and they have been proposed as a way to produce more realistic images. An extension of GANs, the conditional GAN, allows the model to be conditioned on external information such as class labels. Conditional GANs have seen increasingly wide use. We describe the framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G. Our work aims at highlighting the uses of conditional GANs for generating images; in particular, we present use cases of conditional GANs with images in video generation.
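The adversarial process described above, with the conditional extension, can be sketched in a few lines. The paper does not specify a framework, so the following is a minimal illustrative sketch in PyTorch; the network sizes, dimensions, and synthetic data are assumptions for demonstration only.

```python
# Minimal conditional GAN sketch (illustrative; sizes and data are assumptions).
import torch
import torch.nn as nn

NOISE_DIM, COND_DIM, DATA_DIM = 16, 10, 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # G maps a noise vector concatenated with a condition (e.g. a one-hot
        # class label) to a fake sample shaped like the training data.
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 64), nn.ReLU(),
            nn.Linear(64, DATA_DIM), nn.Tanh())

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # D sees a sample together with the same condition and outputs the
        # probability that the sample came from the training data, not from G.
        self.net = nn.Sequential(
            nn.Linear(DATA_DIM + COND_DIM, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x, c):
        return self.net(torch.cat([x, c], dim=1))

def train_step(G, D, opt_G, opt_D, real, cond):
    bce = nn.BCELoss()
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: push real samples toward 1, generated ones toward 0.
    z = torch.randn(batch, NOISE_DIM)
    fake = G(z, cond).detach()
    loss_D = bce(D(real, cond), ones) + bce(D(fake, cond), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: try to make D label freshly generated samples as real.
    z = torch.randn(batch, NOISE_DIM)
    loss_G = bce(D(G(z, cond), cond), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()

torch.manual_seed(0)
G, D = Generator(), Discriminator()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
real = torch.randn(8, DATA_DIM)                      # stand-in "real" data
cond = nn.functional.one_hot(torch.randint(0, COND_DIM, (8,)),
                             COND_DIM).float()       # one-hot condition
d_loss, g_loss = train_step(G, D, opt_G, opt_D, real, cond)
print(f"loss_D={d_loss:.3f} loss_G={g_loss:.3f}")
```

In a video-generation setting the condition would typically be one or more context frames rather than a label, and G and D would be convolutional, but the alternating two-player training loop is the same.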