Deep Video-Based Performance Cloning
Author(s) -
Aberman K.,
Shi M.,
Liao J.,
Lischinski D.,
Chen B.,
Cohen-Or D.
Publication year - 2019
Publication title -
Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.13632
Subject(s) - computer science, artificial intelligence, computer vision, generative model, generator (circuit theory), cloning (programming), generative grammar, motion (physics), artificial neural network, programming language, power (physics), physics, quantum mechanics
We present a new video-based performance cloning technique. After training a deep generative network on a reference video that captures the appearance and dynamics of a target actor, we are able to generate videos in which this actor reenacts other performances. All of the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses paired training data self-generated from the reference video. The second branch uses unpaired data to improve the generation of temporally coherent video renditions of unseen pose sequences. Through data augmentation, our network is able to synthesize images of the target actor in poses never captured by the reference video. We demonstrate a variety of promising results in which our method generates temporally coherent videos, even for challenging scenarios where the reference and driving videos consist of very different dance performances.
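
To illustrate the two-branch training scheme the abstract describes, the sketch below trains a single conditional generator whose weights are shared by a paired branch (pose windows with ground-truth frames, self-generated from the reference video) and an unpaired branch (unseen pose sequences supervised only by an adversarial signal). This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation: the network sizes, the pose encoding (a stacked window of pose maps), the loss weights, and the use of one patch discriminator for both branches are all illustrative.

```python
# Minimal sketch (not the authors' code) of a two-branch, shared-weight
# conditional generator: a paired branch with ground truth from the
# reference video, and an unpaired branch trained only adversarially.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Space-time conditional generator: maps a short window of pose maps
    to the corresponding RGB frame of the target actor (values in [0, 1])."""
    def __init__(self, pose_ch=3, window=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(pose_ch * window, 64, 7, padding=3), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 7, padding=3), nn.Sigmoid(),
        )
    def forward(self, pose_window):          # (B, pose_ch*window, H, W)
        return self.net(pose_window)          # (B, 3, H, W)

class Discriminator(nn.Module):
    """Patch discriminator over concatenated (pose window, frame) inputs."""
    def __init__(self, in_ch=9 + 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, pose_window, frame):
        return self.net(torch.cat([pose_window, frame], dim=1))

G = CondGenerator()
D = Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

def train_step(paired_pose, paired_frame, unpaired_pose):
    """One step: the paired branch has ground-truth frames; the unpaired
    branch sees unseen pose sequences with no ground truth. Both branches
    backpropagate into the same generator parameters."""
    # --- discriminator update on real vs. generated paired data ---
    opt_d.zero_grad()
    fake = G(paired_pose).detach()
    d_real = D(paired_pose, paired_frame)
    d_fake = D(paired_pose, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward(); opt_d.step()

    # --- generator update: both branches share G's weights ---
    opt_g.zero_grad()
    gen_p = G(paired_pose)                    # paired branch
    d_p = D(paired_pose, gen_p)
    loss_paired = l1(gen_p, paired_frame) + \
                  0.1 * bce(d_p, torch.ones_like(d_p))

    gen_u = G(unpaired_pose)                  # unpaired branch
    d_u = D(unpaired_pose, gen_u)
    loss_unpaired = 0.1 * bce(d_u, torch.ones_like(d_u))

    (loss_paired + loss_unpaired).backward()
    opt_g.step()
    return loss_d.item(), loss_paired.item(), loss_unpaired.item()

# Smoke test with random tensors standing in for pose maps and frames.
B, H, W = 2, 64, 64
p_pose = torch.rand(B, 9, H, W)   # 3-frame window of 3-channel pose maps
p_frame = torch.rand(B, 3, H, W)
u_pose = torch.rand(B, 9, H, W)
print(train_step(p_pose, p_frame, u_pose))
```

The point of the sketch is the weight sharing: because the unpaired branch updates the same generator as the paired branch, pose sequences absent from the reference video still shape the generator, which is how the unpaired branch can improve temporal coherence on unseen motions. The paper additionally uses temporal modeling and data augmentation not reproduced here.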
