
Restore DeepFakes video frames via identifying individual motion styles
Author(s) -
Zhang Haichao,
Lu ZheMing,
Luo Hao,
Feng YaPei
Publication year - 2021
Publication title -
Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/ell2.12015
Subject(s) - computer science, artificial intelligence, computer vision, machine learning, motion, identity, embedding, pipeline (software), frame, process (computing)
Recent advances in highly realistic AI‐synthesised video make it hard to distinguish whether the speaker in a video is real. Many existing DeepFakes detection approaches can be invalidated by using them as supervision in adversarial training, which in turn improves the DeepFakes themselves, turning detection into a dilemma. However, humans can recognise the identity of familiar people by observing their motion styles. Inspired by this, the paper proposes a novel method that recovers the original speaker's identity cues from DeepFakes videos by learning individual motion styles. As a biological signature, motion style can neither be used in the training process of DeepFakes nor be modified to deceive detection methods without degrading realism. The paper also proposes a novel pipeline that continuously restores the original frames from DeepFakes videos without knowing the DeepFakes approach in advance. Together, these ideas make it possible to restore DeepFakes videos. By training a cross‐modal transfer module, the appearance embedding can be inferred from the identity code. Extensive experiments demonstrate the effectiveness of the method, showing for the first time that the original persons can be identified automatically and that the original video can be generated from DeepFakes videos.
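
To make the described flow concrete (motion-style sequence → identity code → cross-modal transfer → appearance embedding → restored frame), below is a minimal PyTorch sketch. It is not the authors' implementation: the module names, the use of facial-landmark sequences as the motion representation, and all dimensions are hypothetical assumptions for illustration, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class MotionStyleEncoder(nn.Module):
    """Encodes a motion sequence (here: 68 facial landmarks, flattened to 136-D
    per frame -- an assumption) into a compact identity code."""
    def __init__(self, landmark_dim=136, hidden=256, id_dim=128):
        super().__init__()
        self.gru = nn.GRU(landmark_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, id_dim)

    def forward(self, landmarks):          # (B, T, landmark_dim)
        _, h = self.gru(landmarks)         # h: (1, B, hidden)
        return self.head(h[-1])            # (B, id_dim) identity code

class CrossModalTransfer(nn.Module):
    """Maps the motion-style identity code to an appearance embedding,
    mirroring the cross-modal transfer module named in the abstract."""
    def __init__(self, id_dim=128, app_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(id_dim, 256), nn.ReLU(),
            nn.Linear(256, app_dim),
        )

    def forward(self, identity_code):
        return self.mlp(identity_code)     # (B, app_dim)

class FrameDecoder(nn.Module):
    """Decodes the appearance embedding plus per-frame motion into a
    restored frame (toy 32x32 resolution for the sketch)."""
    def __init__(self, app_dim=512, motion_dim=136, img_ch=3):
        super().__init__()
        self.fc = nn.Linear(app_dim + motion_dim, 128 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, img_ch, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, appearance, motion):  # (B, app_dim), (B, motion_dim)
        x = self.fc(torch.cat([appearance, motion], dim=-1))
        x = x.view(-1, 128, 4, 4)
        return self.deconv(x)              # (B, 3, 32, 32) restored frame

# Toy end-to-end pass over a fake 30-frame landmark sequence.
landmarks = torch.randn(1, 30, 136)
id_code = MotionStyleEncoder()(landmarks)
appearance = CrossModalTransfer()(id_code)
frame = FrameDecoder()(appearance, landmarks[:, 0])
print(frame.shape)  # torch.Size([1, 3, 32, 32])
```

The key design point the sketch tries to capture is the abstract's claim that the identity code is derived purely from motion, so a DeepFake that preserves realistic motion also preserves the original speaker's signature, which the transfer module can then map back to an appearance embedding for restoration.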