Learning a deep motion interpolation network for human skeleton animations
Author(s) - Zhou Chi, Lai Zhangjiong, Wang Suzhen, Li Lincheng, Sun Xiaohan, Ding Yu
Publication year - 2021
Publication title - Computer Animation and Virtual Worlds
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.225
H-Index - 49
eISSN - 1546-427X
pISSN - 1546-4261
DOI - 10.1002/cav.2003
Subject(s) - computer science , interpolation (computer graphics) , artificial intelligence , motion interpolation , computer vision , motion (physics) , motion capture , deep learning , computer graphics (images) , computer graphics , video processing , video tracking , block matching algorithm
Motion interpolation technology produces transition motion frames between two discrete movements. It is widely used in video games, virtual reality, and augmented reality. In the fields of computer graphics and animation, our data‐driven method generates transition motions between two arbitrary animations without additional control signals. In this work, we propose a novel, carefully designed deep learning framework, named deep motion interpolation network (DMIN), which learns human movement patterns from a real dataset and then performs interpolation tailored to human motion. This data‐driven approach captures the overall rhythm of two given discrete movements and generates natural in‐between motion frames. The sequence‐by‐sequence architecture completes all missing frames within a single forward inference, which reduces the computation time for interpolation. Experiments on human motion datasets show that our network achieves promising interpolation performance. The ablation study demonstrates the effectiveness of the carefully designed DMIN.
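
To make the single-pass interpolation idea concrete, the following is a minimal, hypothetical sketch of a sequence-to-sequence motion in-betweening model in PyTorch. It is not the authors' DMIN: the GRU encoder-decoder layout, pose dimension, frame counts, and the MotionInbetweener name are assumptions introduced only to illustrate how all missing frames can be produced in one forward inference.

# Hypothetical sketch of single-pass motion in-betweening (not the authors' DMIN).
import torch
import torch.nn as nn

class MotionInbetweener(nn.Module):
    def __init__(self, pose_dim=66, hidden_dim=256, num_missing=30):
        super().__init__()
        self.num_missing = num_missing
        # Encode the given context frames (the two discrete movements).
        self.encoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        # Decode all missing frames at once from the encoded context.
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, past, future):
        # past, future: (batch, frames, pose_dim) skeleton pose sequences.
        context = torch.cat([past, future], dim=1)
        _, h = self.encoder(context)                       # h: (1, batch, hidden_dim)
        # One decoder query per missing frame, all evaluated in a single pass.
        queries = h.transpose(0, 1).repeat(1, self.num_missing, 1)
        dec_out, _ = self.decoder(queries, h)
        return self.out(dec_out)                           # (batch, num_missing, pose_dim)

# Example: generate 30 in-between frames from two 10-frame clips.
model = MotionInbetweener()
past = torch.randn(4, 10, 66)
future = torch.randn(4, 10, 66)
inbetween = model(past, future)                            # (4, 30, 66)

In practice, such a model would be trained on a motion-capture dataset with a reconstruction loss against the ground-truth in-between frames, which is one plausible reading of how a data-driven interpolator learns human movement patterns.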