Motion recognition of self and others on realistic 3D avatars
Author(s) - Narang Sahil, Best Andrew, Feng Andrew, Kang Sinhwa, Manocha Dinesh, Shapiro Ari
Publication year - 2017
Publication title - Computer Animation and Virtual Worlds
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.225
H-Index - 49
eISSN - 1546-427X
pISSN - 1546-4261
DOI - 10.1002/cav.1762
Subject(s) - avatar, computer science, retargeting, motion capture, motion (physics), perception, artificial intelligence, computer vision, virtual reality, virtual actor, biological motion, point (geometry), human–computer interaction, human motion, psychology, geometry, mathematics, neuroscience
Current 3D capture and modeling technology can rapidly generate highly photo-realistic 3D avatars of human subjects. However, while the avatars look like their human counterparts, their movements often fail to mimic the subjects' own, owing to persistent challenges in accurate motion capture and retargeting. A better understanding of the factors that influence the perception of biological motion would be valuable for creating virtual avatars that capture the essence of their human subjects. To investigate these issues, we captured 22 subjects walking in an open space. We then performed a study in which participants were asked to identify their own motion under varying visual representations and scenarios; they were likewise asked to identify the motion of familiar individuals. Unlike prior studies that presented captured footage as simple "point-light" displays, we rendered the motion on photo-realistic 3D virtual avatars of the subjects. We found that self-recognition was significantly higher with virtual avatars than with point-light representations, and participants were more confident in their responses when their motion was presented on their own virtual avatar. Recognition rates varied considerably across motion types when identifying others, but not for self-recognition. Overall, our results are consistent with previous studies that used recorded footage and offer key insights into the perception of motion rendered on virtual avatars.