Learning Representations of Animated Motion Sequences—A Neural Model
Author(s) - Layher Georg, Giese Martin A., Neumann Heiko
Publication year - 2014
Publication title - Topics in Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.191
H-Index - 56
eISSN - 1756-8765
pISSN - 1756-8757
DOI - 10.1111/tops.12075
Subject(s) - motion (physics), biological motion, computer science, categorization, anticipation (artificial intelligence), artificial intelligence, sequence (biology), perception, sequence learning, movement (music), neuroscience, psychology, philosophy, genetics, biology, aesthetics
Abstract The detection and categorization of animate motions is a crucial task underlying social interaction and perceptual decision making. Neural representations of perceived animate objects are partially located in the primate cortical region STS, a region that receives convergent input from intermediate-level form and motion representations. Populations of STS cells exist that are selectively responsive to specific animated motion sequences, such as walkers. It is still unclear how, and to what extent, form and motion information contribute to the generation of such representations, and what kinds of mechanisms are involved in the learning process. This article develops a cortical model architecture for the unsupervised learning of animated motion sequence representations. We demonstrate how the model automatically selects significant motion patterns as well as meaningful static form prototypes characterized by a high degree of articulation. Such key poses are selectively reinforced during learning through cross-talk between the motion and form processing streams. Furthermore, we show how sequence-selective representations are learned in STS by fusing static form and motion input from the segregated bottom-up driving input streams. Cells in STS, in turn, feed their activities recurrently to their input sites along top-down signal pathways. We show how such learned feedback connections enable predictions about future input, realized as anticipatory responses of sequence-selective STS cells. Network simulations demonstrate the computational capacity of the proposed model by reproducing several experimental findings from neuroscience and by accounting for recent behavioral data.
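To make the mechanisms summarized in the abstract more concrete, the following is a minimal, hypothetical sketch, not the authors' implementation, of how segregated form and motion inputs might be fused by sequence-selective units trained with a simple Hebbian rule, with learned top-down feedback used to anticipate the next bottom-up input. All dimensions, learning rates, and function names are assumptions introduced only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration only)
N_FORM, N_MOTION, N_STS = 16, 16, 8   # form units, motion units, STS sequence units
ETA = 0.05                            # Hebbian learning rate

# Feedforward weights: fuse static form and motion input onto STS units
W_ff = rng.normal(scale=0.1, size=(N_STS, N_FORM + N_MOTION))
# Top-down feedback weights: STS activity predicts the next bottom-up input
W_fb = rng.normal(scale=0.1, size=(N_FORM + N_MOTION, N_STS))

def sts_response(form, motion):
    """Fuse the segregated form and motion streams into STS unit activity."""
    x = np.concatenate([form, motion])
    a = np.maximum(W_ff @ x, 0.0)             # rectified linear response
    return a / (np.linalg.norm(a) + 1e-8), x  # normalized activity, fused input

def hebbian_step(sequence):
    """One unsupervised pass over a sequence of (form, motion) frames."""
    global W_ff, W_fb
    prev_a = None
    for form, motion in sequence:
        a, x = sts_response(form, motion)
        # Hebbian update of feedforward weights, followed by row normalization
        W_ff += ETA * np.outer(a, x)
        W_ff /= np.linalg.norm(W_ff, axis=1, keepdims=True)
        # Learn feedback so that STS activity at t-1 predicts the input at t
        if prev_a is not None:
            pred = W_fb @ prev_a
            W_fb += ETA * np.outer(x - pred, prev_a)
        prev_a = a

def anticipate(form, motion):
    """Top-down prediction of the next (form, motion) input frame."""
    a, _ = sts_response(form, motion)
    pred = W_fb @ a
    return pred[:N_FORM], pred[N_FORM:]

# Usage example: train on a toy sequence of random frames, then anticipate
frames = [(rng.random(N_FORM), rng.random(N_MOTION)) for _ in range(20)]
for _ in range(50):
    hebbian_step(frames)
next_form, next_motion = anticipate(*frames[0])
```

In this sketch the feedforward Hebbian rule plays the role of the unsupervised selection of prototypical input patterns, while the feedback weights stand in for the learned top-down connections that let sequence-selective units anticipate upcoming input; the model in the article is a full cortical architecture and differs in detail.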