The neural basis of depth perception from motion parallax
Author(s) -
HyungGoo R. Kim,
Dora E. Angelaki,
Gregory C. DeAngelis
Publication year - 2016
Publication title -
Philosophical Transactions of the Royal Society B: Biological Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.753
H-Index - 272
eISSN - 1471-2970
pISSN - 0962-8436
DOI - 10.1098/rstb.2015.0256
Subject(s) - parallax, kinetic depth effect, neural substrate, depth perception, computer vision, perception, artificial intelligence, binocular disparity, motion (physics), computer science, motion perception, observer (physics), sign (mathematics), binocular vision, mathematics, biology, neuroscience, physics, cognition, quantum mechanics, mathematical analysis
In addition to depth cues afforded by binocular vision, the brain processes relative motion signals to perceive depth. When an observer translates relative to their visual environment, the relative motion of objects at different distances (motion parallax) provides a powerful cue to three-dimensional scene structure. Although perception of depth based on motion parallax has been studied extensively in humans, relatively little is known regarding the neural basis of this visual capability. We review recent advances in elucidating the neural mechanisms for representing depth-sign (near versus far) from motion parallax. We examine a potential neural substrate in the middle temporal visual area for depth perception based on motion parallax, and we explore the nature of the signals that provide critical inputs for disambiguating depth-sign. This article is part of the themed issue 'Vision in our three-dimensional world'.
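The geometry behind the depth-sign problem described in the abstract can be sketched in a few lines. Retinal image motion alone is ambiguous about near versus far; combining it with an extra-retinal signal (here, pursuit eye velocity) resolves the sign. The sketch below is illustrative only, not the authors' model: the function name, the use of the retinal-to-pursuit velocity ratio, and the sign convention (near points slip in the same direction as the pursuit eye movement during observer translation, far points opposite) are assumptions for this example.

```python
def depth_sign(retinal_velocity: float, pursuit_velocity: float) -> str:
    """Classify a point as 'near' or 'far' relative to the fixation point.

    Illustrative sketch: retinal_velocity is the residual image slip of the
    point (deg/s, signed), pursuit_velocity is the smooth eye movement that
    keeps fixation (deg/s, signed). Sign convention assumed here: slip in
    the SAME direction as pursuit signals a point nearer than fixation;
    slip OPPOSITE to pursuit signals a farther point.
    """
    if pursuit_velocity == 0.0:
        # With no eye-movement signal, retinal motion alone leaves
        # depth-sign ambiguous -- the core problem the article reviews.
        raise ValueError("depth-sign is ambiguous without a pursuit signal")
    ratio = retinal_velocity / pursuit_velocity
    if ratio > 0:
        return "near"
    if ratio < 0:
        return "far"
    return "at fixation"


# Example: leftward pursuit (-2 deg/s) with leftward retinal slip (-1 deg/s)
print(depth_sign(-1.0, -2.0))  # near
print(depth_sign(1.0, -2.0))   # far
print(depth_sign(0.0, -2.0))   # at fixation
```

Note that the same retinal slip flips its depth interpretation when the pursuit direction reverses, which is why an extra-retinal signal is essential for disambiguation.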