Open Access
A Quantitative Explanation of Responses to Disparity-Defined Edges in Macaque V2
Author(s) -
Christine Bredfeldt,
Jenny C. A. Read,
Bruce G. Cumming
Publication year - 2009
Publication title -
Journal of Neurophysiology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.302
H-Index - 245
eISSN - 1522-1598
pISSN - 0022-3077
DOI - 10.1152/jn.00729.2007
Subject(s) - feed forward , segmentation , macaque , binocular disparity , computer science , artificial intelligence , receptive field , invariant (physics) , pattern recognition (psychology) , computer vision , binocular vision , mathematics , neuroscience , psychology , control engineering , engineering , mathematical physics
Previous experiments have shown that V2 neurons respond to complex stimuli such as cyclopean edges (edges defined purely by binocular disparity), angles, and motion borders. It is currently unknown whether these responses are a simple consequence of converging inputs from a prior stage of processing (V1). Alternatively, they may identify edges in a way that is invariant across a range of visual cues defining the edge, in which case they could provide a neuronal substrate for scene segmentation. Here, we examine the ability of a simple feedforward model that combines two V1-like inputs to describe the responses of V2 neurons to cyclopean edges. A linear feedforward model was able to qualitatively reproduce the major patterns of response enhancement for cyclopean edges seen in V2. However, quantitative fitting revealed that this model usually predicts response suppression by some edge configurations and such suppression was rarely seen in the data. This problem was resolved by introducing a squaring nonlinearity at the output of the individual inputs prior to combination. The extended model produced extremely good fits to most of our data. We conclude that the responses of V2 neurons to complex stimuli such as cyclopean edges can be adequately explained by a simple convergence model and do not necessarily represent the development of sophisticated mechanisms that signal scene segmentation, although they probably constitute a step toward this goal.
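The abstract describes the model only at the architectural level (two V1-like disparity-tuned inputs combined, either linearly or after squaring each input). The sketch below is a minimal illustration of that architecture, not the authors' fitted model: the Gaussian tuning shape, the baseline subtraction, the weights, and all parameter values are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of the feedforward architecture described in the abstract.
# All tuning shapes and parameter values are illustrative assumptions.

def v1_input(disparity, preferred, sigma=0.4, baseline=0.2):
    """Assumed disparity tuning of one V1-like subunit.
    Output is expressed relative to baseline, so it can be negative
    (the subunit can be driven below its spontaneous rate)."""
    return np.exp(-0.5 * ((disparity - preferred) / sigma) ** 2) - baseline

def v2_linear(d1, d2, w1=1.0, w2=1.0):
    """Linear feedforward combination of two subunits, one per half of
    the receptive field. Negative subunit drives pass straight through,
    so some edge configurations (d1, d2) are predicted to suppress the
    response."""
    return w1 * v1_input(d1, preferred=-0.3) + w2 * v1_input(d2, preferred=0.3)

def v2_squared(d1, d2, w1=1.0, w2=1.0):
    """Extended model: square each subunit output before combination.
    Every contribution is then non-negative, so the model can enhance
    responses to edges without predicting suppression."""
    return (w1 * v1_input(d1, preferred=-0.3) ** 2
            + w2 * v1_input(d2, preferred=0.3) ** 2)

# A cyclopean edge is specified by the disparities on the two halves
# of the receptive field.
uniform = (0.3, 0.3)    # both halves at the same disparity
edge = (-0.3, 0.3)      # disparity step across the receptive field
print("linear :", v2_linear(*uniform), v2_linear(*edge))
print("squared:", v2_squared(*uniform), v2_squared(*edge))
```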
