Open Access
Retinal Input Instructs Alignment of Visual Topographic Maps
Author(s) -
Jason W. Triplett,
Melinda T. Owens,
Jena Yamada,
Greg Lemke,
Jianhua Cang,
Michael P. Stryker,
David A. Feldheim
Publication year - 2009
Publication title - Cell
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 26.304
H-Index - 776
eISSN - 1097-4172
pISSN - 0092-8674
DOI - 10.1016/j.cell.2009.08.028
Subject(s) - superior colliculus , biology , visual cortex , neuroscience , topographic map (neuroanatomy) , retinal , visual space , retina , visual system , sensory system , projection (relational algebra) , retinotopy , computer vision , artificial intelligence , computer science , perception , biochemistry , algorithm
Sensory information is represented in the brain in the form of topographic maps, in which neighboring neurons respond to adjacent external stimuli. In the visual system, the superior colliculus receives topographic projections from the retina and primary visual cortex (V1) that are aligned. Alignment may be achieved through the use of a gradient of shared axon guidance molecules, or through a retinal-matching mechanism in which axons that monitor identical regions of visual space align. To distinguish between these possibilities, we take advantage of genetically engineered mice that we show have a duplicated functional retinocollicular map but only a single map in V1. Anatomical tracing revealed that the corticocollicular projection bifurcates to align with the duplicated retinocollicular map in a manner dependent on the normal pattern of spontaneous activity during development. These data suggest a general model in which convergent maps use coincident activity patterns to achieve alignment.
