Multistability in auditory stream segregation: a predictive coding view
Author(s) -
István Winkler,
Susan L. Denham,
Robert Mill,
Tamás M. Bőhm,
Alexandra Bendixen
Publication year - 2012
Publication title -
Philosophical Transactions of the Royal Society B: Biological Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.753
H-Index - 272
eISSN - 1471-2970
pISSN - 0962-8436
DOI - 10.1098/rstb.2011.0359
Subject(s) - multistability, predictive coding, perception, auditory scene analysis, coding (social sciences), auditory perception, computer science, communication, speech recognition, psychology, neuroscience, mathematics, nonlinear system, statistics, physics, quantum mechanics
Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm.
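The abstract's central idea, that competing descriptions of the same sound sequence are scored by how well they predict incoming events, can be sketched in a toy form. The sequence below is the classic ABA_ streaming pattern; the candidate descriptions, their names, and the simple hit-rate scoring are illustrative assumptions made here for concreteness, not the authors' model:

```python
# Toy sketch (an assumption, not the model from the paper): two candidate
# descriptions of a repeating ABA_ tone sequence compete on predictive fit.
sequence = ["A", "B", "A", "_"] * 8  # 8 repetitions of the ABA_ triplet

def predictive_fit(pattern, seq):
    """Fraction of events that a cyclically repeated pattern predicts correctly.
    This stands in for the 'measure of how well a representation describes
    the current acoustic scene' mentioned in the abstract."""
    predictions = [pattern[i % len(pattern)] for i in range(len(seq))]
    return sum(p == o for p, o in zip(predictions, seq)) / len(seq)

# Hypothetical competing descriptions of the incoming sequence.
candidates = {
    "integrated (one ABA_ stream)": ["A", "B", "A", "_"],
    "mis-parsed pattern (AB__)":    ["A", "B", "_", "_"],
}

scores = {name: predictive_fit(p, sequence) for name, p in candidates.items()}
# The description with the best predictive fit "emerges in awareness".
winner = max(scores, key=scores.get)
```

In this caricature, competition reduces to picking the highest-scoring hypothesis; in the predictive coding account the competition is continuous, which is what allows perception to switch between alternatives over time.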