Open Access
Rethinking dopamine as generalized prediction error
Author(s) - Matthew P.H. Gardner, Geoffrey Schoenbaum, Samuel J. Gershman
Publication year - 2018
Publication title - Proceedings of the Royal Society B: Biological Sciences
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.342
H-Index - 253
eISSN - 1471-2954
pISSN - 0962-8452
DOI - 10.1098/rspb.2018.1645
Subject(s) - dopamine, reinforcement learning, neuroscience, sensory system, mean squared prediction error, conceptualization, psychology, reinforcement, cognitive psychology, midbrain, identity (music), computer science, artificial intelligence, machine learning, social psychology, physics, acoustics, central nervous system
Midbrain dopamine neurons are commonly thought to report a reward prediction error (RPE), as hypothesized by reinforcement learning (RL) theory. While this theory has been highly successful, several lines of evidence suggest that dopamine activity also encodes sensory prediction errors unrelated to reward. Here, we develop a new theory of dopamine function that embraces a broader conceptualization of prediction errors. By signalling errors in both sensory and reward predictions, dopamine supports a form of RL that lies between model-based and model-free algorithms. This account remains consistent with current canon regarding the correspondence between dopamine transients and RPEs, while also accounting for new data suggesting a role for these signals in phenomena such as sensory preconditioning and identity unblocking, which ostensibly draw upon knowledge beyond reward predictions.
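The idea of broadening the scalar reward prediction error to errors over sensory predictions can be illustrated with a vector-valued temporal-difference update in the spirit of the successor representation. The sketch below is an assumption for illustration, not the paper's exact model: the toy environment, the function name `sf_td_update`, and all parameter values are invented, and the scalar RPE is recovered by projecting the vector error onto an assumed reward-weight vector.

```python
import numpy as np

def sf_td_update(M, phi, s, s_next, gamma=0.9, alpha=0.1):
    """One successor-feature TD update (illustrative, not the paper's model).

    M   : (n_states, n_features) expected discounted sums of future features
    phi : (n_states, n_features) one-step feature observations per state
    Returns the vector-valued prediction error (a 'generalized RPE').
    """
    delta = phi[s] + gamma * M[s_next] - M[s]  # sensory prediction error
    M[s] = M[s] + alpha * delta
    return delta

# Toy 2-state chain: state 0 -> state 1; state 1 is absorbing.
n_states, n_features = 2, 2
phi = np.eye(n_states)              # state i emits one-hot feature e_i
M = np.zeros((n_states, n_features))

# Repeated experience drives the prediction error toward zero.
for _ in range(2000):
    sf_td_update(M, phi, 0, 1)
    sf_td_update(M, phi, 1, 1)      # self-transition in the absorbing state

# A conventional scalar RPE falls out by projecting the vector error onto
# a reward weight vector w (here assuming reward = w . phi(s)).
w = np.array([0.0, 1.0])
delta = sf_td_update(M, phi, 0, 1)
rpe = w @ delta                      # near zero once predictions converge
```

Because the error is a vector over features rather than a single reward scalar, the same update can register violations of purely sensory expectations (as in sensory preconditioning or identity unblocking) even when expected reward value is unchanged.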
