
Dark control: The default mode network as a reinforcement learning agent
Author(s) - Dohmatob Elvis, Dumas Guillaume, Bzdok Danilo
Publication year - 2020
Publication title - Human Brain Mapping
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.005
H-Index - 191
eISSN - 1097-0193
pISSN - 1065-9471
DOI - 10.1002/hbm.25019
Subject(s) - default mode network, reinforcement learning, psychology, perspective (graphical), action (physics), cognitive science, cognitive psychology, computer science, artificial intelligence, neuroscience, functional connectivity, physics, quantum mechanics
The default mode network (DMN) is believed to subserve baseline mental activity in humans. Its higher energy consumption compared to other brain networks and its intimate coupling with conscious awareness both point to an as-yet-unknown overarching function. Many research streams speak in favor of an evolutionarily adaptive role in envisioning experience to anticipate the future. In the present work, we propose a process model that aims to explain how the DMN may implement continuous evaluation and prediction of the environment to guide behavior. The main purpose of DMN activity, we argue, may be described by Markov decision processes that optimize action policies via value estimates obtained through vicarious trial and error. Our formal perspective on DMN function naturally accommodates as special cases previous interpretations based on (a) predictive coding, (b) semantic associations, and (c) a sentinel role. Moreover, this process model of the neural optimization of complex behavior in the DMN offers parsimonious explanations for recent experimental findings in animals and humans.
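Below is a minimal, self-contained sketch of the kind of value-based policy optimization that the abstract invokes: a tabular Markov decision process solved by value iteration, with a greedy action policy read off the converged value estimates. It is an illustrative toy rather than the authors' model; the transition tensor P, reward matrix R, discount factor gamma, and the state/action counts are arbitrary assumptions chosen only to make the example runnable.

    import numpy as np

    # Toy MDP (hypothetical, not from the paper): 3 states, 2 actions.
    # P[a, s, s'] is the probability of moving from state s to s' under action a;
    # R[a, s] is the expected immediate reward for taking action a in state s.
    n_states, n_actions, gamma = 3, 2, 0.9
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
    R = rng.normal(size=(n_actions, n_states))

    # Value iteration: repeatedly apply the Bellman optimality backup
    # until the state-value estimates stop changing.
    V = np.zeros(n_states)
    for _ in range(1000):
        Q = R + gamma * (P @ V)      # Q[a, s]: value of taking action a in state s
        V_new = Q.max(axis=0)        # greedy backup over actions
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    policy = Q.argmax(axis=0)        # action policy derived from the value estimates
    print("state values:", V)
    print("greedy policy:", policy)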