The Outcome‐Representation Learning Model: A Novel Reinforcement Learning Model of the Iowa Gambling Task
Author(s) - Nathaniel Haines, Jasmin Vassileva, Woo-Young Ahn
Publication year - 2018
Publication title - Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1111/cogs.12688
Subject(s) - Iowa gambling task, reinforcement learning, inference, task (project management), representation (politics), machine learning, artificial intelligence, cognition, cognitive model, computer science, psychology, compromise, cognitive psychology, outcome (game theory), mathematics, social science, management, mathematical economics, neuroscience, politics, sociology, political science, law, economics
The Iowa Gambling Task (IGT) is widely used to study decision‐making within healthy and psychiatric populations. However, the complexity of the IGT makes it difficult to attribute variation in performance to specific cognitive processes. Several cognitive models have been proposed for the IGT in an effort to address this problem, but currently no single model shows optimal performance in both short‐ and long‐term prediction accuracy and parameter recovery. Here, we propose the Outcome‐Representation Learning (ORL) model, a novel model that provides the best compromise between competing models. We test the performance of the ORL model on data from 393 subjects collected across multiple research sites, and we show that the ORL model reveals distinct patterns of decision‐making in substance‐using populations. Our work highlights the importance of using multiple model‐comparison metrics to draw valid inferences with cognitive models, and it sheds light on learning mechanisms that play a role in the underweighting of rare events.
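The abstract names the ORL model but does not reproduce its equations. As a rough illustration of the kind of trial-by-trial computation such a reinforcement learning model performs on the IGT, the sketch below implements an ORL-style update in Python. The specific update rules (separate gain and loss learning rates, a signed outcome-frequency term, a decaying perseveration term, and a softmax choice rule), the fictive frequency update for unchosen decks, and all parameter names (a_rew, a_pun, beta_f, beta_p, decay_k) are assumptions made for illustration, not the published specification.

```python
# Hypothetical sketch of an ORL-style trial update for the IGT.
# Everything here is an illustrative assumption inferred from the general
# description of the model class, not the authors' code.
import numpy as np

N_DECKS = 4

def orl_trial(ev, ef, ps, choice, outcome,
              a_rew=0.1, a_pun=0.1, beta_f=1.0, beta_p=1.0, decay_k=1.0):
    """One trial of an ORL-like update.

    ev, ef, ps : float arrays of length 4 holding expected value, expected
                 outcome frequency, and perseveration for each deck.
    choice     : index of the chosen deck (0..3).
    outcome    : net payoff observed on this trial.
    Returns the softmax choice probabilities for the next trial.
    """
    # Learning rate depends on the sign of the outcome (gain vs. loss).
    lr = a_rew if outcome >= 0 else a_pun
    sgn = np.sign(outcome)

    # Rescorla-Wagner update of expected value for the chosen deck.
    ev[choice] += lr * (outcome - ev[choice])

    # Expected outcome frequency: the chosen deck moves toward the observed
    # sign; unchosen decks move toward the opposite sign, split evenly
    # across them (a fictive update, assumed here).
    ef[choice] += lr * (sgn - ef[choice])
    for d in range(N_DECKS):
        if d != choice:
            ef[d] += lr * (-sgn / (N_DECKS - 1) - ef[d])

    # Perseveration: decay all decks, then reset the chosen deck.
    ps /= 1.0 + decay_k
    ps[choice] = 1.0 / (1.0 + decay_k)

    # Combine the three signals into deck values and a softmax policy.
    value = ev + beta_f * ef + beta_p * ps
    p = np.exp(value - value.max())
    return p / p.sum()
```

Under these assumptions, the asymmetry between a_rew and a_pun and the separate frequency signal ef are the pieces that would let such a model express sensitivity (or insensitivity) to rare but large outcomes, which is one way a model of this class could speak to the underweighting of rare events mentioned in the abstract.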
