Scaling Up Learning Models in Public Good Games
Author(s) - Jasmina Arifovic, John Ledyard
Publication year - 2004
Publication title - Journal of Public Economic Theory
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.809
H-Index - 32
eISSN - 1467-9779
pISSN - 1097-3923
DOI - 10.1111/j.1467-9779.2004.00165.x
Subject(s) - reinforcement learning , machine learning , mathematical economics , computer science , scaling , discretization , strategy space , mathematics
We study three learning rules (reinforcement learning (RL), experience weighted attraction learning (EWA), and individual evolutionary learning (IEL)) and how they perform in three different Groves–Ledyard mechanisms. We are interested in how well these learning rules reproduce human behavior in repeated games with a continuum of strategies. We find that RL does not do well, while IEL does significantly better, as does EWA, but only when given a small discretized strategy space. We identify four main features a learning rule should have in order to pass a minimal competency test against human behavior: (1) the use of hypotheticals to create history, (2) the ability to focus only on what is important, (3) the ability to forget history when it is no longer important, and (4) the ability to try new things.
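To make the comparison concrete, a basic payoff-based reinforcement learning rule of the kind studied here (in the spirit of Erev–Roth propensity learning) can be sketched over a discretized strategy space. The payoff function below is a stylized stand-in with a single best response, not the actual Groves–Ledyard mechanism payoff; all names and parameters are illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed so the illustrative run is reproducible

def make_rl_agent(strategies, initial_propensity=1.0):
    """An agent is a grid of discrete strategies plus one propensity each."""
    return {"strategies": list(strategies),
            "propensities": [initial_propensity] * len(strategies)}

def choose(agent):
    """Pick a strategy with probability proportional to its propensity."""
    total = sum(agent["propensities"])
    r = random.uniform(0.0, total)
    cum = 0.0
    for s, p in zip(agent["strategies"], agent["propensities"]):
        cum += p
        if r <= cum:
            return s
    return agent["strategies"][-1]

def reinforce(agent, strategy, payoff):
    """Add the realized payoff to the chosen strategy's propensity."""
    i = agent["strategies"].index(strategy)
    agent["propensities"][i] += payoff

# Stylized payoff: a unique best response at a contribution of 0.6
# (a hypothetical stand-in for the mechanism's payoff function).
def payoff(x):
    return 1.0 if x == 0.6 else 0.05

agent = make_rl_agent([i / 10 for i in range(11)])  # 11-point grid on [0, 1]
for _ in range(2000):
    s = choose(agent)
    reinforce(agent, s, payoff(s))

best = max(zip(agent["strategies"], agent["propensities"]),
           key=lambda t: t[1])[0]
print(best)
```

Because the chosen strategy's propensity grows by its realized payoff, the rule has a rich-get-richer dynamic: with a discretized grid, play concentrates on the best response, which is why EWA and RL behave very differently on a small discrete space than on a fine approximation of the continuum.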
