Decision Theory with Resource‐Bounded Agents
Author(s) -
Halpern Joseph Y.,
Pass Rafael,
Seeman Lior
Publication year - 2014
Publication title -
Topics in Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.191
H-Index - 56
eISSN - 1756-8765
pISSN - 1756-8757
DOI - 10.1111/tops.12088
Subject(s) - turing machine , computer science , decision problem , bounded rationality , bounded function , automaton , computation , theoretical computer science , model of computation , decision theory , turing , finite state machine , game theory , mathematical economics , artificial intelligence , mathematics , algorithm , mathematical analysis , statistics , programming language
There have been two major lines of research aimed at capturing resource‐bounded players in game theory. The first, initiated by Rubinstein ([Rubinstein, A., 1986]), charges an agent for doing costly computation; the second, initiated by Neyman ([Neyman, A., 1985]), does not charge for computation, but limits the computation that agents can do, typically by modeling agents as finite automata. We review recent work on applying both approaches in the context of decision theory. For the first approach, we take the objects of choice in a decision problem to be Turing machines, and charge players for the “complexity” of the Turing machine chosen (e.g., its running time). This approach can be used to explain well‐known phenomena like first‐impression‐matters biases (i.e., people tend to put more weight on evidence they hear early on) and belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions) as the outcomes of quite rational decisions. For the second approach, we model people as finite automata, and provide a simple algorithm that, on a problem that captures a number of settings of interest, provably performs optimally as the number of states in the automaton increases.
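The first approach can be illustrated with a minimal sketch (all names and numbers here are invented for illustration, not taken from the paper): an agent chooses among "machines" (strategies), and each machine's value is its expected payoff minus a cost proportional to its running time. Once computation is costly, a cheap heuristic can rationally beat a more accurate but expensive procedure.

```python
# Hypothetical sketch of Rubinstein-style costly computation.
# Each machine is (name, expected_payoff, running_time); the agent
# maximizes payoff minus a per-step computation charge.

def best_machine(machines, cost_per_step):
    """Return the machine with the highest net utility."""
    return max(machines, key=lambda m: m[1] - cost_per_step * m[2])

machines = [
    ("exhaustive", 10.0, 100),  # higher payoff, expensive to run
    ("heuristic",   8.0,   5),  # slightly worse, nearly free
]

# With a positive computation charge the cheap heuristic wins,
# while a costless agent would pick the exhaustive machine.
assert best_machine(machines, cost_per_step=0.1)[0] == "heuristic"
assert best_machine(machines, cost_per_step=0.0)[0] == "exhaustive"
```

The same structure is what lets apparent biases come out as rational choices: sticking with an early impression can simply be cheaper than re-deliberating on each new piece of evidence.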
