Open Access
Orders of limits for stationary distributions, stochastic dominance, and stochastic stability
Author(s) - Sandholm, William H.
Publication year - 2010
Publication title - Theoretical Economics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 4.404
H-Index - 32
eISSN - 1555-7561
pISSN - 1933-6837
DOI - 10.3982/te554
Subject(s) - stochastic dominance, markov chain, population, mathematics, logit, mathematical optimization, stability (learning theory), markov process, limit (mathematics), mathematical economics, computer science, econometrics, statistics, mathematical analysis, demography, machine learning, sociology
A population of agents recurrently plays a two‐strategy population game. When an agent receives a revision opportunity, he chooses a new strategy using a noisy best response rule that satisfies mild regularity conditions; best response with mutations, logit choice, and probit choice are all permitted. We study the long run behavior of the resulting Markov process when the noise level η is small and the population size N is large. We obtain a precise characterization of the asymptotics of the stationary distributions μ^{N,η} as η approaches zero and N approaches infinity, and we establish that these asymptotics are the same for either order of limits and for all simultaneous limits. In general, different noisy best response rules can generate different stochastically stable states. To obtain a robust selection result, we introduce a refinement of risk dominance called stochastic dominance, and we prove that coordination on a given strategy is stochastically stable under every noisy best response rule if and only if that strategy is stochastically dominant.
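The setup in the abstract can be illustrated numerically. The sketch below (Python, not from the paper) computes the stationary distribution μ^{N,η} of the birth-death Markov chain induced by logit choice in a two-strategy coordination game, and shows mass concentrating on one state as η shrinks. The payoff values, the mean-field payoff approximation, and all function names are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

# Illustrative 2x2 coordination game (payoff values are assumptions, not from the paper):
# payoff a for coordinating on A, d for coordinating on B, off-diagonal payoffs b and c.
a, b, c, d = 4.0, 0.0, 2.0, 1.0

def logit_choice_prob_A(pi_A, pi_B, eta):
    """Probability of choosing A under logit choice with noise level eta."""
    m = max(pi_A, pi_B)                 # numerically stable two-alternative softmax
    eA = np.exp((pi_A - m) / eta)
    eB = np.exp((pi_B - m) / eta)
    return eA / (eA + eB)

def stationary_distribution(N, eta):
    """Stationary distribution over states x = 0..N (x = number of A-players),
    computed via detailed balance for the induced birth-death chain."""
    def prob_A_at(x):
        share = x / N                   # mean-field payoff simplification (assumption)
        pi_A = a * share + b * (1 - share)
        pi_B = c * share + d * (1 - share)
        return logit_choice_prob_A(pi_A, pi_B, eta)

    log_mu = np.zeros(N + 1)
    for x in range(N):
        p_up = (N - x) / N * prob_A_at(x)              # a B-player revises and switches to A
        p_down = (x + 1) / N * (1 - prob_A_at(x + 1))  # an A-player revises and switches to B
        log_mu[x + 1] = log_mu[x] + np.log(p_up) - np.log(p_down)
    mu = np.exp(log_mu - log_mu.max())
    return mu / mu.sum()

if __name__ == "__main__":
    # As eta shrinks, mass concentrates near the all-A state in this example game.
    for eta in (0.5, 0.1, 0.02):
        mu = stationary_distribution(N=100, eta=eta)
        print(f"eta={eta}: stationary mass on x >= 90 is {mu[90:].sum():.3f}")
```

In this example A is risk dominant (its best-response region covers shares above 1/3), so the printed mass near the all-A state grows toward 1 as η falls; varying N alongside η gives a rough sense of the order-of-limits question the paper resolves.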
