Open Access
Existence of Risk-Sensitive Optimal Stationary Policies for Controlled Markov Processes
Author(s) -
Daniel Hernández-Hernández
Publication year - 1999
Publication title -
Applied Mathematics and Optimization
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.913
H-Index - 51
eISSN - 1432-0606
pISSN - 0095-4616
DOI - 10.1007/s002459900126
Subject(s) - mathematics , bellman equation , dynamic programming , markov decision process , optimal control , countable set , mathematical optimization , mathematical economics , markov process , markov chain , discounting , state space , economics , discrete mathematics , statistics
In this paper we are concerned with the existence of optimal stationary policies for infinite-horizon risk-sensitive Markov control processes with denumerable state space, unbounded cost function, and long-run average cost. Introducing a discounted cost dynamic game, we prove that its value function satisfies an Isaacs equation, and we study its relationship with the risk-sensitive control problem. Using the vanishing discount approach, we prove that the risk-sensitive dynamic programming inequality holds, and we derive an optimal stationary policy.
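For orientation, the objects named in the abstract can be sketched in standard notation. The symbols below (risk factor $\gamma$, cost $c$, transition kernel $P$, state space $S$, admissible actions $A(x)$) are conventional in this literature and are not taken from the paper itself; the paper establishes an inequality variant of the equation shown, not necessarily this exact form.

```latex
% Risk-sensitive long-run average cost of a policy \pi starting at x:
% exponential utility of the accumulated cost, normalized by n\gamma.
\[
  J(x,\pi) \;=\; \limsup_{n\to\infty}\,\frac{1}{n\gamma}\,
  \log \mathbb{E}_x^{\pi}\!\left[\exp\!\Big(\gamma \sum_{t=0}^{n-1} c(x_t,a_t)\Big)\right].
\]
% The associated (multiplicative) dynamic programming equation: a constant
% \lambda and a function W on S such that, for every state x,
\[
  e^{\gamma\lambda + W(x)} \;=\;
  \min_{a \in A(x)} \Big\{\, e^{\gamma c(x,a)} \sum_{y \in S} P(y \mid x, a)\, e^{W(y)} \Big\}.
\]
% A stationary policy selecting a minimizer above is then optimal, with
% \lambda the optimal risk-sensitive average cost.
```

The "dynamic programming inequality" of the abstract replaces the equality above with a one-sided bound obtained in the vanishing discount limit, which is still enough to extract an optimal stationary policy.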
