Evolutionary policy iteration for solving Markov decision processes
Author(s) -
Hyeong Soo Chang,
Hong-Gi Lee,
M.C. Fu,
S.I. Marcus
Publication year - 2005
Publication title -
IEEE Transactions on Automatic Control
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.436
H-Index - 294
eISSN - 1558-2523
pISSN - 0018-9286
DOI - 10.1109/TAC.2005.858644
Subject(s) - signal processing and analysis
We propose a novel algorithm called evolutionary policy iteration (EPI) for solving infinite-horizon discounted reward Markov decision processes. EPI inherits the spirit of policy iteration but eliminates the need to maximize over the entire action space in the policy improvement step, so it should be most effective for problems with very large action spaces. EPI iteratively generates a "population," i.e., a set of policies, such that the performance of the "elite policy" of each population monotonically improves with respect to a defined fitness function. EPI converges with probability one to a population whose elite policy is an optimal policy. EPI is naturally parallelizable, and along these lines a distributed variant of EPI is also studied.
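The population-based scheme described in the abstract can be sketched as follows. This is a minimal illustrative implementation on a hypothetical random MDP, not the authors' exact algorithm (the paper's EPI includes specific policy mutation and switching mechanisms): a population of policies is generated each iteration, the elite policy is retained, and new candidates are formed by mutating the elite, which guarantees the elite's fitness is monotonically non-decreasing.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, GAMMA = 3, 4, 0.9  # states, actions, discount factor (illustrative sizes)

# Hypothetical random MDP used only for illustration.
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(S, A))      # R[s, a] = immediate reward

def policy_value(pi):
    """Exact policy evaluation: solve (I - gamma * P_pi) v = r_pi."""
    P_pi = P[np.arange(S), pi]              # S x S transition matrix under pi
    r_pi = R[np.arange(S), pi]              # reward vector under pi
    return np.linalg.solve(np.eye(S) - GAMMA * P_pi, r_pi)

def fitness(pi):
    """Scalar fitness: discounted value summed over all states."""
    return policy_value(pi).sum()

def epi(pop_size=10, mutation_rate=0.3, generations=50):
    """Toy EPI loop: keep the elite, mutate it to form the next population."""
    population = [rng.integers(A, size=S) for _ in range(pop_size)]
    elite = max(population, key=fitness)
    history = [fitness(elite)]
    for _ in range(generations):
        # Retaining the elite guarantees its fitness never decreases.
        population = [elite.copy()]
        for _ in range(pop_size - 1):
            child = elite.copy()
            mask = rng.random(S) < mutation_rate
            child[mask] = rng.integers(A, size=int(mask.sum()))
            population.append(child)
        elite = max(population, key=fitness)
        history.append(fitness(elite))
    return elite, history
```

Note that no policy-improvement maximization over the action space is performed; improvement comes only from evaluating mutated candidates, which is the property that makes the approach attractive when the action space is very large.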