Optimizing Long‐term Hydro‐power Production Using Markov Decision Processes
Author(s) - Lamond B.F., Boukhtouta A.
Publication year - 1996
Publication title - International Transactions in Operational Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.032
H-Index - 52
eISSN - 1475-3995
pISSN - 0969-6016
DOI - 10.1111/j.1475-3995.1996.tb00049.x
Subject(s) - markov decision process, mathematical optimization, computer science, curse of dimensionality, dynamic programming, computation, term (time), state space, recursion (computer science), discretization, markov process, electric power system, power (physics), mathematics, algorithm, artificial intelligence, mathematical analysis, statistics, physics, quantum mechanics
Modelling the long‐term operation of hydroelectric systems is one of the classic applications of Markov decision processes (MDPs). The computation of optimal policies for MDP models is usually done by dynamic programming (DP) on a discretized state space. A major difficulty arises when optimizing multi‐reservoir systems, because the computational complexity of DP increases exponentially with the number of sites. This so‐called ‘curse of dimensionality’ has so far restricted the applicability of DP to very small systems (2 or 3 sites). Practitioners have thus had to resort to other methodologies for long‐term planning, often at the expense of rigour and without reliable error estimates. This paper surveys recent research on MDP computation, with application to hydro‐power systems. Three main approaches are discussed: (i) discrete DP, (ii) numerical approximation of the expected future reward function, and (iii) analytic solution of the DP recursion.
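The discrete DP approach (i) can be sketched, for a single reservoir, as a backward recursion over a discretized storage grid with random inflows. The grid sizes, inflow distribution, and unit price below are hypothetical illustrations chosen for brevity, not values from the paper:

```python
# Minimal single-reservoir discrete DP sketch (hypothetical numbers).
import numpy as np

S = 11                                   # discretized storage levels 0..10
A = 5                                    # release decisions 0..4 units per stage
T = 12                                   # planning horizon (e.g. months)
inflows = np.array([0, 1, 2])            # possible stage inflows
p_inflow = np.array([0.3, 0.5, 0.2])     # their probabilities
price = 1.0                              # reward per unit of water released

V = np.zeros(S)                          # terminal value function
policy = np.zeros((T, S), dtype=int)     # optimal release per stage and level

for t in reversed(range(T)):
    V_new = np.empty(S)
    for s in range(S):
        best, best_a = -np.inf, 0
        for a in range(min(A, s + 1)):   # cannot release more than stored
            # expected reward-to-go, averaging over random inflows
            q = 0.0
            for w, pw in zip(inflows, p_inflow):
                s_next = min(s - a + w, S - 1)   # excess inflow spills
                q += pw * (price * a + V[s_next])
            if q > best:
                best, best_a = q, a
        V_new[s] = best
        policy[t, s] = best_a
    V = V_new
```

With d reservoirs and n levels each, the same recursion would loop over n**d joint states, which is the exponential growth in the number of sites that the abstract calls the curse of dimensionality.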
