A New Improved Penalty Avoiding Rational Policy Making Algorithm for Keepaway with Continuous State Spaces
Author(s) -
Takuji Watanabe,
Kazuteru Miyazaki,
Hiroaki Kobayashi
Publication year - 2009
Publication title -
Journal of Advanced Computational Intelligence and Intelligent Informatics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.172
H-Index - 20
eISSN - 1343-0130
pISSN - 1883-8014
DOI - 10.20965/jaciii.2009.p0675
Subject(s) - computer science , discretization , asynchronous communication , basis function , algorithm , state (computer science) , mathematical optimization , mathematics
The penalty avoiding rational policy making algorithm (PARP) [1], previously improved to save memory and to cope with uncertainty as IPARP [2], requires that states be discretized in real environments with continuous state spaces, using function approximation or some other method. In particular, a method that discretizes states using basis functions is known for PARP [3]. Because this method creates a new basis function from the current input and its next observation, however, unsuitable basis functions may be generated in some asynchronous multiagent environments. We therefore propose a uniform basis function whose range is estimated before learning. We show the effectiveness of our proposal on a soccer task called “Keepaway.”
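The paper's exact formulation is not given in this abstract, but the idea of uniform basis functions laid out over a state range estimated before learning can be illustrated with a generic sketch. Here, hypothetical radial basis functions are spaced evenly over an assumed range `[low, high]`; the function name, parameters, and Gaussian form are illustrative assumptions, not the authors' method.

```python
import numpy as np

def uniform_rbf_features(x, low, high, n_centers=8, width_scale=1.0):
    """Illustrative sketch: evaluate uniformly spaced Gaussian basis
    functions over a state range [low, high] that is fixed (estimated)
    before learning, rather than grown from observations online."""
    centers = np.linspace(low, high, n_centers)   # evenly spaced centers
    spacing = (high - low) / (n_centers - 1)      # distance between centers
    width = width_scale * spacing                 # shared width for all bases
    # Gaussian activation of each basis function at state x
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# Example: feature vector for a scalar state 0.3 in an assumed range [0, 1]
phi = uniform_rbf_features(0.3, low=0.0, high=1.0)
```

Because the centers and widths are fixed up front from the estimated range, the feature representation does not depend on the order in which observations arrive, which is the property the abstract targets for asynchronous multiagent settings.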