Open Access
A Novel Reinforcement Learning Architecture for Continuous State and Action Spaces
Author(s) - Víctor Uc-Cetina
Publication year - 2013
Publication title - Advances in Artificial Intelligence
Language(s) - English
Resource type - Journals
eISSN - 1687-7489
pISSN - 1687-7470
DOI - 10.1155/2013/492852
Subject(s) - reinforcement learning , artificial intelligence , computer science , robotics , algorithm
We introduce a reinforcement learning architecture designed for problems with an infinite number of states, where each state is a vector of real numbers, and a finite number of actions, where each action requires a vector of real numbers as parameters. The main objective of this architecture is to distribute between two actors the work required to learn the final policy: one actor decides which action must be performed, while a second actor determines the right parameters for the selected action. We tested our architecture and one algorithm based on it by solving the robot dribbling problem, a challenging robot control problem taken from the RoboCup competitions. Our experimental work with three different function approximators provides enough evidence that the proposed architecture can be used to implement fast, robust, and reliable reinforcement learning algorithms.
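The two-actor split described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration only: the class name, the use of linear function approximators, and all dimensions are assumptions for the example, not the paper's actual implementation (the paper evaluates three different function approximators, not specified here).

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoActorPolicy:
    """Hypothetical sketch of the two-actor idea: a discrete actor
    chooses among a finite set of actions, and a parameter actor
    produces the real-valued parameters the chosen action requires."""

    def __init__(self, state_dim, n_actions, param_dim):
        # Linear function approximators, chosen here only for brevity.
        self.W_action = rng.normal(0.0, 0.1, (n_actions, state_dim))
        self.W_param = rng.normal(0.0, 0.1, (n_actions, param_dim, state_dim))

    def act(self, state):
        # Actor 1: softmax over per-action preferences -> discrete action.
        prefs = self.W_action @ state
        probs = np.exp(prefs - prefs.max())
        probs /= probs.sum()
        action = rng.choice(len(probs), p=probs)
        # Actor 2: real-valued parameters for the selected action.
        params = self.W_param[action] @ state
        return action, params

# Example: a 4-dimensional continuous state, 3 discrete actions,
# each parameterized by 2 real numbers.
policy = TwoActorPolicy(state_dim=4, n_actions=3, param_dim=2)
action, params = policy.act(np.array([0.5, -0.2, 1.0, 0.3]))
```

In a full algorithm, both actors would be updated from the same reward signal; how that credit is assigned between them is the substance of the paper's architecture.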