Open Access
Learning classifier system equivalent with reinforcement learning with function approximation
Author(s) -
Atsushi Wada,
Keiki Takadama,
Katsunori Shimohara
Publication year - 2005
Publication title -
CiteSeerX (The Pennsylvania State University)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1145/1102256.1102277
Subject(s) - reinforcement learning, generalization, function approximation, learning classifier system, classifier (UML), equivalence (formal languages), computer science, temporal difference learning, reinforcement, artificial intelligence, generalization error, algorithm, mathematics, artificial neural network, discrete mathematics, mathematical analysis, psychology, social psychology
We present an experimental comparison of the reinforcement process in Learning Classifier Systems (LCS) and Reinforcement Learning (RL) with function approximation (FA), focusing on their generalization mechanisms. To validate our previous theoretical analysis, which derived the equivalence of the reinforcement process between LCS and RL, we introduce a simple test environment named Gridworld that can be applied to both LCS and RL with three different classes of generalization: (1) tabular representation; (2) state aggregation; and (3) linear approximation. In simulation experiments comparing an LCS with its GA inactivated against the corresponding RL method, all three classes of generalization yielded identical results under the criteria of performance and temporal difference (TD) error, thereby verifying the equivalence predicted by the theory.
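To make the relationship between the three generalization classes concrete, the following is a minimal illustrative sketch (not the paper's code): TD(0) value estimation with linear function approximation on a small one-dimensional gridworld. The tabular representation and state aggregation are both special cases of linear FA, recovered by choosing different feature vectors phi(s). All environment sizes, learning rates, and the random policy are assumptions for illustration only.

```python
import numpy as np

N_STATES = 8   # states 0..7; state 7 is terminal, reached with reward 1
ALPHA = 0.1    # learning rate (assumed value)
GAMMA = 0.9    # discount factor (assumed value)

def phi_tabular(s):
    """One-hot feature vector: linear FA reduces to a lookup table."""
    x = np.zeros(N_STATES)
    x[s] = 1.0
    return x

def phi_aggregated(s, n_groups=4):
    """State aggregation: states in the same group share one weight."""
    x = np.zeros(n_groups)
    x[s * n_groups // N_STATES] = 1.0
    return x

def td0(phi, n_features, n_episodes=2000, seed=0):
    """TD(0) under a fixed random policy (step left/right uniformly)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n_features)
    for _ in range(n_episodes):
        s = 0
        while s != N_STATES - 1:
            s2 = min(max(s + rng.choice([-1, 1]), 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            v_next = 0.0 if s2 == N_STATES - 1 else w @ phi(s2)
            td_error = r + GAMMA * v_next - w @ phi(s)  # TD error
            w += ALPHA * td_error * phi(s)              # gradient step
            s = s2
    return w

w_tab = td0(phi_tabular, N_STATES)
w_agg = td0(phi_aggregated, 4)
print("tabular values:   ", np.round(w_tab, 2))
print("aggregated values:",
      np.round([w_agg[s * 4 // N_STATES] for s in range(N_STATES)], 2))
```

With the one-hot features, `w @ phi(s)` is exactly a table lookup `w[s]`, while the aggregated features force groups of states to share a single estimate; this is the sense in which the three classes form a hierarchy of linear approximators.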
