Open Access
Rule-base reduction in Fuzzy Rule Interpolation-based Q-learning
Author(s) - Dávid Vincze, Szilveszter Kovács
Publication year - 2015
Publication title - Recent Innovations in Mechatronics
Language(s) - English
Resource type - Journals
ISSN - 2064-9622
DOI - 10.17667/riim.2015.1-2/10
Subject(s) - fuzzy rule, base (topology), computer science, rule based system, interpolation (computer graphics), reduction (mathematics), knowledge base, artificial intelligence, fuzzy logic, data mining, algorithm, fuzzy set, mathematics, motion (physics), mathematical analysis, geometry
The method called Fuzzy Rule Interpolation-based Q-learning (FRIQ-learning for short) uses a fuzzy rule interpolation (FRI) method as the reasoning engine within Q-learning. This method was introduced previously by the authors, together with a rule-base construction extension for FRIQ-learning, which can build the requested FRI fuzzy model from scratch in a reduced size by following an incremental creation strategy. The rule-base created this way will most probably contain rules which were significant only during the construction process but have no important role in the final rule-base. There can also be rules which became redundant, i.e., their conclusions can be reproduced by fuzzy rule interpolation from other rules in the finished rule-base. The goal of the paper is to introduce methods which automatically find and remove such redundant and unnecessary rules from the rule-base, using variations of newly developed decremental rule-base reduction strategies. The paper also includes an application example demonstrating the applicability of the methods on a well-known reinforcement learning benchmark: the cart-pole simulation.
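The decremental reduction idea summarised in the abstract can be sketched in a few lines: temporarily leave a rule out, and make the removal permanent if fuzzy rule interpolation over the remaining rules reproduces the removed rule's conclusion within a tolerance. The Python snippet below is only an illustrative sketch of this redundancy test, not the authors' FRIQ-learning implementation: fri_infer is a crude inverse-distance stand-in for a real FRI method, and the rule and function names are hypothetical.

```python
# Hypothetical sketch of a greedy decremental rule-base reduction step.
# "fri_infer" is a crude inverse-distance stand-in, not a real FRI method.
from typing import List, Tuple

Rule = Tuple[Tuple[float, ...], float]  # (antecedent point, consequent Q-value)


def fri_infer(rules: List[Rule], x: Tuple[float, ...]) -> float:
    """Interpolate a conclusion for x from the rule base (placeholder FRI)."""
    num = den = 0.0
    for antecedent, consequent in rules:
        dist = sum((a - b) ** 2 for a, b in zip(antecedent, x)) ** 0.5
        if dist == 0.0:
            return consequent               # antecedent matches exactly
        weight = 1.0 / dist                 # inverse-distance weighting
        num += weight * consequent
        den += weight
    return num / den


def reduce_rule_base(rules: List[Rule], tolerance: float) -> List[Rule]:
    """Drop every rule whose conclusion the remaining rules reproduce
    within `tolerance` via interpolation (redundancy check only)."""
    kept = list(rules)
    for rule in list(kept):
        remaining = [r for r in kept if r is not rule]
        if not remaining:
            break
        antecedent, consequent = rule
        if abs(fri_infer(remaining, antecedent) - consequent) <= tolerance:
            kept = remaining                # rule is redundant, remove it
    return kept


if __name__ == "__main__":
    # Toy 1-D rule base: the middle rule is interpolatable from its
    # neighbours, so the reduction removes it and keeps the other two.
    rule_base = [((0.0,), 0.0), ((0.5,), 0.5), ((1.0,), 1.0)]
    print(len(reduce_rule_base(rule_base, tolerance=0.05)))  # prints 2
```

A reduction strategy in the spirit of the paper would presumably also verify that the performance of the learned policy (e.g., in the cart-pole simulation) is preserved after each removal; that check is omitted here for brevity.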
