Open Access
Reinforcement Learning of Fuzzy Control Rules with Context-Specific Segmentation of Actions
Author(s) -
Hideki Yamagishi,
Hiroshi Kawakami,
Tadashi Horiuchi,
Osamu Katai
Publication year - 2002
Publication title -
Journal of Advanced Computational Intelligence and Intelligent Informatics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.172
H-Index - 20
eISSN - 1343-0130
pISSN - 1883-8014
DOI - 10.20965/jaciii.2002.p0019
Subject(s) - computer science, reinforcement learning, heuristics, artificial intelligence, machine learning, action selection, inference
Knowledge acquisition mainly follows two approaches: deriving general or abstract rules from human expertise, such as heuristics about target systems, and refining them with further information; or extracting proper rules from experimental information, i.e., the rewards and penalties obtained over all of the alternative rules initially prepared, which is our approach. Reinforcement learning methods are applied to problems where meaningful input/output sets cannot be specified beforehand. There are, however, few algorithms that extract heuristics for action selection from the results of reinforcement learning. We propose a way to apply symbolic processing methods such as C4.5 to the results of reinforcement learning into which fuzzy inference has been incorporated. We also derive a compact action decision tree in which the conditions for proper agent actions are effectively integrated and simplified.
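The pipeline the abstract describes — learn action values by reinforcement, then compress the learned policy into symbolic condition/action rules — can be sketched on a toy task. The code below is an illustrative assumption, not the paper's method: it uses plain tabular Q-learning on a 1-D chain, and a simple interval-merging step stands in for the fuzzy inference and C4.5 tree induction the authors actually use; the task, names, and parameters are all hypothetical.

```python
# Illustrative sketch only: tabular Q-learning on a toy 1-D chain, followed
# by compression of the greedy policy into interval rules. The merge step is
# a crude analogue of the paper's decision-tree simplification, not C4.5.
import random

N_STATES = 6                 # positions 0..5; the goal is state 5
ACTIONS = ["left", "right"]

def step(state, action):
    """Toy dynamics: move along the chain; reward 1 only at the goal."""
    nxt = state + 1 if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def greedy(q, s, rng):
    """Greedy action with random tie-breaking."""
    best = max(q[(s, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if q[(s, a)] == best])

def q_learning(episodes=300, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < 200:      # step cap keeps episodes finite
            a = rng.choice(ACTIONS) if rng.random() < eps else greedy(q, s, rng)
            s2, r, done = step(s, a)
            target = r + gamma * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s, t = s2, t + 1
    return q

def extract_rules(q):
    """Merge adjacent states sharing the same greedy action into one
    condition -> action rule (a stand-in for tree simplification)."""
    rng = random.Random(1)
    policy = [greedy(q, s, rng) for s in range(N_STATES - 1)]
    rules, start = [], 0
    for s in range(1, len(policy) + 1):
        if s == len(policy) or policy[s] != policy[start]:
            rules.append((start, s - 1, policy[start]))
            start = s
    return rules

if __name__ == "__main__":
    q = q_learning()
    for lo, hi, action in extract_rules(q):
        print(f"if {lo} <= state <= {hi}: action = '{action}'")
```

On this toy chain, the greedy policy eventually prefers "right" in every state, so the extracted rules collapse to a single interval condition — the same kind of integration and simplification of action conditions the abstract attributes to the derived decision tree.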
