Visualization of Learning Process in “State and Action” Space Using Self-Organizing Maps
Author(s) -
Akira Notsu,
Yuichi Hattori,
Seiki Ubukata,
Katsuhiro Honda
Publication year - 2016
Publication title -
Journal of Advanced Computational Intelligence and Intelligent Informatics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.172
H-Index - 20
eISSN - 1343-0130
pISSN - 1883-8014
DOI - 10.20965/jaciii.2016.p0983
Subject(s) - reinforcement learning , computer science , process (computing) , self organizing map , artificial intelligence , action (physics) , unsupervised learning , visualization , state space , transfer of learning , machine learning , action learning , cooperative learning , artificial neural network , teaching method , mathematics , physics , quantum mechanics , operating system , mathematics education
In reinforcement learning, agents learn appropriate actions for each situation from the consequences of their actions while interacting with the environment. Reinforcement learning is compatible with self-organizing maps, which accomplish unsupervised learning by responding to input stimuli and strengthening the winning neurons. Numerous studies have therefore investigated reinforcement learning in which agents learn the state space using self-organizing maps. In this study, with a view toward transfer learning and the visualization of the human learning process, we introduced self-organizing maps into reinforcement learning and made the agent’s “state and action” learning process visible. Numerical experiments on a 2D goal-search problem show that our model visualizes the learning process of the agent.
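The combination the abstract describes can be illustrated with a minimal sketch: a self-organizing map discretizes a continuous 2D state space into prototype nodes, and Q-learning runs over those nodes; plotting the nodes colored by their greedy action then visualizes the learned state-action structure. This is not the paper's implementation — the grid size, learning rates, reward shaping, and goal-search layout below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): a 5x5 SOM over the unit square.
GRID = 5
nodes = rng.random((GRID * GRID, 2))          # SOM node weights = prototype states
coords = np.array([(i, j) for i in range(GRID) for j in range(GRID)])
ACTIONS = np.array([[0.1, 0], [-0.1, 0], [0, 0.1], [0, -0.1]])  # right/left/up/down
Q = np.zeros((GRID * GRID, len(ACTIONS)))     # Q-table indexed by winning SOM node

GOAL = np.array([0.9, 0.9])
alpha, gamma, eps = 0.5, 0.9, 0.2             # Q-learning rate, discount, exploration
som_lr, sigma = 0.3, 1.0                      # SOM learning rate, neighborhood width

def bmu(s):
    """Index of the best-matching unit (closest prototype) for state s."""
    return int(np.argmin(np.sum((nodes - s) ** 2, axis=1)))

def som_update(s):
    """Move the BMU and its grid neighbors toward s (classic SOM rule)."""
    w = bmu(s)
    d2 = np.sum((coords - coords[w]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian neighborhood on the grid
    nodes[:] += som_lr * h[:, None] * (s - nodes)
    return w

for episode in range(300):
    s = rng.random(2) * 0.2                   # start near the origin
    for step in range(50):
        k = som_update(s)                     # discretize state via the SOM
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[k]))
        s2 = np.clip(s + ACTIONS[a], 0, 1)
        done = np.linalg.norm(s2 - GOAL) < 0.15
        r = 1.0 if done else -0.01            # small step cost, reward at the goal
        k2 = bmu(s2)
        Q[k, a] += alpha * (r + gamma * (0.0 if done else Q[k2].max()) - Q[k, a])
        s = s2
        if done:
            break

# Because the SOM nodes live on a 2D grid, scattering `nodes` colored by
# np.argmax(Q, axis=1) gives a direct picture of the learned state-action map.
```

Since each node carries both a position (its weight vector) and a greedy action, the map itself becomes the visualization: as training proceeds, nodes spread over visited states and their preferred actions organize toward the goal.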