GENERAL GAME‐PLAYING AND REINFORCEMENT LEARNING
Author(s) - Levinson, Robert
Publication year - 1996
Publication title - Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/j.1467-8640.1996.tb00257.x
Subject(s) - computer science , artificial intelligence , machine learning , reinforcement learning , heuristic search , graph-theoretic representation , general game playing , theoretical computer science
This paper provides a blueprint for the development of a fully domain‐independent single‐agent and multiagent heuristic search system. It gives a graph‐theoretic representation of search problems based on conceptual graphs and outlines two different learning systems. One, an “informed learner”, makes use of the graph‐theoretic definition of a search problem or game in playing and adapting to a game in the given environment. The other, a “blind learner”, is not given access to the rules of a domain but must discover and then exploit the underlying mathematical structure of that domain. Relevant work of others is referenced within the context of the blueprint. To illustrate further how one might go about creating general game‐playing agents, we show how we can generalize the understanding obtained with the Morph chess system to all games involving the interactions of abstract mathematical relations. A monitor for such domains has been developed, along with an implementation of a blind and informed learning system known as Morph II. Performance results with Morph II are preliminary but encouraging and provide a few more data points with which to understand and evaluate the blueprint.
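The abstract's mention of adapting pattern-based evaluations from experience, as in the Morph chess system, can be illustrated with a minimal temporal-difference-style sketch. The pattern labels, the `td_update` function, and the exact update rule below are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (assumed, not from the paper) of a TD(0)-style weight
# update over board patterns, in the spirit of a Morph-like learner.
from typing import Dict, List


def evaluate(patterns: List[str], weights: Dict[str, float]) -> float:
    """Score a position as the sum of the weights of the patterns it contains."""
    return sum(weights.get(p, 0.0) for p in patterns)


def td_update(weights: Dict[str, float],
              prev_patterns: List[str],
              next_value: float,
              alpha: float = 0.1) -> None:
    """Nudge the weights of the previous position's patterns toward the
    value observed for the successor position (bootstrapped target)."""
    error = next_value - evaluate(prev_patterns, weights)
    for p in prev_patterns:
        weights[p] = weights.get(p, 0.0) + alpha * error


# Usage: after a move, adjust the patterns seen in the preceding position.
weights: Dict[str, float] = {}
before = ["rook-on-open-file", "king-exposed"]  # hypothetical pattern labels
td_update(weights, before, next_value=0.4)      # 0.4: evaluation of the successor
print(weights)
```

In this reading, an "informed learner" could derive its patterns from the game's graph-theoretic definition, while a "blind learner" would have to induce them from observed play.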
