Open Access
On Monte Carlo Tree Search and Reinforcement Learning
Author(s) - Tom Vodopivec, Spyridon Samothrakis, Branko Šter
Publication year - 2017
Publication title - Journal of Artificial Intelligence Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.79
H-Index - 123
eISSN - 1943-5037
pISSN - 1076-9757
DOI - 10.1613/jair.5507
Subject(s) - monte carlo tree search , reinforcement learning , computer science , search tree , artificial intelligence , machine learning , search algorithm , monte carlo method , algorithm
Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved widespread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not been thoroughly studied yet. In this paper we re-examine in depth this close relation between the two fields; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, for which the traditional MCTS is only one of the variants. We confirm that planning methods inspired by RL in conjunction with online search demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.
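To make the "adaptation of RL semantics within tree search" concrete, here is a minimal illustrative sketch (not the authors' code): the standard MCTS backup, which averages simulation returns at each tree node, can be written as the generic RL incremental update V &lt;- V + alpha * (G - V). Choosing alpha = 1/n recovers the classic Monte Carlo average used by UCT, while a constant alpha gives a recency-weighted variant from the same family. The function name `backup` and its signature are assumptions for illustration only.

```python
def backup(value, count, ret, alpha=None):
    """One RL-style value update for a tree node after a simulation
    that produced return `ret`.

    alpha=None  -> step size 1/count, i.e. classic MCTS averaging;
    constant alpha in (0, 1] -> exponential recency weighting,
    a different member of the same update family.
    """
    count += 1
    step = (1.0 / count) if alpha is None else alpha
    value += step * (ret - value)  # V <- V + alpha * (G - V)
    return value, count

# With alpha = 1/n the node value equals the mean of observed returns.
v, n = 0.0, 0
for g in [1.0, 0.0, 1.0, 1.0]:
    v, n = backup(v, n, g)
# v == 0.75, the arithmetic mean of the four simulation returns
```

The point of the sketch is that the familiar visit-count average is just one setting of the step-size parameter; varying it (or adding bootstrapping, as in TD methods) yields the broader space of algorithms the paper explores.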
