Generalisation over Details: The Unsuitability of Supervised Backpropagation Networks for Tetris
Author(s) -
Ian Lewis,
Sebastian L. Beswick
Publication year - 2015
Publication title -
Advances in Artificial Neural Systems
Language(s) - English
Resource type - Journals
eISSN - 1687-7608
pISSN - 1687-7594
DOI - 10.1155/2015/157983
Subject(s) - generalization , computer science , backpropagation , artificial neural network , artificial intelligence , machine learning , variety (cybernetics) , deep learning , supervised learning , mathematics , mathematical analysis
We demonstrate the unsuitability of Artificial Neural Networks (ANNs) for the game of Tetris and show that their great strength, namely their ability to generalize, is the ultimate cause. This work describes a variety of attempts at applying the supervised learning approach to Tetris and demonstrates that these approaches resoundingly fail to reach the level of performance of hand-crafted Tetris-solving algorithms. We examine the reasons behind this failure and also present some interesting auxiliary results. We show that training a separate network for each Tetris piece tends to outperform training a single network for all pieces; that training with randomly generated rows tends to increase the performance of the networks; and that networks trained on smaller board widths and then extended to play on bigger boards failed to show any evidence of learning. We conclude that ANNs trained via supervised learning are ultimately ill-suited to Tetris.
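The supervised setup described in the abstract can be illustrated with a minimal sketch: a small backpropagation network trained to score Tetris board states against a hand-crafted "teacher" label. This is not the paper's actual architecture or training data; the board size, the hidden-layer width, and the aggregate-column-height heuristic used as the supervision signal are all illustrative assumptions.

```python
import numpy as np

WIDTH, HEIGHT = 6, 10          # illustrative board dimensions, not the paper's
rng = np.random.default_rng(0)

def random_board():
    """A board with a few randomly filled rows near the bottom."""
    board = np.zeros((HEIGHT, WIDTH))
    fill = int(rng.integers(0, HEIGHT // 2))
    if fill:
        board[HEIGHT - fill:, :] = rng.integers(0, 2, (fill, WIDTH))
    return board

def teacher_score(board):
    """Hand-crafted label: negated, normalized aggregate column height."""
    filled = board > 0
    first = np.where(filled.any(axis=0), filled.argmax(axis=0), HEIGHT)
    heights = HEIGHT - first
    return -heights.sum() / (WIDTH * HEIGHT)

class ScoreNet:
    """One-hidden-layer MLP trained by plain backpropagation (MSE loss)."""
    def __init__(self, n_in, n_hidden=32):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def train_step(self, X, y, lr=0.1):
        err = self.forward(X) - y[:, None]
        loss = float(np.mean(err ** 2))
        # Backpropagate through the output and hidden layers.
        dW2 = self.h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)   # tanh derivative
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        self.W2 -= lr * dW2; self.b2 -= lr * db2
        self.W1 -= lr * dW1; self.b1 -= lr * db1
        return loss

# Supervised training on heuristic-labelled random boards.
boards = [random_board() for _ in range(200)]
X = np.array([b.ravel() for b in boards])
y = np.array([teacher_score(b) for b in boards])

net = ScoreNet(WIDTH * HEIGHT)
losses = [net.train_step(X, y) for _ in range(200)]
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The sketch learns to imitate the fixed heuristic on boards it has seen; the paper's point is that this kind of generalizing function approximator still fails to match hand-crafted Tetris algorithms in actual play.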