INTRODUCTION TO THE SPECIAL ISSUE ON GAMES: STRUCTURE AND LEARNING
Author(s) - Barney Pell, Susan L. Epstein, Robert Levinson
Publication year - 1996
Publication title - Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/j.1467-8640.1996.tb00249.x
Subject(s) - library science , citation , research center , computer science , medicine , pathology
The universality of strategy games suggests that they reflect some basic insights into the nature of human intelligence. Turing cited rudimentary chess reasoning as a hallmark of AI, and some of the earliest significant research was on chess and checkers. Today, computer game playing continues to be a central concern because it addresses the fundamental issues in AI: knowledge representation, search, planning, and learning. Moreover, the multi-agent nature of games makes game playing an excellent forum for addressing core issues such as contingency planning and reasoning about the actions and plans of other agents, while competition between programs and with humans provides useful benchmarks and encourages progress.

This special issue highlights recent innovative work by a broad spectrum of researchers and practitioners. There are at least three very different approaches to computer game playing: a high-performance determination to play better than any human, a cognitively oriented exploration of learning and behavior, and a mathematical theory of heuristics and game playing. The competition and cooperation among these approaches drive exciting and significant work whose results extend to many other problems with large search spaces.

The traditional AI approach to game playing relies upon fast, deep search to look ahead from the current game state to all possible ways to complete the contest. For difficult games, there are so many alternatives that only a fragment of the future possibilities can be considered. Therefore, move selection must also rely upon a human-designed evaluation function to estimate the worth of game states prior to the end of a contest. This technique is classically supplemented by an extensive catalog of expert openings (an opening book) and precomputed solutions to simple endgame positions (an endgame database).
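The search-plus-evaluation scheme described above can be sketched as a depth-limited minimax: search ahead a fixed number of plies, then fall back on a heuristic evaluation at the frontier. This is a minimal illustrative sketch on a toy game, not the implementation of any system discussed in this issue; the state representation, move generator, and evaluation function here are all hypothetical stand-ins.

```python
def minimax(state, depth, maximizing, successors, evaluate):
    """Depth-limited minimax: explore `depth` plies ahead, then
    estimate frontier states with the heuristic `evaluate`."""
    children = successors(state)
    if depth == 0 or not children:          # depth limit or terminal state
        return evaluate(state)
    values = [minimax(c, depth - 1, not maximizing, successors, evaluate)
              for c in children]
    return max(values) if maximizing else min(values)

# Toy game (hypothetical): states are integers, each move adds 1 or 2,
# and the game ends at 10. The evaluation is just the state's value,
# standing in for a hand-crafted weighted sum of features.
successors = lambda s: [s + 1, s + 2] if s < 10 else []
evaluate = lambda s: s

# Move selection: pick the child of the root with the best backed-up value.
best_move = max(successors(0),
                key=lambda c: minimax(c, 3, False, successors, evaluate))
```

In a real game-playing program, the same skeleton is typically extended with alpha-beta pruning, an opening book consulted before searching, and an endgame database consulted once few pieces remain.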
Because this approach tends to rely on raw computing power to compensate for a lack of knowledge or selectivity in search, these methods are often referred to as the "brute-force" approach. By many standards, the brute-force approach has been remarkably successful. Carefully engineered, deep-searching computers now dominate all but a few humans in a number of challenging games, including chess, checkers, and Othello. The surprising success of the engineering approach on these games has prompted researchers in other fields to seek similar search-intensive solutions to their problems, including theorem proving and natural language processing (see Marsland's discussion in (Levinson et al. 1991)). The brute-force approach to games has its limitations, however. First, where this approach is applicable, considerable engineering effort is required to achieve success. This effort manifests itself in highly efficient, special-purpose representations (sometimes with game-specific hardware (Ebeling 1986)), fine-tuned evaluation functions with hand-crafted features
