Integrated Learning: Controlling Explanation
Author(s) -
Michael Lebowitz
Publication year - 1986
Publication title -
Cognitive Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.498
H-Index - 114
eISSN - 1551-6709
pISSN - 0364-0213
DOI - 10.1207/s15516709cog1002_5
Subject(s) - computer science, artificial intelligence, machine learning, generalization, similarity, predictability, instance based learning, unsupervised learning
Similarity‐based learning, which involves largely structural comparisons of instances, and explanation‐based learning, a knowledge‐intensive method for analyzing instances to build generalized schemata, are two major inductive learning techniques in use in Artificial Intelligence. In this paper, we propose a combination of the two methods—applying explanation‐based techniques during the course of similarity‐based learning. For domains lacking detailed explanatory rules, this combination can achieve the power of explanation‐based learning without some of the computational problems that can otherwise arise. We show how the ideas of predictability and interest can be particularly valuable in this context. We include an example of the computer program UNIMEM applying explanation to a generalization formed using similarity‐based methods.
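The combination the abstract describes can be illustrated with a minimal sketch: first form a generalization by structural comparison of instances (similarity-based), then use domain rules to retain only the features that can be explained as predictive (explanation-based filtering). The feature names, instances, and the single rule below are hypothetical illustrations, not taken from UNIMEM or the paper itself.

```python
# Hypothetical sketch: similarity-based generalization followed by an
# explanation-based filtering pass. All data and rules here are invented
# for illustration; UNIMEM's actual representations differ.

def similarity_based_generalize(instances):
    """Keep the feature-value pairs shared by every instance."""
    shared = dict(instances[0])
    for inst in instances[1:]:
        shared = {k: v for k, v in shared.items() if inst.get(k) == v}
    return shared

def explanation_filter(generalization, rules):
    """Retain only features that some domain rule explains as predictive.

    Each rule is (premises, conclusion): if all premise features appear in
    the generalization along with the conclusion, those features are
    considered explained; unexplained (coincidental) features are dropped.
    """
    explained = set()
    for premises, conclusion in rules:
        if premises <= set(generalization) and conclusion in generalization:
            explained |= premises | {conclusion}
    return {k: v for k, v in generalization.items() if k in explained}

# Two instances sharing a coincidental regularity ("color") alongside
# an explainable one ("wings" predicting "flies").
instances = [
    {"wings": True, "flies": True, "color": "red"},
    {"wings": True, "flies": True, "color": "red"},
]
rules = [({"wings"}, "flies")]  # hypothetical domain rule

gen = similarity_based_generalize(instances)
confirmed = explanation_filter(gen, rules)
print(confirmed)  # the shared "color" feature is dropped as unexplained
```

In this sketch the similarity stage keeps every shared feature, including the coincidental `color`, while the explanation stage confirms only the features a rule can account for, which is the division of labor the paper proposes.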
