An inquiry into computer understanding
Author(s) - Peter Cheeseman
Publication year - 1988
Publication title - Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/j.1467-8640.1988.tb00091.x
Subject(s) - commonsense reasoning , inference , computer science , artificial intelligence , frequentist inference , proposition , non monotonic logic , probabilistic logic network , bayesian inference , commonsense knowledge , rule of inference , natural language processing , bayesian probability , semantics (computer science) , description logic , epistemology , knowledge representation and reasoning , programming language , philosophy , autoepistemic logic , multimodal logic
This essay addresses a number of issues centered around the question of what is the best method for representing and reasoning about common sense (sometimes called plausible inference). Drew McDermott has shown that a direct translation of commonsense reasoning into logical form leads to insurmountable difficulties, from which he concluded that we must resort to procedural ad hocery. This paper shows that the difficulties McDermott described result from insisting on logic as the language of commonsense reasoning. If, instead, (Bayesian) probability is used, none of the technical difficulties found in using logic arise. For example, in probability the problem of referential opacity cannot occur, and nonmonotonic logics (which McDermott showed don't work anyway) are not necessary. The difficulties in applying logic to the real world are shown to arise from the limitations of the truth semantics built into logic; probability substitutes the more reasonable notion of belief. In Bayesian inference, many pieces of evidence are combined to give an overall measure of belief in a proposition. This is much closer to commonsense patterns of thought than long chains of logical inference leading to true conclusions. It is also shown that English expressions of the form “IF A THEN B” are best interpreted as conditional probabilities rather than as universally quantified expressions. Bayesian inference is applied to a simple example of linguistic information to illustrate the potential of this type of inference for AI. This example also shows how to deal with vague information, which has so far been the province of fuzzy logic. It is further shown that Bayesian inference gives a theoretical basis for inductive inference that is borne out in practice. Rather than insisting that probability is the best language for commonsense reasoning, a major point of this essay is to show that real inference is a complex interaction between probability, logic, and other formal representation and reasoning systems.
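
The evidence-combination pattern the abstract describes can be made concrete. Below is a minimal Python sketch, not taken from the paper: it applies Bayes' rule twice to fold two conditionally independent pieces of evidence into a single degree of belief in a proposition H. The function name and all numeric values are illustrative assumptions.

    def update(prior, p_e_given_h, p_e_given_not_h):
        """Posterior P(H|e) from the prior P(H) via Bayes' rule."""
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

    belief = 0.5                       # prior P(H) before any evidence
    belief = update(belief, 0.9, 0.2)  # evidence e1: P(e1|H)=0.9, P(e1|~H)=0.2
    belief = update(belief, 0.7, 0.4)  # evidence e2: P(e2|H)=0.7, P(e2|~H)=0.4
    print(f"P(H | e1, e2) = {belief:.3f}")  # prints 0.887

Sequential updating is valid here because e1 and e2 are assumed conditionally independent given H; the contrast with a logical chain is that each step shifts a degree of belief rather than having to preserve truth.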
