KYBURG AND VOLKSWAGENS
Author(s) - Loui, Ronald
Publication year - 1994
Publication title - Computational Intelligence
Language(s) - English
Resource type - Journals
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/j.1467-8640.1994.tb00152.x
My understanding of probability is Kyburg's. I believe the reference-class problem is the most interesting problem in probabilistic reasoning (machine learning take note!). I believe that probability should mediate the fit of scientific theory to observational data when predictive power and error must be traded against each other (plan recognition beware!). I believe the Bayesians are corrupt, or bankrupt, or in any case up to no good; I believe that we accept statements, as firmly as I believe anything. My problem with Kyburg has to do with computational practice. What are the implications of his philosophy for people who are building programs? What difference will it make when uncertain and inductive reasoning is just a component? Inexpressive language, resource-limited computation, and poor modeling of preference are surely larger concerns for the knowledge engineer. Is Kyburg suggesting something that will pay new dividends in artificial intelligence, or just a better metaphor?

I think Kyburg's essay is merely an arcane description of current practice; it is an apology for how we actually do things in our epistemological lives, unbeknownst to ourselves. It may be too painful for Bayesians and others to acknowledge, but it is what we already do, at least when we are rational. Elsewhere, Harman has been more successful at arguing why acceptance might make computational sense.

A Bayesian knowledge engineer will condition on statements that are contingent, hence not knowable with certainty, hence accepted. This is because we choose the input to the programs, and we usually choose a high level of abstraction: we choose not to model uncertainty at the level of robotic sensor inputs (and most Bayesian inference engines are connected to diagnosis programs, not to robots). Perhaps robots and other image interpretation programs can be pure Bayesians and avoid acceptance altogether. The computational effort is immense when there is unwillingness to build abstractions. This explains why Cheeseman can remain a pure Bayesian: he is happy to crunch huge data sets. The rest of the Bayesians, Pearl, for example, admit that they accept contingent sentences and will continue to do so whether or not Kyburg notices.

Identifying reference classes is something that knowledge engineers don't much worry about, since they do not often work directly with sample data. But consider training connectionist networks on data sets and then using forward propagation to predict some property for new inputs. Shallow applications of connectionism of this kind abound: predicting bad weather, components of speech, control of motion, handwritten characters. A criterion of similarity must be determined, whether by the magic of back propagation or by the light of reasoned methods. The criterion must trade similarity against the desire to bring as much of the past to bear as possible. Consider, too, case-based reasoning in a legal or problem-solving domain. Ashley and Rissland's "Waiting on Weighting: A Symbolic Approach to Least Commitment" perfectly describes the non-Bayesian alternative Kyburg advocates: prediction need not be the result of weighting all past experience; some past experience might be excluded and "participate" only with a weight of zero. This is the same issue raised by reference classes (a minimal sketch of the contrast follows below). Kyburg's own methods are worth studying, implementing, and applying.
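A minimal Python sketch may make that contrast concrete. Everything in it is invented for illustration (the cases, the toy similarity criterion, the threshold, and the function names); it is not Kyburg's method, nor Ashley and Rissland's system, only the difference between letting every past case participate with some weight and excluding some cases with a weight of zero.

```python
# Illustrative sketch only: "weight all past experience" versus
# "exclude some experience entirely" (weight of zero), the issue the
# commentary ties to reference classes. Data and threshold are invented.

def similarity(a, b):
    """Toy criterion: fraction of shared feature values."""
    return sum(a[k] == b[k] for k in a) / len(a)

def weighted_prediction(cases, query):
    """Every past case participates, weighted by similarity to the query."""
    weights = [similarity(c["features"], query) for c in cases]
    total = sum(weights)
    return sum(w * c["outcome"] for w, c in zip(weights, cases)) / total

def reference_class_prediction(cases, query, threshold=0.75):
    """Pick a reference class by a similarity criterion; cases outside it
    get weight zero and do not participate at all."""
    ref_class = [c for c in cases
                 if similarity(c["features"], query) >= threshold]
    if not ref_class:
        ref_class = cases  # fall back to the broadest class
    return sum(c["outcome"] for c in ref_class) / len(ref_class)

# Invented example data (the Volkswagens are only a nod to the title).
cases = [
    {"features": {"make": "VW",   "year": "old"}, "outcome": 1.0},
    {"features": {"make": "VW",   "year": "new"}, "outcome": 0.6},
    {"features": {"make": "Ford", "year": "old"}, "outcome": 0.2},
]
query = {"make": "VW", "year": "old"}

print(weighted_prediction(cases, query))          # 0.7
print(reference_class_prediction(cases, query))   # 1.0
```

On the toy data the two strategies disagree (0.7 versus 1.0), which is the reference-class question in miniature: how much of the past should bear on the prediction, and at what weight.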