Open Access
Application of Implicit Knowledge: Deterministic or Probabilistic?
Author(s) - Zoltán Dienes, Andreas Kurz, Regina Bernhaupt, Josef Perner
Publication year - 1997
Publication title - Psychologica Belgica
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.511
H-Index - 33
eISSN - 2054-670X
pISSN - 0033-2879
DOI - 10.5334/pb.910
Subject(s) - probabilistic logic , matching (statistics) , psychology , implicit learning , probability theory , artificial intelligence , cognitive psychology , computer science , mathematics , statistics , cognition , neuroscience
This paper distinguishes two models specifying the application of implicit knowledge. According to one model, originally suggested by Reber (1967), subjects either apply sufficient knowledge to always produce a correct response or else they guess randomly (High Threshold Theory: subjects only apply knowledge when there is sufficient knowledge to exceed a threshold ensuring a correct response); according to the other model, suggested by Dienes (1992), subjects respond with a certain probability towards each item, where the probability is determined by the match between the item's structure and the induced constraints about the structure (Probability Matching Theory: subjects match their probability of responding against their personal probability that the item belongs to a certain category). One-parameter versions of both models were specified and then tested against the data generated from three artificial grammar learning experiments. Neither theory could account for all features of the data, and extensions of the theories are suggested.

Dienes and Berry (1997) argued that there is widespread agreement about the existence of an important learning mechanism, pervasive in its effects and producing knowledge by unconscious associative processes, knowledge about which the subject does not directly have metaknowledge. Let us call the mechanism implicit learning, and the induced knowledge implicit knowledge. This paper will address the question of how implicit knowledge is applied. Imagine an animal learning about which stimuli constitute food. Maybe its mother brings back different plants or animals for it to eat, and stops it from ingesting other plants or animals, which may be dangerous or poisonous. The details of what features go together to constitute something edible may be more or less complex; in any case, the animal eventually learns, let us say perfectly, to seek out only the edible substances in its environment.
But before learning reaches this point, how should the animal behave towards stimuli about which it has only imperfect information, and when the mother is not around to provide corrective feedback? One strategy would be to ingest only those stimuli that the animal's induced knowledge unambiguously indicates are edible; this would prevent unfortunate fatalities. On this scenario, implicit knowledge may have evolved in general to direct action towards a stimulus only when the implicit knowledge relevant to the stimulus unambiguously indicates the category to which the stimulus belongs. Partial information may not even be made available for controlling behaviour, in case the animal uses the knowledge to sample poisonous substances. Thus, if the animal were forced to respond to some stimuli about which it had imperfect information, the animal would be reduced to random responding. We will call this the High Threshold Theory (HTT): metaphorically, the knowledge lies fallow until it exceeds a high threshold.

Now imagine an animal learning about which locations are likely to contain food. This scene differs from the last in that a wrong decision is not so likely to have catastrophic consequences. If the animal's knowledge merely suggests that a location contains food, it still may be worth sampling in preference to another. There are two different states of affairs to be distinguished in this case. First, different locations may have objectively different probabilities of containing food. The animal may come to have perfect knowledge of the different probabilities. It may seem rational for the animal always to choose the location with the highest probability, but this is not so if the probability structure of the environment is subject to fluctuations. Then it is beneficial to occasionally sample even low-probability locations, because doing so allows an assessment of whether the probability remains low.
Also, because other animals will in general be competing for the food, and there will be more competitors at higher-probability locations, it pays to forage sometimes at low-probability locations. In fact, animals do forage in different locations according to the probability of finding food there (Stephens & Krebs, 1986); that is, they probability match. Similarly, in predicting probabilistic events, people, like other animals, are known to probability match rather than respond deterministically. Reber and Millward (1968) asked people to passively observe binary events at a rate of two per second for two or three minutes. When people were then asked to predict successive events, they probability matched to a high degree of accuracy (to the second decimal place; see Reber, 1989, for a review).

Second, different locations may have imperfectly known probabilities of containing food. This is the case in which we are interested: how does the animal apply its imperfect knowledge? The knowledge may be imperfect because the locations have not yet been sampled frequently, or because a location is new and the animal does not know how to classify it. Consider an animal faced with choosing one of two locations. In the absence of certain knowledge about the objective probability structure, one strategy would be to search each location according to its estimated probability. For example, in the absence of any information, the animal would search each location with equal probability, and adjust the search probabilities as information was collected. Or, if features of the locations suggested different probabilities, search could start at those estimated probabilities. The true probabilities could be honed in on more quickly if the animal starts near their actual values. We will call the hypothesis that the animal responds to stimuli with a probability that matches the stimuli's expected probabilities Probability Matching Theory (PMT).
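The foraging strategy just described can be illustrated with a small simulation. The following sketch is a hypothetical two-location setup, not taken from the paper: the agent starts with uniform estimates, chooses each location with probability proportional to its current estimate (probability matching rather than maximizing), and updates the estimates from experience. All names and parameter values are illustrative.

```python
import random

def probability_match_choice(estimates, rng):
    """Choose an option with probability proportional to its estimated
    payoff probability (probability matching, not maximizing)."""
    r = rng.uniform(0, sum(estimates))
    cum = 0.0
    for i, p in enumerate(estimates):
        cum += p
        if r <= cum:
            return i
    return len(estimates) - 1

# Hypothetical foraging example: the true food probabilities are
# unknown to the forager, which starts with uniform estimates
# (Laplace pseudo-counts) and updates them as it samples.
true_p = [0.8, 0.2]   # objective probabilities (hidden from the agent)
found = [1, 1]        # pseudo-counts of successes per location
tried = [2, 2]        # pseudo-counts of visits per location

rng = random.Random(0)
choices = []
for _ in range(5000):
    estimates = [found[i] / tried[i] for i in range(2)]
    i = probability_match_choice(estimates, rng)
    choices.append(i)
    tried[i] += 1
    if rng.random() < true_p[i]:
        found[i] += 1

share_first = choices.count(0) / len(choices)
print(round(share_first, 2))  # long-run share of visits to the richer location, near 0.8
```

Because even the low-probability location keeps being sampled about 20% of the time, its estimate stays calibrated, which is exactly the advantage of matching over maximizing when the environment may fluctuate.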
If we allow probabilities to apply to singular events, then animals may respond probabilistically to stimuli for which there are no objective probabilities other than 0 or 1. For example, in classifying an edible stimulus as either palatable or unpalatable, the animal may ingest it with a probability determined by the likelihood that the stimulus is palatable. In this case, the probability of the animal ingesting a stimulus may be interpreted as the animal's 'personal probability' that the stimulus is edible.

We have outlined two hypotheses about the way in which implicit knowledge could be applied: HTT and PMT. It may be that implicit knowledge is applied in different ways according to context; for example, according to the cost of a wrong choice, as in the examples of ingesting stimuli and searching locations given above. Or it may be that evolution has, for economy, produced a mechanism with a single principle of application. We will investigate these two hypotheses, HTT and PMT, in one context: that of people implicitly learning artificial grammars. In that field, both hypotheses have been suggested as accounts of human performance.

In a typical artificial grammar learning experiment, subjects first memorize grammatical strings of letters generated by a finite-state grammar. Then they are informed of the existence of a complex set of rules constraining letter order (but not what the rules are), and are asked to classify grammatical and nongrammatical strings. In an initial study, Reber (1967) found that the more strings subjects had attempted to memorize, the easier it was to memorize novel grammatical strings, indicating that they had learned to utilize the structure of the grammar. Subjects could also classify novel strings significantly above chance (69%, where chance is 50%). Reber (e.g.
1989) argued that the knowledge was implicit because subjects could not adequately describe how they classified strings (see Dienes & Perner, 1996, and Dienes & Berry, 1997, for further arguments that the knowledge should be seen as implicit). Reber (1967, 1989) argued for a theory of implicit learning which combined HTT with the Gibsonian notion of veridical information pick-up. Specifically, Reber argued that implicit learning results in underlying representations that mirror the objective structures in the environment. In the case of artificial grammar learning, he argued that the status of a given test item would either be known (and the item would thus always be classified correctly), or the status was not known, and the subject guessed randomly. On this assumption, any knowledge the subject applied was always perfectly veridical; incorrect responses could only be based on failing to apply knowledge, rather than on applying incorrect knowledge. Reber tested these assumptions by testing each subject twice on each string without feedback. If the probability of a given subject applying perfect knowledge to the ith string is k_i, then the expected proportions of strings classified correctly twice, once, or not at all by that subject are given by the following equations:

the proportion of strings classified correctly twice = p(CC) = k + (1-k)*0.25
the proportion of strings classified correctly just once = p(CE) + p(EC) = (1-k)*0.5
the proportion of strings never classified correctly = p(EE) = (1-k)*0.25

where k is the average of the k_i. Under this model, the values of p(CE), p(EC) and p(EE) averaged across subjects should be statistically identical, and lower than p(CC). If p(EE) is greater than p(CE) or p(EC), this is evidence that subjects have induced rules that are not accurate reflections of the grammar; the incorrect rules lead to consistent misclassifications.
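The one-parameter HTT predictions can be computed directly. The following is a minimal sketch of the model as described above (the function name and the value of k are illustrative): with probability k a string is known and classified correctly on both tests; otherwise both classifications are independent coin flips.

```python
def htt_predictions(k):
    """One-parameter High Threshold Theory predictions for a subject
    classifying each string twice without feedback: with probability k
    perfect knowledge applies (correct both times); otherwise both
    responses are independent random guesses with p = .5 each.
    Returns (p(CC), p(CE) + p(EC), p(EE))."""
    p_cc = k + (1 - k) * 0.25   # known, or guessed correctly twice
    p_one = (1 - k) * 0.5       # guessed correctly exactly once
    p_ee = (1 - k) * 0.25       # guessed incorrectly twice
    return p_cc, p_one, p_ee

# For example, with k = 0.4 the guessing component spreads the
# remaining 0.6 of the strings as 0.15 / 0.30 / 0.15:
p_cc, p_one, p_ee = htt_predictions(0.4)
print(round(p_cc, 2), round(p_one, 2), round(p_ee, 2))  # 0.55 0.3 0.15
```

Note that p(EE) can never exceed p(CE) or p(EC) under this model for any k between 0 and 1, which is why an observed excess of double errors counts as evidence against pure HTT.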
Reber (1989) reviewed eight studies in which subjects in the learning phase were exposed to grammatical stimuli presented in a random order (so that the grammatical rules would not be salient), and in which subjects were tested twice on each string in the test phase. When subjects were asked to search for rules in the learning phase, p(EE) was on average .22 and the average of p(CE) and p(EC) was .13. That is, when subjects were asked to learn