The Road Not Taken: A Metaphor Not a Model (Response to Frawley)
Author(s) - James Stanlaw
Publication year - 2002
Publication title - Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/1467-8640.00176
William Frawley's "Control and Cross-Domain Mental Computation" is an attempt to establish a relationship between certain kinds of natural-language breakdown and the failures of computer programs and algorithms as they process code. Though this is an interesting and ambitious, perhaps even valiant, undertaking, I believe it ultimately fails. The problems are best seen if we explicitly articulate the assumptions that underpin this project, most of which Frawley does not state explicitly. It is a rather lengthy and detailed list, but I think it merits close scrutiny, as many of these premises seem questionable or spurious. Frawley seems to assume the following:

1. Mental processes depend on physical processes; that is, the brain (or the "mind," or mental activity) can be reduced to electro-chemical-neural activity.
2. Because many of these basic activities are digital (a neuron fires or it does not, for example), they must necessarily be computational; thus, the mind is computational.
3. If the mind is computational, it is formally computable.
4. If the mind is computable, then it is programmable.
5. If it is programmable, then it is programmed.
6. If it is programmed, then it is programmed in some sort of "mentalese" programming language.
7. If there is such a programming language, it must look something like a computer programming language.
8. Just as faults in a computer program can occur through hardware breakdowns or programming errors, so, too, can the mental programming language break down.
9. In computers, programming or hardware breakdowns will manifest themselves as faulty "runs" (presumably giving diagnostic error messages to programmers). Likewise, there can be programming or hardware breakdowns in the human computer, where these faulty "runs," too, have readily detectable diagnostics that clinicians (and others) can identify.
People with Williams syndrome, Turner syndrome, certain kinds of spina bifida, autism, and other similar disabilities show particular diagnostic linguistic deficiencies, as do some kinds of aphasics (such as those with SLI, "specific language impairment"). These two groups of patients demonstrate two different types of linguistic morphological behavior, and these differences reflect faults in the algorithms (or in their processing) within the mental programming structure. Such faults are analogous to (or perhaps even isomorphic with) problems in the management of data flow in structured computer programming languages. Algorithms, of which programs are made, are thought to be composed of two components: "control" (the flow of information between parts of the program) and "logic" (the management of data structures, presumably distinct from control). The linguistic deficiencies found in the people mentioned above correspond to deficiencies or breakdowns in mental programming: Williams syndrome and Turner syndrome patients reflect problems of algorithmic control; specific language impairment patients reflect problems of algorithmic logic.

Needless to say, this is a very long journey. Such a road is fraught with marauders, to say nothing of potholes, closed rest areas, and outdated maps. Though I cannot address every problem, I will now point out what I think are some detours and wrong turns. First, I believe the basic analogy Frawley is making in terms of brain modules and functions is incorrect. Following an older tradition in artificial intelligence, Frawley takes
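The control/logic decomposition invoked above echoes Robert Kowalski's slogan "Algorithm = Logic + Control." As a rough illustration only (a minimal Python sketch of my own, not anything from Frawley's paper), the same logical specification of a correct answer can be paired with different control strategies that compute it:

```python
# Illustrative sketch of the "logic vs. control" decomposition of an
# algorithm. Names and example are hypothetical, chosen for clarity.

def is_sorted_permutation(xs, ys):
    """LOGIC: what counts as a correct answer -- ys is an ordered
    rearrangement of xs. Says nothing about HOW to find ys."""
    same_elements = sorted(xs) == sorted(ys)
    ordered = all(a <= b for a, b in zip(ys, ys[1:]))
    return same_elements and ordered

def insertion_sort(xs):
    """CONTROL strategy 1: build the answer left to right,
    inserting each element into place."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def selection_sort(xs):
    """CONTROL strategy 2: repeatedly extract the minimum
    of what remains."""
    rest, out = list(xs), []
    while rest:
        m = min(rest)
        rest.remove(m)
        out.append(m)
    return out

# Both control strategies satisfy the same logic component.
data = [3, 1, 2]
assert is_sorted_permutation(data, insertion_sort(data))
assert is_sorted_permutation(data, selection_sort(data))
```

On this picture, a "logic" fault would corrupt the specification itself, while a "control" fault would disrupt the order and flow of its execution, which is the kind of distinction Frawley wants to map onto the two groups of patients.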
