We must distinguish between what can be described or interpreted as X and what really is X. Otherwise we are just doing hermeneutics. It won't do simply to declare that the thermostat turns on the furnace because it feels cold or that the chess-playing computer program makes a move because it thinks it should get its queen out early. In what do real feeling and thinking consist?
Perhaps it's still best to start with the prototypic case from which the descriptors originated: Each of us knows exactly what it feels like to think or to feel. Doing "that" is what the terms mean, literally.
Now, after their initial baptism on feeling and thinking in their usual sense, these terms could in principle turn out to have picked out a natural kind, of which human subjective feeling and thinking were merely one special case. That is a scientific question. If the answer is affirmative, then (1) there must exist some objective correlates of the subjective states of feeling and thinking that have important lawful regularities which can be used to predict and explain things we are interested in predicting and explaining, and (2) it must be in some way useful to continue calling these objective correlates of feeling and thinking "feeling" and "thinking" rather than something else, even when there is no way to determine whether they are accompanied by their correlated subjective states, or even when they occur entirely independently of any subjective states.
We may all have views on the foregoing, and on how things are likely to turn out in the end, but it is important to note that at this time the requisite scientific regularities have not yet been found. Instead, theorists are conferring them on candidates "a priori". The various forms of "computationalism" are instances of this kind of pre-emptive conclusion. It is being declared, more or less by fiat, that "cognition" is computation. That having been declared, the theorist (or rather the hermeneuticist, under the circumstances) helps himself to the mentalistic interpretation, describing the computation in terms of what it "means," "refers to," or "thinks."
Let us see whether Dietrich is doing empirical theory here, or just hermeneutics: He distinguishes "computationalism" from "computerism" and "cognitivism." According to computerism, thinking is what a contemporary von Neumann-style implementation of a Turing Machine in the form of a digital computer does. It is accordingly not hard to disagree with computerism, for without strong further argument and evidence, it certainly seems unjustified to claim that a Vax thinks.
According to computationalism, to be thinking is to be computing certain functions. According to cognitivism, these functions are propositional ones. So cognitivism seems to be a special case of computationalism. Both seem to stand or fall on whether thinking is literally neither more nor less than computing certain functions -- for the cognitivist, only those computable functions that can be interpreted as sentences, for the general computationalist, some less restricted subset of the computable functions.
If computationalism is empirical (and correct) rather than just hermeneutical, then its converse must be true too: Features of human thinking (such as consciousness, voluntariness, and what Searle 1980 has called "intrinsic meaning") that are not shared by the requisite kinds of computations are not essential to thinking. Fair enough, but in valiantly renouncing, say, "intrinsic meaning" (i.e., the property underlying the fact that when I say or think of a bird, I really mean a bird, those real little things out there, with feathers and wings, whereas when a page of a book or a line of computer code or a state of a computer "says" -- i.e., betokens or instantiates in some form -- "bird," that only means bird because I interpret it to mean bird) in favor of mere "meaning" (i.e., the property underlying the fact that the words of a book or the symbols in a computer program or the states of a computer computing a function can be interpreted as meaning "bird"), Dietrich seems to have come close to boldly declaring that all he is doing is hermeneutics after all. He's saying there's no difference between being X and merely being interpretable as being X.
But if we make a radical change of domains, for example, and consider some standard examples of hermeneutics, unadorned by the mystique of the mental, we will see that Dietrich's is not the kind of concession a cognitive theorist ought to be eager to make: Consider theological hermeneutics and the difference between (a) the (literal) wafer and (b) the body of Christ that it can be interpreted as symbolizing; or Freudian hermeneutics, and the difference between (a) the (literal) finger you pointed or dreamt of and (b) the penis it can be interpreted as symbolizing; or astrological hermeneutics, and the difference between (a) the (literal) stellar configuration and (b) the financial good fortune it can be interpreted as portending for you. Here, where there is no persuasive intrinsic interpretation, no irresistible parasitism on a mental synonym to bias us (at least for those of us who are nonbelievers), the arbitrariness of the relation between each respective (a) and (b) is quite transparent.
Is it any less arbitrary in the case of the mentalistic interpretation of computation? Consider Dietrich's attempt to argue that the symbols manipulated by a "Lisp Virtual Machine" (LVM) have "a meaning for the LVM, not just us." A good general prophylactic against unwitting projections of meaning is Searle's (1980) [string of Chinese symbols] -- a talisman to remind ourselves of the real obstacle that any computational hermeneutics is tilting at -- the real spoiler for any claim that formal symbol manipulations alone, no matter how they are implemented, really mean what we interpret them to mean. All hermeneutic hand-waving is brought to a dead standstill by [the same string of Chinese symbols] (henceforth SSSS for short). It's useless to pull out a dictionary and say "But look, I can translate SSSS thus-and-so." As Searle has quite satisfactorily shown, such translations are parasitic on the meanings in the mind of the translator (or, as I prefer to put it, they are "ungrounded"; Harnad 1990). SSSS is as mindless as a dictionary entry or an IBM payroll calculation, whereas the meanings in our heads clearly are not.
Let us apply this prophylactic to Dietrich's actual text and see whether there's any meaning left over for the symbols to have "for the LVM" once we've exorcized the ghost of our own interpretations:
"Evaluating the expression ["SSSS"] requires the LVM to determine the value of the variable ["SSSS'"], which is some other Lisp expression, say, ["SSSS''"]. If the LVM could not do this, the expression would be syntactically ill-formed... But now notice, the LVM itself, in treating ["SSSS'"] as a variable, regardless of our interpretations (which are in fact quite different), is treating ["SSSS'"] as denoting the expression ["SSSS''"]. It follows from this that the LVM treats ["SSSS'"] as having a meaning..."
I submit that this passage, now duly denuded of any projected meaning, is less likely to leave us deluded as to what really is and is not going on here. In fact, it should remind us of what a passage in a technical textbook looks like before we have understood the technical terms, namely, a bunch of meaningless symbols. Hence the only thing that follows from Dietrich's passage is what it is literally saying: "The LVM treats ["SSSS'"] as ["SSSS''"] on the basis of a look-up" -- which is the quintessence of pure, meaningless symbol manipulation.
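The point can be made concrete with a toy sketch (my own illustration, not Dietrich's actual LVM): variable "evaluation" reduced to pure look-up among uninterpreted tokens. All token names and bindings below are hypothetical stand-ins for the SSSS's.

```python
# A minimal sketch (not Dietrich's LVM): "evaluating a variable" as
# look-up in a binding table of opaque tokens. Any meaning the tokens
# seem to have is projected by the reader, not used by the program.
environment = {
    "SSSS_1": "SSSS_2",   # a "variable" bound to another expression
    "SSSS_2": "SSSS_3",   # which is itself just another token
}

def evaluate(token, env):
    """Follow bindings until an unbound token is reached.
    Nothing here depends on what any token 'means'."""
    while token in env:
        token = env[token]
    return token

print(evaluate("SSSS_1", environment))  # SSSS_3
```

The look-up proceeds identically whether or not anyone interprets "SSSS_1" as denoting anything, which is the sense in which it is the quintessence of meaningless symbol manipulation.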
Now there is one important sense in which semantic interpretations of symbol systems are less arbitrary than, say, astrological interpretations of stellar solar systems: It is one of the defining features of a formal symbol system that we can give it a systematic interpretation (Fodor & Pylyshyn 1988). So a symbol system's amenability to systematic interpretation is not arbitrary; it's a necessary condition for being a symbol system in the first place. Yet, as we saw above, the symbol system itself does not contain its interpretation(s). It is simply systematically amenable to having the interpretation(s) "projected" onto it, so to speak. But the possibility (and for some, the unavoidability) of systematically manipulating symbols without projecting any interpretation at all onto them ought to alert us to the fact that the interpretation itself is an independent factor, extrinsic to the symbol system.
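The extrinsicness of the interpretation can likewise be sketched in a toy example (my own, not drawn from Dietrich): one and the same formal rule table is systematically amenable to two incompatible interpretations, neither of which is contained in the system itself. All names below are illustrative.

```python
# One formal system, two projected interpretations. The rules operate
# on token shapes only; the "readings" are supplied from outside.
rules = {("S0", "t"): "S1", ("S1", "t"): "S0"}  # shape-based transitions

def run(state, tokens):
    for tok in tokens:
        state = rules[(state, tok)]
    return state

final = run("S0", ["t", "t", "t"])

# Interpretation A: S0 = "even", S1 = "odd" (a parity counter)
# Interpretation B: S0 = "off", S1 = "on" (a toggle switch)
reading_a = {"S0": "even", "S1": "odd"}[final]
reading_b = {"S0": "off", "S1": "on"}[final]
print(final, reading_a, reading_b)  # S1 odd on
```

Both readings are fully systematic, yet nothing in `rules` or `run` favors either; that is the sense in which the interpretation is an independent, extrinsic factor.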
Except in our own case. Whatever kind of a system the substrate of our mind is, the interpretations of our symbols can't be mediated by someone else's mind. Yet the conclusion Dietrich draws is different:
To sum up, I have shown that... computers are not merely formal symbol manipulators because they "look up" the values of variables, and anything capable of doing this is also not merely a formal symbol manipulator... If looking up the values of variables captured the notion of intentionality satisfactorily, then we could see why intentionality would be important for cognition...

But, as I have shown, if you refrain from projecting an interpretation onto what they look up, it becomes obvious that the only thing being looked up is other meaningless symbols [SSSS]. Hence look-up does not capture meaning.
Dietrich adds a qualification:
Of course... the LVM is not conscious of the meaning ["SSSS'"] has. Nevertheless, the LVM's manipulations depend on the fact that ["SSSS'"] has a meaning, and indeed, on the meaning it has.

But in reality the manipulations depend only on the formal rules for manipulating meaningless symbols on the basis of their shapes. The fact that the goings-on are systematically amenable to a semantic interpretation by me does not entail that they "depend on" or "use" that semantic interpretation.
There is also some ambiguity about what it is that computers are actually doing:
[W]e cannot view computers as merely formal symbol manipulators... to understand their behavior we must interpret their states... a computational explanation is interpretive... [and] relative to what theorists understand and find satisfactory... a computational explanation of a system's behavior REQUIRES attributing semantic contents to the system's states... interpreting the inputs, outputs and states of a system as elements in the domains and ranges of a series of functions... ascribing contents is necessary for understanding which function is being computed...

First, in a symbol system the formal symbol manipulation must "by definition" be semantically interpretable. No one has used or said much about uninterpretable symbol manipulation. Second, the requirements of the theorist who is trying to interpret a formal symbol system are not the same as the requirements of a theorist who is trying to explain cognition (otherwise every algebraist is a cognitive scientist). To understand either books or formal symbol manipulators (the latter being exactly what computers are), we do indeed need to interpret what they are saying or doing, but to understand in what our understanding consists is a job of another order of magnitude -- and a different subject. These two distinct tasks should be neither confused nor prejudged as one and the same. Unfortunately, the claim that cognition is computation seems to do both, and is in any case contradicted by the fact that, unlike in cognition, in computation the interpretation is clearly not part of the system, as is apparent when we block the temptation to project any of our own extrinsic meanings onto the symbols by sticking faithfully to "SSSS": What's left is an uninterpreted formal system, governed by syntactic rules only, executable mindlessly by either a person or a computer.
A proliferation of fine mentalistic distinctions with little functional basis can be a burden for both sides of the question of whether or not cognition is computation:
Intentionality is sometimes defined as the property of mental states to be about things...[or] the property of a system to understand its own representations... [or] consciously understanding the world around it. [Yet] intentionality is supposed to be different from consciousness or conscious understanding...

All the more reason for not making distinctions that over-reach our grasp. Until there is evidence to the contrary, there is no reason to believe that being able to think and being able to feel are separable. In fact, the former may well be grounded in the latter (Harnad 1987). Hence there seems little justification for assuming that there can be "intentionality" independent of consciousness. Computationalism has in any case not shown this, but simply assumed it, and in a particularly weak form, with "intentionality" reduced to mere "interpretability." The "mental" seems to have been left behind long ago.
If intentionality were just another word for consciousness, most of us could at least agree that it exists, though we still would not know what it is, or what it is for... [I]t seems as if consciousness, not a separate notion of intentionality, is doing the real work in [the objection that computers don't have intentionality].
Yet it won't do to reply:
no one has shown that intentionality is crucial for cognition... Searle has succeeded in showing that it is useless... I can assure you that I do not understand my own symbols... [and that] computers can introspect.

Searle hasn't shown that intentionality is useless, just that it isn't computation. I don't understand what it means to say that one doesn't understand one's own symbols (except when one is doing, say, mathematics mindlessly); I think there is some confusion here with the fact that there are a lot of unconscious processes going on in my head to which I do not have introspective access. It does not follow from the fact that my thinking has a huge unconscious substrate that thinking is therefore unconscious, or independent of my capacity for consciousness. And I certainly can't imagine what it means to say that computers can introspect.
Dietrich rightly points out that
computationalism is incompatible with the notion that humans have any special semantic properties in virtue of which their thoughts are about things in their environment [or that] humans make willful decisions... Humans do not choose, they merely compute.

As to the former, one must alas reply: so much the worse for computationalism. As to the latter, one can agree that humans do not choose (Harnad 1982a) without agreeing that therefore they only compute (Harnad 1982b).
And the obvious answer to the observation that
If intentionality is only a semantic property, it is virtually ubiquitous. If intentionality is variable binding and lookup, then it is quite common. If intentionality is consciousness, etc., then it is quite rare.

is that it would appear that it's quite rare, then; and that it isn't computation.
If not computation, then what?
[A] computational explanation differs from a causal law which describes the causal state changes of a system... Computers are as "causally embedded" in the world as humans are [but] the causal connections of referring terms have no role in the computational strategy... which function a system is computing is the only matter of importance...
Since the computer's causal embedding in the world is irrelevant to the interpretation that "SSSS" refers to, say, a bird, and since there seems to be nothing in symbol manipulation itself to ground that connection, this again sounds like evidence against the thesis that thinking is computation. There are, after all, functional alternatives. Some processes just aren't computational: Heating isn't; flying isn't; transduction isn't. Why should thinking be? All four will certainly be computationally simulable in a way that can be systematically interpreted as heating, flying, transducing and thinking, but if that's clearly not the real thing in the first three cases, why should it be in the fourth? Perhaps the way to break out of the hermeneutic circle in which pure computation is trapped is to ground some of the "SSSS"s bottom-up in the functional capacity of a hybrid nonsymbolic/symbolic system (Harnad 1987, 1989) to pick out the things in the world that the "SSSS"s stand for. Cognition could well turn out to be more like transduction than like computation. This would certainly go some way toward distinguishing between what can be described or interpreted as meaning X and what really is meaning X.
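The contrast between symbol-to-symbol look-up and bottom-up grounding can be put in a toy sketch (my own illustration, not an implementation from Harnad 1987 or 1989): a symbol is bound to a nonsymbolic detector that picks out its referents, rather than to yet more symbols. The detector, its threshold behavior, and its "sensory" inputs are all hypothetical stand-ins.

```python
# Toy contrast with pure look-up: here "bird" is bound not to another
# symbol but to a stand-in for a transducer/classifier over (mock)
# sensory input. The feature pair below is an arbitrary placeholder.
def bird_detector(sensory_vector):
    # stand-in for a nonsymbolic sensorimotor capacity
    has_feathers, has_wings = sensory_vector
    return has_feathers and has_wings

grounded_lexicon = {"bird": bird_detector}  # symbol -> detection capacity

sample = (True, True)   # hypothetical sensory input
print(grounded_lexicon["bird"](sample))  # True
```

Of course a toy classifier is itself still interpretable rather than intrinsically meaningful; the sketch only illustrates the shape of the hybrid proposal, in which some symbols connect to the world through nonsymbolic capacities rather than through other symbols.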
2. "Cognizes" is, I take it, just a synonym for "thinks," so "cognition" means "thinking"; "feels" is rarer, so I will set it aside for now, to touch on it again briefly at the end of this commentary. "Intentionality," an unfortunate superadded piece of terminology, refers to that property of mental states such as thinking in virtue of which they are "about" something. For the purposes of this commentary, it will turn out to be necessary to distinguish whether something is really about something or merely interpretable as being about something.
3. To those with an aversion to formalism, or an incapacity to grasp its intended interpretation, that's what such passages continue to look like indefinitely, even after efforts to understand them. Indeed, some people seem unable to do mathematics as anything but meaningless symbol manipulation. To put it another way, they cannot understand it; they can only manipulate the symbols "as if" they understood them.
4. And Peano's system (though not completely computable) has it just as surely as any other formal system does. This already suggests that using the property of systematic interpretability alone, computationalists would be hard put to specify which computable functions were not cognitive, and why not.
5. Though not all the way, as the other-minds problem is there to ensure that there will always be a degree of uncertainty about whether a system really has a mind or merely acts in a way that is interpretable as if it had a mind -- a degree of uncertainty over and above the usual underdetermination of scientific theories by the data supporting them (Harnad, in preparation).
Harnad, S. (1982a) Consciousness: An Afterthought. Cognition and Brain Theory 5: 29 - 47.
Harnad, S. (1982b) Neoconstructivism: A unifying theme for the cognitive sciences. In T. Simon & R. Scholes (Eds.) Language, Mind and Brain. Hillsdale, NJ: Erlbaum Associates.
Harnad, S. (1987) (Ed.) Categorical Perception: The Groundwork of Cognition. Cambridge University Press.
Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1: 5 - 25.
Harnad, S. (1990) The Symbol Grounding Problem. Physica D (in press)
Harnad, S. (in preparation) Other Bodies, Other Minds: A Modern Machine Incarnation of an Old Philosophical Problem.
Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain Sciences 3: 417 - 457.