" DST="" -->
Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): (in press)
Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications; reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is just an ungrounded symbol system; no matter how closely it matches the properties of what is being modelled, it matches them only formally, with the mediation of an interpretation. Synthetic life is not open to this objection, but it is still an open question how close a functional equivalence is needed in order to capture life. Close enough to fool the Blind Watchmaker is probably close enough, but would that require molecular indistinguishability, and if so, do we really need to go that far?
Keywords: Computationalism, evolution, functionalism, reverse engineering, robotics, symbol grounding, synthetic life, virtual life, Turing test.
In Harnad (1993a) I argued that there was a fundamental difference between virtual and synthetic life, and that whereas there is no reason to doubt that a synthetic system could really be alive, there IS reason to doubt that a virtual one could be. For the purposes of this inaugural issue of "Artificial Life", I will first recapitulate the argument against virtual life (so as to elicit future discussion in these pages) and then I will consider some obstacles to synthetic life.
First: What is it to be "really alive"? I'm certainly not going to be able to answer this question here, but I can suggest one thing it's not: It's not a matter of satisfying a definition, at least not at this time, for such a definition would have to be preceded by a true theory of life, which we do not yet have. It's also not a matter of arbitrary stipulation, because some things, like plants and animals, are indeed alive, and others, like stones and carbon atoms, are not. Nor, by the same token, is everything alive, or nothing alive (unless the future theory of life turns out to reveal that there is nothing unique to the things we call living that distinguishes them from the things we call nonliving). On the other hand, the intuition we have that there is something it is like to be alive -- the animism that I suggested was lurking in vitalism (Harnad 1993a) -- may be wrong. And it would be a good thing too, if it turned out to be wrong, for otherwise the problem of life would inherit the mind/body problem (Nagel 1974, 1986). More about this shortly.
Here's a quick heuristic criterion for what's really alive (though it certainly doesn't represent a necessary or sufficient condition): Chances are that whatever could slip by the Blind Watchmaker across evolutionary generations undetected is alive (Harnad 1993c). What I mean is that whatever living creatures are, they are what has successfully passed through the dynamic Darwinian filter that has shaped the biosphere. So if there are candidates that can commingle among the living indistinguishably (to evolution, if not to some other clever but artificial gadget we might use to single out imposters), then it would be rather arbitrary to deny they were alive. Or rather, lacking a theory of life, we'd be hard-put to say what it was that they weren't, if they were indeed not alive, though adaptively indistinguishable from things that were alive.
This already suggests that life must have something to do with functional properties: the functional properties we call adaptive, even though we don't yet know what those are. We don't know, but the Blind Watchmaker presumably knows; or rather, whatever it is that He cannot Know can't be essential to life. Let me be more concrete: If there were an autonomous, synthetic species (or a synthetic subset of a natural species) whose individuals were either man-made or machine-made -- which pretty well exhausts the options for "synthetic" vs. "natural," I should think -- yet could eat and be eaten by natural species, and could survive and reproduce amongst them, then someone might have a basis for saying that these synthetic creatures were not natural, but not for saying that they were not alive, surely.
We might have the intuition that those ecologically indistinguishable synthetic creatures differed from living ones in some essential way, but without a theory of life we could not say what that difference might be. Indeed, if it turned out that all natural life was without exception based on left-handed proteins and that these synthetic creatures were made of right-handed proteins (which in reality would block any viable prey/predator relation, but let's set that aside for now as if it were possible), even that would fail to provide a basis for denying that they were alive. So invariant correlates of natural life do not rule out synthetic life.
The animism lurking in our intuitions about life does suggest something else that might be missing in these synthetic creatures, namely, a mind, someone at home in there, actually being alive (Harnad 1982b). At best, however, this would make them mindless Zombies, but not lifeless ones -- unless of course mind and life do swing together, in which case we would fall back into the mind/body problem, or, more specifically, its other incarnation, the "other minds" problem (Harnad 1991). For the Blind Watchmaker is no more a mind-reader than we are; hence neither He nor we could ever know whether or not a creature was a Zombie (since the Zombie is functionally indistinguishable from its mindful counterpart). So if life and mind swing together, the question of what life is is empirically undecidable, and a synthetic candidate fares no better or worse than a functionally equivalent natural one.
All this has been about synthetic life, however, and synthetic life of a highly life-like order: capable of interacting adaptively with the biosphere. What about virtual life? Virtual life, let's not mince words, is computational life, and computation is the manipulation of formal symbols based on rules that operate on the shapes of those symbols. Not just any manipulations, to be sure; the ones of interest are the ones that can be systematically interpreted: as numerical calculations, as logical deductions, as chess moves, answers to questions, solutions to problems. What is critical to computation is that even though the symbols are systematically interpretable as meaning something (numbers, propositions, chess positions), their shape is arbitrary with respect to their meaning, and it is only on these arbitrary shapes that the rules operate. Hence computation is purely syntactic; what is manipulated is symbolic code. The code is interpretable by us as meaning something, but that meaning is not "in" the symbol system any more than the meaning of the words in a book is in the book. The meaning is in the heads of the interpreters and users of the symbol system.
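To make this concrete, here is a minimal sketch (my own illustration, not part of the original argument) of a pure symbol system: two rewrite rules that operate only on the shapes of strings of "|" and "+". Under an interpretation that we project onto it, the system computes unary addition; in itself it merely shuffles and erases marks.

```python
# A toy pure symbol system: rules that operate only on the *shapes* of
# strings of '|' and '+'. The rules and representation are my own, for
# illustration; nothing in the system refers to numbers.

RULES = [
    ("|+", "+|"),   # shuffle one stroke across the '+' sign
    ("+", ""),      # once no stroke stands to its left, erase the '+'
]

def rewrite(s):
    """Apply the first applicable rule once, purely by shape matching."""
    for lhs, rhs in RULES:
        if lhs in s:
            return s.replace(lhs, rhs, 1)
    return s

def run(s):
    """Keep rewriting until no rule applies (a purely mechanical procedure)."""
    while True:
        nxt = rewrite(s)
        if nxt == s:
            return s
        s = nxt

# "||+|||" is interpretable by us as 2 + 3; the system only sees shapes.
print(run("||+|||"))   # prints "|||||", which we read as 5
```

Nothing in the rules mentions numbers; "2 + 3" and "5" are our readings of the shapes, projected from without, exactly as described above.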
This is not to minimize the significance and power of formal symbol manipulation, for that power is in fact the power of computation as it has been formalized by the fathers of modern computational theory (Turing, Goedel, Church, von Neumann; see Boolos & Jeffrey 1980). According to the Church-Turing Thesis, computation, in the sense of formal symbol manipulation, captures what it is that a mathematician means intuitively by a mechanical procedure. So far, every formalization of this notion has turned out to be equivalent. A natural generalization of the Church-Turing Thesis to the physical world is that every physically realizable system is formally equivalent to a symbol system (at least in the case of discrete physical systems, and to as close an approximation as one wishes in the case of continuous physical systems).
What all this means is that formal symbol manipulation is no mean matter. It covers a vast territory, both mathematical and physical. The only point at which it runs into some difficulty is when it is proposed as a candidate for what is going on in the mind of the interpreter or the user of the symbol system. It is when we thus suppose that cognition itself is computation that we run into a problem of infinite regress that I have dubbed "the symbol grounding problem" (Harnad 1990a). For whatever it might be that is really going on in my head when I think, my thoughts certainly don't mean what they mean merely because they are interpretable as so meaning by you or anyone else. Unlike the words in a static book or even the code dynamically implemented in a computer, my thoughts mean what they mean autonomously, independently of any external interpretation that can be or is made of them. The meanings in a pure symbol system, in contrast, are ungrounded, as are the meanings of the symbols in a Chinese/Chinese dictionary, symbols that, be they ever so systematically interpretable, are useless to someone who does not already know Chinese, for all one can do with such a dictionary is to pass systematically from one arbitrary, meaningless symbol to another: Systematically interpretable to a Chinese speaker, but intrinsically meaningless in itself, the symbol system neither contains nor leads to what it is interpretable as meaning. The meaning must be projected onto it from without.
The arbitrariness of the shapes of the symbols -- the shape of the code -- and the fact that computational algorithms, the rules for manipulating these meaningless symbols, can be described completely independently of their physical implementation, were exploited by the philosopher John Searle (1980) in his celebrated "Chinese Room Argument" against the hypothesis that cognition is just computation:
Computation is implementation-independent; the details of its physical realization are irrelevant. Every implementation of the same formal symbol system is performing the same computation and hence must have every purely computational property that the symbol system has. Searle accordingly pointed out that the hypothesis that cognition is just computation can only be sustained at the cost of being prepared to believe that a computer program that can pass the Turing Test (Turing 1964) in Chinese -- i.e. correspond for a lifetime as a pen-pal indistinguishably from a real pen pal -- would understand Chinese even though Searle, implementing exactly the same program, would not. The source of the illusion on which Searle had put his finger was the systematic interpretability of the symbol system itself: Given that the symbols can bear the weight of a systematic interpretation, it is very hard for us to resist the seductiveness of the interpretation itself, once it is projected onto the system. More specifically, once we see that the symbols are interpretable as meaningful messages from a pen pal, it is hard to see one's way out of the hermeneutic hall of mirrors this creates, in which the interpretation keeps sustaining and confirming itself over and over: We keep seeing the reflected light of the interpretation that we ourselves have projected onto the system (Harnad 1990b, c).
Searle simply reminded us that in reality all we have in this case is a systematically interpretable set of symbols. We would not mistake a computer simulation of, say, a forest-fire -- i.e., a virtual forest fire: a set of symbols and symbol manipulations that were systematically interpretable as trees, burning -- for a real forest fire because, among other things, the symbol system would lack one of the essential properties of a real forest fire: heat. Even a full-blown virtual-world simulation of a forest-fire, one that used transducers to simulate the sight, heat, sound and smell of a forest fire would not be called a forest fire (once the true source of the stimulation was made known to us), because "computer-generated forest-fire stimuli to human senses" on the one hand and "forest fires" on the other are clearly not the same thing. In the case of the "virtual pen pal," in contrast, there was nothing to curb the fantasies awakened by the systematic interpretability, so we were prepared to believe that there was really someone home in the (implemented) symbol system, understanding us. It required Searle, as yet another physical implementation of the same symbol system, to point out that this too was all done with mirrors, and that there was no one in there understanding Chinese in either case.
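As a concrete illustration (a sketch of my own, not anything from the text above), a virtual forest fire in the purely symbolic sense can be as little as a grid of characters and an update rule that we interpret as trees igniting their neighbours; there is manifestly no heat anywhere in it:

```python
# A "virtual forest fire": a grid of symbols ('T' tree, '*' burning, '.' ash)
# with an update rule systematically interpretable as fire spreading from
# tree to tree. Nothing in it is hot. (Minimal sketch, not from the article.)

SIZE = 8

def step(grid):
    new = [row[:] for row in grid]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] == '*':
                new[r][c] = '.'          # a "burning" cell becomes "ash"
            elif grid[r][c] == 'T':
                neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if any(0 <= i < SIZE and 0 <= j < SIZE and grid[i][j] == '*'
                       for i, j in neighbours):
                    new[r][c] = '*'      # a "tree" next to "fire" "ignites"
    return new

grid = [['T'] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = '*'         # "ignite" the centre cell

for _ in range(4):
    print('\n'.join(''.join(row) for row in grid), end='\n\n')
    grid = step(grid)
```

The "spreading" is entirely in our interpretation of the changing symbols; the system itself could just as well be read as anything else with the same formal structure.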
Between the purely symbolic forest fire and the one supplemented by "virtual-world" transducers traducing our senses, however, are important differences that are pertinent to the difference between the virtual and the synthetic (Harnad 1989). The hypothesis that Searle attacked, stated in full, would be: cognition is only computation, i.e. just implementation-independent symbol manipulation. Transduction, of course, is not just implementation-independent symbol manipulation. In this case, however, when the transduction is being driven by a symbol system and used only to fool our senses, the objection is the same: A real forest fire is clearly not the same as either (1) a pure symbol system systematically interpretable as if it were a forest fire (a virtual forest fire), or (2) a symbol system driving transducers in such a way as to give the sensory impression of a forest fire (a virtual-world forest fire). A real forest fire is something that happens to real trees, in the real woods. Although a real forest fire may be contained, so as not to incinerate the whole earth, there is in a real sense no barrier between it and the rest of the real world. There is something essentially interactive ("nonadiabatic") about it, "situated" as it is, in the real world of which it is a part. A real forest fire is not, in short, an ungrounded symbol system, whereas that is precisely what a virtual forest fire is.
Now any implementation of a virtual forest fire -- whether a purely symbolic one, consisting of interpretable code alone, or a hybrid "virtual-worlds" implementation, consisting of a symbol system plus sensorimotor transducers that generate the illusion of a forest fire to the human senses -- is of course also a part of the real world, but it is immediately obvious that it is the wrong part. To put it in the terms that were already used earlier in this paper, the purely symbolic virtual forest fire may be equivalent to a real forest fire to our intellects, when mediated by the interpretation occurring in our brains, and the hybrid sensory simulation may be its equivalent to our senses, when mediated by the perception occurring in our brains, but in the world (the only world there is), neither of these virtual forest fires is functionally equivalent to a real forest fire. Indeed, virtual forest fires are truly "adiabatic": they are incapable of spreading to the world, indeed of affecting the world in any way qua fire (as opposed to food for thought or sop for the senses).
I write all this out long-hand, but of course there is no "Artificial Fire" Movement, some of whose adherents are arguing that virtual forest fires are really burning. It is simply obvious that real forest fires and virtual ones are radically different kinds of things, and that the kind of thing a virtual forest fire is in reality, setting aside interpretations, whether symbolic or sensory, is a symbol system that is capable of having a certain effect on a human mind. A real forest fire too can have an effect on a human mind (perhaps even the same effect), but that's not all a real forest fire is, nor is that its essential property, which has nothing to do with minds.
What about synthetic forest fires? Well, synthetic trees -- the functional equivalents of trees, man-made or machine-made, possibly out of a different kind of stuff -- might be possible, as discussed earlier in the case of synthetic creatures. Synthetic fire is harder to conceive: some other kind of combustive process perhaps? I don't know enough physics to be able to say whether this makes sense, but it's clearly a question about physics that we're asking: whether there is a physical process that is functionally equivalent to ordinary fire. I suspect not, but if there is, and it can be engineered by people and machines, let's call that synthetic fire. I see no reason for denying that, if it were indeed functionally indistinguishable from fire, such synthetic fire would be a form of real fire. The critical property would be its functional equivalence to fire in the real world.
So that would be a synthetic forest fire, functionally equivalent to a real one in the world. In the case of the virtual forest fire, another form of equivalence is the one usually invoked, and it too is sometimes called "functional," but more often it is referred to, more accurately, as computational, formal, or Turing equivalence (Boolos & Jeffrey 1980). This is really an instance of the physical version of the Church-Turing Thesis mentioned earlier: Every physical system can be simulated by -- i.e., is formally equivalent to -- a symbol system. The relationship is not merely illusory, however, for the computer simulation, formally capturing, as it does, the functional principles of the real system that it is computationally equivalent to, can help us understand the latter's physical as well as its functional properties. Indeed, in principle, a virtual system could teach us everything we need to know in order to build a synthetic system in the world or to understand the causal properties of a natural system. What we must not forget, however, is that the virtual system is not the real system, synthetic or natural, and in particular -- as in the case of the virtual forest fire as well as the virtual pen pal -- it lacks the essential properties of the real system (in the one case, burning, and in the other, understanding).
The virtual system is, in other words, a kind of "oracle" (as I dubbed it in Harnad 1993a), being systematically interpretable as if it were the real thing because it is computationally equivalent to the real thing. Hence the functional properties of the real thing should have symbolic counterparts in the simulation, and they should be predictable and even implementable (as a synthetic system) on the basis of a translation of the formal model into the physical structures and processes it is simulating (Harnad 1982a). The only mistake is to think that the virtual system is an instance of the real thing, rather than what it really is, namely, a symbol system that is systematically interpretable as if it were the real thing.
Chris Langton was making an unwitting appeal to the hermeneutic hall of mirrors a few years ago at a robotics meeting in Flanders (Harnad 1993e) when he invited me to suppose that, in principle, all the initial conditions of the biosphere at the time of the "primal soup" could be encoded, along with the requisite evolutionary algorithms, so that, in real or virtual time, the system could then evolve life exactly as it had evolved on earth: unicellular organisms, multicellular organisms, invertebrates, mammals, primates, humans, and then eventually even Chris and me, having that very conversation (and perhaps even fast-forwardable to decades later, when one of us would have convinced the other of the reality or unreality of virtual life, as the case may be). If I could accept that all of this was possible in principle (as I did and do), so that not one property of real life failed to be systematically mirrored in this grand virtual system, how could I, Chris asked, continue to insist that it wasn't really alive? For whatever I claimed the crucial difference might be, on the basis of which I would affirm that one was alive and the other not, could not the virtual version capture that difference too? Isn't that what Turing Indistinguishability and computational equivalence guarantee?
The answer is that the virtual system could not capture the critical (indeed the essential) difference between real and virtual life, which is that the virtual system is and always will be just a dynamical implementation of an implementation-independent symbol system that is systematically interpretable as if it were alive. Like a highly realistic, indeed oracular book, but a book nonetheless, it consists only of symbols that are systematically construable (by us) as meaning a lot of true and accurate things, but without those meanings actually being in the symbol system: They are merely projected onto it by us, and that projected interpretation is then sustained by the accuracy with which the system has captured formally the physical properties it is modelling. This is not true of the real biosphere, which really is what I can systematically interpret it as being, entirely independent of me or my interpretation.
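To fix intuitions about what such a virtual evolution would actually consist of, here is a drastically scaled-down sketch (my own toy, nothing like the full encoding of the primal soup that Langton envisages): bit strings passed through a selection filter that we interpret as the Blind Watchmaker. Every "organism," "mutation" and "generation" in it is a symbolic state, interpretable as life but no more alive than the virtual forest fire is hot.

```python
import random

# A drastically scaled-down "virtual evolution": bit-string "organisms"
# mutate and pass through a selection filter interpretable as the Blind
# Watchmaker. Every property here is symbolic, merely interpretable as
# fitness, survival, reproduction. (My own toy; not Langton's proposal.)

random.seed(0)
GENOME_LEN, POP, GENERATIONS = 16, 20, 30

def fitness(genome):
    # interpretable as "adaptedness"; formally, just the count of 1-bits
    return sum(genome)

def mutate(genome):
    return [bit ^ (random.random() < 0.05) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)                 # the "Darwinian filter"
    survivors = population[:POP // 2]                          # the fitter half "survives"
    population = survivors + [mutate(g) for g in survivors]    # and "reproduces"

best = max(population, key=fitness)
print("fittest 'organism':", ''.join(map(str, best)), "fitness:", fitness(best))
```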
What makes it so unnecessary to point out this essential distinction in the case of a virtual forest fire, which no one would claim was really burning, yet so necessary in the case of virtual life, to which some people want to attribute more than meets the eye, again arises from something that Artificial Life has in common with Artificial Mind: The essential property each is concerned with (being alive and having a mind, respectively) is unobservable in both cases, either to the human senses or to measuring instruments. So this leaves our fantasy unconstrained when it infers that a virtual system that is systematically interpretable as if it were living (or thinking) really is living (or thinking). This temptation does not arise with virtual forest fires or virtual solar systems because it is observable that they are not really burning or moving (Harnad 1993d).
There clearly is an unobservable essence to having a mind (one whose presence each of us is aware of in his own case, but in no other, in knowing at first hand that one is not a Zombie), but is there a corresponding unobservable essence to being alive? I think not. There is no elan vital, and whatever intuition we have that there is one is probably parasitic on intuitions about having a mind. So what we are projecting onto virtual life -- what we are really saying when we say that virtual creatures are really alive -- is probably the same thing we are projecting onto virtual mind when we believe there's really someone home in there, thinking, understanding, meaning, etc. And when we're wrong about it, we are probably wrong for the same reason in both cases, namely, that we have gotten trapped in the hermeneutic circle in interpreting an ungrounded symbol system (Hayes et al. 1992).
Can there be a grounded symbol system? The answer will bring us back to the topic of synthetic life, about which I had promised to say more. And here again there will be a suggestive convergence and a possible divergence between the study of artificial life and the study of artificial mind: One way out of the hermeneutic circle in mind-modelling is to move from symbolic modelling to hybrid analog/symbolic modelling (Harnad et al. 1991, 1994), and from the pen-pal version of the Turing Test (TT or T2) (Turing 1964; Harnad 1992b) to the robotic version (the Total Turing Test, T3). To remove the external interpreter from the loop, the robot's internal symbols and symbol manipulations must be grounded directly in the robot's autonomous capacity to discriminate, categorize, manipulate, and describe the objects, features, events and states of affairs in the world that those symbols are interpretable as being about (Harnad 1987, 1992a, 1993b). T2 called for a system that was indistinguishable from us in its symbolic (i.e. linguistic) capacities. T3 calls for this too, but it further requires indistinguishability in all of our robotic capacities: in other words, total indistinguishability in external (i.e. behavioral) function (I will consider indistinguishability in both external and internal [i.e. neural] function, T4, shortly).
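Schematically (and only schematically; the class, the detectors and the feature tests below are my own illustration, not an implementation of the cited hybrid models), grounding means that at least the elementary symbols are connected to sensorimotor category detectors that can pick out their referents, so that composite symbols like "zebra" ("striped horse") inherit their grounding rather than depending on an external interpreter.

```python
from typing import Callable, Dict, List

# A schematic sketch of symbol grounding (all names are mine, for
# illustration): elementary symbols are connected to detectors operating
# on "sensory" input, so that what the symbols are about is picked out
# by the system itself rather than by an external interpreter.

class GroundedSymbolSystem:
    def __init__(self):
        self.detectors: Dict[str, Callable[[List[float]], bool]] = {}

    def ground(self, symbol: str, detector: Callable[[List[float]], bool]):
        """Ground an elementary symbol in a sensorimotor category detector."""
        self.detectors[symbol] = detector

    def categorize(self, sensory_input: List[float]) -> List[str]:
        """Name the grounded categories that the (analog) input falls under."""
        return [s for s, d in self.detectors.items() if d(sensory_input)]

# toy "robot" with two grounded elementary symbols (hypothetical feature tests)
robot = GroundedSymbolSystem()
robot.ground("horse", lambda x: x[0] > 0.5)
robot.ground("striped", lambda x: x[1] > 0.5)

# a composite symbol ("zebra" = "striped horse") inherits its grounding
print(robot.categorize([0.9, 0.8]))   # -> ['horse', 'striped']
```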
A T3 system is grounded, because the connection between its internal symbols and what they are about is direct and unmediated by external interpretation. The grounding, however, is purchased at the price of no longer being a pure symbol system. A robotic mind would hence be a synthetic mind rather than a virtual one. There is of course still the possibility that the robot is a Zombie, and there are still ways to tighten the degrees of freedom still further: T4 would call for internal indistinguishability, right down to the last neuron and neurotransmitter. These could be synthetic neurons, of course, but they would have to be functionally indistinguishable from real ones.
My own guess is that if ungrounded T2 systems are underdetermined and hence open to overinterpretation, T4 systems are overdetermined and hence include physical and functional properties that may be irrelevant to cognition. I think T3 is just the right empirical filter for mind-modeling, because not only is it the one we use with one another, in our day-to-day solutions to the other-minds problem (we are neither mind-readers nor brain experts), but it is the same filter that shaped us phylogenetically: The Blind Watchmaker is no mind-reader either, and harks only to differences in adaptive function. So the likelihood that a T3 robot is a Zombie is about equal to the likelihood that we might ourselves have been Zombies.
Or is it? Let us not forget the "robotic" functions of sustenance, survival and reproduction. Are these not parts of our T3 capacity? Certainly a failure of any of them would be detectable to the Blind Watchmaker. A species that could not derive the energy needed to sustain itself or that failed to reproduce and maintain continuity across generations could not pass successfully through the Darwinian filter. And to be able to do that might turn out to call for nothing less than molecular continuity with the rest of the biosphere (cf. Morowitz 1992) -- in which case T4 alone would narrow the degrees of freedom sufficiently to let through only life/mind. And synthetic life of that order of functional indistinguishability from real life would have to have such a high degree of verisimilitude as to make its vitality virtually as certain as that of genetically engineered life.
Yet I am still betting on T3: The life-modeller's equivalent to the mind-modeller's T3 equivalence (lifelong robotic indistinguishability) is transgenerational ecological indistinguishability, and it is not yet clear that this would require molecular indistinguishability (T4). Certainly our models fall so far short of T3 right now that it seems safe to aim at the external equivalence without worrying unduly about the internal -- or at least to trust the exigencies of achieving external equivalence to pick out which internal functions might be pertinent, rather than to assume a priori that they all are.
That, at least, appears to be a reasonable first pass, methodologically speaking, as dictated by applying Occam's Razor to these two particular branches of inverse applied science: reverse cognitive engineering and reverse bioengineering, respectively. Ordinary forward engineering applies the laws of nature and the principles of engineering to the design and building of brand new systems with certain specified functional capacities that we find useful: bridges, furnaces, airplanes. Reverse engineering (Dennett, in press) must discover the functional principles of systems that have already been designed and built by nature -- plants, animals, people -- by attempting to design and build systems with equivalent functional capacities. Now in the case of natural living systems and natural thinking systems "life" (whatever that is) and "mind" (we all know what that is) seem to have "piggy-backed" on those functional capacities; it accordingly seems safe to assume they will also piggy-back on their synthetic counterparts (Harnad 1994).
The only point of uncertainty is whether external functional equivalence (T3) is a tight enough constraint to fix the degree of internal functional equivalence that ensures that life and mind will piggy-back on it, or whether internal functional equivalence (T4) must be captured right down to the last molecule. I'm betting on T3, in part because it is more readily attainable, and in part because even if it is not equivalence enough, we can never hope to be any the wiser.
Boolos, G. S. & R. C. Jeffrey (1980) Computability and Logic. (Cambridge, UK: Cambridge University Press).
Dennett, D.C. (in press) Cognitive Science as Reverse Engineering: Several Meanings of "Top Down" and "Bottom Up." In: Prawitz, D., Skyrms, B. & Westerstahl, D. (Eds.) Proceedings of the 9th International Congress of Logic, Methodology and Philosophy of Science. North Holland.
Harnad, S. (1982a) Neoconstructivism: A unifying theme for the cognitive sciences. In: Language, Mind and Brain. (T. Simon & R. Scholes, eds., Hillsdale NJ: Erlbaum), 1 - 11.
Harnad, S. (1982b) Consciousness: An afterthought. Cognition and Brain Theory 5: 29 - 47.
Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.
Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.
Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.
Harnad, S. (1990b) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism) Social Epistemology 4: 167-172.
Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321 - 327.
Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54.
Harnad, S. (1992a) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds) Connectionism in Context. Springer Verlag.
Harnad, S. (1992b) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October) 9 - 10.
Harnad, S. (1993a) Artificial Life: Synthetic Versus Virtual. Artificial Life III. Proceedings, Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI.
Harnad, S. (1993b) Grounding Symbols in the Analog World with Neural Nets. Think 2(1) 12 - 78 (Special issue on "Connectionism versus Symbolism," D.M.W. Powers & P.A. Flach, eds.).
Harnad, S. (1993c) Turing Indistinguishability and the Blind Watchmaker. Presented at Conference on "Evolution and the Human Sciences" London School of Economics Centre for the Philosophy of the Natural and Social Sciences 24 - 26 June 1993.
Harnad S. (1993d) Discussion (passim) In: Bock, G. & Marsh, J. (Eds.) Experimental and Theoretical Studies of Consciousness. CIBA Foundation Symposium 174. Chichester: Wiley
Harnad, S. (1993e) Grounding Symbolic Capacity in Robotic Capacity. In: Steels, L. and R. Brooks (eds.) The "Artificial Life" Route to "Artificial Intelligence": Building Situated Embodied Agents. New Haven: Lawrence Erlbaum
Harnad, S. (1994) Does the Mind Piggy-Back on Robotic and Symbolic Capacity? To appear in: H. Morowitz (ed.) The Mind, the Brain, and Complex Adaptive Systems.
Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (DW Powers & L Reeker, Eds.) pp. 65-74. Presented at Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991.
Harnad, S., Hanson, S.J. & Lubin, J. (1994) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. In: V. Honavar & L. Uhr (eds) Symbol Processing and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration. (in press)
Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on Virtual Mind. Minds and Machines 2: 217-238.
Morowitz, H. (1992) Beginning of Cellular Life. Yale University Press.
Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83: 435 - 451.
Nagel, T. (1986) The View From Nowhere. New York: Oxford University Press.
Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.
Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and machines. A. Anderson (ed.), Englewood Cliffs NJ: Prentice Hall.