<HTML>
<HEAD><TITLE>Artificial Life: Synthetic vs. Virtual</TITLE></HEAD>
<BODY>
<EM>
Harnad, S. (1993) Artificial Life: Synthetic Versus Virtual.
Artificial Life III. Proceedings, Santa Fe Institute Studies in the
Sciences of Complexity. Volume XVI.
</EM>
<HR>
<CENTER><H1>ARTIFICIAL LIFE: SYNTHETIC VS. VIRTUAL</H1></CENTER>
<ADDRESS>
<A  HREF="http://cogsci.soton.ac.uk/harnad">Stevan Harnad</A><BR>
Cognition et Mouvement URA CNRS 1166<BR>
Universite d'Aix Marseille II<BR>
13388 Marseille cedex 13, France<P>
<A  HREF="mailto:harnad@cogsci.soton.ac.uk">harnad@cogsci.soton.ac.uk</A>
</ADDRESS>
<P>
<B><FONT SIZE=+1>ABSTRACT:</FONT></B>
Artificial life can take two forms: synthetic and
virtual. In principle, the materials and properties of synthetic
living systems could differ radically from those of natural living
systems yet still resemble them enough to be really alive if they
are grounded in the relevant causal interactions with the real
world. Virtual (purely computational) "living" systems, in
contrast, are just ungrounded symbol systems that are
systematically interpretable as if they were alive; in reality they
are no more alive than a virtual furnace is hot.  Virtual systems
are better viewed as "symbolic oracles" that can be used
(interpreted) to predict and explain real systems, but not to
instantiate them. The vitalistic overinterpretation of virtual life
is related to the animistic overinterpretation of virtual minds and
is probably based on an implicit (and possibly erroneous) intuition
that living things have actual or potential mental lives.
<H2>The Animism in Vitalism</H2>
There is a close connection between the superstitious and even
supernaturalist intuitions people have had about both life and mind. I
think the belief in an immaterial or otherwise privileged "vital force"
was, consciously or unconsciously, always parasitic on the mind/body
problem, which is  that conceptual difficulty we all have in
integrating or equating the mental world of thoughts and feelings with
the physical world of objects and events, a difficulty that has even made
some of us believe in an immaterial soul or some other nonmaterial
animistic principle. The nature of the parasitism was this: Whether we
realized it or not, we were always imagining living things as having
mental lives, either actually or potentially. So our attempts to
account for life in purely physical terms indirectly inherited our
difficulty in accounting for mind in purely physical terms.
<P>
It is for this reason that I believe all positive analogies between the
biology of life and the biology of mind are bound to fail: We are
invited by some (e.g., Churchland 1984, 1986, 1989) to learn a lesson
from how wrong-headed it had been to believe that ordinary physics
alone could not explain life -- which has now turned out to be
largely a matter of protein chemistry -- and we are enjoined to apply
that lesson to current attempts to explain the mind -- whether in terms
of physics, biochemistry, or computation -- and to dismiss the
conceptual dissatisfaction we continue to feel with such explanations
on the grounds that we've been wrong in much the same way before.
<P>
But the facts of the matter are actually quite the opposite, I think:
Our real mistake had been in inadvertently conflating the problem of life
and the problem of mind, for in bracketing mind completely when we
consider life, as we do now, we realize there is nothing left that is
special about life relative to other physical phenomena -- at least
nothing more special than whatever is special about biochemistry
relative to other branches of chemistry. But in the case of mind
itself, what are we to bracket?
<H2>The Mind/Body Problem: An Extra Fact</H2>
The disanalogy and the inappropriateness of extrapolating from life to
mind can be made even clearer if we face the mind/body problem
squarely: If we were given a true, complete description of the kind of
physical system that has a mind, we could still continue to wonder
forever (a) why the very same description would not be equally true if
the system did not have a mind, but just looked and behaved exactly 
<EM>as if</EM>
it had one and (b) how we could possibly know that every (or any)
system that fit the description really had a mind (except by
<EM>being</EM>
the system in question). For there is a fact of the matter here -- we
all know there really are mental states, and there really is something
it's like to be a system that has them -- yet that fact is the very one
that eludes such a true, complete description, and hence must be taken
entirely on faith (a state of affairs that rightly arouses some
skepticism in many of us; Nagel 1974, 1986; Harnad 1991).
<P>
Now let me point out the disanalogy between the special empirical and
conceptual situation just described for the case of mind-modelling and
the parallel case of life-modelling, which involves instead a true,
complete description of the kind of physical system that is alive;
then, for good measure, this disanalogy will be extended to the third
parallel case, matter-modelling, this time involving a true complete
description of any physical system at all (e.g., the world of
elementary particles or even the universe as a whole): If we were
given a true, complete description of the kind of physical system that
is alive, could we still ask (a) why the very same description would
not be equally true if the system were not alive, but just looked and
behaved exactly
<EM>as if</EM>
it were alive? And could we go on to ask (b) how we could possibly know
that every (or any) system that fit that description was really alive? I
suggest that we could not raise these questions (apart from general
worries about the truth or completeness of the description itself,
which, it is important to note, is <EM>not</EM>
what is at issue here) because in the case of life there just <EM>is</EM>
no further fact -- like the fact of the existence of mental states --
that exists independently of the true complete physical description.
<P>
Consider the analogous case of physics: Could we ask whether, say, the
planets <EM>really</EM>
move according to relativistic mechanics, or merely look and behave
exactly <EM>as if</EM>
they did? I suggest that this kind of question is merely about normal
scientific underdetermination -- the uncertainty there will always be
about whether any empirical theory we have is indeed true and
complete.
But if we assume it as given (for the sake of argument) that a theory
in physics is true and complete, then there is no longer any room left
for any conceptual distinction between the "real" and "as-if" case,
because there is no further fact of the matter that distinguishes them.
The facts -- all objective, physical ones in the case of both physics
and biology -- have been completely exhausted.
<P>
Not so in the case of mind, where a subjective fact, and a fact about
subjectivity, still remains, and remains unaccounted for (Nagel 1974,
1986). For although the force of the premise that our description is
true and complete logically excludes the possibility that any system
that fits that description will fail to have a mind, the <EM>conceptual</EM>
possibility is not only there, but amounts to a complete mystery as to
<EM>why</EM>
the description should be true and complete at all (even if it is); for
there is certainly nothing in the description itself -- which is all
objective and physical -- that either entails or even shows why it is
probable that mental states should exist at all, let alone be accounted
for by the description.
<P>
I suggest that it is this elusive extra (mental) fact of the matter
that made people wrongly believe that special vital forces, rather than
just physics, would be needed to account for life. But, modulo the
problem of explaining mind (setting aside, for the time being, that mind
too is a province of biology), the problem of explaining life (or
matter) has no such extra fact to worry about. So there is no
justification for being any more skeptical about present or future
theories of life (or matter) than is called for by ordinary
considerations of underdetermination, whose resolution is rightly
relativized and relegated to the arena of rival theories fighting it
out to see which will account for the most data, the most generally
(and perhaps the most economically).
<H2>Searle's Chinese Room Argument Against Computationalism</H2>
How much worry does this extra fact warrant on its own turf, then, in
empirical attempts to model the mind? None, I would be inclined to say
(being a methodological epiphenomenalist who believes that only a fool
argues about the unknowable; Harnad 1982a,b, 1991), but there
does happen to be one prominent exception to the viability of the
methodological strategy of bracketing the mind/body problem even in
mind-modelling, and that exception also turns out to infect a form of
life-modelling; so in some respects we are right back where we started,
with an animistic consideration motivating a vitalistic one!
Fortunately, this exception turns out to be a very circumscribed
special case, easily resolved in principle. In practice, however,
resolving it has turned out to be especially tricky, with most
advocates of the impugned approach (computationalism) continuing to
embrace it, despite counterevidence and counterarguments, for
essentially hermeneutic (i.e., interpretational) reasons (Dietrich
1990; Dyer 1990; cf. Harnad 1990b, c): It seems that the conceptual
appeal of systems that are systematically interpretable <EM>as if</EM>
they were alive or had minds is so strong that it over-rides not only
our natural skepticism (at least insofar as mind-modelling is
concerned) but even the dictates of both common sense and ordinary
empirical evidence -- at least so I will try to show.
<P>
The special case I am referring to is Searle's (1980) infamous Chinese
Room Argument against computationalism (or the computational theory of
mind). I am one of the tiny minority (possibly as few as two) who think
Searle's Argument is absolutely right. Computationalism (Dietrich 1990)
holds that mental states are just computational states -- not just any
computational states, of course, only certain special ones, in
particular, those that are sufficient to pass the Turing Test (TT),
which calls for the candidate computational system to be able to
correspond with us as a pen pal for a lifetime, indistinguishable from
a real pen pal. The critical property of computationalism that makes it
susceptible to empirical refutation by Searle's thought experiment is
the very property that had made it attractive to mind-modellers: its
implementation-independence. According to computationalism, all the
physical details of the implementation of the right computational
states are irrelevant, just as long as they are implementations of the
right computational states; for then each and every implementation will
have the right mental states. It is this property of
implementation-independence --  a property that has seemed to some
theorists (e.g. Pylyshyn 1984) to be the kind of dissociation from the
physical that might even represent a solution to the mind/body problem
-- that Searle exploits in his Chinese Room Argument. He points out
that he too could become an implementation of the TT-passing computer
program -- after all, the only thing a computer program is really doing
is manipulating symbols on the basis of their shapes -- by memorizing
and executing all the symbol manipulation rules himself. He could do
this even for the (hypothetical) computer program that could pass the
TT in Chinese, yet he obviously would not be understanding Chinese
under these conditions; hence, by transitivity of
implementation-independent properties (or their absence), neither would
the (hypothetical) computer implementation be understanding Chinese (or
anything). So much for the computational theory of mind.
<P>
It's important to understand what Searle's argument does and does not
show. It does 
<EM>not</EM>
show that the TT-passing computer cannot possibly be understanding
Chinese. Nothing could show that, because of the other-minds problem
(Harnad 1984, 1991): The only way to know that for sure would be to
<EM>be</EM>
the computer. The computer might be understanding Chinese, but if so,
then that could only be because of some details of its particular
physical implementation (that its parts were made of silicon, maybe?),
which would flatly contradict the implementation-independence premise
of computationalism (and leave us wondering about what's special about
silicon). Searle's argument also does
<EM>not</EM>
show that Searle himself could not possibly be understanding Chinese
under those conditions. But -- unless we are prepared to believe either
in the possibility that (1) memorizing and manipulating a bunch of
meaningless symbols could induce multiple personality disorder (a
condition ordinarily caused only by early child abuse), giving rise to
a second, Chinese-understanding mind in Searle, an understanding of
which he was not consciously aware (Dyer 1990, Harnad 1990c, Hayes et
al. 1992), or, even more far-fetched, that (2) memorizing and
manipulating a bunch of meaningless symbols could render them
consciously understandable to Searle -- the emergence of either form of
understanding under such conditions is, by ordinary inductive
standards, about as likely as the emergence of clairvoyance (which is
likewise not impossible).
<P>
So I take it that, short of sci-fi special pleading, Searle's Argument
that there is nobody home in there is valid for the very circumscribed
special case of any system that is purported to have a mind purely in
virtue of being an implementation-independent implementation of a
TT-passing computer program. The reason his argument is valid is also
clear: Computation is just the manipulation of physically implemented
symbols on the basis of their shapes (which are arbitrary in relation
to what they can be interpreted to mean); the symbols and symbol
manipulations will indeed bear the weight of a systematic
interpretation, but that interpretation is not 
<EM>intrinsic</EM>
to the system (any more than the interpretation of the symbols in a
book is intrinsic to the book). It is projected onto the system by an
interpreter with a mind (as in the case of the real Chinese pen-pal of
the TT-passing computer); the symbols don't mean anything
<EM>to</EM>
the system, because a symbol-manipulating system is not the kind of
thing that anything means anything to.
<P>
There remains, however, the remarkable property of computation that
makes it so valuable, namely, that the right computational system will
indeed bear the weight of a systematic semantic interpretation (perhaps
even the TT). Such a property is not to be sneezed at, and Searle does
not sneeze at it. He calls it "Weak Artificial Intelligence" (to
contrast it with "Strong Artificial Intelligence," which is the form of
computationalism his argument has, I suggest, refuted). The
practitioners of Weak AI would be studying the mind -- perhaps even
arriving at a true, complete description of it -- using computer
models; they could simply never claim that their computer models
actually had minds.
<H2>Symbolic Oracles</H2>
But how can a symbol system mirror most, perhaps all, of the properties of
mind without actually having a mind? Wouldn't the correspondence be too
much to be just coincidental? By way of explication, I suggest that we
think of computer simulations as symbolic oracles. Consider a computer
simulation of the solar system; in principle, we could encode in it all
the properties of all the planets and the sun, all the relevant
physical laws (discretely approximated, but as closely as we like) such
that the solar system simulation could correctly predict and generate
all the positions and interactions of the planets far into the future,
simulating them in virtual or even real time. Depending on how
thoroughly we had encoded the relevant astrophysical laws and boundary
conditions, there would be a one-to-one correspondence between the
properties of the simulation that was interpretable as a solar system
-- let's call it the "virtual" solar system -- and the properties of
the real solar system. Yet none of us would, I hope, want to say that
there was any, say, motion, or mass, or gravity in the virtual solar
system. The simulation is simply a symbol system that is systematically
interpretable
<EM>as if</EM>
it had motion, mass or gravity; in other words, it is a symbolic
oracle.
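<P>
To make the oracle concrete, here is a minimal sketch of my own, in
Python (the constants, the single "planet," and the hourly step size are
illustrative assumptions, not anything from an actual astronomical
model): a discrete approximation of one planet orbiting the sun. Every
quantity in it is just a numeral being rewritten by a syntactic rule.
<PRE>
# A toy "virtual solar system": a symbol system whose successive states
# are systematically interpretable AS IF a planet were moving under
# gravity, though nothing in it moves, weighs, or attracts.
G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30     # numeral interpretable as the sun's mass (kg)

def step(x, y, vx, vy, dt):
    """One discrete update of the 'planet' symbols (semi-implicit Euler)."""
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -G * M_SUN * x / r3, -G * M_SUN * y / r3   # "gravity"
    vx, vy = vx + ax * dt, vy + ay * dt                 # "velocity"
    return x + vx * dt, y + vy * dt, vx, vy             # "position"

# "Earth" at 1 AU with its mean orbital speed; one year of hourly steps.
state = (1.496e11, 0.0, 0.0, 2.978e4)
for _ in range(24 * 365):
    state = step(*state, dt=3600.0)
print(state)   # squiggles interpretable as a planet's new position
</PRE>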
<P>
This is also the correct way to think of a (hypothetical) TT-passing
computer program. It would really just be a symbol system that was
systematically interpretable as if it were a pen-pal corresponding with
someone. This virtual pen-pal may be able to predict correctly all the
words and thoughts of the real pen-pal it was simulating, oracularly,
till doomsday, but in doing so it is no more thinking or understanding
than the virtual solar system is moving. The erroneous idea that there
is any fundamental difference between these two cases (the mental model
and the planetary model) is, I suggest, based purely on the incidental
fact that thinking is unobservable, whereas moving is observable; but
gravity is not observable either, and quarks and superstrings even less
so; yet none of those would be present in a virtual universe either.
For the virtual universe, like the virtual mind, would really just be a
bunch of meaningless symbols -- squiggles and squoggles -- that were
syntactically manipulated in such a way as to be systematically
interpretable
<EM>as if</EM>
they thought or moved, respectively. This is certainly stirring
testimony to the power of computation to describe and predict physical
phenomena and to the validity of the Church/Turing Thesis (Davis 1958;
Kleene 1969), but it is no more than that. It certainly is not evidence
that thinking is just a form of computation.
<P>
So computation is a powerful, indeed oracular, tool for modelling,
predicting and explaining planetary motion, life and mind, but it
is not a powerful enough tool for actually 
<EM>implementing</EM>
planetary motion, life or mind, because planetary motion, life and mind
are not mere implementation-independent computational phenomena. I will
shortly return to the question of what might be powerful enough in its
place, but first let me try to tie these considerations closer to the
immediate concerns of this conference on artificial life.
<H2>Virtual Life: "As If" or Real?</H2>
Chris Langton once proposed to me an analogue of the TT, or rather, a
generalization of it from artificial mind to artificial life: Suppose,
he suggested, that we could encode all the initial conditions of the
biosphere around the time life evolved, and, in addition, we could
encode the right evolutionary mechanisms -- genetic algorithms, game of
life, what have you -- so that the system actually evolved the early
forms of life, exactly as it had occurred in the biosphere. Could it
not, in principle, go on to evolve invertebrates, vertebrates, mammals,
primates, man, and then eventually even Chris and me, having the very
conversation we were having at the time in a pub in Flanders -- indeed,
could it not go on and outstrip us, cycling at a faster pace
through the same experiences and ideas Chris and I would eventually go
on to arrive at in real time? And if it could do all that, and if we
accept it as a premise (as I do) that there would be not one property
of the real biosphere, or of real organisms, or of Chris or me, that
would not also be present in the virtual world in which all this
virtual life, and eventually these virtual minds, including our own,
had "evolved," how could I doubt that the virtual life was real?
Indeed, how could I even distinguish them?
<P>
Well, the answer is quite simple, as long as we don't let the power of
hermeneutics loosen our grip on one crucial distinction: the
distinction between objects and symbolic descriptions of them. There
may be a one-to-one correspondence between object and symbolic
description, a description that is as fine-grained as you like, perhaps
even as fine-grained as can be. But the correspondence is between
properties that, in the case of the real object, are what they are
intrinsically, without the mediation of any interpretation, whereas in
the case of the symbolic description, the only "objects" are the physical
symbol tokens and their syntactically constrained interactions; the
rest is just our interpretation of the symbols and interactions 
<EM>as if</EM>
they were the properties of the objects they describe. If the
description is complete and correct, it will always bear the full
weight of that interpretation, but that still does not make the
corresponding properties in the object and the description
<EM>identical</EM>
properties; they are merely computationally equivalent under the
mediation of the interpretation. This should be only slightly harder to
see in the case of a dynamic simulation of the universe than it is in
the case of a book full of static sentences about the universe.
<P>
In the case of real heat and virtual heat, or real motion and virtual
motion, the distinction between identity and formal equivalence is clear:
a computer simulation of a fire is not really hot, and nothing is really
moving in a computer simulation of planetary motion. In the case of
thinking I have already argued that we have no justification for
claiming an exception merely because thinking is unobservable (and
besides, Searle's "periscope" shows us that we wouldn't find thinking in
there even if we <EM>became</EM>
the simulation and observed for ourselves what is unobservable to
everyone else). What about life? We have already seen that life --
once we bracket the mind/body problem -- is not different from any
other physical phenomenon, so virtual life is no more alive than
virtual planetary motion moves or virtual gravity attracts. Chris
Langton's virtual biosphere, in other words, would be just another
symbolic oracle: yet another bunch of systematically interpretable
squiggles and squoggles: Shall we call this "Weak Artificial Life"?
(Sober 1992, as I learned after writing this paper, had already made
this suggestion at the Artificial Life II meeting in 1990).
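<P>
A minimal sketch of my own may help fix the intuition. The following
Python fragment implements one update rule of the game of life mentioned
above (the starting "glider" is the standard textbook pattern); whatever
"organisms" crawl across this virtual world are, on inspection, nothing
but sets of coordinate symbols rewritten by a syntactic rule:
<PRE>
from itertools import product

def life_step(live):
    """Map a set of live-cell coordinates to the next generation."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                c = (x + dx, y + dy)
                counts[c] = counts.get(c, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))   # interpretable AS IF something had crawled
</PRE>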
<P>
<H2>Sensorimotor Transduction, Robotics, The "Total Turing Test," and
Symbol Grounding</H2>
<P>
I did promise to return to the question of what more might be needed to
<EM>implement</EM>
life if computation, for all its power and universality, is not strong
enough. Again, we return to the Chinese Room Argument for a hint, but
this time we are interested in what kind of system Searle's argument
<EM>cannot</EM>
successfully show to be mindless: We do not have to look far.
Even an optical transducer is immune to Searle, for if someone claimed
to have a system that could really see (as opposed to being merely
interpretable <EM>as if</EM>
it could see), Searle's "periscope" would already fail (Harnad 1989).
For if Searle tried to implement the putative "seeing" system without
seeing (as he had implemented the putative "understanding" system
without understanding), he would have only two choices. One would be to
implement only the <EM>output</EM>
of the transducer (if we can assume that its output would be symbols)
and whatever symbol manipulations were to be done on that output, but
then it would not be surprising if Searle reported that he could not
see, for he would not be implementing the whole system, just a part of
it. All bets are off when only parts of systems can be implemented.
Searle's other alternative would be to actually look at the system's
scene or screen in implementing it, but then, alas, he <EM>would</EM>
be seeing. Either way, the Chinese Room strategy does not work. Why?
Because mere optical transduction is an instance of the many things
there are under the sun -- including touch, motion, heat, growth,
metabolism, photosynthesis, and countless other "analog" functions --
that are <EM>not</EM>
just implementations of implementation-independent symbol
manipulations. And only the latter are vulnerable to Searle's
Argument.
<P>
This immediately suggests a more exacting variant of the Turing Test
-- what I've called the Total Turing Test (TTT) --  which, unlike the
TT, is not only immune to Searle's Argument but reduces the level of
underdetermination of mind-modelling to the normal level of
underdetermination of scientific theory (Harnad 1989, 1991). The
TT clearly has too many degrees of freedom, for we all know perfectly
well that there's a lot more that people can do than be pen-pals. The
TT draws on our linguistic capacity, but what about all our robotic
capacities, our capacity to discriminate, identify and manipulate the
objects, events and states of affairs in the world we live in? Every
one of us can do that; so can animals (Harnad 1987; Harnad et al. 1991).
Why should we have thought that a system deserved to be assumed to have
a mind if all it could generate was our pen-pal capacity?
Not to mention that there is good reason to believe that our linguistic
capacities are <EM>grounded</EM>
in our robotic capacities. We don't just trade pen-pal symbols with one
another; we can each also identify and describe the objects we see,
hear and touch, and there is a systematic <EM>coherence</EM>
between how we interact with them robotically and what we say about
them linguistically.
<P>
My own diagnosis is that the problem with purely computational models is
that they are <EM>ungrounded</EM>. There may be symbols in there that are
systematically interpretable as meaning "cat," "mat" and "the cat is on
the mat," but in reality they are just meaningless squiggles and
squoggles apart from the interpretation we project onto them. In an
earlier conference in New Mexico (Harnad 1990a) I suggested that
the symbols in a symbol system are ungrounded in much the same way the
symbols in a Chinese-Chinese dictionary are ungrounded for a
non-speaker of Chinese:  He could cycle through it endlessly without
arriving at meanings unless he already had grounded meanings to begin
with (as provided by a Chinese-English dictionary, for an English speaker).
Indeed, <EM>translation</EM>
is precisely what we are doing when we interpret symbol systems,
whether static or dynamic ones, and that's fine, as long as we are
using them only as oracles, to help us predict and explain things. For
that, their systematic interpretability is quite sufficient.
But if we actually want them to <EM>implement</EM>
the things they predict and explain, they must do a good deal more. At
the very least, the meanings of the symbols must somehow be grounded in
a way that is independent of our projected interpretations and in no
way mediated by them.
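<P>
The dictionary-go-round is easy to exhibit in miniature. In the
following toy Python sketch (my own, with made-up English entries
standing in for the Chinese-Chinese dictionary), every lookup lands on
nothing but further symbols:
<PRE>
definitions = {                 # a miniature symbol-to-symbol dictionary
    "cat": ["feline"],
    "feline": ["cat-like"],
    "cat-like": ["resembling", "cat"],
    "resembling": ["cat-like"],
}

def lookup(symbol, steps=8):
    """Chase definitions; note that we only ever reach more symbols."""
    trail = [symbol]
    for _ in range(steps):
        symbol = definitions[symbol][0]   # follow the first defining symbol
        trail.append(symbol)
    return trail

print(" -> ".join(lookup("cat")))   # cat -> feline -> cat-like -> ...
</PRE>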
<P>
The TTT accordingly requires the candidate system, which is now a robot
rather than just a computer, to be able to interact robotically with
(i.e., to discriminate, identify, manipulate and describe) the
objects, events and states of affairs that its symbols are
systematically interpretable as denoting, and it must do so in such a
way that its symbolic performance coheres systematically with its
robotic performance. In other words, not only must it be capable,
as any pen-pal would be, of talking about cats, mats, and cats on mats
indistinguishably from the way we do, but it must also be capable of
discriminating, identifying, and manipulating cats, mats, and cats on
mats exactly as any of us can; and its symbolic performance must square
fully with its robotic performance, just as ours does (Harnad 1992).
<P>
The TT was a tall order; the TTT is an even taller one. But note that
whatever candidate successfully fills the order <EM>cannot</EM>
be just a symbol system. Transduction and other forms of analog
performance and processing will be essential components in its
functional capacity, and subtracting them will amount to reducing
what's left to those mindless squiggles and squoggles we've kept coming
up against repeatedly in this discussion. Nor is this added performance
capacity an arbitrary demand. The TTT is just normal empiricism. Why
should we settle for a candidate that has less than our full
performance capacities? The proper time to scale the model down to
capture our handicaps and deficits is only <EM>after</EM>
we are sure we've captured our total positive capacity (with the
accompanying hope that a mind will piggy-back on it) -- otherwise we
would be like automotive (reverse) engineers (i.e., theoretical
engineers who don't yet know how cars work but have real cars to study)
who were prepared to settle for a functional model that has only the
performance capacities of a car without moving parts, or a car without
gas: The degrees of freedom of such "handicapped" modelling would be
too great; one could conceivably go on theory-building forever without
ever converging on real automobile performance capacity that way.
<P>
A similar methodological problem unfortunately also affects the TTT
modelling of lower organisms: If we knew enough about them ecologically
and psychologically to be able to say with any confidence what their
respective TTT capacities were, and whether we had captured them
TTT-indistinguishably, lower organisms would be the ideal place to
start; but unfortunately we do not know enough, either ecologically or
psychologically (although attempts to approximate the TTT capacities of
lower organisms will probably still have to precede or at least proceed
apace with our attempts to capture human TTT capacity).
<H2>The TTT Versus the TTTT</H2>
Empiricists might want to counter that the proper degrees of freedom
for mind modelling are neither those of the TT nor the TTT but the TTTT
(Total-Total Turing Test), in which the candidate must be empirically
indistinguishable from us not only in all of its macrobehavioral
capacities but also in all of its microstructural (including neural)
properties (Churchland 1984, 1986, 1989). I happen to think that the
TTTT would be supererogatory, even overly constraining, and that the
TTT already narrows the degrees of freedom sufficiently for the branch
of reverse engineering that mind-modelling belongs to. There is still a
kind of implementation-independence here too, but not a
computationalist kind:  There is no reason to believe that biology has
exhausted all the possibilities of optical transduction, for example.
All optical transducers must transduce light, to be sure, but apart
from that, there is room for a lot more possibilities along the
continuum on which the human retina and the Limulus ommatidia represent
only two points.
<P>
The world of objects and the physics of transducing energy from them
provide the requisite constraints for mind-modelling, and every
solution that manages to generate our TTT capacity within those
constraints has (by my lights) equal claim to our faith that it has
mental states -- I don't really see the one that is
TTTT-indistinguishable from us as significantly outshining the rest. My
reasons for believing this are simple: We are blind to Turing
indistinguishable differences (that's why there's an other-minds
problem and a mind/body problem). By precisely the same token, the
Blind Watchmaker is likewise blind to such differences. There cannot
have been independent selection pressure for having a mind, since
selection pressure can operate directly only on TTT capacity.
<P>
Yet a case <EM>might</EM>
be made for the TTTT if the capacity to survive, reproduce and
propagate one's genes is an essential part of our TTT capacity, for
that narrows down the range of eligible transducers still further, and
these are differences that evolution is <EM>not</EM>
blind to. In this case, the TTTT might pick out microstructural
features that are too subtle to be reflected in individual behavioral
capacity, and the TTT look-alikes lacking them might indeed lack a
mind (cf. Morowitz 1992).
<P>
My own hunch is nevertheless that the TTT is strong enough on its own
(although neuroscience could conceivably give us some clues as to how
to pass it), and I'm prepared to extend human rights to any synthetic
candidate that passes it, because the TTT provides the requisite
constraints for grounding symbols, and that's strong enough grounds for
me. I doubt, however, that TTT capacity can be second-guessed a priori,
even with the help of symbolic oracles. Perhaps this is the point where
we should stop pretending that mind-modellers can bracket life and that
life-modelers can bracket mind. So far, only living creatures seem to
have minds. Perhaps the constraints on the creation of synthetic life
will be relevant to the constraints on creating synthetic minds.
<P>
These last considerations amount only to fanciful speculation, however;
the only lesson Artificial Life might take from this paper is that it
is a mistake to be too taken with symbol systems that display the formal
properties of living things. It is not that only natural life is
possible; perhaps there can be synthetic life too, made of radically
different materials, operating on radically different functional
principles. The only thing that is ruled out is "virtual" or purely
computational life, because life (like mind and matter) is not just a
matter of interpretation.
<H2>Coda: Analog "Computation"</H2>
Let me close with a remark on analog computation. Searle's Chinese Room
Argument applies only to computation in the sense of finite, discrete
symbol manipulation. Searle cannot implement a transducer because the
transduction of physical energy is not just the manipulation of finite
discrete symbols. Searle would even be unable to implement a parallel,
distributed system like a neural net (though he could implement a
discrete serial simulation of one; Harnad 1993). Indeed, every form of analog
computation is immune to Searle's Argument. Is analog computation also
immune to the symbol grounding problem? I am inclined to say it is, if
only because "symbols" usually mean discrete physically tokened
symbols. Some people speak of "continuous symbols," and even suggest
that the (real) solar system is an (analog) computer if we choose to
use it that way (MacLennan 1987, 1988, in press a, b, c). We are here
getting into a more general (and I think vaguer) sense of "computing"
that I don't think says much about either life or mind one way or the
other. For if the solar system is computing, then everything is
computing, and hence <EM>whatever</EM>
the brain -- and any synthetic counterpart of it -- is actually doing,
it too is computing. When "implementation independence" becomes so
general as to mean obeying the same differential equations, I think the
notion that a system is just the implementation of a computation
becomes rather uninformative. So take my argument against the
overinterpretation of virtual life to be applicable only to finite,
discrete symbol systems that are interpretable as if they were living,
rather than to systems that obey the same differential equations as
living things, which I would regard as instances of synthetic rather
than virtual life.
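<P>
The contrast can be put in one final sketch of my own (the two-unit net
and its weights are arbitrary assumptions): a discrete, serial Python
simulation of a parallel net -- exactly the sort of finite symbol
manipulation Searle could in principle execute by hand, which the analog
net itself is not:
<PRE>
import math

WEIGHTS = [[0.0, 0.8], [-0.5, 0.0]]   # hypothetical 2-unit net
state = [0.1, 0.9]

def serial_update(state):
    """Update each unit in turn from the OLD state vector, re-enacting
    the net's parallel dynamics as serial symbol shuffling."""
    return [1.0 / (1.0 + math.exp(-sum(w * s for w, s in zip(row, state))))
            for row in WEIGHTS]

for _ in range(3):
    state = serial_update(state)
print(state)   # numerals interpretable AS IF units had fired in parallel
</PRE>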
<H2>References</H2>
Churchland, P. M. (1984) Matter and consciousness: A contemporary
introduction to the philosophy of mind. Cambridge, MA: MIT Press.
<P>
Churchland, P. M. (1989) A neurocomputational perspective: The
nature of mind and the structure of science. Cambridge, MA: MIT Press.
<P>
Churchland, P. S. (1986) Neurophilosophy: Toward a unified
science of the mind-brain. Cambridge, MA: MIT Press.
<P>
Davis, M. (1958) Computability and unsolvability.
New York: McGraw-Hill.
<P>
Dietrich, E. (1990) Computationalism.  Social Epistemology
4: 135 - 154.
<P>
Dyer, M. G. (1990) Intentionality and computationalism: Minds, machines,
Searle and Harnad. Journal of Experimental and Theoretical
Artificial Intelligence 2(4).
<P>
Harnad, S. (1982a) Neoconstructivism: A unifying theme for the
cognitive sciences. In: Language, mind and brain
(T. Simon & R. Scholes, eds., Hillsdale NJ: Erlbaum), 1 - 11.
<P>
Harnad, S. (1982b) Consciousness: An afterthought.
Cognition and Brain Theory 5: 29 - 47.
<P>
Harnad, S. (1984) Verifying machines' minds. (Review of J. T.
Culbertson, Consciousness: Natural and artificial, NY: Libra 1982.)
Contemporary Psychology 29: 389 - 391.
<P>
Harnad, S. (1987) The induction and representation of categories.
In: Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of
Cognition. New York: Cambridge University Press.
<P>
Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical
and Experimental Artificial Intelligence 1: 5-25.
<P>
Harnad, S. (1990a) The Symbol Grounding Problem.
Physica D 42: 335-346.
<P>
Harnad, S. (1990b) Against Computational Hermeneutics. (Invited
commentary on Eric Dietrich's Computationalism)
Social Epistemology 4: 167-172.
<P>
Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. Invited
Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad.
Journal of Experimental and Theoretical Artificial Intelligence
2: 321 - 327.
<P>
Harnad, S. (1991) Other bodies, Other minds: A machine incarnation
of an old philosophical problem. Minds and Machines 1: 43-54.
<P>
Harnad, S. (1992) Connecting Object to Symbol in Modeling
Cognition.  In: A. Clarke and  R. Lutz (Eds) Connectionism in Context
Springer Verlag.
<P>
Harnad, S. (1993) Grounding Symbols in the Analog World with Neural
Nets. Think 2: 12 - 78 (Special Issue on "Connectionism versus
Symbolism" D.M.W. Powers & P.A. Flach, eds.).
<P>
Harnad, S., Hanson, S. J. & Lubin, J. (1991) Categorical Perception and
the Evolution of Supervised Learning in Neural Nets. In: Working
Papers of the AAAI Spring Symposium on Machine Learning of Natural
Language and Ontology (D. W. Powers & L. Reeker, Eds.), pp. 65-74. Presented
at the Symposium on Symbol Grounding: Problems and Practice, Stanford
University, March 1991; also reprinted as Document D91-09, Deutsches
Forschungszentrum fur Kuenstliche Intelligenz GmbH, Kaiserslautern, FRG.
<P>
Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium
on the Virtual Mind. Minds and Machines (in press)
<P>
Kleene, S. C. (1969) Formalized recursive functionals and formalized
realizability. Providence, RI: American Mathematical Society.
<P>
MacLennan, B. J. (1987) Technology independent design of neurocomputers:
The universal field computer.  In M. Caudill & C. Butler (Eds.),
Proceedings, IEEE First International Conference on Neural Networks
(Vol. 3, pp. 39-49).  New York, NY:  Institute of Electrical and
Electronic Engineers.
<P>
MacLennan, B. J. (1988) Logic for the new AI.  In J. H. Fetzer (Ed.),
Aspects of Artificial Intelligence (pp. 163-192).  Dordrecht:  Kluwer.
<P>
MacLennan, B. J. (in press-a) Continuous symbol systems: The logic of
connectionism.  In  Daniel S. Levine and Manuel Aparicio IV (Eds.),
Neural Networks for Knowledge Representation and Inference.  Hillsdale,
NJ:  Lawrence Erlbaum.
<P>
MacLennan, B. J. (in press-b) Characteristics of connectionist
knowledge representation.  Information Sciences, to appear.
<P>
MacLennan, B. J. (1993) Grounding Analog Computers. Think 2: 48-51.
<P>
Morowitz, H. (1992) Beginnings of Cellular Life. New Haven: Yale University Press.
<P>
Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83:
435 - 451.
<P>
Nagel, T. (1986) The view from nowhere.  New York: Oxford University
Press.
<P>
Newell, A. (1980) Physical Symbol Systems. Cognitive Science
4: 135 - 183.
<P>
Pylyshyn, Z. W. (1984) Computation and cognition.
Cambridge, MA: Bradford Books.
<P>
Searle, J. R. (1980) Minds, brains and programs.
Behavioral and Brain Sciences 3: 417-424.
<P>
Sober, E. (1992) Learning from functionalism: Prospects for strong AL.
In: C. G. Langton (Ed.) Artificial Life II. Redwood City, CA:
Addison-Wesley.
<P>
Turing, A. M. (1964) Computing machinery and intelligence. In: Minds
and machines, A. Anderson (ed.), Englewood Cliffs, NJ: Prentice Hall.
</BODY>
</HTML>
