Presented at: Kazimierz Workshop on Naturalised Epistemology, Kazimierz Dolny, Poland, 1-5 September 2007 http://bacon.umcs.lublin.pl/~ktalmont/KNEW/
Stevan Harnad (UQAM & U. Southampton)
ABSTRACT: Nature is only interested in know-how, not "know-that": Foraging, feeding, fleeing, fledging, etc. So if know-how were all we had, then naturalizing epistemology would be easy (but neither epistemology, nor even language would have fledged). So is it enough just to add that knowing facts and formulas is part of the cognitive competence subserving our know-how? The answer may be a bit subtler than that, because the evolution of sociality and language have themselves "commodified" knowledge, so that acquiring a fact can be as much of an adaptive imperative as acquiring a fruit. But there is a bootstrapping problem, getting here from there: Acquiring facts cannot become like acquiring fruit until we have language. So it's down to the origins and adaptive value of language. Here is a hypothesis: Categorization is, at bottom, know-how: It's knowing what's the right thing to do with the right kind of thing (what to feed, flee or fledge, and what not) in order to survive, reproduce, and beat the competition. But if categories are based on our practical know-how, then the ones we already have can also be named (another case of know-how). And if categories can be named, then still other categories (that you have but I haven't, yet) can be described, even defined (for me, by you), by stringing those names into propositions with truth values. This is the capacity that sets our own species apart from all others: Every species that can learn can acquire categories by trial and error from direct sensorimotor experience, detecting the invariant sensorimotor features and rules that reliably distinguish the category members from the nonmembers. But only our species can also acquire categories from hearsay. 
And that not only opens up a vast wealth of potential categories, all the way from the practical to the platonic: more important, making all those invariant features and rules explicit and communicable saves us a lot of time, effort and risk in acquiring our adaptive know-how -- enough to have radically altered the brains of our ancestors at least 100,000 years ago, and turned them into us. It also made possible that form of distributed, collaborative, collective cognition we call culture.
Philosophers have long worried about the origin of knowledge: What do we know, and how do we know it?
Knowing-How vs. Knowing-That. What we know consists of two kinds of things: (1) Knowing-How, which is the things we know how to do, and (2) Knowing-That, which is the things that we believe to be true (when they are indeed true). Strictly speaking, knowing-that is itself just a special case of knowing-how, in the sense that we can state verbally the propositions that we take to be true, and we can also state that they are true. Being able to do that is itself a form of know-how. If you reply that underlying that special verbal know-how is something further -- say, that it also includes some information that we must possess, rather than merely consisting of our ability to state something -- then it has to be pointed out, symmetrically, that information has to be possessed in order to have ordinary know-how too: it is just that that information is (usually) not as explicit when it underlies our ability to do something as it is when it is formulated as a proposition, and when the something we need to do is to state (and perhaps justify) that proposition.
We will return to implicit versus explicit knowledge later. There are also a few complexities regarding what does and does not count as knowing that we will not resolve here -- for example, whether or not knowing is justified true belief:
Justified True Belief. There are the well-known Gettier problems about whether knowledge is or is not more than just believing, with justification, something that is in fact true. This only seems to be a problem for knowing-that, not for knowing-how. Of course, it is possible that when I am playing chess with someone, my opponent is not really playing chess, but 'schmess', according to a different set of rules, one that will eventually lead to an illegal chess move (along the lines of Nelson Goodman's blue/grue divergence). Or the rules of schmess might be formal duals of the rules of chess, differing in form, but generating all and only the exact same legal moves.
I don't think either of these cases generates a Gettier problem for know-how, because playing chess is something I know how to do, and I am successfully doing that whether I am playing according to the rules of chess or the rules of schmess (although with blue/grue schmess I am only doing it to an approximation that may eventually break down). This would only become a Gettier problem if the know-how in question were that of explicitly verbalizing the rules of chess, not merely playing in accordance with them. (This joins Wittgenstein's distinction between what we might now call implicit and explicit rule-following.)
We will also return to the Gettier problem for the case of knowing-that: For now this can be restated as the question of whether or not knowing-that an explicit formal rule is true amounts to anything more than (1) being able to follow it implicitly (which is just knowing-how), together with (2) the know-how to state the rule explicitly, (3) to 'justify' it explicitly (which amounts to still more verbal know-how, including knowing how to reason as well as how to make inductive inferences from evidence); and perhaps also (4) having arrived at the rule in the 'right' way. It is this question of how one comes to have explicit knowledge that is closest to the theme of this conference on naturalizing epistemology.
Darwinian Grounding. The temptation is to say that the missing element in the view of knowledge as justified true belief is that learning from a lifetime of empirical experience and reasoning is not 'justification' enough. What is missing is a Darwinian grounding that 'biologizes' our know-how in terms of its adaptive consequences (for our ancestors as well as for ourselves). I don't yet 'know' that 2+2=4 if I have (i) the know-how to count as well as to (ii) do arithmetic calculations; nor if I can (iii) state that 2+2=4; nor if I have the know-how to (iv) prove that 2+2=4 from Peano's axioms; nor if (v) 2+2=4 really is true (in either Plato's world or the real world, whichever you prefer); nor even if (vi) I fervently believe 2+2=4 to be true. Something could still be missing, according to the Gettier story. So if counting, doing arithmetic, acquiring arithmetic, iterative and recursive know-how, and acquiring and exercising theorem-proving know-how advantaged our ancestors in their struggle to survive and reproduce -- and if even a specific tendency to arrive at the belief that 2+2=4 was evolutionarily prepared in our genes -- then maybe that was what made believing that 2+2=4 into knowing rather than just justified true believing; or, rather, the 'justification' is extended from the history of the individual to the history of the species.
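Step (iv) -- proving that 2+2=4 from Peano's axioms -- can be made concrete. As a hedged illustration (in Lean 4 syntax, where the natural numbers are constructed Peano-style from zero and successor), the proof reduces to pure definitional computation:

```lean
-- 2 + 2 = 4 holds by computation over the Peano construction
-- of the naturals (zero and successor):
example : 2 + 2 = 4 := rfl

-- The same fact spelled out without decimal notation, using only
-- successor terms; addition unfolds as a + succ b = succ (a + b):
example : Nat.succ (Nat.succ Nat.zero) + Nat.succ (Nat.succ Nat.zero)
        = Nat.succ (Nat.succ (Nat.succ (Nat.succ Nat.zero))) := rfl
```

The point of the illustration is that the 'justification' here is mechanical entailment, which is precisely what makes this case special among candidate items of knowledge.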
Language and Feeling. I actually doubt that's the whole story, although I do think Darwinian evolution had a big hand in upgrading justified true belief toward the status of knowledge. I think evolution's contribution was in the transition from the kind of implicit knowing-how that we share with all other species, to the unique, explicit knowing-that that is peculiar to our own species. This began with the advent of language, and it was our linguistic know-how itself that made it possible for us to make our beliefs formal and explicit, to share them, and to justify upgrading some of the true ones to the status of 'knowledge'. But I don't think the only critical element in that upgrade was language. I think it was also consciousness, by which all I mean is feeling. I know that 2+2=4 not just because I can say it, and can prove it, but because I feel it.
Now wait a minute: Aren't my feelings fallible? Surely my false beliefs feel true too! So surely feeling can't be essential to knowing!
Yes it can. All I said was that justified true belief is not enough to count as knowledge, and not even enough if our justificatory know-how has the full expressive power of natural language, formal reasoning, and inductive inference; nor even if it is the product of an adaptive legacy selected for and encoded in our genes. For if we really were just insentient survival machines (feelingless Zombies -- which is all that Darwinian evolution could ever predict or explain), then even if our adaptive know-how were Turing-Indistinguishable from what it is now -- i.e., even if we could do and say and justify and explain everything we can do/say/justify/explain now -- both the knowledge/belief distinction and the notion of a belief itself would become empty and meaningless. Indeed, our propositions would be meaningless (to us) if we were just feelingless survival machines. We would have Turing-Test-scale know-how, both behavioral and verbal, and all the data and data-processing power needed to generate that know-how, but we would simply be behaving adaptive automata, not believing, knowing minds.
And that would only be because we did not feel. Hence there would be nothing that it feels-like to know, and our verbalizing the proposition that 2+2=4 would be as meaningless (to us) as it is when a computer or robot (or film or book) verbalizes it today.
It would be meaningless to us, of course, because there would be no 'us', if we were such Zombies. They would just be adaptive automata that behaved exactly like us, because they had all of our know-how. This illustrates, in passing, why simply adopting Dan Dennett's 'intentional stance' -- which is to interpret one another as if we had beliefs and knowledge -- might be useful in predicting and explaining our know-how; but being interpretable and interpreted as if we had beliefs or knowledge does not entail that we really do have beliefs or knowledge, for the very same reason that being interpretable or interpreted as if we had feelings does not entail that we really do have feelings. It just implies that interpreting us in that way is feasible, and useful to the interpreter. And of course in our case it also happens to be true, because we really do have beliefs, knowledge and feelings. So when I said that feeling was the only missing element in the justified true belief equation, I did not mean that feeling that one knows is a sufficient condition for knowing: I meant that being able to feel something (anything) at all is a necessary condition for knowing. And that one of the essential components missing from a justified true belief, needed to upgrade it to knowledge, is the very same component that makes it into a belief in the first place, rather than merely the mechanical instantiation of an action or object or state that is interpretable (by a believer) as a proposition. And that that essential component is the capacity to feel.
Cartesian Cognition. But before we declare something known, rather than just believed and true, justification needs a closer look too. Let's return to the proposition that 2+2=4. Its justification is that it is provable and proven. It is also (a fortiori) true. If an entity has the know-how to instantiate that proposition as well as to justify (prove) it, then it remains to ask whether the entity believes it. It only believes it if it feels as if it believes it. And for that, the entity has to feel. If so, it has a justified true belief that 2+2=4. It also knows that 2+2=4.
There is no Gettier argument here to the effect that 2+2=4 might just happen to be true, and the proof might just happen to be correct, with the truth of 2+2=4 nevertheless not entailed by the validity of the proof: it is entailed by the proof. Logical entailment means true on pain of contradiction; that is also what 'necessarily true' means. So 2+2=4 is not just true but necessarily true. That means that the justification -- the proof -- was a sufficient condition for guaranteeing that 2+2=4 is true. So the justified true believer in 2+2=4 also knows that 2+2=4 is true. Knowing becomes sufficiently justified true belief, with the sufficient justification being the proof. But now the question is whether there are any other true beliefs, apart from the provable ones, that are sufficiently justified to count as knowledge. I am with Descartes here: we can only know what we can know with certainty. That means the necessary truths of mathematics, provably true on pain of contradiction -- plus the Cogito, which is my knowledge that I am feeling, if/when I feel. That's all. The rest of what we call 'knowledge' is not knowledge at all, but just sufficiently justified beliefs that are highly probably true.
Sufficiently Justified Beliefs. All this by way of saying that 'naturalizing' our beliefs as 'justified' by our Darwinian pasts as encoded in our current genetic and neural machinery does not suffice to transform them into knowledge. But it doesn't need to. Beliefs that are sufficiently justified as highly probable are as good as knowledge in the real Darwinian world. Adaptive outcomes are not proofs or even optimizations; they are merely what Herb Simon called 'satisficing' -- doing well enough to beat the competition, so far. The rest of this paper will be about how our species evolved its unique capacity to acquire and transmit sufficiently justified beliefs.
As noted earlier, it all began with know-how: species' know-how comes in two forms, continuous and categorical. Knowing how to walk is continuous know-how. Knowing what to eat is categorical know-how. We will now focus on categorical know-how, as that covers most of our cognitive capacity. To know how to categorize is to know how to do the right thing with the right kind of thing: what to approach, avoid, eat, mate with, dominate, defer to, acquire, defend, feed, sit on, swim in, etc. Hence to know how to categorize is to know what kind of thing to do with what kind of thing. Much of animals' adaptive behavior is categorization. Even continuous behavior is often guided by categorical decision points.
Acquiring Categories through Evolution versus Induction. Some categories are inborn: The frog is born knowing what sort of visual stimulus to flick its tongue out at, to capture and swallow; the duckling knows (roughly) the kind of moving thing it should begin to follow from the moment it is hatched; the newborn baby knows what shape to suckle at from birth. Those categories were acquired before birth by the genes and nervous systems of each of these organisms' ancestors by what I would like to call 'Darwinian theft' in order to distinguish them from categories acquired during a lifetime of 'honest toil.' Darwinian theft is just that trial-and-error process of genetic variation and selection, shaped by the adaptive and reproductive consequences that its founder called 'natural selection'.
Most categories, however, are not inborn as a result of natural selection, but learned, especially among the higher vertebrates, mammals, and primates. To acquire categories by honest toil, organisms need a category induction mechanism that can learn from trial and error experience -- guided by feedback from the consequences of categorizing correctly and incorrectly, much the way evolution by natural selection works -- to detect the features and rules that reliably distinguish the members of the category from the non-members. That learning mechanism itself, of course, first has to be acquired through evolution.
Rudimentary learning mechanisms are known: neural nets, for example, are especially good at category learning, although none of them is yet able to learn categories at the natural scale of higher vertebrates, let alone higher mammals or primates. The important thing to note is that inductive learning is a dynamical process, just as natural selection is: it occurs in real time, during the lifetime of the individual organism, rather than in evolutionary time, during the history of a species. And there are potential mechanisms for it.
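To make the idea of such a mechanism concrete, here is a minimal sketch of category induction by supervised trial and error -- a simple perceptron-style learner, not any specific model from the literature; the feature set, the toy 'edible' category, and all names are invented for illustration:

```python
import random

def learn_category(samples, labels, epochs=50, lr=0.1):
    """Learn a feature-based category rule by trial and error:
    each mis-categorization adjusts the feature weights, playing the
    role of corrective feedback from the consequences of the act."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in zip(samples, labels):
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = label - guess          # feedback: right or wrong
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    # The resulting detector embodies the rule implicitly, in its
    # weights, without ever formulating the rule explicitly.
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy world: mushrooms with 3 binary features; edible iff feature 0.
random.seed(0)
data = [[random.randint(0, 1) for _ in range(3)] for _ in range(40)]
labels = [x[0] for x in data]
eat_detector = learn_category(data, labels)
```

The learned `eat_detector` then sorts members from non-members of the category on the basis of the invariant feature, even though the rule it follows is nowhere stated explicitly.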
So the acquisition of category know-how by organisms through honest toil during their lifetimes is not a mystery in principle, although its learning mechanism still needs to be worked out in practice. Hence the 'justification' (if justification is needed) for the categorical know-how arising from both Darwinian theft and inductive toil is the same: it leads to sufficiently adaptive consequences -- sufficient to survive, reproduce, and stay ahead of the competition so far. But just as Darwinian theft proved too time-consuming and uncertain a means of acquiring category know-how, and hence the mechanism for acquiring categories by inductive toil proved to be an adaptive advantage, so inductive toil itself has disadvantages, being likewise time-consuming, uncertain and risky. If one must learn what to do with what kind of thing from trial-and-error experience alone, one risks -- to put it in a caricatured way -- being consumed by a predator one takes for prey.
Acquiring Categories through Induction versus Instruction. Let us look more closely at how categories are acquired via Darwinian theft and inductive toil. The successful categorizer has a category-detector or filter that can discriminate the members from the nonmembers on the basis of the sensorimotor features that distinguish them. This is effectively a sensorimotor feature-based rule that is implemented by the detector. In the case of evolved category-detectors like the frog's bug-detector, they were constructed by the trial-and-error process of natural selection. In the case of learned category-detectors, they were constructed from the trial-and-error experience of the learning mechanism. Either way, the category detector implements a feature-detection rule that is sufficient to detect the members of the category as accurately as adaptivity demands and the organism can manage. The rule is implicit in the functioning of the detector. The detector performs according to the rule, but it need not formulate the rule explicitly. Is it possible to improve upon this in some way?
In 1866 the Société de Linguistique de Paris adopted a moratorium on papers hypothesizing about the origin of language because they were too speculative. The moratorium spread and held for about a century, until the burgeoning of empirical research in linguistics, biology, psychology, anthropology, neuroscience, and computer science reopened the topic at the New York Academy of Sciences in 1975. Although the topic of the origin of language, like that of the origin of life and the origin of the universe, concerns a one-time, non-repeatable event remote in the past, it is possible to triangulate upon it in other ways, and one of them is computer simulation. My collaborator Angelo Cangelosi and I did some simple artificial-life simulations of the origins of language to test the hypothesis that the advent and rapid, dramatic growth and success of language -- which permanently altered the brains of our ancestors about 100,000 years ago -- occurred because of the adaptive advantages of symbolic theft over sensorimotor toil.
Simulating the Advent of Language. We created a virtual world in which little pac-man-like creatures, in order to survive and reproduce, had to acquire various kinds of categorical know-how. Their prey consisted of mushrooms, some of which, depending on their features, were toxic, and some of which were edible. So there were edible and inedible mushrooms. Other categories had to be acquired as well. Some of the mushrooms, depending on their features, had to have their locations marked, others not. So there were the markable and the non-markable mushrooms. Both these categories had to be learned through honest toil. But then there was a third category, the mushrooms to whose locations the creatures had to return. The world was designed so that the returnable mushrooms happened to be the mushrooms that were both edible and markable; hence their features were simply the conjunction of the features of the edible and the markable mushrooms.
We put two populations of these virtual creatures into competition. One population was only able to learn categories the old, time-consuming way, through trial-and-error sensorimotor induction, guided by feedback from the consequences of success and failure. The other population had two means of acquiring categories at its disposal: the old way, by sensorimotor toil, and a new way, via symbolic theft or 'hearsay'. All of our creatures, when they categorized, also vocalized in a characteristic way reflecting their act of categorization. If they ate, they vocalized EAT, and if not, not; similarly, if they marked, they vocalized MARK, and if not, not. So those that had successfully learned RETURN would vocalize EAT MARK RETURN.
This effectively gave the thieves two possible ways of learning RETURN. There was no choice about how to learn EAT and MARK, because prior to acquiring those categories, the creature had no feature detectors at all. But with RETURN, all the entrants already had the EAT and MARK detectors. So RETURN could either be learned directly the old, hard way, through honest toil, trial by trial, or the category could be 'stolen' through hearsay, by simply creating and using the RETURN detector out of the conjunction of the EAT and MARK detectors from having been 'told' that those were the features and that was the rule.
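The contrast between the two routes to RETURN can be sketched in a few lines. This is a hedged illustration only: the feature names and the detector rules below are invented for the example, not taken from the actual simulation.

```python
# Ground-level detectors, assumed already acquired the hard way, by
# trial-and-error "honest toil" over each mushroom's feature vector
# (the specific rules are invented here for illustration):
def is_edible(mushroom):      # the learned EAT detector
    return mushroom["spotted"] and not mushroom["grey"]

def is_markable(mushroom):    # the learned MARK detector
    return mushroom["tall"]

# Symbolic theft: having been "told" EAT MARK RETURN, the hearer
# builds the RETURN detector in one step, as the conjunction of the
# detectors it already has -- no further trials, no further risk:
def is_returnable(mushroom):
    return is_edible(mushroom) and is_markable(mushroom)
```

A toiler, by contrast, would have to rediscover the conjunctive feature rule from scratch, mushroom by mushroom, paying the time and error costs all over again.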
The Triumph of Symbolic Theft Over Sensorimotor Toil. Within a few generations, the 'thieves' out-survived and out-reproduced the 'toilers', so that there was nothing but thieves left. (You can ask me in the discussion period why this strategy could not be applied all the way down, in an evolutionarily stable way, to ground-level categories such as EAT and MARK. It cannot, and the reason is related to the symbol grounding problem, which you will also have to ask me about in the discussion period.)
What we concluded from these simple toy simulations was that the adaptive value of language to our ancestors had been that it made it possible for them to acquire categories -- which, remember, constitute the lion's share of our adaptive know-how -- without the risk, uncertainty and time-demands of sensorimotor induction. This advantage had proved to be revolutionary, and we all know what it ushered in: the oral tradition, writing, education, philosophy, mathematics, culture, scholarship and science. It also modified our old way of acquiring categories, for even in learning them via direct trial-and-error experience, we could now formulate and test hypotheses about the features and rules, using our newfound language of thought explicitly even when not speaking or being spoken to aloud.
The Advent of Knowing-That. Language also ushered in the special form of know-how that we call knowing-that: the know-how to name categories and to combine those category names into propositions with truth values, propositions that describe and define still further categories. This special form of know-how made categories into a commodity that could be acquired much the way physical commodities could be acquired. This did lead, among other things, to trade secrets and patents -- categories that we preferred to keep secret so that we could trade and sell something else -- but what is more remarkable is that language led mostly to a Category Commons: categories themselves -- which is to say, knowledge -- were freely shared, because they are so much more useful to everyone when made public than when kept private. Wittgenstein was right that a private language could not be invented (because there would be no such thing as error or error-correction). But even if a private language could have been invented, it would never have evolved and become incorporated into our genotype and our neural hardware, because the adaptive value of language is in category acquisition for the hearer(s) -- instruction sparing them the toil of induction. So if speakers had kept their categories private instead of describing and defining them publicly, it really would just be a language game with no Darwinian consequences.
From Praxis to Propositions Via Pointing and Pantomime. But just as we cheated -- or at least left out some nontrivial details -- in making the virtual creatures in our simulations vocalize spontaneously as they exercised their categorical know-how, so we are ducking details in referring to the originators of language as speaking and hearing. For it is highly unlikely that language was born in the oral/aural modality. The path from behavioral know-how or 'praxis' to linguistic knowing-that (which is the know-how to generate truth-valued propositions, and even true ones) has a very natural intermediary, leading from praxis to propositions, namely, pointing and pantomime. Pointing and pantomime already constitute intentional communication, but in and of themselves they are not yet propositional, hence not linguistic. Pointing and pantomime can be faithful or unfaithful to whatever they are singling out and depicting, but they cannot be true or false. For that, they first have to be weaned from praxis: their instrumentality and iconicity must be dropped or ignored, so that they can become, or come to be treated as, purely formal (as Saussure pointed out). Although I cannot give the details, I think it was in this passage from pantomime to propositionality, from instrumentality and iconicity to a purely formal notational system for representing and conveying truth-valued knowing-that, that language was actually born. But not long after that, the many obvious virtues of the oral/aural medium took over from the more awkward gestural one, facilitated by the fact that, now that the symbols were arbitrary and formal, it no longer mattered in what modality they were represented or conveyed.
Collaborative Cognition and the Category Commons. The last and probably the most important benefit of language that I will point out before closing was in making something that is today misnamed 'distributed cognition' possible. Of course there is no such thing as distributed cognition, any more than there can be a distributed migraine headache. Cognition -- the language of thought -- occurs only within individual heads. But two heads are better than one, and many heads are still better. Language made collaborative cognition possible, and with it the growth of the Category Commons, our collective, cumulative database of human know-how as well as know-that. It was the importance of category acquisition to the selfish genes of individuals that gave symbolic theft its original leverage, but once it propagated, collective benefits became possible that advantaged all of our selfish genes without necessarily handicapping or competing with anyone's individual interests.
Cangelosi, A. & Harnad, S. (2001) The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories. Evolution of Communication 4(1): 117-142.
Harnad, S. (1982) Consciousness: An afterthought. Cognition and Brain Theory 5: 29-47.
Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301.
Harnad, S. (2001) Minds, Machines and Searle II: What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press. http://cogprints.org/1622/
Harnad, S. (2001) No Easy Way Out. The Sciences 41(2): 36-42. http://cogprints.org/1624/