ABSTRACT: Behavioral scientists studied behavior; cognitive scientists study what generates behavior. Cognitive science is hence theoretical behaviorism (or behaviorism is experimental cognitivism). Behavior is data for a cognitive theorist. What counts as a theory of behavior? In this paper, a methodological constraint on theory construction -- "neoconstructivism" -- will be proposed (by analogy with constructivism in mathematics): Cognitive theory must be computable; given an encoding of the input to a behaving system, a theory must be able to compute (an encoding of) its outputs. It is a mistake to conclude, however, that this constraint requires cognitive theory to be computational, or that it follows from this that cognition is computation.
An often-used example of a function that is not constructively defined (see Fraenkel et al. 1973, p. 266ff), and hence not acceptable to a constructivist, is the function that equals 1 if, somewhere in the decimal expansion of pi, there occurs the string (say)...123456789..., and that equals 0 if that string never occurs. The trouble with this definition is that we currently have no way of knowing whether or not that particular string (or any other arbitrary one) does in fact occur in the decimal expansion of pi, and hence we don't know whether a search for it will ever halt. An outcome based on a search that may never halt is indeterminate, and hence the function in question is not a constructive one.
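The predicament can be made concrete with a short, purely illustrative sketch (in Python; the function names and the digit bound are mine, not part of any standard library). A bounded search through the decimal expansion of pi always halts and delivers a verdict; the unbounded search that the nonconstructive definition appeals to carries no guarantee of ever halting.

```python
def pi_digits():
    """Generate the decimal digits of pi one at a time
    (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def occurs_in_pi(target, max_digits):
    """Bounded search: True if `target` occurs within the first
    `max_digits` digits of pi; None (verdict indeterminate) otherwise.
    Dropping the bound turns this into a search that may never halt."""
    window = ""
    digits = pi_digits()
    for _ in range(max_digits):
        window = (window + str(next(digits)))[-len(target):]
        if window == target:
            return True
    return None
```

The bounded version is constructive: it always terminates, either with an answer or with an admission of ignorance. The function defined above, by contrast, rests on the unbounded search, whose termination nobody can guarantee.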
There is a deep link between constructibility and computability,
via the theory of Turing machines and recursive functions
(see Davis, 1958, 1965; Kleene, 1969). Computability will be
mentioned again later, but for now, the only feature of
mathematical constructivism to be carried over in
the following analogy is that of being answerable to an
explicit test as to whether something exists or whether a
procedure is well-defined. What cannot be carried over
is the unique role of the constraint of consistency (or
noncontradiction) in mathematics. For although a constructivist
may profess agnosticism with respect to unconstructed objects
whose claim to existence rests only on noncontradiction (and the
law of the excluded middle), he recognizes that the dictates of
apagogic proof are certainly not arbitrary, and that the formal
activities of nonconstructive mathematicians can hardly be said
to be empty ones.
3. Underdetermination and Unification in Science.
Science, too, must be consistent, not only in the formal sense,
but also in the sense of not contradicting data. Unfortunately,
however, the universe of things we can say that are self-consistent
and consistent with existing data is still so large
that, unlike the truths of mathematics,
the truths of science are said to be
"underdetermined": Empirical evidence is always compatible with many
rival theories. Hence, extra constraints, such as parsimony
and generality, have had to be brought in to narrow the options and to
help choose among the rival theories.
Parsimony prefers the theory, all else being equal, that posits
the smallest number of parameters.
Generality prefers the theory, all else being equal, that
accounts for the largest or most diverse body of data. Parsimony
and the size of a data-set can, in principle, be quantified,
but questions as to what constitute the boundaries
of the data domain of a theory -- in other words, how
diverse the theory should aspire to be -- are not so easily
settled. There is a definite unificationist trend in science,
toward accounting for as much data as possible by one unified
theory, even if it cuts across existing specialty or disciplinary
boundaries. Hence, it is risky to try to legislate boundaries in
advance when defining a theoretical program. However, it is clear
that some unifying theme is necessary, if only so that theorists
can come to agree that they are working on the same kind of problem.
Logical positivism is no longer considered a viable philosophical or methodological foundation for science. One of the main reasons for this is the impossibility of fully separating theory from data. What one defines as data, and how, are already infected with one's theoretical preconceptions. It is not clear to what extent, if any, this problem actually affects ordinary day-to-day scientific practice; but the theory-ladenness of data may represent yet another reason why it might not be a good idea to be too absolute about disciplinary boundaries (in terms of the kinds of data one is prepared to count as "native" to a discipline). Hence the only remaining candidates for unifying a scientific endeavor are internal ones -- internal to the theory, as opposed to the data.
Behaviorism's animus toward theory had two bases -- one valid and the other invalid, in my view -- that became inextricably conflated across the years and have yet to be properly sorted out. The valid basis was the rejection of all forms of animism as scientifically inadequate. These included any data-gathering and theory-testing using the method of subjective introspection and any attempt to explain behavior in terms of mental rather than physical causes. Behaviorism correctly recognized that behavior (and perhaps also neural activity) represented psychology's only objective data domain -- that the contents of subjective experience could count neither as data nor as theory.
This valid antitheoretical position can be summarized as the rejection of "mentalism", which had been the attempt to give a theoretical explanation of behavior in terms of what was going on in the "mind". The rejection of mentalism was overgeneralized by behaviorism, however, and applied to any attempt to give a theoretical explanation of behavior in terms of what was going on in the "head", i.e., theoretical inferences concerning all unobservable events or processes were inadmissible. This exclusion of all forms of "internalism" amounted to a rejection of theory, and was based on a seriously inadequate philosophy of science, one that, among other things, would have restricted physics, too, to observables only, hence never allowing it to develop beyond kinematics: no electrons, no quarks, no fields, no superstrings. It would even have blocked the development of the functional principles that would have been needed to make psychology the behavioral engineering science that behaviorism itself had envisioned.
So in many respects cognitive science is merely behaviorism with a theory, at last. What psychology still needs, however, is a unifying constraint that will (1) narrow the universe of theoretical possibilities to a tractable size, (2) give an indication as to what sort of science a peculiarly "psychological" one might be, and, most important, (3) determine objectively what does and does not count as theory in this field.
Something going by the name of "constructivism" already exists in psychology. Constructivists (Rock, 1980; Ullman, 1980) are those perceptual theorists who define themselves by way of contrast with the Gibsonian view that perception occurs "directly" (Gibson, 1979). Gibsonians hold that since stimulation must contain all the information that is necessary to govern adaptive perceptuomotor performance, the information must also be sufficient for performance, being merely "picked up" passively by the nervous system. The constructivists deny this, claiming that the nervous system must do some active processing ("construction") in order to generate adaptive performance. Where the truth lies in this particular disagreement is not the direct concern of this chapter (see Chapter X, section 5.1); rather, it must be pointed out that neither the Gibsonians nor the constructivists are being particularly "constructive" here, in the sense the word is used by mathematicians. In mathematics, construction is a decisive activity. One cannot be left disputing over whether there is or is not a particular string of digits in the decimal expansion of pi: One must actually come up with it (or with a finite method for coming up with it).
What could perform the function of this decisive test in cognitive science? There does exist a "construction kit" of rather recent vintage that seems able to do just that: the digital computer. But why should a piece of here-today/gone-tomorrow technology be taken seriously? Not long ago it was thought that people were like clocks or steam engines. The answer is that computers may indeed come and go; but the theory of computation (Kleene, 1969) and the theory of information (Shannon & Weaver, 1949) are here to stay, as eternal as the other platonic verities; and it is the computer as the implementation of an abstract computing device (a Turing machine) that is being proposed here as the construction kit. This is not the simplistic (and obviously wrong) idea that people are like Vaxes, but the suggestion that if one has any clear constructive ideas as to what people are like, the way to test them is to formalize them and see whether they will work on a computer.
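The "construction kit" image can itself be made concrete: a few lines of code suffice to interpret an arbitrary Turing-machine table, which is all that is meant by the computer implementing an abstract computing device. (A purely illustrative Python sketch; the encoding of states, symbols and the example machine are my own, and any equivalent encoding would serve.)

```python
def run_turing_machine(table, tape, state="start", max_steps=10_000):
    """Interpret a Turing-machine transition table.
    `table` maps (state, symbol) -> (written symbol, 'L' or 'R', next state);
    unvisited tape cells hold the blank symbol '_'."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        written, move, state = table[(state, symbol)]
        cells[pos] = written
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical example: a machine that appends one stroke to a unary numeral.
increment = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the strokes
    ("start", "_"): ("1", "R", "halt"),   # write a stroke on the first blank
}
```

So, for instance, `run_turing_machine(increment, "111")` returns `"1111"`. The point is not that this toy machine does anything cognitive, but that any clearly formalized constructive idea can be handed to such an interpreter and tested.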
Not only all psychological theory, but all scientific theory in general can be seen as modeling input/output relations. The boundary conditions and experimental preparation are the input, and the theory is supposed to predict the outcome: the experimental observations. In principle, all physical theories are computer-testable. When a set of difference equations is solved by hand, and actual values are put in, the theorist is performing the role of the computer. Indeed, in certain complex problems of astrophysics and statistical mechanics, a real computer is actually used to determine what outcomes the theory predicts, given real data as input.
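Here is a minimal sketch (my own illustration, in Python) of a physical theory cast as a difference equation and "run forward" by the computer, which simply takes over the hand-calculation the theorist would otherwise perform. The boundary conditions (initial position and velocity) are the input; the computed trajectory is the predicted set of observations.

```python
def predict_trajectory(x0, v0, k=1.0, dt=0.01, steps=1000):
    """Explicit-Euler iteration of the difference equations for a
    unit-mass harmonic oscillator (x'' = -k x):
        x[n+1] = x[n] + v[n]*dt
        v[n+1] = v[n] - k*x[n]*dt
    Input: boundary conditions; output: predicted observations."""
    x, v = x0, v0
    trajectory = [x]
    for _ in range(steps):
        x, v = x + v * dt, v - k * x * dt  # both updates use the old x, v
        trajectory.append(x)
    return trajectory
```

For example, `predict_trajectory(1.0, 0.0)` generates the familiar oscillation (approximately cos t), which can then be compared against the experimental data.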
If we call this restriction to computer-testable theories "neoconstructivism" (to distinguish it from the mathematical and the perceptual kinds of constructivism), then all the physical sciences are already clearly neoconstructive (usually trivially so, in the sense that one need not have recourse to a real computer to demonstrate neoconstructiveness in physics; rather, as in mathematics, a paper and pencil will do). Hence neoconstructivism alone cannot give the cognitive sciences a unique identity; it only answers the question of what should count as cognitive theory. Armed with a theoretical criterion, however, the cognitive sciences seem to be characterized uniquely enough by their data-domain, which can be broadly described as all human and animal and human-like and animal-like performance and competence. Yet the cognitive sciences apparently do resemble astrophysics and statistical physics more than they do other domains of physics, because the performance of organisms, unlike the mechanics of a billiard game, does not seem to be explainable without the aid of the computer. Probably the most realistic model for cognitive science is theoretical engineering: the science of getting devices to do what we can do, thereby providing a candidate functional explanation of how we do it.
6. Rival Views of Cognitive Science.
The neoconstructive criterion for cognitive theory
can be compared with some other current candidates.
It is clearly only very minimally an extensional
criterion; in other words, it does not involve listing all
the instances of what is cognitive.
Even the delineation of the
data domain as organismic and organism-like performance
is potentially so general as to
include, if necessary, billiard-ball interactions or cream
dispersing in a cup of coffee. The proposal is merely that
performance and performance capacities like those of real organisms
are what we should be investigating and attempting to model.
The objection can be raised that bipedal locomotion does not seem particularly cognitive, even though it's clearly organismic performance. A reply is that whether to proceed bottom-up or top-down (in one's theorizing) is a methodological choice, and one is free to take the lowest levels for granted, if one feels safe in doing so. But for the Gibsonians (for example) it proved rather difficult to leave locomotion out of the equation, even in accounting for the kinds of complex visual pattern recognition that no one would want to deny were cognitive (Gyr, Willey, & Henry, 1979). Moreover, it would appear to be far less justifiable to restrict cognition's phenomenal field to, say, language or language-like performance, particularly before one has any (neoconstructive) idea as to what language and language-like performance really are, and where they stop and lower-order performance begins. Some of the perplexities about the linguistic capacities of apes (Harnad et al. 1976) illustrate the problems involved in attempting to draw extensional border-lines between performances of which one has no neoconstructive understanding.
There are also rival intensional definitions of "cognitive," that is, definitions in terms of distinctive properties. Some have to do with incompletely understood concepts such as "representation" and "intentionality." "Representation" is something on whose meaning philosophers and psychologists have yet to settle. We know that it has something to do with concepts such as referring, describing and meaning, and with the relation of words, images and symbols to their respective objects. It may even have something (if only negatively) to do with representation as the concept is used in art and literature. But it is certainly something that philosophers have not yet pinned down to anyone's satisfaction. It is not clear that cognitive science needs to wait, rather than seeing how far it can get with its more noncommittal concepts of "encoding" and internal structures and processes. Equating "cognitive" with "representational" at this point would seem unproductive.
"Intentionality" is another rival candidate. It means, variously, (1) what it is that such acts and states as "believing," "thinking," "wanting," "expecting," "knowing," etc., have in common with one another and fail to share with such acts and states as "walking," "sitting" and "eating." This is said to have something important to do with the fact that (2) the words, acts and states in the first category seem to have some sort of built-in ("intended") object: One wants something, thinks something, etc. It is also regarded as significant that (3) swapping these objects (as in "I am thinking of the Morning [vs. Evening] Star") fails to remain true to my intentions (given that they are both the same star, but I don't know it, and I had the former star, not the latter, in mind) whereas with ordinary coreferential substitution ("All men are mortal" vs. "All Homo sapiens are mortal") truth value is preserved. This conviction that intentionality is the unique and distinctive "mark of the mental" has become so strong that in the parlance of some enthusiasts the word "intentional" has even replaced the word "mental" altogether (and this with a sense of having made progress!). Finally, intentionality seems to be (4) something that one can assign by way of an interpretation (Haugeland, 1978) or adopt as a "stance" (Dennett 1984, 1987): "That person (dog, machine) behaves as if he (it) thinks, wants, believes, etc." -- It is not yet clear how helpful or constructive the contribution of (1) - (4) to cognitive theory will turn out to be.
The last candidate definition is based on the explicit declaration that what is cognitive is what is computational (Pylyshyn, 1980, 1984). It could even be wrongly concluded on the basis of the material in this chapter that this last proposal would be congenial to a neoconstructivist. After all, computationality is likewise a methodological distinction, with minimal extensional commitments; and in fact it even happens to be neoconstructive! However, computationality is far too restrictive, and overly biased toward one specific theoretical approach to cognition, the symbol-manipulative approach, which is simply not the only neoconstructive possibility that exists.
To clarify this possible misinterpretation: Neoconstructivism dictates that all cognitive theories must be computable, but not necessarily "computational". Computability requires only that a theory be sufficiently specific and explicit to be formalized into a computer program that can test symbolically whether, given the kinds of inputs that characterize the phenomenon being modeled, the program indeed succeeds in generating the kinds of outputs predicted, using the kinds of principles posited in the theory. The additional constraint of computationality would count as cognitive only those successful programs that had generated their performance using symbol manipulation as their sole theoretical principle. Neoconstructivism, on the other hand, would admit as cognitive any computably successful generation of performance, even if it involves (the symbolic simulation of) transducers/effectors, other analog devices, analog/digital converters, pre-wired special-purpose modules, statistical filters, parallel processors, connectionist networks, servomechanisms, or any other trick that works. All of the latter have real physical functions that are symbolically simulable, but not symbolically implementable; that is, they are computable but not (exclusively) computational.
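The simulable/implementable distinction can be illustrated with a deliberately humble example (my own, not drawn from the text): an analog RC low-pass filter. The physical device integrates a continuous voltage; a program can only approximate that behavior symbolically, one discrete step at a time. The simulation is computable through and through, yet nothing in it is a filter.

```python
def simulate_rc_lowpass(signal, dt=0.001, rc=0.05):
    """Discrete-time symbolic simulation of an analog RC low-pass filter.
    The real device integrates continuously; here its behavior is only
    approximated, one time-step of width `dt` at a time."""
    alpha = dt / (rc + dt)
    y, output = 0.0, []
    for x in signal:
        y += alpha * (x - y)   # first-order relaxation toward the input
        output.append(y)
    return output
```

Fed a step input, the simulated output rises toward its asymptote just as the hardware's would; but no voltage is anywhere transduced. The simulation is computable without being (a) filter.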
The computationalists seem to have been overly influenced by such suggestive properties of natural language as that every "state of affairs" seems to be potentially describable (to any desired degree of approximation -- see Steklis & Harnad, 1976); or by properties of computation, such as that every "process" seems to be potentially simulable computationally (to an approximation). As a consequence, whenever it is proposed that an analog or other nonsymbolic process is going on, computationalists tend to rejoin that "then it's not cognitive" (or, more revealingly, that it's not "yet" or not "fully" cognitive). This restriction seems quite arbitrary. After all, computationality is a concept that post-dates such pretheoretic ideas as "perceiving" and "thinking." Why should the question of whether these activities are "really" cognitive now turn out to depend on whether the brain happens to have decided to implement them by the "right" means, by computationalists' lights?
But, in the end, any debate between neoconstructivists and computationalists in cognition can, like the debate between the constructivists and the Gibsonians in perception, really only be settled one way: neoconstructively. Chapters X and X accordingly present a specific neoconstructive framework for grounding higher-order cognition in elementary categorical perception. Chapter X reviews the empirical and theoretical findings of categorical perception research and Chapter X presents a 3-level model for learning and representing categories.
2. Modern complexity theory (Chaitin, 1975; Rabin, 1977) has provided informational and computational principles for quantifying parsimony.
3. A computer-science analogue of this problem is the impossibility of fully separating a computational model from its specific computer implementation. Yet in both cases (theory/data, model/implementation) the apparent absence of pragmatic consequences seems to diminish the importance of the problem.
4. Unfortunately, however, the two forms of internalism (mental and cranial) continue to be conflated today; so, along with the newfound license to posit internal processes going on in the head, a new mentalism is also creeping in, consisting of a tendency to overinterpret those inferred intracranial processes in terms of what is going on in the mind. (See Chapters X and X, and Harnad, in preparation, d.)
5. This Turing criterion is the topic of Chapter X, below.
6. Independently of my own initial proposal of neoconstructivism (Harnad 1982a) as the criterion for cognitive theory, Johnson-Laird (1983) has made a similar proposal.
7. In an important sense to be explored later in this volume, cognitivists can still be characterized as "behaviorists," in that they recognize that the organism's input and output are all that they will ever get by way of actual data-points. (Perhaps all scientists are "behaviorists" in this sense.) But cognitivists are really better described as "reconstructed" behaviorists, in the following respects:
(a) Cognitivists aspire to a neoconstructive theory to account for the (performance) data.
(b) Cognitivists can use information from introspection, from neuroscience, or from any other source, as long as it makes a constructive contribution to their explanatory theory and its capacity to account for the performance data. Moreover, in holding themselves accountable for all performance, cognitivists have no reason not to try to model its underlying subjective phenomenology -- as inferred from, say, verbal report -- particularly because successfully simulating the phenomenology would have a high presumptive likelihood of facilitating the performance-modeling. This is discussed further in Chapters X and X.
(c) Cognitivists are not constrained to the push-pull dynamics of raw performance: higher-order regularities of performance and performance capacity (competence) are perfectly respectable data.
(d) Cognitivists recognize the weakness of the behaviorist's conventional arsenal of concepts such as reinforcement and association (see Harnad 1984) from the neoconstructive standpoint. They are simply insufficient to generate the behavior capacities they purport to explain. Indeed, there is an affinity between the Gibsonians and the behaviorists, in that, although it is clear that stimuli, rewards, and punishments must carry the necessary information for adaptive performance, this fact alone (and its immediate parameters) is not sufficient to "generate" the performance (Ullman 1980). Cognitivism attempts to discover the internal structures and processes needed for a self-sufficient account of performance.
Perhaps it is the neoconstructive variety that Wittgenstein (1967) had presciently in mind when he likened the behaviorists to the intuitionists in mathematics.
8. The issue of modularity and the problem of how to "ground" symbolic function in nonsymbolic function are discussed in chapters X, X and X.
9. As in the Episcopal Church, there may emerge a "high" and "low" school of cognitivism, with the practitioners of the high holding that the low is only modeling the "vegetative" aspects of performance, and not the cognitive ones at all. It remains a practical, methodological question whether such schisms are worth encouraging, and whether there are any logical as opposed to merely ideological bases for them. Indeed, even the distinction between organismic and neural "performance" may not turn out to be a productive one for cognitive science to insist on too strongly. (These modularity issues are further discussed in Chapter X.)
10. Chapters X and X discuss symbolic functionalism vs. nonsymbolic functionalism.
11. The crucial distinction between formal/symbolic simulation and physical/causal implementation is discussed fully in Chapter X, section 2.2. I do agree with Pylyshyn (1978) that the distinction between computer simulation of cognition (which tries to model our performance "the way we really do it") and artificial intelligence (which is a kind of computational "l'art pour l'art") is a rather artificial one, based on such temporary and arbitrary factors as the lack of generality of our current models ("toy problems") and the low degree of rigor of the performance constraints we currently choose to adopt. There is at this point (largely for want of a coherent rival to computability as the means of testing cognitive theory and physical causality as the means of implementing it) no reason to doubt that all approaches will ultimately converge as we approach a complete model for the whole organism's performance (a model that can pass the "Total Turing Test"). In any case, the underdetermination of theory by data is as much a fact of theoretical life in cognitive science as it is in physical science. A great deal of ill-founded criticism of computational modeling can be traced to (1) misunderstandings about underdetermination and generality -- there may be many ways to compute a factorial, but are there many (equiparametric) ways to design a universe? or even a whole organism (see Dennett, 1978)? -- and to (2) vagueness about mechanism and explanation. Even sophisticated critics (e.g. Searle, 1980) have inadvertently built their negative cases on the transitory degeneracy of the current state of the art in this new science, taking for granted the restricted horizons of today's toy problems. Neoconstructivism is the relevant generalization for these critics to contend with, rather than the ambiguities of simulation, toy problems and symbol-manipulative modules.
(A full discussion of modularity, underdetermination and alternatives to symbol-manipulation appears in Chapter X.)
12. One can hardly persist in calling something a "trick" once it becomes sufficiently general and convergent.
13. In Chapter X, I sketch a three-level representational system to account for categorization capacity. What is called the "symbolic system" in that chapter is clearly more computational, whereas the "iconic system" is more analog. The "categorical system" mediates between the two, grounding the symbolic system in the nonsymbolic one. I see no reason, however, why the entire three-level system should not be called "cognitive." (The model in Chapter X is not formalized or computer-tested, and hence it is not yet neoconstructive; but it is clearly motivated by the goal of eventual computer-testability.)
14. Was this chapter itself neoconstructive? No, it was merely methodological and foundational. Although what psychology certainly needs most at this time -- aside from significant data -- is neoconstructive theory, this does not mean that there are grounds to stop worrying about what we are doing altogether. There is still plenty of room for foundational and pretheoretical soul-searching. For example (assuming that we are all committed to an eventual causal account of organismic function, i.e., a physically implementable mechanism), do there exist any coherent alternatives to the neoconstructive constraint on theory-building? Is the analog/digital distinction coherent? And what are the scope and limits of computational implementation in cognition, as opposed to computational simulation (i.e., how much of cognition is actually symbolic)?
Davis, M. (1958) Computability and unsolvability. New York: McGraw-Hill.
Davis, M. (1965) The undecidable. New York: Raven.
Dennett, D. C. (1978) Why not the whole iguana? Behavioral and Brain Sciences 1: 103-104.
Dennett, D. C. (1982) The myth of the computer: An exchange. New York Review of Books XXIX (11): 56.
Fraenkel, A. A., Bar-Hillel, Y. & Levy, A. (1973) Foundations of set theory. New York: Elsevier.
Gibson, E. J. (1969) Principles of perceptual learning and development. Englewood Cliffs, NJ: Prentice-Hall.
Gibson, J. J. (1979) An ecological approach to visual perception. Boston: Houghton Mifflin.
Gyr, J., Willey, R., & Henry, A. (1979) Motor-sensory feedback and geometry of visual space: a replication. Behavioral and Brain Sciences 2: 59-94.
Haugeland, J. (1978) The nature and plausibility of cognitivism. Behavioral and Brain Sciences 1: 215-260.
Haugeland, J. (1985) Artificial intelligence: The very idea. Cambridge MA: MIT/Bradford.
Heyting, A. (1971) Intuitionism: An introduction. New Jersey: Humanities.
Johnson-Laird, P. N. (1983) Mental models. Cambridge MA: Harvard University Press.
Kleene, S. C. (1969) Formalized recursive functionals and formalized realizability. Providence, RI: American Mathematical Society.
Pylyshyn, Z. (1978) Computational models and empirical constraints. Behavioral and Brain Sciences 1: 93-127.
Pylyshyn, Z. W. (1973) What the mind's eye tells the mind's brain: A critique of mental imagery. Psychological Bulletin 80: 1-24.
Pylyshyn, Z. W. (1980) Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences 3: 111-169.
Pylyshyn, Z. W. (1981) The imagery debate: Analogue media versus tacit knowledge. Psychological Review 88: 16-45.
Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge MA: MIT/Bradford.
Rabin, M. O. (1977) Complexity of computations. Communications of the Association for Computing Machinery 20: 625-633.
Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.
Shannon, C. E., & Weaver, W. (1949) The mathematical theory of communication. Urbana: University of Illinois Press.
Skinner, B. F. (1984a) Methods and theories in the experimental analysis of behavior. Behavioral and Brain Sciences 7: 511-546.
Skinner, B. F. (1984b) Reply to Harnad. Behavioral and Brain Sciences 7: 721-724.
Steklis, H. D. & Harnad, S. R. (1976) From hand to mouth: Some critical stages in the evolution of language. Annals of the New York Academy of Sciences 280: 445-455.
Ullman, S. (1980) Against direct perception. Behavioral and Brain Sciences 3: 373-415.
Wittgenstein, L. (1953) Philosophical investigations. New York: Macmillan.
Wittgenstein, L. (1967) Remarks on the foundations of mathematics. Cambridge, MA: MIT Press.