Two: Human Uniqueness and Cognitive Evolution
“And it is true that while we are embodied, and firmly rooted in the natural world we are also reflective. We can, and should, in various ways transcend and refine the given.”
Anthony O’Hear, Beyond Evolution (Oxford: Clarendon, 2002), 100
“The Imagination is one of the highest prerogatives of man. By this faculty he unites, independently of the will, former images and ideas, and thus creates brilliant and novel results.”
Charles Darwin, The Descent of Man, and Selection in Relation to Sex (Princeton: Princeton University Press, 1981), 45
In the first chapter I argued for a postfoundationalist approach to interdisciplinary dialogue, which implied two important moves for theology. First, as theologians we should acknowledge the contextuality and the epistemically crucial role of interpreted experience, and the way tradition shapes the values that inform our reflection about God and what some of us believe to be God’s presence in the world. Secondly, however, a postfoundationalist notion of rationality in theological reflection should open our eyes to an epistemic obligation that points beyond the boundaries of the local community, group, or culture, toward a plausible form of interdisciplinary dialogue. The overriding concern here is as follows: while we always come to our interpersonal and cross-disciplinary conversations with strong personal beliefs, commitments, and even prejudices, a postfoundationalist approach should enable us to realize that there is also much that we share, and help us identify these shared resources of human rationality in different modes of knowledge so as to reach beyond the boundaries of our traditional communities in cross-contextual, cross-disciplinary conversation.
Taking seriously context and interpreted experience also makes it virtually impossible to talk about interdisciplinary theology—or any kind of public theology, for that matter—without first talking about the role of tradition in the task of theological reflection. The philosophical and epistemological challenges that undergird this kind of claim, and the very basic fact that epistemic and moral traditions shape our patterns of research in both science and theology, will be of central importance for the very specific case study in interdisciplinary dialogue that I am arguing for here. I have argued that in both theology and the sciences the following would be true.
First, we always relate to our world through interpreted experience only. As such we have no standing ground, no place for evaluating, judging, and inquiring, apart from that which is provided by some specific tradition or traditions. In this sense interpretation is at work as much in the process of scientific discovery as in different forms of knowledge; it goes all the way down and all the way back, whether we are moving in the domain of science, morality, art, or religion.
Second, because we cannot think and act apart from our embeddedness in tradition(s) and worldviews, our epistemic task is to stand in a critical relation to our tradition(s) and worldviews. This requires a conscious and deliberate step beyond the confines of particular traditions and disciplines, and implies that the task and identity of a discipline is definitively shaped, not only by its location in the living context of specific tradition(s), but also by the more comprehensive, transversal space of interdisciplinary reflection.
In chapter 1, I argued that the problem of interdisciplinarity is the first clear theme that could be reconstructed from Lord Gifford’s charge to his lecturers. A second theme emerges in the issue of human distinctiveness, or human uniqueness, and along with this the challenge of finding a possible shared interdisciplinary research trajectory on this very specific topic for theology and the sciences. The problem of human uniqueness, of course, must have existed for as long as humans have found it necessary to distinguish themselves from the rest of the living world. As human beings, it seems, we are doomed forever to agonize over what it means to be human (cf. Tattersall 2003: 232). Some, like historian of science Robert Proctor, see the idea of “humanness” as a notion we invented to elevate ourselves over other creatures, and it usually has as much to do with the myths we create as with the empirical realities that science focuses on. As such we humans seem to be animals with an odd capacity for self-aggrandizement, and one common, shared way in which we are all human is that we constantly involve ourselves in exercises of both self-definition and self-deception (cf. Proctor 2003: 228). On one level our remarkable capacity (some would call it mania) for self-definition could no doubt also be seen as one of the crowning achievements of our species. No one single trait or accomplishment, however, should ever be taken to signal definitively what it means to be human. Moreover, we will soon see that whatever we define as our true “humanness,” or even our human uniqueness, ultimately reveals a deeply ambivalent moral choice, for we are not just biological creatures, but as cultural creatures we have the ability to determine whom we are going to include, or not, as part of “us” (cf. 228f.). 
Therefore, talking about human uniqueness when defining ourselves implies a crucially important moral dimension precisely because the inclusion or exclusion of others as “fully human,” or not, gives shape to the social and cultural contexts we create and experience.
Ian Tattersall has argued recently that from a scientific point of view, the problem of human uniqueness may have seemed a relatively simple one, at least as long as our closest known relatives were the living apes, for what makes us human is simply whatever makes us different from the great apes (cf. Tattersall 2003: 232). In recent years, of course, it has become increasingly difficult to pinpoint what exactly this difference is, especially in light of the quite stunning 99 percent genetic similarities between humans and chimpanzees. All this is complicated even further because we now know that humans had an evolutionary past of more than 5 million years, containing a great diversity of now-extinct hominids that were much more closely related to us than the living apes are, but that still seem to be somehow different from us. In this work I will follow Tattersall and other paleoanthropologists in using the term “human evolution” in the quite specific sense of covering the history of the hominid family, excluding the great apes. Furthermore, what is highly significant, and will most certainly have ramifications for an interdisciplinary study such as this, is that a paleoanthropologist like Tattersall regards only living and historic Homo sapiens (and those fossils that were anatomically and behaviorally like them) as “fully human” (cf. 233). In addition, definitions of what it means to be “human” or even “fully human” have not only shifted and changed with time but have also been deeply influenced by reigning cultural ideologies and worldviews, thus exposing science itself as deeply influenced by worldviews and moral perspectives. Center stage here will be the timing of human evolution, that is, the antiquity or recency of the emergence of creatures called Homo sapiens, who are, of course, none other than us.
With this it already becomes clear that, as far as cognitive matters go, the biological evolution of our capacity for rational thought may not be the only issue we have to consider seriously when trying to define “human uniqueness.” Certainly our various capacities for intelligent reflection and action have an evolutionary basis, but the way we use these capacities on a cultural level seems to be more about complex rational selection than natural selection (cf. Rescher 1990: 6). Most importantly, the problem of cognitive evolution will challenge us by raising the question of how rational selection is superimposed on natural selection. Against the background of all the millions of species that have come and gone over the millennia, it is still a remarkable fact that a complex transmissible culture developed in only one of them. In this one it developed explosively with radical innovations. As I briefly mentioned in chapter 1, there seems to be only one line that leads to persons, to self-awareness and consciousness, and in that line the steady growth of cranial capacity makes it difficult to think that intelligence was not being selected for. And at exactly this point, evolution by natural selection passed into a higher level of complexity as nature transcended itself into culture (cf. Rolston 1996: 69).1
With human consciousness, conscious self-awareness, and culture, radically new elements like conscious experiences have emerged, and along with thoughts have also come values and purposes and ultimately a propensity for rational knowledge (cf. van Huyssteen 1998: 38). It is precisely in an attempt to understand our ability to cope intelligently with an increasingly intelligible world through knowledge that the impact of the theory of evolution is felt far beyond the boundaries of biology. The emergence of new phenomena at different levels of scale and organizational complexity in nature necessarily entails the emergence of new processes and laws at these levels. Evolution in nature thus reflects a succession of new strata of operational complexity. New products and processes constantly develop from earlier modes of organization, bringing new orders of structure into being. Consequently, the emergence of intellectual and psychological processes (and the accompanying realms of meaning and purpose) is simply another step in the established developmental process of increasing functional complexity. This “purposive intelligence” could be seen as a novel phenomenon that emerges at a new level of operational complexity. On this view, then, having a causally productive account of the biological origins of our mental processes differs markedly from actually having reflective and experiential access to what we are thinking about. Coming to grips with cognitive issues like meaning, intention, and purpose means understanding them. And understanding them means also performing them, and performing them requires having the kind of mind that has access to those particular and unique mental experiences (cf. Rescher 1990: 122).
In this chapter I will quite specifically ask how diverse disciplines like contemporary paleontology and evolutionary epistemology may support or conflict with our commonsense notions of human uniqueness. First, we will look at how notions of humanness in paleontology have shifted and changed with time, and how cultural ideologies and moral worldviews have deeply affected notions of human distinctiveness. This will confirm not only that ideologies and worldviews deeply influence science, but also that, ironically, well-intentioned moral perspectives can at times actually constrain scientific development. Secondly, I will argue that Charles Darwin’s views on human identity and human nature still function as the canonical core of the ongoing discussion on human uniqueness. It is Darwin’s views on human distinctiveness, moreover, that very specifically are still shaping our views on the evolution of human cognition. Thirdly, I will argue that the epistemic implications of these views on the evolution of human cognition are today most clearly represented in contemporary evolutionary epistemology. Evolutionary epistemology’s focus on the evolution of human cognition will not only reveal embodied human cognition as a mediator between biology and culture, but will link the question of human uniqueness directly to embodied consciousness, aesthetic imagination, moral awareness, and the propensity for religious belief.
Human Distinctiveness in Paleontology
The intriguing fact that in paleoanthropology definitions of human uniqueness have changed with time and are deeply influenced by worldviews and ideologies is discussed by historian of science Robert Proctor in an important recent article in Current Anthropology (Proctor 2003). Proctor succeeds admirably in contextualizing and historicizing research on human antiquity and recency, and convincingly shows how much this subject matter is inextricably interwoven with seemingly “extrascientific” factors (cf. also Hochadel 2003: 231). Like all our disciplinary ventures and reasoning strategies, the development of paleoanthropological ideas should always be understood in the context of their times, i.e., within the historical-cultural network of ideas in which they are, or were, embedded (cf. Tattersall 2003: 233). One such seminal idea that has often been greatly influenced by the moods and ideologies of the times is the origin and timing of human evolution. There is, of course, still radical disagreement over the timing of human evolution, understood as the coming-into-being of the language-using symbolic cultural creatures we are today (cf. Proctor 2003: 212). Conceptions of human antiquity have changed rather radically over time, with the most recent evidence and opinion pointing to the relatively recent emergence of consciousness, language, kindled fire, compound-tool use, and other aesthetic, creative signs of human symbolic intelligence.
One of the most fascinating historical examples of the influence of “extrascientific,” political or moral views on the definition of human uniqueness is found in the influence of post-Holocaust definitions of humanness in paleoanthropology. For many years after the horrific events of World War II it was fashionable, and politically correct, to project “humanness” onto every early hominid discovered by paleontologists—even Lucy, an australopithecine female, one of our “oldest ancestors,” and other earlier hominids were granted humanity. Today, however, it is more common to see the australopithecines as more chimplike, while distinctions between anatomically modern Homo sapiens and behaviorally modern Homo sapiens have become commonplace. At the same time, paleoanthropologists are also arguing against seeing our closest human relatives, the Neanderthals, as “fully human” (cf. Proctor 2003: 212; also Tattersall 1998). Proctor has provocatively called this recent trend the “dehumanizing” of early hominids, a transformation clearly visible in the separate sciences of archeology, paleontology, and molecular anthropology (cf. 2003: 213). But how important is this so-called dehumanizing of early hominids for understanding humanness in our own species?
A. Human Uniqueness as a Moral Issue
The idea that humanness is a relatively recent phenomenon certainly includes both anatomically modern Homo sapiens, in a biological sense, and even more recently, behaviorally modern Homo sapiens, in a cultural sense, taking into account the best-established dates for the first forms of human ornamentation, abstract or representational art, compound-tool use, deliberate grinding and polishing, and similar signs of human symbolic intelligence (cf. Proctor 2003: 312). What is fascinating to observe, however, is that the so-called dehumanizing of early hominids, by focusing on the recency of the humanness of Homo sapiens, has made earlier hominid diversity acceptable and even politically correct. This important change in perspective can certainly be seen as a response to new discoveries in these fields. It is also, however, a distinct move away from the immediate political and moral influence of post-World War II and post-Holocaust understandings of race and brutality. On this view conceptions of hominid and racial diversity were deeply intertwined, and one impact of racial liberalism after World War II seems to have been a delay in the recognition of the richness of fossil hominid diversity as a result of fears of excluding and “dehumanizing” one or another now-extinct hominid from the so-called family of man. For this reason there seems to be an inherent arbitrariness in deciding when “they” became “us,” and precisely because of this the question, When did humans really become human? must also be seen as, among other things, a very specific moral question (cf. Proctor 2003: 213).
Against this background Proctor has shown clearly why, following the 1950 UNESCO Statement on Race, which rightly rejected race as an unscientific, social myth, the idea of “humanness” was pushed back so far in time that some scholars even suggested that Ramapithecus, a creature that lived around 14 million years ago, should be seen as a hominid and a tool user, implying that this creature was “human” in some deep and inclusive sense (cf. 214). By the mid-1970s this approach was widely accepted, but this kind of reigning consensus that equated “hominid” with “humanity” has significantly broken down in the past two decades. Very important also, and highly publicized, is the discovery of biological recency, i.e., the idea that humans share with chimpanzees a common ancestor who lived as recently as 5 million or 6 million years ago. This close temporal kinship was not discovered until the 1960s, and not widely accepted until the 1980s, by which time it was also shown independently by Richard Lewontin that genetic differences between races are minor and racial differentiation cannot be very old (cf. Proctor 2003: 214; Lewontin 1974: 152-57). In this way contemporary scientists have successfully argued for both the animality of our humanity and the triviality of racial distinctions (cf. Proctor 2003: 214). The fascinating result of this research was that both the break between humans and the great apes and the separation of races from one another were thus diminished, while at the same time human uniqueness was ever more carefully defined. This is illustrated well when we take a closer look at the origins of the idea that human evolution is a truly recent phenomenon, and the clear trend in the past few decades to push forward the dates for many of the qualities we often regard as central to being human, qualities like language ability, control of fire, consciousness, and ritual behavior. Early hominids are now actually seen as “less human” because some of the earliest dates for some of the traits seen as typically “modern” have not held up to scientific scrutiny, especially data from archeology, paleontology, and molecular anthropology (cf. 214).2
The paleontological argument focuses specifically on the crisis caused by fossil hominid diversity. In the 1960s and 1970s spectacular South African and East African hominid fossil finds showed clearly that more than one species of hominids must have coexisted at many points in the course of hominid evolution.3
Many paleoanthropologists today place the total number of hominid species at about twenty, divided into four or five distinct genera, two of which are Homo. Hominid diversity seems to have peaked about 2 million years ago when three, four, five, or possibly even more separate hominid species coexisted on our planet, all in East Africa (cf. 215). All this makes the present state of affairs, where there is only one surviving species, namely, us, a very unusual situation in the 5-million-year span of hominid evolution. In fact, from this particular scientific point of view we humans are indeed “alone in the world.”
Of course, in nature the persistence of only one species, and of one successful lineage, is rare, and is a distinction that Homo sapiens share with the aardvark (Lewin 1993: 5). Modern humans are the sole surviving representatives of the hominid lineage, and as a result we now seem to lack the imagination to think of ourselves otherwise. The trappings of culture and the world we humans create through language, consciousness, and the force of mythology are so powerful and all-enveloping that by definition they seem to exclude the possibility of sharing such an adaptive niche with another species like us. We have to realize, therefore, that whatever we mean by “human uniqueness” today, that which separates humans from other animals, is something that might not have been conceived of in this way earlier in human prehistory. Any contemporary notion of human uniqueness thus reflects an acknowledgment that this “uniqueness” is a product of the evolutionary process, or as Roger Lewin forcefully puts it, it reflects “the accretion over time of powerful adaptations by a bipedal ape” (cf. 5-11).
Against this background the importance of the moral dimension of our decisions for defining human uniqueness becomes even more apparent. Proctor’s argument, that the earlier hominid fossil diversity was initially not accepted without a struggle, thus points directly to the resistance of the liberal antiracialist climate of the post-Auschwitz era, when it was dogmatically assumed that only one hominid species could exist at any given time. As we just saw, this was tied to the reevaluation of race in the early post-World War II era, when a broad cultural consensus began to emerge that all humans living today are equal in cultural worth, part of the “family of man,” culminating then in the 1950 UNESCO Statement on Race, which correctly branded race as an “unscientific” category and humankind’s “most dangerous myth” (cf. Proctor 2003: 215). In the 1950s and 1960s the hypothesis that there was only a single species of humans present on earth at any time envisioned a linear nonbranching evolutionary sequence according to which, as Proctor puts it, Australopithecus begat erectus, erectus begat Neanderthal, and Neanderthal begat sapiens. In this climate it was very difficult, and politically incorrect, to accept the rich diversity of the fossil hominid history. What happened, however, was that not just racism but also racial diversity became unfashionable after the revelation of the crimes of the Nazis, with the resultant worldwide and important campaigns to end racial discrimination in all its many forms. As a result, fossil hominid diversity was deliberately underplayed as attitudes toward the ancestral (or extinct) hominid “other” got caught up in race relations (cf. 221).
The single-species hypothesis was dealt its first solid blow in 1959, when Mary Leakey discovered the 1.8-million-year-old Zinjanthropus at Olduvai Gorge in Tanzania, a fossil with hyper-robust features, now known as Australopithecus boisei (cf. 221f.). Homo habilis, found at Olduvai in the early 1960s, further undercut the assumption of a single-stalk, nonbranching evolutionary tree. Homo habilis was clearly more humanlike than Australopithecus but considerably older than had previously been imagined for our genus, and was the first real evidence that Homo must have lived at the same time as australopithecines. The idea that apelike hominids were not just replaced by more humanlike hominids, but that multiple hominid genera coexisted, took some time to assimilate. The final “nail in the coffin,” to quote Proctor directly, came when Richard Leakey announced the discovery of a Homo erectus skull old enough to have coexisted with Australopithecus boisei, followed by the discovery that there were at least two kinds of Australopithecus (cf. 222).
Since the 1970s the trend has been to argue that hominids prior to Homo sapiens were not, after all, as “human” as once thought. This argument was accompanied by an increasing appreciation for the fact that it might not be such a bad thing not to be fully human, and also a growing sense that it is not really racist at all to believe that nonsapiens hominids were radically different from us. This overview of shifting perspectives of how scientists have viewed hominid history not only confirms how ideologies and worldviews influence science, but is also a fascinating example of how a well-intentioned, highly moral viewpoint can curb and restrain scientific development. Hominid diversity became more acceptable when purged of its earlier racist overtones, and because of this difference finally became something that could be celebrated (cf. 222f.).4
The question of hominid evolution and the likelihood of the emergence of some form of “human uniqueness” in Homo sapiens also leads to another important set of questions. For most paleoanthropologists the emergence of a gulf between humans and other animals was in no sense inevitable. Evolution is generally seen as a contingent process of the moment, responsive to prevailing circumstances, especially those of climate and environment, and different conditions would have led to a different, subsequent evolutionary history. On this dominant view human beings, like all species, also seem to be the product of historical contingency. Retrospect lends the evolutionary process the appearance of inevitability, but the final pattern we are trying to reconstruct and understand is the result of environmental circumstances as they happened to have occurred (cf. Lewin 1993: 12). There are, however, also important and challenging exceptions to this view, most notably proposed in the recent work of scientists like Ian Stewart and Simon Conway Morris.
In his Life’s Other Secret: The New Mathematics of the Living World (1998), Stewart probes the relationship between intelligibility, evolution, and the mathematical order of the world of nature. Pointing to the way in which mathematics may describe the structure and evolution of life, Stewart in this book invites us to enter a world “deeper than DNA.” He argues forcibly against biological or genetic reductionism, but not along the well-known lines that many scientists, philosophers, and theologians have followed in challenging the genetic determinism presented by much of neo-Darwinism and sociobiology. He argues that a world where everything blindly obeys instructions coded in its DNA is precisely what science does not find. DNA was indeed the first secret of life discovered by science, but Stewart wants to turn our attention to life’s other secret—the universal mathematical principles of growth and form that even DNA exploits (cf. 1998: 93).
Stewart places his argument within the broader context of the important notion of emergence. We know that our universe obeys simple rules, the laws of nature. We also know, however, that life behaves in ways that do not seem to be built explicitly into those rules. Life is flexible, life is free, life seems to transcend the rigidity of its physical origins. And it is this kind of transcendence that is called “emergence.” Emergence is not, however, the absence of causality; rather, it is a web of causality so intricate that the human mind cannot grasp it (cf. 7). Any attempt to understand life, then, is ultimately incomplete without an effective theory of emergent features.5
In the end Stewart reminds us that the organic world is just as mathematical as the inorganic world, but that the mathematical basis of living things is just more subtle, more flexible, and more deeply hidden (cf. 14). We should not, however, be naive about the expression of mathematics in the living world: we should not expect life to display only mathematical patterns, and we most definitely should not expect to see pure mathematical patterns, undecorated by genetic and evolutionary thinking (cf. 23). Genes fit into this picture by adding significant flexibility to growth and form because they control and select the physical patterns that an organism needs. In this sense physics imposes constraints on what biology can do, and in this sense genes are not the laws of life but are the means through which these laws operate.
It is here that Stewart identifies the reason genes should not be called the “key to life”: they are a key, and an enormously important one, but behind them lies something much deeper—the true laws of biology, the mathematical rules into which the genetic code is plugged (cf. 25ff.). This thesis forms the basis for much of the argument in his book: there is mathematics at every level of life, and it concerns not just the growth and forms of organisms, but also their molecular bases, their microstructures, their control systems, their movements, their patterns, their behaviors, their means of communication, their relationship with each other and with their environment, and the historical paths by which they have evolved. The role of genetics in the development of organisms can therefore be understood only if we also bear in mind the effect of physical constraints. These constraints are not so much limiting as liberating; what we do not see is a world where everything blindly obeys instructions coded in its DNA. Genes, therefore, are not a blueprint, but should rather be seen as something like a recipe: a cell carries out its genetic instructions; the laws of physics and chemistry produce certain consequences; and only when you put the two together do you get an organism (cf. 88).
Stewart also introduces the concept of phase space as the heart of this argument. Phase space is an image for the fact that every event that does happen is surrounded by a ghostly halo of nearby events that did not happen but could have. Phase spaces are large, since they comprehend a wide range of all possibilities. So if a rule system is sufficiently rich, then all sorts of possibilities lurk within its phase space. For Stewart this idea reveals the true significance of mutations in evolution—they do not just make evolution possible, they also enable the system to explore its phase space (cf. 118f.). The role of selection also now becomes clearer, since it makes the exploration of this space very efficient. If all that happened were just random mutations, “the system would wander around in its phase space like a drunkard, tottering one step forward, two steps back” (118). With selection, however, bits of phase space that do not work (i.e., that fail to promote fitness) are eliminated. Selection thus helps the system to home in on the most interesting regions of phase space, the places where useful things happen, and which are the central features of the evolutionary landscape.
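Stewart’s image of the drunkard’s walk can be made concrete in a toy simulation of my own devising (it is an illustration, not Stewart’s model): a one-dimensional “phase space” with a single fitness peak, explored first by unguided mutation and then by mutation filtered through selection. The fitness function and step rule here are hypothetical, chosen only to make the contrast visible.

```python
import random

random.seed(0)  # make the toy run reproducible

def fitness(x):
    # A hypothetical one-peaked "fitness landscape": the optimum sits at x = 50.
    return -(x - 50) ** 2

def random_walk(steps=1000, start=0):
    # Mutation without selection: every step is accepted, so the system
    # "wanders around in its phase space like a drunkard."
    x = start
    for _ in range(steps):
        x += random.choice([-1, 1])
    return x

def walk_with_selection(steps=1000, start=0):
    # Mutation plus selection: steps that reduce fitness are eliminated,
    # so the walk homes in on the useful region of phase space.
    x = start
    for _ in range(steps):
        candidate = x + random.choice([-1, 1])
        if fitness(candidate) >= fitness(x):
            x = candidate
    return x

print("unguided walk ends at:", random_walk())
print("selected walk ends at:", walk_with_selection())
```

On virtually any run the selected walk settles on the peak while the unguided walk ends somewhere arbitrary, which is exactly Stewart’s point: selection does not replace random mutation but prunes the regions of phase space that fail to promote fitness, making the exploration efficient.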
Stewart’s research here interestingly dovetails with that of paleontologist Simon Conway Morris, whose work on the Burgess Shale in British Columbia has placed the history of life in a new set of contexts and so by implication has shed new light on our place in the history of evolution. The Burgess Shale, with its remarkable richness of fossil remains, has become an icon for every student of the history of life, and Conway Morris guides us through this landscape and its significance, ultimately arguing for a radical reconsideration of the whole concept of evolution in the Darwinian framework.
In his Crucible of Creation: The Burgess Shale and the Rise of Animals (1998), and more recently, Life’s Solution: Inevitable Humans in a Lonely Universe (2003a), he argues that although the Darwinian framework provides the logical underpinning to explain organic evolution, the actual pattern of life we observe may require a more complex set of explanations. Conway Morris also famously engages the work of Stephen Jay Gould, as well as “hard” or ultra-Darwinists like Daniel Dennett and Richard Dawkins.6
Conway Morris starts his argument against Gould by developing his own interpretation of evolution and evolutionary continuity. Although there certainly is an evolutionary continuity in the history of life, he argues, nearly all the species that have ever lived are currently extinct and entire ecosystems have also vanished. In these past worlds there was much that was novel and has no counterpart today. But much is familiar too. This has not so much to do with evolutionary continuity as with the phenomenon known as convergence (cf. 1998: 13). Conway Morris claims that there has been no systematic, focused study of convergence, and of the constraints of form and pattern in evolution. One reason for this is that it is often just taken for granted. It does hide, however, a circular argument in which scientists can easily become trapped: Are organisms similar because they have converged or because they are the descendants of a common ancestor?
Conway Morris’s point is that whichever way one looks at it, some form of order, as well as distinct convergent features, has almost invariably emerged in the evolutionary process. This becomes a central feature of his argument, and he goes on to claim that this effectively undermines Gould’s argument on the role of contingent processes in shaping the tree of life. In his words: “Contingency is inevitable, but unremarkable. It need not provoke discussion, because it matters not. There are not an unlimited number of ways of doing something. For all its exuberance, the forms of life are restricted and channeled” (13).
This discussion on construction and constraint is especially important for the very specific problem of the emergence or rise of human intelligence. For Gould, if the history of evolution were to be repeated, the likelihood of humans again evolving is virtually nil, and humankind is just a quirky, evolutionary accident. Conway Morris makes the point that what is important here is not so much the origin, destiny, or fate of a particular lineage, but the likelihood of the emergence of a particular property, like human consciousness (cf. 14). Here the reality of convergence suggests that human intelligence would almost certainly always emerge,7 and we are in fact, as the subtitle of his latest book suggests, “inevitable humans in a lonely universe.” Thus, Conway Morris makes an important argument against Gould’s overstatement of the role of contingencies in evolution, and argues that the constraints we see imposed on evolution suggest that underlying the huge diversity of forms there is in fact an interesting predictability.
As for Homo sapiens, we may be the product of evolution, but we certainly seem to have the ability to transcend our biological origins. Whether we like it or not, we are a unique species, with unique responsibilities (cf. 218). What this finally implies, as we saw with Stewart, is that in evolutionary development DNA is not the sole determining factor of human distinctiveness. It exists in interaction with the spectrum of (restrained) possibilities provided by the laws of nature. Ultimately, the arguments of both Conway Morris and Stewart imply that when we go beyond biological evolution—beyond mathematics and biology, beyond even the natural sciences—to richer notions of culture, the notions of phase space and convergence should include not only human self-awareness, creative intelligence, and consciousness, but also the fact that we actually use our conscious, self-aware minds in creative new ways to do uniquely human things in art, science, and religion. As we will soon see, precisely this argument has recently been developed by Anthony O’Hear.
The burning question that now demands our attention is, what does the emergence of this kind of complexity tell us about the actual process of cognitive evolution in the human species? The evolution of the human brain can certainly be seen as nature’s struggle to provide the machinery of information management and control needed by creatures of increasing versatility (cf. Rescher 1990: 108). The interesting question, certainly for those of us who are theologians, now becomes the following: If ultra-Darwinists are correct, why would this amazing brain, so intricately developed through a stunning, interactive evolutionary process, so massively deceive us only where the propensity for religion is concerned? This is especially pressing if we keep in mind that once intelligence is let in the door, it gets, in Rescher’s words, the “run of the house,” and becomes a catalyst for further development simply because intelligence is, as it were, developmentally self-energizing (111). To this important point I will return after an analysis of contemporary evolutionary epistemology.
Biological accounts of the origins of human intelligence and rationality can therefore effectively explain the origins and development of mental operations, but they cannot make their experiential character intelligible. What Rescher argued earlier in another context can now tentatively be suggested for the emergence of religion and religious faith: the existence of mental functions that lead to the creation of epistemic values like meaning and purpose can certainly be explained in terms of evolutionary principles (as can the propensities for religion and religious faith), but their qualitative nature is nevertheless something that can be adequately comprehended only “from within,” from a performer’s rather than an observer’s perspective (cf. 122f.). What this implies is that we cannot in any way reduce the “inner dimension” of intentionality, purpose, faith, and meaning—always embedded in broader cultural contexts—to an evolutionary account of physical processes. Biological evolution tells us how human minds arise and come to acquire their talents and capacities, but what we do with them is explained only by cultural evolution. Similarly, biology can explain the emergence of the human mind and of that unique ability we call rationality, but the ways in which we use this ability for rational thought lie beyond the scope of biology and of evolutionary explanations. A Darwinian account of the development of human rationality does leave open the possibility of a more complex understanding of notions like purpose and meaning, and therefore what we call religion or religious faith. If in fact there are limitations to evolutionary explanations, then a Darwinian account of the origin of mind cannot exhaust the understanding of human consciousness, and of traits like intentionality and purpose, because intrinsically different issues are at stake here. An acceptance of the biological origins of human rationality should leave ample scope for the development of meaning, values, and purpose in the broader cultural domain of our thought and action (cf. 126).
B. Human Uniqueness and Hominid Evolution
As for actual human origins, scientists are now arguing that molecular evidence indicates that the hominid group was established about 5 million to 6 million years ago, although the earliest putative hominid fossil (a small fragment of a cranium) is just 5 million years old. Indeed, the evidence is extremely fragmentary until we reach a little in excess of 3.5 million years ago: the jaw fragments and famous footprint trail at Laetoli, Tanzania. The record begins to improve substantially with material from the Hadar region in Ethiopia, the Lake Turkana sites in northern Kenya, Olduvai Gorge in Tanzania, and various famous South African sites like the Swartkrans and Sterkfontein caves. It is significant that virtually all hominid fossil remains that date earlier than about 1 million years ago have been found in Africa (cf. Lewin 1993: 15ff.).
The primary hominid adaptation, in the sense of a true evolutionary innovation that established the group, and that today is of supreme importance in discussions about human uniqueness, was bipedalism. Bipedality was most probably an adaptation for efficient locomotion. The first hominid species might well have been hominid only in the way it stood and moved. However, in the first known hominid species, Australopithecus afarensis, dated to 3.75 million years ago, the dentition already differs from that of the ape. It is humanlike to the extent of displaying relatively small canines, a thick enamel layer, and cheek teeth better designed for grinding tough material than for dealing with fleshy fruit (24). The overall appearance of the head is apelike, but the fossilized leg and pelvic bones indicate a strong adaptation to upright walking; still, this creature was very much a bipedal ape, though not yet completely so.8
After this a pronounced second shift occurred. This novel adaptation can be described as a large-brained, small cheek-toothed, bipedal ape, the true beginning of the genus Homo. The earliest known species to follow this adaptation was Homo habilis, the oldest fossils of which are close to 2 million years old. Homo erectus followed, beginning about 1.7 million years ago. The new anatomy probably involved a dietary shift that included meat. With the emergence of Homo, the hominid group was at its most densely branched, and at this time at least three species coexisted (cf. 28).
From the limited amount of skeletal evidence available, there is no question that by 2 million years ago all hominids were fully capable bipeds: the overall shape of the pelvis, the angle of the thigh bone, and the platform structure of the feet attest to that. One of the most spectacular discoveries of the century was a 1.6-million-year-old Homo erectus youth from west of Lake Turkana, the so-called Turkana Boy. Although powerfully muscled, this creature was tall and slender, having shed all apelike proportions, and the newly evolved, slender build was clearly an adaptation to greater activity, including the ability to be an effective runner (cf. 28). After about 1.5 million years ago, the two types of hominid adaptation coexisted, and after that a new trend was developing: australopithecines were becoming extinct, with the extreme version (Australopithecus boisei) persisting the longest. Most anthropologists would argue that by about 1 million years ago, only one species of hominid existed in Africa: Homo erectus, the species that eventually expanded into the lower latitudes of the Old World.
About half a million years ago there began to appear hominids that bore Homo erectus features together with certain modern ones, including expanded brain size. This new form of hominid, specimens of which have been found in Africa, Europe, and Asia, is generally known as “archaic sapiens.”
The use of this term indicates an uncertainty by anthropologists as to the status of these individuals: they were neither Homo erectus nor Homo sapiens, but may have been a transitional form between the two (cf. 29). The term “archaic sapiens” is more a descriptive shorthand than a formal classification. The crucial problem for paleoanthropologists, however, was the following: What exactly was the evolutionary trajectory along which our ancestors eventually became modern? And was it a steady, gradual, accumulative process, or a relatively sudden, late transformation? The fossil record—as always, rather patchy—shows that Homo habilis may have appeared first around 2.5 million years ago, but the first good fossil evidence is a little less than 2 million years old. The transition from habilis to erectus involved a slight enlargement of the brain and a change in facial features, including the development of prominent ridges about the eyes, the browridges. Once this configuration evolved, it persisted throughout the history of the Homo lineage and can clearly be seen as a stable adaptation (cf. 30).
With the origin of modern humans, however, there was a change in skeletal build, not so much in shape as in strength. Homo erectus bones are very much the same shape and size as those of Homo sapiens, but are more thickly buttressed. In other words, modern humans were physically active, like Homo erectus, but not quite as strong. At the same time, increase in brain size was quite prominent. To evolutionary biologists the trajectory of evolutionary change, gradual versus punctuational, or continuist versus discontinuist, has been the subject of lively debate, and still dominates the discussion on human uniqueness. Those who favor gradual change see evolution as a continuous response to selection pressure. Those who emphasize sudden, punctuational change interspersed between periods of stasis argue that factors relating to evolutionary development may constrain evolutionary responses to selection pressure, so that change, when it comes, comes rapidly. The latter pattern, proposed in 1972 by Niles Eldredge and Stephen Jay Gould, is known as punctuated equilibrium. Lewin has argued persuasively that two decades of debate and empirical study clearly indicate that both punctuational and gradual modes of evolutionary change occur, though there are still differences of opinion as to their relative importance (cf. 31f.).
Importantly, though, as Homo erectus expanded from Africa into the Old World, tool technology naturally went with it. It would be the evolution of language, however, and not just the development of technology, that would provide the evolutionary foundation upon which natural selection would build the bigger brain. In fact, some scholars believe that spoken language reaches back to the beginning of the Homo lineage and was the prime cause of brain expansion (cf. 32). This position obviously supports the gradualist evolution of language capacities, and contrasts with the punctuationist view that language capacity was the defining element in the biological shift to modern humans, and happened late in this evolutionary process. This strikingly reveals how different scholars may place different interpretations on the same evidence because of their differing theoretical preconceptions. But what are the dominant interpretations of human evolution today? Two of the strongest contenders are the so-called Out of Africa hypothesis and the Multiregional Evolution hypothesis.
We can come to terms with these very different theories more easily if we first look briefly at the arrival of the Upper Paleolithic in western Europe. About 40,000 years ago the long-established and relatively limited flake technology of the Middle Paleolithic was suddenly replaced by an exquisitely refined and rapidly evolving blade technology of the Upper Paleolithic, which also included extensive use of bone and antler as raw material. In addition, evidence of body ornamentation began to emerge, as did spectacular artistic expression on cave walls. In Lewin’s striking words, the Upper Paleolithic people, modern humans, had apparently burst on the scene with dramatic effect, leaving the less refined Middle Paleolithic people behind (cf. 66). If, however, the origin of modern humans is the real issue, one may have to go back much further in history. The archeological signal in Europe 40,000 years ago is strong and clear, although it almost certainly indicates an event in the history of modern humans that relates not so much to human origins as to what it means to be human in the modern sense of the word. But when talking about the origins of modern humans, we do have to be careful not to adopt an overly Eurocentric archeological point of view.
Radiocarbon dating strongly suggests that the Upper Paleolithic period lasted from about 45,000 to 10,000 years ago. The Upper Paleolithic in western Europe was the period of fully anatomically human people, the period of Homo sapiens, or the so-called Cro-Magnons, no different from ourselves, people who had the same bodies and brains that we do. The period that preceded the Upper Paleolithic in western Europe, the Middle Paleolithic, was characterized by the presence of Homo neanderthalensis. The Neanderthals, much debated today for their alleged capacities or noncapacities, did not seem to make art, and clearly had a simpler “tool kit” of intellectual capacities. Various dating techniques show that the Middle Paleolithic period lasted from approximately 220,000 years ago to about 45,000 years ago. It was preceded by the Lower Paleolithic, which lasted from nearly 3 million years ago to about 220,000 years ago. The Lower Paleolithic was characterized by the presence of a number of hominids, including Homo erectus and Homo habilis (who made the first stone tools), whose fossils have been found in Africa (cf. Lewis-Williams 2002: 39f.).
During the Upper Paleolithic transition, which in western Europe lies between 45,000 and 35,000 years ago, the Neanderthals gave way to Homo sapiens, with the two forms living side by side, at least in some areas, for some thousands of years. This transition clearly was an immensely important period in the human story: it is during this time that human consciousness and intelligence emerged, and with it creative, artistic, and religious imagination. For this reason this important period is also called the Upper Paleolithic Revolution or the “Creative Explosion” (cf. Lewis-Williams 2002: 40). What clearly emerged in the Upper Paleolithic was a distinct morphological gulf between the Neanderthals and the Upper Paleolithic people. The most recent version of the Out of Africa hypothesis picks up on this important issue and does not require that Neanderthals evolved into Upper Paleolithic or Cro-Magnon people.
The difficult task of archeologists and paleoanthropologists has been made a little easier by the emergence of new specimens and better dates, and also the emergence of new techniques for comparing anatomy. What is important is that there seems to have been no genetic continuity between the archaic populations (including the Neanderthals) in Europe and the Near East, and the modern humans that eventually became established there. In other words, the early populations in that region of the world were not the ancestors of the people who lived there later (cf. Lewin 1993: 69). Outside of Europe the fossil evidence is scarce, but in Africa it was becoming apparent that certain specimens with modern human characteristics (such as fragments from the Klasies River Mouth in South Africa, and Border Cave) were among the oldest anywhere in the world, in fact, most probably the oldest (cf. 69).
Uncertainties in dating these specimens have plagued their interpretation for years. New techniques such as thermoluminescence and electron spin resonance have been applied recently with some considerable success, however (cf. 70). Both techniques depend on natural radioactivity in the soil, which boosts electrons to higher energy levels in “target” materials such as flint or tooth enamel. The longer such material is buried, the greater will be the accumulation of boosted electrons, thus providing a measure of the passage of time since burial. The thermoluminescence technique measures the higher-energy electrons by the release of light that occurs when the target material is heated. Electron spin resonance detects the high-energy electrons more directly. As a result of new dates from these techniques, confidence is building that some of the early anatomically human species in southern Africa are close to 100,000 years old, making them contemporaries of Neanderthals in Europe and Asia. In this way the Out of Africa model (the notion that anatomically modern humans arose somewhere in sub-Saharan Africa) was firmly established, although only later would geneticists argue that modern humans from Africa spread out into the rest of the world, completely replacing all existing populations of premodern people (cf. 71ff.). On this view two migrations are in fact discerned: that of Homo erectus, about a million years ago, often referred to as Out of Africa 1, the original and still influential Darwinian view;9 and a second, Out of Africa 2, beginning around 100,000 years ago, of modern human populations. Thus there seems to have been an early appearance of anatomically modern humans in Africa at least 100,000 years ago, followed over the next 70,000 years by a disappearance of archaic forms and the establishment of these modern forms in the rest of the world. Whether this change was the result of newly evolved language, superior technology, environmental factors, new forms of social organization, or sharper cognitive skills that gave Homo sapiens an edge over archaic sapiens people, is still being hotly debated.
In direct opposition to the Out of Africa hypothesis stands the Multiregional Evolution hypothesis. According to this model, Homo sapiens emerged throughout the Old World through gradual evolutionary change, the direct product of existing archaic populations. This view requires no population movements, nor is there any replacement of populations by evolutionarily advanced people. This model, however, is not as extreme as earlier versions of the unilinear hypothesis that envisioned complete isolation of geographically separate groups (races) stretching back as far as a million years, and that also implied a deep genetic division between living races. This model does argue for a degree of contact among different populations, promoting gene flow between them and effectively maintaining a network of evolutionary interconnections. Multiregional evolution does, however, argue that some of the features that distinguish major human groups such as Asians, Australian Aborigines, and Europeans evolved over a long period in the regions where they are still found today (cf. 73ff.). There remains an ultimate African source for all these groups, but this refers to the population expansion out of Africa of Homo erectus, at least a million years ago (Out of Africa 1). Regional differences began to develop at the far reaches of the population expansion throughout Africa, Asia, and Europe, laying down the typical racial characteristics that we all know today. The move from Homo erectus to archaic sapiens to Homo sapiens is then seen as occurring throughout the Old World with sufficient interaction among populations to maintain a rough contemporaneity. Regional characteristics were then maintained through a degree of isolation, while genetic cohesion was favored by a degree of gene flow. In this model there is clearly no significant mutation that produces a key innovation, such as is implied by the Out of Africa 2 hypothesis.
A third and novel source of evidence would ultimately make an important difference in current paleoanthropological discussions, namely, the so-called Mitochondrial Eve and other genetic evidence (cf. 87ff.). Mitochondrial DNA analysis unequivocally supports the Out of Africa 2 hypothesis. In fact, molecular data pushed the hypothesis even further than its anthropological authors initially envisaged, by indicating a complete replacement of established archaic sapiens populations by incoming modern humans. The Out of Africa 2 hypothesis, all things considered, now outranks the Multiregional Evolution model (cf. 108ff.). Of special importance is the surprising lack of genetic variation among modern human populations. Most genetic variation among modern humans as a whole is accounted for by variation among individuals, not between populations. Such a pattern is clearly consistent with the Out of Africa 2 hypothesis, in which all modern humans derive from a single genetic pool, and which recognizes that all living humans have descended from a small group of Africans who lived around 100,000 to 200,000 years ago. These behaviorally modern humans are therefore relatively recent in a biological sense. The Out of Africa 2 scenario has risen to great prominence, not only because of its central idea of an “African Eve,” but also through the clarity and simplicity of its opposition to the so-called multiregional hypothesis, based on the strength of its molecular methods, which cast doubts on a multiregional model that might presume deep and invidious racial distinctions (cf. Proctor 2003: 215).
Interestingly, in addition, both gradualists and scholars who have followed Gould and Eldredge’s punctuated equilibrium model are now arguing for the recency hypothesis. Recency may not be the same as suddenness, but it has become as popular among Gouldians as among anti-Gouldians (cf. 215). And here the focus is on the dramatic so-called cultural big bang, the Upper Paleolithic revolution and the explosive growth of human creativity around 45,000 years ago. As will become clear in the fourth chapter, this cultural breakthrough gave us not only spectacular prehistoric imagery and representational “art,” but also the first evidence of ornamentation, new and changing tool styles, and the first evidence of religion and possibly religious ritual (cf. Klein 1999: 512-17). If moral choices can play such an important shaping role within science, then it will come as no surprise that in our interdisciplinary attempts to understand human origins, the narratives of paleoanthropologists and other scientists may interweave with the specific moral choices of theologians working in their own disciplines. Whether this happens or not will clearly hinge on ideological and worldview choices. No wonder, then, that on the problem of human uniqueness, science and theology may at times conclude with stunningly conflicting perspectives: on the one hand, Homo sapiens can be seen, and has been seen for about 2,000 years in Christian theology, as created in “the image of God,” both rationally and morally superior to all other creatures; on the other hand, in the wake of post-World War II horrors, Homo sapiens has also been described as a “mentally unbalanced predator” endowed with vigorous and destructive aggressive instincts (cf. Cartmill in de Waal 2003: 230).
I believe there is an important interdisciplinary way out of this very real dilemma, and that is by following the lead of the theory of traditions as developed in chapter 1, and asking about the origin of the problem of the evolution of human uniqueness within the context of the biological sciences. Here it becomes important to go back to the way Charles Darwin viewed this issue, then to ask what the canonical heart of this tradition was, and how it has since shaped views on human identity.
Charles Darwin on Human Uniqueness
Is our pursuit of some description or definition of human uniqueness a valid interdisciplinary research project, or are we victims of a rather arrogant form of species narcissism? There clearly seems to be something at least to our “commonsense” perception of human uniqueness; it is human beings, after all, who are reaching out to understand the immensity and complexity of our universe, the complexity of ourselves, of one another, and even of God. Also, there does seem to be a distinctly higher level of complexity and organization, and a richness of experience, that occurs in humans more than anywhere else, or in anything else, in our universe, at least as far as we can tell.
In fact, this is true not only of our everyday conceptions of ourselves. In an unparalleled focus on what it means to be human, a large number of the sciences now are not only building on commonsense perceptions of human uniqueness, but also seem to be invading some of the most traditional theological territories. Christian theologians, like Christians everywhere, traditionally believe that there is something absolutely unique about being human, precisely because humans are believed to be created in the image of God, as Genesis 1:26-28 so clearly states. However, sciences like evolutionary biology and genetics, studies in artificial intelligence and robotics, neuroscience, cognitive science, cognitive psychology, and evolutionary epistemology (to name just a few) are all now directly challenging any unnuanced notion(s) of human uniqueness, and by implication, therefore, this traditional Christian doctrine of the imago Dei.
The troublesome question is, how should theology respond to the way the sciences are challenging, and even deconstructing, the notion of human uniqueness? Would any Christian theology that ventures forth bravely into interdisciplinary dialogue still be able to maintain, for instance, that there is some deeper, divine purpose to being human, and by implication then also to human evolution? Would Christian theology be able to argue that a cosmic process producing intelligent persons is what one would expect if God is intelligent and personal (cf. Barbour 2002: 56f.)? My answer to these and related questions will unfold against the background of the brief theory of traditions developed in chapter 1. I believe the history of ideas of theological reflection shows that theological traditions have always been extremely sensitive to the culture(s) in which they are embedded. I will suggest that we get at this problem by looking at how theological traditions, specifically the doctrine of the imago Dei, evolve and respond to cultural pressure. In this specific case study on human uniqueness, we will attempt to facilitate a transversal intersection of various voices as theology enters into conversation with the sciences of paleoanthropology and archeology. What kind of dialogue will result, and what kind of challenges will be revealed, if we happen to uncover some common concerns, some overlapping interests between these very diverse reasoning strategies? For Christian theology the question will undoubtedly be: How much of a heart, a canon or unchanging identity, does the notion of the imago Dei have? And how much of the distinctiveness of this canon, as embedded in the rich galaxy of meaning of this historical doctrine, is protected by its own gravitational pull, its core Christian identity? How much of this core identity might be open and receptive to cultural pressure? And if this core notion of human uniqueness in theology does turn out to be somewhat fluid and changing, what would be the limits and boundaries that this galaxy of meaning would allow? Finally, and maybe most importantly, is this process of pursuing the intelligibility and integrity of the doctrine of the imago Dei helped, or hindered, by the ever increasing volume of voices from the sciences?
In my first chapter the argument was made that a postfoundationalist approach to interdisciplinary conversation will always emphasize how the contextual and pragmatic nature of different forms of rational inquiry will reveal important epistemological overlaps between the natural sciences and other modes of intellectual reflection such as theology. In this way I argued against promoting an already existing universal rationality that assumes a common, shared rationality for all, but I also argued against the reduction of human rationality to mere local contexts. Instead, I argued for a transversal rationality that honors the universal intent of all human rationality, thereby enabling a cognitive parity between various and diverse fields of inquiry (cf. van Huyssteen 1999: 172, 176). This move toward a richer notion of embodied rationality enabled us to interpret multiple aspects of our experience, thus opening up a place for the theological voice in interdisciplinary discourse. The fact that our beliefs are always anchored in interpreted experience, and our rationally compelling experiences are always already embedded in even broader networks of disciplinary beliefs, also necessitated a theory of traditions that revealed how dependent our research is on reliable traditions, even as we critically reevaluate them in an ongoing process of interdisciplinary reflection.
At this point, then, we turn to science and ask: How is our perception of the origins and heart of a scientific tradition hindered or helped by this theory of traditions? My argument now will be, first, that Charles Darwin’s conception of human identity and human nature still functions as the canonical core of the ongoing discourse on human evolution, and that, second, the powerful galaxy of meaning of this canonical core of Darwin’s views on human distinctiveness, maybe more than ever before, is shaping our views on the evolution of human cognition. The epistemic implications of these views are most clearly represented today in contemporary evolutionary epistemology. Finally, teasing out the interesting evolutionary route from Darwin to evolutionary epistemology will set the interdisciplinary stage, so to speak, and create the necessary transversal space, for a dialogue between theology and the sciences on human uniqueness.
The canonical core of Darwin’s views on human identity is most clearly spelled out in his major work, The Descent of Man, and Selection in Relation to Sex (1981). The dominant theme of this groundbreaking work, as the title clearly states, is that humans descended from other animals and were not specially created. In it, however, Darwin does more than just marshal evidence for the continuity between humans and other animals. The book also represents his attempt to study intelligence as a central feature of adaptive change, and to study it in that organism in which he saw it as most prominent, the human being (cf. Bonner and May 1981: viii). The idea that evolution by natural selection could account for the origin of humans was taken up quickly by others as a direct result of Darwin’s new ideas, notably by T. H. Huxley in his Evidence as to Man’s Place in Nature (1863), and by the German biologist Haeckel, who even invented an imaginary missing link between ape and human, which he called Pithecanthropus alalus, the speechless ape-man. It is especially interesting to look back on the dissenting voice of Alfred R. Wallace, who published an essay in 1864 saying that the bodily structure of humans could be entirely accounted for by natural selection, but that the “mind of man” was created by some “higher intelligence” (cf. Bonner and May 1981: xi). In this sense one could say that the real controversy surrounding the relation between humans and other animals in Darwin’s time was essentially already a problem of science versus religion (cf. xxi).
In their introduction to the Descent of Man, John Tyler Bonner and Robert M. May make the interesting observation that in today’s ongoing and lively debate about human evolution, the idea that the unique place of humans in evolutionary history was guaranteed by divine intervention (as Wallace believed) has now been replaced by an argument that is interestingly similar in structure to that of Wallace and other nineteenth-century religionists: the human body is indeed a biological structure, clearly descended from the apes, but human culture, which stems from the extraordinary and unique minds of humans, is on a new, higher hierarchical level of its own, and biology cannot tell us much about this level (cf. xxii). We will focus on this issue again later, but we seem to have here the conviction that human culture and civilization is so special, and so different from anything in the animal world, that it can be analyzed only on its own terms. Thus the quest for human uniqueness, in spite of sociobiology and recent studies in animal behavior that challenge such a notion, lives on, and human attributes like consciousness, moral awareness, language, and other mental qualities, as in Darwin’s time, are still seen to separate “man from the beasts” (cf. xxii). Darwin himself, of course, argued for an unbroken continuum between animals and humans and saw the difference between them as one of degree only.
How does Darwin argue for this continuity and for the peculiar identity of humans? Insofar as “it has often and confidently been asserted, that man’s origins can never be known,” Darwin warns us that “ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know too much, who so positively assert that this or that problem will never be solved by science” (Darwin 1981: 3). From this Darwin famously argued that humans are the codescendants with other species of some ancient, lower, and extinct form—a conclusion he did not see as spectacularly new, especially in light of the earlier work of Lamarck, the German naturalist Haeckel, and others (cf. 4f.). Darwin’s acceptance of the shared descent between humans and animals is strikingly illustrated in the following statement: “But the time will before long come when it will be thought wonderful, that naturalists, who were well acquainted with the comparative structure of man and other mammals, should have believed that each was the work of a separate act of creation” (33).
For Darwin “there is no fundamental difference between man and the higher mammals in their mental faculties” (35). In the second chapter of The Descent of Man, Darwin discusses how animals and humans share emotions, attention, and even memories (cf. 34ff.). He saw imagination, however, as one of the “highest prerogatives of man,” although animals admittedly possess some power of imagination (for instance, they clearly have vivid dreams). By this faculty the human being unites, independently of the will, former images and ideas, and thus creates brilliant and novel results (45).
Darwin was very clear, however, about what really seemed to be unique about humans: “Of all the faculties of the human mind, it will, I presume, be admitted that Reason stands at the summit” (46). Darwin does carefully show that even here there is continuity, and it must be admitted that animals do possess some power of reason: animals are often seen to pause, deliberate, and resolve, although it is often difficult to distinguish between the power of reason and that of instinct (46). Therefore, instead of humans being separated from the higher animals by an insuperable barrier in their mental faculties, it was clear for Darwin that animals possessed the same faculties of imitation, attention, memory, imagination, and reason, though in very different degrees (cf. 49).
Darwin does agree that the language faculty has justly been considered one of the chief distinctions between humans and the lower animals (cf. 53ff.). Animals of various kinds, however, do communicate by expressing cries of many kinds. Articulate language certainly is peculiar to humans, although even humans, in common with the lower animals, use inarticulate cries to express meaning, aided by gestures and the movement of the muscles of the face (54). For Darwin language undoubtedly has as its origin the imitation and modification, aided by signs and gestures, of various natural sounds, the voices of other animals, and the instinctive cries of humans themselves. And as the human voice was used more and more, also in interaction with the superior development of the human brain, the vocal organs would have been strengthened and perfected through the principle of the inherited effects of use, and this would have reacted on the power of speech (cf. 57). Furthermore, Darwin concluded, the “extremely complex and regular construction of many barbarous languages, is no proof that they owe their origin to a special act of creation” (62).
As for discussing and possibly pinpointing the issue of human uniqueness, Darwin was rather skeptical: “It would be useless to attempt discussing these high faculties, which, according to several recent writers, make the sole and complete distinction between man and the brutes, for hardly two authors agree in their definitions” (62).
These special faculties could certainly not have developed fully in humans until their mental powers had advanced to a high standard, and this implies the use of a perfect language. Darwin goes on to show that even the sense of beauty is not exclusive to humankind, as the way female birds respond to the often spectacular beauty of the males of the species clearly shows (cf. 63-65). But how about humankind’s sense of religion?
For Darwin there certainly was no clear evidence that humankind was “aboriginally endowed with the ennobling belief in the existence of an Omnipotent God” (65). In fact, Darwin saw ample evidence that many humans had no idea of one or more gods. Should a person choose to include under the term “religion” the belief in unseen or spiritual agencies, however, everything is different, for this kind of belief seems to be almost universal, especially for what Darwin referred to as “the less civilized races” (cf. 65). This belief clearly arose in the following way: as soon as the important faculties of the imagination, wonder, curiosity, together with some power of reasoning had begun to develop, the human being would have naturally craved to understand what was happening around him or her, and would have “vaguely speculated on his own existence” (65). Furthermore, the belief in spiritual agencies would easily pass into the belief in the existence of one or more gods. “The feeling of religious devotion is a highly complex one, consisting of love, complete submission to an exalted and mysterious superior, a strong sense of dependence, fear, reverence, gratitude, hope for the future, and perhaps other elements. No being could experience so complex an emotion until advanced in his intellectual and moral faculties to at least a moderately high level” (68).
Darwin was clear that these faculties tracked the biology of earlier stages of life, for instance in the state of mind of the dog that deeply loves its master. Finally, only as our reasoning powers develop and we move beyond religious superstitions do we realize what an infinite debt of gratitude we owe to the improvement of our reason, to science, and to (what Darwin called) our accumulated knowledge (cf. 69).
In his discussion of allegedly unique human traits, Darwin finally focuses on moral sense or conscience, which he clearly regarded as the most important characteristic to be discussed regarding differences between humans and lower animals. This moral sense “is summed up in that short but imperious word ought, so full of high significance. It is the most noble of all the attributes of man, leading him without a moment’s hesitation to risk his life for that of a fellow-creature” (70). Moreover, Darwin wanted to do something that he felt other scholars at the time had not done: discuss the issue of moral awareness exclusively from the perspective of natural history (cf. 71). In opting for this approach Darwin was convinced that the study of the lower animals could throw light “on one of the highest psychical faculties of man” (71).
What Darwin basically proposed is that any animal, well endowed with social instincts, could in principle acquire a moral sense or conscience, “at least as soon as its intellectual powers had become as well developed, as in man” (72). This could happen for various reasons: social instincts would lead animals to take pleasure in the society of their fellows; or the evolution of the power of language would lead to a situation where the wishes of the members of the same community could be distinctly expressed, and the common opinion of how each member ought to act for the public good would naturally become the guide for all action, and thus be reinforced and strengthened by habit, and obedience to the wishes and judgment of the community (cf. 72f.). This does not mean that social animals would become exactly like humans. It means that just as various animals have developed a sense of beauty, so they might have a sense of right and wrong. A certain class of animal actions certainly points to this, such as what looks like a manifestation of conscience in dogs, and obedience in baboons (cf. 73ff.), but for Darwin a moral being clearly is one who is capable of comparing past and future actions or motives, and approving or disapproving of them (cf. 88).
These views now get Darwin very close to identifying human uniqueness: “We have no reason to suppose that any of the lower animals have this capacity; therefore when a monkey faces danger to rescue its comrade, or takes charge of an orphan monkey, we do not call its conduct moral. But in the case of man, who alone can with certainty be ranked as a moral being, actions of a certain class are called moral, whether performed deliberately after a struggle with opposing motives, or from the effects of slowly-gained habit, or impulsively through instinct” (89).
A clear pattern of Darwin’s views on human nature and identity has now emerged, and I believe that the canonical core of his views on human identity can be summarized as follows.
1. For Darwin there can be no doubt that the difference between the mind of the “lowest man” and that of the highest animal is immense (cf. 104).
2. Nevertheless, the difference in mind between humankind and the higher animals, great as it is, is certainly one of degree and not of kind (cf. 104).
3. If we do want to maintain that certain abilities, such as self-consciousness, intelligence and abstraction, moral awareness, etc., are peculiar to humans, it might very well be that these are the incidental results of other highly advanced intellectual faculties. And these faculties again are mainly the result of the continued use of a highly developed language. Darwin also intriguingly speaks of “the half-art and half-instinct of language” that still bears the stamp of its own gradual evolution (cf. 105).
4. As for a religious sensibility, Darwin clearly argued that “the ennobling belief in God is not universal with man; and the belief in active spiritual agencies naturally follows from his other mental powers” (105). Darwin thus denied that the idea of God is innate or instinctive in humans, although he did concede that a belief in spiritual agencies seemed to be universal in all humans, and is directly connected to human imagination (cf. 394f.).
5. It is, however, specifically our moral sense that perhaps best defines “the highest distinction between man and the lower animals” (106). Also, a natural explanation for the foundation of morality suffices; thanks to humankind’s social instincts, with the aid of its intellectual powers and the effect of habit, both the mental and moral faculties of humankind gradually evolved (cf. 105f.).
6. We humans, then, are “unique” as a result of, as Darwin himself would put it, the evolution of our superior intellectual facilities and our social habits. Our “wonderful advancement,” however, is ultimately dependent on the unique evolution of language (cf. 137). In the final sentence of the final page of The Descent of Man, Darwin speaks of our “God-like intellect,” but also of the fact that in our bodies we still bear the indelible stamp of our lowly (animal) origins (cf. 404f.).
For Darwin the many superior attributes we have (including the fact and advantages of bipedalism; cf. 140ff.), and the wonderful inventions this has led to, are all the direct result of the evolutionary development of human cognition, our powers of observation, memory, curiosity, imagination, reason, and moral sense (cf. 138). For this reason Alfred R. Wallace’s position on human uniqueness continued to perplex him: “I cannot, therefore, understand how it is that Mr. Wallace maintains that ‘natural selection could only have endowed the savage with a brain a little superior to that of an ape’” (137). Darwin had no doubt that the case for that distinct place of humans in the process of evolution had been clearly argued: “I have now endeavoured to shew that some of the most distinctive characters of man have in all probability been acquired, either directly, or more commonly indirectly, through natural selection” (151).
In this way Darwin not only wanted to aid in overthrowing the dogma of what he called “separate creations,” and with that the idea that each species had been purposely created (cf. 151). He also established the core Darwinian view, in different shapes and forms, that to this day defines the canon of the ongoing tradition of the theory of human evolution.
My developing argument in this chapter is that Darwin’s conception of human identity and human nature, with its very specific focus on the evolution of human cognition, still functions as the canonical core of the ongoing discourse on human evolution, and that the powerful galaxy of meaning of this canonical core of Darwin’s views on human identity, maybe more than ever before, is shaping our current views on the evolution of human cognition. The epistemic implications of these views on cognition are most clearly represented today in contemporary evolutionary epistemology. Finally, teasing out the interesting evolutionary route from Darwin to contemporary evolutionary epistemology will set the interdisciplinary stage, so to speak; create the necessary transversal space for a dialogue between theology and the sciences on human uniqueness; and in doing so will provide us with specific transversal links to paleoanthropology. In fact, it is precisely evolutionary epistemology’s focus on the evolution of human cognition that will reveal cognition as the mediator between biology and culture, and in so doing provide paleoanthropology with a crucial role in the wider debate about human origins (cf. Lake 1992: 268).
Evolutionary Epistemology and Human Uniqueness
In Duet or Duel? Theology and Science in a Postmodern World (1998), I argued that when we take the evolution of human cognition seriously, we quickly realize that even theological reflection is radically shaped by the ongoing influence of its traditions, and therefore by its social, historical, and cultural embeddedness, and is also definitively shaped by the deeper biological roots of human rationality. And yet it is precisely the voice of evolutionary epistemology that has been almost totally neglected by contemporary theology. The very basic assumption of evolutionary epistemology is that we humans, like all other living beings, result from evolutionary processes and that, consequently, our mental capacities are constrained and shaped by the mechanisms of biological evolution. Clearly, if all human knowledge is in some sense or another shaped by its biological roots, then the study of evolution should be of extreme importance, not only for an understanding of the phenomenon of human knowledge (cf. van Huyssteen 1999: 4f.), but also for what we might want to define as the uniquely human aspect of this process. It may turn out that Darwin was right: the evolution of human cognition ties us directly to the animal world, even as it also sets us radically apart from our animal ancestors.
Various philosophers have argued that it should not surprise us that as human beings we could have acquired intelligence, enabling us to secure information and survive in the world. Nicholas Rescher has correctly argued that human intelligence naturally arises through evolutionary processes precisely because it provides one very effective means of survival. Human rationality, seen in the broadest sense of the word as our particular human ability to cope intelligently with an intelligible world, can therefore be seen as conducive to human survival, which makes the explanation of our cognitive resources fundamentally Darwinian (cf. Rescher 1992: 3f.). The implication for a sense of human uniqueness seems to be clear; the imperative to understand is something altogether basic for Homo sapiens. In fact, we cannot function, let alone thrive, without reliable information about the world around us.
The phrase “evolutionary epistemology” was introduced into academic discourse mainly by Donald T. Campbell (cf. 1974: 47-89).
All evolutionary epistemologists would agree that the theory of evolution in essence is a theory of knowledge precisely because the process of evolution is the principal provider of the organization of living things and their adaptations. Therefore, if adaptations of all sorts are forms of knowledge, then evolution itself is the process by which knowledge is achieved. Evolution thus turns out to be more than just the “origin of species”; it is a much richer process that leads to, among other things, the thoughts and ideas that we have in our heads, and thus to knowledge as commonly understood (cf. Plotkin 1993: 21). Evolutionary epistemologists would take the evolutionary explanation for human intelligence much further, however, and would also locate the discipline of evolutionary epistemology within the broader context of the history of contemporary epistemology and philosophy in general.
Already in Duet or Duel? I argued that evolutionary epistemology might offer us a postfoundationalist way out of the problem of having to choose between foundationalist and nonfoundationalist approaches to human rationality. The shift away from positivism, and from all varieties of foundationalism, remains significant, especially in our attempts to critically define human uniqueness. Evolutionary epistemology highlights the fallibilist nature of our rational judgments and explains that there is progress in the growth of knowledge, but does not assess such progress as an increase in the accuracy of depiction or as an increase in the certainty of what we know. Evolutionary epistemology also reveals extreme nonfoundationalist and antirealist positions as forms of epistemic narcissism, because it makes us think that knowledge is not a relation between the knower and what is known, but a narcissistic reflection of our own image in our society, or of our society in us (cf. Munz 1985: 7). Biology suggests that our power of abstraction and our ability or faculty for having distinct expectations are the result of natural selection, that our cognitive apparatus is adaptive, and that the whole of our knowledge consists of theories that are embodied proposals (i.e., organisms) or disembodied proposals (i.e., conscious theories) made to the environment. On this evolutionary epistemological point of view it is possible to see knowledge as an interactive relationship between an embodied knower and something that is known, and one can now actually cease to identify the knower with a subjective, mental, inner consciousness and the known with the rest of the outside, “real” world (cf. 12). For the rich and complex development of evolutionary epistemology, the works of Karl Popper, Konrad Lorenz, and Donald Campbell have been seminal. 
In this discussion I will briefly focus on Popper, and then specifically evaluate the contributions of Henry Plotkin and Franz Wuketits to our understanding of the evolution of human cognition.
The general thrust of Popper’s thought links up directly with Darwinism, and with Darwin’s momentous insight that evolution proceeds by natural selection. In Darwinism there is no teleology, no goal, no instruction, and the relentless elimination of nonfitting organisms produces better results than any design or plan could have produced. Popper’s immense contribution to our knowledge of human knowledge lies in his extension of Darwinian evolution to knowledge in general. As is well known, Darwin’s theory of natural selection as the motor of evolution finds its complement in Popper’s theory that we do not gain knowledge by induction, but propose theories to the environment, which subsequently falsifies most of these theories and thus provides the criteria for the retention of those it fails to falsify. We are able to know, and we are here, because of the relentless elimination of all those pieces of knowledge or organisms that are not a fit to the environment; Popper’s views of the acquisition of knowledge, then, like Darwinian evolution, very specifically focused on a negative process of elimination and falsification (cf. Munz 1985: 15).
Although Popper began by examining the growth of knowledge in science, ultimately his inquiry developed into an evolutionary epistemology in which the growth of knowledge is seen as continuous with biological and cultural evolution. Popper famously did not accept Darwinism as a testable scientific theory because of its near tautological nature (the best fitted to survive will survive) and its lack of testability (cf. Stanesby 1988: 54f.). However, it did provide a “metaphysical research program,” also for evolutionary epistemology, and for understanding the emergence of human consciousness as something irreducible to any lower level of existence, a comparatively rapid evolution that has enabled the human mind to become to a large extent emancipated from dependence on its immediate environment. It is exactly this phase of cultural evolution, where language plays a crucial and defining role, that provides the link between the human being’s biological and cultural evolution. Through language, then, humans have created a body of objective knowledge. And this knowledge is an evolutionary artifact that enables us humans to profit from the trials and errors of our ancestors (cf. Popper 1998: 126ff.). Derek Stanesby puts it well: from this point on, human evolution is the evolution of knowledge (cf. Stanesby 1988: 59, 65). It is especially through Popper’s views, then, that evolutionary epistemology became a metatheory of science (cf. Wuketits 1990: 45ff.).
The link of Darwinism to Popperian philosophy of science clearly signaled the arrival of a new conception of rationality. For Darwin human rationality consisted in the making of mistakes, in comparing mistakes, and in retention of those forms of replication that are the most adaptive fits to the environment. Thus, already Darwin introduced us to the idea that rationality does not consist in the avoidance of error, but in the occurrence of error and the elimination of error by natural selection. We can therefore say that on this view rationality consists in making bold guesses and conjectures and then subjecting them to ruthless criticism. “In this view of rationality, the path of reason is not a secure path which leads from certainty to certainty; rather, it is a wild display of the imagination, the products of which are scrutinised by criticism” (Munz 1985: 16).
The rational person, on this view, is not the person who controls his or her imagination, but the person who subjects the products of his or her imagination to criticism. In this novel view of rationality the rational person learns through his or her mistakes, without necessarily implying any teleology in this process of falsification.
A. Evolutionary Epistemology as Embodied Epistemology
An even richer development in contemporary evolutionary epistemology is found in the interdisciplinary work of Henry Plotkin and Franz Wuketits. Plotkin links together evolutionary biology, psychology, and philosophy, and thus presents a new science of knowledge that picks up exactly on the canonical core of Darwin’s notion of human distinctiveness by tracing an unbreakable link between instinct and our human ability to know. Since our ability to know our world depends primarily on what we call intelligence, Plotkin wants to understand intelligence as an extension of instinct. Thus emerges what is at the heart of the argument of evolutionary epistemology: the idea that the capacity for knowledge is deeply rooted in our biology and, in a special sense, is shared by all living things. Ultimately he develops an interactionist epistemology that refines the notion of knowledge as a kind of “incorporation of the world” by the knower (cf. Plotkin 1993: ix). In doing so he carefully explores the question why we, both as a species and as individuals, ever came to know anything about our world and ourselves.
Plotkin develops his argument by pointing to the central role of the apparent “fit,” or matching, of all living things to the features and conditions of their world. This “matching” can be seen as a result of the way living creatures somehow incorporate into themselves those aspects of the world that they actually interact with. It is these amazing and often beautiful forms of harmonious interaction between living things and the world that we call adaptations (cf. xiii). Adaptations have formed the most powerful foci of the study of biology and are formed by a very long process of interaction between the environment and successions of organisms that make up lineages of organisms extending over thousands, hundreds of thousands, and millions of years. They are the crucial determinants of whether organisms survive and reproduce or not. The study of adaptations is not only central to biology, distinguishing it from other sciences, but is also rather central to the human sciences because we human beings are, in Plotkin’s words, a finely woven cloth of adaptations, as are all other animals (cf. xiv). But what is the exact connection between these adaptations and knowledge? Plotkin gives two closely related answers to this, which reveal his stance as a contemporary evolutionary epistemologist. First, the human capacity to gain and impart knowledge is itself an adaptation, or a set of adaptations. In developing this point Plotkin correctly argues that we would simply not understand human rationality and intelligence, or human communication and culture, until we understood how these seemingly “unnatural” attributes are deeply rooted in biology. In his words, they are the special adaptations that make us special (xiv). Our amazing ability to know our world with sophisticated intelligence is an unarguable product of human evolution, and there are no substantive alternative ways of understanding our extraordinary capacity for knowledge. 
Second, adaptations, in an evolutionary sense, are themselves knowledge, forms of “incorporations” of the world into the structure and organization of living things. In a sense, then, all adaptations are forms of biological knowledge, and knowledge as we usually understand it would then be a special case of biological knowledge (cf. xv). The structure of all organisms is therefore directly informed by their environment, and that is why adaptations, by definition, almost always work. Rescher’s well-known argument for a direct correlation between intelligence and intelligibility clearly resonates with Plotkin’s argument that human knowledge in much the same way conforms to the relational quality or fit that all adaptations have (cf. Rescher 1988: 176ff.).
Plotkin’s most valuable contribution to the question of the evolution of human cognition, however, is found in the way he develops the relationship between instinct and rationality. Plotkin considers instinct as unlearned and unthinking adaptive behavior, a kind of behavior that is knowledge in the same way that any form of camouflaging coloration of an insect constitutes knowledge of its surroundings. In this sense adaptations are instances of knowledge, and human knowledge is a special kind of adaptation (cf. Plotkin 1993: 117ff.). Plotkin goes further and importantly links human knowledge and its adaptive development to the concept of epigenesis, the notion that development is not a simple and inevitable unfolding or growing process, but instead is highly variable within certain constrained limits. This variability results from a cascade of immensely complex interactions between genes, developing features of the organism, and the environment in which development occurs (cf. 247). Epigenesis tells us something very important: although adaptive structures and even behavior in part have genetic causes, they are not necessarily invariant in form and may sometimes vary quite widely as a result of the environment in which the development occurs (cf. 124). Plotkin goes even further and talks about this kind of “developmental plasticity” as in itself an adaptive device and as such a knowledge-gaining device.
What emerges here is that knowledge and its development clearly have two components, one internal and the other external. These developmental processes and the genes that initiate and participate in them should therefore be seen as the integrated way in which all knowledge is gained. Human knowledge, then, is a product of this complicated, interactive, and dynamic developmental process. This has direct consequences for the way we should view the distinction between behavior that occurs without thought, which is not affected by the processes of learning and memory, and behavior that is affected by such processes. The first we may call instinct, the second rationality, which as such is the product of reason, intelligence, learning, and memory (cf. 125ff.). Darwin had a lot to say about instinct in Origin of Species, but little about rationality. All that changed with The Descent of Man, which was very much concerned with the “mental powers of man.” Darwin, however, made little attempt to consider the function of rationality or what the relationship might be between rational powers and instinct. For Darwin human rational powers have much of their beginning in the rational abilities of “lower” animals, and Darwin’s main message, according to Plotkin, was that, extraordinary as human rational powers may seem, they do not set human beings apart from their fellow creatures (cf. 127). On this view rationality is an attribute we share with other species, from whom we have inherited it. So, for Darwin, also for instincts or rational abilities, in the end it was the continuity between species that had to be stressed.
For Plotkin it would obviously be tremendously important to refute the false assumption that instinctive behaviors should conceptually be kept apart from behaviors that arise through processes of learning and intelligence. It does seem important initially, however, to keep apart instinctive and learned behaviors if we really want to understand why rationality evolved in the first place. Knowledge, as commonly understood, is a product of the processes of rationality, and if we are to understand what knowledge is, we must understand the origins and nature of rationality (cf. 130). What this means is that we must be able to answer the question why all behavior isn’t instinctive. This is the same as asking why rationality evolved at all. This question—why all behavior is not instinctive and why therefore rational powers evolved in the first place—concerns the ultimate causes of rationality. Remarkably, this issue attracted little attention until quite recently. Darwin never put the question in this form, and for a hundred years almost none of his followers did either. Today, however, we realize that rationality, intelligence, thought, and the ability to learn and remember are never just open-ended activities but are, rather, constrained, limited, and directed by genetically determined instinctive behavior. Learning, for instance, is primed by our genes, and in this sense instincts are more basic and reliable than our rational abilities. And yet, in spite of this, rationality did evolve. This must mean that the balance has shifted from favoring instincts to favoring intelligence and rationality. To understand this we have to ask what the limitations of instinct are, and how rationality makes up for these limitations (cf. 133).
On this view human knowledge is just one kind of a more widely defined biological knowledge, and science itself is then a very special case of this human knowledge. Along these lines Plotkin wants to develop a universal Darwinism where evolution not only produces all transformations in time, but even our brains are to be seen as “Darwin machines,” and the way we gather knowledge as another form of universal Darwinism (cf. xvii). For human knowledge this will have huge ramifications: all animals, including our species, that can learn and think are born knowing implicitly what they must learn and think about. In humans this touches directly the way we come to master language, recognize significant people in our lives, reason, react emotionally, and share knowledge through culture. Plotkin thus correctly presupposes that our extraordinary capacity for gaining and communicating knowledge must always be understood, first, as a part of our nature, and only then as an issue in nurture (cf. xviii).
This prompts Plotkin, in a very interesting argument, to warn against a too simple solution to the old “nature-nurture problem.” We often hear that the regular dichotomies that characterize the nature-nurture distinction, like the instinct-intelligence and the genes-experience dichotomies, are false dichotomies. Behind this argument lies the strong conviction that all genes require an environment to develop. Since development requires both genes and experience, however, every trait acquired by the process of evolution requires both nature and nurture. For Plotkin this argument is fundamentally wrong because the solution it provides is “horizontal” in that it still maintains the separation of genes, on the one side, and the role of the environment, on the other side, with some kind of developmental integration in the middle (cf. 164). This kind of “interactionism” does not match up to the “verticalism” that Plotkin wants to propose: intelligence, as a “secondary heuristic,” is subsumed under, embedded in, and enclosed by instinct as the “primary heuristic.” The horizontal components of interactionism, the internal and external causes, still remain present at both levels. The really important issue, however, is the comprehensive concept that includes all the levels of the hierarchy, and that concept, Plotkin argues, is adaptation, and therefore knowledge (164). On this view intelligence, nested under development and under the genetic process, does not allow any claims that something is caused by either nature only or nurture only. This kind of language and imagery is wrong because intelligence is an adaptation, and the required integration can only be achieved vertically, not horizontally (164). What Plotkin, therefore, wants to refute is the doctrine of “separate determination,” i.e., that behavior controlled by rationality and intelligence should be viewed as quite separate from the kind of behavior controlled by biology and instinct. 
But these two can never be separated in this way, and rationality and intelligence should rather be seen as extensions of instinct that cannot be separated from it. On this more embodied, holistic view of human knowledge, instinct can indeed be called the mother of intelligence (cf. 165). Importantly, because of its nesting under the primary heuristic, namely, its embeddedness in instinct, intelligence cannot easily and automatically be equated across species. The intelligence of each and every species is directly tied to its genes, and human intelligence too can be understood only in context of human genes. This, then, is Plotkin’s argument for species-typical intelligence, or domain-specific intelligence (cf. 165, 237).
On this view the causes of intelligent behavior are still clearly divided between genes and development, on the one hand, and the capacity to gain knowledge and act on it through creative intelligence, on the other. The causes of intelligent behavior, however, can never be reduced exclusively to genetics, because intelligence then would only passively reflect the nature of those instructional processes and the circumstances of the world that are imposed on it. On this view intelligence would become merely a rather extended but entirely deterministic device understandable entirely in terms of genetic determinism, on the one hand, and the history of the organism-environment interaction on the other. What saves intelligent behavior from such a reductionist account is the presence of selectional processes in the mechanism of intelligence itself (cf. 176). Genetic reductionism can never be invoked, then, as an explanation of the behavior and evolution of beings that are intelligent. And this is of course especially true for Homo sapiens: once intelligence has evolved in a species, self-conscious brains have a causal force equal to that of genes. For evolutionary theory to be complete regarding humans, intelligent behavior has to be included, and so does a very peculiar feature of intelligent behavior, namely, culture (cf. 177). Plotkin’s view on cultural evolution here clearly emerges in opposition to the faulty Lamarckian view that wanted to include behavior as one of the causes of evolution. Plotkin plausibly argues that behavior becomes causally significant in the process of evolution only if the behavior itself is driven by intelligence, itself an evolutionary process involving unpredictable variant generation.
Finally Plotkin wants to make the point that culture should be seen as a third-level heuristic, another form of “Darwinian machine,” and hence another means of gaining knowledge of the world based on evolutionary processes (cf. 225). On this view human cognition is revealed as the mediator between biology and culture, and human knowledge emerges as a web of relationships that matches different levels of the hierarchy with different features of the world. The leitmotiv of Plotkin’s argument thus is clear: knowledge as commonly understood is a special kind of adaptation, and all adaptations should be seen as forms of knowledge. This clearly means that knowledge is a complex set of relationships between genes and past selection pressures, between genetically guided developmental pathways and the conditions under which development occurs, and between a part of the consequent phenotypic organization and specific features of environmental order (cf. 228). This is also what Plotkin means when he argues that it would be human conceit to think that knowledge is something both unique to our species and located only in our heads. He clearly maintains the canonical core of Darwin’s notion of human evolution and sees knowledge as a pervasive characteristic of all life, exemplified by all adaptations in all living creatures (cf. 229).
This now implies that the natural-selection paradigm can be generalized (or rather, metaphorized) to include some of our most crucial and broadly conceived epistemic activities such as learning, thinking, imagination, science, and even religion (cf. van Huyssteen 1998: 140ff.). Franz Wuketits explains this move very well by saying that any living system is an information-processing system. Echoing Plotkin, he argues that information processing should be seen as a general and typical characteristic of all organic nature. We as humans do indeed exhibit the most sophisticated type of gathering and preserving of information about certain important aspects of reality, and this information processing too can certainly be explained as an evolutionary phenomenon (cf. Wuketits 1990: 4). Also, therefore, especially for theologians the following should be true: if we take the theory of evolution seriously, we should take evolutionary epistemology seriously. This will of course mean that, precisely as far as our cognitive abilities go, we will ultimately be challenged to discern whether the theory of evolution by natural selection can be seen as adequate for explaining the moral, aesthetic, and religious dimensions of human knowledge and rationality.
Wuketits, while arguing positively that evolutionary epistemology is not destined to “destroy” religion, unfortunately does not grasp how religion and religious knowledge can and should be integrated into this broader, comprehensive epistemology. He ultimately avoids this difficult issue, but at the same time correctly states that the main purpose of this kind of (evolutionary) epistemology is still to meet the need for a comprehensive approach to the problem of knowledge that will take us beyond the limitations of traditional disciplinary boundaries. He thus importantly argues that evolutionary epistemology, rightly understood, has to lead to an interdisciplinary account of our epistemic activities (cf. 4ff.). As I see it, not only philosophy but also theology will benefit greatly by incorporating scientific research regarding ourselves as conscious “knowing subjects” and our genetic makeup, anatomy, and physiological abilities into this comprehensive paradigm. In this broader sense evolutionary epistemology becomes not just an interesting option for theology, but a necessary one, as it opens up a way for re-visioning our study of human knowledge by giving us a fresh epistemological look at interdisciplinary issues.
Wuketits has correctly called the emergence of evolutionary epistemology a truly Copernican turn in philosophical epistemology (cf. 6). He calls his own version of evolutionary epistemology a systems theory of evolution, which argues for an approach based on, but also going beyond, Darwin’s theory of evolution by natural selection. For this reason he turns to conceptions that specifically go beyond Darwin. He wants to argue, first, that evolution is determined not only by external selection but also by intraorganismic constraints on evolutionary change, and second, that the flow of biological information is not unidirectional but bidirectional (cf. 22). Wuketits now unfolds his systems-theoretical approach to evolution, which first of all implies that environmental change by itself does not suffice as an “evolutionary pressure.” In fact, organic evolution exhibits patterns of its own dynamics that effectively go beyond environmental constraints. It indeed seems plausible that evolution is influenced by structures and functions of the organism itself. Wuketits thus proposes a flow of cause and effect in two directions and concludes that in the process of evolution by natural selection, organism and environment are codetermined (cf. 23).
As with Henry Plotkin’s interpretation of the evolution of cognition, Wuketits argues that the evolution of life results from internal (intraorganismic) as well as external (environmental) selection. These internal and external forms of selection work together to build the systems conditions of evolutionary change. And in this sense the systems theory of evolution is a revised and extended version of the classical theory of natural selection. The point is that for Wuketits evolution is an “open process” and in fact creates its own laws a posteriori (cf. 24). At the biological level the principles of organic evolution apply fully to the human species; humans, like other organisms, result from organic evolution caused by genetic recombination, mutations, environmental selection, and intraorganismic (internal) constraints. But the story of our species is virtually the story of the growth of the human brain; the ascendance of humankind is due to the preeminence of the brain and not only to bodily prowess (cf. 27ff.). And as we have seen, one thing that makes our species unique in the animal kingdom is our capacity for culture. The crucial question then becomes: Can evolution be extended to culture? Can culture be explained in terms of organic evolution?
Wuketits deals with this extremely important issue by arguing in the following way. First he stresses the thesis that human culture relies on specific brain structures and functions: it is a result of the peculiar development of the human brain and can be regarded as the most sophisticated expression of the brain’s power. The main problem that now arises, of course, is whether biological explanations of the brain will be enough to explain the particular paths of our cultural evolution. Wuketits wants to show that it is unwarranted to reduce the complex patterns of human culture to the principle of organic evolution alone (cf. 30f.). Cultural evolution exhibits its own characteristics and systems conditions. Certainly the emergence of culture has been propelled by organic forces, but, however crucial, the biological approach will not be sufficient to explain the complex and peculiar paths of cultural evolution. Clearly, the principles of biological evolution can therefore not be translated directly into explanations for culture, religion, or society.
On the one hand, then, organic evolution—particularly the evolution of the human brain—can be seen as the basis of cultural evolution. On the other hand, the latter can never be reduced to the former. Cultural evolution requires explanations beyond the biological theory of evolution in its strictest sense. Therefore the term “evolution” applies to both the development of the organic world, from unicellular organisms to humans, and the development of culture. Or in Wuketits’s words, biology offers the necessary conditions of culture, but it does not offer the sufficient conditions (cf. 31). Cultural evolution (including the evolution of ideas, scientific theories, and religious worldviews) cannot be reduced to biological evolution. Echoing Plotkin, Wuketits argues that the study of human evolution can therefore clarify the preconditions of cultural evolution, but it cannot explain the particular paths a culture will take (cf. 33).
Closely following Plotkin’s views on the relationship between instinct and intelligence, one of the central claims of evolutionary epistemology can now be restated as follows: not only has evolution produced cognitive phenomena, but evolution itself can be described as a cognition process or, more precisely, a cognition-gaining process. The thesis that evolution is a cognition process obviously implies that knowledge, and the ability to gain knowledge, is an information-processing procedure that increases an organism’s fitness. And already at the prerational level information processing is characterized as a cycle of experience and expectation. So, when we come to the uniqueness of human knowledge, this process of knowledge gaining as information processing turns out to be a universal characteristic of all living beings, which again confirms that human rationality has a biological basis, and as such can be seen only as embodied rationality. And precisely because human rationality everywhere shares deeply in this biological basis, human rationality as such reveals a universal intent that links together all our diverse and complex epistemic activities.
It is thus that evolutionary epistemology opens our eyes to the kind of comprehensive, integrative epistemology that does not have to emerge as a modernist metanarrative for human knowledge. And if our various and diverse cognitive activities are linked together by the evolutionary resources of an embodied human rationality, then evolutionary epistemology succeeds, on an intellectual level, in revealing a space for true interdisciplinary reflection, the kind of epistemic context that would be a safe and friendly space for the ongoing conversation between reasoning strategies as diverse as theology and science (cf. van Huyssteen 1998: 149).
The biological resources of human rationality are enhanced, furthermore, by Wuketits’s very helpful distinction between three levels of information processing that we all share as human beings:
1. The genetic level, which refers to the development (ontogenesis) of living systems. Genetic information can be transmitted from one generation to the next only by inheritance. Wuketits importantly stresses, however, that this kind of information processing is not to be confused with any kind of cognitive structure (cf. 1990: 55).
2. The preconscious level, where all animals require an information-processing system like the nervous system.
3. The conscious level, which is the level of rational knowledge that comes with consciousness. At the conscious level we encounter a level of intellectual information processing and self-awareness that represents the particular state of human consciousness (55).
What we see, then, in the work of both Plotkin and Wuketits is a hierarchy of information processing and also a hierarchy of cognition processing in the living world, with human rational knowledge emerging as the most sophisticated type of information processing to which we have access (cf. 55). Wuketits correctly stresses that information processing, and therefore the gaining of knowledge, is an important biofunction and indeed can be regarded as a characteristic that increases the organism’s fitness in a Darwinian sense. It is hard to imagine Darwin not agreeing with the formula “without cognition no survival” (cf. 58). And in the process of the evolution of knowledge, our interpreted experiences and expectations have a central role to play. I argued earlier that we as humans relate to our worlds through interpreted experience only, that our expectations are therefore always based on our interpreted experiences, and that these experiences in turn lead to new expectations (cf. van Huyssteen 1999). Evolutionary epistemology helps us to understand this connection as a result of long-term evolutionary processes. Changing experiences will obviously lead to changed expectations, and the cycle of experience and expectation in the individual is thus clearly the result of evolution.
In a broader sense the members of a population or a species have often managed to have the same experiences again and again. In the long run these experiences will be genetically stabilized so that any member of the species will be equipped with what we might call “innate expectations,” i.e., a “program” of expectations based on the accumulation of the experiences of a species. In this sense, again, evolution can be described as a universal learning process or cognition process (cf. Wuketits 1990: 68). The evolution of living systems thus implies an overall increase in cognitive abilities. In Wuketits’s words: “Thus we may argue, their evolutionary history has prepared animals to grasp at least some important aspects of the world—those aspects of the world that have been experienced by thousands of individuals during a long line of evolutionary processes” (68).
This process, also called phylogenetical memorizing, explains why kittens snarl at dogs, why many of us almost irrationally fear snakes, and so on. And although genetic information itself does not have the character of cognitive structures, such structures certainly can be transmitted genetically. An organism gathers experience through its sense organs and processes this experience through the nervous system. The development of these sense organs and the nervous system in an individual animal depends on specific genetic coding, and in this sense the peculiarity of experiencing aspects of the world is indeed genetically programmed (cf. 68f.).
Evolutionary epistemology thus reveals the process of evolution as a holistic, embodied belief-gaining process, a process that in humans, too, is shaped preconsciously. All our beliefs, and also, I would argue, our religious beliefs, thus have distant evolutionary origins and were established by mechanisms working reliably in the world of our ancestors, even if on a broader cultural level our beliefs and convictions are not explained by biological origins. This is the reason that the theory of evolution by natural selection cannot offer an adequate explanation for beliefs that far transcend their biological origins. It does, however, again underline the fact that cognition is a general characteristic of all living beings, and that human rationality can be fully understood only if its biological roots are understood. This is true even if human rationality at some important point transcends these biological roots. Precisely this important point has also been argued by Plotkin, who has shown that there is a clear evolutionary link between evolution on a genetic level and the evolution of our intellectual and rational capacities—a relationship, however, that can never be seen to be deterministic in any reductionist sense of the word (cf. Plotkin 1993: 176ff.). Our rational capacities are thus part of the process of evolution by natural selection, but cannot be understood deterministically. In Plotkin’s work, as became clear earlier, this has led to the bold suggestion that the evolution of human rationality becomes comprehensible only if on this level we reject the deterministic influence of genes (cf. also Hančil 1999: 17).
Peter Munz also gives an excellent interpretation of how not only so-called “a priori knowledge” but also “expectations of regularities in nature” are deeply embedded in the embodied, biological origins of human rationality. Someone like Immanuel Kant could not explain this, but the biological explanation shows that precisely the emergence of a priori expectations is shaped by the process of evolution. Taking seriously the biological origins of rationality should therefore finally convince us that there are regularities in nature, and that the a priori expectations we have of them are justified, and that their presence in our minds or nervous systems can be explained by the theory of evolution by natural selection (cf. Munz 1985: 29f.). The biology of evolution and regularities in nature thus go together very closely; without such regularities there could be no adaptation by natural selection because every organism that is adapted is actually adapted to the regularities of its environment. In this sense one could state that evolution by natural selection and adaptation would not be possible, let alone conceivable, without the assumption that there really are regularities in nature. It is precisely the organisms that have the right expectations that are selected, are “adapted,” and ultimately survive. In this sense one could also argue that we would not be here if there were no regularities in nature (cf. 30).
The more important question, however, is how it has happened that we know about these regularities and have expectations about them. The answer cannot be found in observation only, because without specific expectations we would not even be able to recognize these regularities. For Munz this is a strong argument for the plausibility of evolutionary epistemology, with its focus on the origin and evolution of our cognitive abilities, our powers of abstraction, and the ability to have expectations; evolutionary epistemology is a successful program precisely because of its wider scope and problem-solving ability (cf. 34). As for our powers of abstraction (i.e., our ability to recognize differences and similarities, and the capacity to abstract from individual observations and experience), this specific critical intellectual ability would not have been there if we did not have expectations of similarities and differences. Precisely this power to abstract is the result of evolution and the most basic form of adaptation to the environment (cf. 37).
Evolutionary epistemologists are therefore clearly making the strong point that there is a congruence between “external” and “internal” reality, i.e., between nature and embodied cognition. Any organism’s perception of certain aspects of reality is conducted by genetically programmed dispositions that are the results of evolutionary learning processes/experiences. Every species thus lives in its own cognitive niche. The human cognitive niche is constrained by the experiences of our phylogenetic ancestors, so that we have developed organs for the perception of only those aspects of reality of which it was imperative for our species to take account (cf. Wuketits 1990: 101). On this view it is very clear why an exclusively adaptationist view, according to which cognition (like all other biofunctions) has been just an adaptation to given, external structures, does not suffice as an explanation of the relations between cognition and reality. In the place of this, evolutionary epistemologists like Wuketits, Plotkin, and Munz are clearly suggesting a more holistic, systems-theoretical approach that makes clear that organisms are active systems and problem solvers, and that their cognitive capacities are constrained by their own architecture and not just formed by external requirements. From this follows the important conclusion, then, that there are indeed regularities in nature, and that those life-forms that are adapted (in the sense that they behave as if they expected the regularities in their environment) have a better chance of survival than those that are not. In this sense, through the process of natural selection and through heredity, the ability to have expectations of regularities is phylogenetically a posteriori: organisms “learn” from the environment, not as individuals but as a species, precisely because only those that are adapted to expecting the regularities survive and reproduce (cf. Munz 1985: 34f.). 
Ontogenetically the expectation of regularities is, however, a priori, and each individual and organism, as Karl Popper already argued (cf. Popper 1998: 47ff.), is born with “expectations” in place, which is why Wuketits could refer to our “phylogenetic memories.” Munz phrases it well: in this way the regularities that exist in nature are eventually transferred by the organisms that survive by natural selection, and in this way the “order of nature becomes the nature of order” (cf. Munz 1985: 35).
At this point it is important to realize that two distinct programs seem to be emerging in contemporary forms of evolutionary epistemology. One is the attempt to account for the cognitive mechanisms in animals and humans by extension of the biological theory of evolution to those structures of living systems that are the biological substrates of cognition, like brains, nervous systems, and sense organs (cf. Wuketits 1990: 5). The other is the attempt to explain human culture, including science and religion, in terms of evolution, but not in a sociobiological sense. Both programs are interrelated, but they help us make an important distinction between two levels of evolutionary epistemology: that of a natural history or biology of knowledge, and that of evolutionary epistemology as a metatheory for explaining the development of ideas, scientific theories, religious views, and theological models in terms of evolutionary models. Whether we approach this kind of metatheory from a religious, a specifically theistic, or a resolutely naturalistic viewpoint will of course determine whether there will be a legitimate place for religion and theological convictions in it. So, again we see that the choice is never just for science and against religion (or vice versa), but for or against certain comprehensive worldviews from which religion and science will emerge with either a significant level of compatibility or locked in serious conflict.
Implicit in the evolutionary explanation for the origins of human rationality is also evolutionary epistemology’s crucial contribution to what, in the first chapter, I called a postfoundationalist epistemology. Evolutionary epistemology breaks through the traditional modernist subject-object polarization and reveals the basis for a postfoundationalist epistemology by showing:
first, that all cognition is a function of active embodied systems that relationally interact with their environments;
second, that cognitive capacities are the result of these interactions between organisms and their environments, and these interactions have a long evolutionary history; and
third, that cognition is a process to be described not as an endless, accumulative chain of adaptations building on one certain foundation, but rather as a complex interactive process in which we move beyond our biological roots without ever losing touch with them (cf. also Wuketits 1990: 96). It is therefore clear that human knowledge is indeed constrained by biological factors, but that it also very much depends on cultural determinants. Precisely in an interactionist epistemology the cultural and biological determinants of knowledge would therefore be directly interrelated, and precisely for this reason our rational knowledge also goes beyond what is genetically fixed.
Our ongoing focus on human evolution helps us to realize that the acquisition of rational knowledge is the latest achievement in the long chain of the evolution of information processing, and in this sense it can be seen as an amazing evolutionary novelty. Wuketits sees the emergence of human rationality as such an epoch-making event that it has given evolution a new direction (cf. 108). This leads him to the perception that, because of their rationality, humans are unique among all living organisms. However, many who would otherwise agree with evolution sometimes hold that humans are unique because human rationality can be seen “supernaturally” (almost like Darwin’s contemporary, Alfred R. Wallace) as “God’s work” (cf. 108). In direct opposition to this so-called supernaturalist view, Wuketits argues that from the point of view of evolutionary theory there really is nothing supranatural about our species, although our unique status in nature certainly is uncontestable. But Wuketits wants to explain the emergence of life on earth, and of human consciousness and rationality, without resorting to any supernatural or “mystical factor” (108). But God, supranaturalism, and mysticism are all indiscriminately lumped together.
Wuketits’s basic argument, however, that human rationality and its emergence might be ascribed to a principle of integration and self-organization, i.e., a self-organizing brain providing for ever more and increasingly complex properties, does not have to be in conflict with religion, and with faith in God, at all (cf. van Huyssteen 1998: 153f.). He is certainly right in that if self-organization can be regarded as one of the most important characteristics of our universe, then human rationality may also be traced back to the formation or self-organization of brain mechanisms (cf. Wuketits 1990: 109). His main point, therefore, is that the brain alone is responsible even for the most sophisticated mental phenomena and that these phenomena are to be explained as particular expressions or properties of the brain. From an evolutionary point of view, then, the human brain is an information-processing system that has increased fitness in the human race, since information processing generally has a certain survival value for any organism.
The simple message of evolutionary epistemology thus is that the information that living organisms get from the world is sufficiently accurate to allow for survival and reproduction. The world in which we live seems to be intelligible, at least to some extent, and the structures of this world do not seem to exist only in our imaginations. As epistemological fallibilists, we also know that even in science it is never possible to arrive at a complete and definitive understanding of reality. But precisely the epistemological ramifications of the process of evolution allow us to hypothesize about the reality of our world. In this sense most evolutionary epistemologists would claim that the ability to arrive at a relatively accurate understanding of our world is in a sense the ultimate survival value (cf. Hančil 1999: 21). Human rationality, when defined as our ongoing quest for the deepest and most accurate level of understanding of reality, thus emerges from the heart of the process of evolution by natural selection.
B. Evolutionary Epistemology and Religion
A theological redescription of the ramifications of evolutionary epistemology for human rationality and culture at this point will clearly reveal the possibility of exciting links between theology and the sciences. If our genes do not completely determine our culture and our rational abilities, then it might be as reasonable to expect that our genes, our cultures, and our rational abilities may also not completely determine the enduring and pervasive need of humans for symbolic thought, metaphysics, and ultimately life-transforming religious faith. This certainly is no argument for the existence of God, but it is an argument for the rationality of religious belief in terms of a nondeterministic theory of evolution by natural selection.
Even Wuketits would argue that the need for metaphysics, and metaphysical explanations, seems to be a general characteristic of all humans (cf. Wuketits 1990: 117). In all human societies we find metaphysical systems that include notions of life after death, the “other” world, etc. As to how this is to be explained, Wuketits gives a surprisingly clear answer: metaphysical belief is a result of particular interactions between early humans and their external world and thus results from specific life conditions in prehistoric times (cf. 118). My question, however, would be: Why should we, so suddenly and only at this point—the development of this metaphysical aspect of our cultural evolution—so completely distrust the phylogenetic memory of our ancestors? In the end, this version of evolutionary epistemology reveals a naturalist and reductionist prejudice against the very human propensity for religion, as well as a very reductionist view of religious faith itself.
This prejudice is clearly revealed when not only a reductionist view of religion, but also an inadequate and scientistic view of human rationality, suddenly surfaces. Wuketits now defines metaphysics as the human need for metaphysical beliefs, including religion and all other irrational worldviews (cf. 118). He skims over this superficial treatment of religion by remarking that human beings, as rational beings, are obviously also capable of irrationality, for ever since the emergence of rationality humans have invented irrational belief systems whenever they lacked “rational” explanations and then projected them onto their worlds. Here Wuketits follows rather blindly the popular idea of anthropomorphistic projection, in which humans cannot imagine that there might be processes in the universe without any purpose, so they invented the purposeful universe according to their own teleological actions. He seems to want to explain away all religion by seeing all metaphysics as constrained by emotions and illusions reaching back to the living conditions of prehistoric humankind (cf. 118).
But if metaphysical beliefs, on this naturalistic view at least, do not really tell us anything about “first causes” or “last purposes” (i.e., God), but rather about our own propensity for such beliefs (cf. 118), why did they evolve on such a massive scale in the history of our species? And why should we distrust our phylogenetic memories only on this point? Obviously, my arguments here should not be seen as an attempt to reconstruct an argument for the existence of God, but only as making a case for the naturalness of religion, the meaningfulness, necessity, and rationality of religious belief, which cannot just be explained away rather naively by seeing it as “invented” earlier by our sometimes wildly irrational species. I would much rather argue that Darwin (cf. 1981: 68f.) was right in his thesis that metaphysical and religious beliefs in humans were related to evolutionary processes, and therefore could be explained like any other mental capacity in the light of human evolution. But this would make invalid, or help to explain away, only exotically abstract and excessively constructivist notions of the divine, or God, not necessarily theistic belief as such, and certainly not the phenomenon of religion. Wuketits is right on one important point, and that is that our “marvelous brain” has indeed given rise to creative imagination (cf. 1990: 119), but why would his earlier relational, or interactionist, epistemological viewpoint (his “systems approach” that even included a weak form of hypothetical realism; cf. 73ff.) not now again be plausible in at least explaining the existence of religion(s) too?
Wuketits finally has no “rational” reason for explaining away religion as “irrational.” A resolute naturalism, thinly disguising a rather positivistic view of natural scientific rationality, not only seems to be inconsistent with his argument that biology can never fully explain culture, but also blinds him to what may lie beyond a strictly scientific rationality and may be only tentatively caught through imagination and religious faith.
Anthony O’Hear (2002) has argued along similar lines that while the theory of evolution is successful in explaining the development of the natural world in general, it is of more limited value when applied to humans, human nature, and culture. It is rather puzzling that O’Hear, in his evaluative critique of evolutionary epistemology, never mentions or discusses the seminal contributions of Plotkin and Wuketits to this debate (cf. 50-83), and therefore fails to see how closely these evolutionary epistemologists relate to his own views on these matters. O’Hear has argued that because of distinctive traits like consciousness, self-awareness, reflectiveness, and rationality, we humans indeed have the ability to take on cognitive goals and ideals that cannot be justified in terms of survival-promotion or reproductive advantage only. Therefore our very typical quest for rational knowledge, but also our moral sensibilities, and our aesthetic appreciation of beauty, while all deriving in important ways from our biological nature, once they emerge cannot be analyzed only in biological or evolutionary terms. In this sense, then, we clearly transcend our biological origins, and in doing so have the ability to transcend what is given us in both biology and culture. O’Hear, however, wants to push even further: we are prisoners neither of our genes nor of the ideas we encounter as we each make our personal and individual way through life (vii).
Closely resembling the evolutionary epistemologists we encountered in this chapter, O’Hear sees as the first and most distinctive trait for humanness the fact that we human beings are material beings. We are, first of all, embodied beings, and as such what we do and think and feel is conditioned by our embodiment. In this sense, in our very typical ways of obtaining knowledge, in our capacity for moral awareness, and in our perceptions of beauty (and, I would add, in our ancient religious disposition), our materiality is presupposed and exploited in a host of ways. Our senses condition and filter our knowledge in interaction with the material world, and our moral, aesthetic (and religious) sensibilities are intimately linked to the perception of visible, tactile, and aural things, and to embodied suffering and pain due to the limitations of our very embodied physicality (1f.). It is this materiality of our existence that has come about through the process of biological evolution.
It is also obvious, however, that in various respects we do not behave like most material objects. We are self-aware and critically conscious, and although degrees of consciousness are something we share with a fairly large proportion of the animal world, we have developed the ability, over and above mere consciousness, to think critically and discursively, to be aware that we are so thinking, and to express these thoughts in language and other symbolic forms. In fact, the presence of thought, reflection, and self-conscious belief is what makes human activity different from the conscious but unreflective behavior of nonlinguistic animals (cf. 49). It is precisely in the expression and development of this propensity for self-conscious thought that we have produced a large diversity of cultural artifacts and systems by which our lives are surrounded, conditioned, and made meaningful. Our comparatively large and complex brains clearly play a crucial role in the production of thought and culture. Nevertheless, human thought and culture do exhibit certain important characteristics that are rare if not unique in the rest of nature (2f.). It is against this background that we have to look at the very human abilities for reflective knowledge, moral awareness, aesthetics, and, I would add, a propensity for religious awareness and religious belief. And it is these propensities of the human mind that cannot be explained by naturalistic evolutionary accounts of human nature and behavior only.
O’Hear does present us with an interesting and important perspective on religion. The inbuilt ambivalence of human reason, with its very specific limitations and at the same time its transcendence as a process, has the further ability of finding concrete expression (some would say fulfillment) in religion and religious belief (26f.). What this means for O’Hear is that there is in our very nature as self-conscious but finite beings an ontological tension that naturally expresses itself religiously, and comes close to what I have called the “naturalness of religion.” Therefore religious awareness, with its intimations of human limitations and our natural disposition to try to overcome these limitations, mirrors very precisely our nature as reflective thinkers. In this sense religious belief does not simply understand and express the ambivalence of the limitations and transcendence of our rational abilities. It also sees our drive toward something transcending human powers as reflected in the fabric of the universe, a reality that is greater than, and transcends, empirical reality. Religious belief, therefore, implies that there is this transcendent aspect to reality, and that we humans are part of, and related to, this dimension of transcendence. For religious believers it will be natural to interpret the emergence of consciousness and self-consciousness as revelatory of something deep in the universe, something inexplicable by physics, something behind the material face of the world (cf. 27).
Like all evolutionary epistemologists, O’Hear takes seriously the fact that important epistemological conclusions can be drawn from the fact that we human beings are products of evolutionary development. While our activities, including all forms of our knowledge, are certainly rooted in our biological inheritance, as human beings operating in a human-cultural world we have taken on goals and activities whose aims and rationales cannot be explained in biological terms only (50). It is exactly on this point that O’Hear is very close to Plotkin and Wuketits. While evolutionary epistemology tells us that it clearly is biologically advantageous to have survival-promoting beliefs, human knowledge is also about more than survival-promoting beliefs. It is in this sense, when considering the nonadaptive aspects of our cognitive drives, that we can say that in human knowledge, in moral awareness, in aesthetic appreciation, and in religious awareness we transcend our biological origins. This perspective enables us to see human cognition as the mediator between biology and culture, and cultural evolution as requiring explanations beyond the biological theory of evolution. It is in this sense of the word that Wuketits could argue that biology offers the necessary conditions for culture, but not the sufficient conditions (cf. Wuketits 1990: 31). Also for O’Hear, the study of human evolution can clarify the preconditions of cultural evolution, but it cannot explain the particular paths that human culture will take through rational knowledge, moral awareness, aesthetic appreciation, and religious propensity. Evolutionary epistemology, therefore, clearly functions on two levels: that of a natural history or biology of knowledge, and that of a metatheory for explaining the development of ideas, scientific theories, religious views, and even theological models in terms of evolutionary models.
And as eventually we turn to notions of human uniqueness in paleoanthropology and in theology, maybe the most important lesson learned from evolutionary epistemology is that we are embodied in and interactive with the world, and taking this fact as a premise in our epistemology will pull us away from pure forms of idealism and antirealism, as well as from naive realism, which downplays our embodiment and treats all our knowledge as the result of perceiving an already divided up and categorized world. O’Hear, when discussing this kind of embodied, interactive knowledge, concludes by making a truly strong postfoundationalist claim: “[T]here is no theory of the world, no knowledge which is not the result of interaction between organism and world, and which is not, in its classifications and conclusions, coloured by the interests and perspectives of the organism” (2002: 99).
Precisely because experience, and the experience of embodied living, is the basis of all our knowledge, our species’ sensory knowledge and its knowledge by scientific inference cannot suddenly be seen as the only “rational” form of knowledge, with religious knowledge isolated as a type of knowledge gained only by “mere belief,” and therefore irrational (cf. Wuketits 1990: 121). Amazingly this point of view is actually supported by Wuketits’s next argument. If we should ask whether we are justified in speaking of cultural evolution as we do of biological evolution, the answer, as we saw in our discussion of Plotkin’s work, should be yes. We are not only justified in doing this, but it is necessary, since there is one common trait here: both organic and cultural evolution can be regarded as complex learning processes, with human cognition as the crucially important mediator between them. Culture can therefore be understood as the most sophisticated learning process requiring particular modes of explanation and a particular type of evolutionary epistemology that goes beyond strict Darwinism. Wuketits, therefore, correctly argues that although there are biological constraints on cultural evolution, culture is not reducible to biological entities. Cultural evolution indeed depends on specific biological processes, and our cultures therefore are part of a grandiose universal natural history, but cultural evolution, once it started, obeyed its own principles and gave human evolution an entirely new direction, even acting back on organic evolution (cf. 130f.). In chapter 4 we will see how this recognition that the evolution of human cognition functioned as a mediator between biological and cultural evolution will open up the exciting possibility of acknowledging the crucial role of paleoanthropology in the wider debate on human origins.
Only through paleoanthropology and cognitive archeology will we be able to grasp something of the cognitive abilities of our remote ancestors by studying the cultural expressions of their amazing, symbolic minds.
Wuketits’s arguments strongly support the contention that it would be a serious fallacy to use the principles of biological evolution to explain cultural evolution, let alone the evolution of religion. Certainly the necessary condition for the emergence of cultural evolution was biological evolution, and particularly the evolution of the human brain. Cultural evolution has channeled the creative human brain, and Wuketits himself has argued that although the human brain is the producer of all culture, this does not mean that the particular pathways of cultural evolution are prescribed by any single brain mechanism. Hence, cultural evolution has its own dynamics, going beyond the dynamics of biological, organic change. Exactly on this point evolutionary epistemology differs seriously from the genetic determinism of sociobiology. But for evolutionary epistemology to be truly nonreductionistic and nondeterministic, we should take seriously the argument, made even by Wuketits, that we humans are in a sense genetically disposed to religious and metaphysical beliefs (cf. 155, 199).
Holmes Rolston argued a similar point very clearly. In nature information travels intergenerationally through genes, while in culture information travels neurally, as people are educated into transmissible cultures. In nature the coping skills are coded on chromosomes. In culture the skills are coded in craftsmen’s traditions, in technology manuals, or in religious rituals, texts, and traditions (cf. Rolston 1996: 69). This information transfer on a cultural level can be several orders of magnitude faster than on a genetic level, and can in fact leap over genetic lines. As human beings we have developed a great diversity of cultures, and each heritage is historically conditioned, perpetuated by language, and conventionally established precisely by using symbols with locally effective meanings. Therefore, while animals adapt to their niches, human beings adapt their ecosystems to their needs. For this reason animal and plant behavior are never determined by anthropological, political, technological, scientific, ethical, or religious factors, and in human evolution natural selection pressures are finally relaxed in the emergence of culture (cf. 69). From this it naturally follows that two of our most enduring, most meaningful, and most dominant cultural achievements, science and religion, are both products of this remarkable historical development; they are intimately entwined with the process of biological evolution, although ultimately not determined by it.
In his Gifford Lectures (1999) Rolston developed further the convincing argument against any easy reduction of religion and ethics to biology. In this specific argument he takes on both ultra-Darwinism and the kind of sociobiological orthodoxy that ultimately would want to “naturalize” not just science, but also ethics and religion. The particular focus of his book is the emergence of complex biodiversity through evolutionary history, with as its focal point the remarkable genesis of human beings with their capacities for science, ethics, and religion. The most crucial conceptual task of the book, however, is to relate cultural evolution to natural/organic evolution, and to account for the way values are created and transmitted in both natural and human cultural history. Rolston acknowledges that Darwinian biology is a brilliant achievement, all the more so when coupled with that of genetic and molecular biology. Biology has been less successful, however, in relating itself to culture (cf. Rolston 1999: xi). It is on this point that Rolston argues that there is a genuine novelty that emerges with culture, and that while it is important to see how biological phenomena give rise to culture, it is just as critical to realize how culture exceeds biology.
Rolston finds the “uniqueness” of our species in our remarkable ability to be conscious, self-aware, intelligent, and capable of rational decisions. Unlike other animals, we humans are not just what we are by nature, but we come into this world by nature quite unfinished and then become what we become through culture. As Rolston puts it, the products of culture are myriad—languages, rituals, tools, clothing, houses, computers, and rockets—and are directly tied to ideas, and the home of ideas is the human mind (cf. xii). Other higher animals may also have minds, but the human mind is the only mind that permits the building of complex transmissible cultures. Humans are indeed the only species that think about their ideas, that teach their ideas to the next generation, and that make creative ideological achievements that can be transmitted from generation to generation. Some of these most outstanding achievements of human culture are science, art, ethics, and religion, achievements that rightly could be called emerging phenomena in culture. Historically ethics, art, and religion have been present in every classical culture, oral or literate, but science—in its current form at least—is a relatively late arrival in literate cultures. And among these cultural achievements of our species, science, ethics, and religion are the principal carriers of value (cf. xii).
This argument is enhanced by what evolutionary epistemologists are arguing about objectivity, and about the realist quality of our experiences. Munz even believes that biology can “help” philosophy by providing the missing link in arguments about objectivity. He does this by carefully unpacking the way Popper’s philosophy of science was embedded in Darwinian evolution. If we accept that we are here in the world, we must accept that the world is the sort of world that has brought about our existence. Our presence, therefore, is not only a guarantee of an objective reality as the result of which we are; it is also evidence of the fact that objective reality must be of a certain kind, for if it were different we would not be here (cf. Munz 1985: 237). Certainly for Popper, and for Munz, the theory of evolution by natural selection implies simply taking the presence of an objective world for granted. In this sense, as we have seen, evolutionary epistemologists speak of a “hypothetical realism”; the world we want to explore epistemically is the kind of world that has produced the sorts of beings who would want to explore it (cf. 238).
Implied in evolutionary epistemology’s notion of hypothetical realism is that, instead of asking what kind of mind is required to know the world, we should rather ask what kind of world the world must have been to produce the sort of mind we have. The “realism” involved here is “hypothetical” because, since we have embodied minds, it is a reasonable hypothesis that there must be a real world that, by a process of evolutionary selection, has produced our minds (cf. 242). This hypothetical realism is a realism without correspondence theories, and without the kind of empiricism that would claim only sense experience as a foundation of all knowledge. Munz argues that Darwin also used the argument that one need not follow a plan to bring about the achievement of design (256). So, Darwin was not in the first place arguing against design; the sheer constraints of the environment on evolving organisms were bound to produce design. He did, however, attack the argument from design, i.e., the belief that since there was design in evolution, there must have been a plan, a divine providence. Evolutionary epistemologists have therefore been able to characterize evolution as a process that, though lacking intentionality and foresight, is nevertheless creative and productive of design (cf. 256).
Like our ancestors in the plant and higher animal kingdoms, we humans store the kind of correct knowledge that is conducive to our survival. In our prehuman ancestors this knowledge was stored in the gene pools of populations and was species-specific, so that each species had a different knowledge of the reality it lived in because it was a temporarily successful adaptation to a specific environment. In human beings this knowledge is partly stored in the gene pools and partly held collectively in the memories and traditions of each society. This knowledge is not private opinion, or the convictions of individuals, because human beings know what to eat, what foods are poisonous, how to conceive babies, how to deliver them, how to rear them, and how to find food. Without such “correct” knowledge, handed down from generation to generation, human beings could not survive. With the enlargement of the brain and the development of the neocortex, this knowledge became consciously held knowledge, and with consciously held knowledge humans can now learn by trial and error (cf. 295).
We have now seen that the argument from evolutionary epistemology confirms the rather dramatic, mediating role that the evolution of human cognition plays between biology and culture, and that the evolution of human cognition strengthens the argument for the plausibility of the naturalness of religious belief. The final question is whether the nature of this process of complex cognitive evolution, revealed as interactive and epigenetic, and the hypothetical realist claims that flow from this on a philosophical level, tell us anything about the realist claims of some religions.
If we limit our understanding of reality to the more naturalist/reductionist views of an evolutionary epistemologist like Wuketits, then religious faith should be interpreted as an adaptation that has become obsolete and irrational, even if it might have been beneficial for our remote ancestors. Czech scholar Hančil has recently made a strong theological argument that this kind of reductionist objection can be raised only on the assumption that God does not influence our world in any way, and that all processes in this world are therefore determined by natural laws only. For Hančil, breaking out of the circularity of this scientistic argument could open our eyes to the fact that God does in fact influence events in our world. And if God does influence events in this world, then God’s influence would be part of the data that the adaptive process uses for the selection of available variants. Against the background of this strong view, Hančil wants to argue that it may be logical to claim that such a cognitive adaptation would have given us humans the abilities also to reason about the reality of God (cf. Hančil 1999: 240; cf. also Hančil and Ziemer 2000: 11ff.).
The critical question we have to ask, however, is whether this stronger realist claim still is consistent with evolutionary epistemology’s implied hypothetical realism. Evolutionary epistemology has certainly shown that the principles of evolutionary development can and should be extrapolated to the evolution of human cognition, and to its bridging function between biology and culture. Precisely for this reason, as we will see in chapter 4, paleoanthropology will take a central role in the wider discussion of the evolution of human cognition in this project. Not only is there overwhelming evidence of the interaction between physical and cultural evolution in the evolution of the human mind, but we can understand the modern human mind only by understanding the prehistory of the human mind (cf. Mithen 1996: 66ff.). This is the reason why a scientist like Ian Tattersall can claim that we humans are not just more intelligent than our ancestors, but are differently intelligent (cf. Tattersall 1998: 32), intelligent in ways that allow us to manipulate the environment around us in a qualitatively unique manner. In this sense evolutionary explanations of the development of the human brain during a long period of 5 million or 6 million years point to the uniqueness of our human cognitive capacities (cf. Hančil 1999: 240).
In chapter 4 I will return to this argument, with a special focus on the work of Steven Mithen (1996), Ian Tattersall (1998), Terrence Deacon (1997), and David Lewis-Williams (2002). The issue of human uniqueness has also functioned prominently in the work of Christopher Wills (1998), who has argued provocatively that the evolution of Homo sapiens is actually accelerating, and that our power over nature has done nothing to halt evolution’s unrelenting march. Our physical evolution is certainly continuing, according to Wills, perhaps at the same rate it always has, but it is cultural change that is exploding and laying the groundwork for a far more rapid evolution of our mental abilities in the future. What is unique about us as humans is precisely the speed and power with which evolutionary processes have acted on our minds, and even the ability to devise religion and powerful mythology is due to the same evolutionary forces (cf. Wills 1998: 6f., 202f.).
Merlin Donald (1991; 2001) has also argued that the bridge from biology to culture is necessarily cognitive, and that a complete evolutionary proposal for human evolution should address the cognitive level and its capacity for cultural change (cf. 1991: 10ff.). Instead of seeing it as hovering free from its biological embeddedness, culture is connected closely to the process of evolution via the cognitive capacities of the brain. In this sense, then, human cognition will prove to be the link or mediator between cultural and biological levels, a claim that supports the redefinition of evolution as a knowledge-gaining process, as suggested by evolutionary epistemologists like Plotkin and Wuketits.
Hančil, in his theological evaluation of these claims of evolutionary epistemology, has correctly argued that the evolution of religion and religious beliefs must be closely dependent on the development of human cognitive abilities (cf. Hančil 1999: 243). This does not mean, of course, that religion can be reduced to what can be ascertained through paleoanthropological or archeological research. Religion and religious belief, on the contrary, are clear examples of how the development of the human mind distinguishes humans from other species, even from our closest relatives, Homo neanderthalensis (cf. 244). On this evolutionary epistemological view religion and religious belief become part of what in essence it means to be human. Hančil’s conclusion is supported by an argument by Tattersall, who has stated that precisely because every human society possesses religion of some sort, complete with origin myths that purportedly explain the relationship of humans to the world around them, religion cannot be discounted from any discussion of those human behaviors we see as unique (cf. Tattersall 1998: 201). And if human cognition is the link between biology and culture, as Donald and Hančil have argued, and if, as I am arguing, it is only through paleoanthropology that we can come to a fuller understanding of the cognitive capacities of our earliest ancestors, then Philip Hefner’s claims about our distant evolutionary past become theologically relevant: in the Upper Paleolithic period, our earliest human ancestors met a challenge and confronted it through the formation of myths and rituals, and these ancient myths and rituals must have organized the kind of information that was necessary for survival through elaborate cultural systems (cf. Hefner 1993: 278; cf. Hančil 1999: 244).
This is where Hančil wants to make the theological point that it is not enough to conclude that the human mind has a unique ability for religion and myth. Not just the use of myth, but also the contents and messages of particular myths must have greatly influenced the behavior of our ancestors, as they still do for us today. For Hančil this means applying evolutionary principles to the development of the structure and also the contents of human religious reasoning. In fact, evolutionary principles must be used for discerning the validity and appropriateness of the content of our religious traditions (cf. 1999: 254). At this point Hančil falls back on the promise of evolutionary epistemology and its principle of hypothetical realism. The interactive systems theory approach of evolutionary epistemology implies that every development is “caused” by interacting with the environment. In culture and religion, however, biological explanations would not suffice, and a much broader picture is needed. In Christian theology even more is needed, and Hančil now includes God as one of the explanatory factors of evolutionary reality. He claims that religion and culture can be interpreted as evolutionary adaptations to the external reality of God, and that these religious beliefs, correctly reflecting the nature of God, are beneficial to believers (cf. 259; also Hančil and Ziemer 2000: 16). In this strong if not startling claim, the principles of hypothetical realism are now stretched to comprehend a divine or ultimate reality to which religion, also the Christian faith, has responded responsibly. On this view our bodily structures, as well as our cognitive capacities, culture, and religions, are the products of an evolutionary process that responds to the reality of our environment, now conceived of in the widest possible sense to include God.
I believe Hančil has given us serious arguments for why theologians should engage in an interdisciplinary dialogue with evolutionary epistemology precisely on the issue of human uniqueness. Evolutionary epistemology can become a tool for theological reasoning (cf. Hančil 1999: 271), but clearly in a much more limited sense, and with more modest though still profound claims. Indeed, in the transversal connection with evolutionary epistemology, theology can most probably find, in the rich notion of cognition, shades of an important historical trend in theology to closely associate human distinctiveness with notions of rationality and cognition, as will become clear in chapter 3. Evolutionary epistemology will challenge theology, however, to take seriously the implications of the biological origins of an embodied human rationality, as well as the embodied history of the evolution of human cognition. This will have serious consequences for reductionist and overly abstract theological notions of the imago Dei.
In the interdisciplinary conversation between theology and evolutionary epistemology, theology should take seriously the evolution of human cognition, but the evolutionary epistemologist also should learn to appropriate the critical antireductionist message that the theologian ought to voice. In being open to what it could learn from evolutionary epistemology about human uniqueness, theology could then speak out forcefully and appeal to epistemologists, in terms of the principles of their own evolutionary epistemology, to take seriously the phylogenetic memories of our remote ancestors on the origins of imagination and religious awareness. If the origin of the human mind is indeed closely tied to the kind of cognitive fluidity that includes symbolic and mythical dimensions, as we will see in chapters 4 and 5, then the origins of our cognitive behavior are not fully understood unless we also take seriously the origins of religious behavior. On this view, then, the prehistory of the human mind points to the naturalness of religion, and supports the broader argument for the rationality and plausibility of religious belief. This will not provide theologians a generic argument for the existence of God, but it might give more credibility to the way they express themselves contextually in presupposing the existence of God within the boundaries of the discipline of theology.
The principles of evolutionary epistemology, liberated from scientistic reductionism, can support these claims for the plausibility of religious belief as part of the remarkable cognitive capacity that contributes to our notions of human uniqueness. However, it would be too much for a theologian to go beyond this minimal transversal connection with evolutionary epistemology and use the argument from hypothetical realism to claim maximally the existence of God, and of God as an explanatory factor in the evolution of human cognition. Hančil’s otherwise excellent and promising interdisciplinary argument takes this one step too far by first invoking the existence of God in the midst of an epistemological argument, not recognizing that at this point the interdisciplinary, transversal moment has passed. The epistemological lesson learned here is that success in the evolutionary struggle, considered on its own, does not guarantee the truth or adequacy of beliefs or perceptual representations (cf. O’Hear 2002: 60f.). It is a huge overstatement, therefore, to try to make the argument theological by including God as an explanatory factor. It is epistemologically problematical because it overstretches the capacities of hypothetical realism, and it is theologically problematical because it does not take into account the power of imagination, and the spectacular ability of the human mind to delude itself. Positing God as a reality factor in this generic way also weakens theology and makes it vulnerable to interdisciplinary attack. On the other hand, a theology that is already firmly embedded in the history of its own tradition of beliefs, including what it believes about God, may indeed be liberated to find its public, postfoundationalist voice precisely by discovering that the reality claims we make in theology are resonant with, and reinforced by, the defining role that the emergence of religious awareness played in the evolution of human cognition.
In this chapter it became clear that the very human obsession with self-definition is directly related to the evolution of human cognition, and to our distinctive capacity for self-reflective awareness and self-consciousness. The following also became clear.
1. In our search for an adequate definition of humanness, no single trait or capacity like intelligence or rationality should ever be taken as the definitive word on human uniqueness.
2. Talking about human uniqueness, whether in theology or the sciences, always implies a moral dimension; defining ourselves as “fully human” always both includes and excludes others, and thus contributes directly to the sociopolitical and cultural contexts we create and live by. As a direct result of this very contextual nature of all our definitions of human uniqueness, definitions of what it means to be “human” or “fully human” have not only shifted and changed with time, but have also been influenced deeply by worldviews and moral perspectives. What this implies for this interdisciplinary study on human uniqueness is a postfoundationalist understanding of the fact that all our disciplinary ventures and reasoning strategies, in this case specifically the development of our ideas in paleoanthropology and theology, should be understood in the context of their times, i.e., within the historical-cultural network of ideas in which they are, or were, embedded.
3. In this chapter it was argued that Charles Darwin’s conception of human identity and human nature, with its very specific focus on the evolution of human cognition, still functions as the canonical core of the ongoing discourse on human evolution. I also argued that the powerful galaxy of meaning emerging from Darwin’s views on human identity, maybe more than ever before, is shaping our current views on the evolution of human cognition. The epistemic implications of these views on cognition are most clearly represented today in some forms of contemporary evolutionary epistemology. Finally, teasing out the interesting conceptual route from Darwin to evolutionary epistemology has now set the interdisciplinary stage, so to speak, and has created the necessary space for a dialogue between theology and the sciences, quite specifically on human uniqueness, and in doing so has provided us with very specific transversal links to paleoanthropology. In fact, it is precisely evolutionary epistemology’s focus on the evolution of human cognition that revealed embodied human cognition as the mediator between biology and culture, and in so doing provided paleoanthropology with a crucial role in the wider debate about human origins.
4. Evolutionary epistemology has proved to be very fruitful in investigating the consequences that the theory of evolution by natural selection may have for philosophical epistemology, for our theories of knowledge in different disciplines, and for the origin and development of our cognitive structures, our cognitive maps and abilities (cf. van Huyssteen 1998: 134). As such, evolutionary epistemology facilitates precisely the kind of postfoundationalist challenge for a more comprehensive and integrative, holistic approach to human knowledge that I argued for in chapter 1. Moreover, the qualified form of hypothetical realism implied by evolutionary epistemology reveals that embodied human cognition is not merely a form of illusion or cultural construction. In chapter 5 we will see how transversal connections with neuroscience and paleoanthropology may help us take this argument further in an attempt to understand the spectacular material legacy of cave art that we inherited from our Paleolithic ancestors, and the possible nature of the first religion(s) they might have practiced in those remote, dark caves in southwestern Europe. This will reveal that it is precisely in this sense that human phylogenetic memory is expressed in our genotype as well as in our predisposition to rely on a religious framework when searching for ultimate meaning in life. In this sense one could indeed say that, even though we may aspire critically to understand the cultural pressures that have been influencing metaphysical views and religious convictions in the course of past millennia, our deepest beliefs and firmest convictions reach back further than any cultural influence currently shaping their expressions (cf. Hančil and Ziemer 2000: 15).
It is in exactly this way, also, that evolutionary epistemology facilitates a postfoundationalist argument for the rationality of religious belief, and for the fact that religion, and religious intelligence, has always been a response to the holistic search for meaning in our experience.
5. A postfoundationalist approach to evolutionary epistemology allows us, then, to understand how religion and religious knowledge should be integrated into a broader, holistic epistemology. Franz Wuketits has correctly argued that evolutionary epistemology necessarily leads to an interdisciplinary account of our epistemic activities (cf. Wuketits 1990: 4ff.). It does so because this very comprehensive, holistic approach to the problem of embodied cognition by definition takes us beyond traditional disciplinary boundaries.
6. Evolutionary epistemology thus facilitates a multidisciplinary approach to the central theme of this study, namely, the unprecedented contemporary focus on what it means to be human. It has already become clear that various sciences, but also evolutionary epistemology itself, are now building on and extending our commonsense perceptions of human uniqueness, and by doing that are also crossing over into some of the most traditional theological territories. Christian theologians, like Christians everywhere, traditionally believe that there is something absolutely unique about being human, precisely because humans are believed to be created in the image of God, as Genesis 1:26-28 so clearly states. However, disciplines like evolutionary biology, paleoanthropology, archeology, genetics, artificial intelligence and robotics, neuroscience, cognitive science, cognitive psychology, and evolutionary epistemology (to name just a few) are now all directly challenging any unnuanced notion(s) of human uniqueness, and by implication, therefore, this traditional Christian doctrine of the imago Dei.
The important question now will be, how should theology respond to the way the sciences are challenging, and even deconstructing, the notion of human uniqueness? Would any Christian theology that ventures forth bravely into interdisciplinary dialogue still be able to maintain, for instance, that there is some deeper, divine purpose to being human, and by implication then also to human evolution? As indicated earlier, my answer to these and related questions will unfold against the background of the brief theory of traditions developed in the first chapter. I believe the history of ideas of theological reflection shows that theological traditions have always been extremely sensitive to the culture(s) in which they were embedded. I will suggest, therefore, that we get at this problem by looking at how theological traditions, specifically the doctrine of the imago Dei, have evolved and responded to cultural pressure. In this specific case study on human uniqueness, I will ultimately attempt to facilitate a transversal intersection of different disciplinary voices as theology enters into conversation with the sciences of paleoanthropology and archeology.
What kind of dialogue will result, and what kinds of challenges will be revealed, if we happen to uncover some common concerns, some overlapping interests between these very diverse reasoning strategies? Finally, and maybe most importantly, is this interdisciplinary process of trying to revision the notion of the imago Dei helped, or hindered, by voices from the sciences of paleoanthropology, archeology, and neuroscience?