One of the records we have set up in this marathon series of lectures is to have reached number nine without ever actually saying what we are talking about. That is, nobody has yet attempted to define ‘mind’. And I'm not going to try to break that worthy record. The word can be used in a number of different senses; either in a quite restricted sense, referring only to the allegedly rational mind of man, or in a broader sense, referring to mental activities of other animals, and so on. I'm saying this just to warn you that I'm going to use it in rather a broad sense, to cover mental activities, or mind-like activities in other animals as well as man.
I agree with John Lucas that it would be frustrating to restrict the use of the word only to the conscious, language-using mind, although one could, of course, write one's dictionary in that form if one wished to. However, the conscious and language-using mind has certainly evolved from non-linguistic mental activities in less highly evolved forms of life. To restrict consideration of mind to its most highly evolved forms is to commit the opposite mistake to reductionism, against which Christopher Longuet-Higgins warned us in his first lecture. We may not always be able to explain the complex by reference only to the simpler—as he reminded us—but I think it is even more true that we cannot hope to understand the complex if we leave the simpler altogether out of account. I shall, therefore, in this lecture start my consideration of the mind by making some remarks about mental activities in less highly evolved organisms than man.
To the biologist the most fundamental question to ask about any aspect—whether it is an activity or an organ—of an organism, is often put in the form ‘what is its function?’. This is a shorthand way of asking, in the first place ‘how is this particular activity or organ involved in the whole network of activities which enable the organism to keep alive?’ And since we know that all living organisms are involved in evolution, that question in itself implies ‘what role does this activity or organ play in enabling the organism to reproduce and contribute to the evolutionary history of the population of which it forms a part?’ The basic point of Darwin's theory of natural selection, on which the whole of biology is based, is the simple argument that any activity which does not contribute, or contributes ineffectively, to the production of offspring in later generations will in the course of time eventually disappear for that very reason. All activities and organs of biological organisms can therefore be said to serve well or ill the ‘goal’ of evolutionary transmission. They are all subjected to natural selection, which over the generations tailors them so that they serve this goal better and better.
The goal of leaving offspring throughout periods of evolutionary time can, of course, only be achieved by interactions between organisms and their environment, since an organism must continually obtain new supplies of nutrients, energy and so on, and often must escape from enemies. To pass the test of natural selection an organism must, therefore, always adapt; that is, interact with the environment in which it finds itself in some way which enables it to persist and eventually to reproduce. One can say that any process of adaptation—whether it is a chameleon changing its colour to blend with its surroundings, or a man living in the high Andes growing a larger pair of lungs and a higher concentration of red blood corpuscles in response to the lower oxygen pressure—is brought about by a goal-seeking mechanism, a mechanism which seeks the goal of evolutionary perpetuation.
From this point of view, the more highly evolved types of nervous activity, which control complex behavioural responses to stimuli which impinge on the sense organs from the external world, are subtle and more or less powerful mechanisms for adapting to a variable environment. Evolution has gradually brought into being increasingly complex and effective adaptive mechanisms of this kind. Wherever along this evolutionary sequence we wish to start using the word mind, it will still be true that what we are talking about is fundamentally a mechanism for achieving the goal of self-perpetuation or the leaving of offspring over long periods of time. The acquisition of language, at some point in the evolutionary history of man, introduced a new mechanism for adaptation—I think we would all agree, a more powerful mechanism than any other animal has at its disposal to deal with the environment in all its complexity. (I am not proposing to deal with the question whether this is ‘progress’. There is, of course, the alternative, for instance, burying oneself in the mud of the sea bed and going on reproducing there without bothering about anything further, a policy which has been satisfactorily followed by certain sea-shells which have remained unchanged since the Cambrian period at the very beginning of the history of cellular life on earth).
The first step in the biological approach to an understanding of mind is, therefore, to enquire what we know in general about adaptive mechanisms and their evolution. I should warn you that what I have to say about this is rather unorthodox. I believe that the processes of evolution, particularly when we talk about more highly evolved organisms, are considerably more complicated, subtle and interesting than they are often supposed to be. During the last 40 or 50 years most biologists seem to have been persuaded to accept almost as unquestionable dogma that evolution can be fully accounted for in terms of two very simple factors: the occurrence of new hereditary variations by a process of gene mutation which takes place entirely at random without rhyme or reason; and the process of natural selection, which amounts to no more than the different efficiencies of organisms in leaving offspring which grow up to contribute to the reproductive population of the next generation.
I believe both statements are true enough as far as they go, but I think neither of them goes far enough to be at all illuminating. They were first propounded and enshrined as dogma at a time when students of heredity had just realised the importance of studying the offspring from crosses between known and carefully selected individual parents, and had, also quite recently, discovered that hereditary characteristics are transmitted by separate, discrete, identifiable factors, or genes. It is natural that their first step was to try to account for evolutionary processes in those terms; but since then we have come to realise that in many ways these terms are too simple.
The first over-simplification is to try to talk of evolution in terms of individuals. An individual does not evolve; he dies. Evolution is a phenomenon that takes place at the level of populations, and any theory of evolution must be a theory of populations, not of individuals. This has the consequence of shifting the emphasis away from the particular genes contained in the single individual to questions concerning the whole collection of genes carried by the set of individuals making up the population, i.e. to what is called the ‘gene pool of the population’. Now it is certainly true that new genes can only be added to the gene pool by the rare process of random mutation, but gene pools, when studied, turn out to have a more sophisticated character than being just a collection of individual genes. The genes which persist in such a pool, through many generations, are found to be in some way fitted to one another, so that any bunch of them which comes together to form a new individual is likely to give a harmonious individual—the genes are said to be co-adapted. Such co-adapted genes found in a natural population tend to be fitted together to control development in a chreodic way, so that our hands nearly always develop five fingers, even though every one of us contains many different genes and every one of us has developed in rather differing circumstances.
I think that this means that in considering the later stages of evolution, which produce things like complex forms of behaviour and neural activities which we might be tempted to associate with mind, we can almost afford to forget the way in which new individual genes originate. The random nature of gene mutation remains important in very simple organisms like bacteria, and perhaps in the evolution of chemical constituents of the body which are closely related to particular genes, such as identifiable proteins. But when we are considering complex organs like hands or eyes or, still more, brains, the random origin of individual gene mutations seems to me no more important than the random processes which have given rise to the individual stones which make up the aggregate used in concrete blocks. In discussing the architectural differences between a building by, say, Le Corbusier and one by Denis Lasdun, one is only peripherally interested in the different aggregates they have specified for their concrete. I do not think it adds much more light to a discussion of the evolution of the human or chimpanzee mind to keep harking back to the random nature of gene mutation.
A second important point in which the neo-Darwinist formulation of evolutionary processes was drastically oversimplified concerns hereditary characters. I have just been talking (as they do) in terms of genes and natural selection, but of course natural selection has nothing directly to do with genes. It is not a set of genes which leaves offspring in the next generation, but an organism developed under the control of those genes. This distinction is known in genetics as that between the genotype, which is the collection of hereditary potentialities, and the phenotype, which is the entity in which those potentialities come to realisation. Natural selection acts on phenotypes, and phenotypes, of course, have characters which are influenced not only by the hereditary potentialities with which they start but by the environmental circumstances in which those potentialities develop.
A few decades ago, people used to try to make a distinction between ‘hereditary characters’ which were completely determined by the genetic constitution of the organism, and ‘acquired characters’ which were completely dependent on the external and environmental circumstances during the development of the organism. In the first flush of excitement at discovering the existence of individual genes, writers about evolution—the so-called neo-Darwinists—argued that only hereditary characters could contribute to the next generation and thus to evolution, and that acquired characters were completely without influence on evolution. It has taken biologists a long time to realise that this must be nonsense, simply because it is impossible to make a distinction between hereditary and acquired characters. Every character of an organism must be ‘hereditary’ in that it could not have been produced had the organism not had a hereditary potentiality for developing it; similarly every character must be to some extent acquired, in that all development must go on in some environmental circumstances and the final outcome will be influenced in smaller or greater degree by these circumstances. Of course the development of some characters is more easily and obviously influenced by external circumstances than that of others, but that is only a quantitative difference, not a difference in principle.
If the development of an organism is affected by its environment in a manner which improves the chance that the organism will succeed in leaving offspring, this will obviously increase its contribution to later evolution, that is to say its natural selective value. In fact we can say that natural selection will favour organisms which acquire useful characteristics. Now, if one combines this simple fact with the rather more unexpected one I mentioned in my first lecture, namely that developmental processes are difficult to alter, one finishes up with a very powerful evolutionary mechanism. Say we have a population of animals which has to meet some new challenge offered to it by an altered environment: there may be some individuals in the population whose development is changed by the environment in a way which makes them better able to deal with the challenge—they show a capacity for adaptive modification. They will, therefore, be favoured by natural selection. After this selection has gone on for some considerable number of generations the new pathways of development will have gradually acquired a more pronounced chreodic character which can itself be difficult to modify. In fact should the environment now revert to what it was before the whole process started, the organism may well go on developing in the way in which it adapted to the changed environment. This process, which I have called genetic assimilation, gives exactly the same end result as the theory proposed by Lamarck, at one time espoused by Darwin but rejected by modern biology, of the direct inheritance of acquired characters.
The fact that by a slightly more sophisticated development of modern biology we have come across an evolutionary mechanism which can produce the same effect is, I think, not without importance in relation to mind. It shows how a behavioural pattern, which in an earlier evolutionary stage emerged only in the actual presence of a certain environmental situation, might, if it was selected as useful over many generations, become habituated to a chreodic developmental pathway which would operate even in the absence of that environmental situation. It is because of the existence of this mechanism that I am not alarmed by the suggestions of people like Chomsky, that man has an ‘innate’ capacity for the use of language. If at an early stage in his evolution it was useful for an individual to be able to adapt to a language-using community, i.e. to learn language as fast as possible, selection for this capacity might well have brought about a genetic assimilation of at least the bases for what had originally been only a learned adaptive response.
The third great oversimplification of the neo-Darwinist statement of evolution implies that the thing that is being selected (genotype according to the conventional statement, phenotype as I suggest) finds itself inevitably subjected to certain selective pressures, arising from ‘its environment’. In fact most higher organisms select their environment before they allow the environment to select them. Release a hare and a rabbit in the middle of a field: the rabbit will run off to the hedge and live its life there, while the hare will be content to live its life in the open field. Even plants, in a more rudimentary way, make some sort of selection of the circumstances in which they will develop. If a seed falls on stony ground in the desert, it simply refuses to germinate until the next shower of rain comes along and gives it an environment at least somewhat appropriate to its needs. However, there can be no doubt that this reciprocal relationship of mutual feedback, between an organism which selects an environment and an environment which then selects the most efficient organisms, assumes greater importance as we go higher up the evolutionary scale.
In particular, I think it must be of crucial importance in the evolution of behavioural patterns and of anything which we might call a mind. For instance, at some point in their evolutionary history, the ancestors of horses began to eat primarily the grasses of open plains, and not for instance the leaves of shrubs. They then had to deal with the possibility of being attacked by carnivores, and they came to deal with this threat by running away rather than by standing on their hind legs and trying to fight off the attack with their front feet, as giraffes do, for instance. One might somewhat figuratively say that they had ‘chosen’ to inhabit one type of environment rather than the other, and to adopt one type of strategy against predators rather than another. But, of course, that mode of expression should not be taken to imply a conscious process of choice. However, once evolution had started to go in those directions, this defined the character of the natural selection that would be exerted, and evolutionary changes went on in the same direction for a very long period. The mind of the horse has evolved into that of a plains-dwelling fleet-footed animal, which runs away from its enemies. The mind of the buffalo, on the other hand, is that of a plains-dweller which faces its enemies and charges them.
Such types of animal minds, evolved in relation to a reciprocal interaction between the selection of environment by animal and of animal by environment, are what we refer to rather crudely as instincts. An instinct is a pattern of behaviour which is to a major extent dependent on the hereditary constitution of the animal. It is a mistake, however, to think that it is in all cases wholly dependent on the genetic constitution, and that the environment plays no role in shaping the behaviour. I do not want to discuss the science of animal behaviour, so I will mention only one example which illustrates two ways in which the environment is important in the development of instinct. In my first lecture I mentioned weaver birds, who build elaborate nests consisting of a completely enclosed nest chamber, approached through a tubular entrance. Each species of weaver birds builds nests of a different shape. I do not know why different species should adopt differently shaped homes, but the fact that they do shows that there is a very strong hereditary element in their behaviour. However, birds build a better finished, and more competently constructed, nest in their second year than they do in their first. There is, therefore, an element of learning involved. Consider the problem of a bird approaching a half finished nest. It has got to decide just how to weave the piece of straw in its beak in amongst the other pieces of straw. It has been found there are certain kinds of weaving stitches which it can do, but it never, for instance, ties a proper knot. However, it has always to discover some way of adapting the particular types of weaving process at its command to the particular circumstances which confront it. This involves highly adaptive behaviour; much more adaptive to the environment than one might imagine if one simply wrote the instincts down as hereditary.
We may say that instinctive behaviour is behaviour related to a rather well-defined goal, but often demanding a more flexible adaptive type of behaviour, including the possibility of learning from experience, in deciding exactly how that goal shall be reached. I myself should not refuse to use the word mind in connection with organisms which showed this type of behaviour. The main point I should like to emphasise is that in such cases the goal towards which the instinct drives has certainly not been decided by any conscious choice of the organism, but by this subtle evolutionary process of natural selection within a framework which has been set by the previously existing instinctive behaviour.
I think it might be argued that certain computer programmes have already reached, or are about to reach, this level of instinctive mind-like behaviour, reaching a goal which has been set for them by some outside agency. For instance, Winograd's programme for understanding English sentences is comparable at best to an instinctive mind in that its goal was not set for it by itself, but by somebody else, namely its programmer. But in achieving this goal it shows some of the flexibility and capability of learning of the kind we have seen in the weaver bird.
In man, evolution has produced a mind at a different and higher level of complexity, namely a rational mind. Piaget, that great student of the development of the human mind during childhood, argues that the rational mind involves two faculties which have rather different relations to the external world, and that it is only by the combination of these two faculties that it reaches such a degree of efficiency that it makes up almost the whole of man's mind and reduces his instinctive activities to a very minor component. The two faculties Piaget distinguishes are, firstly, the ability to learn from experience, and secondly, an appreciation of logico-mathematical relationships. To learn from experience is surely to adopt new ways of behaving—to write new programmes, if you like—which are adapted to achieving some goal in the light of the surrounding circumstances. One can easily see that it could become, and in man it has become, an extremely flexible and powerful way of achieving goals. But do we need to invoke another separate faculty of appreciating logico-mathematical relations? Could we not regard them also as the results of learning from experience? Piaget argues that this is not good enough. Although in practice a child may have to learn by experience that you cannot have a triangle in which one side is longer than the sum of the other two sides, yet this logico-mathematical relationship would remain true whether the child learnt it or not. The learning, he says, is in this case merely becoming acquainted with a relationship which is not dependent on what happens to go on in the environment. When the child learns that larger things tend to be heavier than smaller ones, he is really reaping the fruits of experience; but when he learns that 2 + 2 = 4 he is merely becoming acquainted with something which is essentially independent of experience. I am not sure whether to accept this argument of Piaget's or not.
If one does, there is a great difficulty, which I do not think Piaget can quite resolve, in trying to decide how these logico-mathematical relationships enter into our mind.
These two faculties of Piaget's are of course supported by, indeed probably dependent on, the ability to use language. But I also would like to leave on one side, because I feel quite incompetent to deal with it, the very interesting problem of the relations between learning from experience and logico-mathematical relationships on the one hand and semantics and grammar in linguistics on the other. I suspect there are close parallels between these two pairs, and perhaps some of the linguists here will be able to enlighten us about them.
I should like to conclude by going back to the problem of goals. As we have seen, the goals of instinctive mentalities are determined by the evolutionary natural selective processes which I have sketched. The goals of rational minds appear to be set by those minds themselves. John Lucas seemed indeed to wish to make this the fundamental definition of a rational mind. There is, of course, another current of thought, the deistic, which would maintain that the long-term goals for man are set by the will of God, and that the goals or purposes which a rational mind may set before itself at any given time, are to be regarded as partial short term goals, which may well be mistaken. I think we can discuss the formation of goals in man's rational mind only if we recognise that there are goals of longer and shorter term, and of greater and lesser degree of compellingness. Introspection certainly seems to show us that we can perceive a certain situation and formulate a conscious goal with respect to it, and we shall then judge behaviour as rational or not, according to whether it tends to the achievement of that goal. I do not at this point want to raise the question whether the formulation of such conscious goals demands free will or not. I want instead to emphasise the fact that the goals are formulated in relation to certain sets of circumstances. I agree with John Lucas that the ability to formulate such goals—to make up one's mind as he says—is an essential, and probably the most essential, aspect of the functioning of minds at the rational level.
I want now to raise the question, could we imagine a machine, a computer, which achieves this degree of rationality? I do not think anyone has yet done so, and I am not certain whether anyone is even trying to do so. However, I see no reason in principle why it should not be done, at least within some specific context. This context would correspond to the adoption of a specific strategy of survival, such as the horse's strategy of running away. In such a context, it seems to me that the goal implied by the context might be deduced simply from examining the causal relations between the components. I will use an analogy which first occurred to me many years ago. If a Martian were dropped out of his flying saucer and landed in a deserted bassoon factory, and was able to look around at the various bits and pieces, and experiment to see how they fitted together, it seems to me that he would be able to come to the conclusion that the goal of the whole set-up was to manufacture bassoons, even if he had never heard or thought of such an instrument before. Or, consider the Project MAC set-up used by Winograd. There is a set of rectangular blocks of different colours, and apparatus for receiving information about an image of them reduced to two dimensions, and an arm for manipulating them, provided with rectangular movements in three dimensions of space. Surely a computer would be able to come to the conclusion that the goal of the whole set-up is to move the blocks about, along linear paths in space, possibly piling them one on top of the other, but not for instance rotating them or trying to turn them inside out.
So there is a possibility, I think, that even a computer could formulate a goal. Could it formulate a sensible goal? Well, let's have another example. Suppose you had a computer, and fed into it the whole of the international air time-tables, so that it had all the information about all the scheduled air flights from anywhere to anywhere. Then you put into it simply the input, let's say Edinburgh-Athens, and say ‘go ahead’. It might inspect what it has got in its memory, and what could be done, and it might possibly come up with the answer that the quickest way of going from Edinburgh to Athens is as follows… But it might also come up with the longest way to go from Edinburgh to Athens, without going to any one place more than once. You could cover the globe up and down, and have a wonderful time going from Edinburgh to Athens. And there would be nothing in particular, in the set-up so far, by which it could decide which of those two goals made sense. But if you then went up to another level of complexity, and also said that the output from that stage has got to be fed into another operation, in which only three months' time is allowed, there is another engagement somewhere else, only a certain amount of money, and so on, the computer might well—it would, I think—be able to come to the conclusion that the goal of the longest possible journey didn't make sense when it had to be referred to the next stage; and the goal that did make sense, referred to the next stage, was the shortest time.
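The two candidate goals in this thought experiment differ sharply in character: the ‘sensible’ quickest-route goal is exactly what a standard shortest-path search computes, whereas the longest journey visiting no place twice is the longest-simple-path problem, for which no comparably efficient method is known. A minimal sketch of the quickest-route computation, in modern terms, might look as follows; the timetable, the city names and all the flight durations here are invented purely for illustration.

```python
import heapq

# A toy flight timetable: (origin, destination, hours in the air).
# All routes and durations are hypothetical, not real schedules.
FLIGHTS = [
    ("Edinburgh", "London", 1.5),
    ("London", "Athens", 3.5),
    ("Edinburgh", "Paris", 2.0),
    ("Paris", "Athens", 3.2),
    ("London", "Paris", 1.0),
]

def quickest_route(start, goal, flights):
    """Dijkstra's algorithm: pursue the goal of minimising total hours."""
    graph = {}
    for origin, dest, hours in flights:
        graph.setdefault(origin, []).append((dest, hours))
    # Priority queue of (elapsed hours, current city, route so far).
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        hours, city, route = heapq.heappop(queue)
        if city == goal:
            return hours, route
        if city in visited:
            continue
        visited.add(city)
        for nxt, h in graph.get(city, []):
            if nxt not in visited:
                heapq.heappush(queue, (hours + h, nxt, route + [nxt]))
    return None  # no route exists in the timetable

hours, route = quickest_route("Edinburgh", "Athens", FLIGHTS)
print(hours, route)
```

With this toy timetable the search settles on the five-hour route through London. The contrasting goal, the longest route without repetition, cannot be reached by any such greedy expansion; that asymmetry is one concrete sense in which the quickest-route goal is the one that ‘makes sense’ to a mechanical reasoner under the further constraints of time and money.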
If this is accepted, then computers might be expected to achieve the modest amount of rationality involved in formulating goals related to defined but unexplained sets of circumstances. Should they ever succeed in doing so, this would certainly be a long step towards the achievement of mind in machines. However, two further steps would remain before machines achieved anything really comparable to the mind of man.
The first is one which perhaps man himself has not yet fully succeeded in taking; that is, to define a goal in relation, not to a small selection of the circumstances in the surrounding universe, but rather to the universe as a whole. If one believes that a goal is inherent in a set of circumstances—or, to put it theologically, that God's will is immanent in his creations—then the ultimate task for the rational mind is to discover what that goal is. This, the discovery of natural ethics, or natural religion, is, surely, the greatest endeavour in which mankind is still engaged—as indeed Adam Gifford must have thought when he endowed these lectures.
The other aspect of the human mind which this discussion has left out of account is its non-rational part: I mean the appreciation of beauty, the emotions of joy, sorrow and so on. It seems to me that all those aspects of mind which are concerned with controlling reactions to experienced circumstances might in principle be carried out by machines capable of computation and perception, in the sense of developments of modern computers; but the aesthetic and emotional aspects of the mind seem something quite different, and our discussions of rationality and language should not tempt us to forget them entirely.
Professor Waddington's lecture, which is the last of the full length lectures this year, has provided us with a very good lead in to next year's topic, which, as I think you know, is to be The Development of Mind. He has already shown us some of the problems that we are going to meet when we try to connect the notion of mind with that of evolution and development. But he is obviously right that we haven't yet achieved the task that we set ourselves for this year, even this short-term, self-selected goal, of talking about the nature of mind. We haven't been able to come up with a satisfactory, agreed, definition of what it is for something to have a mind.
Now I think that the disagreements between us, at least between Waddington and myself, are partially just terminological, but partially concern a matter of substance. I mean that the question whether animals other than human animals have minds is partially a terminological issue. In the terminology which Professor Waddington favours, men have rational minds and other animals—and he thinks perhaps insects—have non-rational minds, whereas in the terminology that I favoured men and perhaps intelligent Martians, if there are any, have minds, and animals have consciousness, but not minds—with the possible exception, of course, of Washoe, who is between the two. If she really has got language, and can really develop the language to something like a human level, then it wouldn't surprise me to learn that she was able to exhibit the defining characteristics of mind. Now, so far, as I say, this is just a matter of terminology, because we both of us attribute consciousness to animals. We both of us, with some slight hesitation, reserve language to man. We both think that language and consciousness are of importance with regard to the definition of mind. Where we differ, of course, is in where we place the emphasis. I think that language is much more essential to mentality than consciousness is, and Professor Waddington takes the opposite point of view.
The substantial point of disagreement, I think, is this: why do we call the things that we want to call minds—why do we want to call them ‘mental’? What is the criterion by which something is ‘mental’? And, in particular, what is the relation between having a mind and displaying the nervous structure or patterns of behaviour on which we base the judgement that something has a mind? Is the relation between mind and behaviour a necessary one, or a contingent one? Are these signs of mentality, like the use of language, part of what is meant by ‘having a mind’, or are they just bits of, perhaps very convincing, inductive evidence? Now, I think the former. I think that to have such properties as the ability to use language, to do mathematics, and so on, is just what is meant by ‘having a mind’, and that these aren't mere inductive evidence suggesting that things have minds. I think that Waddington is inclined to think that the exhibition of adaptive behaviour in animals is inductive evidence for their having mind. If he thinks that, the question arises: ‘what is this mind that it is inductive evidence for—what is the hypothesis to which these bits of evidence are relevant?’ And I think the hypothesis is, that these animals, perhaps even insects, have a certain essentially private inner something which is rather like the certain essentially private inner something which I have, and which only I really know that I have.
This is an important point of disagreement between us, because I think that this is the point at which Waddington takes the step out of biology, into mythology. This notion of the essentially private mind, the something within to which I alone have a privileged access, which is a set of experiences which I cannot communicate to others—this is a myth created principally by Descartes. Or perhaps I am exaggerating the genius of Descartes: it is a myth which is perhaps part of human nature, but a myth to which Descartes first gave detailed philosophical shape. He did so especially in the Meditations, to which John Lucas alluded yesterday: the meditations of a lonely spirit, doubting the existence of the world, doubting the existence of his own body, trying so far as possible to be a lonely, isolated point, independent of any other mind (except, as it in the end turned out, the mind of God—but independent of any other created mind) for its knowledge of truth, for the meaningfulness of the language that it used.
Descartes, besides creating this, as I think, mythical entity, showed that the only evidence there could be for the existence of a mind of this kind was the use of language. And because animals don't use language, he therefore concluded that animals don't have this kind of mind. Wittgenstein, who is often thought of as being somebody who refuted Descartes, in a way just continued this part of Descartes’ work. He showed that even language didn't provide sufficient evidence for the existence of this sort of private mind. He therefore concluded that human beings didn't have it either—indeed, that it was altogether mythical—because a mind which is to have the type of thoughts which Descartes has in his Meditations must be a mind which possesses a language, and, Wittgenstein argued, a language is something which presupposes at least a potential community of language-users.
I'd just like to go back to a point made last night. It was clear that when I said, following Wittgenstein, that thought presupposed language, I was misunderstood by John Lucas, who took me to mean that every time I think a thought the actual thought must be formulated in words in my consciousness. I didn't mean this at all: I meant that thoughts of any complexity are thoughts which presuppose the possession of a language, though of course they need not be formulated in words at the time when I think them. Let me give a terribly simple example of this. As I was walking along the Canongate this morning, I saw a notice above a shop, which said ‘Thistle Freezers’. And I was puzzled by this—I thought, why should anyone want to freeze thistles? And then I thought, oh, of course, I'm in Scotland, and the thistle is a heraldic symbol. It's a brand name; it means not freezers of thistles, but ‘Thistle’ freezers. Now the point of this trivial story is that this thought, which has taken me several seconds to relate, went through my mind in a flash. I didn't enunciate all these words, sotto voce, as I had the thought, yet it is also clearly a thought that nobody who didn't know a language—indeed, nobody who didn't know English—could ever have had. I think that not even John Lucas would think that cats and dogs, for all we know, are going around having thoughts of just that kind, only the poor things can't tell us about them.
Just a final point, about the evolutionary part of Waddington's paper. I have some difficulty about the notion of evolution of language, but I don't want to develop this point because it's something which we shall have to consider in detail next year. I was interested that in his paper, when he was formulating an evolutionary account of language, he said that natural selection might favour people who adapted well to a language-using community. I've no doubt that it might, but of course this presupposes the existence of a language-using community, and the origin of the language-using community is something that remains to be explained. Though I share in Waddington's belief in the importance of goals for mentality, I find it hard to follow him in the belief that we might be able to detect a goal of the universe as a whole. If there is a Maker of the universe, and he tells us what his goal is, then well and good. But I don't think that we, like Martians dropped into the bassoon factory of the world, can work out a universal goal from our surroundings. I don't think the Martians could, unless they knew what it was to play a bassoon. I think that one cannot work out what is the end-point of a cyclical operation unless one is in a position to make some sort of value-judgement about the different stages of the cycle. I think that if somebody were to just study the evolutionary cycle in the world, they might well come to the conclusion that the world was a factory for producing corpses. Some corpses take longer to produce than others, but the corpse, after all, is the climax, coming after the strivings of the evolutionary process and the strivings of nature during its preparation. Once the corpse is produced, the striving ends, and the product is allowed to disintegrate.
Let me reply very briefly, to give time for Christopher and John. Let me say, first of all, that I do not insist on the importance of consciousness for mind. I happen to think that there is a lot of consciousness about in the world—but I haven't any very good grounds for that except logical grounds, and normally I don't find it necessary to ask myself whether a mouse is conscious, or an insect is conscious or not. Again, let me say that I tend to agree with Wittgenstein. I don't think of the mind as an entity. You asked, if I say something has got a mind, what would this be evidence for? It is not evidence for the existence of any sort of entity. I should tend to say things have mind, or engage in mental activity, if their behaviour, their reacting to stimuli, is much more complex than simple reflexes. So long as I can see that they are reacting to stimuli, and the reaction to the stimulus is not at all obvious to me—if I can't see how they came to do such an extraordinary thing as building a nest—then I might say this is a mental activity. It is really a name for a type of neural behaviour more complex than one can easily fathom. I don't think you used mind in that very broad sense. You want to confine it to a particular type of complex nervous activity, namely language-use; well, perhaps you would not totally confine it to that, but aren't you really saying that things which don't have language don't have minds, in the way that you and I do?
I was very much interested indeed in what Waddington said. I agree very much that the idea of a goal is an integral part of the concept of mind; and so is the idea of ‘intention’. An organism which can have intentions, I think, is one which could be said to possess a mind. And intention demands, it seems to me, more than just having a goal, because you might say bacteria had, in some sense, a goal—I mean, to grow, and divide, and so forth. But I think the concept of intention goes beyond this, and involves the idea of the ability to form a plan, and make a decision—to adopt the plan. The idea of forming a plan, in turn, requires the idea of forming an internal model of the world. And in a more sophisticated sort of internal model, of course, there must be room for a model of yourself. And that is where self-consciousness, I think, begins, and gets important. But an internal model of the world, roughly speaking, has to be an abstraction from the world which tells you what the world is likely to do to you if you behave in such and such a way. And, of course, if we live in a community and have to deal with one another, then we want to know how other people are going to behave if we treat them in such and such a way. And language is one of the highest culminations of this progression, from goals, through plans and intentions, to the ability to communicate our plans and intentions and desires and goals to one another. I think that to draw a line somewhere up the evolutionary scale, and say that animals below the line haven't got minds, and that those above have, is rather like trying to draw a sharp line between being asleep and being awake. There are degrees of wakefulness and awareness, and I think it is the same with the mind, language being its highest manifestation, as far as we can tell.
I shall quarrel with Waddington for not quarrelling enough with Kenny, who treated him very unfairly, and put forward the fork of either turning Waddington into a behaviourist, or else into an inductivist, and at one moment was, as Kenny is very fond of doing, discovering behind Waddington's outward and visible appearance a Cartesian bogey. Once, when he was younger, Kenny wrote a thesis attacking the Cartesian errors of one Lucasius, whom he believed to be a tall old gentleman with a white beard. But I think that Waddington gave away too much in saying that of course he is only interested in behaviour. He isn't only interested in behaviour. He's interested in behaviour for a reason: that it seems to be mind-like, and there is a certain logical gap here, which I think is rather important for our understanding of mind. First of all, it seems to me quite clear that when we come across purposive behaviour we think that it does suggest that there is a reason—that the weaver birds are doing something because we can assimilate it to our own case; we feel we can ask the weaver bird ‘Why are you doing this?’, and that it will answer our question with a sentence beginning ‘In order to do this, that, and the other’. That is to say, purposes seem to be mind-like because in our own experience a purpose is an acceptable answer to the question ‘why?’—‘why are you doing it?’—but purpose is not a sufficient condition of being mind-like, because it is not the only answer. And the reason why there is a certain amount of difficulty, which Kenny was exploiting, is that although seeing a purpose is suggestive of there being a mind, it isn't conclusive. This is why the homeostatic mechanisms, like thermostats, and so on, which mathematicians have been studying recently, seem to be rather worrying. This is why a hundred years ago the theory of natural selection seemed to be rather worrying. It seemed to be giving an alternative explanation of how a purpose was achieved.
And therefore in our understanding of the nature of mind, I think we shall be right to say that to be able to have purposes is a characteristic feature, but it is not a defining feature. We also expect these purposes to be explained, to be given reasons. I slightly disagree with Longuet-Higgins in his definition of intention: I think intentions are different both from reasons and from purposes. Essentially, purposive behaviour is suggestive of there being a mind, but not conclusive, if we want to know more fully what sort of reasons there are for having these purposes.
I rather like Christopher's introduction of something like intention as well as plan. ‘Goals’ is certainly too broad a word to use—as he says, you can have goals for bacteria and so on, and nobody would think of giving them minds. Something like intention is needed—as John has pointed out, intention may not be quite the right word, but something of that kind.
Now, as regards the point that John was making. I am of course professionally an embryologist, and nothing is more obviously purposeful than an egg turning itself into a chicken—and yet, though this looks purposeful, it isn't really purpose. And even if you took it as purpose, that would certainly not be enough to allow you to attribute a mind to it. I think that mind has got to have something akin to purpose, of the same general nature as purpose, and I think it has got to be tied up with the nervous system. I'm afraid I'm materialist enough for that. I'm not going to give anything a mind that hasn't got a nervous system. Now I don't say those two points are quite enough to define mind, but at least those two seem to me to be necessary.