Tenth Lecture. Questions and Answers

KENNY. This lecture will be a question-and-answer session among ourselves, and I'd like to begin by explaining the batting order. You will remember that we've been, some of us, rather shy about offering a definition of mind, but I think there have emerged four different criteria for distinguishing minds from non-minds which could be put in order like this:

For Wad, goals are perhaps the most important thing separating minds from non-minds; for Christopher, it is plans and procedures; for myself, it is symbols; and for John, it is autonomy. You can arrange these in a pyramid, because the lower down ones apply to more entities than the upper ones do: there are very many things that have goals, but very few have John's particular sublime autonomy. In our discussion we shall start at the bottom and work upwards, taking turns to answer questions in the following order: Waddington, Longuet-Higgins, Kenny, Lucas. So it is Wad to whom I put my first question.

You referred in your last talk, as a possible explanation of the emergence of goals in nature, to recent work of Kauffman. If I understood it rightly, this work shows that a series of random switching instructions given to an array of lights may give rise in quite a surprising way to an orderly pattern of flashing. This work, it seems to me, may show how from an anarchic beginning we can get the development of a cycle, but the existence of a cycle isn't the same thing as the existence of a goal. The things that occur in a cycle don't necessarily occur for the sake of the cycle, or for the sake of one stage of the cycle, as you can see by considering say the oxygen cycle. It's only where there is a cycle in which things occur for the sake of a stage in the cycle that there are goals, I think. Now it isn't at all easy to identify a stage of a cycle as a goal. Bertrand Russell once tried to do so by saying that the goal of a behaviour cycle of an organism was that stage of the cycle which brought the activity to an end, the stage of a cycle which was followed by quiescence. And of course, that won't do as you can see by considering the well-known phenomenon of the pyjama cycle. It would follow from Russell's account that all human activity is directed to the goal of getting into pyjamas, because the one human activity that is most often followed by periods of quiescence is the donning of pyjamas. So I'd like to ask you, Wad, how you identify goals, and how you distinguish between cycles and goals?

LONGUET-HIGGINS. Wad, I gather that you regard the possession of goals as central to having a mind, and that you offer the chreod as a concept useful for thinking about goals. Do you regard the existence of chreods as an explanation of goal-directed phenomena, or as a fact requiring explanation?

LUCAS. Are goals, as you understand them, necessarily accompanied by intentions, or not?


WADDINGTON. In reply to Tony Kenny I did not mean to imply that Kauffman's work was an example of a goal appearing from an anarchic background. Kauffman observed that the system tended to go into a limit cycle; it got into a state in which it was going round and round one repeating sequence of changes, and if it was disturbed it tended to get back on to this limit cycle or on to one of a small number of alternative cycles. This situation is rather similar to the phenomenon known as homeostasis, in which a system gets into a stationary condition, to which it tends to return if disturbed. The concept of a goal is, I think, only applicable to systems which are continuing to change, and which tend to change along a defined course even after disturbance. These are systems which show homeorhesis rather than homeostasis. The point I wanted to make was only that the appearance of a limit cycle, like homeostasis, represents a rather elementary type of order, which can come into being in a system which appears at first sight to be quite chaotic. The appearance of a chreodic system, which one could speak of as having a goal, would, of course, be a more complex form of order.
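Kauffman's observation can be reproduced in miniature. The sketch below (Python, purely illustrative; the network size, wiring and random seed are arbitrary assumptions, not Kauffman's actual experiment) builds a random Boolean network and iterates it until a state recurs. Because the state space is finite and the update rule deterministic, the trajectory must eventually fall into a repeating cycle, however chaotic the wiring looks at first sight.

```python
import random

def random_boolean_network(n_nodes=8, k=2, seed=42):
    """Build a Kauffman-style random Boolean network: each node reads
    k randomly chosen nodes and applies a random Boolean function."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
    # A Boolean function of k inputs is a lookup table of 2**k entries.
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)]
              for _ in range(n_nodes)]

    def step(state):
        return tuple(
            tables[i][sum(state[src] << b for b, src in enumerate(inputs[i]))]
            for i in range(n_nodes)
        )
    return step

def limit_cycle_length(step, start):
    """Iterate until a state recurs; the states between the two
    occurrences are the limit cycle the system has fallen into."""
    seen, state, t = {}, start, 0
    while state not in seen:
        seen[state] = t
        state, t = step(state), t + 1
    return t - seen[state]

step = random_boolean_network()
cycle_len = limit_cycle_length(step, (1, 0, 1, 1, 0, 0, 1, 0))
print(cycle_len)
```

With 8 nodes there are only 256 possible states, so a cycle is reached quickly; what the toy model does not show is the homeorhetic return to the cycle after disturbance, which is the further property Waddington is after.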

Now to turn to Christopher's point. I certainly regard chreods as things to be explained and not as themselves explanations. It is a concept which I think is useful to keep in mind when we are dealing with the higher levels of Tony's pyramid. The point of it is that it reminds us that in discussing mental events we are probably dealing with systems very unlike the ordinary causal systems we usually come across. In these the nature of the output is usually closely connected with the nature of the input, and a change in input leads to a change in output. In a chreodic system, however, the output may be almost independent of the input provided only that the input falls within a certain range; but if it falls outside that range, the output may switch over into some quite different type. I have indicated the nature of such systems by the model of valleys with watersheds between them, but this is, of course, merely a descriptive device. It does not in any way explain why the system should behave like this.
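The valleys-and-watersheds picture can be given a toy mathematical form: gradient descent on a double-well potential (a standard textbook stand-in for a chreodic landscape, not Waddington's own model). Any input within one valley settles to the same end-state, almost regardless of where in the valley it started; an input on the far side of the watershed switches to a quite different end-state.

```python
def settle(x, steps=2000, lr=0.01):
    """Follow the downhill gradient of the double-well potential
    V(x) = (x**2 - 1)**2, whose valleys lie at x = -1 and x = +1
    and whose watershed is at x = 0."""
    for _ in range(steps):
        x -= lr * 4 * x * (x * x - 1)  # dV/dx = 4x(x^2 - 1)
    return x

# Within one valley the outcome is almost independent of the input...
print(round(settle(0.2), 3), round(settle(1.7), 3))  # both end near +1
# ...but an input across the watershed switches to a different outcome.
print(round(settle(-0.2), 3))                        # ends near -1
```

As in the text, the model is merely descriptive: it exhibits the input-insensitivity within a range and the switch outside it, but explains nothing about why a biological system should behave this way.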

Now there is John's question. Is a chreod or goal the same thing as an intention? No, of course it is not. The idea of an intention implies consciously formulating a wish to attain some particular goal. I am discussing the basement level of Tony's pyramid, the biological processes occurring in systems which are prior in evolution to the appearance of man as a language-using animal. I am looking in this pre-human world to try to see the foundations out of which such things as intentions may have evolved.

Questions to Longuet-Higgins

KENNY. I wanted to ask a question arising partly from last year, when you said, Christopher, that the difficulties which John had presented against artificial intelligence from the findings of Gödel and Tarski could be avoided if one switched from the indicative use of language to the imperative use of language so that one could avoid the difficult concept of truth. Now, it seems to me that the important thing is not what makes a difference between the indicative and the imperative (which is a difference between two sorts of speech act, assertion versus command), but rather what is in common to the two, namely, the notion of matching between the world and a bit of language. In the case of a statement, the onus of match is on the statement to match the world, i.e. to be true. In the case of a command, the onus of match is on the thing commanded, to match the command, i.e. to be obedient. Now it is really the notion of match that gives rise to the problems which troubled Tarski. And I wanted to ask you whether you think that your concept of meaning, as essentially the procedures which the words or sentences used give rise to, can avoid the difficulties of the concept of matching.

LUCAS. I, too, want to sit on Longuet-Higgins on his being too much wedded to programs. I think there are in fact two people here: there is Christopher, a very reasonable person who is extremely intuitive, and often thinks quickly without being programmed, and also there is the Professor who is very clearly wired up in the imperative mood and lays down sharp instructions about what is to happen. And my point isn't so much Tony's that instructions often raise the question whether they are going to be obeyed or not, but rather the point that I tried to make last year that even within the formal confines of mathematics, not every problem can be algorithmically solved. And I still want to press Christopher whether he does or does not believe that every program, that every rational procedure must be algorithmic?

WADDINGTON. I want to ask Christopher a question about consciousness. We all know something about our own consciousness because we experience it. We know also that it can occur in a number of different forms, for instance under the influence of alcohol or other drugs, and people assure us that it can be altered by various techniques of yoga, controlled breathing and so on. Moreover, if animals such as dogs and cats are conscious—personally I don't see how we can ever tell whether they are or not—if they are, surely their consciousness must be of quite a different character to our own, since for instance they rely on different modalities of sense, such as smell. We have, therefore, I think to accept the possibility that there may be different forms of consciousness possible in the universe, so the question I want to ask is: can we suppose there is anything corresponding to consciousness in a computer? Can Christopher think of any way of deciding whether one of his intelligent computers is conscious or not?


LONGUET-HIGGINS. First of all, Tony's question about the lecture last year in which I tried to develop an imperative theory of meaning. I did so by contrasting indicative languages, such as the first-order predicate calculus, with imperative languages, such as programming languages, in which you write your stuff and feed it into the computer and something actually happens. And I was adopting the imperative language as a way into the theory of meaning because it seemed to me that what we were really trying to understand was the way language works, and the way it's used. I hadn't very much to say about the way language is produced, but I thought there was something worth saying about what happens to utterances when they enter the hearer's ears. I was trying to suggest that the meaning of an utterance in a natural language is possibly to be compared with the meaning of an utterance in a programming language, because an utterance must mean something to somebody and one must never forget the person to whom it does or does not make sense.

The idea was that the meaning of an utterance is to be identified with the process of thought which it represents. The point of a programming language for a computing system is that you can specify in that language a very large number of processes of interest and usefulness—to us, of course, rather than to the computer, but that's by the way. Perhaps a better way of making the point would be not to talk about the indicative or the imperative but the infinitive. Let me try to illustrate this idea with an example. I will write on the blackboard a simple mathematical expression, namely (2+3)×5. I suggest that what that means is: ‘to add 2 and 3 and then multiply the result by 5’. The unadorned infinitive clearly specifies a procedure. So far there is no ‘speech act’, but I can turn this infinitive into a speech act simply by putting the words ‘I order you’ in front, and then it becomes an imperative. So one might suggest that sentences in natural language are ultimately built from infinitives, to which one has to append a proforma in order to turn the thing into a statement, command or question. One must of course distinguish the procedure of adding 2 and 3 and multiplying the result by 5, from the answer which results from this procedure—every logician knows the problems that arise if you don't.
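The distinction between the procedure and its answer has a direct counterpart in any modern programming language. In the Python sketch below (an illustration added here, not part of the original discussion), the function object plays the role of the unadorned infinitive, and only carrying it out yields the number:

```python
def procedure():
    """To add 2 and 3 and then multiply the result by 5."""
    return (2 + 3) * 5

answer = procedure()  # carrying the procedure out yields its answer
print(procedure)      # the procedure itself: something to be done
print(answer)         # the number 25: the result of doing it
```

The procedure and the number 25 are different kinds of thing entirely, which is just the logician's distinction the text insists on.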

I hope this may help to answer Tony's direct question whether procedural semantics avoids any of the difficulties associated with the matching of statements to states of the world. In one way, I think, it does. To learn a language is to discover how to modify one's views and habits on the basis of what people say, and conversely, how to influence other people's world views and dispositions by choosing one's words appropriately. If utterances are regarded as functions from states of the hearer's mind to other states of his mind, then once the hearer has learned the language, the only matching problem that remains is that of judging whether the world as he knows it agrees with the view of it which is suggested to him by any given declarative sentence. This is my own view of the problem of establishing the truth of a statement in natural language, and it fits in with the view that the truth of a statement cannot be ascertained in principle till we know what it ‘means’.

To John—how do I cope with the non-algorithmic aspects of reason, and am I wedded to the view that reason must be algorithmic? Well, I can't really form a coherent concept of reason or rationality except in terms which will make it clear to me how one's thoughts follow one another, or how one's decisions are related to the situations in which one finds oneself, or how one's beliefs are determined by what one is told, or how moral considerations enter into the decision to do one thing rather than another. I think this is simply the way we think and the way we discuss matters. Now, of course, it might very well be the case—and it almost certainly is the case—that real human beings are not deterministic organisms for which we could give a total specification. But this issue of physical determinism versus indeterminacy is really utterly unreal as an issue of practical or philosophical importance. I mean, we have the Indeterminacy Principle anyway, we have the utter impossibility in practice of giving a description of the state of motion of all the atoms and molecules in one's brain and one's environment, so even if our brains were causally determined in the strict sense, it wouldn't really make any difference to the way we could usefully talk about the relation between physics and what we do. So I'm not deeply concerned about that issue. I'm not wedded to the idea that we have to be deterministic organisms in order to be algorithmically describable. On the contrary, I regard the chreodic character of the human mind as quite astonishing. With all the perturbations which enter one's consciousness, the fact that one can stick to a train of thought, even roughly, is a very remarkable fact, so it seems as if we are to a very considerable extent helped by nature to behave like automata for the purpose of thinking.
But supposing we were not in fact describable as automata, and that there was an element of randomness—dare I use the word?—which meant that no account whatever could be given of why I did a certain thing; I don't feel that that fact ought to cause any more trouble to my view of the mind than to John's view. I can't allow that John would be able to say: ‘He did it because he wanted a piece of chocolate more than he wanted a piece of toffee’, while I couldn't make some corresponding algorithmic statement of what went through my mind; if I couldn't, then John would have to withdraw his description too.

Finally, I have to answer Wad's question—what would make me regard a computer program as conscious. This is very hard; I find it much the hardest of the three questions, I must admit. One can make a few obvious remarks about consciousness, such as that a sufficient condition of consciousness at a given time is a person's capacity to remember what happened at that time. If I say to you: ‘What were you doing at 3 a.m.?’ you probably won't be able to give me the answer because you were asleep, you were unconscious. On the other hand, if you tell me you met somebody for lunch and he said this and that to you, that seems to be compelling evidence that you were conscious at lunch time. So one might be tempted, and I once was, to define consciousness as the rate at which your memory changes—when you are asleep your memory isn't being modified by experience—but that doesn't seem to be quite good enough. Because one can certainly be conscious—one can be sure that one was conscious at a certain time—and be unable to remember or report anything about one's state of mind at that particular time.

I think that another basic necessity in defining consciousness is the concept of a world model. A conscious being has to have at least some internal representation of the world (that's a necessary, not a sufficient condition for being conscious). Now, it strikes me that a possible way of defining consciousness—at least in animals and in human beings—is to say that one's degree of awareness or consciousness is essentially measured by the rate at which one is currently reorganising one's world model. This definition is just sufficiently blurred at the edges to leave open the interesting question of whether we are conscious when we are dreaming. Because when one is dreaming one is in a sense reorganising one's world model, and indeed we can remember our dreams. It also allows that we can be conscious without receiving sensory input from the world; and when one is deep in thought but blind to the world, surely one is still conscious. So I would say that the rate at which one is reorganising one's world model is really what we have in mind when we talk about one's degree of consciousness.

Now, the final question of course, is what about computing systems? How can we tell whether a computing system is reorganising its internal representation of the world, and how fast? We can only know, of course, by studying the program which has been written for it. Now, I'm not one of those who think that the fact that a human being wrote the program is philosophically frightfully important in our discussion of the nature of the activity which goes on inside the computing system when the program is actually running, any more than Tony Kenny is worried by the thought that the man from IBM might appear and say to him: ‘Very sorry, sir, you have to stop talking now—I've come to service you’. I regard my consciousness as my own property, whether or not I have been produced by the intentions of somebody else. So I think we'd simply have to look and see whether the program was reorganising in some interesting way its world model, and if it were, then according to my definition, it would be conscious at the time.


WADDINGTON. But Christopher, surely you could write a program which would cause your computer to be continually revising its world model, either in a random fashion, or in response to various external inputs, so that according to your definition you could easily arrange for your computer to be conscious. I don't think I am convinced.


LONGUET-HIGGINS. Well, I'm not entirely happy with my definition; but I've tried, I've done my best!

Questions to Kenny

WADDINGTON. I think Tony has been the one amongst us who has laid most stress on logical arguments. For instance, he argued that language could not have evolved, because evolution implies inheritance from ancestors, and clearly the first users of language could not have learnt this from their ancestors, who by definition would not have language. This is an argument based on one type of logic. Now there are several types of logic available, and I want to ask Tony what he thinks is the status of logical argument. Are the different types of logic simply different tools which we can pick up and use when appropriate, as we use a plough when ploughing a field, but use a different machine, a reaper, when harvesting the crop? Is any particular kind of logic just a tool suited for some tasks and not for others? And if we use a logic and get what seems to be the wrong answer, which does not agree with what we had previously thought, under what circumstances do we accept the logic and say that we must have been wrong previously; or do we stick to our previous opinion and reject the logic as being unsuitable for use in this connection?

LONGUET-HIGGINS. A few lectures ago, you defined mind as a capacity for intelligent activity, and intelligent activity you defined as activity requiring the use of symbols. You also implied that language is essential to mind. Would you agree that it is possible to think symbolically without necessarily being able to communicate in language?

LUCAS. I want also to try and tie Tony down a bit about language and how far this presupposes mind. I still feel, Tony, that you are too wedded to language and are always explaining mind in terms of language, and while I don't want to deny the importance of language to mind, it seems to me that language only can be understood and meant in the context of one mind communicating to another; I can never make out what you really think about this.


KENNY. Firstly, I'd like to say to Wad that I didn't argue that an evolutionary explanation of the origin of language was impossible; I tried to show that there was an important difficulty which I thought had been neglected. But Wad's point about logic is extremely important and for that reason I'll leave it to the end.

I'll turn for the moment to Christopher and John who both think, for different reasons, that I overplay the role of human language in the understanding of the human mind. Christopher thinks I err on one side, because I'm not willing to allow that what goes on in the innards of the computer can count as language; and John thinks that I err on the other side because I leave out of my account of language the personal private mental realm which John thinks is the thing that language exists to communicate. Now, I think that my definition of mind in terms of symbols, in fact, contains all the good points in the other definitions. No doubt, all the other symposiasts feel the same about their definition; but let me explain why I think this about mine.

Wad is right in thinking that there is a very close connection between mentality and having goals, but I think that not any old goal will do as proof that you have a mind. I think only long-term goals of a rather special kind are proof that you have a mind, and goals of that kind can only be formulated in language. Conversely, an activity is only a linguistic activity if it is the activity of a being which is capable of having long-term goals of that kind.

Similarly with regard to plans, a very important part of the notion of mind is being able to plan; but I think only certain kinds of plan are an indication of having a mind, namely symbolic plans and not the simple kinds of plans that dogs and apes can make. Conversely, again, only somebody who is capable of plans of the appropriate kind can be genuinely said to be using symbols.

Now for autonomy. I think that to possess a language or something like it is necessary if one is to be able to have and formulate the types of reason for action which are characteristic of autonomous agents, and I think again that only agents with a certain degree of autonomy can be said to have a language at all. Like Christopher, and unlike John, I don't think that an agent has to be so autonomous as to be indeterministic in order to have a mind or to have a language, but I do think that it has to have a degree of autonomy.

What about consciousness? None of us has chosen it as the defining feature of mind, but obviously it plays a great part in human mentality. I think that the consciousness which is essential if one is to have a mind, is self-consciousness. I don't think that consciousness in the sense of perception gives one a mind because animals perceive, and I don't think animals have minds in the sense that we have them. But human beings are self-conscious and this is part of what is involved in having a mind. Now, I think that self-consciousness is connected with language; indeed, as I have argued earlier, one cannot be self-conscious unless one has a language. This is because there is no way of distinguishing between knowing that something is the case, and knowing that one knows that something is the case, between being in a certain state and knowing that one is in that state, unless one has a symbolic as well as a non-symbolic way of expressing that knowledge and expressing that state. Again, conversely, it seems to me that any being who has a language must be capable of observing rules, and that only beings capable of self-consciousness can do the appropriate sort of self-correction that is involved in really following rules and not just being governed by rules.

I've spoken a lot about language, but I did in fact define mind, at least when I was on my best behaviour, with reference to the ability to operate with symbols rather than with language, and I left it open whether there might be other symbolic outputs which would indicate minds. So I should, I suppose, answer the question: ‘What else other than language would I take as being an output which indicated the presence of a mind?’ It isn't easy to think of something, but I don't think it's impossible. If we came across a group of organisms which had no language, but which used money as we do, then we should say those organisms had minds. I'm not sure whether or not it is conceivable that there should be a social organisation which had money but did not have language, but if it is conceivable, then I would be prepared to accept it as an indication that the organisms in question had minds. This is because of the abstract nature of money and its use in embodying long-term goals and its usefulness in the preservation and expansion of autonomy, but essentially because it is symbolic and in very much the same way as language is.

I think it's worth developing the deep analogy between money and language—among other things, it brings out some of the limitations in the early work in artificial intelligence in simulation of language. In various airports, you can find machines which change money for you. You put a £5 note in at one end and you get five £1 notes, or 500p out of the other end. Now, the input and output to these machines is genuine money. In the same way, the input and output of Christopher's computer is genuine English, when he plays with it his game ‘Waiting for Cuthbert’. But, of course, the money doesn't derive its value from what goes on inside the money-changing machine; it doesn't have any value apart from the resources, the labour and the promises of human beings who use the money. Again, the processes in the machine which mediate between the £5 going in and the £1 notes coming out don't themselves have any value except independently as bits of hardware. Similarly, the English which is typed into Christopher's computer and the English which comes out of it are genuine bits of language, but they don't get their meaning from what goes on inside the computer between the input and the output, they get their meaning from the activities, the knowledge, the experience and the conventions of the human beings who use the language. Again the processes in the computer which mediate between the question: ‘Will Cuthbert come?’ and the answer: ‘No, he never will’, these processes don't themselves have meaning.

I think that the analogy is to that extent exact, but of course, I wouldn't want to push it too far. That would be unfair to Christopher, because I think it's absolutely clear that if one wanted to understand the monetary system, one wouldn't learn anything at all by putting together or taking to bits the changing machine, whereas I do believe, as Christopher does, that we can learn a great deal about the nature of language by writing programs to simulate natural languages.

The analogy also has its uses in answer to John's question. Value isn't conferred on money by any private and spiritual act of the mind, but by a set of public and communal conventions. Similarly with meaning, the attempt to confer meaning by a private introspective act is as futile as the attempt to give value to a currency by a naked act of the will.

I turn to Wad, from language to logic. Logic is in one way a more sublime expression of mind than language; in another way it has nothing special to do with language at all. Let's take a simple logical truth, namely: ‘If either p or q, and not p, then q’. That is a logical truth, or a pattern for an indefinite number of logical truths. Now, you can consider that at four levels.

First of all, there is the way in which this law of logic applies to the world: there has never been an event which was a violation of it, and there never could be such an event. To that extent, logic governs the world as well as our minds, though that is perhaps a misleading way of putting it.

Secondly, animals like dogs can apply logic. I'm told that if a police dog is tracking a criminal and comes to a fork in the road, it sniffs at one fork and if it doesn't find the track there, then it immediately goes on to the other road without any further sniffing. If that is true then the dog is, as it were, making an application of the logical law. But I wouldn't want to say that the dog knew any logic, because it doesn't have the ability to express this truth with the generality with which I've expressed it—it can only apply it in particular cases.

Thirdly, normal human beings know simple truths of logic and express this knowledge in their use of words like ‘either’, ‘or’, ‘not’, ‘therefore’, ‘but’ over a wide variety of topics.

Finally, there are people like Aristotle and Frege who formalised this knowledge that we all have intuitively and turned it into a branch of mathematics. Now, it's only in sense four that there are these alternative logics that Wad is talking about. One can get various different systematisations of the logical truths of the propositional calculus, but the possibility of giving these various formulations is in itself not much more interesting than the fact that you can write nine as IX or as 9, and the existence of these alternative formulations doesn't mean that there is any possibility of dispensing with logic as it applies to the world any more than the possibility of Roman numerals means that we might count in a different way.
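The schema 'if either p or q, and not p, then q' can itself be checked mechanically, in the spirit of the first level above: an exhaustive run over every assignment of truth values (a Python sketch added for illustration) finds no case that violates it.

```python
from itertools import product

def holds(p, q):
    # Material-conditional reading of: if (p or q) and not p, then q
    return (not ((p or q) and not p)) or q

violations = [(p, q) for p, q in product([True, False], repeat=2)
              if not holds(p, q)]
print(violations)  # [] -- no assignment of truth values violates the schema
```

The check, of course, operates only at the fourth, formalised level; it presupposes rather than establishes the claim that no event in the world could violate the law.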

You asked: if a logic excluded a conclusion that you want to get, should you reject it? We have to ask in turn: do you want to get the conclusion any old how, or do you want to get only true conclusions by valid methods? Assuming that it's the latter, and that what you want is a formal system, then you have to choose a logic which is consistent. If it has been proved consistent, you will know that you won't ever be led by the logic from true premises to a false conclusion. If you don't care whether the system is consistent, you might as well go in for wishful thinking. Logic is indeed a tool, but you want a tool that will lead you only to true conclusions from true premises, and to make sure that your logic is of that kind may demand quite hard work.

Questions to Lucas

LONGUET-HIGGINS. In your talk yesterday, you said: ‘Any view of the universe large enough to include the fact that minds exist will itself have to be so large, so rational, and so personal, as to deserve the appellation of “God”’. Can I be right in understanding you to assert that a fully mature science would qualify for that title? If not, what are you actually suggesting?

WADDINGTON. The question that I want to put in a way grows out of that. What I took John to mean, was that any view of the universe broad enough to include the existence of mind, could be taken as a description of God. Now the point I want to raise can perhaps be put, in a rather frivolous form, by asking the question: what was God like before the universe included any living beings or any minds? There could not then be any view of the universe broad enough to include minds. Again, at some time in the far future, say in the year A.D. 2,000,000, if evolution continues until then, there will surely be super-minds of some kind, and what will the nature of God be at that time?

But I should really prefer to put the problem in another way. We have been agreeing that the world has a structure, and that this is a structure in which processes are related by instructions or algorithms. Now, such a system is essentially open-ended. It is one in which things can be created. In fact, one might say that during the course of evolutionary history there have been creative changes in the world, which have brought into being such things as minds, as those are defined by Tony and John. This is another way of making the point which I referred to as a change in the nature of God in my first few sentences. Now John has been laying great stress on autonomy and freedom. I think really the question I want to ask is whether by freedom he means simply complete freedom to do whatever you wish, regardless of circumstances; or whether he would relate his idea of freedom in any way to the creative, open-ended character of the universe. It seems to me that there ought to be some connection between what John refers to as freedom, and the open-ended, creative character of the universe which you might call ‘the evolution of God’, but I don't know quite what this relation is, and I should like to hear John talk about it.

KENNY. My question is very close to the other two. John, you insist frequently on the rationality of the universe, but there are two different things this might mean and I'm not sure which of them you have in mind. You may mean that the universe is rational in the sense that it is intelligible to rational creatures like ourselves, in which case, I think, none of us would be likely to deny this. Or you may mean that the universe is rational in the sense that it exists and develops the way it does for a reason: that is, that there is a final cause for the universe, there is an end or purpose to the universe. Now, there is a big difference between these two senses of rationality and I'd like to know in which sense you think the universe is rational.


LUCAS. The diagram drawn by Dr Kenny (see p. 137) showed only four of the characteristic features of minds. But there are others. In particular there are various forms of creativity, spontaneity and originality, which I should place at a higher level than autonomy, and near the apex. Wad is quite right to think that these are very important marks of mind, which we have not sufficiently considered. Without them, my emphasis on autonomy would collapse into soggy libertarianism, and I certainly should want to disown that. The reason why I've been stressing autonomy, which carries with it always the possibility of being wrong, is that I think the possibility of things going wrong at the level of autonomy is a necessary condition of being right at the level of creativity, and of being right in an original, non-algorithmic way. That is to say, I do argue against Tony that the liberty of spontaneity implies, in some sense, the liberty of indifference. This is why I am an indeterminist. But although this means that I am defending my colleagues' right to be wrong, the underlying reason why I am so anxious to secure to them the right to be wrong is that this is a necessary condition of their ever being right. So long as they are wrong, while I defend their right to be wrong, I am sorry that they are wrong, and try to persuade them of the true opinion; for it is being right in this way that amounts to the more original and creative ideas nearer the apex.

Now, let me try and answer Christopher and Tony together. Christopher is right in having caught me in soft and woolly thinking, as I now look at that passage in my text he pointed out. The antecedent of ‘itself’ is unclear; do I mean ‘God’, or do I mean ‘view of the universe’? And this makes me open to Wad's other question: ‘What was God like before the universe included any living beings or any minds?’ There is a difficulty here because it is very easy to slip between ways of talking—various descriptions—and what one is describing. This underlies also the difficulty which Tony is bringing up about rationality. Ideally, I should like to be able to put an absolutely sharp distinction between something there which is referable to and something here which is my means of referring to it. But when we get into the realms of philosophy and metaphysics, it is very difficult to maintain this distinction. Often what we are talking about is almost constituted, almost but not quite, by what we say about it, just as it is in perception. Often, we are only able to see the things that we were expecting to see. It is quite difficult to drive a wedge in there, just as it is quite difficult for me to draw the distinction, which Tony rightly thought needed to be drawn, between saying that the universe is rational in the sense that it is intelligible to the human reason, and saying that the universe is rational in the sense that it exists for a reason. Once this distinction is drawn, then I will accept it and say we first of all come to the conclusion that the universe is intelligible by us; and then when we face some difficult question, as when a mother is bereaved and loses her infant, we say: ‘Why is the universe like this?’ And if we attempt to make out that it was the manifestation of a creative mind, we can ask further questions there. This is an instance where the inference can be made explicit, so that Tony can query it.
But it is often difficult to make this sufficiently explicit; just as when we are talking about other people, it is difficult to distinguish the overt behaviour patterns from the mind behind them. Rather we talk of reading a man's mind. If one pushes this distinction through too far, one often lands oneself in an entirely unnecessary problem of other minds. Similarly, when we are talking about God, my confusion between talking about a world view and talking about the God in terms of whom I understand the world is not quite a legitimate, but often almost a necessary, conflation of terms.

Finally, Christopher asks me about a world view sufficiently large to include us being necessarily, therefore, one which is based on the existence of God. I would like to be able to produce a straightforward argument of that sort—or perhaps even quote a scholastic tag about explanations necessarily being as strong and as powerful as all things to be explained. But I can't produce a positive argument—all I can do is hand-wave in this direction, and then hope that whenever an alternative hypothesis is put forward I shall be able to refute it. I tried to show this scheme of refutation originally in that Gödelian argument, and then yesterday when I was saying that Monod's account, just because it doesn't embrace enough, can be shown in some sense as self-defeating and incoherent; but whether I can do this always is a matter on which you may have legitimate doubts.

Let me now change my tone of voice. As I was to be the last speaker, my colleagues didn't want to give me the last word as against them, but for them. And now I'm no longer quarrelling with them but speaking for us all, saying a certain number of ‘thank-yous’. Some ‘thank-yous’ are really to the Gifford Committee rather than to you here, for bringing the two of us, the Southcountrymen, up to Edinburgh for many repeated, valuable and enjoyable visits; and from all of us for bringing us together, creating new friendships and giving all of us a great deal to ponder and to think about. But to you here, we want to give another pair of thanks. We were very alarmed when we got the Principal's letter—we shy, retiring dons were being asked to come out of our ivory towers and to go into ‘showbiz’. We had got to be entertaining and we had got to communicate—that, I think, is the trendy word. As Professor Longuet-Higgins was saying a couple of days ago, communication is something which is essentially two-way, and something which comes to dons with great difficulty—we normally sit in our studies and we write down, we cross a word out there, and there is no feedback—this is often just as well: it is a one-way process, and that is very easy at the giving end; but here it has been very much a two-way process, and we have depended greatly on your responses. We have found it very valuable to be, as it were, carried along by you; sometimes you even laughed at our jokes; we think that sometimes you followed our arguments; and we have been very largely carried along by the mere fact of your being here. When we were planning these lectures, with dire warnings from our colleagues and from the Gifford Committee and from previous lecturers, we were told that the curve of attendance was a negative sigmoid one, and that after the second lecture we were going to be talking to five dear old ladies knitting.
Well, I can see that some of you are ladies and I can see that some of you are old; whether any of you are dear or not is something which is not open to immediate inspection, though I am sure that you are. I'm reasonably confident, from looking at you many times, that none of you have been knitting. And we are all absolutely certain that there haven't been only five of you. And for this we thank you.