# Tenth Lecture. Conclusions

## Decision

LONGUET-HIGGINS

I have chosen ‘decision’ as a subject, because I think it is a topic we didn't really do justice to last time, and also because yesterday evening, when we were meeting over sherry, some of you seemed to be interested in this being pursued. So I'd like, if I may, to try and explore a very quick train of thought—it won't take me more than two or three minutes—and then I expect it'll be pounced upon by my colleagues, and they will tear it to shreds. But anyway, here goes.

For me, a decision is a sort of turning-point in a mental process. A mental process, of course, is something very complex. One thing I'd like to stress—and this has been said many times—is that when you think about the mind you are thinking about something happening, rather than a static, jelly-like entity. The idea of mind essentially involves ideas of process. Well, let's think of a decision as a turning-point in a process—in a mental process. And the concept of process is almost inseparable from that of mind. You may remember that I was trying to stress the idea of process in suggesting how we might get a better understanding of the mind; and there's a special way of describing processes, which is familiar to mathematicians and such people, and that is by what is called an algorithm. So let's say that a process is what can be specified by an algorithm. A typical algorithm will be a recipe for making a soufflé, or the routine for long division. I'll give you another, non-cooking, non-mathematical algorithm in a moment. If you want to represent an algorithm, you will find it convenient to do so in a programming language. Indeed, programming languages are designed exactly for this purpose. One of them, a well-known one, is called ALGOL—it's rather an ugly name, but what it stands for is ‘algorithmic language’. If you have represented an algorithm in this language, you have written a program. And a program is something which you can actually run on a computer—this is where the computer comes in. It is important to realise that a computer program is essentially an abstraction. The computer scientist talks about the computer as a bit of ‘hardware’, and he talks about programs, by contrast, as ‘software’. It's a misleading contrast, of course, because software isn't ‘soft’ in any sense. It's abstract as opposed to concrete, really; in discussing algorithms we are discussing something extremely abstract, not something concrete.
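As an editorial illustration of the point that an algorithm is a precise recipe for a process, the long-division routine mentioned here can be written out as a short program (a sketch in Python rather than ALGOL; the function name and the repeated-subtraction formulation are the editor's, not the speaker's):

```python
def long_divide(dividend, divisor):
    """Long division as a fixed routine: return (quotient, remainder)."""
    quotient = 0
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)    # bring down the next digit
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor            # carry the remainder forward
    return quotient, remainder

print(long_divide(1234, 7))  # (176, 2)
```

Each pass of the loop is one step of the schoolroom routine: bring down a digit, ask how many times the divisor goes, carry the remainder on.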
Now, of course, our mental processes take place in our brains—so our minds are to our brains, in a certain sense, what software is to hardware. That's a rather facile comparison, but it'll do.

I said I'd write up an algorithm. This is where the idea of a decision comes in. Supposing you had the job of filling a bucket with stones, then the following algorithm would actually work, as I think you'll see. First of all, ‘start’, and then proceed by asking a question—‘Is the bucket full?’ The answer may be ‘Yes’, in which case ‘Stop’. Or, it may be ‘No’—in which case, ‘Put in another stone’ and proceed back to the question. So the algorithm has a decision point—the point at which the question is answered. The interesting thing about computers is that you can write for them algorithms which have tests in them. I have said all this because there's obviously been some misunderstanding about where computers come into this story at all. I brought them in because one of the most obvious characteristics of what we call the mind is that we have thoughts. And if one wants to describe these at the correct level of abstraction, one has to do so algorithmically—otherwise there is no way of describing them that I can see. And these algorithms can be expressed in a very precise and unambiguous manner as programs which can be run on computers for testing purposes, and that's why the computer analogy is helpful.
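The bucket-filling algorithm just described can be transcribed almost word for word (a Python sketch; `capacity` is an illustrative stand-in for the real-world test of whether the bucket is full):

```python
def fill_bucket(capacity):
    """Start; ask 'Is the bucket full?'; stop on 'Yes', add a stone on 'No'."""
    stones = 0
    while True:
        # Decision point: the question 'Is the bucket full?'
        if stones >= capacity:
            return stones          # answer 'Yes' -> Stop
        stones += 1                # answer 'No'  -> put in another stone
        # ...and proceed back to the question

print(fill_bucket(5))  # prints 5
```

The conditional test inside the loop is exactly the ‘decision point’ of the spoken description.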

WADDINGTON

This seems to me to be a very, very low level of decision. You could call it a decision if you like; but couldn't you programme your computer simply by saying ‘Go on putting stones into the bucket until one falls off, then stop’? Then it wouldn't have to decide anything until it came to the end of the process. It seems to me you formulated your recurrent decisions in terms of asking a question—do you stop, or don't you stop? But you could have programmed it simply as ‘Go on doing what you are doing until eventually something happens, and then you stop’. I should use—or you might say I should want to use—the word ‘decision’ for something which you might call ‘decision’ too, such as ‘decide whether you are going to fill the bucket with stones or with water’. I'd want to use ‘decision’ really as a crest between two chreods: one stable system of behaving here, and another stable system of behaving there, and you decide which of these routines you will go into.

LONGUET-HIGGINS

With the second point I completely agree, but something presumably will decide you to go into one or the other—and that will be the circumstances prevailing at the time. As to the first point: in fact, if you want to implement the concept of going on until the bucket overflows, you must keep your eyes open all the time, so that you don't fail to notice when the bucket overflows. So you've got to be constantly looking, and that is exactly what this programme is in fact doing. It looks every time you put a stone in, to see whether the bucket is full to overflowing. I mean, you could say ‘If the bucket overflows, then you've gone on too far’—but that's a trivial point—a trivial difference, really. But an ‘until’ statement can't be implemented on a computer without a recurrent test to see if the condition is satisfied.
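The point about ‘until’ can be made concrete: an instruction of the form ‘go on doing this until something happens’, once spelled out for a machine, contains the recurrent test explicitly. Here is a sketch (the `repeat_until` helper and the stone-counting condition are illustrative, not any particular ALGOL construct):

```python
def repeat_until(action, condition):
    """'Repeat action until condition' desugars into a loop that tests every pass."""
    while True:
        action()           # do the thing once more...
        if condition():    # ...then look to see whether to stop
            break

stones = []
repeat_until(lambda: stones.append('stone'),
             lambda: len(stones) >= 3)
print(len(stones))  # prints 3
```

There is no way to honour the ‘until’ without looking, on every pass, to see whether the condition is satisfied, which is exactly the recurrent test of the bucket-filling algorithm.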

KENNY

I don't think you really have dealt with Waddington's point, have you? This is only a decision in the sense in which I suppose you might say a leaf which is being blown by the wind, first one way and then another, makes the decision to go one way and then the other. If you want to say that, I don't object to your usage. It's bizarre, but that's your own affair. Are you really saying that computers make decisions only in the sense that a leaf makes a decision when it goes one way or the other? If so, I don't think a computer is a very good model for a mind.

LONGUET-HIGGINS

Well, I don't understand in what sense the leaf can be represented by an algorithm with conditional jumps. I mean, if you—

It's not a leaf falling—it's a leaf blowing backwards and forwards.

KENNY

Yes. If the wind is blowing that way, it moves that way, and if the wind is blowing another way, it moves the other way. Well, I mean—do you say ‘Have you hit the ground yet—if so, stop’?

LONGUET-HIGGINS

Well, we can of course view the whole operation of a computation running on a computer as a deterministic affair. If you describe it at that level, and talk about the hardware changing its state, then there isn't any question of decisions being made. But the same thing could be said of your brain, at least in my view. On the other hand, that's not the illuminating way to describe the situation. And I think the question is, how is the situation best described, so that we can understand it. And I think that for a leaf falling it isn't an illuminating way of describing it, to say that the leaf is constantly making decisions. It somehow seems to be a description which is not called for by that situation.

LUCAS

Well, it may be illuminating—and I can see why it may be illuminating for you. It is picking up one aspect, which is quite an important one. Roughly, you are thinking first of all of continuous processes, and of their nodes, at which either one thing or another thing can happen. This will work both for what the computer is doing and for what people are doing. Decisions are the critical points when you can go either to the right or to the left. That part of the analogy, I can see, is illuminating; but I don't think it is otherwise illuminating, because it leaves out a great many points which we feel are important. I find it very difficult to say what a decision is, but one point is, I think, that it is something which one would give a reason for.

LONGUET-HIGGINS

Well, as applied to human beings I couldn't disagree with that—and indeed, yesterday, you remember, when we were talking about decision I wanted to say that I thought the distinctive characteristic of mind—mental activity as opposed to any other sort of activity—was that you had intentions. You intended to do what you did. And this idea is not captured here at all. I agree with you, but I think that the concept of a decision is one which is valuable in this connection, and can be applied in this context without violating the meaning of the word. Whereas I think it would not be possible to maintain that there was any sense in which the bucket-filling program intended to make that decision.

KENNY

But you do have an answer—don't you?—to John's point, which I think is a perfectly correct one, that a decision in the only interesting sense is something for which reasons can be given. Because I understand that you have programs—a program that you play noughts and crosses with—which does something rather like giving reasons.

LONGUET-HIGGINS

Yes indeed, that is so. Stephen Isard and I have been writing programs with which we can play a game, noughts and crosses, and then say ‘Why did you do that?’ And the answer can be ‘Because if I had not you could have won’. That sort of thing. And we are trying to put such sentences together in a principled linguistic fashion, by examining the meanings of the words, and the English syntax, and so forth. We are trying to capture the idea of a reason for doing something. The reasons offered by the program are reasons in a rather primitive, crude sense: they don't as yet refer to a high-level goal of which the program itself is aware. The program's higher-level goals are in fact set for it by the programmer.
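A minimal sketch of the idea being described (this is the editor's illustration, not the Isard–Longuet-Higgins program itself, whose details are not given here): a noughts-and-crosses move-chooser that records why it moved, so that ‘Why did you do that?’ can be answered afterwards. The board encoding and the canned reasons are assumptions made for the example.

```python
# The eight winning lines of a 3x3 board, indexed 0-8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def choose_move(board, me='O', you='X'):
    """Return (square, reason) for a 9-cell board of 'X'/'O'/None."""
    for line in LINES:
        cells = [board[i] for i in line]
        # If the opponent has two in a line with one gap, block it,
        # and remember the reason for later questioning.
        if cells.count(you) == 2 and cells.count(None) == 1:
            square = line[cells.index(None)]
            return square, 'Because if I had not, you could have won.'
    # Otherwise take the first free square, with a weaker reason.
    return board.index(None), 'Because no move was forced.'

board = ['X', 'X', None] + [None] * 6    # X threatens the top row
print(choose_move(board))                # (2, 'Because if I had not, you could have won.')
```

The ‘reason’ here is primitive in just the sense described: it reports the rule that fired, while the goal of winning at all is built in by the programmer rather than entertained by the program.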

## Consciousness

WADDINGTON

It seems to me that consciousness is a subject we haven't discussed sufficiently, and I was accused by Tony of having some rather peculiar views about it. He said that he thought I believed that consciousness was an essential criterion of mind; that consciousness is an essential characteristic of a thing if we are to attribute mind to it. Now I find this quite difficult to answer, for a reason which will come out in a second—one of the reasons, that is. But let me say first that, if consciousness were to be adopted as a criterion of mind, it would be a signally useless one, because the only way to tell whether any other thing is conscious is to ask it. And that you can only do to human beings; you can't ask a rabbit or a mouse whether it is conscious or not. I therefore think that really the concept of consciousness is not applicable to anything but a language-using animal. Now, you may say in opposition to that—well, you anaesthetize rats and rabbits, and so on, in the lab, and you are quite confident that you make them unconscious. And in a way one is, but actually one is simply overplaying one's hand at this point. One is using an anaesthetic which in man, at least, abolishes consciousness before it abolishes muscular movement. You are not using the drug which John told us about as a sort of horror story of the anaesthetized but conscious person. You are using one of the standard anaesthetics. And then, really by analogy with what happens in man, you go on until your rat is completely immobilised. Muscular movement has been put out, and you then imagine that its consciousness, if it is conscious, will have been put out. And if, when it recovers after any operation, it shows no sign of having had a traumatic pain experience, the evidence supports you. But you are really only hypothesising that it is conscious in the first place; you can't really obtain any factual evidence. You can't ask it, and it can't tell you.

The second point I want to make about consciousness is that it does seem to me the key private thing about mind. You can talk to another person about what you are conscious of, but this is communication about consciousness. You can't communicate the consciousness itself. If you did, it would be like communicating things. If I could communicate my consciousness of this paper in front of me, the person I communicated it to would have the same experience of the piece of paper that I had. He may be conscious of something, seen from his point of view, and I may be able to describe aspects of my consciousness of it, and talk about it. But I can't communicate it.

One question one must ask—again, I think, one attributed to me by Tony at one point—is: where does consciousness arise from? Our definitions of the normal material objects we talk about, such as atoms, or anything else of that kind, contain no reference to consciousness. We say consciousness is tied up with activities of our brains, and our brains are made up of atoms, but it seems to me that there is no way of passing from a normal scientific definition of an atom to any conclusion, or any consequence, about consciousness. Scientific definitions of the atom are in terms of its interactions with other material bodies. And to my mind there is a complete conceptual gap between any type of material interaction, however complex, and self-awareness. I don't see that it is any good saying ‘oh well, the brain is a very complicated arrangement of atoms, and they therefore can become conscious’. That is begging a question which is really unbeggable. If there is a complete conceptual gap between material entities and consciousness, one conclusion that has been drawn, for instance by Whitehead, was that, as we know there is consciousness somewhere, namely in ourselves, then something in some way akin to it must be in all entities; not that all entities have a consciousness as developed and evolved as our own, but there must be some property of them, from which consciousness could have evolved. That is a logical argument, which I find not very easy to refute. But I suspect that the whole question arises really from some sort of an improper formulation of the subject to begin with. I don't feel in the slightest able to expound this at all clearly—but our primitive experience is a consciousness of something; I don't think you can be conscious without being conscious of something. Normally we are conscious of experiences, of occasions of experience, in which there is consciousness, and in which there may be sticks and stones and trees, and what have you.
Then we break up this experience into the sticks and stones on the one side, and the perceiving conscious subject on the other. I think it's when we break the primitive experience up in this way that we force ourselves to this paradoxical question, where does the consciousness come from—it can't come from the material that we leave on the other side of the equation. But this, I think, is getting beyond the philosophical capacities of the simple biologist, and it's time we heard some more about it from a professional.

LONGUET-HIGGINS

Could I just put in a short remark? One sufficient condition for having been conscious at a particular time is that one can remember afterwards what was going on at that time.

Is that really true? I thought that people could remember things that they didn't actually perceive at the time.

LONGUET-HIGGINS

Well, perhaps they can, but certainly it's not a matter of common experience. Whereas it is very much a matter of common experience that if you can remember what was happening at noon you can't have been asleep at the time. It seems to me that you've talked about consciousness in two senses. One was consciousness as being awake, as opposed to being asleep; whereas you also spoke about self-consciousness, and I think that raises rather a different set of questions. It's arguable that—

No, I wasn't really meaning consciousness in the sense of reflecting, ‘well, am I now conscious or not?’

LUCAS

Even so, there are I think a number of important distinctions. You took ‘consciousness’ and ‘being conscious of’ as though they were quite the same, but I think quite often I'm not conscious of something although I am conscious, and these are in fact answers to rather different questions.

Do you think you can be conscious, but conscious of nothing?

LUCAS

It is a state which some of my pupils are in. Can I pick up one other point—I thought that at one stage you were carefully demolishing the arguments, which seemed to me to be very good, for ascribing consciousness, or at least the capacity for having experience, to animals. Having knocked this down, you then suddenly turned your agnosticism upside down, and imputed consciousness to the sticks and stones which Tony laughed at you last time for doing.

That was the paradox I was trying to bring out.

LUCAS

But I think this is a paradox.

Yes.

KENNY

I think there was also another paradox, wasn't there? The other night you were suggesting that Christopher and I had been mistaken in concentrating so much on language, and that there were beings who had minds even though they didn't have language—namely, dumb animals, who you said have minds because they are conscious.

No, I didn't say ‘because they were conscious’. I said they had minds, but it wasn't because I knew they were conscious.

KENNY

So you do know they have minds, but you aren't sure whether they are conscious? But then what is the ground for saying with certainty that they have minds, if you think it is a dubitable hypothesis that they have consciousness?

Because I use ‘mind’ to refer to a highly complex neural interrelation. If I see a thing behaving in a very complicated way—if I see a cat watch a mouse go down a hole and I know that the hole goes round a corner, and comes out over there, and the cat goes over there to wait for the mouse, I say it's got a mind. But whether it is conscious or not I can't tell—it probably is conscious, but I don't really ask the question ‘is it conscious or not?’. But if I do ask that question, I come to the conclusion that sticks and stones must be in some sense conscious too. I ask if an animal has got a mind, in the sense of: is its behaviour a complex pattern?

LONGUET-HIGGINS

But we do know about cats that they can actually remember what happened yesterday. And dogs can certainly remember smells, and so forth. And we have no reason to suppose that sticks and stones can. So there's good evidence, it seems to me, for consciousness—as opposed to walking in one's sleep, as it were—as far as cats go, but no evidence whatever as far as sticks and stones go.

Well, is it evidence for consciousness? It's evidence for their behaviour—that it is modified by something that happened yesterday.

LONGUET-HIGGINS

You mean that they were in a learning situation yesterday?

Yes. They were in a learning situation yesterday, and today they remember. But that does not involve using the word ‘remember’ in the sense in which it means that you can call up a mental picture in consciousness. I'm saying that all you can say is that the animal's behaviour today is modified by what happened to its nervous system yesterday.

LUCAS

That's not all you can say. This is your evidence, but it doesn't seem to me that it is all you can say, because if this were all that you could say, then you would take the position that Descartes took, that there is no reason whatever for objecting to vivisection, and one should no more complain of this cat screaming as you degutted it than you would feel sorry for the car when you clash the gears.

No; I think, as I said, that they probably are conscious, but I don't reckon that I can get any definite evidence of it. But in the absence of definite evidence against it, I would prefer to operate on the basis that they presumably are.

LONGUET-HIGGINS

But what's wrong with this definition of consciousness as opposed to self-consciousness, that one has a memory and that one is putting things into it at a particular time?

Well, computers have memory stores that beat mine absolutely hollow, but—

LONGUET-HIGGINS

Well would this not do as a definition of that sort of consciousness?

Just because you've got a memory store, made of whatever it is made of, doesn't show you are conscious in any sense of the word that—

LONGUET-HIGGINS

Well, put it this way: couldn't we use a different word for having a learning capacity and a memory? There's nothing about sticks and stones which gives us the slightest reason for supposing that they have learning capacity and memory. So I think that there are no good grounds for—

For saying that these things have minds. Capacities don't really apply—they probably are conscious, but I don't know.

KENNY

When do you think they are even probably conscious? I mean, if consciousness is something so much beyond the reach of evidence, why can you go even as far as a probability?

Because it's absolutely within the reach of evidence in myself. I am clear that I—

KENNY

But isn't it very rash to generalise from a single case like that?

There's only one earth, but nobody would say you can't generalise about things on earth because there's only one of it. Of course you can generalise from a single case.

KENNY

But the parallel generalisation would be to say that every planet is inhabited. Of course, we only happen to know that one of them is inhabited, but why shouldn't the others all be like this? That is the parallel with saying ‘Your body is conscious, therefore every body is conscious’.

LONGUET-HIGGINS

But surely you would agree that there's a difference between the concept of consciousness as you've been using it, and the concept of self-consciousness?

Yes, yes, I would agree there's a difference in that. You can be perfectly conscious and it's only quite rarely that you are self-conscious. I wouldn't put it past some people never to be self-conscious.

## Language and Thought

KENNY

I want to follow on from what Waddington has been saying. What he said this evening showed that I misunderstood him in thinking that he was equating consciousness and mind, but it has made it clear to me that I wasn't mistaken in attributing to him the belief in this essentially private entity to which only he had access. Only I was mistaken in thinking that he called it ‘mind’, when in fact he calls it ‘consciousness’. And I have been arguing that the notion of this essentially private entity is a mythical one. I don't want to go over the arguments for that again, but rather to try to bring out the relationship between the issue of privacy and the issue of language.

I think that Longuet-Higgins and I both attribute a greater importance to language for the study of mind than our two other colleagues. And we have been attacked for this in different ways by Waddington and Lucas. But I think that John, at least, in his paper the other night, tended to identify the issue of privacy versus publicity with the issue of language versus non-language. The horror story about the person suffering in the operation and unable to express his pain, was of course meant as an argument against the privacy of sensation and it was not—or I can't see how it was—an argument against the language-dependent nature of mind. I've said that I don't want to go over the privacy argument again, but I would like to develop a bit the argument about the relationship between language and thought. I've said that many thoughts, indeed all the thoughts which are particularly characteristic of human beings (as distinct from the thought that there's a mouse in the corner or the thought that it's about time that that can of dog food was opened, which are thoughts that might be attributed to animals) are thoughts which it is inconceivable that somebody can have unless he has the use of language. I don't want to say that every thought is, as it were, formed in the mind, in impeccable sentences, or is even verbalised at all—what I do want to say is that the criterion of identity for thoughts, that is to say, how you specify what a particular thought is, how you tell one thought from another thought, is given by the expression of thoughts in words.

We symposiasts in these meetings have been trying to communicate thoughts. What we have given you, of course, has been words. But the words aren't just a surrogate for thoughts. In communicating the words we do communicate the thoughts, and until we have the words the thoughts—those particular thoughts—are not quite clear to ourselves either. Any expression of a thought which I can give to myself I can give to you too. And this is the way in which the issue of privacy does connect with the issue of language, because of the public nature of language. There are, of course, thoughts which flash through our minds, as I said yesterday, which are not at all verbalised. But the example that I gave yesterday was meant simply to illustrate the modest claim that it was possible to have a thought which was not verbalised but which nonetheless was a thought which only a language-user could have. And so most of the evidence which has been brought by people in these meetings and at the reception last night, showing that there can be unverbalised thoughts, was not to the point—it was not a counter-example to the claim about the language-dependence of thought—because the simple example I gave showed that there can be unverbalised thoughts which nonetheless only a language-user could have.

I said that the criterion of identity for thoughts was their expression in language. If you want to know what thoughts are going through somebody's mind, even your own mind, you can only express them, you can only nail down the thought as that particular thought, by giving it expression in language. Of course, one's relation to one's own thoughts is different from one's relation to other people's thoughts, because the onus of expressing the thought, and the capacity to express the thought, belong to me in the case of my own thoughts, whereas for other people it is how they express the thoughts that decides what the thought is.

I have said that if I communicate the right words to you, I am genuinely communicating my thoughts. And against Waddington I would say I can also communicate my experiences. He said, ‘I can only communicate about my experiences—I can't communicate my experiences’. But to communicate an experience precisely is to describe it to somebody else. What he had in mind was sharing an experience in the sense in which two people would have the same experience. Now in the perfectly natural sense of having the same experience, of course, two people can have the same experience. Whatever the experience he was having when he was looking at his paper, I could have it too if I got into the same place and looked at the paper as well. What he meant was, I think, not this natural sense of having the same experience, but he meant we couldn't have numerically the same experience. And he argued from that that experiences are somehow essentially private. I don't think the argument works. We can't have numerically the same nose, because the nose I have is my nose, and the nose he has is his nose, but this doesn't mean that noses are essentially private objects.

LUCAS

I want to protest at one point. There seem to be two Kenny theses. One is true: that all human beings that we ever come into contact with are language-users, and therefore all the difficult thoughts that we are concerned with—not only the Thistle Freezers, but many others—are in fact possessed by people who have the use of language, and if they didn't have the use of language they wouldn't be in a position to come to Edinburgh and see about Thistle Freezers. But this could be guyed, I think, by another point: it is inconceivable, we could say, that there could be someone who has these high-grade thoughts unless he's been fed. One might start using a sort of low-grade argument for High Tables—that Oxford dons don't think unless they have a proper intake of calories. And then you could say, much more importantly, that of course people always have parents. And there are a whole lot of other things, and these are important truths, but they don't really define the nature of thought. And then there was a shift: as soon as he'd said this point, which is true but not relevant, Tony then said something about the criterion of identity—how you specify. Well, of course, if I am asked to specify, I must use language. I can't answer his questions without using words. But I still protest that there is something there—that people do have thoughts. I remember—it's just come to my mind this moment—that Beethoven, I think in one of his letters, says that he can't say what is in his mind; but if only the person were here to listen to a tune, that would express it.

LONGUET-HIGGINS

If I understand Tony Kenny, he attaches special importance to language because it is a symbolic representation of our mental processes. But the same could be said of music, and I would insist that one could think in music, and I imagine Tony would have allowed that to be included. But I do, I think, take issue with him on Waddington's cat, and that mouse-hole, because I am deeply impressed by Waddington's cat. It obviously thinks—we can express in words what it is thinking: ‘That mouse has gone into that hole; that hole goes up to that point there; I will go to that point’—and it jolly well does. Surely that is thought, although it is not linguistically expressible by the cat. And so I feel myself out of sympathy with you on that point.

KENNY

No, I entirely agree with you, that the cat can have that sort of thought, and in a rather complicated sentence I tried to exclude that sort of thought from the type of thought which I was claiming could only be expressed in language. What I should say is that I think that all thoughts are expressible. There are some thoughts, like thoughts about mice and holes, that are expressible in non-linguistic behaviour, but there are also many thoughts, such as all the thoughts that we've heard in this room during the Gifford Lectures, which are only expressible in linguistic behaviour. What I'm against are the entirely inexpressible thoughts that John thinks are the really important things.

With regard to John's comparison with the importance of being fed in order to think, I do believe that the impossibility of having thoughts about philosophy, say, without language, is an inconceivability. It doesn't seem to me inconceivable that there can be thoughts without there being food. As John indeed knows, I, unlike him, think that it is not inconceivable that computers have thoughts. But computers, unlike Oxford dons, don't need to live at High Table.

WADDINGTON

May I take up a point which Christopher rather casually threw out—that language is only a symbolic description of situations. Now this was an important point. I really was not in the slightest convinced by Tony when he compared his nose to my piece of paper, at the end of his talk. The point I was trying to make is that you can send things from one person to another, including pieces of paper, and including noses—it's not so often done with noses, but you do get people giving each other their hearts. And these are perfectly transferable. But you cannot transfer my experience of the piece of chalk or the piece of paper. It is, I think, private in a way that a nose or a heart is certainly not private, if you let yourself get into the hands of a surgeon.

KENNY

Only one of us can have the nose at the same time, and it is that sort of thing that is the parallel with the experience.

## Reflection

LUCAS

‘Reflexivity’, I should have written down. I am going to toss the ball first, I think, in the direction of Christopher Longuet-Higgins, going back to our fifth lecture, when he tried to stymie me in the conversation by putting the question ‘Would a rational being fail to answer this question in the affirmative?’ And he thought this would catch me on the horns of a dilemma. Would a rational being (that's who I'm supposed to identify with) fail to answer this question, thereby showing himself not to be rational? Or would he say ‘yes’, thereby saying that he is not rational? (See pp. 67–8.) And I escaped from that dilemma by claiming that there was a loop in it. ‘Would a rational being fail to answer this question?’ And that is where you say ‘What?’ and force the computer, or whatever it was on Forrest Hill, back to say it again—‘Would a rational being…’ And it turned up in our later discussions that there is a danger of a programme ending up in a loop, and he didn't quite like to ban all such things as being true or false, or as feasible or non-feasible, but said there was some problem here. And I think there is. And the reason why I first of all want to wear a slight white sheet is that although in this question there does seem to me to be a loop, there are some other self-referring statements, or self-referring questions, where there doesn't seem to be anything terribly wrong. ‘Is this an English sentence?’—we don't seem to find that unintelligible. And what I'm not at all clear about at the moment is where we do and where we do not get these apparent vicious circles. Some sentences that we can find in some formal languages are ones such that we can get a perfectly good recursive class of them, and others not. And I want to throw this open, really, for him to pick up.
It is relevant not only because it lies near the core of my Gödelian argument, but because, as we were arguing a day or two ago, there seems to be some parallel about the ability of a mind to reflect on itself. That is, it is because a mind can reflect on itself that we get leverage for Gödel's theorem. It's because a mind can reflect on itself that we get all sorts of problems about the nature of mind. One coming up only this evening, the possible distinction between consciousness and self-consciousness.

Well, I'll throw this first of all to Longuet-Higgins.

LONGUET-HIGGINS

Well, plainly, reflexivity is a characteristic of natural language. I think we would all agree, unless we were particularly pernickety, that ‘Is this an English question?’ is in fact a well-formed English question, with a perfectly clear meaning, and that the question should be answered in the affirmative. I think there's no doubt in anybody's mind that that is all right. The possibility of the sentences of the language—sorry, let me put it at the semantic level—statements of the language referring to statements in the language, including themselves, or others which refer back to them, is something which is obviously characteristic of human language. It is also a characteristic of human thought. It raises problems. It means that if we think of utterances in natural languages, as I would like to think of them, as equivalent to programs which we write to one another, then those programs can have loops in them. And it's because a program can have a loop in it that some English sentences are in fact never going to be fully capable of elucidation. This really makes perfectly good sense. If you think of sentences in natural language as equivalent to pieces of program for a computer, some programs do have loops, but that doesn't mean that programming languages are silly, and it doesn't mean that computers are irrational, and it doesn't mean, I think—applying it to ourselves—that we are irrational or silly or that our language is silly. It's only when we approach language in the wrong way—when we treat English, for example, as an indicative language—that we really get into trouble. That was the burden of my song when I was contrasting the semantics of indicative languages, where you have the Tarski trouble, with the semantics of English, where I think you are really dealing with an imperative language.
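The contrast Longuet-Higgins draws here, between benign self-reference and genuinely looping sentences, can be made concrete in program form. The following is a minimal sketch, not anything from the discussion itself, and the function names are purely illustrative:

```python
def is_this_an_english_question():
    # Benign self-reference: the question mentions itself, but
    # answering it requires no further evaluation of the question.
    return True

def would_a_rational_being_fail_to_answer():
    # Vicious self-reference: to answer, one must first evaluate
    # the very question being asked, so evaluation never bottoms out.
    return would_a_rational_being_fail_to_answer()
```

Calling the first returns at once; calling the second recurses until the interpreter gives up (Python raises `RecursionError`), which is the mechanical counterpart of the machine being forced ‘back to say it again’.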

KENNY

I wanted to follow up Christopher's objection to John on this line. You remember that John Lucas argued that minds were not machines because, given any machine working algorithmically, we could produce something which would be like a Gödelian formula, that is to say, we could present it with a formula which we could see to be true, but the machine couldn't prove to be true. When John first produced this argument, one of his critics, I think it was Professor Whiteley, made the following objection: he said “Take this sentence: ‘John Lucas cannot consistently make this judgment’.” Now, he said, clearly any other human being except John Lucas can see that this is true, without inconsistency. But clearly John can't make this judgment without inconsistency, therefore that shows that we all have a property which he doesn't have, which makes us as much superior to him as we all are to computers. And I wonder what his answer was to that.

LUCAS

Well, I will answer that. People always want to see me locked in battle with a computer, each of us trying to get the upper hand. And I have a much more modest ambition: I just want to show that we are different. And of course it is absolutely true—I do concede that other people are superior to me, and one of the very reasons is that they are other. But what that sentence shows is that other people are other—and I don't think even Dr Kenny would deny that.

I still feel that I ought to be a bit more unhappy about this, and the reason is that there is some criterion of what does and does not count as self-reference, which has eluded me. The simple solution which I was using here seems to me to be a bit too strong—it's rather like the solution which was proposed in mathematics after Russell's Paradox emerged, where we ban all of those classes which are defined in terms of themselves. And one throws all the impredicative sets out of the window. And then one has a very great task in reconstructing mathematics, without any impredicative definitions at all. And this seems to me to be too stringent. I want to get something slightly more relaxed, which will discriminate between those entirely acceptable impredicative definitions—where it is perfectly reasonable to say ‘of course I know what sentence he is making, of course I know what question he is asking’, and I would just simply be rude or silly if I keep on interrupting him after the word ‘this’ or the word ‘question’—and other cases—where there does seem to be some very substantial point at issue. This is something which is, I think, really more a matter of mathematical logic, where I rather hope that Christopher will come up with more things, because it seems to me to arise in the imperative as well as in the indicative mood. But I don't think this is just simply a matter of mathematical logic. It does seem to me that one of the characteristic features of the mind is its ability to reflect on itself, and this is why we keep on running into a whole series of difficulties. We get into the paradoxes which I mentioned, I think two nights ago, that if I am going to regard myself as a mind I am not only conscious but I am conscious that I am conscious. Not only can I think but I can think of my thinking.
And there is a certain infinite capacity of the mind which parallels the infinity of the natural numbers, and is the reason why one can, in the case of—do you want to interrupt, Professor Longuet-Higgins?

LONGUET-HIGGINS

Well, yes, I would like to make a point here. There's something which is very difficult to do, but I think that most computer scientists reckon that it will be done with success before very long. That is, to write programs which can study themselves. There's no reason to imagine that this is beyond the reach of ingenious programming, but until we have that kind of thing, you would be quite right to say that we definitely have the edge over the computers! But I would also like to say something else. You won a very good debating point against Tony just a moment ago, in saying that the Gödel argument shows that he is ‘other’, or that you are ‘other’. But I don't think that's quite how you use the argument in your book. You were rather suggesting that there was some superiority attached to human beings because they could always out-Gödel a machine, but, by implication though never explicitly, that a machine could never out-Gödel them. But actually I could write a program to print out that question to you, and that would out-Gödel you. And so we can't easily conclude from this style of argument anything very interesting about the mind.
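A small, standard witness to the idea that a program can handle its own text is a quine: a program whose output is exactly its own source. It does not ‘study’ itself in any interesting sense, but it shows that self-reference as such is no obstacle to a program. A minimal Python sketch:

```python
# A quine: running this program prints its own source text exactly.
# The string contains a placeholder for its own quoted form, much as
# a self-referring sentence contains a description of itself.
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```

The trick is the same one exploited in Gödel's construction: a text that contains, in coded form, a complete description of itself.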

KENNY

Well, now he's withdrawn even further from his original position, making the difference between men and computers only like the difference between one man and the next man, one computer and the next computer.

LUCAS

This is enough, though—if I am as unlike any computer as I am unlike Kenny, then I can be sure that I am what I always thought I was. I think self-reflection is not only a characteristic of mind; it is one that can be taken in a rather grandiloquent or perhaps a rather more humble way. If I remember rightly, Aristotle also saw the ability to reflect on oneself as a characteristic of mental power, and in fact he went further, and said that it was the peculiar activity of the Aristotelian God that he should spend his time thinking about his own excellence. Now I think there are objections to this, on good Christian doctrinal grounds, as well as those of natural theology, but the terms of Adam Gifford's bequest prevent me basing any argument on revealed religion, and I shall leave it for a later occasion to try and see more fully what is wrong with this on grounds of logic alone. This will need further researches by Longuet-Higgins. But for the moment I think we should draw just this one alternative conclusion, and that is not to think that it is a mark of the mind as exemplified in God to reflect upon its own perfection, but rather with us it is a characteristic feature of the mind that it enables men to reflect on their own imperfections, both in their thinking and in their doing.
