I want to take up where Kenny left off, because I share his conviction that natural language provides us with an incomparable window into the human mind. Indeed, it was argued by Alan Turing, one of the pioneers of modern computing, that any creature which could pass for a human being in a conversation on a teleprinter, must be regarded as capable of intelligent thought. The only trouble with Turing's test is that a computer programme has already passed it. Here's a dialogue which actually developed between Weizenbaum's programme, Eliza, and a salesman who called it up by mistake, thinking he was connected to one of the professors at M. I. T.
Salesman Do you think that I can use the computer this morning?
Computer Why do you ask?
Salesman I might be able to make some additional telecomp sales.
Computer Why aren't you sure?
Salesman My prospective customers have never seen a demonstration of the system.
Computer Does this mean very much to you?
Salesman Of course it does.
Computer You seem very positive about that.
Salesman This is ridiculous.
Computer What do you mean, this is ridiculous?
Salesman Is there any reason why I can't use the computer now?
Computer Tell me more about your prospective customers.
Salesman Please dial me up on 491–1850.
It's not difficult, being wise after the event, to see roughly how the programme works. It's designed to hold on to the conversational initiative at all costs, by changing the subject if necessary, in order to hide its inability to answer the human being's questions. But we react to this explanation of Eliza's victory with mixed feelings. Having applauded Weizenbaum's technical ingenuity, and his cynical appreciation of human gullibility, we are left with the conviction that Eliza has only passed the Turing test by cheating, and that her responses, though a good deal more to the point than a parrot's, can scarcely be regarded as thoughtful contributions to the conversation.
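The strategy just described can be conveyed in a toy sketch. What follows is emphatically not Weizenbaum's actual script; the patterns, replies, and word-swap table are invented here purely to illustrate how a few keyword rules and a person-swapping transformation can hold the conversational initiative:

```python
import re

# A toy in Eliza's spirit (not Weizenbaum's script): answer every input
# with a question, built where possible from the speaker's own words.
SWAPS = {"i": "you", "me": "you", "my": "your",
         "you": "I", "your": "my", "am": "are"}

def reflect(fragment):
    """Swap first person for second person and vice versa."""
    return " ".join(SWAPS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    text = utterance.rstrip("?.!")
    # Crudely pick out a short noun phrase after 'my' (up to two words).
    m = re.search(r"\bmy (\w+(?: \w+)?)", text, re.IGNORECASE)
    if m:
        return "Tell me more about your " + m.group(1) + "."
    if text.lower().startswith("i "):
        return "Why do you say " + reflect(text) + "?"
    return "Why do you ask?"        # change the subject, keep control

print(respond("My prospective customers have never seen a demonstration of the system."))
# Tell me more about your prospective customers.
print(respond("Do you think that I can use the computer this morning?"))
# Why do you ask?
```

Nothing in the sketch understands anything; it merely manipulates the input as an object, which is precisely the deficiency discussed below.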
In this lecture I want to try and put my finger on some of her intellectual deficiencies. Not as one of her detractors, because I regard her creation as a major event in psychology, but because I share Weizenbaum's conviction that the serious attempt to model human conversation is likely to be one of the most fruitful ways of discovering the nature of intelligent thought.
It could be argued that the remarkable progress which has been made by Chomsky and his school in the description of natural utterances has been at least partly due to the decision to attend primarily to the forms of utterances, rather than to their content. Only when we have brought to order the things people actually say can we hope to relate what they say to what they mean; at least, this is the faith which seems to have inspired much of the modern work in transformational grammar. In this lecture I shall dare to try and pass beyond the forms of utterances to their meaning—to venture on to the treacherous ground of semantics. But before doing so, I would like to revert briefly to a case study which was mentioned by Kenny, in order to bring out the distinction between structure and meaning.
Here I'm going to write on the board a little grammar which generates the English names of the natural numbers from 1 to 999. There's a set of words which we can call W1, and these are ‘one’, ‘two’,…, ‘nine’. There's another set of words called W2, namely ‘ten’, ‘eleven’, and so forth, up to ‘nineteen’. There's a third set of words called W3, namely ‘twenty’, ‘thirty’, and so forth up to ‘ninety’. We can form composite words, W4, by taking a word of type W3 and conjoining it, with a hyphen, with a type W1 word. We can define a class of words, W5, to be any word in the set W1 or W2 or W3 or W4. We can define a word string, W6, as one which starts with a word of type W1, goes on with the word ‘hundred’, and then optionally proceeds with the word ‘and’ followed by a word of type W5. And then the name of an integer from 1 to 999 is either a word of type W5 or a word of type W6. That looks very complicated, and it is rather, but we recognize that it's true. Supposing, for example, I want to form a name. I can choose a word of type W6; let's do that by taking a word of type W1, say ‘two’, followed by ‘hundred’; and then if I've chosen this option I'll say ‘and’ followed by a word of type W5, which might be, say ‘ten’. Well now, that grammar tells us what we are allowed to say, if asked to name any number between 1 and 999, but it doesn't tell us what to say if we have a particular number in mind. At the very least, we should like to have rules for naming any number presented to us in arabic notation; and conversely, for translating any string of words, such as ‘three hundred and nineteen’ into a corresponding arabic numeral. The general problem of assigning meanings to utterances or finding an utterance to express a particular meaning is, of course, extremely tricky, because a disembodied idea is a most difficult thing to capture. But we can obtain at least a glimmering of how to assign meaning to utterances by thinking about this very simple example.
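The board grammar turns readily into a little program. Here is a minimal sketch in Python; the word classes W1, W2, W3 follow the board, while the function names are mine, not part of the grammar:

```python
# English names of 1-999, following the word classes W1..W6 on the board.
W1 = ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
W2 = ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
      "sixteen", "seventeen", "eighteen", "nineteen"]
W3 = ["twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

def w5(n):
    """Name of 1-99: a word of class W1, W2, W3, or hyphenated W4."""
    if n < 10:
        return W1[n - 1]
    if n < 20:
        return W2[n - 10]
    tens, units = divmod(n, 10)
    stem = W3[tens - 2]
    return stem + "-" + W1[units - 1] if units else stem

def name(n):
    """Name of 1-999: either a W5 word or a W6 string."""
    if n < 100:
        return w5(n)
    hundreds, rest = divmod(n, 100)
    head = W1[hundreds - 1] + " hundred"          # the W6 option
    return head + " and " + w5(rest) if rest else head

print(name(210))   # two hundred and ten
print(name(319))   # three hundred and nineteen
```

Like the grammar on the board, this tells us what we are allowed to say for a number we have in mind; the converse problem, from words back to a numeral, is taken up next.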
When someone says ‘three hundred and nineteen’, how do we know what to write down? In this case, it's quite easy to see. We think of the digit whose name is ‘three’; the word ‘hundred’ tells us to put two noughts after it, and the word ‘nineteen’ tells us to replace the noughts by the pair of digits named by the word ‘nineteen’. Each word in the utterance ‘three hundred and nineteen’ triggers a process in the mind of the hearer, and the overall result of these processes is the evocation of the arabic numeral 319 in the hearer's mind. In the next few minutes I want to explore the implications of this simple idea.
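The word-by-word processes just described can be sketched the same way; in this illustration each word of the utterance triggers one small arithmetic step, exactly as in the account above (the table and function names are my own):

```python
# From a number name back to the arabic numeral: each word triggers
# a small process, and the overall result is the numeral itself.
UNITS = {w: i + 1 for i, w in enumerate(
    ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine"])}
TEENS = {w: i + 10 for i, w in enumerate(
    ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
     "sixteen", "seventeen", "eighteen", "nineteen"])}
TENS = {w: (i + 2) * 10 for i, w in enumerate(
    ["twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"])}

def value(utterance):
    total = 0
    for word in utterance.replace("-", " ").split():
        if word == "and":
            continue                 # 'and' carries no value of its own
        elif word == "hundred":
            total *= 100             # 'hundred': put two noughts after it
        else:                        # a W1, W2, or W3 word adds its digits
            total += UNITS.get(word) or TEENS.get(word) or TENS[word]
    return total

print(value("three hundred and nineteen"))   # 319
```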
Broadly speaking, I shall suggest that the meaning of an utterance is nothing more or less than the processes of thought which it is intended to evoke in the mind of the hearer, and that the job of semantics is to elucidate the relation between utterances and the processes which they symbolise. In developing this theme, I shall need the concept of a computation, so perhaps I should just explain that a computation is anything that a computer can be made to do, by programming it in a suitable computing language. And a program is, conversely, just a string of symbols which, when you put them into a computer, will result in the performance of a computation, or so one always hopes.
Now, broadly speaking again, logical languages are of two kinds, indicative languages and imperative languages. An indicative language is a language such as the propositional calculus; an imperative language is the sort of language in which we program computers. In the propositional calculus we encounter formulae of this sort: P, ∼P, P & Q, P v Q, P ⊃ Q, where the letters P and Q are so-called ‘atomic’ symbols, and the other symbols, ‘∼’, ‘&’ and ‘v’ are called ‘connectives’; and you may also have opening and closing brackets in your formulae. There are rules for deciding whether a formula is ‘well-formed’ or not, and these rules may be compared with the rules of syntax in a natural language. There are also rules for generating new well-formed formulae from old ones, and an enterprising firm has actually brought out an educational game in which the players are required to produce new formulae in just this way. But the whole point of the calculus resides in its semantics, in the way in which the formulae are to be interpreted. The atomic symbols, P, Q, and so on, are supposed to represent assertions—never mind what, we are not interested—which are either true or false. ∼P represents the negation of P, and so it is false if P is true, and true if P is false. The symbol ‘&’ is self-explanatory—P & Q is true if both P and Q are true, but false otherwise. ‘P v Q’ is false if both P and Q are false, and true otherwise. So you must think of the symbol ‘v’ as representing ‘and/or’. And finally there's the symbol ‘⊃’ which is sometimes referred to as ‘implies’; but this has to be taken with a grain of salt, because the formula ‘P ⊃ Q’ is true unless Q is false and P is true. In other words the formula ‘P ⊃ Q’ is true even if P is false. But that is one of the well-known little problems about explaining the propositional calculus to people when you first teach it to them.
Now, if one of these formulae—let's consider a more complicated one, namely (P & (P ⊃ Q)) ⊃ Q—turns out to be true whatever truth values we assign to P and Q, it's called a tautology; and if it turns out to be false whatever truth values we assign to P and Q, then it's called a contradiction. You see, the concept of truth is playing a very central role here.
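Checking for a tautology by running through all the truth values is itself a tiny computation; a sketch in Python, where the helper names are my own and only two atomic symbols are handled:

```python
from itertools import product

def implies(p, q):
    """'P ⊃ Q' is true unless P is true and Q is false."""
    return (not p) or q

def is_tautology(formula):
    """True if the two-variable formula holds under every assignment."""
    return all(formula(p, q) for p, q in product([True, False], repeat=2))

def is_contradiction(formula):
    """True if the formula is false under every assignment."""
    return not any(formula(p, q) for p, q in product([True, False], repeat=2))

# (P & (P ⊃ Q)) ⊃ Q -- true whatever truth values P and Q take
mp = lambda p, q: implies(p and implies(p, q), q)
print(is_tautology(mp))                           # True

# P & ~P -- false whatever truth value P takes
print(is_contradiction(lambda p, q: p and not p)) # True
```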
Now there's a considerable body of theory about the semantic interpretation of indicative languages such as the propositional calculus; but far the most interesting part of this theory is concerned with languages whose domain of discourse, on some interpretation, includes the statements of the language itself. In a natural language such as English, the domain of discourse certainly includes the statements of the language, because we can say ‘your statement is false’; but as a result we run into the Liar Paradox, ‘this statement is false’. It was in fact the Liar Paradox on which Alfred Tarski built his famous theorem about the impossibility of expressing the concept of arithmetical truth within arithmetic itself. But perhaps it is not so very surprising that we should have trouble with the concept of truth when we are discussing our own statements. What actually happens when we first meet the Liar Paradox? We observe that it can be true only if it's false, and false only if it's true. So we go into a loop just like a badly programmed computer—and never emerge with a definite truth-value. If the statement did have a truth value we could obviously never find it, so perhaps we should draw in our horns and allow that there are certain statements to which the concept of truth just doesn't apply. But what about the concept of meaning, if the concept of truth fails us in such a simple case?
My colleague, Stephen Isard, and I have been thinking about this problem, and some of the following thoughts are as much his as mine. In ordinary human conversation, we make rather few directly verifiable statements; there's little point in telling people what they could easily verify for themselves. We say things for various reasons, but not by any means always to provide the hearer with factual information. If I ask you to shut the door, and you do so, that is enough to convince me that you have grasped my meaning. But my sentence was in the imperative, not the indicative mood. If I ask you the time you display your grasp of my meaning by saying ‘half past five’, but this time my sentence was in the interrogative mood. Your answer is, of course, an indicative statement, and I could in principle check its truth by looking at the clock; but I probably cannot see the clock, or I would not have bothered to ask the time in the first place. For these and other reasons we have chosen to view English sentences not as logical formulas like those on the board, with truth values in some interpretation, but as messages which are to be interpreted as sets of instructions to the hearer. On this view a natural language is not to be thought of as an indicative language but as an imperative language analogous to the languages which are used for programming computers. When we want a computer to compute something for us, we have to represent the desired computation as a program, an ordered set of instructions in a computing language. When we want a person to do us a favour we can likewise express our wishes by an utterance, a command, or a question in natural language. The other person may, of course, decide not to meet our request; but that does not alter the meaning of the request any more than the meaning of the computer program is thrown in doubt if, on some particular afternoon, the program doesn't run because of a failure in the power supply.
You may be wondering whether, in giving priority to commands and questions, I have forgotten about statements. No, I haven't—but in some ways statements are a little bit more subtle. They seem to arise most naturally as the answers to questions, even if the question is not explicit. Take for example the very simple statement ‘I do’. If you ask me whether I like chocolate, I shall reply ‘I do’. But if you hold up a box of chocolates at a children's party and say ‘who likes chocolates?’, the answer will be ‘I do’. The difference of intonation in the two cases arises from the difference between the questions to which the statement might be an answer. The present situation, in this room, in which I am holding forth in a sequence of indicative sentences, is a highly artificial one. But even on this occasion I'm hoping that each statement I make may raise a question in your minds to which the following statement will help to supply an answer. It is notoriously difficult to try and teach anyone something in which they are not interested.
The view that English sentences are to be interpreted as instructions to be followed by the hearer, explains quite naturally how a sentence may fail to convey meaning in one situation, even though it is quite intelligible in another. If I turn suddenly to the Principal and say ‘Follow that taxi!’, he will either say ‘What taxi?’, or, more likely, lead me gently out of the room. Almost everything we say is riddled with presuppositions, and if the hearer doesn't share them, he is unable to carry out the intended computation. And like a good computing system, with internal checking procedures, he will probably point out why. So any good model of human conversation ought to do the same.
Another feature of English that fits naturally into the computational setting is the fact that sentences, especially fragmentary ones, depend for their interpretation not only on the objective situation but on what has just been said. An obvious example is the single word utterance, ‘yes’, which would be very difficult if not impossible to represent in a formal, indicative, language, because of Tarski's theorem. But ‘yes’ is quite easy to symbolise in a computing language. The language POP-2, invented by my colleagues in the Department of Machine Intelligence, allows one to conduct without difficulty the following short conversation:
Is 4 equal to 2 + 2?
Yes.
The conversation goes like this: you type into the computer: 4 = 2 + 2, and then an = sign and a > sign; and you press the carriage return and the computer types back ∗∗1, which means ‘yes’. (The asterisks, I might add, are just there to remind you that you didn't type that line yourself.)
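POP-2 itself is rarely met with now, but the exchange has an obvious analogue in any modern language; here, purely as an illustration, the same question and the machine's ∗∗1 in Python:

```python
# A modern echo of the POP-2 exchange: the machine's 'yes' is the
# value it prints back for the expression we typed in.
answer = (4 == 2 + 2)
print(answer)         # True
print(int(answer))    # 1, the ** 1 of the POP-2 dialogue
```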
In the semantics of indicative languages, as I've tried to explain very briefly, the concept of meaning is bound up with that of truth, and the truth of a statement is determined by examining the truth values of its constituent parts and combining them in a manner determined by the syntax of the statement. The semantics of English, viewed as an imperative language, is rather different. Here, it seems, the concept of meaning is prior to that of truth, the meaning of an utterance being the sequence of mental processes which it is supposed to evoke in the hearer, possibly culminating in an action, if the utterance was a command, or in a reply if the utterance was a question. If the utterance was a statement, there may be no observable response, only a change in the hearer's state of mind. But it is quite futile for the speaker to say anything at all unless he has some idea of the hearer's mental state; otherwise, there is no prospect of influencing the hearer's thought processes in an appropriate manner. The situation is therefore a great deal more complicated than anything which we encounter in mathematical logic. But one very useful idea may be borrowed from Tarskian semantics, namely that of determining the meaning of a sentence by combining the meanings of its constituent parts in accordance with the syntax. If I say ‘Please shut the door and open the window’, I have conjoined two sentences, and thereby indicated that I want you to do both the things mentioned. And if I ask ‘Did you understand my last remark?’ I am inviting you to recall the last remark that I made and to tell me whether you understood it. It seems likely, indeed, that the sole function of syntax in natural languages is to signify without ambiguity the nature and the precedence of the various steps in the desired computation. That is certainly the function of syntax in a programming language, as every programmer takes for granted.
I started by quoting an actual conversation with Weizenbaum's Eliza, and making a few patronising comments about her intellectual limitations. Her strong points are her syntax and her conversational strategy. She is very good at distinguishing statements from questions and commands, at picking out noun phrases—‘Tell me more about your prospective customers’—and at transforming between the first person and the second person singular. Her weak point is her semantics: she doesn't understand what is said to her, and she doesn't mean what she says. She doesn't understand, because she treats sentences not as pieces of program to be implemented, but merely as objects to be manipulated. She doesn't mean what she says, because her utterances are either pre-packaged or cleverly thrown together from the raw material supplied by her victim. But she has been an inspiration to other people to do better, and great heights have recently been scaled in a program by Terry Winograd, of MIT. Dr. Winograd will be giving a public lecture about his work in this room on Wednesday week, and so rather than steal his thunder, I shall mention some of our own attempts to express semantic ideas in programmatic form.
One of the first tasks we undertook was to construct a semantics of tense and aspect in English, mainly by the light of nature, though we found that Reichenbach had to some extent anticipated our ideas. The problem was to devise a program which would read an English question and convert it into a set of logical operations. These logical operations would in turn operate on the information available to the program, and generate an appropriate reply. The available information would be partly factual, of course, but partly also linguistic, because one of the features of human conversation, as I mentioned before, is the frequent occurrence of fragmentary utterances, which depend for their interpretation on the current tense, the current subject, and the sentences which have just been uttered. We are so accustomed to this that we hardly notice it as a problem. But perhaps you will see what I mean in a minute, when I quote some conversation with one of our own programs.
This program is designed to answer questions about the arrival of guests at a party, and it is supplied in advance with the information that Al, Bob, and Bert, Charlie, David, and Derek, Ed, Fred, and others, arrive in alphabetical order. A non-guest, called Cuthbert, hasn't been invited and isn't coming, and the program knows that. The time of speech was originally set for just before the party, and here is the ensuing conversation. (I want you to notice that some of the questions are very fragmentary.)
Human Will Fred arrive before Harry?
Program Yes.
Human Just before?
Program No.
Human When Fred arrives, will Bob have come, since Cuthbert?
Program Cuthbert never will.
Human When Bob arrives, will Harry have come since David?
Program David won't have by then.
And now we move the time of speech up to just after the arrival of Charlie.
Human Has David turned up?
Program No, not yet.
Human Has Cuthbert?
Program No, he never will.
Human Will Bob arrive before Bert?
Program Bert already has.
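A toy reconstruction of such a party model can be sketched in a few lines. This is my own illustration, not the original program; it assumes strictly alphabetical arrivals, handles only two question forms, and fixes the time of speech just after Charlie:

```python
# A toy party model (not the original program): guests arrive in
# alphabetical order; Cuthbert never comes.
GUESTS = ["Al", "Bert", "Bob", "Charlie", "David", "Derek", "Ed", "Fred", "Harry"]
ORDER = sorted(GUESTS)           # arrival order is alphabetical
now = ORDER.index("Charlie")     # time of speech: just after Charlie

def has_arrived(name):
    if name == "Cuthbert":
        return "No, he never will."   # false presupposition: flag it
    return "Yes." if ORDER.index(name) <= now else "No, not yet."

def will_arrive_before(a, b):
    if ORDER.index(b) <= now:
        return b + " already has."    # tense is inappropriate: say why
    if ORDER.index(a) <= now:
        return a + " already has."
    return "Yes." if ORDER.index(a) < ORDER.index(b) else "No."

print(has_arrived("David"))               # No, not yet.
print(has_arrived("Cuthbert"))            # No, he never will.
print(will_arrive_before("Bob", "Bert"))  # Bert already has.
```

Even this crude model shows the essential point: an ill-posed question is not answered with a bare truth value but with an indication of why it was ill-posed.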
The program works by first parsing the question in a primitive way, to determine the tense, aspect, subject, and time phrases, if any, filling any essential gaps by referring to what has been said already. It then interprets the question semantically. If the question turns out to be ill-posed, because the tense is inappropriate, or because of some false presupposition about the timing of events, then the reply has to indicate the fact. And that is just about as far as it goes. I could mention some of the problems which arise when we come to think about sentences involving the words ‘may’, ‘might’, ‘must’, ‘can’, and so on. We are trying to straighten out our ideas about these, and about other modal expressions, by writing a program with which we can play noughts and crosses, and at the same time discuss the game with the program, in English. But perhaps I'd better use the remaining time to try and draw the threads together.
First, and very emphatically, I am not suggesting that human beings are just like computers, or that computers are just like human beings. But our own mental capacities certainly include the ability to compute, and, what is more, to perform computations beyond the wildest dreams of all but the wildest people. The reason why machines which compute arouse such passions is just because, like the dog standing on its hind legs, they can do it at all. But unlike existing computers, human beings can think up programs for themselves, and natural language is the means whereby we suggest programs to one another. I use the word ‘suggest’ not because I want to build a wall of agnosticism between science and man, but merely because it would be plain folly to assert that the programs we talk are inevitably and infallibly implemented in the minds of our hearers. Of course they aren't—and for a hundred and one very good reasons. We may speak indistinctly, imprecisely, or unintelligibly, and we shall not be understood aright if we fail to understand the mind of the person we are addressing. Human communication is hair-raisingly unreliable. The wonder is that against such odds we can achieve it at all.
Secondly, in a more positive vein, I am trying to recommend a new fashion in the construction of psychological theories. By and large, theories of the brain and its workings are severely limited, both in form and in subject matter. Insofar as they venture beyond purely descriptive accounts of behaviour, the mathematics which they employ is usually limited to elementary algebra, calculus, and probability theory. But the system to which psychological theories are addressed is by far the most complex computing system in existence. Surely its understanding is going to call for concepts and theories at least as sophisticated as those which are found necessary to describe the workings of man-made computing systems? It is not just a matter of putting the old wine of behavioural psychology into the new bottles of logical notation. We must make the concepts of logic and computation an integral part of psychology, and in particular the psychology of language.
Finally, as you may have noticed, my thesis about language has suffered from a very significant omission, which will ultimately have to be repaired. It is one thing to attempt to interpret human utterances as programs for other human beings, but quite another to explain where and how these programs originate. Chomsky has stressed the difficulty of understanding this creative aspect of our use of language, and we are still very much in the dark as to how to explain it. If the meaning of an utterance or a program is the computation which it is designed to specify, how do we manage to pass from a wish to a thought, and from a thought to its expression in a form in which it is intelligible to other human beings, or to a computer? Even computing languages are only understood by computers, not actually talked by them, and one of the major challenges of the theory of language is to account for the generation of utterances on the basis of non-linguistic motivations. But it would be dangerous to assume that this problem will never be solved, and if it is we shall almost certainly gain in the process some deep insights into the nature of conscious thought.
My first complaint with Professor Longuet-Higgins is on his first page—at the very outset, where he sets the examination that Eliza is to take and pass. He quotes from Turing ‘any creature which could pass for a human being in a conversation must be regarded as capable of intelligent thought’. And I want to say that Turing here is completely wrong. This may be a necessary condition, but it's not a sufficient one. It may be a qualifying examination—if Eliza can't even fool a salesman, then she's due for redundancy anyhow—but there's a great deal more to it. The so-called imitation game smells very strongly of the old verification principle, that the meaning of a word is in its method of verification, and that we can tell that something's got a mind because its output is just what we should expect. But I think this is very clearly wrong if we move to the paradigm case, which is one which will be boringly familiar to philosophers, when we consider the problem of how we know whether someone else really is in pain. The standard example is a man with a knife sticking through his vitals, with blood pouring out of him, screaming and groaning, and writhing; and then the philosopher, with a certain air of detachment, wonders whether the person really is having pain sensations, or whether we should perhaps rather say that to have pain isn't really to have pain, but to be writhing on the floor screaming, uttering ‘I'm in pain’, and various other things. And it seems to me very very clear that here is one of the many cases where commonsense is a much better guide than philosophy. It seems to me that if we are to do the philosophy of mind at all, we must not reach the conclusion that other people don't have feelings; and that this is one of many cases where the criteria for applying a word, the examination which people have to pass, although important, are not the same thing as what we say when we say that Eliza has got a mind, or that this person really is in pain.
I think perhaps here I might add a little bit in parentheses, out of time, to defend myself against some remarks that were made yesterday, that I couldn't properly deal with—when it was pointed out that my arguments against Turing machines were of a very grisly, uncongenial kind. But the point is that I was taking on a Turing machine on its own ground. For myself, I reject Turing's thesis, but it is worth making the point that even if we are going to examine the output alone, still there is a test which Eliza fails, and necessarily fails, and my defence of the uncouthness of the lingo I use is that if you are going to tell a computer that it is a blockhead you must do so in a language so artificial that even it can understand what you are saying. Still, that's only by the way.
Now, let me, rather than go on with several further quarrels with Longuet-Higgins, agree with him, but disagreeably. That is, I want to take some points he makes—two points—like truth, and his program (rather than propositional) analysis of human language, but take them rather further than he would like to take them. First of all, on his theory of truth, which I want to introduce now: the biblical theologian, Longuet-Higgins, putting forward a thoroughly biblical theory of truth.
I didn't put one forward at all.
These (the two asterisks on the blackboard—see p. 97) are really breathings or vowels, but the very truth that he was putting forward was one which he expressed in terms of the word ‘yes’, or, rather better, the Hebrew ‘amen’. That's that. And the reason why he was led to this is instructive: Tarski's theorem shows roughly that we can't simply take the word ‘true’ as an adjective; in technical language, you can't have a recursive predicate which will pick out all and only those well-formed formulae which satisfy the normal conditions for being true. And therefore Longuet-Higgins sees that we can't hope to have the propositions on the Day of Judgement going up two by two, p and not-p, to be divided as they were always predestined to be, into sheep on the one hand, and goats on the other. This is what we can't have. But he still wants to maintain—and of course he's entirely right to—a difference between some propositions, those which are true and worthy to be received, and others which are less good. And therefore he's trying to put forward a theory in which what you have to do is to pick out some and vouch for those. The computer will vouch for these propositions individually, and say ‘yes, this is the right one’. You'll vouch for it, you say this is the one to be trusted, and it is an interesting peculiarity of English and Ancient Hebrew that in these two languages and, as far as I know, in these alone, there is a very strong connection between the word of propositional approval ‘true’, ‘amen’, or אמן and the word for saying ‘I trust’, ‘this is sure’, ‘this is reliable’.
And this is one point which I think is going to be of very great importance for understanding language, that we have to see it not so much as something which can be put in Algol, but first and foremost as an interchange between two (let me call them for the moment) operators, who are putting forward perhaps suggestions, perhaps questions, perhaps commands, and then evaluating what the other person is saying. And somebody puts forward a remark, and you may agree with it, and then in computerese you go ∗∗1, or if you are talking colloquial English ‘yes’, or to follow Strawson you say ‘ditto’, or if you were in church you might well say ‘amen’. And we should see language, if we are going to try and understand language, much more in these personal terms. Earlier we considered Descartes (and I'll come back to him later), who took a very solitary view of man, and said ‘cogito ergo sum’. The truth for linguistics is ‘loquor ergo es’: ‘I speak, therefore you exist, you are a person’. Or again, to get it into rather more fashionable jargon, you want to say that conversation presupposes an I—thou relationship, rather than an I—it one.
This raises the second point which Longuet-Higgins was taking up, the large number of questions about what is presupposed in program exchanges with a computer. I'd put it in a slightly different way. I'd put it in the form of an unpleasing question: ‘What does Longuet-Higgins see in computers that he is so keen on carrying on cosy conversations with them?’ And he was beginning, and I think this is an important point, where we are getting some illumination, he was beginning to give the answer. He wanted to stress the imperative mood: instead of just simply saying things, telling. One of the things he wants to use language for is telling the computer what to do. Another—again English usage is important—was his example of asking what the time was, telling the time, where he said, and I think this is very important, that one of the things is where one person is in a position to tell the other something which he is in a position, and the other is not in a position, to know. There is a personal theory of knowledge underlined here: that knowledge is not as universal as would otherwise be too easily assumed.
Well, first of all, I was simply quoting Turing's test as an interesting and rather thought-provoking suggestion which Turing made, but the whole purpose of my paper, as I'm sure you realise, was to try and get inside and see what further conditions we really have to satisfy before we can say that the system we are addressing is thinking intelligently. Just to take a very simple example: we can discover, by examining the computation specified for a particular computer, whether, when we ask it to multiply two numbers together, it really multiplies them together, or merely looks up the answer in a multiplication table. So we have to enquire into the detailed nature of the processes before we can decide whether the meaning is being captured by the system in question. And that was what I was most anxious to show.
As for my ‘theory of truth’: I haven't got a theory of truth, and I'm not offering one. I'm offering a theory of the meaning of natural utterances. Incidentally, I can't help picking you up: if ‘loquor ergo es’ presupposes that you are speaking to a person, do you, when you programme a computer, suppose that you are speaking to a person? I personally wouldn't make such an assumption without further question, but perhaps you would. As for using language for ‘telling computers what to do’, I think our audience will have realised that this was a mere parody of what I was trying to say. The purpose of today's exercise was to try and elucidate the nature of language, of our own language as well as any other language that we might meet anywhere. And my thesis is simply that language is a medium by which we communicate with one another; we communicate by influencing one another's mental processes, and it is the active nature of the process of comprehension which I was trying to express. It is, moreover, a highly structured activity, at least as rich in its structure as any of the computer programmes we can write. So I'm not quite sure on what grounds I'm being attacked. I can see that you like to parody, but is there any serious objection in your mind?
I'd like to take up what I think was the central, original suggestion of your paper, namely the definition of meaning. You made the very interesting suggestion that the meaning of an utterance is the process of thought that it's intended to evoke in the mind of the hearer, and that meaning is to be interpreted in terms of commanding, imperatively, rather than in terms of truth. Now I think that this is a very interesting suggestion, and I share your belief that logicians have concentrated too much on the indicative mood and rather neglected the imperative. But I'm not sure whether your new definition of meaning will achieve what you want it to do. I don't think that your paper contained a very clear argument in favour of this definition; after offering the definition you went on to the fascinating account of how your programmes can talk about noughts and crosses, and play ‘Waiting for Cuthbert’. But this was clearly not meant by you as any sort of argument in favour of your definition of meaning, and I think that the argument that did seem to be offered was this: one cannot define meaning in terms of truth, because of Tarski's work. That is to say, one cannot say that the meaning of a proposition is given by saying in what circumstances it is true, and in what circumstances it is false, because of the danger of generating paradoxes like the liar paradox. So one cannot define meaning in terms of truth conditions, and you suggested that instead we approach meaning through the imperative.
Now, if one is to understand the meaning of an imperative utterance, it seems that one must have some idea of what would count as the fulfilment of an imperative. After all, your computer only knows what your instructions to it mean, in the rather Pickwickian sense in which it does know this, if it can carry them out. If what it did were not a fulfilment of your instructions, if it couldn't obey you, then there would be no grounds at all for saying that it knew what it meant. Now, I think it's clear that if one attempts to give an account of meaning in terms of fulfilment conditions, exactly the same problems arise as in trying to give an account of meaning in terms of truth conditions, because one can very easily construct an imperatival analogue of the liar paradox. One can give the command, ‘fail to carry out this command’. And this is just as paradoxical as the liar paradox. What is wrong with the liar paradox is that if you take the statement ‘this statement is false’ you find that, if it is true, then it's false, and if it's false, then it's true. Equally, with the command ‘fail to carry out this command’, you find that if it is obeyed it is disobeyed, and if it is disobeyed, it is obeyed. I think that if I had enough formal skill I would have no difficulty in re-writing this in a formal way, in the same way as Tarski has done for truth.
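Kenny's imperatival analogue of the liar paradox can even be exhibited mechanically. In this illustrative sketch (the formulation is mine, not Kenny's), the fulfilment condition of ‘fail to carry out this command’ is written as a predicate that refers to itself, and asking whether the command has been fulfilled sends the machine round in circles:

```python
def fulfils(obeyed: bool) -> bool:
    """Fulfilment condition of 'fail to carry out this command':
    the command is fulfilled exactly when it is not fulfilled."""
    return not fulfils(obeyed)  # self-reference with no base case

# Asking whether the command is fulfilled can never settle:
try:
    fulfils(True)
    verdict = "settled"
except RecursionError:
    verdict = "never settles"

print(verdict)  # never settles
```

Just as with the liar sentence, the trouble is not that the question is ungrammatical but that its evaluation chases its own tail.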
Those of you who aren't accustomed to logical discussion I think are very likely to have a strong feeling that there's some trick or triviality about these examples, such as the Gödel formula that John Lucas has talked about, the liar paradox that Longuet-Higgins has talked about, and this paradox of the unfulfillable command that I have just produced. Don't we all know that these are utterly silly things to say? Nobody would ever say them, so why do we have to take any notice of them in trying to construct a theory of language? If we just don't say such things there won't be any problem. In a way that is true, but this knowledge that we all have that these are silly things to say, is itself something that is absolutely crucial to the understanding of language and it's something which a computer can't have. If we knew how to give a computer the knowledge of what was and what wasn't a silly thing to say, then we would have got enormously far in a task where, as Longuet-Higgins has said, we're only just beginning to scratch the surface.
Before Christopher comes back, I should like to add something about the question of defining meaning in terms of commands instead of in terms of statements. To the biologist this is obviously very attractive. It's fairly clear that human speech must have evolved from animal communications, and animal communications are basically in the form of commands. Animals see things in terms of actions to be taken, food to be approached, danger to be run away from, and so on, and their communications with each other are fundamentally in the form of commands rather than descriptive statements of states of affairs. But surely, even if one accepts that the fundamental basis of language is the command mode, the great feature of human language is just exactly that it allows you to make so much more complicated commands. You are going to state not just ‘danger’, but ‘I see a snake with a black diamond pattern running down its back’, or something like this. You can say this is really an instruction to run away from an adder, but really you are describing: the command implies a description of a complex state of affairs.
It's how human language gets complexified that seems to me to be the real problem. The living world gets much more complex as evolution goes on, and this is, I think, basically to be understood by pointing out that if you've already got four or five elements, a, b, c, d, e, an animal can always find a way of making a living by exploiting a relationship between the elements b and c, for example. If you've got rivers with snails in them and men walking into them, you can have a Schistosome parasite that lives part of its life in the snail and part of its life in man, and produces a most unpleasant disease in, what is it, about a third of the world's population, or something of that kind. If you have any system which is already slightly complex, evolution can use this to make a further degree of complexity. I suspect this is what must have happened in human language. Possibly, if you say that every noun is really a programme (a chair is defined as something to be sat on), then what you've got to think of is: in what way are these programmes combined with one another to make a more complex entity, like a table set for dinner, which has a table and a number of chairs around it? This seems to me really the important thing about human language, because what it really does for mankind is to make it possible to deal with the world in much greater detail and complexity than you can do without it. Its point is its detailedness, not just that its fundamental roots are commands.
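Waddington's picture, every noun a small programme and complex entities built by combining simpler ones, can be caricatured in a few lines (the representation is entirely my own; it is meant only to show composition, not to model language):

```python
# Each 'noun' is a little programme: a procedure defining a thing by its use.
def chair():
    return "something to be sat on"

def table():
    return "something to put things on"

# A more complex entity is simply a combination of the simpler programmes,
# just as evolution exploits relationships between existing elements.
def table_set_for_dinner(n_places):
    return {"table": table(), "chairs": [chair() for _ in range(n_places)]}

setting = table_set_for_dinner(4)
print(len(setting["chairs"]))  # 4
```

The interesting question, on this view, is not what the primitive programmes are, but by what operations they combine into ever more detailed wholes.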
I'd just like to pick up one of these things: what I would say about speaking to a computer, in regard to ‘loquor ergo es’. The point is I don't regard speaking to a computer, or saying ‘stop’ to a toy car, which then stops, as speaking. And the crucial reason is one which Kenny mentioned: with instructing a computer, either it does what it is told or it hasn't understood the programme. With a person there is another possibility: he understands perfectly well, but says ‘no’; and this possibility, which you had on pp. 104–5, is the crucial one, that people have minds of their own, and may understand and yet refuse to obey.
Well, to take the last point first: yes, I quite agree. But I think there are situations in which the same thing could conceivably happen with a computer. The program ‘compiles’ perfectly well, and it runs, but there is some fault in the output or something, and the result doesn't come out as you want it. I think one can argue about that kind of detail for a long time.
I must say, I found Tony Kenny's point extremely good, and I entirely accept what he says: it's no good trying to define the meaning of an utterance in terms of its actual fulfilment. What I was trying to say was that the meaning of an utterance is the computation which it specifies. Now that computation may or may not terminate; it may or may not actually be performed. But that is what the statement means. And if you say ‘fail to carry out this command’, or ‘disobey me’, essentially this is a computation with a loop in it, which could never possibly terminate; and we can see that that is the meaning of the sentence. And I think it is intellectually too cowardly to withhold meaning from self-referential statements, or indeed from self-referential commands.
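Longuet-Higgins's point, that the meaning of ‘disobey me’ is a perfectly definite computation even though that computation never terminates, can be illustrated by inspecting the loop without running it for ever (an illustrative sketch; the names are mine, not his):

```python
# The computation specified by 'disobey me': to obey it is to disobey it,
# so the state flips for ever and never settles.
def disobey_me_states(n):
    """Return the first n states of the non-terminating computation."""
    obeyed = False
    states = []
    for _ in range(n):
        obeyed = not obeyed   # obeying it means disobeying it, and back again
        states.append(obeyed)
    return states

# We can grasp the meaning (the loop) without performing it to completion:
print(disobey_me_states(4))  # [True, False, True, False]
```

The loop itself is perfectly well defined; what is missing is not a meaning but a terminating execution, which is exactly the distinction being urged against withholding meaning from self-referential commands.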
I also entirely agree with Wad about the really tough problem about language from the biologist's point of view: how on earth did it evolve to its present state of complexity and power? And there is a corresponding ontogenetic problem: how do children acquire their knowledge of language? I suppose, as in many other branches of biology, although he would correct me, I am sure, we might expect that ontogeny would recapitulate phylogeny to a certain extent, although doubtless with very many shortcuts. I suppose it's conceivable that one might hope to throw some light on the evolutionary problem of language by actually studying language acquisition in children, but I speak under serious correction here, and I would just like to know whether you think so.
I think: probably.