7: Bayesian Coherentism and Rationality

Our central focus is on the notion of warrant—that quantity enough of which is sufficient, together with truth, for knowledge. As an account of warrant, Bayesianism clearly won't do the job; for purposes of this book, therefore, no more, strictly speaking, need be said. But Bayesianism is much too interesting to dismiss in such summary fashion. Bayesians typically speak not of warrant, but of rationality, and they have subtle and fascinating things to say about it. One of the things they say is that conformity to Bayesian constraints is necessary for rationality. But what is this rationality of which they speak? In this chapter we shall explore this question: rationality (along with justification) is a crucially significant notion neighboring warrant and is important for coming to a solid understanding of it. We shall explore this multifaceted notion of rationality and ask whether there is any interesting facet of it of which Bayesianism is a good account.

I. The Varieties of Rationality

A. Means-Ends Rationality and Foley Rationality

One of the slipperiest terms in the philosophical lexicon, ‘rationality’ is many things to many people. According to Richard Foley,

rationality is a function of an individual pursuing his goals in a way that he on reflection would take to be effective. Since epistemic rationality is concerned with the epistemic goal of now believing truths and now not believing falsehoods, the Aristotelian conception suggests that it is epistemically rational for an individual S to believe p just if he on reflection would think that believing p is an effective means to his epistemic goal.1

The generic notion of rationality of which Foley's is a species is what our Continental cousins, following Max Weber, sometimes call Zweckrationalität, the sort of rationality displayed by the actions of someone who strives to attain his goals in a way calculated to achieve them. Clearly there is a whole constellation of notions lurking in the nearby woods: what would in fact contribute to your goals, what you take it would contribute to your goals, what you would take it would contribute to your goals if you were sufficiently acute, or knew enough, or were not distracted by lust, greed, pride, ambition, and the like, what you would take it would contribute to your goals if you were not thus distracted and were also to reflect sufficiently, and so on. This notion of rationality has assumed enormous importance in the last 150 years or so. (Among its laurels, for example, is the complete domination of the development of the discipline of economics.) Rationality thus construed is a matter of knowing how to get what you want; it is the cunning of reason. (Zweckrationalität might also be called ‘Jacobean rationality’, after the Old Testament patriarch Jacob, famed for cunning if not integrity.) Foley's specifically epistemic rationality is a special case of Jacobean rationality: the case where the goal in question is the epistemic goal of now having true beliefs and now not having false beliefs. Foley rationality is a property one of your beliefs has if, on sufficient reflection, you would think that holding that belief was an effective means to achieving that epistemic goal.

By way of brief digression: Foley rationality is intuitively important; and Foley develops it with depth and subtlety.2 It is important to see, however, that Foley rationality does not provide the materials for an account of warrant (nor, of course, does Foley claim it does). There may be interesting connections between warrant and Foley rationality; but I can be Foley rational in accepting a belief B, even if B has no warrant or positive epistemic status for me. Descartes speaks of those “whose cerebella are so troubled or clouded by the violent vapors of black bile, that they constantly assure us that they think they are kings when they are really quite poor, or that they are clothed in purple when they are really without covering, or who imagine that they have an earthenware head, or are nothing but pumpkins or are made out of glass” (Meditation I). So imagine someone doing something that on the face of it looks at best wholly eccentric and at worst insane. Suppose one of your friends takes to wrapping his head with great swaths of cotton batting, cutting tiny holes for his eyes, ears, and nose; then by way of applying a finishing touch, he puts on a necessarily oversize football helmet. He never goes out without this getup, and he makes superhuman efforts to avoid even the most moderate bumps. He does not play football, of course, but he also avoids such apparently unhazardous activity as walking under oak trees when acorns are falling and apple trees during picking season. This behavior seems egregiously foolish and wholly irrational; but then we discover that (due to black bile or brain lesion) he has come to believe, like Descartes’ madmen, that his head is made of glass. (He believes his head is a hollow spheroid, made of thin and fragile crystal.) This belief on his part may be utterly mad; but given that he has it, one can see why he acts as he does. From his perspective, which includes that bizarre belief as well as a wholly understandable desire to avoid a shattering experience, this mode of behavior seems perfectly sensible—rational, as we might say.

His behavior displays means-ends rationality; but further, his mad belief may display Foley rationality. For perhaps this belief, due as it is to cerebral malfunction, is deeply ingrained in him and wholly immune to reflection: no matter how much he reflected, he would still hold it—indeed, hold it even more firmly—and still think holding it a good way to achieve his goal of believing truths and not believing falsehoods. But of course the belief would have little or no warrant for him. A high degree of Foley rationality, therefore, isn't anywhere nearly sufficient for warrant; to get a condition sufficient for the latter, we should have to add at the least that the agent's cognitive faculties are not subject to this sort of cognitive disorder.

B. Aristotelian Rationality

According to Aristotle, man is a rational animal. Aristotle was no doubt right in this as in much else: but what did he mean? One of the most venerable uses of the term ‘rational’ is to denote certain kinds of beings: those with ratio, the power of reason. Such creatures are able to hold beliefs; they are capable of thought, reflection, intentionality. Rational beings are those that are able to form concepts, grasp propositions, see relationships between them, think about objects both near and far. This is the sense in which man is a rational animal. Creatures can of course differ with respect to their rational powers, the strength or excellence of their reason or ratio. Man is a rational animal, but certain other animals also appear to display some rudimentary powers of reason, and perhaps there are still other creatures (angels, Alpha Centaurians) by comparison with whom, cognitively speaking, we human beings pale into insignificance. So a second sense of the term: a creature is rational if it has the power of reason. (Clearly, being rational in this sense is a necessary condition for having knowledge; it may also be sufficient for having some knowledge or other, but of course it is not sufficient for any particular bit of knowledge.)

C. Rationality as the Deliverances of Reason

Aristotelian rationality is generic: it pertains to the power of thinking, believing, and knowing. But there is also a very important more specific sense; this is the sense that goes with reason taken more narrowly, as the source of a priori knowledge and belief.3 Most prominent among the deliverances of reason are self-evident beliefs—beliefs so obvious that you can't grasp them without seeing that they couldn't be false. Of course, there are other beliefs—38×39 = 1,482, for example—that are not self-evident, but are a consequence of self-evident beliefs by way of arguments that are self-evidently valid; these too are among the deliverances of reason. So say that the deliverances of reason is the set of those propositions that are self-evident for us human beings, closed under self-evident consequence. This yields another traditional kind of rationality: a belief is rational if it is among the deliverances of reason and irrational if it is contrary to the deliverances of reason. (A belief can therefore be neither rational nor irrational, in this sense.)4 Rationality in this sense is clearly species (or kind) relative; beings of more impressive intellectual attainments might well find much self-evident that is beyond our cognitive grasp.

There are various analogical extensions of this use of the term ‘rational’ and its cohorts, and analogical extensions of the concept it expresses. First, we can broaden the category of reason to include memory and experience and whatever else goes into science; this is the sense of the term when reason is contrasted with faith. Second, a person can be said to be irrational if he won't listen to or pay attention to the deliverances of reason. He may be blinded by lust or inflamed by passion, or deceived by pride: he might then act contrary to reason—act irrationally, but also believe irrationally. Thus Locke:

Let never so much probability hang on one side of a covetous man's reasoning, and money on the other, it is easy to foresee which will outweigh. Tell a man, passionately in love, that he is jilted; bring a score of witnesses of the falsehood of his mistress, ‘tis ten to one but three kind words of hers, shall invalidate all their testimonies… and though men cannot always openly gain-say, or resist the force of manifest probabilities, that make against them; yet yield they not to the argument.5

D. Deontological Rationality

There is another important extension of this sense. Evidentialist objectors to theistic belief say that it is irrational to believe in God without having (propositional) evidence. Here they don't have in mind Foley rationality (they would not be mollified by a demonstration that even after sufficient reflection the theist would continue to think believing in God a good way to achieve his epistemic goals); nor do they mean that believers in God (sadly enough) are not rational creatures; nor do they necessarily mean that belief in God is contrary to the dictates of reason (they need not think that one can deduce the nonexistence of God from propositions that are self-evident to one degree or another). What then do they mean? An important clue is the way these critics often assume the moral high ground, sometimes even sounding a bit self-righteous in the process. Thus Michael Scriven:

Now even belief in something for which there is no evidence, i.e., a belief which goes beyond the evidence, although a lesser sin than belief in something which is contrary to well-established laws, is plainly irrational in that it simply amounts to attaching belief where it is not justified. So the proper alternative, when there is no evidence, is not mere suspension of belief, e.g., about Santa Claus; it is disbelief. It most certainly is not faith.6

Here Scriven is thinking of propositional evidence: the evidence afforded for one of your beliefs by way of an argument from other propositions you believe, for example. Now why is it irrational to believe that for which there is no evidence? Return to Descartes and Locke (see pp. 11ff); and suppose you agree with them that there is a duty to refrain from believing a proposition (a proposition that is not either self-evident or appropriately about your own mental life) unless there is (propositional) evidence for it. Suppose, more particularly, you agree with them that it is self-evident, a deliverance of reason, that there is such a duty. Then to believe a proposition of that sort without evidence is to go contrary to the deliverances of reason: not by believing a proposition that is contrary to reason, but by believing in a way that constitutes flouting a duty, a duty such that it is a deliverance of reason that there is such a duty. To flout this duty is to go contrary to the deliverances of reason; it would be natural, therefore, to extend the use of the term and call such beliefs ‘irrational’. In that extended sense of the term, belief in God without propositional evidence is irrational, if indeed it is self-evident that there is the sort of duty Locke and Descartes say there is. (Of course, the fact is that it is not self-evident.)

Rationality in this sense, clearly, is very close to the classical notion of justification, as in chapter 1. Indeed, this claim that proper belief in God requires propositional evidence is often put in terms of justification; in these contexts, ‘justification’ and ‘rationality’ are often used interchangeably (a fact we understand when we see that this variety of rationality, like classical justification, is essentially deontological). Note that here, as with ‘justification’, there are many analogical extensions and additions to the use of the term, and many cases where it is used in forgetfulness of the original basis of its application. It is in this way that the term ‘irrational’ can come to be used as simply a name for a certain kind of behavior, a kind of behavior that, by many earlier users of the term, was thought to have the property (say, that of going contrary to duty) it expressed on the earlier use of the term. In this way someone can come to think that it is irrational to believe without propositional evidence even if she no longer believes that there are those epistemic duties Locke and Descartes say there are—although then it is no longer clear just what she is saying about such believings when she says that they are irrational, or why their being irrational should be thought a mark against them.

E. Rationality as Sanity and Proper Function

One who suffers from pathological confusion, or flight of ideas, or Korsakov's syndrome, or certain kinds of agnosia, or manic depressive psychosis will often be said to be irrational; after the episode passes, he may be said to have regained rationality. Here ‘rationality’ means absence of dysfunction, disorder, impairment, pathology with respect to rational faculties. So this variety of rationality is analogically related to Aristotelian rationality; a person is rational in this sense when no malfunction obstructs her use of the faculties by virtue of the possession of which she is rational in the Aristotelian sense. Rationality as sanity does not require possession of particularly exalted rational faculties; it requires only normality (in the nonstatistical sense) or health, or proper function. This use of the term, naturally enough, is prominent in psychiatric discussions—Oliver Sacks's man who mistook his wife for a hat,7 for example, was thus irrational. In this sense of the term, an irrational impulse may be rational: an irrational impulse is really one that goes contrary to the deliverances of reason; but undergoing such impulses need not be in any way dysfunctional or a result of the impairment of cognitive faculties. To go back to some of William James's examples, that I will survive my serious illness might be unlikely, given the statistics I know and my evidence generally; perhaps we are so constructed, however, that when our faculties function properly in extreme situations, we are more optimistic than the evidence warrants. This belief, then, is irrational in the sense that it goes contrary to the deliverances of reason; it is rational in the sense that it does not involve dysfunction. (To use the terminology of my Warrant and Proper Function, the module of the design plan involved in the production of this belief is aimed, not at truth, but at survival.)

II. Bayesian Constraints and Rationality

Now which, if any, of these concepts of rationality does the Bayesian have in mind when she declares that rationality requires satisfying coherence, conditionalization or probability kinematics, and perhaps Reflection? Note that these Bayesian constraints are thought of in two quite different spirits. First, the Kantian way: the conditions in question are proposed as norms for our epistemic behavior—epistemic rules or maxims, perhaps—which, like norms generally, are none the worse for being seldom met:

Bayesian decision theory provides a set of norms for human decision making; but it is far from being a true description of our behavior. Similarly, deductive logic provides a set of norms for human deductive reasoning, but cannot be usefully reinterpreted as a description of human reasoning.8

To the extent that we fail to obey these rules or conform to these norms (on this way of thinking of the matter) we are allegedly irrational or, at any rate, less than wholly rational.

But second, there is the Platonic way: the Bayesian constraints may be thought of as descriptive of the intellectual life of ideal cognizers, as characteristic of ideally rational persons, knowers with maximal ratio. Thus Paul Horwich:

More specifically, the Bayesian approach rests upon the fundamental principle: (B) That the degrees of belief of an ideally rational person conform to the mathematical principles of probability theory.9

So here we are idealizing, perhaps in the way in which we do physics by thinking about frictionless planes and point masses. Human beings are not in fact coherent, but then automobiles are not point masses and roads are not frictionless planes; still, we can learn a lot about the way automobiles move on roads by treating them as if they were point masses and frictionless planes. Perhaps the Bayesian means to be talking about how human beings would function if there were not the intruding analogues of friction. Or perhaps we are idealizing in a different way: we are describing the intellectual characteristics of an ideal knower, a being of maximum reason, maximum ratio, whether or not an idealized human knower. In either case, the thought is that the Bayesian constraints form a pattern for us, a sort of Platonic eidos that constitutes an ideal for us and our intellectual life. And of course there is an intimate connection between these two: if the Bayesian conditions describe the intellectual life of an ideally rational person, then insofar as we do not conform to them taken as maxims or rules, we fall short of that ideal.

A. Coherence

But now we must ask a question that has been clamoring for attention: Why must we conform to the Bayesian constraints, if we are not to be irrational? Why, for example, must we be coherent to be rational? Here there are substantially three answers: first, what, for want of a better name, I shall call “the argument from means-ends rationality,” initially in the form of a Dutch book argument and then in a deeper form; second, the argument from ideality; and third, the analogical argument.

1. The Dutch Book Argument

The conclusion of the Dutch book argument is that I am irrational if not coherent.10 Why so? And how are we to think of this irrationality that allegedly fastens, like a rapacious lamprey, to one who does not conform to Bayesian constraints: what sort of irrationality is this? Which (if any) of the previously noted forms of rationality is at stake? Clearly not Aristotelian rationality or rationality as sanity; one can be a sane and properly functioning rational animal without being coherent. Nor is it among the deliverances of reason that there is a duty to be coherent. So perhaps it is means-ends rationality that is at stake. Perhaps the thought is that if I am not coherent, I am vulnerable to a Dutch bookie, and that does not fit well with my aims (which include among other things hanging on to my fortune, meager though it is).

But just how is the argument supposed to run? No doubt in general it would be means-ends irrational (ceteris paribus) to accept knowingly a series of bets such that no matter what happens, I lose. (It would also be irrational to accept such a series of bets even if I didn't know it was of that distressing character; this would be irrational in a broader means-ends sense in which taking any action guaranteed to frustrate my ends is [ceteris paribus] irrational, whether or not I know it is guaranteed to frustrate my ends.) But suppose I do bet on A; why must I also be prepared to bet on not-A? Indeed, who says I have to bet at all? Dutch book arguments picture us as wildly enthusiastic and totally committed bookies, posting odds (perhaps on a giant board on the front lawn) on every proposition we come across, ready to take on all comers and cover any bets that are fair according to those odds. (I'll give you 100 to 1 that Caesar crossed the Rubicon, 10^10 to 1 that arithmetic is incomplete, 1 to 3 that there are two John Paul Jones's in the phone book on Fiji, 4 to 1 on Existentialism,…) If I were to do this, then no doubt a logically omniscient bettor could drain my pockets; for, of course, I am not logically omniscient and, as I will argue in sec. 4 (p. 145), not even consistent with respect to my full beliefs (for the paradox of the preface, see p. 145). But are there any such bettors—more relevantly, am I likely to encounter one? And if I do encounter one, can't I just refuse to bet? Are these logically omniscient bettors a problem worth worrying about?
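
Before answering, it helps to have the sure-loss mechanics in front of us. Here is a minimal sketch of the standard computation behind the argument; the function name and figures are merely illustrative, not part of anyone's actual apparatus:

```python
# A minimal sketch (not in the original) of the sure-loss computation a
# Dutch book argument relies on. The agent posts p_A and p_not_A as fair
# prices for $1-stake tickets on A and on not-A; numbers are hypothetical.

def dutch_book_loss(p_A, p_not_A, stake=1.0):
    """Return the agent's net payoff (if A is true, if A is false).

    If p_A + p_not_A > 1, the bookie sells the agent both tickets at the
    agent's own prices; if the sum is less than 1, the bookie buys both
    tickets from the agent. Exactly one ticket pays out either way.
    """
    total = p_A + p_not_A
    if total > 1:
        # Agent pays total*stake for both tickets, collects one payout.
        net = stake - total * stake
    else:
        # Agent receives total*stake for both tickets, pays one payout.
        net = total * stake - stake
    net = round(net, 10)  # suppress floating-point noise
    return net, net       # the same result whether A is true or false

print(dutch_book_loss(0.6, 0.6))  # incoherent: (-0.2, -0.2), a sure loss
print(dutch_book_loss(0.6, 0.4))  # coherent:   (0.0, 0.0), no sure loss
```

Whichever way A turns out, the incoherent agent is down by |P(A) + P(not-A) − 1| per unit stake; an agent whose two credences sum to 1 cannot be booked this way.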

What means-ends rationality requires is not that I be coherent or post coherent odds. What it requires is that if I am not coherent, I avoid betting with logically omniscient bookies, just as I avoid betting on historical facts with historians or on points of law with lawyers. But in fact means-ends rationality requires something vastly stronger: it requires that I stay out of that whole miserable betting situation. I don't have the time to get involved with all that odds posting, all those efforts to figure out what I believe about this and that (whether, for example, I believe that the theory of evolution is more likely than, say, supralapsarianism); I don't have the time or money to put up that big board. Why should I waste time doing things as silly as all that? There are other ways in which I would much rather spend my allotted three score and ten—other ways that will contribute much better to my goals. The Dutch book argument, therefore, clearly goes nowhere, as an argument for the conclusion that it is irrational not to be coherent. No doubt it would be irrational for me to engage in wagers with logically omniscient bookies if I am incoherent or know that I am, but it doesn't follow that I am means-ends irrational if not coherent.

We can say something stronger: it would be means-ends irrational for me to try to become coherent. This is evident as follows. A principal source of my incoherence is my lack of what is sometimes called ‘logical omniscience’. But this is not quite the right term. I am less than logically omniscient, all right, just by virtue of the fact that there are many necessary truths I have never heard of. But the problem for my being coherent is not just that there are necessary truths I have never heard of; the problem is that many of the necessary truths I have heard of are such that it would be irrational for me to believe them to the maximal degree. Consider Goldbach's Conjecture, for example, or the claim that each object has a qualitative individual essence, or the view that objects have haecceities and that Socrates’ haecceity could not have existed if Socrates himself had not. I have little idea which if any of these are true; I therefore give some credence to each of them, but also some credence to each of their denials. But, of course, each is noncontingent, necessarily true if true at all; I am therefore incoherent. Many other noncontingent propositions are such that while I think I can see that they are true, I can't see their truth as clearly as that of elementary truths of logic or arithmetic: for example, the propositions that no propositions are sets,11 that (pace Meinong and Castañeda) there are no objects that do not exist, and (contra existentialism) that even if Socrates had not existed, the possible worlds in which he exists would still have existed, and so on. Hence I believe them to some degree less than the maximal degree. But isn't this just what rationality requires? Would it be rational in the means-ends sense, given my limitations, for me to try to achieve coherence, thus trying to believe every noncontingent proposition I think of to the maximal or minimal degree? Of course not. According to John Locke, the wise man proportions his belief to the evidence: but this holds for noncontingent truths as well as contingent truths. One of my goals is to try to achieve a wise and judicious frame of mind in which I proportion my degree of belief, with respect to noncontingent truths, to their degree of obviousness, or their obviousness with respect to what is obvious, or the enthusiasm with which they are endorsed by those who know. But then means-ends rationality does not require that I be coherent;12 it requires, instead, that I recognize my limitations and believe a noncontingent proposition with maximal firmness only if it is maximally evident for me.13

2. The Deeper Means-Ends Argument

But perhaps we could think of Dutch book arguments as dramatizing and pointing to a deeper sort of problem: if I am not coherent, then my views will be such as to necessarily diminish my chances of being right; and since I have a stake in being right, isn't that means-ends irrational? Isn't it also Foley irrational in that it interferes with my goal of now believing truth and not now believing falsehood? Thus van Fraassen:

Let me clarify this by means of the distinction between reasonableness and vindication in the evaluation of right action, right decision, and right opinion. Whether or not you were vindicated in a decision or action depends on the outcome it led to in the actual circumstances that obtained—much of which you could not have known or reasonably expected. Whether or not your present opinion about tomorrow's weather will be vindicated depends on tomorrow's actual weather. Lack of vindication can be a reproach, as Machiavelli pointed out, but it cannot impugn the rationality of the action or opinion. Whether or not that was reasonable depends on factors settled at the time and, in some sense, accessible. The paradigm of irrationality is to form or organize your actions, decisions, or opinions so as to hinder needlessly your chance of vindication. If your opinion is self-contradictory, you have sabotaged yourself in the worst possible way—you have guaranteed that your opinion will not turn out correct—but milder forms of self-sabotage are easily envisaged.14

Among those milder forms of self-sabotage, we might think, is incoherence: if we are incoherent, we cannot be completely vindicated, and “A decision is unreasonable if vindication is a priori precluded.”15 What is the force of ‘unreasonable’ here? Perhaps van Fraassen is thinking of means-ends rationality: among my goals is vindication, or rather a style of epistemic life in which vindication is not a priori excluded. Incoherence, however, is incompatible with achieving that style of life; so incoherence is means-ends irrational. Here we have an argument from means-ends rationality that is independent of Dutch book considerations and is both deeper and more plausible than Dutch book arguments. Still, I think the reply is essentially the same. After all, we have already ruined the possibility of (complete) vindication by accepting any noncontingent truth to a degree different from 0 or 1. Here there is at best a conflict among my goals. Perhaps it would be good (if possible) not to preclude a priori the possibility of complete vindication; but it is also good to respond appropriately to the difference in warrant, for me, between different noncontingent propositions. I want to believe noncontingent propositions to the degree to which they are obvious to me, or clearly supported by propositions that are obvious to me, or attested to by those in the know. And the fact is I want this a good deal more than I want to avoid precluding a priori the possibility of complete vindication. I could achieve that latter condition only by believing all necessary propositions (or all those that come within my ken) to the maximal degree and all impossible propositions to the minimal degree. This seems to me to require a degree of opinionation inappropriate for beings such as we, who know of our own limitations.

3. The Argument from Ideality

But isn't it true that an ideal intellect would satisfy these conditions? Surely an ideally rational person, a person possessed of perfect ratio or reason, a perfect knower, would satisfy them, just by virtue of being thus ideally rational. And if an ideal intellect would be coherent, then isn't coherence an ideal for any intellect? According to J. Howard Sobel, “Logical omniscience, being certain of every necessary truth, and high opinionation, having quite definite degrees of confidence in all propositions, are further aspects of an ideal for intellects.” He adds that “A person has a stake in intellectual perfection, a deeply personal stake, and compromises made here are always degrading in a sense.”16 Are we not therefore irrational or at any rate less than wholly rational to the extent that we fail to achieve coherence? We could see the argument of the preceding section as related to this one. Part of my reason for rejecting coherence as required by means-ends rationality is just that some necessary truths seem much more obvious to me than others; but of course an ideal intellect would not labor under that handicap; so for such an ideal intellect, means-ends rationality as well as ideality would require coherence.

Now an ideal intellect, an ideal knower, would indeed conform to most of these conditions. God, for example, is omniscient; further, we may suppose he believes every true proposition to the maximal degree and every false proposition to the minimum, so that his beliefs are coherent. True, they probably are not strictly coherent (for him many17 true contingent propositions probably have a subjective probability of 1), but that is only because he is not subject to the sorts of limitations that make it inappropriate (if it is) for us to believe contingent propositions to the max. Since he knows that he never makes a mistake, absence of strict coherence is not hubris for him. Furthermore, since his opinions do not change (or rather, do not change in the relevant fashion),18 he trivially satisfies conditionalization and probability kinematics. Of course he also satisfies Reflection: since he is essentially omniscient, at any time t he believes all truths to the max, including the truth, with respect to any later time t* and any truth B, that at t* he will believe B to the max. There are or may be intellects less exalted than God but more exalted than we who also conform to the Bayesian conditions—intellects characterized by logical omniscience, say, even if they are not perfect (or even very good) with respect to contingent truth. (There may also be intellects more ideal than we who do not display logical omniscience, but approach more closely to it than we; and there may also be intellects more ideal than we who are further from coherence than we, but superior in other respects.) A wholly ideal intellect would certainly meet these Bayesian conditions; indeed, a completely ideal intellect, an ideal cognizer, a knower than which none greater can be conceived, would be essentially coherent—such that it is not possible that it fail to be coherent; for such a person would be essentially omniscient (omniscient in every possible world in which it exists). Perhaps we should go still further (following Anselm) and argue that a really ideal intellect would be necessarily coherent, coherent in every possible world; for such an ideal person, we might argue, would be necessarily omniscient—essentially omniscient and necessarily existent. (So if it is possible that there be a completely ideal intellect, it is necessary that there is one.)

But even if this last is too extravagant, the premise that an ideal intellect would indeed be coherent seems correct. What follows, however, for us? Not much, so far as I can see. Of course, it does not follow that we should try hard or even at all to achieve coherence. In an ideal world, the Red Cross does not exist; despite our knowledge of that fact, the Red Cross has nothing to fear from us. In an ideal world there are no lawyers, police, cancer research, or dishwashers; we know better than to try in consequence to eliminate lawyers, cancer research, dishwashers, and the police force. The same goes for intellectual ideality. Trying hard to achieve coherence would deprive us of other goods, indeed, of other epistemic goods. An ideal intellect would be maximally opinionated, and that in a dual sense: it would have opinions on everything and would hold all its opinions to the max. But should I try to do that? Of course not. I can't sensibly try to achieve coherence with respect to noncontingent truths; but even for contingent truths it might be foolish to invest much effort in it. If I spend too much time trying to detect incoherence in my credence function, I may have little time left to learn new truths or appropriately reflect upon those I have already learned.

Accordingly, I am not irrational in any of the senses distinguished previously by virtue of failing to conform to these conditions. I am not means—ends irrational for so failing; nor am I insane; nor do I then fail to be a rational animal; nor do I then act or believe contrary to the deliverances of reason; nor is there a duty (for me) to be coherent. I am not intellectually defective or worse than I ought to be simply because I am nowhere near being an ideal intellect, that is, simply because there are or could be intellects vastly superior to mine. An ideal intellect would know everything—certainly everything in the past, maybe everything in the future as well. It does not follow that I am irrational in any sense or in any way intellectually defective because I know nothing about the language spoken by pre-Celtic Scots. Am I locomotionally deficient by virtue of the fact that I can't run as fast as a cheetah or fly like a falcon (to say nothing of a really ideal locomotor)? Am I deficient with respect to power simply because I am not omnipotent?

True, I am not an ideally rational intellect; but it does not follow that I am (in any sensible sense) irrational. Ideal rationality and irrationality are not complementary properties, even within the class of intellectual beings. Being less than ideally rational is a state one achieves simply by being the sort of intellect for which there are or could be superiors; only God manages to avoid this condition. It does not follow that the rest of us are all irrational, defective in some way. I am irrational if I fail to function properly from a cognitive point of view; but I can function perfectly properly even if I am nowhere near ideality. A Model T Ford can be in perfect running order, even if it can't keep up with a new Thunderbird.

As we have already seen, the states and conditions of an ideal intellect are not by that very fact appropriate ideals or standards for me; and, indeed, my so taking them might be arrogant (as when I condescendingly insist on speaking German with German speakers whose English is much better than my German), or means-ends irrational, or merely ludicrous (as when I persist in vainly trying fancy dunks, because that's how Michael Jordan and Dominique Wilkins do it). I display nothing but hubris in taking for myself goals not suited to my powers, or measuring myself by standards inappropriate for the kind of being I am, even if there are beings—beings superior to me—who do meet these ideals and standards. The argument from ideality therefore fails.

4. The Analogical Argument

According to Ramsey, the probability calculus is no more than an extension to partial beliefs of formal logic. But then coherence is analogous to consistency in full belief: I should strive for the former just as I should for the latter, and failure to be coherent is irrational just as is failure to be consistent. Jeffrey19 and others endorse this idea.

It is by no means obvious, of course, either that I should strive for consistency in every context or that failure to achieve it is irrational. After all, there is the Paradox of the Preface: I write a book named I Believe! reporting therein only what I now fully believe. Being decently modest, I confess in the preface that I also believe that at least one proposition in I Believe! is false (although I have no idea which one[s]). Then my beliefs are inconsistent, in the sense that there is no possible world in which they are all true; but might they not nevertheless be perfectly rational? Given my sorry track record, is it not perfectly sensible and rational for me to suppose that at least one proposition in the book is false? True: I can't rationally entertain any proposition in the book and believe that it is true and furthermore false; but I can rationally believe of each that it is true, while also believing that their conjunction is false. I believe every proposition in the book; I do not believe their conjunction and in fact think it is false;20 and isn't this precisely what rationality, following the evidence, requires?
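
The probability calculus itself bears this out. Suppose, for illustration (the figures are hypothetical), that I Believe! contains 500 claims, that I believe each to degree .99, and that I treat them as roughly independent. Then the credence the calculus assigns their conjunction is about

.99^500 ≈ .0066,

which is next door to disbelief: believing each conjunct while disbelieving the conjunction is just what proportioning belief to the evidence recommends.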

Still, in some way—a way difficult to specify, given the enormous diversity and articulation of the human cognitive design plan—one obviously ought to strive for consistency. In the same way, then, shouldn't we also strive for coherence? Initially, however, there looks to be an absolutely crucial disanalogy between coherence and consistency: for while I can withhold full belief, it looks initially, at any rate, as if I can't withhold partial belief. I have an option with respect to consistency; if I see that A and B are inconsistent, I can withhold full belief with respect to one or the other, thus running no risk of inconsistency. Not so (it initially seems) for incoherence; I can't withhold partial belief; for no matter what I do, I will be apportioning credence between the proposition in question and its denial. To avoid inconsistency, I simply become less opinionated; this won't help with respect to incoherence, and indeed guarantees it in the case of noncontingent propositions.

But perhaps this initial appearance is deceiving: am I really obliged to assign some degree of credence or other to just any proposition I encounter? Can't I withhold credence altogether—even when I entertain the proposition in question? Isn't there a difference between withholding credence with respect to some belief A, on the one hand, and, on the other, affording the same degree of credence to A and its denial? You ask me how likely it seems to me that A is true; all I can say is that it seems at least as likely as 2 + 1 = 4 and no more likely than 2 + 1 = 3; and the same goes for B. It would then be true that they seem about equally likely to me, but also true, perhaps, that I don't assign either any nontrivial credence. If I can in this way withhold credence, then the objection to the analogy does not hold. If I can withhold belief altogether, believing neither A nor its denial to any degree at all, then I have the same option with respect to coherence as I have with respect to consistency: if I see that my beliefs are incoherent, I can locate the problem and withhold credence from the offending beliefs.

But there is still a crucial disanalogy. I can see that my beliefs are incoherent just by noting that there are noncontingent propositions to which I do not assign either the maximum or minimum degrees of belief; if I am obliged to withhold credence in all these cases, I will have an opinion on a noncontingent proposition only if I have absolute certainty with respect to it—that is, believe it or its denial to the max. But surely this is not required by rationality. I believe that arithmetic is incomplete, but I am not absolutely certain of it; does rationality really require that I either assign no credence at all to this proposition, or else believe it to the max? I don't think so. The most we can say, I think, is that there is a certain state of affairs I recognize as valuable (and as an epistemic value at that) such that absence of coherence guarantees (a priori) that the state of affairs in question is not actual. It does not follow, however, that there is a sensible sense in which I am irrational if I am not coherent. It would also be good to have blinding speed—to be able to run as fast as a greyhound, say; but it doesn't follow that I am locomotorily deficient if I have some property that precludes blinding speed. It would be very good to be able to play the piano better than any human being can in fact play; those who can't are not necessarily musically deficient. A coherent philosopher would be a strange and unlovely creature. Philosophical propositions are for the most part noncontingent; so for nearly any philosophical proposition you pick, either she would have no views at all on it—assign it no credence at all—or else she would be absolutely certain of it or of its denial. Hardly an ideal philosophical interlocutor. I conclude that none of the forms of rationality we distinguished requires coherence, and some of them require incoherence.

B. Conditionalization and Probability Kinematics

If coherence is not a sensible ideal for us, neither is changing belief by conditionalization or probability kinematics; furthermore, neither is required for rationality. Before arguing the point, however, I want first to defend Bayesians against a certain complaint. Consider my first credence function—my Ur-function, we might say: according to Bayesians it is privileged in that any deviation from its pristine conditional probabilities is irrational. But why should that credence function be thus exalted? (As the forty-five-year-old dentist said, Why should some sixteen-year-old have the right to decide that I must spend the rest of my life being a dentist?) What is so special about that original Ur-function? According to Bayesians themselves, it is no more rational an sich than any of indefinitely many others.

Now here the following complaint is sometimes lodged against Bayesians.21 Suppose U is the conjunction of propositions to which I afforded full belief in my Ur-function; and let e be the conjunction of propositions on which I have since come to bestow full belief: then there are many coherent states S of belief such that there are coherent credence functions C coinciding with my Ur-function on U, and such that C conditionalized on e yields S. So suppose I change from my present belief state to a new coherent belief state S and do not do so by conditionalization: according to the Bayesian, this is irrational. But, says the complainant, why so? There are any number of coherent credence functions (coinciding with my Ur-function on U) from which S comes by conditionalization on e; any of those functions is as rational as my Ur-function; any could have been my Ur-function; and if one of them had been, then I would have been rational to reach S; so what is irrational about my now changing to some other function in that family? But of course the Bayesian has a reply: what is irrational about that is just that it is changing belief in some way other than by conditionalization or probability kinematics. Although the objection is indeed suggestive, its precise bearing is not quite clear. It seems to beg the question against the Bayesian, refusing to take seriously his suggestion that changing belief in some other way is irrational. Perhaps it is less an argument than a cry of incredulity.

So this isn't a serious objection. But the real question, as it seems to me, is this: why can't I sensibly come to see that my credence function, though coherent, needs to be changed? The simplest cases would again involve noncontingent propositions. Perhaps I believe to the max that even if Socrates had not existed his haecceity would have; I believe this just as firmly as 2 + 1 = 3. You get me to see that even if I am right, it isn't genuinely obvious, not nearly as obvious, anyway, as 2 + 1 = 3. I then pull in my horns and believe it more moderately (not changing belief in any other proposition); have I not in fact done the rational thing? Yet I have changed belief in a way inconsistent with probability kinematics. Furthermore, my original maximal belief (supposing it true) was consistent with my being coherent; my later, less than maximal belief was not; and yet the change from one to the other seems perfectly rational, perfectly sensible, and perhaps even required by rationality. Wouldn't I be irrational if I persisted in my opinionation?

But of course the same can go for credence in contingent propositions. Couldn't a change in my credence function originate from my coming to see that one of my conditional credences P(A|B) is improper—even if I continue to invest the same degree of credence in A and in B? I think more about this proposition B I thought provided excellent if nonconclusive evidence for A; I come to see that its evidential value is not as significant as I believed, although my degree of credence in A and in B does not change. Thus a change in my credence function originates from this change in P(A|B) with no change in P(A) or P(B). And then my credence function changes, but not by way of probability kinematics. Couldn't this nonetheless be a perfectly rational change?
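
A toy joint distribution makes the possibility vivid. In the sketch below (the numbers are hypothetical, chosen only for illustration), the conditional credence P(A|B) drops from .8 to .5 while the marginal credences P(A) and P(B) stay fixed at .5; since kinematics on the partition {B, not-B} preserves the conditional credences on that partition, the change cannot have come about that way:

```python
# A toy model (hypothetical numbers) of the change just described:
# P(A|B) shifts while P(A) and P(B) stay fixed. Probability kinematics
# on the partition {B, not-B} leaves P(A|B) and P(A|not-B) untouched,
# so no kinematical shift on that partition produces this change.

before = {('A', 'B'): 0.40, ('A', '~B'): 0.10,
          ('~A', 'B'): 0.10, ('~A', '~B'): 0.40}
after  = {('A', 'B'): 0.25, ('A', '~B'): 0.25,
          ('~A', 'B'): 0.25, ('~A', '~B'): 0.25}

def p_A(joint):
    return joint[('A', 'B')] + joint[('A', '~B')]

def p_B(joint):
    return joint[('A', 'B')] + joint[('~A', 'B')]

def p_A_given_B(joint):
    return joint[('A', 'B')] / p_B(joint)

for name, joint in (('before', before), ('after', after)):
    print(f"{name}: P(A) = {p_A(joint)}, P(B) = {p_B(joint)}, "
          f"P(A|B) = {p_A_given_B(joint)}")
# before: P(A) = 0.5, P(B) = 0.5, P(A|B) = 0.8
# after:  P(A) = 0.5, P(B) = 0.5, P(A|B) = 0.5
```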

C. Reflection

1. Explanation

Van Fraassen's Reflection (see chapter 6, p. 125)22 is of great interest in itself, and van Fraassen has subtle and fascinating things to say about it; to do justice to it or them would require more space (and insight) than I can command. (I shall try not to make a sow's ear out of a silk purse.)

In “Belief and the Will” he put the principle as follows:

P^a_t(A | P^a_{t+x}(A) = r) = r

where P^a_t is the agent a's credence function at time t, P^a_{t+x} is the agent's credence function at a later time t + x, and P^a_{t+x}(A) = r is the proposition that at t + x, a believes A to degree r.23

I now fully believe that in three weeks I will believe to degree .8, say, that the Athenians won the Peloponnesian War; a little calculation shows that if I conform to Reflection, my present degree of belief in that proposition will also be .8.24 I am now convinced that truth is not merely what my peers will let me get away with saying; to conform to Reflection, I must now also believe that there is no chance at all that by a year from now I will have changed my mind. I now believe to about .9 that nominalism is false; if I conform to Reflection, then I do not now accord .2 credence to the supposition that a year from now I will believe the denial of nominalism to that degree (see n. 35). More generally, an agent satisfies Reflection at t just if her degree of belief in A at t, on the condition or supposition that her belief at the same or later time t + x in A will be r, is r. She violates it if, for example, she does not completely reject the proposition that at some future time she will come to believe a false proposition fully.25
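
The little calculation runs as follows. Let E be the proposition that P^a_{t+x}(A) = .8, where A is the proposition that the Athenians won; by hypothesis I fully believe E, so P^a_t(E) = 1. Then by the law of total probability

P^a_t(A) = P^a_t(A | E) · P^a_t(E) + P^a_t(A | not-E) · P^a_t(not-E) = P^a_t(A | E) = .8,

since the second term vanishes and Reflection fixes P^a_t(A | E) at .8.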

Reflection so stated applies only to ‘sharp’ subjective probabilities; it does not accommodate the case where, for example, you think it at least twice as likely as not that it will rain tomorrow afternoon (you're on a summer hiking trip in the Colorado Rockies), but no more than six times as likely as not, and your opinion is no more definite than that. Then your credence in that proposition is vague. We could represent your credence by an interval of real numbers (rather than a specific number): [.67, .86]. You might think it at least as likely as not that Paul will be more than ten minutes late, but have no more definite opinion; we could then represent your degree of credence for this proposition as the interval [.5, 1]. Many or most of our credences, I suppose, are vague. Partly to accommodate vague belief, van Fraassen recently proposed a more general version of Reflection:

Opinion Reflection Principle. My current opinion about event E must lie in the range spanned by the possible opinions I may come to have about E at later time t, as far as my present opinion is concerned.26

You are about to throw a fair coin; at the moment you think it as likely to come up heads as tails; you also believe that in a few seconds you will believe the proposition that it came up heads either to the maximal degree or to the minimal degree; you satisfy the Opinion Reflection Principle (call it ‘New Reflection’) because your present credence in that proposition lies within the range spanned by the possible opinions you now think you will have in a few seconds. You are beginning a course in philosophy; you are presently rather undecided about nominalism, thinking it no more likely than its denial. The instructor, however, is known to be both a nominalist and a persuasive teacher; you think it quite likely that by the end of the course you will afford a higher degree of credence to it than you do now; in fact, you think that at the end of the course you will believe nominalism at least twice as likely as its denial. Then your current degree of belief in nominalism does not lie within the range of opinion you now think you will have at the end of the course and you violate New Reflection. To conform to it you must now think nominalism is at least twice as likely as its denial (your current degree of credence for nominalism must lie in the interval [.67, 1]).
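
The interval endpoints here and above come from the usual conversion between odds and credence: taking a proposition to be k times as likely as its denial corresponds to a credence of k/(k + 1). ‘At least twice as likely as not’ thus gives a lower bound of 2/3 ≈ .67; ‘no more than six times as likely as not’ gives an upper bound of 6/7 ≈ .86; and where there is no upper bound short of certainty, the interval is [.67, 1].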

Now it looks initially as if van Fraassen means to propose Reflection (both old and new) in the familiar way: as a condition of rationality; you are rational only if you (or your credence function) conform to Reflection.27 So taken, it looks vulnerable to certain kinds of criticism (and indeed has not lacked for critics).28 It is tempting to suggest examples of the following sort: suppose you foresee that you will soon be suffering from some condition interfering with the proper function of your intellectual faculties. You learn that the glass of Kool-Aid you have just drunk was laced with the psychedelic drug LSQ, which you know causes those who drink it to believe very firmly that they can fly;29 or you believe that you will soon begin a drinking spree and will firmly believe, after ten drinks, that you can drive home perfectly safely.30 (As you presently see things, your future credence in the offending propositions lies in a very narrow interval bounded above by 1.) But then conformity to Reflection (both old and new) requires that you endorse those foreseen future opinions: to conform to it you must now believe that you can fly, or that you will be able to drive home perfectly safely after those ten drinks—and that seems ridiculous.

Van Fraassen has a rejoinder—a rejoinder that is initially puzzling. A proposed counterexample, he says, must squeeze between the Scylla of my refusing to recognize that future opinion as genuinely mine, and the Charybdis of my acting in a way inconsistent with my integrity as an epistemic agent. (Integrity seems to seize center stage here, supplanting rationality.) On the one hand, in some of these cases I would not really see those future credences as really mine:

The question is not only what foreseen transitions—ways of changing my mind—I, the subject, classify as pathological or reasonable. The question is also what I am willing to classify as future opinions of mine. When I imagine myself at some future time talking in my sleep, or repeating (with every sign of personal conviction) what the torturer dictates or the hypnotist has planted as post-hypnotic suggestion, am I seeing this as myself expressing my opinions as they are then? I think not.31

This is the death or disability defense.

On the other hand, integrity as an epistemic agent requires that I now commit myself to following epistemic policies that I now stand behind, that I now endorse as rational or right. Suppose a scientist learns that materialism is caused by a certain dietary deficiency, and that for the last year his diet has been deficient in just that way. How should he respond? Not by saying: “I know that materialism is false, of course, but it looks (sadly enough) as if I shall soon believe it is true.” No; what epistemic integrity requires, van Fraassen thinks, is that such a person now say (to himself if not to others) “Forewarned as I am, and as no one before us could be, I shall take good care to change my mind about materialism only for good reasons, and not in an irrational fashion.”32 Perhaps, though, he recognizes that the deficiency will be too much for him; it will obstruct proper function, making it impossible for him to regulate his opinion in a rational way: “In that case,” says van Fraassen,

he will no longer be able to formulate a considered opinion, but be at the mercy of strong impulses which he himself classifies as irrational. He will not be in control. From his present point of view, his future behavior will then be a sad parody of epistemic activity. The death or disability defense comes into play.33

Now how, precisely, shall we understand this defense? The basic claim is that in a proposed counterexample, a proposed alleged rational violation of Reflection, either the agent is not clearly recognizing the future opinion as really hers, or else she is not making the commitment (required by integrity as epistemic agent) to allow her beliefs to change only in ways she now sees as right. (Perhaps this is less a claim than a challenge.)

But the defense is initially baffling. There are two problems: first, can I sensibly claim that those foreseen opinions won't really be mine? Second, how is the commitment that epistemic integrity allegedly requires—how is that commitment relevant to alleged counterexamples to the claim that a rational person conforms to Reflection? Take the first, and consider the materialism case. I bleakly foresee that I won't be able to resist the onslaught any better than anyone else; it is clear to me that I won't be able to make sure that I change my opinion in reasonable ways: can I really claim, with any show of propriety, that the future opinion I sadly foresee really won't be mine? It is hard not to sympathize with Patrick Maher: “A defender of Reflection might try responding to such counterexamples by claiming that the person you would be when drunk is not the same person who is now sober.… But this is a desperate move. Nobody I know gives any real credence to the claim that having 10 drinks, and as a result thinking they can drive safely, would destroy their personal identity.”34 True: we might say “When she gets drunk, she's a wholly different person.” But this is only a manner of speaking. Can I really claim, with any show at all of plausibility, that this materialist I foresee will not be me? It hardly seems so.

And now take the second question about van Fraassen's response. A proposed counterexample that manages to avoid the Scylla presented by the previous considerations is likely to founder in the Charybdis created by the requirement that I must now resolve to change opinion rationally. But how is that commitment so much as relevant? What counterexample candidates does it defeat? How is it part of a defense of Reflection?

So how shall we understand van Fraassen here? Perhaps as follows. His critics,35 I think, have paid insufficient attention to a distinction he clearly thinks crucially important: the distinction between, on the one hand, making an autobiographical statement about your credence function and, on the other, expressing or avowing your opinion. Consider promises, he says. Today I say sincerely, “I promise you a horse”; I am not making an autobiographical statement about what I am presently doing, not reporting on my current activities. (Could a bystander sensibly respond, “You are in error; you aren't really promising him anything at all”?) I am not reporting or commenting on my behavior, but promising you a horse, thus instituting and accepting an obligation to you I didn't have before. Now epistemic judgments (judgments of the sort A seems to me more likely than not, or my personal probability for C is high) resemble promises in that they are not, says van Fraassen, statements of autobiographical fact. I say, “It seems likely to me that he did her wrong”; what I do in making this judgment is less like saying that I was born in Michigan than like promising you a horse. Epistemic judgments are even more like expressions of commitment or intention:

It seems then, that of the alternatives examined, epistemic judgments are most like expressions of intention.… If I express this intention to an audience, then, just as in the case of a promise, I invite them to rely on my integrity and to feel assured that they now have knowledge of a major consideration in all my subsequent deliberation and courses of action.36

Van Fraassen's critics have had little to say about this suggestion; but surely something like it is both true and important. Creeds—the Apostles’ Creed, for example—are also sometimes called ‘confessions’ (The Augsburg Confession, The Belgic Confession); but to confess your faith is not (or not merely) to make an autobiographical statement about the condition of your psyche. (Nor is it to admit, shamefacedly, that unfortunately you do hold the opinions in question.) Creeds typically begin with ‘Credo’ or ‘I believe’: “I believe in God the Father Almighty, maker of heaven and earth.” But if I use this creed to express what I believe, I am not merely reporting the result of self-examination, as I might be if I told you that every now and then I am subject to doubts about one or another element of the creed. I am doing something quite different: something that involves making or renewing a commitment. I am calling to mind an epistemic stance I have taken; I am renewing, restating, retaking that stance. Seriously using a creed, furthermore, is only a special case of a more general phenomenon: stating or expressing one's considered opinion.

According to van Fraassen, therefore, there is an important distinction between autobiographical statement of fact and expression of opinion, epistemic judgment. I say to you: “You tell me that you believe that democracy is a good thing, but given that you believe it's a good thing, what do you think are the chances that it really is a good thing?” Here the right first response, van Fraassen thinks, would not be a factual, autobiographical remark based, perhaps, on the available statistics about the frequency with which you've been wrong in the past, or the frequency with which other people are wrong in similar beliefs, or anything of that sort. The right first response must be either to reject the question as an impertinence or to say something like “Are you serious? They're very good, of course.” This is a synchronic case; but something similar holds in the diachronic case. You ask me, “What is your opinion of the likelihood that democracy is a good thing, given the supposition that tomorrow you'll believe that it is?” Again, the right first response, he thinks, the response required by integrity, is: “That it is very likely, of course.” But this response is not to be thought of as an autobiographical report; it is an epistemic judgment, an expression of opinion, more like a promise or commitment than a factual report.

Suppose we try to get a closer look. To satisfy New Reflection, my current opinion about an event E must lie in the range spanned by the possible opinions I now think I may come to have about E at future time t. But what is it, exactly, to satisfy this condition? The ‘factualist’ says something like this: there are beliefs or opinions, which are mental states of some sort, and they come in degrees. For Paul to satisfy Reflection (in this instance) is for the third-person statement (as made, perhaps, by Eleanor)

Paul's opinion about E lies in the range spanned by the possible opinions he presently thinks he may come to have about E at future time t

to be true. That is a factual statement about Paul. (It “is a proposition.”)37 And according to the factualist, Paul satisfies Reflection if and only if this proposition is true.
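
In symbols (a rough formalization; the notation is mine, not van Fraassen's): where C is the set of degrees of belief in E that Paul now thinks he may come to have at t, the factualist reads the displayed condition as the truth of

\[
P_{\mathrm{now}}(E) \;\in\; \bigl[\min C,\; \max C\bigr],
\]

a garden-variety factual proposition about Paul's present credence function.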

So says the factualist; van Fraassen, however, is no factualist but a voluntarist:

I have argued that it [Reflection] is in fact indefensible if we regard the epistemic judgment… as a statement of autobiographical fact. The principle (Reflection) can be defended, namely as a form of commitment to stand behind one's own commitments, if we give a different, voluntarist interpretation of epistemic judgment. I call it ‘voluntarist,’ because it makes judgment in general, and subjective probability in particular, a matter of cognitive commitment, intention, engagement.38

What is it, then, to satisfy Reflection on a voluntarist reading? I'm not sure, but perhaps the following can bring us to the right neighborhood. When Eleanor uses such sentences as,

Paul's current opinion about E is .6, and he now thinks that at t his opinion about E might lie in the interval [.5, .7],

or

Paul's personal probability for A, on the condition that a week from now his probability for it will be .9, is .9,

she makes a factual, biographical, general or specific remark about Paul and his credence function. However, when Paul uses the first-person analogues of these sentences

My current opinion about E is .6, and I now think that at t my opinion about E might lie in the interval [.5, .7],

or

My personal probability for A, on the condition that a week from now my probability for it will be .9, is .9,

then on the voluntarist view he is not asserting what Eleanor asserts; he is instead expressing his opinion, making an epistemic judgment; this is something like making a promise or avowal or commitment. When he expresses (avows) his opinion, he is committing himself to something or other—perhaps to continuing to hold this opinion unless some reason for no longer holding it shows up, perhaps to changing opinion only in an acceptable, rational, proper way—acceptable and proper, of course, by his lights, and by his present lights. In making epistemic judgments, on the voluntarist view, I commit myself to managing my opinion properly, to changing opinion only in ways consistent with my integrity as a responsible maker of judgments, a rational agent, a player in the judgment and assertion game (which is no game, but a deeply important feature of human life). Whatever precisely it is to satisfy Reflection, on a voluntarist reading, it is at least to be committed in a certain way, to make or be prepared to make commitments of some kind. (Someone who is more sympathetic to dispositions than van Fraassen might think of it as something like being disposed to make, being willing to make, setting oneself to make Reflectionlike expressions of opinion.) It is to be in the state of mind, epistemic and otherwise, that can properly be expressed by epistemic judgments. Thus “Belief and the Will”:

I conclude that my integrity, qua judging agent, requires that, if I am presently asked to express my opinion about whether A will come true, on the supposition that I will think it likely tomorrow morning, I must stand by my own cognitive engagement as much as I must stand by my own expressions of commitment of any sort.39

“If I am presently asked… I must stand by…”; that suggests satisfying Reflection is a matter of being disposed to make Reflectionlike judgments or expressions of opinion (commitments) on the appropriate occasions.

Now return to the two problems with van Fraassen's defense of Reflection (that I can hardly claim that the foreseen materialist won't be me, and that it is hard to see how voluntarism figures in); how does this voluntarism help? Well, in the materialist case I foresee that I won't be able to withstand the onslaughts of disease or dietary deficiency. Those future opinions won't be arrived at by free and rational activity on my part. Of course, I recognize that the future may bring intellectual dysfunction, disability. If it does, I will no longer be able to regulate my opinion properly; it will be out of my control and out of my hands. And to the extent that I foresee that my future opinion will in fact be out of my control, I can't properly make Reflectionlike judgments about it; I can't sensibly make commitments about it, any more than I can make a commitment not to be subject to the ills our flesh is heir to. I don't take responsibility for those opinions; and perhaps in that sense we can say that they aren't really mine.

Integrity requires me to express my commitment to proceed in what I now classify as a rational manner, to stand behind the ways in which I shall revise my values and opinions. It is on this basis that I rely with confidence on my future opinion, to the modest extent of satisfying the Reflection principle. But integrity pertains to how I shall manage what is in my power. My behavior, verbal or otherwise, is no clue to opinions and values when it does not bespeak free, intentional mental activity.40

Reflection is to apply to free, intentional epistemic activity—epistemic activity insofar as it is within my control.

2. Does Rationality Require Reflection?

As I understand it, then, van Fraassen's view is that integrity requires making Reflectionlike commitments; apparent counterexamples will either be cases where the agent is not making such commitments, or cases where she foresees that her future epistemic activity will not be within her control. What shall we say about this view? First, I shall set aside the important question of the degree to which my beliefs are in fact within my power, the degree to which it is within my power to regulate them. This is a difficult and thorny issue;41 but clearly we have some degree of control (even if it is only indirect), and that provides Reflection, construed voluntaristically, with an area of application. We should note next that we have drifted a considerable distance from the typical Bayesian project of proposing conditions or criteria for rationality. What is at issue here is less rationality (at any rate in the senses distinguished at the beginning of this chapter) than integrity—integrity as a rational, judging agent. There is such a thing as integrity as a parent, teacher, judge, expert; there is also integrity as a rational agent, a player in the belief and assertion game. So suppose we think about Reflection and integrity in those terms, and suppose we think about the former voluntaristically. Does rational integrity demand that I conform to Reflection in the sense of being committed to Reflectionlike expressions of opinion (or being in the state properly expressed by such judgments) in the appropriate circumstances? (Of course, it isn't all that easy to see precisely what integrity as a rational agent amounts to, but for the moment we can perhaps make do with the sort of rough-and-ready grasp of it we initially have.)

I think not; but perhaps the way in which it does not need not disturb van Fraassen much. According to van Fraassen, Old Reflection, the version proposed in “Belief and the Will” (see my p. 125) is entailed by New Reflection, the more general version proposed in Ulysses.42 Old Reflection seems to run into trouble, particularly with respect to extreme cases of belief to degrees 1 and 0. Thus (on the assumption that full belief is to be represented by a personal probability of 1), if I conform to this principle, I can't now invest any credence at all in the proposition that at some future time t I will fully believe a false proposition: for example, I can't accord any credence at all to the proposition that the theory of evolution is false, but a year from now I shall fully believe it.43 Similarly, if I now fully believe some proposition, I can accord no credence at all to the proposition that at future time t I shall have less than full belief in that proposition.
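
By way of a minimal sketch of the calculation behind these two claims (the formulation and notation here are mine; I take Old Reflection in its usual conditional form, P(A | Pt(A) = r) = r whenever the conditioning event gets positive probability): the credence I can invest in the proposition that A is false but that at t I shall fully believe it is

\[
P\bigl(\neg A \wedge P_t(A) = 1\bigr) \;=\; P\bigl(\neg A \mid P_t(A) = 1\bigr)\,P\bigl(P_t(A) = 1\bigr) \;=\; (1 - 1)\,P\bigl(P_t(A) = 1\bigr) \;=\; 0.
\]

The second consequence is parallel: if the possible values r of Pt(A) partition the possibilities, Reflection makes my present P(A) a weighted average of those values; so if P(A) = 1, no weight at all can fall on any r less than 1.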

Now perhaps we should ignore these extreme cases on the grounds that the peculiar results we get there are just artifacts of the model; or perhaps we should say that Reflection is not designed to apply to them. A more interesting response would be as follows: these unhappy consequences arise only if we take Reflection in a factualist, autobiographical fashion; they disappear if we take Reflection voluntaristically. But how, exactly, does taking it voluntaristically help? This is not clear to me, and I don't think van Fraassen gives us much help here.

But suppose there is a good answer along these lines. Even so, there remains a problem with Reflection, both Old and New. For consider forgetting. At present I firmly believe (to about .99, say) that I spoke with my friend Fred at about 2:00 P.M. on October 15, 1991. (That was this afternoon, and I remember it clearly.) A little calculation shows that if I conform to Reflection, my present degree of belief in the proposition that at some future time f I will believe this proposition to .1—that is, P(Pf(A) = .1)—can be no greater than .0012. But isn't this disconcerting? The fact is I believe Pf(A) = .1 (f a year from now, say) quite strongly. My present degree of belief in the proposition that a year ago today I was speaking with Fred at 2:00 P.M. would be in the neighborhood of .1 (it seems to me that I speak to him at about that time maybe one out of ten days); and I expect that (assuming I survive the year) my degree of belief in the corresponding proposition a year hence will be about the same.44 Nor would matters be improved by thinking in terms of New Reflection: my present high credence for this proposition is not within the range of opinions I now suppose I may have a year from now. Examples of this kind are legion. I have just looked up your telephone number; I believe it is n; by the day after tomorrow I shall have forgotten what it is and shall bestow on the proposition that it is n a relatively low degree of belief.
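
Here is one way to reconstruct the ‘little calculation’ (a sketch only, on my reconstruction; the exact figure depends on how the inputs are fixed). Write x for my present credence P(Pf(A) = .1). Reflection gives P(A | Pf(A) = .1) = .1, hence P(¬A | Pf(A) = .1) = .9, and so

\[
P(\neg A) \;\ge\; P\bigl(\neg A \mid P_f(A) = .1\bigr)\,x \;=\; .9\,x, \qquad \text{whence} \qquad x \;\le\; \frac{P(\neg A)}{.9}.
\]

With P(A) at about .99, this crude bound already caps x at roughly a hundredth (the figure of .0012 cited above is tighter still); yet the realistic value of x, given what I know about my memory, is close to 1.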

And here is the point: my integrity as a player in the belief-and-assertion game does not require that I do or try to do anything different here. It does not require that I do my best to bring it about that I continue to believe these things to the same degree that I now believe them; that would be at best foolish. I now believe that there are three books on my desk and believe this very firmly; I now firmly believe that there are two squirrels chasing each other in my backyard; I don't expect to believe either of these things at all firmly a couple of weeks from now. (Call the present time ‘t’; I don't now expect that then I will firmly believe that at t there were three books on my desk.) But there is nothing here threatening my integrity. I can't remember all these trivial things, of course; but I am not required by integrity to try to remember each particular one or as many as I can. Doing so, indeed, might clutter my mind in such a way as to inhibit other cognitive functioning that is more important.

A good bit of what van Fraassen says, however, suggests that he doesn't mean to say that Reflection should apply to cases like forgetting. He speaks of ‘considered opinion’; his examples are drawn from cases like that of a weatherman, whose job it is to have reliable views and make reliable pronouncements on the weather, or a scientist who discovers that materialism is due to dietary deficiency. It is in cases like this, cases of what we might call considered opinion, that Reflection (voluntaristically construed) is most plausible. That is, it is in cases of this sort that it is plausible to suppose that integrity as a player in the belief-and-assertion game requires me to make Reflectionlike epistemic judgments. I am now a serious nominalist; if I take part with integrity in the belief-and-assertion practice, I must resolve or be prepared to resolve to see that this opinion changes, so far as in me lies, only for good reason. But the same does not go for forgetting the thousands of trivial things I now rather passively believe. Here my beliefs change, but not because I have acquired a reason for giving them up (not by acquiring new evidence, say); and this is of course in no way a violation of integrity.

Accordingly, suppose we restrict attention to the areas where van Fraassen intends Reflection to apply. Let's suppose further (as seems right to me) that truth lies somewhere in van Fraassen's neighborhood: there is indeed an element of commitment or expression of intention involved in epistemic judgments, and integrity as a participant in the activity of belief and assertion requires that one be prepared to make such Reflectionlike avowals (whatever precisely they are). It's worth noting, first, that this does not, of course, preclude making autobiographical judgments as well. Consider promising again. Although (as van Fraassen points out) ‘I promise you a horse’ does not ordinarily function as a statement of autobiographical fact, I can certainly note that I am promising you a horse even as I do so (to his amazed chagrin, he heard himself promising her a horse!). So why can't I believe, as you clearly can, that I am promising you a horse? And why can't I believe, as you clearly can, the factual proposition that my credence in a given proposition is high? I make whatever commitment integrity requires; I resolve to change belief only in rational ways and for good reason; but can't I also and quite sensibly have a personal probability for the proposition that I will do what I commit myself to doing? I promise to be faithful to you; if you ask me how likely it is that I will be faithful, I can't properly respond by citing statistics about the frequency with which promises of this sort are indeed followed by faithfulness. But can't I nevertheless have an opinion (a subjective probability) on the question how likely it is that I will keep the promise? I resolve to lose 15 pounds in two months; can't I also ask myself how likely it is that I will really stick to the diet this time? And perhaps conclude that it isn't very likely?

Of course, these two (making the resolve or commitment or promise, and also opining that there is a good chance that I will not maintain the resolve or keep the promise) seem to interfere with each other. It isn't simply that the proprieties governing promising require that when asked how likely it is that I will keep the promise I just made, I respond by reaffirming it. That is indeed so; but there is also the fact that my believing it unlikely that I will stay on the diet this time (my having a low subjective probability for this proposition) saps my resolve to do so. My knowledge that I sometimes lie and the consequent belief that I am likely to do so again interfere, in a way, with my resolving henceforth to tell the truth. In the same way, my thinking it likely that I will not change opinion only in ways that I now endorse saps my resolve to change opinion only in those ways.

Still, won't the mature cast of mind contain both elements? Perhaps not both in the same thought, so to speak; perhaps I can't commit myself to really staying on the diet this time, and also, simultaneously, and with full awareness, judge on the basis of past performance that it is unlikely that I will do so. Perhaps I can't resolve to keep my promise to be faithful this time, and in the same breath (the same thought) assign a rather low subjective probability to the proposition that I will indeed do so. In order to do both—in order to commit myself fully but also recognize that there is a good chance I won't do what I commit myself to doing—I must maintain a sort of distance between the two. I do indeed thus commit myself; at the next moment I recall how things have gone in the past and how they are likely to go again, assigning a fairly high probability to the proposition that I will fail again; and I can rapidly switch between these two frames of mind. No doubt, this attitude requires a certain flexibility or subtlety of mind (some might even call it doublemindedness); but isn't this just what integrity and self-knowledge, in the human condition, require? Integrity requires that we resolve and commit ourselves wholeheartedly, unreservedly, to keeping our promises, staying on the diet, fighting our tendencies to self-aggrandizement and to pursuing our own goals and seeking our own welfare even at the expense of others; but self-knowledge requires the reluctant and rueful realization that the chances of failure here are very good indeed. No doubt this doublemindedness is to be regretted from some perspectives; in a perfect world there would be no such thing. But the world is not perfect: given what the world is, such an attitude seems the appropriate one.

As with promises and resolves generally, so with the epistemic resolves and commitments connected with Reflection. Let's agree that van Fraassen is right; integrity requires that I resolve to form and change opinion only in ways I now fully endorse (insofar as this is within my power); and suppose we also agree that the way in which this resolve is to be expressed is by way of making first-person Reflectionlike judgments. As van Fraassen suggests, this is the first thing to do in many situations, as when you are asked what your personal probability for nominalism (or the existence of God, or that there are individual essences) is on the condition that a year from now you will firmly believe it. The first thing to do is to make a Reflectionlike commitment. But there is also a second thing to do. There is that proposition about me, which others can believe or disbelieve; and if they can, why can't I? You can reflect on the question how likely it is that I will keep the promise I have just made; your personal probability for my keeping it (given my track record) may be considerably short of maximal. I too can reflect on that question, almost (but not quite) as I make the promise; and I too may have to conclude, sadly, that the chances of my keeping it are considerably less than maximal; and if I conclude this, then my personal probability for my keeping it (like yours) will be less than maximal. In the same way, perhaps, in making the Reflectionlike judgment, I express a certain state of mind, perhaps the state of being committed to forming and changing opinion only in ways I now endorse; but in almost the same thought I can undergo the chastened realization that things may very well not go this way. Chances are good that I will continue to form too high an opinion of my own powers and accomplishments; I may continue to be sometimes careless and biased in the ways in which I form judgments about others; I may continue to let fatigue, desire for ease and enjoyment, reluctance to put forth the necessary effort, lack of patience, desire for immediate reward and gratification—I may continue to let these prevent me from managing my opinion as well as I should and from coming to learn or see what I would if I did manage it properly.

Van Fraassen is right: integrity as knowing agent, as player in the belief-and-assertion game, requires that I make the sorts of commitment he calls us to. But rationality and self-knowledge may nonetheless require me to violate Reflection, taken factually, taken in the third-person mode, as a proposition about myself. In fact it does so require, and that even when it is free, rational, and considered opinion (not epistemic malfunction, nor the vagaries of inconsequential fact I now know but will have forgotten by tomorrow) that is at issue. If I satisfy Old Reflection, I don't afford any probability at all to the proposition that at future time t I will fully believe what is false; that must be as unlikely, as far as I am concerned, as that I will become a married bachelor. But of course it isn't. You may think the problem here is just the extreme value of full belief; but not so. I now firmly (to about .98, say) believe Serious Actualism: the view that objects have no properties in worlds in which they do not exist (not even nonexistence). A little calculation shows that if I conform to Reflection, my present degree of belief in the proposition that at future time f (a year from now, say) I will believe the denial of this proposition to degree .9—that is, P(Pf(−SA) = .9)—can be no greater than .002 (see n. 42). But that seems to me unrealistically low; I have changed my mind about this before, and obviously it can happen again.

I believe to a much higher degree that it is wrong to try to advance one's career by lying about others. But, like others, I am liable to corruption. I must sadly realize that there is some likelihood that I will be corrupted in this matter, coming to see my own interests as of such overwhelming importance that I endorse any means at all of serving them. The probability, as I see it (based on what I know of myself and others), of my coming to believe that there is nothing wrong with so doing is low, but not nearly as low as Reflection requires. I believe more strongly yet that some things are right and others wrong; but again, must I not realize that there is a nonnegligible probability that 5 years from now I will have become a moral nihilist? I have never seriously considered the thought that the moral distinctions we all think we see are really illusory; what would happen if I were to look into this possibility, seriously studying the works of moral nihilists and antirealists? There is the possibility that I would change my mind, be corrupted; can I claim that the probability of this possibility is vanishingly small? I'm afraid not.

Again, distance is important. When I consider (entertain) the proposition that some actions are genuinely wrong, the idea that I should reject this belief, becoming a moral antirealist or nihilist, seems wholly ridiculous. But when I reflect on the inconstancy we humans display, when I think about what I know about myself and others, I must regretfully admit that the idea, sadly enough, is very far from ridiculous. What rationality and integrity require of me, therefore, is indeed the sort of commitment van Fraassen suggests; but they also require a concomitant and chastened personal probability for the proposition that I will indeed carry out my commitment—a probability that in many cases will violate Reflection.45

By way of conclusion: perhaps the most interesting aspect of van Fraassen's defense of Reflection is his voluntarism, and his suggestion that an epistemic judgment really involves a sort of commitment or resolve to regulate opinion, form and change belief, in ways that you now see as right and proper. This suggestion is fascinating (if a bit obscure); I believe there is something true and important about it. But there are also those propositions about me and my credence function; there are also those propositions to which the factualist draws our attention. Here a mature self-awareness requires that I have opinions; and these opinions, even if I am rational, or rather in particular if I am rational, need not conform to Reflection.45