You are here

Two | Studying Nature

Some concepts have imperialistic tendencies. They colonise areas where they did not originally belong. ‘Information’ is a modern example, so much so that we now describe ourselves as living in an information society, in which information processing is one of the main activities and information one of the main products; some people even look forward to the day when human beings themselves can be understood as little more than bundles of information.

A similar escalation occurred with ‘nature’. I described in the previous chapter how the third meaning of the word, ‘nature’ as the whole physical universe, began to predominate. When combined with the second meaning, ‘nature’ as a force or principle, the way was open to a thoroughgoing naturalism in which the same concept was used for both the active and the passive sides of a single reality. One could speak of Nature doing this or that, as if it were a controlling agency. But what it acted on was also Nature, the whole of the physical world. This close link between what nature is understood to be, and the laws which govern it, now lies at the heart of modern science.

Nor is it just nature in general which is thought to be reducible to laws, and ultimately, it is hoped, to sets of equations. The particular natures of things, their innate properties, nature in the first of its three senses, are also treated as reducible to the lawlike behaviour of their constituent parts. Out goes Aristotle's belief that things are to be understood in terms of their purposes, to be replaced by the more Platonic notion that they can be explained by the exposure of their logical structures, the fundamental idea, nowadays generally conceived as a mathematical pattern, which makes them what they are. Within this amalgamation of the different meanings of ‘nature’ into a comprehensive naturalism, we find what many believe to be the essence of the scientific revolution. The totality of things must be ultimately explicable in terms of mathematical formulae which govern, not only how nature in all its parts behaves, but also fundamentally what nature is. To know the rules, therefore, is potentially to know everything, even the mind of God. Hence the much misunderstood search, popularised by Stephen Hawking, for a ‘theory of everything’. One can't get more imperialistic than that.

There is a nice illustration of imperialist ambition in the title of that most prestigious scientific journal—Nature. The whole gamut of scientific enterprise, from natural history to the most abstruse mathematical formulation of some fundamental theory, is summed up in a single word. I suspect that this represented in some measure the sentiments of the journal's founder, of whom it was said that he ‘exhibited an arrogance which would still have been offensive even had he been the Author of Nature’.

The word ‘science’ itself has also undergone significant changes, thereby contributing to the unspoken assumption that total knowledge was somehow within its grasp. It began as a general word for different kinds of knowledge, particularly of the more theoretical kind. But it was not until the late eighteenth century that it began to acquire a more specialised meaning, linked with experiment, empirical experience, and the methodical study of the natural world. The real revolution did not take place, however, until the mid-nineteenth century, when scientific discoveries began to have a major social impact, and when scientists themselves were struggling to professionalise their work, often against ecclesiastical vested interests. Their struggle was a sub-text of the agenda in the controversies over Darwinism, and accounted for much of T. H. Huxley's spleen against his clerical opponents.1

It comes as something of a shock to realise that the word ‘scientist’ was not coined until 1840. Nevertheless an important step in that direction had been taken in 1831 with the formation of the British Association for the Advancement of Science. Similar associations and institutes were at the same time being created all over Europe, but Britain was unlike other countries in that its Association used the word ‘science’ in the singular to describe its activities, whereas all others, outside the English-speaking world, used the plural ‘sciences’.2 While the difference might seem trivial, its effects have been anything but. The decision was made during a crucial period of transition from a generalised concept of natural philosophy to the designation of distinct scientific disciplines. The word ‘science’ was itself beginning to acquire a narrower meaning. By adopting its use in the singular the British Association in effect drew a sharp line between those disciplines which properly belonged within this newly defined realm of ‘science’, and those which did not. It is a distinction which had further long-term implications, in that only science, in its now restricted sense, was regarded as employing something called ‘the scientific method’, which in turn came to be identified by many people as providing the only rational basis for knowledge. There was some excuse for this attempted take-over bid, in that there were competing imperialisms to counter. More than forty years after the foundation of the British Association it could still be written, not entirely in jest, about the Master of Balliol, a classicist and theologian:

First come I; my name is Jowett.

There's no knowledge but I know it.

I am the Master of this college:

What I don't know isn't knowledge.3

The counter-claim that there is a single alternative source of knowledge—science—is not usually made in jest,4 but has not been without its problems. It has led, for instance, to endless boundary disputes about what is properly scientific, and what is not. Are psychoanalysis and sociology sciences, for example? And does it matter if they are not? And where does the class distinction between science and lesser forms of knowledge leave theology, once ‘the queen of the sciences’? I am not accusing the British Association of deliberate imperialism. Its founders were responding to the need to give scientific work a clearer image and a more professional basis. The result, though, has been to convey the impression that there is a monolithic block of knowledge, all of which fulfils certain rigorous conditions of verifiability, and thus alone has the right to make truth claims. Wise scientists know that this is not true, and are often conspicuous for their intellectual humility. The intelligentsia on the fringes of the scientific world have not always shared the same lack of presumption. Lowes Dickinson, writing in 1905, is fairly typical of what later became known as scientism: ‘Religious truth, like all other truth, is attainable, if at all, only by the method of science.’5 Similarly one of Bertrand Russell's early aims was to build philosophy itself on science.

In the rest of Europe, where the word was used in the plural, the implication was that there can be a range of sciences, appropriate to different aspects of human experience, including those human sciences like history which, by virtue of their subject matter, cannot rely on such methods as controlled and replicable experiment, or mathematical analysis. This need not make them uncritical or irrational. It simply means that their findings cannot have the same degree of exactitude, nor the same rigorousness of proof, as is possible in those sciences whose subject matter is more easily controllable, and less deeply embedded in the ambivalences of ordinary experience.

In the latter half of the nineteenth century the German philosopher Wilhelm Dilthey drew a famous distinction between the natural sciences and the human sciences, the former concerned with explanation, the latter with understanding.6 He defined the human sciences as those studies which cannot exclude the role of the human mind in expressing intentions, generating meaning, and discerning values. Whereas the natural sciences seek for explanation in terms of objective entities and relationships, the human sciences seek for understanding through a process of interpretation within the framework of the total lived experience of being human. As Dilthey saw it, the human sciences should examine the thoughts and utterances and behaviour of human beings in such varied fields as psychology, sociology, history, art, religion and literature, without allowing individual disciplines to obscure the complexity and interconnectedness of actual human existence.

Much has happened since Dilthey's day, not least the growth of the flourishing discipline of hermeneutics, which seeks to act as a bridge, interpreting one age or culture to another. But in the English-speaking world, which still uses ‘science’ in the singular, the idea that there can be a wide range of sciences, each critical in its own proper way, and each with its own validity, has gained little foothold in popular perceptions of what science really is. The so-called hard sciences, those which fulfil the criteria of controlled experiment and mathematical rigour in their narrowest senses, are the stated ideal to which all other scientific enterprises should try to conform. I recall some words by a distinguished neurologist, which some fifty years ago opened my eyes to the narrowness of this ideal.

For too many amongst us… the inadequate conception that ‘science is measurement’ and concerns itself with nothing but the metrical has become a thought-cramping obsession, and the more nearly a scientific paper approximates to a long and bloodless caravan of equations plodding across the desert pages of some journal between small and infrequent oases of words, the more quintessentially scientific it is supposed to be, though not seldom no one can tell—and few are interested to ask—whither in the kingdom of ordered knowledge the caravan is bound. Whatever may be true of the physical sciences, the day is not arrived when all the truths of medicine and biology can be reduced to this bleak residue, or when living nature can be comprehensively expressed in what fashion decrees shall be called a protocol.7

Modern versions of the reductionist dream, as expressed in popular culture, frequently latch onto a mistaken interpretation of the phrase ‘theory of everything’.8 What such a theory actually attempts to do is to unify some highly complex mathematical formulations in the far reaches of theoretical physics, by bringing together the two very different theories—relativity and quantum mechanics. It falls far short of the much more ambitious, and much less plausible, notion that a single theory can explain everything, and that all other sciences ought to be reducible without remainder to physics, and ultimately to mathematics.

A Hierarchy of Sciences

The subject is so crucial to questions about the meaning and scope of the concept of nature, that it is worth looking in a little more detail at how different scientific disciplines can be interrelated, and at the kind of interchanges which can take place between them. The biologist E. O. Wilson at one time elaborated a model of such interchanges, commonly known as the sandwich theory, to illustrate the simplest and most obvious kind of interaction.9 Imagine a hierarchy of disciplines in which each is sandwiched between one above it, which it seeks to reformulate as far as possible in terms of its own laws and concepts, and one below it, whose encroachments it seeks to resist. A typical sandwich might consist of biology, chemistry and physics. Biology, in the upper layer, tries to resist being swallowed by chemistry, in the middle, which in turn resists the complete reduction of its concepts to those of physics, at the bottom of the sandwich. The biologist is perfectly well aware that living things are composed of chemical substances, an ever-increasing number of which can now be identified and their interactions mapped. A chemical structure, the genome, provides the score for this chemical orchestra, and there are also elegant ways in which physical and even mathematical principles can be shown to underlie many biological structures and behaviours. A zebra's stripes, for instance, are neither random, nor each one individually mapped. They develop as a pattern through some mathematically analysable chemical interactions of a kind found also in tigers and fish and the shells of molluscs.10 But a zebra's behaviour cannot be reduced to chemistry or mathematics, and there is no way in which biological phenomena can be understood without introducing concepts like organisation, adaptation, and the much-derided notion of purpose—none of which belongs within chemistry itself. The distribution of stripes may be explicable in terms of mathematics and chemistry, but the origin and purpose of stripes has everything to do with biological evolution and the advantages of camouflage.
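The kind of ‘mathematically analysable chemical interaction’ at work here is usually modelled as a reaction-diffusion system of the sort Alan Turing first proposed: two substances spreading and reacting at different rates, with pattern arising from nothing more than local rules. The sketch below is purely illustrative, a minimal one-dimensional version of one standard model (the Gray-Scott equations), with parameter values chosen only for demonstration; it is not taken from the studies cited above.

```python
# Illustrative sketch (not from the author's sources): a one-dimensional
# Gray-Scott reaction-diffusion system, the kind of "mathematically
# analysable chemical interaction" by which Turing-type models generate
# stripes and spots.
import numpy as np

def gray_scott(n=256, steps=10000, Du=0.16, Dv=0.08, f=0.035, k=0.060):
    """Evolve two interacting 'chemical' concentrations u and v on a ring."""
    u = np.ones(n)
    v = np.zeros(n)
    # Seed a small perturbation; the pattern grows from this, not from a blueprint.
    v[n // 2 - 5 : n // 2 + 5] = 0.5
    u[n // 2 - 5 : n // 2 + 5] = 0.25
    for _ in range(steps):
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # discrete diffusion
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        uvv = u * v * v                                   # the chemical reaction term
        u += Du * lap_u - uvv + f * (1 - u)
        v += Dv * lap_v + uvv - (f + k) * v
    return v

pattern = gray_scott()
# Threshold into light and dark bands: a crude banded pattern emerging
# from nothing but simple, local, deterministic rules.
print("".join("#" if x > 0.2 else "." for x in pattern[::4]))
```

The point of the exercise is the one made above: the bands are neither random nor individually specified, yet nothing in the equations says anything about camouflage.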

By contrast, chemistry's attempt to resist reduction to physics is more a matter of convenience than fundamental principle. In principle it ought to be possible to derive the whole of chemistry from physics. But it would be intolerably complicated to do so and, if it were actually done, chemistry would become so unwieldy as to be useless. It is enough in practice for physics to provide explanations for the most basic chemical concepts, of which valency and atomic weight were among the first examples. There is also the hybrid realm of physical chemistry. Chemistry, in short, can make considerable territorial gains, but not a complete take-over bid, within the realm of biology, and is itself vulnerable in principle, though not in practice, to assimilation by physics. By contrast neither chemistry nor biology can make any direct contribution to physics and mathematics, other than by alerting them to new problems requiring solution. Chaos and complexity theories, for example, have arisen out of this kind of interaction.11 Indeed the most striking scientific advances more often than not take place on the frontiers between established disciplines.

The pattern of potential encroachments from below means that the human sciences, which are near the top of the multi-layered sandwich, since they deal with more complex and highly differentiated subject matter, tend to receive more interpretative insights, which may be constructive or destructive, from the lower, seemingly more fundamental, disciplines than they can contribute to them in return. Theology, for example, has gained immeasurably from, and also in some respects been radically changed by, a better understanding of the physical universe as disclosed by the natural sciences. But it has had to defend itself, among many other things, from assumptions about an all-embracing physical determinism, which is more or less taken for granted as belonging to a proper scientific methodology. The conflict is endemic because the scientific search for efficient causes necessarily presupposes determinism. This presupposition, however, is itself vulnerable to philosophical questioning about how causality can actually be proved, and about the role of the human mind in enabling us to recognise causal connections. Since the days of Hume and Kant these have been familiar philosophical problems,12 but to go further into them would take me too far from my main theme. The usual theological riposte to the kind of determinism which would eliminate human free will treads simpler ground by appealing to the ordinary experience of human life. Actually to believe that all one's thoughts and actions were predetermined would cut the nerve of moral effort. Paradoxically it would also be destructive of critical and creative initiatives, not least those daring new thoughts which led philosophers to ponder such matters as causality in the first place. In the face of this and other kinds of attack from below, theology's repeated defence is to point to the primary awareness of being responsible and creative human beings. It resists the relegation of this awareness to a less significant status and role, on the grounds that it cannot be over-ridden by theoretical assumptions derived from the methodology of the natural sciences, which are themselves one of its products.

Theology's own contribution to the natural sciences has often been to remind them of this wider, more value-orientated context in which their work is done. In so doing it also seeks to provide a rationale for the unity and intelligibility of the natural world.

Sociology occupies a rather ambivalent position in the hierarchy, and is equally adept at seeking to colonise both theology and the natural sciences, on the grounds that they are social constructions.

The general picture is that the traffic downwards, say from a human science to a physical one, consists in showing that the latter's concepts are too narrow to contain all that the human science needs to take account of. New properties, needing new forms of conceptual understanding, emerge at successive levels, and cannot be reduced to what, in a physical sense, may be more fundamental categories. The traffic upwards may in turn shed new light on higher-level phenomena, by revealing how some complex processes may have quite simple explanations. The zebra's stripes, for instance, are not a miracle of planning; they merely follow a formula. In short, the sandwich model of a hierarchy of interrelated sciences, studying different subject matters and using different methods and terminologies, offers a subtle and dynamic understanding of what the sciences are and how they work, and provides a rich many-levelled concept of nature, which wears different faces depending on the different questions put to it. The model, in other words, is prima facie evidence that the concept of nature as studied by science needs much more careful articulation than it usually receives, as also does the concept of science itself. But, like all such models, it is an over-simplification.

The actual interrelationships and cross-overs between the different scientific disciplines are much more complex than up and down movement through a series of layers might suggest. Nevertheless the assumption that science, as it were, bottoms out in physics is deeply entrenched. It is with this assumption in mind that I now return to some of the most fundamental physical concepts themselves, and ask on what it is that, in the end, the whole edifice of the scientific knowledge of nature purports to be resting.

Rational Explanation

Einstein, who more than anyone else in the last century changed our understanding of the ultimate structure of the physical world, never did a single laboratory experiment, except presumably as a student. He did, of course, use experimental results obtained by others, but his great achievements were in the realm of scientific imagination, in thinking the hitherto unthinkable and expressing it in mathematical form. It was said of him that ‘He believed that theories into which facts were later seen to fit were more likely to stand the test of time than theories constructed entirely from experimental evidence.’13 He had an extraordinary feel for the way physics ought to be, and for the ability of pure thought to grasp reality, almost as if he were in the world of the ancient Greeks who had shared the same dream. And he had acquired the mathematical skills which enabled such thoughts to take a precise and calculable form.

In the end, as he well recognised, theories have to be tested by experience, but for a long time the tests of his own theories were minimal, and the results not entirely conclusive. Nor have they remained unchallenged. Recent speculations that the velocity of light might not have been constant during the history of the universe would, if confirmed, destroy the central assumption on which relativity is based.14 But when the two theories of relativity were first published it was the cogency of their mathematics, and their ability to explain what had hitherto been physical anomalies, which gave them their persuasive power. The fact that Einstein went on to squander the rest of his life in trying to find a mathematical reconciliation between relativity and quantum mechanics, is striking evidence of his unyielding belief that mathematics held the ultimate clue to nature. It was typical of him that he should even be instinctively uncomfortable with the idea that there are fundamental constants in nature, such as the velocity of light itself, and he would no doubt have wanted to look beyond the six independent numbers which the present Astronomer Royal believes are both necessary and sufficient to explain why the universe is as it is.15 Natural constants of this latter kind appear to be simply given, and it has not been possible to derive them from existing theories, other than by the extravagant postulate of an infinity of universes in which all possible constants are represented. Einstein complained about the very existence of such irreducible factors:

A theory which in its fundamental equations explicitly contains a constant would have to be somehow constructed of bits and pieces which are logically independent of each other; but I am confident that this world is not such that so ugly a construction is needed for its theoretical comprehension.16

But why should anybody expect mathematics to tell us everything? Why should it loom so large in the physicist's understanding of nature? It is part of the belief that physical reality fulfils the requirements of logic. That is an idea which ought at first sight to be congenial to theologians, as it was to the Greeks. Our God is a God of order. But in practice too rigorous an application of what is currently accepted as logic can lead, and has led, to terrible mistakes. One of the most persistent ideas in the history of Western thought, from Plato to the beginning of the nineteenth century, was that nature ought to be regarded as a ‘great chain of being’.17 It was assumed that in a rational world created by God as an expression of his goodness, all possible forms of the good would need to be made actual. The hallmark of creation, in other words, was fullness, plenitude. Nature was envisaged as a continuum, stretching from the stars to the tiniest creature, in which every logically possible form of being was represented. Things existed because it was logically necessary for them to exist, so that they could fill the place allotted to them within the created order. There was even dispute about whether separate species existed, or whether there was complete unbroken gradation from one living thing to another, with species representing only artificial distinctions, a matter of language rather than of ‘unmovable boundaries set by nature’.18 Locke, for instance, was inclined to the view that the distinctions between species were only a matter of definition—a view which interestingly enough is beginning to creep back in our own day. Thus the search for ‘missing links’, which nowadays we associate with Darwinism, was not at first an evolutionary quest at all. It rested on the assumption that the gaps in natural history must be closed because, as Leibnitz put it, ‘God makes the greatest number of things that he can.’19

The trouble with this idea of plenitude was that it became increasingly difficult to square with actual experience. Despite the valid sense in which evolution can be described as an exploration of all possible ways of being alive, natural selection remains a haphazard contingent process, not a necessary one, and there is no guarantee that, in a world of time and chance, all that might be will be. To start with a supposed theoretical necessity, and to allow it to shape the expectations with which nature was studied, was a recipe for eventual disillusionment. Yet it went on for centuries, and not least during the Age of Reason. It leaves us with the question of how far we in our day are justified in subscribing to theoretical formulations about the ultimate nature of things, expressed in mathematics so complex that few people in the world can understand it. Are we in danger of falling into the same trap?

Some years ago I received a long series of letters from a retired professor of physics who believed passionately that Einstein was wrong about relativity, and that the world was in danger if it was so foolish as to persist in developing techniques which depended on his being right. Why he felt an archbishop might help him was never clear to me, but it was obvious that he had been ostracised by all his scientific colleagues, even though most of them could not provide a satisfactory answer to the logical problem about time travel which was perplexing him.20

His question was this. ‘According to the special theory of relativity, two similar clocks, A and B, which are in uniform relative motion and in which no other differences exist of which the theory takes any account, work at different rates. The situation is therefore entirely symmetrical, from which it follows that if A works faster than B, B must work faster than A. Since this is impossible, the theory must be false.’

This looks like a straightforward logical argument, and it was obvious that a great many of the distinguished scientists to whom it was sent were nonplussed by it. Nevertheless they took Einstein on trust, and relied on their more mathematical colleagues to demonstrate a different kind of logic which is beyond the grasp of most ordinary mortals. The controversy is now forgotten and my correspondent is dead, but I mention the incident to illustrate how far modern physics has taken us into a realm where ordinary perceptions of what is logical fail us, where it is no longer possible to picture the reality we are dealing with, and where theoretical physicists are entirely dependent on highly sophisticated mathematical tools and abstract concepts to make what sense they can of the empirical data.
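For what it is worth, the standard textbook resolution can be stated briefly; the following is a conventional summary, not anything my correspondent or his critics wrote.

```latex
% A clock moving at speed v relative to an observer is measured by that
% observer to run slow by the Lorentz factor:
\[
\Delta t_{\text{measured}} = \gamma\,\Delta\tau, \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \;\ge\; 1 .
\]
% The symmetry is genuine: A measures B to run slow, and B measures A to
% run slow. No contradiction follows, because the two measurements compare
% different pairs of events, and observers in relative motion do not share
% a single standard of simultaneity. Only when one clock changes frames
% (turns round and comes back) can the two readings be compared directly,
% and that change of frame breaks the symmetry.
```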

The Key Role of Mathematics

Perhaps my question about why ultimate explanations are to be sought in mathematics needs to be reshaped. How is it that this particular form of abstract thinking has been so successful in matching the results of experiment? Part of the reason must be that mathematical techniques were developed specifically in order to tackle the precise and complex calculations scientists needed to make. There is an element of circularity here. Newton, for instance, developed a form of calculus when he was in his early twenties in order to describe the orbit of the moon. Other seventeenth-century scientists were doing the same in trying to tackle various mechanical problems. Einstein, by contrast, took his mathematics off the peg. It just so happened that, some forty years before he needed it, the kind of geometry required by the General Theory of Relativity had been devised as a purely theoretical exercise by a young German mathematician, Bernhard Riemann, who had been exploring the geometry of curved surfaces. But in the second half of Einstein's life, when he was obsessed with repeated attempts to unify physics by bringing together relativity and quantum mechanics, the mathematics was not there. Nor could he invent it.
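The geometry in question can at least be gestured at in a single formula. What follows is a standard statement, not anything peculiar to Einstein's own papers: Riemann's insight was that the Pythagorean rule for distance can be generalised so that the coefficients vary from point to point.

```latex
% Line element of a curved (Riemannian) space: distance is still a
% quadratic form in the coordinate differences, but the coefficients
% g_{\mu\nu} now depend on where you are.
\[
ds^{2} = \sum_{\mu,\nu} g_{\mu\nu}(x)\, dx^{\mu}\, dx^{\nu}
\]
% General relativity reinterprets these coefficients as the gravitational
% field itself: matter tells g_{\mu\nu} how to curve, and g_{\mu\nu} tells
% matter how to move.
```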

He was not alone in having come to believe that one should look to mathematics to supply answers to the most fundamental questions about the nature of physical reality. Sir Arthur Eddington, the most famous astrophysicist of his day, spent years trying to derive the basic physical constants from abstract theory by performing what many of his colleagues dismissed as ‘arithmetical gymnastics’.21 He was duly ridiculed for it. Yet both men in their way were feeling their way back to the ancient Greek view of a world based on rational necessity. It was as if mathematics were felt somehow to exist independently of the minds of those who construct theories. From this perspective mathematical truths are simply there, waiting to be discovered rather than invented, because they are believed ultimately to represent the way things are.

Invented or discovered? A tool devised to fit the circumstances of the world as perceived by physicists, or the discovery of sets of logical relationships which in some sense really exist? The debate has continued, and the answer seems to be—a bit of both. From a biological perspective one of the oddities in the story of the triumph of mathematics is that the logical operations of the human mind seem so well adapted to deal with complexities which were not even remotely in view during the hundreds of thousands of years in which our mental capacities were evolving. Perhaps the answer is that the capacity to use highly abstract concepts in making sense of experience developed hand in hand with the need to do so. Each may have prompted and promoted the other. Sometimes they have been out of step, as when speculative theories have raced ahead of empirical discovery, or when discoveries have been missed because their implications were too complex to handle.

How did it all begin? There is evidence that the practice of counting can precede the concept of number. A West African people, for instance, count by using different body parts, just as young children use fingers. They also use shortcuts, making two hands the equivalent of ten fingers without counting the fingers individually. But the whole process appears to be an entirely physical operation, unrelated to the patterns of thought which, among numerate people, can be substituted for the actual use of fingers and hands.22 To have the concept of number requires an ability to abstract and universalise, capacities which there may have been no need to develop in isolated, non-literate cultures, where all measurements were approximate, and only the simplest business transactions took place. Forms of social life are perfectly possible in which actions and stories convey all that needs to be communicated.

In Chinese imperial culture there was a form of counting which entailed putting rods into boxes, another primarily physical action, but one with much greater potential for being conceptualised. In fact as early as the thirteenth century B.C., there are descriptions of boxes arranged in groups of ten, with different positions of the groups representing different powers of ten—compelling evidence that the Chinese already used quite a sophisticated decimal system. The system could also allow for some boxes to be empty, thus implicitly suggesting the concept of zero. The Babylonians also had a symbol signifying an empty space. There is uncertainty, though, as to whether it was the Chinese, the Indians following the Babylonian example, or even the Indo-chinese, who actually invented the concept of zero as a number, one of the crucial insights opening the way to modern mathematics.23 But there is no doubt that the Chinese made astonishing mathematical advances, many of which undergirded their equally remarkable technology. However, for reasons which may have had something to do with their excessively bureaucratic culture, they failed to exploit these advances as fully as they might have done in the construction of scientific theory. The result was that the science and the technology gradually faded away and were forgotten, until unearthed in the twentieth century by the monumental work of Joseph Needham.24 It was in the West, partly with Arab help, that the application of mathematics to the systematic study of nature was to generate theoretical foundations for science which had previously been lacking.

The Pythagorean philosophers had been able to take the first tentative steps towards a scientific approach to the natural world because, as mentioned in the previous chapter, they had discovered a structural principle in music that every string player has to learn—number determines the quality of sound. As the name Pythagoras reminds us, they were renowned for their geometry, but they could not pursue their arithmetical insights much further. Geometry is possible without the concept of zero, but arithmetic soon runs into insoluble contradictions. It also quickly becomes clear that simple numerical ratios and geometrical forms are not in themselves adequate to account for the actual complexity of the world. Nor, even in the realm of sound, is mathematics of much help in distinguishing between music and noise, not least because there is an inevitably subjective element in the distinction. Furthermore, numbers in classical Greece and Rome were extremely difficult to handle as long as each number was separately represented by a letter of the alphabet. Anyone who has tried to do long division using Roman numerals will know the frustrations. And the Greek system was even worse. Complicated calculations only became possible when the Arabs introduced Europe to our present system of writing numerals, according to which zero is a number and the value of each numeral depends on its position within the number. Thus a ‘1’ can stand for ten, a hundred, or a thousand, and so on, depending on what follows it. This invention, or rather reinvention, of a notation whereby number depends on position, two thousand years after the Chinese had done the same thing with their boxes, was one of the crucial turning points in the invention of the modern world. It meant that numbers could at last be manipulated in ways which began to match the actual complexity of experience.
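A small and admittedly anachronistic sketch may make the point clearer; the function below is hypothetical, not something from the historical record, but it shows why place-value notation turns calculation into routine.

```python
# Illustrative sketch: in place-value notation the worth of a numeral is
# fixed by its position, so evaluating (or manipulating) a number becomes
# a purely mechanical, repeatable procedure.
def positional_value(digits, base=10):
    """Evaluate a sequence of digits by place value, e.g. [1, 0, 0] -> 100."""
    value = 0
    for d in digits:
        value = value * base + d   # each step shifts the earlier digits one place left
    return value

print(positional_value([1]))        # 1
print(positional_value([1, 0]))     # 10: the same '1' now stands for ten
print(positional_value([1, 0, 0]))  # 100: and here for a hundred
```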

In the absence of an effective numerical system the ancients focused mainly on geometry. It seems likely that the abstract concept of ‘form’ grew from this concentration on geometrical shape, and so gradually became for Aristotle the main word for describing the nature of things. We thus reach the curious paradox that though he wrote about the philosophy of numbers, he did not actually use them as a basis for explanation. It was only with Galileo some 2000 years later that mathematics began to occupy its key role in physics, and so started the process of rapid mutual growth which has now put them both beyond the reach of all but a select few. But even he was still heavily dependent on geometry. ‘Philosophy,’ he wrote

is written in that vast book which stands ever open before our eyes, I mean the universe; but it cannot be read until we have learnt the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word.25

In the years which followed, mathematics evolved to match the emerging complexity of nature. It has also transcended nature in its ability to bring logic and order at a fundamental level, not only to our understanding of the physical universe, but to the exploration of possible worlds, and even to the notion of an infinity of universes. Has this been a story of discovery or invention? The difference may seem hardly to matter, in that the confidence to transcend hands-on experience has in practice arisen out of scientific successes on a much more mundane level. Mathematics has been a highly successful instrument for making and manipulating significant measurements on earth, because this is what numbers were invented to do. The application of it to the exploration of the stars, to the intimate probing of the structure of matter itself, and even to the statistical analysis of human behaviour, has at every stage seemed like a reasonable extrapolation of insights and techniques which have been shown to work. But the more comprehensive its sway, and the more complex the abstractions to which it leads us, the sharper the questions become as to whether this is all just a wonderful human invention, or whether mathematics itself somehow discloses to us the true nature of reality.26

Before considering the further implications of this question, it will be useful to look more closely at the kind of explanations of the physical world which mathematics has made possible.

An Intelligible Universe

The scientific study of nature has to a large extent relied on the exploitation of a simple principle which is common both to mathematics and to those natural sciences which most depend on it. It is the principle that a comparatively small number of basic constituents can in combination give rise to an enormous variety of possible outcomes.27 Twenty-six letters of the alphabet are more than enough for all the books which will ever be written. Start with the concept of number and a few logical rules, and the whole of mathematics can unfold itself. Start with a few basic particles, nowadays further reduced in number by being defined in terms of a series of symmetrical relationships, and in theory it ought to be possible to explain the entire physical universe. We now know that all life depends on a genetic code made from different arrangements of only four bases, the chemical ‘letters’ of DNA. The greatest compression of all occurs in information technology, which has reached the theoretical limit. All information can be coded in terms of just two symbols, 0 or 1, open or closed, yes or no.
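The arithmetic behind this principle is elementary, and a rough worked illustration shows how fast the possibilities multiply: an alphabet of k symbols yields k to the power n possible strings of length n.

```latex
% Exponential growth of combinations: k symbols, strings of length n.
\[
\underbrace{26^{5} = 11{,}881{,}376}_{\text{five-letter strings}}
\qquad
\underbrace{4^{3} = 64}_{\text{three-base codons of DNA}}
\qquad
\underbrace{2^{8} = 256}_{\text{values of a single byte}}
\]
```

Sixty-four three-letter ‘words’ built from four bases are more than enough to specify the twenty amino acids from which proteins are assembled; two symbols suffice for everything a computer stores.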

Whether such compressed information is adequate to convey all that human beings can know, is a different question, to which I shall be returning later. Aristotle was clear that it could not, which is why he refused to accept the reductionism entailed in Democritus's atomic theory, and in consequence had to postulate a particular form determining the nature of each class of object he studied. As he saw it, reality in botany and biology subsisted in actual plants and animals, not in intellectual abstractions. Though based on a sound instinct, at the time this proved to be a false choice. Democritus had in principle grasped the idea that limitless differences could be explained in terms of various combinations of a limited number of fixed entities, whereas Aristotle's concentration on organic models blocked the way to further scientific progress. Darwin, by contrast, managed to bring coherence to biology, grappling with the amazing diversity of living things, through an insight which could be stated in a single sentence.

The rules by which complex forms can be generated may differ in kind. If they follow a simple linear pattern, as when B invariably follows A and is itself always followed by C, then no matter how complex the outcome, this can, like the movements of the planets, be predicted provided that the formula is known. But when the rules do not follow a linear pattern, when C, following on from A and B, also has repercussions on A, then things are not so easy, and reliable prediction may be impossible. Non-linear equations may have multiple solutions. What this means in practice is that when the interactions between the different parts of a system are dependent on one another, the outcome can be extremely sensitive to small changes in the initial conditions. This is the best-known implication of what is now called Chaos Theory, and it has become apparent that the world is full of such systems, the weather being a prime example.28 Though the powers of prediction in such systems are always likely to be limited, this does not mean that what happens in them is indeterminate. Even the unpredictable should in principle be traceable back after the event to identifiable causes, so the belief that it is possible to compress immense complexity into a few basic rules is not undermined by events which follow a non-linear pattern. Indeed it is this belief in the ultimate reducibility of even the most complex phenomena to an ordered mathematical pattern that has been the inspiration for research in fields which at first sight seemed unpromising. But, as I have argued earlier, with reference to the sandwich model, the strict reductionist approach is not the only, and not always the most appropriate, way of understanding what is going on. Nor does it follow from the successes of reductionist science that the rules all have to be mathematical, as Darwin himself demonstrated. But it is worth noting how confidence in Darwinism greatly increased as, in the 1930s, evolutionary theory was given a statistical basis.29
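The simplest non-linear rule of all, the logistic map, shows what this sensitivity looks like in practice. The sketch below is illustrative only and is not one of the examples cited above: the rule is completely deterministic, yet two starting values differing in the ninth decimal place soon go their separate ways.

```python
# Illustrative sketch: the logistic map x -> r*x*(1 - x) is a non-linear rule
# in which the outcome feeds back into the input. In the chaotic regime
# (r = 4 here) two starting points that differ by one part in a billion
# soon follow completely different trajectories, even though every step
# is perfectly deterministic.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)   # differs in the ninth decimal place
for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}:  {a[step]:.6f}  vs  {b[step]:.6f}")
```

Deterministic yet unpredictable in practice: exactly the combination described above.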

These caveats apart, the general picture of the physical world which has gained credence in our day is that it is composed of a relatively small number of kinds of discrete constituent, whose behaviour can be represented by mathematical equations. Every electron can be treated as being identical with every other electron, every neutron identical with every other neutron, and so on down the scale. The fact that, according to quantum theory, even energy itself comes in discrete packets provides a reason why the properties of atoms are not infinitely variable. Their components can only exist in a limited number of states. If this were not so, science would be impossible because there would be no fixed conceptual base on which to build. As it is, physics can rest on the belief that the fundamental constituents of matter are not infinitely variable, even though nature at this quantum level is not picturable by us and, though mathematically comprehensible, seems to defy ordinary logic. In trying to picture it we run into the same problem as that faced by the earliest Greek philosophers. The ultimate stuff of reality cannot be like any one part of our ordinary experience—air or fire or water or any other picturable phenomenon, as was once supposed—or we would be plunged into a vicious circle of explanations, all of them depending on each other.
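The point can be illustrated with the most familiar example; what follows is a standard result, not part of the argument above. The electron bound in a hydrogen atom cannot take just any energy: it is confined to a discrete ladder of levels, which is why every hydrogen atom behaves like every other.

```latex
% Allowed energies of the electron in a hydrogen atom (the Bohr formula,
% confirmed by full quantum mechanics): only a discrete set of values.
\[
E_{n} = -\,\frac{13.6\ \text{eV}}{n^{2}}, \qquad n = 1, 2, 3, \dots
\]
% Because only these levels are available, the spectrum of hydrogen is the
% same everywhere, and atoms have stable, reproducible properties.
```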

Nevertheless, despite the perplexities about ultimate reality, a system of this kind in which a small range of identical but numerous components interact according to fixed rules, is in principle intelligible to us, even though it may not be fully describable, nor its future state fully predictable. It is intelligible, because it allows us to trace fundamental connections between definable events, and thus to understand mathematically how and why things happen; this is the task of the physical sciences. I have been concerned to make the point, though, that the natural world of ordinary experience is not fully describable in these abstract terms. The emergence of new properties at different levels of complexity shows the need for an understanding of each level in terms appropriate to it, and this is why there has to be a hierarchy of sciences. Events at every level are less than fully predictable because of the complexity of the interactions. But there is the further reason, familiar in quantum theory, that to observe the fundamental operations of nature is also to change them. In trying to elucidate nature, we cannot in the end leave ourselves as observers out of the equation. The natural world contains minds which have developed hitherto undreamt-of mathematical powers whereby to compress huge quantities of data into formulae. But in doing so we also become aware of limits to our understanding, some of which are inherent in the basic constituents of nature itself, and some of which are inherent in our own role in studying it.

It is thus not only in quantum theory that we meet, as it were, a barrier of uncertainty and indescribability. At other levels too in the scientific hierarchy there are limitations on what can be known. In biology, for instance, it is not possible to overcome individual unpredictability. Populations of animals can be treated statistically with a fair degree of precision. But individuals cannot be known thoroughly enough for any observer to be certain about their exact behaviour. If they were to be studied with the necessary thoroughness, that in itself would be likely to change what they did. These uncertainties created by the role of the observer are even more evident in a subject like history. The writing of history is a matter of selection and construction; selection because the data are potentially inexhaustible, and construction because to make sense of them the historian has to have some preliminary aims and organising concepts. As for theology, it has to start with the daunting recognition that its subject matter ultimately surpasses all our human powers of knowing. All knowledge of God is to some extent constructed, conditioned knowledge; constructed in that to communicate at all about God has required the development of a language of symbols, metaphors and analogies, and conditioned because this process of forging an appropriate language can only take place within particular historical circumstances, and be embodied in particular stories and statements. Nevertheless the theologian, while admitting the difficulties, may validly ask whether any other form of knowledge entirely escapes similar constraints.

With this thought in mind we need to look again at the seemingly unanswerable questions about the nature of mathematics itself.

Limits of Intelligibility

Earlier, when asking whether mathematics is an invention or a discovery, I suggested that it might be a bit of both. The answer makes a difference to our understanding of nature because, if the deep structure of things is only comprehensible in mathematical terms, we need to know whether, and to what degree, our understanding has been achieved through concepts we have ourselves invented and imposed on nature. This is not quite the Kantian question about knowledge, but is related to it. Kant believed that mathematics could be synthesised by rational thought on the basis of intuitions about quantity (arithmetic) and space (geometry). These intuitions, however, soon ran into trouble, and only a few decades were to pass before geometers were beginning to make the intuition of space look decidedly arbitrary. Early in the nineteenth century the first explorations of the possibility of non-Euclidean geometry demonstrated that different concepts of space were conceivable. In the following century Einstein completed the process of destroying Kant's intuition with his non-Euclidean and highly counter-intuitive concept of what spatial reality is like.

The belief that arithmetic could be synthesised from a bare intuition of quantity fared no better. Bertrand Russell was responsible for the most famous and strenuous effort to go beyond Kant by attempting to derive mathematics entirely from logic.30 It failed and, as was seen later, necessarily failed, when it was demonstrated by Gödel's famous incompleteness theorem that there can be no consistent formal system, rich enough to express ordinary arithmetic, in which every mathematical truth is provable.31
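Gödel's result can be stated a little more exactly, in the standard modern form; what follows is a summary, not a quotation from Gödel himself.

```latex
% First incompleteness theorem (informal statement): for any consistent,
% effectively axiomatised formal system F strong enough to express
% elementary arithmetic, there is a sentence G_F such that
\[
F \nvdash G_{F} \qquad\text{and}\qquad F \nvdash \lnot G_{F},
\]
% yet G_F is true on the intended interpretation. No such system, then,
% can prove every arithmetical truth; this is why the logicist programme
% of deriving the whole of mathematics from logic could not succeed.
```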

If then pure logic cannot give us mathematics, what can? The suggestion that in some Platonic sense mathematical reality does not need to be grounded in our own powers of deduction, but exists apart from our knowledge of it, has its attractions, not least to those who would locate it somehow in the mind of God. Certainly there are features of mathematics which support the view that it grasps hold of something out there waiting to be discovered.32 The beauty of much of it, the endless fascination of such ineluctable facts as the distribution and properties of prime numbers, and the intuitive grasp by great mathematicians of theorems which cannot at the time be proved, point to a kind of reality independent of us. Public interest in the recent proof of Fermat's Last Theorem, for instance, after more than 350 years of trying, has focused on the mystery of how Fermat could have claimed a discovery, when the actual demonstration of it had to make use of methods which could not possibly have been available to him at the time.33 Did he just make a lucky guess? Or did he have the kind of feel for mathematical reality that Einstein had for physics?

It is clear, too, from what I have said earlier about counting, that mathematics has empirical origins, and its symbiotic relationship with modern science has tended to maintain this sense of rootedness in reality. There are difficulties, though, when both of them transcend the boundaries of what the human mind can envisage. If, as is now claimed, theories about the ultimate constitution of matter require a mathematics capable of handling anything between six and twenty-five dimensions of space, it stretches credibility to suppose that appropriate mathematical constructions of this inconceivable complexity were somehow pre-existent, simply waiting to be discovered.34 It is equally hard to know what existence in twenty-five dimensions could possibly mean outside the mathematical clothes in which the idea has been dressed. Multiple dimensions, alongside many other esoteric mathematical constructions, look much more like convenient fictions, invented for particular purposes, than like pre-existing truths written somehow into the constitution of the universe. It may transpire that these hugely complex mathematical constructions, which at present seem to represent the furthest that theoretical physicists can take us in the attempt to grasp the ultimate secrets of nature, are no more than the products of human imagination, finely tuned to the questions physicists have chosen to ask. Or they may be claimed as real discoveries on the grounds that they represent the sort of answers which nature requires us to give. The whole subject is immensely difficult, both scientifically and philosophically. In philosophical terms it is part of the longstanding and highly nuanced controversy between realists and structuralists, and I confess to being a long way out of my depth. It is usually the case, however, that where such longstanding controversies exist, both sides are eventually found to have been expressing some part of the truth. In such a baffling context a comment by the mathematician Hans-Christian Reichel seems apposite: ‘“What do we learn from mathematics?” is just as unanswerable as “What do we learn from Tolstoy's War and Peace?”’35

War and Peace is a work of fiction, a marvellous creation of the human mind. It is also rooted in real events, and would have lost a major part of its significance had it been set in Ruritania rather than in Russia. Like all great works of art it communicates truthfully on a variety of different levels, but it can only do so by the personal involvement of the reader in the story. What we learn from it depends on who we are, and what we ask of it. The suggestion that this might also be the case in the scientific study of nature is likely to meet with resistance from most scientists. Is it not one of the distinguishing marks of good science that the scientist should be invisible, an austerely objective operator, in principle replaceable by any other operator with similar skills? The repellently detached style in which scientific papers are usually written bears witness to this distancing of the person from the results of their research, thus hiding the emotional reality and general messiness of actual scientific thinking and experimentation. A caravan of equations plodding across an arid desert has no need to display a human face. A high level of abstraction may be a mark of scientific excellence, but it leaves out half the picture. It is in highlighting the other half, the personal and social factors, that sociologists of science have brought a different perspective to bear on scientific work, disclosing insights which have not been wholly welcomed within the scientific community.

A Sociological Perspective

The suggestion that research in the natural sciences may not be as objective, rational, and straightforwardly descriptive of reality as it claims to be, frequently meets with outright rejection.36 The sociologist would claim in reply to be doing no more than subjecting the processes and circumstances of scientific work itself to the same kinds of observation, theorising, and critical scrutiny, which have been responsible for the stunning success of the natural sciences in the past three or four centuries. The subject matter may be more difficult, and the concepts less sharply defined, but the need for criticism is no less great.

It is this sort of criticism—whether by others or better still by scientists themselves—focused on the human context of research, which may help to throw light on some of the problems at which I have been hinting. In particular there is the notion of fundamental explanation, in which there seems to be an uncomfortable element of circularity. We think we know what rationality is on the basis of our publicly shared experience of the ordinary world. Mathematics, which is at least in some respects a human construction, pushes rationality to extreme limits in trying to lay bare the fundamental structure of nature, even when its conclusions are unpicturable and seem to defy the logic of the ordinary world on which its analysis has been based. Knowing that we ourselves are part of nature, with minds which have evolved for quite other purposes than pursuing abstract logic, how can we be confident in the powers of reason as we approach these seemingly illogical limits? Furthermore our concepts of what is rational, rooted in ordinary experience, have also been socially conditioned. We are therefore predisposed to accept the practical results of science because they confirm our experience of the kind of world we have now come to expect. But who are ‘we’ who do these things and adopt these attitudes? We are members of a particular culture which, over many centuries, has learnt to rely on scientific knowledge as the most effective basis for the understanding and successful manipulation of the natural world. Science is trusted because it works, and because it can in the long run create consensus. Though intrinsically its successes may seem to depend on supposedly crucial experiments, in practice it depends on the general acceptance of its results by those fellow-specialists who understand them. In fact it is an essentially social activity.

As such it falls within the scope of sociology, as well as exemplifying one of the defining characteristics of post-modern culture, namely an awareness of the conditioned nature of all truth-claims. The mind which seizes on an apparently obvious connection has itself been conditioned in what to expect. Each of us belongs to our time and place, with all the limitations that entails. In the seventeenth century, for example, explanations of natural phenomena were sought entirely in terms of mechanical causes, however unlikely the mechanism proposed, because this is what Cartesian philosophy prescribed and what educated society was led to expect. Nerves were thought to be pipes, down which liquid flowed to contract the muscles. However ridiculous the idea might seem to us now, it was a reasonable hypothesis in its own day, given the conceptual limitations of a science in which the notion of irritable tissue had not been invented, and electricity was unknown. It would be surprising if we in the twenty-first century were not blinkered by our own set of conceptual limitations. It may be that such limitations will only be exposed by the kind of cross-disciplinary discussion which enables unconscious presuppositions and cultural conditioning to be brought out into the open. Such periodic reassessments are the inevitable consequence of having acquired minds disposed to be reflexively critical, and sociology is potentially one of the useful tools for doing just this, by alerting us to the influence of our social environment.

Why then is it regarded with such suspicion? If it is true that scientists are creatures of their own time, and that scientific theories are social constructions from concepts available in a particular culture, what grounds are there for objecting? One reason is that some sociological critics of the natural sciences have tended to go so far in relativising all truth claims that the choices between them begin to look arbitrary. Thomas Kuhn's ground-breaking work on ‘scientific revolutions’ with its claim that there may be ‘incommensurable paradigms’ has helped to encourage the view that there may be no rational way of deciding between different world views.37 This was surely not Kuhn's intention, but it has been grist to the mill for postmodernists and others, including unfortunately some religious apologists.38 The temptation to define incompatible belief systems as ‘incommensurable paradigms’, and yet as all capable of being regarded as true, has all too often proved irresistible.

Sociological analysis of particular research establishments has usefully drawn attention to the significance of local factors and personalities in research priorities, styles of work, and the needs and assumptions of individual researchers.39 It can also give substance to legitimate fears about the increasing dependence of such establishments on commercial interests, and the difficulties of adequate peer review under the conditions of commercial confidentiality. It has proved easy, however, to exaggerate the importance of such insights by inflating them into the much more dubious claim that science is essentially local, and locally conditioned, and only becomes universal by negotiation with the scientific community as a whole.

In both examples the identification of a strong element of social construction within scientific work is frequently dismissed by natural scientists as a threat to what they conceive their work to be really about. But this is to disregard some useful insights into the necessarily conditioned character of all human knowledge, by confusing them with judgements about bias and limitations in scientific method itself. As the author of a recent study of stages in the development of thinking has put it:

The long history of philosophical attempts to figure out what good science should be, turned out to be a history of increasing awareness of the subjective element in science. Where science long has tried to pin down reality as the objective ‘known’, the philosophy of science has shown the presence of the subjective ‘knower’ more and more clearly in all supposedly objective knowledge. Historians of science have joined in this exposé. So have those doing sociology of knowledge. Unfortunately, philosophers, historians, and sociologists have sometimes concluded that because science is a human construction, it cannot have universal validity and reliability.40

But how, when the subjective element is properly recognised, are universal validity and reliability to be defended, except on the basis I have already described? The point is that in practice scientific methods work. It is possible, in fact, to give a Darwinian account of science in which those theories survive which provide the best explanations and the best predictive success.41 Just as certain organisms succeed because they are adapted to their environment, so certain theories succeed because they make sense of some aspect of reality, and are fruitful in breeding further discoveries. A potential difficulty with this analogy is that organisms do not adapt to the environment as a whole, but to some niche within it where they are best at doing their own thing.42 But even this, as I have already indicated, is not unlike what actually happens in scientific laboratories, where much of what goes on is local, particular, adjusted to a special set of circumstances, and driven by the needs and assumptions of those doing the research. The evolutionary model has a further refinement, in that living organisms are not passive players within their competitive world. They are active participants trying to make the best of it on the basis of their own special characteristics. Nor, likewise, are scientific theories passive receptacles into which observations are collected and then ordered. Good science starts with theories and tests them against observations, sometimes finding them strengthened in the process, sometimes in need of adaptation, and sometimes destroyed. In a word, scientific research is theory-laden, in the sense that theory comes first, before the experiments designed to test it.

Generalising this point to refer to the scientific enterprise as a whole, it is possible to envisage a kind of virtuous circle in which, despite initial misconceptions, the understanding of nature is progressively refined through the testing and revision of theoretical constructions, until they fit more and more closely the reality of what is actually observed. However, uncritical reliance on supposedly established theoretical constructions entails the alternative possibility that the virtuous circle whereby reality is disclosed may turn out to be not so virtuous after all. Furthermore, while most scientists are prepared to recognise that all research is theory-laden, there is much greater reluctance to accept that theory-ladenness might apply to the scientific enterprise as a whole.

Blindness to these possibilities may have the effect of bracketing out aspects of reality which do not fit the preconceptions of a scientific culture. It may promote a concept of rationality which is not adequate to the full range of human experience. This has certainly happened in the past, as in my previous example of the seventeenth-century attempt to explain the action of nerves in purely mechanical terms. Similarly, current attempts to understand consciousness in terms of information theory may shed some light on the way we think, but at the cost of setting on one side whatever it is we mean by subjective experience. Information theory itself has not until very recently been seen as relevant to the mysterious world of quantum phenomena, yet the simple idea that nature only answers the questions we ask it, and in the form we ask them, may go a long way towards explaining some of the oddities at this quantum level.43 I shall be returning in the final chapter to the idea that a whole dimension of our human environment and experience has been bracketed out by the assumption that nature, as currently understood, is all there is.

Conclusion

I have been arguing in this chapter that the study of nature in its wholeness requires a range of disciplines, and different levels of understanding. All explanations, at whatever level, entail to a greater or lesser degree some element of human construction, and all confront us with an element of givenness, against which our ideas about reality have to be measured. We can look for ultimate reality in the fine structure of matter, but the mathematics which gives us our only means of understanding it has itself an ambivalent status as both invented and discovered. Or we can look for reality, as Aristotle did, in the world as actually experienced. Reality, from this perspective, is what confronts us here and now, organisms not atoms, which is why our probing of other levels of understanding should not be allowed to detract from the wholeness of lived experience.

Between these extremes there is plenty of room for discussion, and some appreciation of the many dimensions of the concept of nature may be a good place to start. As we shall see in subsequent chapters, the classic dualism between nature and culture may have to be redefined. It may have to be conceded that a God-like perspective on the natural world is not available to us, precisely because we are not gods. Indeed it may be that theology's main contribution to the discussion is to go on offering reminders of that fundamental truth.

  • 1.

    The theme is extensively documented in Adrian Desmond and James Moore's Darwin (Michael Joseph, 1991).

  • 2.

    I owe this point to Martin Rudwick's 1996 Tarner Lectures on Constructing Geohistory in the Age of Revolution.

  • 3.

    From H. C. Beeching's The Masque of Balliol, c. 1870.

  • 4.
    An exception is Hilaire Belloc's:

    These things have never yet been seen,

    But scientists who ought to know

    Tell us that it must be so.

    So let us never never doubt

    What nobody is sure about.

  • 5.

    Lowes Dickinson, Religion: A Criticism and a Forecast (Dent, 1905) p. viii. Quoted by Roger Lloyd in The Church of England in the Twentieth Century (Longmans, 1946). It is ironic that at about this time Lenin was rigorously pursuing his disastrous belief that there is indeed a science of human history, Marxism, whose predictions, because they are scientific, are infallible, and can therefore be used to justify whatever means are employed to fulfil them. Robert Service, Lenin (Macmillan, 2000) p. 193.

  • 6.

    H. P. Rickman, Wilhelm Dilthey (Paul Elek, 1979).

  • 7.

    F. M. R. Walshe, Critical Studies in Neurology (E. & S. Livingstone, 1948) pp. viii–ix.

  • 8.

    John D. Barrow, Theories of Everything. The Quest for Ultimate Explanation (Vintage Edn, 1991). The scientific sections of this chapter owe much to Barrow's exposition.

  • 9.

    E. O. Wilson, ‘Biology and the Social Sciences’ in Zygon, Sept. 1990, p. 247. The theory is discussed by Philip Hefner in R. M. Richardson and W. J. Wildman's Religion and Science (Routledge, 1996) pp. 401ff.

  • 10.

    Ian Stewart, Life's Other Secret. The New Mathematics of the Living World (Allen Lane, 1998).

  • 11.

    James Gleick, Chaos. Making a New Science (Cardinal, 1987). M. M. Waldrop, Complexity. The Emerging Science at the Edge of Order and Chaos (Penguin, 1992).

  • 12.

    Hume's most famous contribution to philosophy was his demonstration that efficient causality cannot be shown to be anything more than regular succession. The idea of a necessary connection between cause and effect exists only in the mind. Kant responded to this latter point by arguing that causality is one of the universal categories through which all phenomena are necessarily perceived.

  • 13.

    Ronald W. Clark, Einstein. The Life and Times (Hodder and Stoughton, 1973) p. 73.

  • 14.

    New Scientist, 23 September 2000, pp. 33–5.

  • 15.

    Martin Rees, Just Six Numbers (Weidenfeld and Nicolson, 1999).

  • 16.

    Barrow, op. cit. p. 89.

  • 17.

    Arthur O. Lovejoy, The Great Chain of Being. A Study of the History of an Idea (Harvard, 1936). Citations are from the Harper Torchbook Edition, 1960.

  • 18.

    The Great Chain of Being. A Study of the History of an Idea, p. 229.

  • 19.

    The Great Chain of Being. A Study of the History of an Idea, p. 179.

  • 20.

    Herbert Dingle was at one time Professor of History and Philosophy of Science in the University of London. He published his question, and the story of his many attempts to find an answer to it, in Science at the Crossroads (Martin Brian and O'Keefe, 1972).

  • 21.
    There is a hint of what was later to become an obsession in Sir Arthur Eddington's 1927 Gifford Lectures, The Nature of the Physical World (Cambridge, 1928) pp. 243–4:

    … a strictly quantitative science can arise from a basis which is purely qualitative… the laws which we have hitherto regarded as the most typical natural laws are of the nature of truisms… the mind by its selective power fitted the processes of Nature into a frame of law of a pattern largely of its own choosing; and in the discovery of this pattern of law the mind may be regarded as regaining from Nature that which the mind has put into Nature.

  • 22.

    Barnes, op. cit. p. 38.

  • 23.

    Charles Seife, Zero: The Biography of a Dangerous Idea (Souvenir Press, 2000). An entertaining and informative book, but spoilt by a highly tendentious reinterpretation of church history.

  • 24.

    Robert Temple, The Genius of China (Prion, 1998) pp. 139ff. The book is a merciful distillation of Joseph Needham's vast series of volumes on Science and Civilisation in China.

  • 25.

    Quoted by R. G. Collingwood, The Idea of Nature (Oxford, 1945) p. 102.

  • 26.

    In his early years Bertrand Russell was fascinated by mathematics which he saw as providing our only human access to what is absolute and eternal. Wittgenstein, who was eventually to disillusion him, described mathematics as no more than a technique. The story of their relationship is to be found in Ray Monk, Bertrand Russell. The Spirit of Solitude (Vintage, 1997).

  • 27.

    Barrow, op. cit. Chapter 9 passim.

  • 28.

    The point is that the weather is affected by virtually everything, including itself. This self-reflexiveness enables small causes to have large and distant consequences—the famous butterfly effect.

  • 29.

    R. A. Fisher, The Genetical Theory of Natural Selection (Oxford, 1930). This was a key text in the application of Mendelian genetics to Darwin's theory, which then enabled natural selection to be put on a statistical basis.

  • 30.

    In Principia Mathematica, written in collaboration with A. N. Whitehead. The story is told in Monk, op. cit., and also more accessibly in Ray Monk and Frederic Raphael (eds.), The Great Philosophers (Weidenfeld and Nicolson, 2000) pp. 259–91.

  • 31.

    Gödel's theorem arises out of the kind of logical puzzles which can occur when sentences refer to themselves, e.g. is a Cretan lying when he says ‘All Cretans are liars’? If he is, then he isn't, but if he isn't, then he is. Similar things happen when a logical system, like mathematics, seeks to provide logical justification for itself. A formal statement of Gödel's theorem might be ‘All consistent axiomatic formulations of number theory include undecidable propositions.’ One of its implications is that not everything that is true can be proved. See Douglas R. Hofstadter, Gödel, Escher, Bach: an Eternal Golden Braid (Penguin, 1979) p. 17.
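
    For readers who want the footnote's informal statement made precise, the following is a minimal sketch in LaTeX of one standard modern formulation; the symbols T, G_T and Prov_T are supplied here for illustration and are not Hofstadter's.

    \[
    \begin{aligned}
    &\text{Let } T \text{ be a consistent, recursively axiomatizable theory containing elementary arithmetic.}\\
    &\text{Then there is a sentence } G_T \text{ with } T \vdash \bigl(G_T \leftrightarrow \lnot\,\mathrm{Prov}_T(\ulcorner G_T \urcorner)\bigr)\\
    &\text{and } T \nvdash G_T;\ \text{if } T \text{ is also } \omega\text{-consistent (a hypothesis Rosser later removed), then } T \nvdash \lnot G_T.
    \end{aligned}
    \]

    Informally, G_T ‘says of itself’ that it is unprovable, the formal counterpart of the Cretan's self-referential remark; and since a consistent T cannot prove G_T, that sentence is true on the standard interpretation of arithmetic yet unprovable, which is the sense in which not everything true can be proved.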

  • 32.

    G. H. Hardy's A Mathematician's Apology (Cambridge, 1940) is a classic and very readable statement of this point of view.

  • 33.

    Simon Singh, Fermat's Last Theorem (Fourth Estate, 1997).

  • 34.

    Barrow, op. cit. p. 101.

  • 35.

    Quoted in Arthur Gibson, God and the Universe (Routledge, 2000) p. 92.

  • 36.

    The Epilogue to Alan Sokal and Jean Bricmont's Intellectual Impostures (Profile Books, 1998) is a vigorous example of this kind of rejection, with the fire mainly directed towards leading French postmodernists.

  • 37.

    Thomas S. Kuhn, The Structure of Scientific Revolutions (Chicago, 1962).

  • 38.

    Wittgenstein's famous description of different ‘language games’ must bear some responsibility for this. George A. Lindbeck's The Nature of Doctrine. Religion and Theology in a Postliberal Age (SPCK, 1984) is a much discussed example of this tendency. A more recent example, this time from the scientific side, is Stephen Jay Gould's Rocks of Ages: Science and Religion in the Fullness of Life (Jonathan Cape, 2000).

  • 39.

    J. Wentzel van Huyssteen, The Shaping of Rationality. Toward Interdisciplinarity in Theology and Science (Eerdmans, 1999) p. 49.

  • 40.

    Barnes, op. cit. p. 187.

  • 41.

    Karl R. Popper, Objective Knowledge. An Evolutionary Approach (Clarendon Press, 1972).

  • 42.

    Barbara Herrnstein Smith, Belief and Resistance. Dynamics of Contemporary Intellectual Controversy (Harvard, 1997) pp. 139–40.

  • 43.
    New Scientist, 17 February 2001, pp. 26–30. The article is a summary of a paper by Anton Zeilinger on ‘A Foundational Principle for Quantum Mechanics’. The following quotation captures the gist of it:

    Because we can only interrogate nature the way a lawyer interrogates a witness, by means of simple yes-or-no questions, we should not be surprised that the answers come in discrete chunks. Because there is a finest grain to information there has to be a finest grain to our experience of nature.
