Anyone taking a class in Modern Philosophy will learn that one of the most important issues in 17th and 18th Century philosophy was the debate between rationalists and empiricists. In 2005, Matthias Steup and Ernest Sosa edited a book entitled Contemporary Debates in Epistemology (Oxford: Blackwell, 2005), which includes a chapter entitled ‘Is There A Priori Knowledge?’ (pp. 98–122). In this chapter, Laurence BonJour defends rationalism and Michael Devitt defends empiricism. So, this philosophical debate has been going on for four centuries, and it still has not been settled. This is the kind of thing that gets philosophy a bad reputation. If a dispute can continue for four centuries without resolution, that is surely an indication that nobody knows how to tell a good answer from a bad one. In this article I want to consider why the debate is unresolved after so much time.
The original debate between rationalists and empiricists took place at the dawn of modern science, and all the philosophers involved were trying to understand the principles of modern scientific discovery. Mathematics is an integral part of modern scientific method. We study mathematics not by making observations, but by intense thinking, often carried out in solitude. We can imagine God making a world in which pigs fly and the moon is made of green cheese, but it does not appear that even God could make a world in which 2 + 2 = 5: this type of thought-experiment suggests that mathematical truths are necessary. (Descartes in fact believed that God could make such a world, but Leibniz denied it. To this day, necessary truths are often conceived of as propositions that are true in every possible world.) In the early era of modern science, observations overthrew long-established beliefs about the behaviour of falling objects and the behaviour of heavenly bodies. However, at that time, the geometry of Euclid looked set to stand forever as a timeless description of laws that govern the shapes of all objects in space. Here we have the rationale for the core belief of rationalists: that our most reliable source of knowledge is an innate faculty, which may be called rational intuition or rational insight, and which provides us with certain knowledge of necessary truths. Such knowledge is called ‘a priori’, to contrast it with knowledge derived from observation, which is known as ‘a posteriori’ knowledge.
On the other hand, empiricists focussed on the fact that modern science advanced when people were prepared to let observations of how things actually behave overcome prejudices about how they ought to behave. Even today, I sometimes encounter students who suppose that if two objects of the same shape and volume but different weight (strictly speaking one should say mass) are dropped from a great height at the same time in stable weather conditions then the heavier object will hit the ground first. This is, of course, incorrect. The mass of an object does not affect the speed with which it falls. Of course, it is far more dangerous to be hit by a falling grand piano than a small violin travelling at the same speed. The falling grand piano requires a more urgent reaction from us, so the false belief about its greater speed might be useful in some situations. But it is still a prejudice to be overcome, and observations matter because they enable us to overcome such prejudices. This is one of the cornerstones of modern science.
Rationalists do not deny that there are some questions that can only be settled by observation. Empiricists differ from rationalists because they believe not merely that some of our knowledge about the nature of reality derives from experience, but that all of it does. In the 17th Century, this meant denying the existence of innate knowledge.
However, in the 20th and 21st centuries, it has emerged that to deny the existence of innate knowledge is itself a prejudice that must be overcome by observation. A baby that has not yet learned to talk cannot tell us what it thinks, but by studying its reactions, we can tell when it is surprised. A baby that sees a magic trick where an object apparently appears out of nowhere shows surprise – more surprise than if one object is apparently transformed into another.[1] The baby cannot speak, but it is easy to imagine that if it could, it would say, ‘That's impossible: objects don't just appear out of nowhere. Nothing can come from nothing.’ I have frequently found that students who first begin to study metaphysics take for granted that nothing can come from nothing, and are even puzzled when I ask for a justification for what is, as far as they are concerned, an obvious and self-evident truth. So, we have empirical grounds for saying that belief in a principle such as ‘Nothing comes from nothing’ is innate. The claim that such principles are innate was one of the main differences that separated rationalists from empiricists, and it seems that the rationalists were right.
So, if scientific evidence supports the existence of innate beliefs, why didn't empiricists simply admit defeat?
One important reason is that we know something today that we did not know in the 17th Century. Innate beliefs are probably the result of our evolutionary heritage. The child's confidence that nothing comes from nothing is derived not from his or her own experience, but from trial and error across countless generations. So it is now possible to reconcile one of the basic claims of empiricism, that all knowledge is derived from experience, with one of the basic claims of rationalism, that some knowledge is innate: knowledge may be innate in the life of the individual, but derived from experience in the history of the species.
The fact that these two basic tenets of empiricism and rationalism can be reconciled does not, however, end the debate. For empiricists, the reason it is important to realize that all knowledge is based on experience is to undermine false claims to knowledge. The innate belief that nothing comes from nothing has proved itself to be useful when dealing with objects visible to the naked human eye. However, we know very well that principles that apply to visible objects do not apply to sub-atomic particles. Quantum physics is highly counter-intuitive: it deals with entities whose behaviour defies the expectations we have inherited from our ancestors. Our innate beliefs supply us with the prejudices that must be overcome by careful observation.
Here is an example of why the debate between rationalists and empiricists has important implications for so many areas of philosophy. Suppose that we are considering the question of God's existence. The principle that nothing comes from nothing sounds like a good starting point for an argument in favour of there being a God. Rationalists in the 17th Century thought that such innate principles were our most reliable source of knowledge. However, if the only basis for believing that nothing comes from nothing is the experience of our ancestors, then there is no particular reason to suppose we can apply this principle to problems like the origin of the universe and use it as a stepping stone to the conclusion that God exists. Our suspicion that the rabbit in the magician's hat did not really appear out of nowhere is well-founded, but it does not follow that we can treat ‘Nothing comes from nothing’ as a universal truth, applicable to any situation. The empiricist could claim that, until they have been tested by experience, our innate expectations and beliefs about the world cannot really be classed as knowledge.
It appears then that by conceding ground on the issue of innate beliefs, empiricists are able to win the broader battle over the limits of knowledge. However, we must remember that rationalists are no less capable than empiricists of adapting their philosophy in the light of new discoveries.
In fact, one of the most important revisions that rationalists have made is to accept the possibility of revising a priori beliefs. In the 17th Century, a priori beliefs were valued by rationalists because of their certainty. They were considered to be unshakeable foundations for knowledge. However, it is important to distinguish between certainty and necessity. To say a particular belief is certain is to say something about the person or group who holds the belief and their attitude towards it. To say that a truth is necessary is to say not that it is impossible not to believe it, but that it is something that has to be true. As I have noted, since the time of Leibniz it is customary to say that necessary truths are true in every possible world. An almighty creator can choose to make pigs with or without wings, so it is a contingent truth that pigs cannot fly. But if the all-powerful creator creates two pigs in Eden and two pigs in Egypt, at least four pigs would then exist in that world; not even an all-powerful creator can avoid this consequence. This is (arguably) what is meant by saying that it is a necessary truth that ‘2 + 2 = 4’. According to Stanislas Dehaene, truths such as ‘2 + 2 = 4’ seem self-evident because we humans have evolved a ‘number sense’ – hence the title of his book, The Number Sense (London: Penguin, 1997).
Learning calculus, on the other hand, requires much more effort. Indeed, even if we stick to basic addition, as the numbers involved increase our sense of confidence decreases. However, to say that something is true in every possible world is not a statement about our level of confidence. I might be unsure whether my answer to a particularly difficult problem of arithmetic is the correct one; indeed, I might have no idea what the answer is at all, and yet still be of the opinion that the correct answer (whatever it is) is necessarily the correct answer to that particular problem. In some cases, our ability to arrive at a conclusion exploits judgements about certain truths being necessary. We cannot verify that the sum of two odd numbers is always an even number by checking every particular case, but we do not need to. A number is odd if its final digit is odd, and even if its final digit is even. If I add two odd numbers, abc and def, so that the sum is another number, ghi, the last digit of the sum, i, is determined by the last digits of the two numbers being added: i is the last digit of c + f. We can verify, by checking each possible combination, that the sum of any two odd digits from 1–9 is an even number, and since abc and def are odd numbers, their last digits, c and f, will be odd digits from 1–9. So c + f is even, and hence its last digit i is even too (a carry into the tens column makes no difference to this), which means that ghi is an even number. So whatever two odd numbers I take, their sum is bound to be an even number.
At every stage the proof relies on our observing the way certain things are bound to be necessarily, for example, that the last digit of an odd number is bound to be an odd number. The conclusion covers an infinite number of cases that I have not verified, but my expectation (or rather my knowledge) about these cases relies on something much stronger than the fact that so far, every time I have added two odd numbers the result has been even.
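The same conclusion can be reached by a more compact algebraic route – a sketch using the standard representation of odd numbers, rather than the digit-by-digit argument above. Every odd number can be written as twice some whole number plus one, so if the two odd numbers are 2m + 1 and 2n + 1, their sum is

(2m + 1) + (2n + 1) = 2(m + n + 1),

which is twice a whole number, and therefore even, whatever values m and n happen to take. Here too the reasoning turns on what is bound to be the case: twice a whole number cannot fail to be even.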
This exploitation of necessity seems to be integral to any form of deductive reasoning. The goal of deductive reasoning is to produce arguments that are valid, in the sense that it would be absurd to assert the premises of the argument and deny the conclusion. When we pronounce that a deduction is valid, we are saying that if the premises are true, then it necessarily follows that the conclusion is true.
As we have just seen, the knowledge that something is necessary enables us to arrive at universal truths without verifying every particular instance. We have seen that necessary truths are sometimes described as being true in every possible world. We know what is in this world by observation, but we cannot observe any of the other possible worlds, so any statement of the form ‘It is necessarily true that …’ goes beyond what we have observed – even if ‘we’ refers to all humans throughout history. For rationalists, this is a powerful reason for supposing that we must have some source of knowledge that is independent of observation. Mathematics is based on deduction, deduction requires judgements about necessary connections, and judgements about necessity cannot be based on observation, so there must be some source of knowledge that is non-observational, and that is precisely what is meant by a priori knowledge. There is no more reason to demand that our source of a priori knowledge be infallible than there is to say that because we rely on sight for so many of our judgements, our vision must be perfect if we are to have any knowledge of the external world.
There is though one particular revision of mathematical knowledge that has been held against rationalism, and that is the demise of Euclidean geometry. In the 4th to 3rd centuries BCE Euclid constructed a system of geometry in which everything is derived from five axioms. Euclid's geometry, which is based on intuitive ideas such as the principle that between any two points there is a unique straight line, agrees perfectly with common-sense. As I have already stated, in the 17th Century Descartes and Spinoza assumed that Euclid's edifice would last forever and aspired to find foundations for all knowledge that would be as durable as Euclid's axioms. There was always some dissatisfaction amongst mathematicians with Euclid's fifth axiom (the so-called parallel postulate), which was felt to lack the self-evidence of the others, and there were many attempts to use the first four axioms to construct a proof of the fifth. In the 18th Century, Saccheri and Lambert independently attempted to prove the fifth axiom by a process of reductio ad absurdum – if you start from the assumption that the fifth axiom is false, and demonstrate that some contradiction follows, you demonstrate that the fifth axiom must be true. Without realizing it, Saccheri and Lambert had entered the domain of non-Euclidean geometry; the assumption that the fifth axiom is false leads to conclusions that are contrary to common-sense, but not to an inherent contradiction. In the early 19th Century, Bolyai, Gauss and Lobachevsky carried out similar work, this time with a full awareness of the implications: geometry need not start from the intuitive axioms of Euclid. We can take counter-intuitive axioms that seem self-evidently wrong, and derive consistent consequences from them.[2] Initially, this all seemed like pure speculation – imagining crazy alternative universes that God might have created, but didn't. However, Einstein's theory of General Relativity is based on the idea that the structure of space is non-Euclidean.
This looks like another example of how observational knowledge overcomes prejudices. The assumption that our intuitions about the nature of space were implanted in us by God was an obstacle to be overcome on the way to understanding the actual structure of space as revealed through experiments. It is true, of course, that rationalists no longer claim that rational intuition is infallible, but we are not dealing here with a minor oversight. It seems that for centuries our beliefs about geometry, in which mathematicians had such great confidence, were mistaken. The jewel in the crown of rationalism turned out to be a fake.
However, this overlooks the fact that the original break from Euclidean geometry was made not by experimental physicists, but by mathematicians engaged in pure reasoning. Einstein himself commented that, when he came to work on General Relativity, mathematicians had already solved many of the formal problems by developing non-Euclidean geometry.[3] Of course, the discovery that non-Euclidean space is a geometrical possibility did not lead immediately to the conclusion that our universe is non-Euclidean. Once you know that some possible universes are Euclidean and some are not, it is an empirical question which kind of universe we inhabit. Until Einstein, most people who considered the issue assumed, incorrectly, that the answer to this question was the one supplied by our ancestral intuitions. Still, the fact that not every possible universe is Euclidean was a genuine a priori discovery. The central point of empiricism is that modern science depends on our being able to overcome our preconceptions by paying attention to the evidence of our senses. The lesson of non-Euclidean geometry is that we do not need to use our senses to overcome these preconceptions: sometimes pure reason is enough.
So far then, two arguments have been advanced in favour of rationalism. The first is the empirical argument that we do in fact have innate knowledge. The second is the argument that mathematical proof requires us to make judgements about what is necessary, and that empirical means do not suffice for this. It is natural to assume that both these arguments point to a single faculty: rational intuition, our in-built capacity to recognize necessary truths as self-evident. However, this assumption should be challenged. Our innate knowledge is an evolutionary inheritance, a set of useful beliefs about the environment in which we evolved. There is no reason why contingent truths, and even useful half-truths (such as the belief that heavy objects fall more rapidly), should not be included within this legacy. Although innate, such knowledge is the product of experience. Distinct from this is our ability, with great effort, to engage in deductive arguments that take us step by step into realms that sometimes defy our intuitions, to reach conclusions that exceed the bounds of what we have experienced. If the second argument in favour of rationalism is correct, this latter ability requires a priori knowledge. Non-Euclidean geometry indicates both the limitations of our intuitive inheritance and the power of hard-won deductive knowledge to take us beyond what evolution has given us for free. The outstanding challenge is to explain how such deductions are possible.
Of course, the debate does not stop here. Empiricists have their own ways of explaining our capacity for deduction. Some empiricists think we should abandon the idea of necessary truths completely; others would argue that necessary truths are truths not about the world, but about the concepts through which we think about the world, thus domesticating the notion of necessity. Due to lack of space, I cannot pursue these issues here, but then I make no pretence that this paper is a comprehensive discussion of the debate. There is, however, one final argument for empiricism that I would like to present.
The final empiricist argument is as follows. We are a long way from having a full understanding of the complex process by which humans acquire knowledge. However, we can be sure that this is a natural biological process. Knowledge derived from observation involves a chain of cause and effect. Light bounces off an object and strikes both my eye and a camera. A series of reactions follows in each case. In the case of the camera, this results in an image being stored. In the case of my eye, the result is that after the image has been processed my brain is in a slightly different state, and the new state involves some increase in my knowledge. The process that takes place in my brain is far more complicated than that which takes place in the camera, and is one that we do not yet fully understand. Still, in each case, we are dealing with nothing more than a chain of physical causes. We know that chains of physical causes exist, and all we need to do to understand a posteriori knowledge is acquire a more detailed understanding of the chain leading from an object being observed to a particular state of the brain.
However, a priori knowledge (if there is such a thing) is independent of observation. It cannot simply be fitted into a chain of cause and effect as described above, because there is no observed object that forms the start of the chain. Consequently, empiricists claim that rationalism is a mystical doctrine, since it seems that a priori knowledge would have to come from outside the chain of cause and effect that is, so far as we know, constitutive of nature. So, according to this empiricist argument, rationalism involves a belief in a supernatural source of knowledge.
Rationalists do not accept this charge of mysticism. They are confident that we have a priori knowledge, for as they argue, without a priori knowledge we would not be able to make the kind of judgements about what is necessary that are required by our deductive practices. They also maintain that our capacity to acquire such knowledge is entirely natural. Granted, we might not yet have even a simple model of the process by which such knowledge is acquired, but that is a matter for future research. After all, the empiricist can hardly claim that it is altogether impossible for the process of knowledge acquisition to differ from the causal-chain model, since one of the main motivations for empiricism is the awareness that future discoveries may not fit our current preconceptions. The belief that all our knowledge has to be describable as a causal chain from object to brain, that this is the only way things could possibly be, is precisely the kind of claim that empiricists are supposed to reject. The strongest claim that the empiricist is entitled to is only that so far nobody has provided a coherent alternative to the causal-chain model of knowledge. The rationalist must hope for some breakthrough similar to the discovery of non-Euclidean geometry. Indeed the rationalist, believing that we are entitled to make judgements about what is necessary, might well insist that if the empiricist is correct that the causal-chain model cannot explain a priori knowledge, then it necessarily follows that there is more to knowledge than the causal-chain model suggests.
So to return to the original question, why, after so much time, is the debate about empiricism and rationalism still going on? The first thing to notice is that the terms ‘empiricism’ and ‘rationalism’ have shifted their meaning since the 17th Century. In the 17th Century, empiricists denied that we have innate ideas; in the 21st Century, this no longer holds. In the 17th Century, rationalists thought Euclid's geometry was beyond question. Today, the fact that geometry has transcended our Euclidean intuitions can be seen as a testimony to the power of a priori reasoning.
This tendency for philosophical vocabulary to shift over time is a source of frustration for many students. They naturally want simple clear definitions of terms like ‘empiricism’, not a complicated historical narrative. Understandably, they would prefer that the philosophical community stick to clear and unchanging definitions of its key vocabulary.
However, although the shifting terminology may create confusion, it is a symptom of something positive. The definitions of ‘empiricism’ and ‘rationalism’ have changed as philosophers have responded to new information. Some questions that were once matters of speculation have now been answered scientifically, and these answers raise further questions. Our success in answering some of the great questions of the 17th Century is a reason for hoping that the outstanding questions will eventually be answered as well.