The ‘Miracle Argument’ For Scientific Realism

Alan Musgrave


Scientific realism is the view that science seeks the truth and sometimes finds it. This is a pretty commonsensical view and is probably the instinctive philosophy of most working scientists. Yet down the ages philosophers have found it problematic and have come up with various antirealist views about science. Some antirealists say that science is all false. Others say that some science might be ‘true’, but go in for some fancy antirealist theory of truth. Yet others grant that some science might be true in the realist sense of ‘true’, but say that science does not or should not seek truth. The leading contemporary version of this last view is Bas van Fraassen’s constructive empiricism. Its slogan is ‘the name of the scientific game is saving the phenomena’. A scientific theory saves its phenomena if it issues nothing but true predictions about observable states of affairs, if it is ‘empirically adequate’. Truth is stronger than empirical adequacy. A true theory will be empirically adequate, but an empirically adequate theory need not be true. According to constructive empiricism, what science seeks and sometimes finds is empirically adequate theories.

Hilary Putnam said ‘Realism is the only philosophy that does not make the success of science a miracle’ (1975: 73). Putnam said this when he was a realist, before his conversion to so-called ‘internal realism’, which is a kind of Kantian idealism. The remark refers to what has been called the ‘Miracle Argument’ or (by van Fraassen) the ‘Ultimate Argument’ for realism. What exactly is the argument? Is it a good argument? Is it a good argument for realism, as opposed to constructive empiricism?

First, I will explain how I think the argument is to be construed. Second, I will show that thus construed it is a good argument. Third, I will show that it does favour realism as opposed to constructive empiricism. Finally, I shall consider the chief objections to it.

The Miracle Argument says that realism about science is the best explanation of the success of science, which would otherwise be ‘miraculous’. It is a special case, or special type of case, of ‘inference to the best explanation’. So, how does that inference work in general, before we turn to this particular case of it?

Inference to the best explanation

Inference to the best explanation (IBE, for short) is a pattern of argument that is ubiquitous in everyday life as well as in science. Van Fraassen has a homely example:

I hear scratching in the wall, the patter of little feet at midnight, my cheese disappears—and I infer that a mouse has come to live with me. Not merely that these apparent signs of mousely presence will continue, not merely that all the observable phenomena will be as if there is a mouse, but that there really is a mouse. (1980: 19-20)

Here, the mouse hypothesis is supposed to be the best explanation of the phenomena, the scratching in the wall, the patter of little feet, and the disappearing cheese.

What exactly is the inference in IBE, what are the premises, and what the conclusion? Van Fraassen says ‘I infer that a mouse has come to live with me.’ This suggests that the conclusion is ‘A mouse has come to live with me’ and that the premises are statements about the scratching in the wall, etc. Generally, the premises are the things to be explained (the explanandum) and the conclusion is the thing that does the explaining (the explanans). But this suggestion is odd. Explanations are many and various, and it will be impossible to extract any general pattern of inference taking us from explanandum to explanans. Moreover, it is clear that inferences of this kind cannot be deductively valid ones, in which the truth of the premises guarantees the truth of the conclusion. For the conclusion, the explanans, goes beyond the premises, the explanandum. In the standard deductive model of explanation, we infer the explanandum from the explanans, not the other way around—we do not deduce the explanatory hypothesis from the phenomena, rather we deduce the phenomena from the explanatory hypothesis.

The intellectual ancestor of IBE is Peirce’s abduction, and here we find a different pattern:

The surprising fact, C, is observed.
But if A were true, C would be a matter of course.
Hence, …A is true.
(C. S. Peirce, 1931-58, Vol. 5: 189)

Here the second premise is a fancy way of saying ‘A explains C’. Notice that the explanatory hypothesis A figures in this second premise as well as in the conclusion. The argument as a whole does not generate the explanans out of the explanandum. Rather, it seeks to justify the explanatory hypothesis. Abduction belongs in the context of justification, not in the context of discovery. (This is a point of some importance. Peirce’s abduction was once touted, chiefly by Norwood Russell Hanson, as a long neglected contribution to the ‘logic of discovery’. It is no such thing.)

Abduction is deductively invalid. We can validate it if we view it as a deductive enthymeme and supply its missing premise, ‘Any explanation of a surprising fact is true’. But this missing premise is obviously false. Nor is any comfort to be derived from weakening the missing premise (and the conclusion) to ‘Any explanation of a surprising fact is probably true’ or to ‘Any explanation of a surprising fact is approximately true’. It is a surprising fact that marine fossils are found on mountain-tops. One explanation of this is that Martians came and put them there to surprise us. But this explanation is not true, or probably true, or approximately true.

IBE attempts to improve upon abduction by requiring that the explanation is the best explanation that we have. It goes like this:

F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, H is true
(William Lycan, 1985: 138)

This is better than abduction, but not much better. It is also deductively invalid. We can validate it if we view it as a deductive enthymeme and supply its missing premise, ‘The best available explanation of a (surprising) fact is true’. But this missing premise is also obviously false. Nor, again, will going for probable truth, or approximate truth, help matters.

There is a way to rescue abduction and IBE. We can validate them without adding missing premises that are obviously false, so that we merely trade obvious invalidity for equally obvious unsoundness. Peirce provided the clue to this. Peirce’s original abductive scheme was not quite what we have considered so far. Peirce’s original scheme went like this:

The surprising fact, C, is observed.
But if A were true, C would be a matter of course.
Hence, there is reason to suspect that A is true.
(C. S. Peirce, 1931-58, Vol. 5: 189)

This is obviously invalid, but to repair it we need the missing premise ‘There is reason to suspect that any explanation of a surprising fact is true’. This missing premise is, I suggest, true. After all, the epistemic modifier ‘There is reason to suspect that…’ weakens the claims considerably. In particular, ‘There is reason to suspect that A is true’ can be true even though A is false. If this missing premise is true, then instances of the abductive scheme may be both deductively valid and sound.

IBE can be rescued in a similar way. I even suggest a stronger epistemic modifier, not ‘There is reason to suspect that…’ but rather ‘There is reason to believe (tentatively) that…’ or, equivalently, ‘It is reasonable to believe (tentatively) that…’. What results, with the missing premise spelled out, is:

It is reasonable to believe that the best available explanation of any fact is true.
F is a fact.
Hypothesis H explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to believe that H is true

This scheme is valid and instances of it might well be sound. Inferences of this kind are employed in the common affairs of life, in detective stories, and in the sciences.
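The claim that the rescued scheme is deductively valid can be checked mechanically. The following toy formalization is my own sketch, not Musgrave's; the predicate names are invented for illustration. `BestExplains h` abbreviates premises 2–4 taken together (h explains F, and no available competitor explains F as well), and `Reasonable h` reads "it is reasonable to believe (tentatively) that h is true":

```lean
-- Toy rendering of the rescued IBE scheme (sketch only; names are mine).
theorem ibe_is_valid {Hyp : Type}
    (BestExplains Reasonable : Hyp → Prop)
    -- major premise: it is reasonable to believe that the best available
    -- explanation of any fact is true
    (major : ∀ h, BestExplains h → Reasonable h)
    (H : Hyp)
    -- minor premises: H is the best available explanation of F
    (minor : BestExplains H) :
    Reasonable H :=
  major H minor  -- instantiation plus modus ponens
```

The logic here is trivial, which is the point: all the epistemic weight falls on the major premise, exactly as the text says.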

Of course, to establish that any such inference is sound, the ‘explanationist’ owes us an account of when a hypothesis explains a fact, and of when one hypothesis explains a fact better than another hypothesis does. If one hypothesis yields only a circular explanation and another does not, the latter is better than the former. If one hypothesis has been tested and refuted and another has not, the latter is better than the former. These are controversial issues, but they are not the most controversial issue. That concerns the major premise. Most philosophers think that the scheme is unsound because this major premise is false, whatever account we can give of explanation and of when one explanation is better than another. So let me assume that the explanationist can deliver on the promises just mentioned, and focus on this major objection.

It is objected that the best available explanation might be false. Quite so—and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths.

But what if the best explanation of a fact not only might be false, but actually is false? Can it ever be reasonable to believe a falsehood? Of course it can. Suppose van Fraassen’s mouse explanation is false, that a mouse is not responsible for the scratching, the patter of little feet, and the disappearing cheese. Still, it is reasonable for us to believe it, given that it is the best explanation we have of those phenomena. Of course, if we find out that the mouse explanation is false, then it is no longer reasonable for us to believe it. But what we find out is that what we believed was wrong, not that it was wrong or unreasonable for us to have believed it.

It is objected that being the best available explanation does not prove a hypothesis to be true. Quite so—and again, so what? The explanationist principle—'It is reasonable to believe that the best available explanation of any fact is true'—means that it is reasonable to believe, or think true, things that have not been proved to be true. Philosophers who think that it can only be reasonable to believe what has been proved will reject the explanationist principle. Such philosophers accept what I have called the 'justificationist principle', according to which a reason for believing something must be a reason for what is believed. Explanationism, or the explanationist principle, stands opposed to justificationism, or to the justificationist principle. As I see it, the rejection of justificationism lies at the heart of Karl Popper's critical rationalism. And explanationism, as I understand it, is part and parcel of critical rationalism. But most philosophers see things differently, accept justificationism, and go in for the usual formulations of IBE, where the conclusion is that the best explanation is true (or probably true, or approximately true). Critics rightly object that such conclusions do not follow. They can be made to follow, but only by adding to the premises an absurd metaphysical principle like 'The best available explanation of any fact is true (or probably true, or approximately true)'. Most philosophers, instead of making these absurd metaphysical principles explicit, reply that abduction or IBE is cogent according to some special abductive or non-deductive logic. Critical rationalists reject justificationism, and have no need of special non-deductive logics either. IBE is for them a perfectly valid deductive form of reasoning, with a non-justificationist epistemic principle as its major premise. Thus critical rationalism and explanationism go hand-in-hand.

But do explanationism and realism go hand-in-hand as well? Why cannot constructive empiricists also accept IBE and put their own gloss upon it? When we consider van Fraassen’s mouse hypothesis, truth and empirical adequacy coincide, since the mouse is an observable thing. But when it comes to hypotheses about unobservables, truth and empirical adequacy come apart. Why cannot the constructive empiricist also accept IBE, but only as licensing acceptance of the best available explanation as empirically adequate, not as true? As Howard Sankey puts it:

The question is why it is reasonable to accept the best explanation as true. Might it not be equally reasonable to accept the best explanation as empirically adequate …? (2006, p. 118)

My answer to this question is NO. Suppose that H is the best explanation we have of some phenomena. Remember the Truth-scheme: It is true that H if and only if H. Given the Truth-scheme, to believe that H and to believe that H is true are the same. Given the Truth-scheme, to accept that H and to accept that H is true are the same. So what is it to accept that H is empirically adequate? It is not to accept H, for this is the same as accepting that H is true. Rather, it is to accept a meta-claim about H, namely the meta-claim ‘H is empirically adequate’ or equivalently ‘The observable phenomena are as if H were true’. Call this meta-claim H*. Now, and crucially, H* is no explanation at all of the phenomena. The hypothesis that it is raining explains why the streets are wet—but ‘The phenomena are as if it were raining’ does not. Ergo, H* is not the best explanation—H is, or so we assumed. (Actually, all we need assume here is that H is a better explanation than H*.) So given IBE, H* should not be accepted as true. That is, given IBE, H should not be accepted as empirically adequate.

I wonder which part of this argument those who believe that there is a constructive empiricist version of IBE will reject. Not IBE—at least, they are pretending to accept it. Not, presumably, the Truth-scheme. Not, presumably, its consequence, that to accept H and to accept H as true are the same thing. Not, presumably, the equivalence of ‘H is empirically adequate’ and ‘The observable phenomena are as if H were true’. Not, presumably, the claim that H is a better explanation of the phenomena than ‘The phenomena are as if H were true’.

What this shows is that realism and explanation go hand-in-hand. If you try to recast IBE in terms of empirical adequacy rather than truth, you end up with something incoherent. You start off thinking that it is reasonable to accept the best explanation as empirically adequate. And you end up accepting something that is no explanation at all.

This does not refute constructive empiricism. It only refutes the idea that constructive empiricists can traffic in explanation, and in IBE, just as realists do. It is no accident that down the ages acute antirealists have pooh-poohed the idea that science explains things. Van Fraassen should join Duhem in this, as he already has in most other things.

The Miracle Argument

So much for explanation, and for IBE, in science. The same considerations apply to IBE in meta-science, to the so-called Miracle Argument for scientific realism. Here what is to be explained is not a fact about the world, like the scratching in the wall or the disappearing cheese. What is to be explained is a fact about science, the fact that science is successful. The success in question is predictive success, the ability of a theory to yield true predictions about the observable, and the technological success that often depends upon this. The key claim is that the best explanation of a theory’s predictive success is that it is true. Given this claim, IBE licenses reasonable belief in the truth of that theory.

It is only consistent empirical success that can be explained in terms of truth. You cannot explain the partial success of a falsified theory in terms of its truth. (This is an important point, to which I shall return.) So the explanandum is of the form 'All T's predictions about observable phenomena are true', or (putting it in van Fraassen's terminology) 'T is empirically adequate', or (putting it in surrealist terminology, following Leplin 1993) 'The observable phenomena are as if T were true'. The realist thought is that T's actually being true is the best explanation of why all the observable phenomena are as if it were true.

Can we know for sure that any theory is empirically adequate? To accept a theory as empirically adequate and set out to explain why is to generalise beyond the available evidence, to make an 'inductive leap'. But it is no different with explanation in science itself. Scientific explananda are typically general. Scientists typically seek to explain general statements, rather than statements of particular fact. For example, they seek to explain why sticks look bent when half-immersed in water, not why my walking-stick looked bent last Thursday when I dipped it in the Leith. Some radical inductive sceptics deny that any general statement is true. The best reply to them is to ask how they know this (self-refuting) statement to be true. Other radical inductive sceptics deny that any general statement can be known for certain to be true. The best reply to them is to agree, but to insist that some general statements can be rationally adopted as true. Scientists who set out to explain why sticks look bent in water obviously suppose that sticks do look bent in water … always. And it is reasonable for them to suppose this, despite the fact that it might turn out to be mistaken, as the inductive sceptics rightly point out.

As in science, so also in metascience. Metascientists can reasonably suppose that a theory is empirically adequate, and set out to explain why. If the metascientific explanatory project is to be rejected because it involves an ‘inductive leap’, then science’s typical explanatory projects must be rejected on the same ground.

Even so, to claim that a theory is empirically adequate is to make a very strong claim. It is to claim that all the empirical regularities predicted by a theory are true. There are both vertical and horizontal ‘inductive leaps’ involved in such a claim. To say that any particular predicted empirical generalisation is true involves the vertical inductive leap from examined cases to all cases. To say that all the predicted empirical generalisations are true involves the horizontal inductive leap from the generalisations we happen to have tested to all of the generalisations. So, what is to be explained, empirical adequacy, is already epistemically problematic. But let us set this aside. After all, we are dealing here with antirealists who think empirical adequacy an epistemically respectable category, but who baulk at truth.
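The two leaps can be put schematically. The notation here is mine, not the text's: write $O(T)$ for the set of empirical generalisations predicted by $T$.

```latex
% Empirical adequacy, with its two inductive leaps made explicit
\begin{align*}
\text{Empirical adequacy of } T:\quad & \forall G \in O(T),\ G \text{ is true}\\
\text{Vertical leap (within one } G\text{)}:\quad & \text{examined instances conform to } G
   \ \Longrightarrow\ \forall x\, Gx\\
\text{Horizontal leap (across } O(T)\text{)}:\quad & \text{tested members of } O(T) \text{ are true}
   \ \Longrightarrow\ \forall G \in O(T),\ G \text{ is true}
\end{align*}
```

So the claim of empirical adequacy quantifies twice over: over the instances of each predicted generalisation, and over the predicted generalisations themselves.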

So, supposing that it makes sense to try to explain empirical adequacy, how exactly does truth do it? Suppose the theory in question asserts the existence of unobservable or theoretical entities. The theory will not be true unless these existence claims are true, unless the theoretical entities really exist, unless the theoretical terms really do refer to things. So part of the realist story is that T is empirically adequate because the unobservables it postulates really do exist. But this cannot be the whole realist story. Reference may be a necessary condition for success, but it cannot be a sufficient condition. A theory may be referential yet false and unsuccessful (more on this later). The other part of the realist story is that what the theory says about the unobservables it postulates is true.

The Miracle Argument says, not just that truth explains empirical adequacy, but that it is the only explanation, or at least the best explanation. To evaluate this claim, we need to pit the realist explanation of success, in terms of successful reference and truth, against other possible antirealist explanations. What might such antirealist explanations be like? Van Fraassen replaces truth by empirical adequacy as an aim for science. But it is obvious that we cannot satisfactorily explain the empirical adequacy of a theory in terms of its empirical adequacy:

T is empirically adequate.
Therefore, T is empirically adequate.

This explanation is no good because it is blatantly circular. Other antirealist explanations are also circular, but not so blatantly circular as this one. Laudan replaces truth by problem-solving ability as an aim for science. As he explains, an empirical problem is posed by a question of the form ‘Why G?’, where G is some empirical generalisation. So to say that a theory is a good empirical problem-solver is just to say that it yields lots of true empirical generalisations. So, when we unpack the definitions, what we have once again is an explanation of empirical adequacy in terms of empirical adequacy.

Then there is Jarrett Leplin’s surrealism, which is short for ‘surrogate realism’ (Leplin 1993). Surrealism arises by taking some theory T and forming its surrealist transform T*: ‘The observed phenomena are as if T were true’. It is clear that ‘The observed phenomena are as if T were true’ is merely a fancy way of saying that T is empirically adequate. That being so, we cannot satisfactorily explain the empirical adequacy of T by invoking the surrealist transform of T. For that is, once again, explaining empirical adequacy just by invoking empirical adequacy.

Kyle Stanford’s ‘Antirealist explanation of the success of science’ (Stanford, 2000) does no better either. Jack Smart suggested long ago that the Copernican astronomer can explain the predictive success of Ptolemaic astronomy by showing that it generates the same predictions as the Copernican theory does and by assuming the truth of the Copernican theory (Smart, 1968, p. 151). Smart wrote:

Consider a man (in the sixteenth century) who is a realist about the Copernican hypothesis but instrumentalist about the Ptolemaic one. He can explain the instrumental usefulness of the Ptolemaic system of epicycles because he can prove that the Ptolemaic system can produce almost the same predictions about the apparent motions of the planets as does the Copernican hypothesis. Hence the assumption of the realist truth of the Copernican hypothesis explains the instrumental usefulness of the Ptolemaic one. Such an explanation of the instrumental usefulness of certain theories would not be possible if all theories were regarded as merely instrumental. [Smart (1968), p. 151.]

Now by the ‘instrumental usefulness’ of Ptolemaic astronomy, Smart obviously means its predictive success. So, his suggestion is that realists can explain the predictive success of a false theory in terms of its predictive similarity to the true theory. Stanford considers Smart’s suggestion, and says of it:

Notice that the actual content of the Copernican hypothesis plays no role whatsoever in the explanation we get of the success of the Ptolemaic system: what matters is simply that there is some true theoretical account of the domain in question and that the predictions of the Ptolemaic system are sufficiently close to the predictions made by that true theoretical account. [Stanford (2000), p. 274.]

This is quite wrong. The detailed content of the Copernican theory, and the fact that some of the detail of the Ptolemaic theory is similar to it, are essential to the explanation of the success of Ptolemaic theory. There are many examples to illustrate this—I will give only one. The periodic retrograde motions of the superior planets are explained in Copernican astronomy by the fact that the earth overtakes those planets as it makes its annual journey around the sun. In Ptolemaic astronomy the earth is stationary at the centre of the universe, and makes no annual journey around the sun. Yet Ptolemaic astronomy also correctly yields the periodic retrograde motions of the superior planets. How? In Ptolemaic astronomy retrograde motions are explained by assigning each planet an epicycle-deferent system as it rotates around the stationary earth. It predicts the retrograde motions of the superior planets correctly because the period of the epicycle assigned to each superior planet is one year. The annual motion of the earth in Copernican astronomy ‘corresponds’ to the annual periods of the epicyclic motions of the superior planets in Ptolemaic astronomy. This is why the two theories make the same predictions in this case. The reason why the predictions are correct is that the Copernican theory is true: the earth does take a year to circle the sun.

Stanford suggests that ‘it is the fact that the Ptolemaic system is predictively similar to the true theoretical account of the relevant domain that explains its usefulness, not that it is predictively similar to the Copernican hypothesis as such.’ (275). This, again, is quite wrong. No explanation, or no good explanation, of Ptolemy’s usefulness is to be had simply by saying that it makes the same predictions as the true theory does. For this is just to say ‘It is predictively as if Ptolemy were true’.

Stanford generalises this example into an antirealist explanation of success in general. The predictive success of any theory is to be explained by saying that the theory makes the same predictions as the true theory (whatever that is). But this is explaining ‘T is predictively successful’ by saying ‘It is predictively as if T were true’, or for short, ‘T is predictively successful’. It is incredible that earlier in his paper (268-9) Stanford accepts that we can neither satisfactorily explain empirical adequacy in terms of empirical adequacy nor adequately explain it in the surrealist way. Yet what he ends up with is just a variant of the surrealist explanation.

Stanford says, in defence of his proposal, that unlike the realist, constructive empiricist and surrealist proposals, which all appeal to some relationship between the theory and the world to explain its success, his proposal ‘does not appeal to a relationship between a theory and the world at all; instead it appeals to a relationship of predictive similarity between two theories’ (276). This seems to be a double joke. First, there are not two theories here at all, there is one theory and an existential claim that there is some true theory somewhere predictively similar to it. Second, you hardly explain the success of T by saying ‘T is predictively similar to some other theory T*’, for T* might be false and issue in false predictions. The truth of T*, whether T* is spelled out or just asserted to exist (as here), is essential to the explanation of T’s success. The relation of T* to the world is essential, in other words.

(Stanford’s proposal collapses into the realist proposal if we allow T* to be identical with T. For then we explain the success of T by saying that it is predictively similar to some true T*, namely T itself. Nothing in Stanford’s presentation rules this out. In particular, predictive similarity is a reflexive relation, which every theory bears to itself. However, Stanford obviously wants his proposal to be a rival to the realist proposal, so we ought in charity to assume that T and T* are distinct theories.)

Stanford counts it a virtue of his proposal that it does not involve asserting the truth of any particular theory—all that is asserted is that there is some true theory T* predictively similar to T. It might be thought that, simply by invoking the truth of some unspecified theory or other, Stanford’s proposal remains a realist proposal. (This is suggested by Psillos (2001), p. 348.) Not so. I can satisfy Stanford by invoking the truth of the surrealist transform of T. But then I end up saying that it is the truth of ‘The phenomena are as if T were true’ that explains T’s success. I can also satisfy Stanford by invoking the truth of Berkeley’s surrealist philosophy. It is the truth of ‘God creates experiences in our minds as if science were true’ that explains why science is successful. Surrealist transforms are by design structurally similar to what they are transforms of. There is nothing realist about them. And, to repeat, Stanford previously conceded that the explanations of success they offer are no good.

In the Ptolemy–Copernicus case, the empirical success of a false theory (Ptolemy) is explained by invoking its similarity to a true theory (Copernicus). The similarity explains why the two theories make the same predictions—the truth of the second theory explains why the predictions of the first theory are true even though the first theory is false. The surrealist transform of Ptolemy’s theory—‘Observed planetary motions are as if Ptolemy’s theory were true’—follows from Ptolemy’s theory and from Copernicus’s. Realists about Copernicus become surrealists about Ptolemy, in order to explain the empirical adequacy of Ptolemy. But Copernican realism, not Ptolemaic surrealism, is doing the explaining here. Copernicus tells us why the phenomena are as if Ptolemy were true.

The key premise of the Miracle Argument was that the truth of a theory is the best explanation of the empirical adequacy of that theory. So far, at least, that key premise seems to be correct. From which it follows, provided we accept IBE, that it is reasonable to believe that an empirically adequate theory is true. (Of course, this argument assumes a realist theory of truth, which makes of truth something more than empirical adequacy. If we go in for an ‘empirical adequacy theory of truth’, which collapses truth into empirical adequacy, then the Miracle Argument also collapses.)

There are two worries about the argument so far. The first is that it concerns an extreme case, that of empirically adequate theories. How common in science are these? I shall come back to this worry in the next section. The second worry is more subtle. We have assumed that truth explains empirical adequacy better than empirical adequacy does, because the latter ‘explanation’ is completely circular. Now normally, when we go for explanatory depth as opposed to circularity, we would like some independent evidence that the explanation is true. But there can be no independent evidence favouring an explanation in terms of truth against a (circular) explanation in terms of empirical adequacy. The realist explanation may tell us more than the antirealist explanation, but in the nature of the case there can be no evidence that the more it tells us is correct. My response to this is to bite the bullet: there are explanatory virtues that do not go hand-in-hand with evidential virtues. How could the two go hand-in-hand, when the explanatory rival is by design evidentially equivalent?

It should really be obvious that explanatory virtues do not always go hand-in-hand with evidential virtues. The ancients explained the motions of the fixed stars by saying that they were fixed on the surface of an invisible celestial sphere which rotates once a day around the central earth. Compare that hypothesis with its surrealist transform, the hypothesis that the stars move as if they were fixed to such a sphere. The realist hypothesis is explanatory, the surrealist hypothesis is not, despite the fact that the latter is expressly designed to be evidentially equivalent with the former. Similarly with the nineteenth-century geological theory of fossil formation, G, and Philip Gosse’s surrealist transform G*: God created the universe in 4004 BC as if G were true. There are quite different explanations here, but no geological evidence can decide between them—it was not on evidential grounds that nineteenth-century thinkers rejected G* out of hand. Finally, and most generally, consider the realist explanation of the course of our experience proffered by common sense and science, R, with its Berkeleyan surrealist transform R*: God causes our experiences as if R were true. Again, no experience can decide between R and R*, since R* is expressly designed to be experientially equivalent with R.

These examples are meant to show that realists should not be browbeaten by the fact that antirealists can come up with alternative hypotheses to the realist ones which empirical evidence cannot exclude. These alternatives can be excluded on explanatory grounds. Either they provide no explanations at all, or only incredible ones. It is the same with antirealist explanations of the success of scientific theories in terms of their empirical adequacy (however precisely formulated). Such explanations are either no explanations at all, or completely inadequate circular ones, and can be rejected as such.

This, if accepted, only shows that the realist explanation of science’s success is better than some antirealist ones. Perhaps there is another antirealist explanation that we have not yet considered? And in any case, how good an argument for realism is it, that the truth of a theory best explains its empirical adequacy?

Laudan’s historical critique

Larry Laudan is the foremost critic of the miracle argument on historical grounds. He points out, first of all, that the global claim that science is successful is a hopeless exaggeration. Many scientific theories are spectacularly unsuccessful. We must confine ourselves to successful theories, rather than to science as a whole. But even among successful theories, there are many that enjoy some success, but are not completely successful. What this means is that a theory yields some true observational consequences and some false ones, saves some regularities in the phenomena but gets others wrong. Now assuming that the scientists involved have made no logical or experimental error, and assuming that the false predictions have actually been tested, a partially successful theory of this kind has been falsified. No sensible realist can invoke the truth of a falsified theory to explain its partial success! As we saw, it is only consistent empirical success, or empirical adequacy, that can be explained in terms of truth.

Laudan’s historical objection to scientific realism consists mainly in producing examples of theories that were successful yet neither referential nor true. But most of Laudan’s historical counterexamples fall away, once we realise that they are examples only of partially successful theories, theories that were successful for a while but later turned out to be false (and in some cases, non-referential). No realist ever invoked truth to explain the partial success of a falsified theory. The Miracle Argument concerns only a very special case, the total predictive success of an empirically adequate theory. As we have already seen, the realist is right that truth (and reference) is the best explanation of empirical adequacy.

Actually, it is even worse. The chief target of Laudan’s famous ‘confutation of convergent realism’ (Laudan 1981) is what we might call ‘referential realism’, the idea that ‘reference explains success’. To be fair to Laudan, this was the view that one could glean from incautious formulations to be found chiefly in Putnam’s writings. It is a view which has spawned what I have called ‘entity realism’, the idea that realists need not believe in the truth or near-truth of any theories, that it is enough just to believe in the theoretical entities postulated by those theories. It is not for nothing that Laudan attributes to the realist claims like ‘A theory whose central terms genuinely refer will be a successful theory.’ And he proceeds to refute this claim by giving examples of referential theories that were not successful, and of successful theories that were not referential.

Referential or entity realism is a hopeless form of realism. There is no getting away from truth, at least for realists. To believe in an entity, while believing nothing else about that entity, is to believe nothing or next to nothing. I tell you that I believe in hobgoblins. ‘So’, you say, ‘You think there are little people who creep into houses at night and do the housework.’ To which I reply that I do not believe that, or anything else about what hobgoblins do or what they are like—I just believe in them. It is clear, I think, that the bare belief in hobgoblins—or equivalently, the bare belief that the term ‘hobgoblin’ genuinely refers—can explain nothing. It is equally clear, I think, that mere successful reference of its theoretical terms cannot explain the success of a theory. Laudan has an excellent argument to prove the point. Take a successful theory whose terms refer, and negate some of its claims, thereby producing a referential theory that will be unsuccessful. ‘George Bush is fat, blonde, eloquent and atheistic’ refers to Bush all right, but would not be much good at predicting Bush-phenomena.

The ink spilled on reference is not wasted ink. That is because reference is (typically) a necessary condition for truth. A theory which asserts the existence of an entity will not be true unless that entity exists. But reference is not a sufficient condition for truth. A theory can be referential, yet false—and referential, yet quite unsuccessful. Laudan exploits the fact that truth requires reference—and adds that near-truth requires reference as well. He produces examples of non-referring theories that were successful, and argues that since they were non-referential theories they were neither true nor nearly true. What chance, then, of explaining success in terms of truth and reference?

But no sensible realist ever explained partial success in terms of truth and reference. Laudan produces no example of a consistently successful or empirically adequate theory that was (we think) neither true nor referential. The Miracle Argument, as we have considered it so far, refers only to the special case of empirically adequate theories.

But can the realist take comfort from this rejoinder? Laudan might now object that the realist has jumped out of the frying pan into the fire. Empirical adequacy is an extreme case, rare, perhaps even non-existent, in the history of science. Most, perhaps all, theories in the history of science enjoy, at best, only partial success. It is the sum of these partial successes that phrases like ‘the success of science’ refer to. Now if the realist is only going to invoke truth to explain empirical adequacy, partial success is left unexplained. And, since the ‘success of science’ is a collection of partial successes, the success of science is left unexplained as well.

There is, moreover, a further, very obvious antirealist question. If partial success is explicable at all, it must be explicable in terms other than truth. So why can we not explain total success in terms other than truth as well? I shall defend an obvious realist response to this: just as total success is best explained in terms of truth, so also partial success is best explained in terms of partial truth.

Partial truth versus verisimilitude

Return to the Ptolemy–Copernicus case. So far we have assumed that Ptolemaic astronomy was empirically adequate, and Copernican astronomy true. Of course, neither assumption is strictly correct. What is really the case is that Ptolemy’s explanation of retrograde motions shared a true part with Copernican theory. That true part, common to both theories, sets out the relative motions of earth, sun and superior planets.

Partial truth is not the same as verisimilitude. Verisimilitude is closeness to the truth—the ‘whole truth’—of a false theory taken as a whole. Partial truth is just truth of parts. A simple example will make the difference clear. ‘All swans are white’ is false, because of the black swans in Australasia. (I had to get this baby example in—as some uncharitable soul once joked, having black swans in it is Australasia’s chief contribution to the philosophy of science!) Despite its falsity, ‘All swans are white’ is predictively successful in Europe, and bird-watchers find it useful to employ it there. I do not know how close to the (whole) truth ‘All swans are white’ is, and none of the captains of the verisimilitude industry can tell me in less than 100 pages of complicated formulas. I do know that ‘All swans are white’ has a true part (a true consequence) ‘All European swans are white’, whose simple truth explains the success European bird-watchers have.

The simple example with the swans can be generalised. A false theory T might be successful (issue nothing but true predictions) in a certain domain D. Explain this, not by saying that T is close to the truth, but by saying that ‘In domain D, T’ is true. A false theory T might be successful (issue nothing but true predictions) when certain special conditions C are satisfied. Explain this, not by saying that T is close to the truth, but by saying that ‘Under conditions C, T’ is true. A false theory T might be successful as a limiting case. Explain this, not by saying that T is close to the truth, but by saying that ‘In the limit, T’ is true. Notice that ‘In domain D, T’ and ‘Under conditions C, T’ and ‘In the limit, T’ are all logical parts of T, that is, logical consequences of T. Of course, the conjunction S of the successes of T is also a logical consequence of T. But while S does not (satisfactorily) explain S itself, ‘In domain D, T’ or ‘Under conditions C, T’ or ‘In the limit, T’ might explain S perfectly well. These restricted versions of T are not the same as its surrealist transform—restricted versions of T may be explanatory while its surrealist transform is not.

Of course, if we accept such an explanation, it immediately raises the question of why the restricted version of T is true while T is false. Typically, it is the successor theory to T that tells us that T is true in a certain domain, or under certain special conditions, or as a limiting case. Still, that this further question can be asked and answered does not alter the fact that a true restricted version of T can explain T’s partial success while T’s surrealist transform does not.

It is the same with approximate truth, as when we say that ‘It is 4 o’clock’ or ‘John is 6 feet tall’ are only approximately true. What we mean is that ‘It is approximately 4 o’clock’ or ‘John is approximately 6 feet tall’ are true. And if we want to be more precise, we can say that ‘It is 4 o’clock give or take 5 minutes’ or ‘John is 6 feet tall give or take an inch’ are true. Approximate truth is not to be explained by trying to measure the distance of a sentence from the (whole) truth. Approximate truth is truth of an approximation. Approximate truth is a species of partial truth, since the approximations in question are logical parts of what we began with. ‘It is 4 o’clock’ logically implies ‘It is approximately 4 o’clock’ as well as ‘It is 4 o’clock give or take 5 minutes’, and ‘John is 6 feet tall’ logically implies ‘John is approximately 6 feet tall’ as well as ‘John is 6 feet tall give or take an inch.’

I have come to believe that the entire verisimilitude project was a bad and unnecessary idea. Popper’s definition of the notion of ‘closeness to the (whole) truth’ did not work. The plethora of alternative definitions of ‘distance from the (whole) truth’ that have taken its place are problematic in all kinds of ways. And what was the point of the verisimilitude project? Precisely to explain how a false theory can have partial success. Now it is obvious that a true theory will be successful—after all, true premises yield true conclusions. But it is not obvious that a theory which is close to the truth will be successful, since near-truths yield falsehoods as well as truths. We should eschew the near-truth of false wholes, in favour of the simple truth of their parts. We should explain partial success in terms of truth of parts. Whole truths are wholly successful, partial truths partially successful. Either way, it is simple truth, not verisimilitude, that is doing the explaining.

Some success is not surprising—novelty

I am not saying that partial success can always be explained by partial truth in this way, nor am I saying that it need be so explained. There is a kind of partial predictive success that needs no explanation at all, because it is no ‘miracle’ at all—it is not even mildly surprising! Here is a simple schematic example to illustrate what I mean. Suppose a scientist has the hunch that one measurable quantity P might depend linearly on another measurable quantity Q—or perhaps the scientist does not even have this hunch, but just wants to try a linear relationship first, to see if it will work. So she measures two pairs of values of the quantities P and Q. Suppose that when Q is 0, P is 3, and when Q is 1, P is 10. She then plots these as points on a graph, and draws a straight line through them representing the linear relationship. She has performed a trivial deduction:

P = aQ + b, for some a and b.
When Q is 0, P is 3 (so that b = 3).
When Q is 1, P is 10 (so that a = 7).
Therefore P = 7Q + 3.

Now, the point to notice is that the hypothesis P = 7Q + 3 successfully predicts, or ‘postdicts’, or at least entails that when Q is 0, P is 3, and that when Q is 1, P is 10. Are these successes miraculous, or even mildly surprising? Of course not. Those facts were used to construct the hypothesis (they were premises in the deductive argument that led to the hypothesis). It is no surprise or miracle that the hypothesis gets these things right—they were used to get the hypothesis in the first place.
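The scientist's trivial deduction can be sketched in code. This is only an illustration of the schematic example above; the function name `fit_linear` and the way the two measurements are passed in are my own choices, not anything in the text.

```python
# Sketch of the 'trivial deduction' in the text: two measured pairs
# (Q, P) suffice to fix the parameters of an assumed linear law P = aQ + b.
# The data values follow the article's example.

def fit_linear(q0, p0, q1, p1):
    """Solve P = a*Q + b exactly from two (Q, P) measurements."""
    a = (p1 - p0) / (q1 - q0)  # slope determined by the two points
    b = p0 - a * q0            # intercept
    return a, b

a, b = fit_linear(0, 3, 1, 10)  # when Q is 0, P is 3; when Q is 1, P is 10
assert (a, b) == (7, 3)         # the deduced hypothesis: P = 7Q + 3

# The hypothesis 'postdicts' the very data used to construct it --
# no surprise, and no miracle:
assert a * 0 + b == 3
assert a * 1 + b == 10
```

The assertions at the end make the article's point concrete: the hypothesis is guaranteed to fit the two measurements, because they were premises of the deduction that produced it.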

This trivial example illustrates a general point. Success in predicting, or postdicting, or entailing facts used to construct a theory is no surprise. It is only novel predictive success that is surprising, where an observed fact is novel for a theory when it was not used to construct it.

Finally, a realist can say that accidents happen, some of them lucky accidents, in science as well as in everyday life. Even when a fact is not used to construct a theory, that theory might successfully predict that fact by lucky accident. It is not my claim that the correct explanation of predictive success is always in terms of truth or partial truth. My claim is that the best explanation of total predictive success is truth, and that the best explanation of partial predictive success (where it is not a lucky accident) is partial truth.

Nancy Cartwright argues that the predictive success of science is always a kind of lucky accident. It always arises from what Bishop Berkeley called the ‘compensation of errors’. According to Cartwright, the laws or theories in science are always false (I shall come back to this). But scientists busy themselves to find other premises which, when combined with these false laws, will generate true predictions. And, scientists being clever folk, it is no wonder that they succeed. A trivial example may make the point clear. Suppose the ‘phenomenological law’ we want is ‘Humans are two-legged’, and the false law of nature we have to work with is ‘Dogs are two-legged’. What do we have to add to the false law to get the phenomenological law? Well, the auxiliary hypothesis ‘Humans are dogs’ will do the trick. And two wrongs, carefully adjusted to each other, make a right.

Bishop Berkeley complained that the mathematicians of his day were only able to get correct results in their calculations because they systematically made mistakes that cancelled one another out. Berkeley observed that there was nothing so scandalous as this in the reasoning of theologians. Cartwright thinks the scandal is endemic in the reasoning of physicists: ‘Adjustments are made where literal correctness does not matter very much in order to get the correct effects where we want them; and very often… one distortion is put right by another’ (Cartwright 1983, p. 140).

Now in a case like this, one would be crazy to suppose that the best explanation of the theory’s predictive success is its truth. The success is accidental, from a logical point of view. Of course, the success is no accident at all from a heuristic point of view. It is, in fact, a variant of a case with which we are already familiar. We use a known fact (‘Humans are two-legged’ in the trivial example), and a false theory we have (‘Dogs are two-legged’, in the trivial example), to generate an auxiliary theory (‘Humans are dogs’, in the trivial example) that will get us back to the known fact. It is no miracle that we get out what we put in. And our success in getting it is no argument for the truth of what we get it from.

Why does Cartwright think that the laws of physics lie, that is, are always false? The laws lie, she thinks, because they idealize or simplify things—they are false because they do not tell the whole truth. This is a mistake. ‘Nancy Cartwright is clever’ is not false, just because it does not tell the whole truth about Nancy Cartwright. Similarly, Newton’s law of gravity is not false just because it does not tell the whole truth about the forces of nature.

Never mind this. The important point is that predictive success is no miracle if the predicted facts are used to construct the theory in the first place. What is miraculous is novel predictive success. And the best explanation of such ‘miracles’ is truth, either truth of wholes or truth of parts.

References

Cartwright, N. (1983) How the Laws of Physics Lie, Oxford: Oxford University Press.

Laudan, L. (1981) 'A Confutation of Convergent Realism', Philosophy of Science, 48: 19-49.

Leplin, J. (1988) 'Surrealism', Mind, 97: 519-524.

Lycan, W. (1985) 'Epistemic Value', Synthese, 64: 137-164.

Peirce, C. S. (1931-58) The Collected Papers of Charles Sanders Peirce, ed. C. Hartshorne and P. Weiss, Cambridge, Mass: Harvard University Press.

Psillos, S. (2001) 'Predictive Similarity and the Success of Science: A Reply to Stanford', Philosophy of Science, 68: 346-355.

Putnam, H. (1975) Mathematics, Matter and Method: Philosophical Papers, volume 1, London: Cambridge University Press.

Stanford, P. K. (2000) 'An Antirealist Explanation of the Success of Science', Philosophy of Science, 67: 266-284.

Sankey, H. (2006) ‘Why is it rational to believe scientific theories are true?’, in C. Cheyne and J. Worrall (eds), Rationality and Reality: Conversations with Alan Musgrave, Dordrecht: Springer, 109-132.

Smart, J.J.C. (1968) Between Science and Philosophy, New York: Random House.

van Fraassen, B. (1980) The Scientific Image, Oxford: Clarendon Press.