1. INTRODUCTION
Some years ago, Daniel Hausman argued in an insightful paper published in Economics and Philosophy that methodologists pursuing ‘realist’ projects, such as Uskali Mäki, Tony Lawson and their respective schools, had better relabel their projects because it is not realism as such that divides them from other methodologists but rather more specific claims they maintain we should interpret realistically, such as claims about causal capacities, mechanisms or powers (Hausman 1998). The issue that separates traditional scientific realists from anti-realists is their stance on how to interpret statements about ‘unobservables’. Hausman argues convincingly that while economics theorizes about many entities that are indeed undetectable by human sensation, its theoretical entities are mere analogues of folk entities and as such do not pose any special epistemological problems.
Hausman is right that worries about the existence of economics’ unobservables – such as preferences, beliefs, firms, inflation rates and financial crises – are not of much philosophical interest. These kinds of worries are, however, not the only way to understand the realism/anti-realism issue. In line with much recent philosophy of science, an alternative is to place models at the centre of the debate and, specifically, to regard the treatment of idealizations or ‘false assumptions’ as what differentiates the two camps. Indeed, in Friedman's (1953) essay ‘unobservables’ played no role; yet it is rightly regarded as the classic statement of one version of anti-realism – instrumentalism – in economics (Boland 1979), and there is certainly much in that essay many other methodologists would disagree with.
The aim of this paper is twofold. My first aim is to formulate versions of realism and instrumentalism that are philosophically interesting and applicable to economics, pace Hausman. With respect to this aim the paper does realists a favour: if they must defend realism, they should do so in ways that are philosophically interesting and non-trivial. My second aim is less favourable to realists. In particular, I aim to defend instrumentalism, first by supplying a number of positive considerations in its favour and second by defusing common realist arguments. I will end with a disclaimer regarding a worry raised by Friedman's (1953) essay.
2. THE REALISM/INSTRUMENTALISM DIVIDE: GETTING IT RIGHT
Given the philosophical climate today as well as some specific facts about economics, it would be too easy to formulate the realism-instrumentalism issue in ways that trivialize the matter. Consider the following passage (Mäki 2005: 238):
It is sufficient for qualifying as a realist to hold weaker beliefs of forms such as these: Either Y [a scientific entity] exists or it doesn't; Y is the kind of entity that it has a chance of existing; It [sic] is not incoherent to think Y exists; Y might exist, and if it does, it would help explain phenomenon P.
If belief that Y does or doesn't exist suffices for being a realist, it would indeed be hard not to be one. Even less watered-down positions stand in danger of trivially qualifying as either realist or instrumentalist in virtue of what nearly everyone in the philosophical community believes nowadays. Few philosophers have any serious special concerns about entities that are not observable by the unassisted senses. Thus, if rejecting an epistemically significant dichotomy between observable and unobservable automatically qualified one as a realist, most of us would be realists. Moreover, certain formulations of the realism-instrumentalism divide are hard to apply to economics in ways that yield attractive positions. If there are no epistemically problematic unobservable entities in economics (as argued by Hausman) but the attitude towards such entities were the defining difference between realism and instrumentalism, it would simply not matter on which side of the divide an economic methodologist falls. Nevertheless I believe that one can outline positions that have considerable substance, that are controversial, and that can be applied to economics in methodologically and philosophically interesting ways. My aim in this section is to formulate just such versions of the respective positions.
There are two dimensions to the scientific realism/instrumentalism divide: an axiological and an epistemic one (Lyons 2005; Chakravartty 2011). The axiological dimension is best introduced by means of a metaphor. According to the metaphor, the scientific realist thinks of science as striving to discover a ‘mirror of nature’ (Rorty 1979): its theories, models, statements or results aim to provide a representation of the world that is faithful to how this world is. Its orientation is contemplative. Its goal, perhaps not the only one but by far the most significant, is the kind of understanding true accounts of nature confer upon the mind. Its figurehead is Aristotle who, for instance, in his Metaphysics proclaimed (I.980a21):
All men by nature desire to know. An indication of this is the delight we take in our senses; for even apart from their usefulness they are loved for themselves; and above all others the sense of sight. For not only with a view to action, but even when we are not going to do anything, we prefer sight to almost everything else. The reason is that this, most of all the senses, makes us know and brings to light many differences between things.
The instrumentalist, by contrast, thinks of science as striving to build a ‘toolbox’: its theories, models, statements or results aim to provide its users with devices for orienting themselves in this world and for moulding it into shape according to their values and aspirations. Its orientation is practical. It regards understanding as an at best subsidiary goal to the more worthy ultimate goals of successful anticipation of and control over phenomena. Truth plays a much attenuated role in this image of science, if any – for there are many useless truths, just as there are many useful falsehoods. Truth is a virtue at best only as long as it is conducive to purpose. Its figurehead is Francis Bacon who thought, famously, that ‘knowledge is power’ and that ‘to know something (a natural phenomenon) amounts to being able to (re)produce that very phenomenon on any material substratum susceptible of manifesting it’ (Pérez-Ramos 1996: 115).
The way I understand it, the difference between the two is not so much a matter of beliefs about what science achieves or can achieve but rather of what is considered valuable. Realists maintain that there is a fact of the matter as to whether an account of the world is true or false and that, everything else being equal, the truer the account, the better it is. Instrumentalists are simply not concerned with that question. A good account is a useful account, and truth has at most instrumental value.
Classically, realists demand that science at least sometimes succeeds in coming up with theories that are true, at least approximately, in everything they say about nature, in both their observable and their unobservable content. Classical instrumentalists are interested only in a theory's observable content, and in particular in whether it predicts accurately (Popper 1963: 107ff., for instance, understood the realism issue in those terms; for more recent contributions to the realism debate which build on the observable/unobservable distinction, see Psillos 1999; Chakravartty 2007).
To divide the camps along the observable/unobservable line would, however, make the debate uninteresting to economists. It is not that realism understood in this sense is a complete non-issue (see for instance Guala 2012 on the ‘reality of preferences’). But the bulk of the methodological action lies elsewhere. The great debates concern not whether preferences, beliefs, firms, inflation rates and financial crises exist but whether or not certain specific ways of representing these entities in scientific models are appropriate. Let me give three examples. Critics of prospect theory, an alternative to standard expected utility theory, often charge that the theory addresses only one-time decisions presented to experimental subjects and therefore doesn't sufficiently account for the learning opportunities and competitive pressure for rational behaviour that exist in markets (Myagkov and Plott 1997). Neither side denies that there are unobservable decision processes responsible for observable behaviour. Rather, the disputed issue is whether ‘prospect theory and the methods used to support it can be employed to produce a model that captures data in a purely economic context’ (Myagkov and Plott 1997: 802; emphasis added). One of the main bones of contention in the CPI controversy was whether the U.S. Consumer Price Index should be modelled as a so-called ‘cost-of-living index’, which assumes that all consumers make their purchasing decisions in such a way as to maximize utility given their budget constraints (Reiss 2008: Chs 2–3). Whether or not inflation exists is not an issue in this debate. Finally, after the financial crisis of 2008 a small industry of methodological papers appeared trying to convince us that the unrealistic models of the economy modern economics tends to use helped to cause the crisis (e.g. Colander et al. 2009). Once more, the issue is not whether the financial crisis or its causes are ‘real’ but rather whether certain idealizations economists often employ in their models help to explain the crisis.
The claim that models are at the core of scientific practice is not new (Morgan and Morrison 1999). But if we understand the realism debate as one about idealizations, we run the risk of trivializing the matter in favour of instrumentalism. This is because of the commonplace that ‘all models are false’ (Box and Draper 1987: 424; Hughes 1990: 71). That this is so can easily be seen once we, with Ron Giere and many others, think of models as maps (Giere 1988). By their very nature maps contain many falsehoods. A perfectly accurate representation that mirrors every single detail of its target would cease to be a genuine map because it would be utterly useless. Genuine maps must idealize heavily – for instance, by omitting countless details because of scaling and simplification or by deliberate misrepresentation for convenience (when, say, a curved street is represented as straight). Something analogous is true of models in science.
The realist therefore has to define a set of aspects of a model whose truth or truthful representativeness she thinks is valuable – a set that is smaller than the set of all the model's aspects (for some are sure to be false) and at the same time larger than, or at least different from, the set of aspects the instrumentalist regards as relevant. If the realist valued the truth of all aspects of a model, her stance would be trivially mistaken. At the same time, she must aim for more than ‘whatever makes the model's relevant predictions accurate’, for that would leave no space for the instrumentalist.
There are many forms of ‘partial’ realism in the philosophy of science: semirealism, entity realism and structural realism, among others (see Chakravartty 1998; Hacking 1983; Worrall 1989, respectively). Some of these have counterparts in economics. For instance, Hoover (1995) can be understood as a defence of entity realism (one of his arguments explicitly draws on Hacking 1983); Ladyman and Ross (2007) defend structural realism as a viable option for both physics and economics.
I would nevertheless argue that causal realism is the most relevant form of partial realism for contemporary methodological debates. One reason is that the most vocal defenders of realism in economics seem to endorse causal realism rather than any of the other forms of partial realism (Lawson 1997, 2003; Mäki 2005, 2011b). Another is that significant recent debates in methodology are intimately connected with causality. For instance, Guala's ‘methodology of experimental economics’ builds on and elaborates Mill's methods of causal inference (Guala 2005: Ch. 4). The debate concerning external validity or extrapolation focuses on the extrapolation of causal claims from a test to a target population (Guala 2005; Steel 2008). Similarly, the debate over whether economics is well advised to import evidence-based methodologies from medicine looks at randomized evaluation as a method of causal inference (Teira and Reiss forthcoming).
I therefore propose to define the realism/instrumentalism divide in terms of the respective attitudes towards causal claims. Accordingly, let us say that realists regard their value (truth) to be realized by models that are (approximately) true in every detail that is causally relevant for an outcome of interest. For instance, if the outcome of interest is the velocity a (relatively heavy) falling body assumes at a specific time after being released near the surface of the Earth, a model that predicts correctly may assume, for instance, that the body has no colour (because colour is under most circumstances irrelevant to the speed of fall), that all of its mass is concentrated in a single point (because the shape of the body makes a negligible difference for the case at hand), or that air resistance is zero (ditto). But it must represent gravity as the important factor that produces the outcome. The instrumentalist's model, by contrast, may assume anything about any aspect of the phenomenon of interest, including the generating causal factor, as long as it correctly anticipates the outcome. Of course, in order to generate a successful prediction, the instrumentalist's model must be based on robust relationships between measured variables. Once robustness (or reliability) of empirical relationships is achieved, however, whether or not the relationships are truly causal is not significant. To give a silly example, suppose all heavy bodies were white and all light bodies black. An instrumentalist could use the spurious correlation between colour and rate of fall to build models that adequately predict the behaviour of such bodies – and regard them as realizing his or her values. A realist would not consider such a model a good one.
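A minimal sketch (with invented numbers, in the spirit of the silly example) makes the point concrete: in the stipulated world, a predictive device keyed to the spurious correlate, colour, issues exactly the same predictions as a model keyed to gravity, and so realizes the instrumentalist's value equally well, while failing the realist's test.

# A toy sketch of the stipulated white/heavy world above: a rule keyed to
# colour predicts the speed of heavy bodies just as well as a model keyed
# to gravity acting on a point mass. All numbers are invented.
G = 9.81  # m/s^2, gravitational acceleration near the surface of the Earth

def causal_model(body, t):
    """'Realist-approved' model: the point mass falls under gravity, v = g*t."""
    return G * t

def colour_model(body, t):
    """'Instrumentalist' device keyed to the spurious correlate, colour:
    white bodies fall at g*t (light, black bodies would get another rule)."""
    assert body["colour"] == "white"
    return G * t

for body in [{"colour": "white", "mass_kg": 5.0},
             {"colour": "white", "mass_kg": 9.0}]:
    t = 2.0  # seconds after release
    print(causal_model(body, t), colour_model(body, t))  # identical predictions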
But thinking about realism in terms of what is considered valuable is not quite enough, because to value truth is compatible with science never achieving its aim, and even with the impossibility of science achieving its aim (Kitcher 1993: 150). The realist must therefore demand more. This ‘more’ lies in the epistemic dimension of realism. Consider the classical debate one more time. One motivation behind instrumentalism is some sort of scepticism regarding our ability to know the true causal structure that is responsible for phenomena of interest. Some instrumentalists do not aim at truth regarding the underlying causal structure because they think it is not knowable. Realists, by contrast, hold a more optimistic position, namely that true causal structure is at least sometimes knowable.
Let us distinguish two forms of scepticism: foundationalist and contextual. The foundationalist sceptic believes that a whole class of claims (say, about unobservables or causal relations) is in principle unknowable. Her arguments are of a general epistemological nature. For instance, a foundationalist empiricist may argue that the unobservable is unknowable because all genuine knowledge is necessarily based on sense impressions. The contextual sceptic, by contrast, has no patience for arguments of this kind. He too may believe that certain classes of claims are unknowable but with two differences. First, the classes are narrower and concern specific scientific domains, periods and stages of development of a science. Second, his sceptical arguments are based on subject-specific factual knowledge. A contextual sceptic about history, say, may argue that claims of historical causation are not knowable because all reliable methods of causal inference require more background knowledge than we currently have, or background knowledge of a different kind. This means however that future developments, in methods or accumulation of background knowledge, may change his attitude towards historical claims.
Full-blown foundationalist scepticism about causal structure is regarded by most as unwarranted today. It is relatively uncontroversial to assert that causal knowledge is in principle no less reliable than other types of scientific knowledge. Causal knowledge is hard to come by, and impossible to come by without background knowledge of the right kind. But then every new scientific finding builds on others, and most agree that, on the whole, there is nothing distinctive about establishing causal as opposed to other kinds of claims.
Contextual scepticism about causal knowledge in economics is, by contrast, more plausible. The arguments are well rehearsed: social phenomena are complex, open and evolving; experimentation is always difficult and often unethical, unaffordable or technologically unrealizable; there is no well-confirmed theory (of the sort that would allow the identification of econometricians’ instrumental variables or the building of a structural model); there isn't much reliable background knowledge to build on; most variables are measured with systematic error; and so on.
In this respect the resulting difference between the two positions is more quantitative than qualitative, but no less pronounced for that. The realist acknowledges the difficulties but is optimistic that they can be overcome and certainly believes that science should aim to overcome them. The instrumentalist is more impressed by them. While he remains faithful to contextualism in his understanding that these problems are contingent upon the current state of knowledge, he regards them as real obstacles to progress, thinks that it's a bad idea to hunt what cannot be had, and aims to make the best of what we can do right now.
The following picture emerges: the realist values models that are true in every causally relevant detail and is optimistic that such truth is at least sometimes attainable; the instrumentalist values usefulness, regards truth as at most instrumentally valuable, and takes the contextual sceptic's worries seriously enough to settle for what can reliably be had.
3. THREE CHEERS FOR INSTRUMENTALISM
In his paper on realism, Dan Hausman observes that instrumentalism has three different motivations (1998: 187): pragmatism, positivism and pessimism. Each of these corresponds loosely to a dimension of the debate that I have described above. The pragmatist seeks cash-value, not truth. The positivist seeks robust relations between measurable quantities rather than hidden causal structures. The pessimist is sceptical about our abilities to learn causal structures; he takes methodological hurdles seriously and restricts himself to what can reasonably be expected to bear epistemic fruit. As we'll see in a moment, the three motivations double up as arguments in favour of instrumentalism.
Cheer 1 (‘Pragmatism’): Once we have usefulness, truth is redundant
Realists value true models or theories. Let us, for now, focus on valuing truth in so far as it is knowable; sceptical arguments will be considered below. The first thing to observe is that realists will require more than truth simpliciter. It is trivial to generate any number of truths one wishes. Count the number of objects in your office. Measure each object's weight and compute an average. Line them up in order of ascending weight and write down the first letter of the name of each object. Now determine which place that letter has in the alphabet and apply the following formula (where n₁ is the first number, n₂ the second and so on): (n₁ − n₂)/n₃ + (n₄ − n₅)/n₆ + …
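The recipe is mechanical enough to automate. A minimal sketch, with invented office contents, grinds out one such perfectly true and perfectly uninteresting number:

# A minimal sketch of the 'trivial truth' recipe above; object names and
# weights are invented. The output is a true, precise and utterly boring
# fact about this imaginary office.
office = [("stapler", 0.3), ("book", 1.1), ("lamp", 2.4),
          ("monitor", 4.8), ("chair", 7.9), ("desk", 32.0)]  # (name, kg)

average_weight = sum(w for _, w in office) / len(office)     # one more truth
ordered = sorted(office, key=lambda item: item[1])            # ascending weight
letters = [name[0].lower() for name, _ in ordered]            # first letters
numbers = [ord(letter) - ord("a") + 1 for letter in letters]  # place in alphabet

# (n1 - n2)/n3 + (n4 - n5)/n6 + ... over consecutive triples of numbers
value = sum((numbers[i] - numbers[i + 1]) / numbers[i + 2]
            for i in range(0, len(numbers) - 2, 3))
print(average_weight, value)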
Truth is sought, to the extent that it is sought, because it serves some purpose. Philip Kitcher puts the point as follows (1993: 93f.; emphasis original):
The most obvious pure epistemic goal is truth. [. . .] But, in my judgment, truth is not the important part of the story. Truth is very easy to get.[Footnote suppressed] Careful observation and judicious reporting will enable you to expand the number of truths you believe. Once you have some truths, simple logical, mathematical, and statistical exercises will enable you to acquire lots more. Tacking truths together is something any hack can do. . . The trouble is that most of the truths that can be acquired in these ways are boring. [. . .] What we want is significant truth.
I will say a little more about significance below. For now, let us understand the term as ‘cognitively or practically valuable’. We want truth, then, because it is sometimes significant. And we want truth to the extent that it is significant. But here comes the catch: once the floodgates of significance are open, there is little reason to stop at useful or significant truth. Indeed, the above passage from Kitcher continues (emphasis original): ‘Perhaps, as I shall suggest later. . ., what we want is significance and not truth’.
Why would that be? Here is a simple argument. When truth and significance coincide, the instrumentalist has in principle no trouble accepting a true model – of course, to the extent that its truth is knowable. It's not that he deliberately seeks falsehood; he is just not bothered by it. But often truth and significance come apart. No-one will seek models that are true but lack significance. So the interesting category is that of false models that have significance. The instrumentalist has an idea of what it means for a model to be ‘good enough’ in the light of a given purpose. To use the example of the falling bodies again, assuming away air resistance may be good enough when the object is a compact ball but not when it is a feather (cf. Friedman 1953: 16–17). In this case the ‘significance’ of the model is given by its ability to predict the rate of fall, and further details about the purpose will determine how accurate the prediction has to be. The realist will find many such models wanting in an important respect. A false model may temporarily and heuristically be accepted for practical purposes, but it does not provide what she is ultimately after.
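The contrast can be put in rough numbers with a toy simulation (the drag parameters below are assumed, not measured): the vacuum model's prediction is off by a few per cent for a compact ball but off by an order of magnitude for a feather, which is why the same idealization can be good enough for one purpose and hopeless for another.

# A toy simulation with assumed, illustrative parameters: compare the vacuum
# model (v = g*t) with a crude quadratic-drag model for a compact ball and a
# feather, two seconds after release.
G = 9.81      # m/s^2
DT = 0.001    # s, Euler integration step
T_END = 2.0   # s, prediction horizon

def speed_with_drag(mass_kg, drag_coeff, t_end):
    """Euler-integrate m*dv/dt = m*g - c*v**2 and return v at t_end."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += (G - (drag_coeff / mass_kg) * v * v) * DT
        t += DT
    return v

vacuum_prediction = G * T_END  # the idealized, air-resistance-free model

# (mass in kg, quadratic drag coefficient in kg/m) -- invented but plausible
for label, mass, c in [("compact ball", 0.5, 0.0005), ("feather", 0.001, 0.01)]:
    actual = speed_with_drag(mass, c, T_END)
    print(f"{label:12s}  vacuum model: {vacuum_prediction:5.1f} m/s,"
          f"  with drag: {actual:5.1f} m/s")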
Suppose, then, that a new model can be built that has just the same degree of significance along one dimension as the old model – the same degree of predictive accuracy, say – but has the additional virtue of being true. There are now two possibilities. Either the true model brings with it some additional significance (along some other dimension), in which case nothing is lost by demanding significance alone, as the instrumentalist does. Or the model brings no additional significance. But then it is very hard to see what the additional value of its truth is supposed to be.
Let me expand a bit. Classically, economics regards models or theories as significant only if, and to the extent that, they allow us to predict, to control or to explain economic phenomena, or some combination of these (Menger 1963). I will say more about prediction and control below in the context of models representing causal structures, so let me here focus on the relation between truth and explanation.
Perhaps the idea is that the predictive model mentioned above isn't a full model unless it also explains. And a false model cannot be explanatory. So here is an aspect of significance that cannot be had without truth.
This last claim is mistaken, however. One doesn't have to go far afield to find accounts of scientific explanation that make do without truth. Most famously, Bas van Fraassen's ‘pragmatics of explanation’ is a case in point (van Fraassen 1980: Ch. 5). From Nancy Cartwright we learn that ‘The Truth Doesn't Explain Much’ (Cartwright 1983: Essay 2) but models, which do, are ‘works of fiction’ (Cartwright 1983: 153). Philip Kitcher's account of explanation as unification would also fit the bill because he requires instantiations of unifying argument patterns to be ‘acceptable’ to the community rather than true (Kitcher 1989). There are also accounts of explanation that regard models as explanatory even when they don't provide (approximately) true descriptions of their targets (Bokulich 2011). The instrumentalist can therefore happily accept the demand for explanatory models.
One might object that the above reasoning begs the question. The accounts just mentioned are perhaps accounts of ‘explanation’, but they fail to demonstrate that false models genuinely explain, for truth is a precondition of genuine explanatoriness. The problem with that suggestion is that it would render insignificant those areas of science that are heavily model-driven, much of which scientists in general and economists in particular would themselves consider explanatory (for a detailed discussion, see Reiss 2012). Of course, a philosophical account of explanation has to distinguish genuinely explanatory models from those that merely ‘save the phenomena’. Such an account should not, however, portray scientists as making systematic mistakes about what to consider explanatory, even less as almost always getting it wrong. An account of explanation that regards truth as an essential ingredient would do just that.
Cheer 2 (‘Positivism’): There is something disturbing about causal structure
I will continue focusing on knowable truth here, this time truth about causal structure. The discussion of the extent to which causal structure is knowable in economics I will leave for the next sub-section. The question is: is it always advisable to seek models that represent true causal structure?
Causal truths were once sought because they were thought to bring usefulness in tandem. Aphorism 3 of Francis Bacon's Novum Organum illustrates the idea (Urbach and Gibson 1994: 43):
Human knowledge and human power meet in one; for where the cause is not known the effect cannot be produced. Nature to be commanded must be obeyed; and that which in contemplation is as the cause is in operation as the rule.
Consequently, there has been a long philosophical tradition in which causal claims doubled up as useful or significant claims of some sort. The most celebrated outcome of this tradition is John Stuart Mill's ‘symmetry thesis’ (see for instance Ruben 1990: 123), according to which causal explanation and prediction are two sides of the same coin: since causal laws are nothing but statements about regularities, upon observing the cause the effect can be predicted using the law; conversely, upon observing an effect, the cause can be retrodicted using the law, whereby the effect is explained.
The symmetry thesis has long since fallen into disfavour, notably in the philosophy of social science literature (see for instance Elster 1989: Ch. 1). An effect can always be prevented by an intervening or disturbing cause, which makes causal knowledge of limited usefulness for prediction. After the fact, though, events can be explained because we know the cause has been ‘successful’, i.e. not prevented by an interference. Or so the story goes.
Therefore, if we seek predictive success, we don't necessarily want to build causal models. A technical result from econometrics illustrates this fact. In the early days of forecasting theory, models were built on two presumptions: (a) that the econometric model provides a true representation of the underlying data-generating structure (or ‘mechanism’); and (b) that that structure remains stable within the forecasting horizon. Forecasters now reject these ideas, having realized that models are frequently mis-specified (i.e. they do not represent the underlying structure correctly) and that socio-economic systems tend to change a great deal. Thus, it cannot be proved that true causal models beat non-causal models in forecasting competitions. An important property for forecasting success is adaptability: a model that adapts more rapidly to a change in the underlying structure will beat a model that, after the occurrence of a break, is permanently off track. But since (as of today) causal models tend not to be robust to such shifts, they are often outperformed by non-causal models – models not based on variables describing the underlying data-generating structure (Hendry and Mizon 2001).
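A toy simulation (with invented numbers, and not Hendry and Mizon's own result) illustrates the adaptability point: a forecaster locked into the relation estimated before a structural break stays permanently off track, while a naive but adaptive device that forecasts tomorrow's value by today's recovers almost immediately.

# A toy illustration of adaptability after a structural break: compare a
# 'structural' forecast fixed at the pre-break mean with a naive random-walk
# forecast (tomorrow = today). All numbers are invented.
import random

random.seed(0)
T_BREAK, T_END = 100, 200
series, level = [], 10.0
for t in range(T_END):
    if t == T_BREAK:
        level = 20.0                       # structural break: the mean shifts
    series.append(level + random.gauss(0, 1))

pre_break_mean = sum(series[:T_BREAK]) / T_BREAK   # the fixed 'structural' estimate

structural_errors, adaptive_errors = [], []
for t in range(T_BREAK, T_END):
    structural_errors.append(abs(series[t] - pre_break_mean))  # fixed model
    adaptive_errors.append(abs(series[t] - series[t - 1]))     # random walk

print("post-break mean abs error, fixed model:   ",
      round(sum(structural_errors) / len(structural_errors), 2))
print("post-break mean abs error, adaptive model:",
      round(sum(adaptive_errors) / len(adaptive_errors), 2))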
The point is not that causal models are never predictively successful. It is that if we have a causal model, predictive power is an additional fact about the model; a model is not predictively successful qua representing causal structure. Conversely, there are predictively successful models that are non-causal.
Nancy Cartwright has argued in a series of papers (Cartwright 2006, 2009) that the same is true of the relation between causality and invariance – the property required for successful policy and control. It is worth quoting her at length (Cartwright 2009: 417–418):
Whether we start with real causation or some other relation and whether we end up with the ability to predict what happens under . . . manipulations, what makes for the connection between the two is invariance. [. . .]
The logic is simple. We have an association. We assume it to be invariant under a particular kind of manipulation. So we are able to use that association to predict what happens under the specified kind of manipulation.[Footnote suppressed] This logic works no matter whether the starting association is causal or not. [David] Hendry's proposal [for a definition of cause] is a case in point.
What good is causation then? It is generally supposed that there is some special connection between causation and policy prediction. [. . .] But that does not seem to be the case.
I won't go through the details of the argument here. Her point is the same as that made in the context of prediction: if we have a causal model, invariance under manipulation is an additional fact about the modelled relation; a relation is not invariant qua causal. Conversely, there are models representing invariant relations that are non-causal.
So what is disturbing about causal structure has to do not (only) with its epistemic status. I'm assuming for now that there are philosophically sound and practically workable tests for causality. The worry is that even when we are able to establish causality, additional facts about the represented relations have to be discovered in order to make a causal model practically useful. But once we've discovered those additional facts, it is irrelevant whether the relation at hand is causal or not.
It might be objected at this point that causal models have one virtue for sure: they are explanatory. However, in my view this point can only be made by presupposing a causal account of explanation and thereby begging the question. Other properties of the explanatory relation – say, relevance (van Fraassen), unificatory power (Kitcher), structural dependence (Bokulich) – may or may not be possessed by a given causal relation; whether it possesses them is an additional discovery, and non-causal relations do sometimes possess such properties. The situation is therefore exactly the same as with prediction and invariance.
In sum, the point is this: why seek to discover underlying causal structure? The classical answer is that knowing causal structure has a number of cognitive and practical virtues. We know now that causal knowledge has these virtues at best contingently but not necessarily. It is other features that realize the cognitive and practical virtues we seek, and causality may or may not co-occur with these other features. But then to the extent that we do want the virtues, we should seek the relevant features and not causality.
Cheer 3 (‘Pessimism’): It's better to do what one can than to chase rainbows
Setting aside deep epistemic worries about the knowability of causal relations, as well as the possibility that causal knowledge may lack usefulness, building causal models has enormous informational requirements. The probabilistic theory of causation proves this point. In one formulation, this theory says that to learn a new causal law ‘C causes E’ from probabilistic dependencies, one has to stratify with respect to all other causes of E (Cartwright 1979). That is, only when the probabilistic dependence of E on C persists conditional on all other causes of E is it indicative of a causal relation. But when are we ever in the fortunate position of knowing all but one of the causes of an outcome of interest?
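In symbols, a standard rendering of the condition just described (following Cartwright 1979) runs roughly as follows; the quantifier over the state descriptions K_j is exactly where the informational burden lies:

% C causes E iff conditioning on every arrangement of E's other causes
% leaves a positive probabilistic dependence of E on C.
\[
  C \text{ causes } E \;\iff\;
  P(E \mid C \wedge K_j) \,>\, P(E \mid \neg C \wedge K_j)
  \quad \text{for all state descriptions } K_j,
\]
\[
  K_j \in \bigl\{ \pm F_1 \wedge \pm F_2 \wedge \dots \wedge \pm F_n \bigr\},
  \qquad F_1, \dots, F_n = \text{the complete set of } E\text{'s other causes.}
\]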
Experimentation can reduce these informational requirements, but it is not always an option in economics. A reasonable response would be to accept the peculiarities of the economic world and devise inferential strategies that allow us to learn about this world efficiently and effectively. I want to illustrate this thought with some results from two research groups, one in decision theory and the other in econometrics.
Gerd Gigerenzer and his ABC Research Group (Gigerenzer et al. 1999; see also Gigerenzer and Selten 2001) are famous for their idea of ‘fast and frugal heuristics’. Their starting point is a criticism of mainstream rational choice theory, which, in their view, requires ‘demonic strength’ on the part of the decision maker in terms of memory, attention, cognitive skill as well as time and other resources to search for information. Real decision makers are bounded in many ways and therefore decision rules should be adapted to the capacities of the decision makers and the environment in which they operate. The idea of fast and frugal heuristics is meant to capture just that: rules that are ‘ecologically rational’ in that they exploit structures found in decision environments, simple enough to work when decision makers’ resources are limited and yet powerful enough to support good reasoning.
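To make the idea concrete, here is a minimal sketch of one heuristic from the ABC group's repertoire, ‘Take The Best’: run through the cues in order of their validity and let the first cue that discriminates between the options decide. The cues, validities and city profiles below are invented for illustration.

# A minimal sketch of 'Take The Best': search cues in order of validity and
# stop at the first cue that discriminates. Cue names, validities and the
# city profiles are invented.
def take_the_best(option_a, option_b, cues):
    """cues: list of (cue_name, validity); options: dicts mapping
    cue_name -> 1, 0 or None (unknown). Returns 'A', 'B' or 'guess'."""
    for cue, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        a, b = option_a.get(cue), option_b.get(cue)
        if a is not None and b is not None and a != b:
            return "A" if a > b else "B"   # first discriminating cue decides
    return "guess"                          # no cue discriminates

# Toy question: which of two cities is larger?
cues = [("has_major_airport", 0.9), ("is_state_capital", 0.8),
        ("has_university", 0.7)]
city_a = {"has_major_airport": 1, "is_state_capital": 0, "has_university": 1}
city_b = {"has_major_airport": 1, "is_state_capital": 1, "has_university": 1}
print(take_the_best(city_a, city_b, cues))  # -> "B", decided by a single cue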
The other example is a principle called ‘Marschak's Maxim’ by the econometrician James Heckman and his colleagues (e.g. Heckman and Vytlacil 2007: 4849). Marschak (1953) observed that for many questions of policy analysis there is no need to identify fully specified models that are invariant to whole classes of policies. If the researcher instead focuses on particular interventions, all that may be needed are combinations of subsets of the structural parameters – those required to identify the effect of the policy intervention – which are often much easier to identify. In other words, Marschak's Maxim says that one should formulate the policy question one seeks to address as specifically as possible, because that way the informational requirements can be dramatically reduced. For some questions, identifying only the reduced form of a structural model will suffice; for others, partial knowledge of the full model will do.
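A stylized example (my own illustration, not Heckman and Vytlacil's) of the kind of simplification the maxim licenses: suppose outcomes are governed by a linear structural equation with an endogenous regressor x. A full structural analysis would have to identify the joint distribution of x and the unobservables; but if the policy question is only what happens to mean outcomes when x is shifted by one unit for everyone, the single combination β suffices, and a valid instrument z identifies it from two reduced-form quantities:

\[
  y = \alpha + \beta x + u, \qquad \operatorname{Cov}(x,u) \neq 0,
\]
\[
  \beta \;=\; \frac{\operatorname{Cov}(z,y)}{\operatorname{Cov}(z,x)}
  \qquad \text{(for } z \text{ relevant and excluded),}
\]
% leaving the rest of the structural model unidentified and, for this
% narrowly posed policy question, unneeded.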
The standard structural estimation literature in econometrics that Heckman and Vytlacil are criticizing seeks to identify full causal models. Often enough, however, this cannot be achieved given data limitations. The less we know, the more important it is to honour Marschak's Maxim so that policy questions can still be addressed.
Principles such as ‘make decisions using fast and frugal heuristics’ and Marschak's Maxim attempt to strike a balance between the realist's exaggerated optimism about our ability to know the social world and outright scepticism. In so doing they also present good reasons to be an instrumentalist.
4. COMMON REALIST DEFENCES OF IDEALIZATIONS
Before concluding with a disclaimer, I want to examine a number of defences of realism in the light of the fact that all models contain false assumptions. As we will see, they neither work in general, nor do they apply to the relevant cases in economics.
False models are approximately true
According to this line of defence, all models are false but to some extent ‘approximately true’ – and approximately true models are harmless from the point of view of the realist's aims (Elgin and Sober 2002). Apart from notorious difficulties in clarifying the notion of ‘approximate truth’ (for attempts, see Niiniluoto 1987; Boyd 1990; Weston 1992; for an application to economics, Niiniluoto 2002), this argument is to a large extent confused. It is correct that there are cases in which a false assumption can indeed be regarded as ‘approximately true’, namely when the assumption concerns the value of a quantitative causal factor. If, to use Friedman's favourite example again, the phenomenon of interest is the free fall of a compact ball dropped from the roof of a building, and the salient aspect is its distance travelled after a given passage of time, a model that assumes that the ball falls in a vacuum is approximately true because air resistance is small and in fact negligible for the application at hand. This reasoning already concedes much to the instrumentalist because the terms ‘small’ and ‘negligible’ are relative to the model user's purpose. Ignoring this point, a model that assumes that a quantity has a zero value when it is in fact small can certainly be regarded as ‘approximately true’ in that respect.
However, economics’ idealizations are seldom of that kind. Typically, economic models ascribe properties to actors and institutions that these don't have and explain outcomes by way of causal processes that don't exist. Consider Friedman's own economic and non-economic examples. Models assuming that firms maximize their expected returns (Friedman 1953: 21ff.), that leaves on a tree maximize the amount of sunlight they receive (p. 19) or that billiard players know ‘the complicated mathematical formulas that would give the optimum directions of travel’ (p. 21) portray outcomes as having radically different generative mechanisms than they in fact have. Businessmen in fact (thinks Friedman, 1953: 22) price at average cost, leaves do not deliberate (p. 20) and the billiard player ‘just figures it out’ (p. 22). A businessman who prices at average cost does not maximize expected returns, not even approximately. A leaf that doesn't deliberate does not deliberate, not even approximately. And a billiard player who ‘just figures it out’ does not use complicated mathematical formulas, not even approximately. Most theoretical models in economics contain idealizations of this kind and not of the ‘extreme value of a real quantity’ kind. It may of course be the case that the outcomes of the actual generating mechanisms exactly or approximately coincide with the outcomes that would have been generated by the supposed mechanisms. But this would be grist for the instrumentalist's and not the realist's mill. According to creationists, God created the world 5000 years ago as if it had been a product of the laws of nature, including Darwinian evolution. Few people would say that (supposing Darwin's story is in fact correct) the creationists’ ‘model’ is ‘approximately true’ because the outcome is the same as that of Darwinian evolution.
False models represent isolated causal factors
Uskali Mäki (e.g. 1992, 2003) argues that one way in which statements such as ‘businessmen maximize expected returns’ are false is that the profit motive is only one of numerous motives at play in the determination of business decisions. This analysis conflicts with the one given above. According to it, the profit motive is there, so the story goes, but it doesn't fully unfold because other motives (inattention, altruism, illusions of grandeur or what have you) ‘check’ it. Economic models portray what businessmen would do in isolation, in the absence of other motives (or, more generally, causal factors).
The maximization of returns is, however, not a factor that has the right form to continue contributing to a situation when it is ‘checked’ by disturbances. A businessman cannot consistently aim to maximize returns and at the same time honour other commitments. Once other commitments are part of the decision-making process, the maximization of returns is out. Maximization is an all-or-nothing affair. Of course, businessmen can maximize subject to constraints, including non-economic constraints. But this would take those other commitments as given rather than trading them off against maximizing returns, and it isn't what Mäki has in mind anyway.
One doesn't have to think too hard to come up with examples of factors that at least have the right form to play the role of causal factors in the isolationist's sense. John Stuart Mill's ‘pursuit of wealth’ – one of the factors defining the domain of political economy – comes to mind (Mill 1948 [1830]). One can consistently pursue wealth as well as other goals. Importantly, it is also possible that the pursuit of wealth continues to make a difference to outcomes even when the operation of this factor is modified by other factors.
The problem with Mäki's suggestion, when applied to the right kinds of factors, is that by and large economic factors do not operate in the way Mill envisaged. Mill thought economic factors are like those of mechanics. When air resistance ‘checks’ the primary causal factor, gravity, gravity continues to contribute to the overall result. The speed of a falling body may be slowed down relative to a fall in a vacuum, but gravity leaves its mark nevertheless. Economic factors are not like that. By and large, what a factor does depends on the whole setting in which it operates. Of course, factors that are important in one setting often also contribute to another. But the nature of the contribution will normally not be predictable on the basis of what has been learned in already observed situations (Reiss 2008: Chs 5, 9).
Even if economic factors were like those of mechanics – and this is my final remark on causal factors considered in isolation – successful application of the method of analysis and synthesis does not necessarily speak in favour of realism about the isolated causal factors. Nancy Cartwright has indeed defended a form of realism about what she calls ‘causal capacities’ (Cartwright 1989). Many philosophers of economics who have proposed similar ideas about causation, most notably Tony Lawson and other critical realists, are also realists about causal powers (Lawson 1997, 2003; see also the essays on social science in Groff 2008). But there are alternatives. Eric Watkins has recently given a Kantian treatment of causal powers (Watkins 2005). There is also a fictionalist account of causal powers by Hans Vaihinger (1924). The idea that complex situations can be analysed by examining the laws that describe what individual factors do in isolation and then predicting what happens on the basis of these laws plus a law of composition is metaphysically neutral. Thus, if models in economics did indeed faithfully represent what economic factors do in isolation – albeit not what they do in more complex settings – and there were ways to anticipate the latter on the basis of knowledge of the former, the instrumentalist would still not have to fret.
False models are heuristic devices and can be made true(r) by de-idealization
A final defence of realism in the light of the fact that all models are false is the Hegelian one of regarding models not individually but as a sequence progressing towards perfection. Individual models may well be false, but error is eliminated progressively through de-idealization, de-isolation and the like (Nowak 1980; McMullin 1985).
Unfortunately, de-idealization strategies don't normally work and are therefore seldom employed. Frigg and Hartmann (2006) observe that there are two related problems with the strategy:
First, as Cartwright . . . points out, there is no reason to assume that one can always improve a model by adding de-idealizing corrections. Second, it seems that the outlined procedure is not in accordance with scientific practice. It is unusual that scientists invest work in repeatedly de-idealizing an existing model. Rather, they shift to a completely different modeling framework once the adjustments to be made get too involved [. . .].
Frigg and Hartmann are mostly interested in physics, but their points reappear in economics with a vengeance. Because of the high standards of mathematical elegance, equilibrium solutions, methodological individualism and rationality with which economics models must comply, it is not normally possible to tinker with individual assumptions that are deemed ‘too highly idealized for the purpose at hand’ while leaving others fully intact when building a new, less idealized model. Rather, as Frigg and Hartmann observe about physics, when a factor is deemed too important to be ignored, the framework is changed altogether. Thus, the economics of information, transaction cost economics or the economics of imperfect competition do not provide de-idealized versions of the ‘standard partial equilibrium model’ with perfect information and so on – they are rival models. To give a concrete example, Alexandrova and Northcott remark about a group of models in auction theory (2009: 311):
Yet for none of these assumptions was de-idealization feasible. It was simply not possible, at least at the time, to build a model incorporating more realistic versions of the assumptions and to check the effect of these changes on the model's predictions. Indeed there simply was no one theoretical model capable of representing the actual auction as a whole, even at a very abstract level.
5. CONCLUSIONS AND A DISCLAIMER
In conclusion, I want to draw attention to the fact that one's stance in the realism-instrumentalism debate has genuine methodological implications. Is truth a value in itself or only in so far as it is conducive to purpose? Is building causal models, in so far as it can be done, always a good idea? If it can't be done, shall we just give up or do what we can as best we can? I've argued that the instrumentalist's answers to these questions are more convincing than the realist's. There remains one more challenge to respond to.
Instrumentalism has sometimes been interpreted as an invitation to sloppiness (Hutchison 2000; for a discussion, see Hands 2003; Kincaid 1996: 227f.). Indeed, Friedman's purpose behind writing the 1953 essay was to insulate neoclassical models from criticism based on questionnaire evidence (for a discussion of the context of Friedman's essay, see Backhouse 2009), and some have argued that the essay licensed the ‘formalist revolution’, i.e. the shift towards ‘formal rigour’ and ‘mathematical elegance’ as dominant modelling goals (Blaug 2003).
But in fact, the opposite is the case. If anything, the instrumentalist has to take empirical evidence more seriously than the realist because he doesn't have the excuse that empirical anomalies can be ignored for the sake of greater realism, a progressive research project or what have you. The instrumentalist's model either serves its purpose or it doesn't. That economists tend to attend to evidence in a haphazard manner only means that they don't take their own methodological prescriptions seriously (in so far as the latter are instrumentalist).
Instrumentalist methodology, therefore, does not prevent one from taking a critical stance towards economic practice. What it does prevent one from doing is criticizing models for their lack of realisticness (as happens a great deal – see for instance Colander et al. 2009 on the role of unrealistic models in causing the financial crisis). Rather, one will have to think hard about the purposes one finds worth pursuing and then determine whether modelling practices help to realize these aims or rather present obstacles to achieving them. While this is an issue for another paper, doubts about the worthiness of the purposes that current mainstream models in economics seem to be good at serving are not difficult to raise.