1. Introduction
Many philosophers of science subscribe to scientific realism. Unfortunately, there is much less agreement about what this doctrine amounts to. My suggestion in this article is that the renormalization group framework in high-energy physics provides an instructive case study when it comes to the question of how realism ought to be formulated. The claim that the renormalization group has important implications for a realist view of quantum field theory (QFT) has been mooted in the past by David Wallace (2006, 2011) and more recently by Porter Williams (2017). I develop this line of thought in a broader context here and argue that there are lessons to be learned for the broader realism debate. On the one hand, the role the renormalization group plays in identifying aspects of QFT models we should take representationally seriously supports a local approach to articulating the realist thesis; rather than attempting to explicate how theories latch onto the world in general terms, it shows that resources found in particular scientific contexts can be a crucial part of this story. On the other hand, it points to a strategy for responding to Kyle Stanford’s “trust argument.” Stanford challenges the realist to tell us what of our current theories will survive future scientific progress. While this can seem to be an impossible task in the abstract, I will suggest that it may become more tractable at the local, theory-by-theory, level.
The plan for the article is as follows: section 2 provides an opinionated overview of some key issues surrounding the formulation of realism, section 3 introduces the renormalization group and explains how it helps substantiate a realist analysis of QFT models, and section 4 draws out some broader morals for the formulation problem.
2. The Formulation Problem
What is scientific realism? A naive answer is that it is the claim that our best-confirmed scientific theories are true. It quickly becomes obvious that construing realism in this way renders it a wildly implausible doctrine, however. One much-discussed reason is the pattern of theory change found in the historical record. Examples of theories that made accurate predictions in their day but later turned out to be false are legion in the history of science, and while philosophers with realist sympathies have pushed against the idea that this undermines the connection between empirical success and truth entirely, they typically admit that it gives us grounds to doubt that current theories get everything exactly right. There are also ample indications within contemporary science itself that our theories are not entirely veridical. To take a particularly stark example, our most fundamental physical theories, QFT and general relativity, furnish mutually inconsistent accounts of what the world is like, and powerful theoretical arguments suggest that a completely new framework is needed to fully describe gravitational phenomena.
What should the would-be realist replace this naive formulation with? What seems to be needed is a more modest epistemic attitude toward predictively successful theories: something stronger than merely taking them to be empirically adequate, as the constructive empiricist advises, but weaker than believing everything they say about the world. A common move when outlining the realist position in its broad brushstrokes is to replace the word ‘true’ in the naive formulation with ‘approximately true’. Ultimately, though, this only postpones the problem, as what it means for a theory to be approximately true stands in dire need of clarification. While there is a great deal of disagreement over how a more precise formulation of realism should be developed in detail, some consensus has emerged about the general form it should take. It is widely agreed, among both contemporary realists and antirealist critics, that any formulation worthy of serious consideration must be ‘selective’, recommending we commit ourselves to some parts of a successful theory’s content, while rejecting, or remaining agnostic about, others. The challenge to spell out the sense in which successful theories are approximately true can then be answered, at least in part, by pointing to a subset of their claims about the unobservable that hit their mark.Footnote 1 This selective approach owes its popularity to the dominant realist approach to the challenge of historical theory change. In response to pessimistic inductionists, realist commentators have urged that the empirical success of superseded theories did not depend on their claims that conflict with present science. Rather, the theoretical constituents of these theories that were really responsible for their accurate predictions are retained in their successors and therefore have a shot at describing the world as it is. The lesson that has been drawn from the debate over theory change then is that the realist’s epistemic optimism ought to be directed at the parts of a theory that underwrite, and explain, its predictive successes rather than its representational content as a whole.
This idea is sufficiently vague that it can be fleshed out and interpreted in very different ways, leading to a proliferation of selective formulations of realism in the recent literature. There are many points of divergence between these variants, but I will focus here on two key questions on which the renormalization group sheds interesting light: whether selectivists should spell out their commitments in global or local terms and whether it is possible to implement the selective strategy prospectively, in advance of future scientific developments.Footnote 2
Once a broadly selective approach to the formulation problem has been adopted, the obvious question is which parts of our theories we should be committed to. There are different particular answers to this question, but there are also different types of answers. A more global answer (in my terminology) aims to provide a general characterization of the belief-worthy content of any successful theory. Saatsi (2017) calls this sort of approach ‘recipe realism’: the ideal being a formula that takes in theories and spits out beliefs about the world in a completely regular way. One brand of realism that has sometimes been understood in these terms is epistemic structural realism.Footnote 3 Following Worrall (1989), contemporary incarnations of this doctrine have been inspired by episodes of theory change in physics in which posited entities, such as the luminiferous ether and gravitational field, are dropped, but continuities exist at the level of mathematical structure. The conclusion drawn is that it is the structural claims of our theories that realists ought to put their faith in. One attempt at making this precise has been to identify the structural content of a theory with its Ramsey sentence, apparently furnishing a procedure for picking out the claims structuralists ought to commit themselves to that can be applied across the board. Many contemporary brands of realism at least aspire to this sort of blanket statement about which aspects of our theories get things right.
No formulation of this kind has achieved anything close to widespread acceptance, however. One problem is that the diversity of science makes generalization a risky business. Recipes for identifying veridical content that are compelling in one area of science may be much less so in others. Stanford (2003) attacks structural realism, for instance, by pointing to examples from biology in which mathematical structure does not seem to be preserved through theory change. Another worry is that the drive toward generality leads global formulations to abstract away from peculiarities of particular theories that are relevant to appraising their representational success. While both Newtonian gravity theory and thermodynamics can arguably be described as sharing structure with more fundamental physical theories, the specific structural claims that are retained at the fundamental level are quite different in each case, as is the sort of explanation this affords of the theory’s empirical successes. Even in contexts in which the structuralist intuition has some purchase then, the distinction between structural and nonstructural claims seems to be too coarse-grained to pinpoint the parts of a theory that drive its success. Ultimately, global formulations tend to find themselves in the uncomfortable position of being simultaneously too general and not general enough to convincingly carry out the selective project.
These sorts of concerns have led some philosophers to move toward a more local response to the formulation problem (Barrett 2008; Saatsi 2016, 2017). Instead of trying to specify how successful theories relate to the world in one fell swoop, this approach implements the selective strategy on a theory-by-theory basis. The question of what sort of doxastic attitude we ought to take toward the theoretical claims of Newtonian gravity, for instance, is answered via a close study of the theory itself rather than invoking some general criterion for realist commitment. Barrett (2008) points, in particular, to foundational research on geometrized formulations of Newtonian gravity (surveyed in Malament [2012]), which allows us to precisely describe how some of its extra-empirical content is embedded within general relativity, as doing the real work in spelling out the sense in which the theory is approximately true. This is a thoroughly theory-specific story, and the thought is that carrying out the selective strategy in practice will typically turn on local scientific arguments and theoretical resources. Local realists still adopt a general epistemic stance toward science: they take the empirical success of our theories to be explained, at least for the most part, by the fact that they are getting something right about the world. But the task of spelling out the specific claims about the world engendered by this stance is delegated to philosophers of the specific sciences.
While this move avoids some of the difficulties plaguing global formulations, however, it seems to have a serious cost when it comes to the issue of prospective applicability. Local selectivists might be able to point to intertheoretic relations with general relativity to clarify the representational status of Newtonian gravity, but what about general relativity itself? Global selective realists will have an answer here: they will wheel out their formula for identifying belief-worthy content. But what can the localist, who eschews this kind of response, say? The cases that Barrett and Saatsi point to as exemplars for the way in which the approximate truth of theories can be cashed out locally invariably turn on their embedding within more fundamental theories. Consequently, realism about our current best theories might seem to end up amounting to little more than a promissory note: the local selective realist claims that general relativity latches onto reality in a way that explains its success, but exactly how we cannot yet say.Footnote 4 Just as Newton had no idea which parts of his theory of gravity would be preserved in contemporary gravitational physics, we seem to be in a similar epistemic situation with respect to current fundamental physics.
This sort of worry is the basis of Stanford’s (2006) “trust argument.” According to Stanford, any form of realism worthy of the name must tell us which features of our theories we can trust now, not merely in retrospect. I suspect that the perceived need to respond to this challenge is a key reason why defenders of realism have sought a global formulation of their doctrine that can be projected into the future. An alternative reaction to this line of attack, however, is to simply refuse the bait and deny that realists need to state their epistemic commitments prospectively. After all, the claim about general relativity sketched above clearly goes beyond constructive empiricists’ stance toward the theory; they would, of course, remain completely agnostic about general relativity’s claims about the unobservable and deny the need for an explanation of its empirical success in terms of its extra-empirical representational success. Saatsi (2016) simply bites the bullet here and calls his local version of realism ‘minimal realism’ in recognition of the fact that some intuitions about what a realist attitude toward current scientific theories amounts to are not necessarily borne out on this formulation.
It is at this juncture in the dialectic that the renormalization group becomes interesting. The application of renormalization group methods in high-energy physics provides another example of how local theoretical resources can play an important role in substantiating a selective realist reading of a theory. Crucially, though, this story is prospective in character, operating in advance of developments beyond the standard model of particle physics. What this suggests is that abandoning a global formulation of realism does not necessarily mean ceding the possibility of prospective commitments entirely, and a localized realism need not be as minimal as Saatsi suggests.
3. The Renormalization Group and Selective Realism
The renormalization group is a widely applicable framework for investigating the behavior of systems at different length and energy scales. The basic strategy is to study the action of a coarse-graining transformation—an operation that takes us from an initial model to a new one that lacks some of the degrees of freedom associated with variations at small-length scales/high energies but shares its large-scale/low-energy properties. How this procedure is implemented depends a great deal on the sort of systems one is interested in, and consequently, renormalization group methods take on diverse forms in different areas of physics (and beyond). I will focus here on the application of renormalization group methods to QFT and specifically on the momentum space approach pioneered by Wilson and Kogut (1974).
This story starts with the path integral expression for the partition function, Z. This crucial quantity encapsulates basically everything there is to know about a QFT model. In particular, all of a theory’s correlation functions (vacuum expectation values of field operators at disparate space-time points) can be derived from it. For a single scalar field ϕ, the partition function is associated with a functional integral:
$$
Z = \int \mathcal{D}\phi \, e^{-S[\phi]},
$$
where S[ϕ] is the action of the model, and the measure indicates that a sum is being taken over all possible configurations of the field. As is well known, there are grave difficulties with precisely defining this integral for a field that varies over a continuous space-time. One way around this problem is to introduce an ultraviolet cutoff Λ—an upper limit on the possible momenta of field states. A straightforward way of doing this is to place the theory on a lattice so there is a minimal distance over which the field can vary. Once this is done it is possible to give a precise meaning to the path integral. The Wilsonian renormalization group is then based on setting up a coarse-graining transformation on cutoff QFT models of this kind.
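To make the claim about correlation functions concrete, here is the standard textbook route (a routine presentation rather than anything specific to this article): a source term J is added to the exponent, and the correlation functions are obtained by functional differentiation,

$$
Z[J] = \int \mathcal{D}\phi \, e^{-S[\phi] + \int d^{d}x \, J(x)\phi(x)}, \qquad \langle \phi(x_{1}) \cdots \phi(x_{n}) \rangle = \frac{1}{Z[0]} \left. \frac{\delta^{n} Z[J]}{\delta J(x_{1}) \cdots \delta J(x_{n})} \right|_{J=0}.
$$

On a lattice with spacing a, the field is defined only at discrete sites, so momenta above roughly π/a simply do not occur; this is the sense in which the lattice implements the cutoff Λ.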
Wilson’s insight was that, instead of evaluating the whole path integral at once, we can start with the contribution due to high-momentum field configurations, whose Fourier transforms have support above some value μ. This part of the path integral can be computed separately and absorbed into a shift in the action. In symbols,
$$
Z \;=\; \int_{|k| \leq \Lambda} \mathcal{D}\phi \, e^{-S[\phi]} \;=\; \int_{|k| \leq \mu} \mathcal{D}\phi_{<} \left( \int_{\mu < |k| \leq \Lambda} \mathcal{D}\phi_{>} \, e^{-S[\phi_{<} + \phi_{>}]} \right) \;=\; \int_{|k| \leq \mu} \mathcal{D}\phi_{<} \, e^{-S_{\mu}[\phi_{<}]},
$$

where ϕ< and ϕ> collect the field modes with momenta below and above μ, respectively, and Sμ is the shifted action that absorbs the high-momentum contribution.
This defines a transformation that takes us from an initial cutoff QFT model to a new one, which has a lower cutoff, and a modified dynamics, but behaves like the original (specifically, sharing its long-range correlation functions). This is often informally described as ‘integrating out’ the field modes associated with variations on small-length scales.
We can view this transformation as inducing a ‘flow’ on a space of models, with dimensions corresponding to all possible interactions between fields. Studying this flow has proved to be a powerful source of information about the scaling properties of QFT models. The most important discovery for our purposes is the phenomenon of universality in the low-energy regime. It turns out that QFT models with wildly different dynamics display very similar low-energy behavior. Consider, for the sake of concreteness, the class of scalar field theories with actions of the form
$$
S[\phi] = \int d^{4}x \left[ \tfrac{1}{2} (\partial_{\mu}\phi)^{2} + \tfrac{1}{2} m^{2} \phi^{2} + \lambda_{4} \phi^{4} + \lambda_{6} \phi^{6} + \cdots \right],
$$
where m is a mass parameter and {λ4, λ6, …} are couplings for possible interaction terms. Under repeated applications of the coarse-graining transformation, the renormalization group flow of this class of theories can be shown to be attracted toward a two-dimensional surface spanned by m and λ4 (as shown in fig. 1).Footnote 5 This behavior is believed to hold generally: while infinitely many interaction terms between a set of fields are possible, the renormalization group transformation induces a flow toward a finite-dimensional surface spanned by so-called renormalizable parameters—those with nonnegative mass dimension. This means, in essence, that large classes of QFT models look the same at suitably large-length scales.
Figure 1. Renormalization group flow of scalar field theories to a surface spanned by renormalizable parameters.
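The dimensional-analysis core of this flow can be illustrated with a short toy iteration. The sketch below (in Python) tracks only the leading-order rescaling of dimensionless couplings in four dimensions; the full Wilsonian transformation also generates loop corrections and mixing between couplings, and the starting values chosen here are arbitrary.

```python
# Toy illustration of the dimensional-analysis part of the Wilsonian flow in
# d = 4 (leading order only; the real transformation also produces loop
# corrections and mixing between couplings). The coupling of a phi^n term has
# mass dimension 4 - n, so when the cutoff is lowered from Lambda to Lambda/s
# the corresponding dimensionless coupling g_n rescales by s**(4 - n).

couplings = {2: 1e-6, 4: 0.5, 6: 2.0, 8: 5.0}  # n -> dimensionless coupling g_n (arbitrary)
s = 2.0                                        # halve the cutoff at each step

for step in range(1, 9):
    couplings = {n: s ** (4 - n) * g for n, g in couplings.items()}
    summary = ", ".join(f"g{n} = {g:9.3e}" for n, g in sorted(couplings.items()))
    print(f"step {step}: {summary}")

# g6 and g8 (couplings with negative mass dimension) shrink rapidly, g4 is
# untouched at this order, and g2 (the mass term) grows: the flow is drawn
# toward the surface spanned by the renormalizable parameters, as in figure 1.
```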
What does all this have to do with scientific realism? The thought is that these renormalization group results give us the means to develop a selective realist reading of current QFTs.
On the one hand, the renormalization group helps us identify features of QFT models that we should not take representationally seriously.Footnote 6 Quantum electrodynamics (QED) and the other component theories of the standard model of particle physics have famously produced some of the most accurate predictions in the history of science. Much of this success takes the form of estimates of cross sections for scattering events produced in particle colliders, with the current upper limit on experimentally attainable energies being of the order of 10¹³ eV. The renormalization group results just discussed reveal that many features of current QFT models do not really make a difference to these empirical successes, however, in the sense that they can be varied without affecting scattering cross sections at these energy scales. For one thing, they establish that these quantities are highly insensitive to the imposition of an ultraviolet cutoff, as well as to the details of how this is done. We can also vary the dynamics of a model at the cutoff scale without affecting its predictions; adding nonrenormalizable interactions to the QED action, for instance, does not undermine its empirical adequacy. What this tells us is that many of the claims QFT models make about the world at the fundamental level do not contribute to, and are not supported by, the empirical success of the standard model. Of course, we also have external reasons to doubt that QFTs describe reality at all scales: the QFT framework itself is expected to break down as the Planck scale is approached. But the renormalization group gives us a precise way of pinpointing the parts of current theories that we should disavow, or at least remain agnostic about.
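To give a rough sense of the magnitudes behind this insensitivity claim (an order-of-magnitude illustration of standard effective-field-theory reasoning, not a calculation drawn from the article): a nonrenormalizable coupling of mass dimension −2 is suppressed by two powers of the cutoff, so its leading contribution to a low-energy amplitude, relative to the renormalizable terms, is of order

$$
\frac{E^{2}}{\Lambda^{2}} \approx \left( \frac{10^{13}\ \text{eV}}{10^{28}\ \text{eV}} \right)^{2} = 10^{-30}
$$

if the cutoff is placed near the Planck scale. Such terms can be added or varied without making any detectable difference to cross sections at collider energies.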
On the other hand, it seems to provide us with the means to articulate positive commitments supported by the success of the standard model. The classes of QFT models that share the same low-energy predictions arguably make common claims about relatively large-scale, nonfundamental, aspects of the world. Giving a precise characterization of this shared content is one of the central challenges facing the sort of realist view of QFT I am proposing here, but a preliminary strategy is to point to correlation functions over distances much longer than the cutoff scale as appropriate targets for realist commitment. These quantities are preserved by the renormalization group coarse-graining transformation and encode the long-distance structure of a QFT model. They are also directly connected to its successful predictions: you cannot vary the long-distance correlation functions of a theory without drastically affecting its low-energy scattering cross sections.Footnote 7 Furthermore, in demonstrating that these large-scale properties of a QFT model are insensitive to what is going on at very high energies, the renormalization group is also telling us that these features are largely independent of the details of unknown physics at currently inaccessible energy scales. We thus have reason to be confident that these features of current QFTs will be retained through future theory change, in one way or another, whatever physics beyond the standard model has in store for us.Footnote 8
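The link between correlation functions and scattering predictions invoked here runs through the LSZ reduction formula, which can be written schematically (suppressing field-renormalization factors and numerical constants, so this is an illustrative sketch rather than a precise statement) as

$$
\mathcal{M}(p_{1}, \dots, p_{n}) \;\sim\; \lim_{p_{i}^{2} \to m^{2}} \, \prod_{i=1}^{n} \left( p_{i}^{2} - m^{2} \right) \tilde{G}^{(n)}(p_{1}, \dots, p_{n}),
$$

where the final factor is the Fourier transform of the n-point correlation function ⟨ϕ(x1)⋯ϕ(xn)⟩ and the left-hand side is the scattering amplitude from which cross sections are computed. Low-energy amplitudes are thus fixed by precisely the long-distance correlation functions that the coarse-graining transformation preserves.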
The picture that emerges from these considerations then is that QFTs enjoy a kind of coarse-grained representational success, capturing some (relatively) long-distance, low-energy, features of the world while distorting its fundamental structure. A potentially useful comparison here is to continuum models in fluid mechanics, which misrepresent the atomic structure of real fluids but accurately describe many of their bulk properties. This fits well with the effective field theory approach to QFT that has come to dominate high-energy physics in the wake of Wilson’s work on the renormalization group; at least part of what is meant when physicists characterize the standard model as an effective field theory is that it correctly describes the physics of currently probed scales but should not be trusted at higher energies. For the aspiring scientific realist this differentiated attitude toward the content of QFT models offers a way of making precise the sense in which these theories are approximately true along selectivist lines.
4. Some Morals
We have only scratched the surface of how renormalization group methods affect our understanding of QFT, and many aspects of the preceding discussion are controversial. Wallace (2006, 2011) and Williams (2017) advance similar (and I hope complementary) analyses of the epistemic significance of the renormalization group, but Doreen Fraser (2011) takes a much more deflationary line, which conflicts with some of the claims endorsed above. There remains a great deal of work to be done in developing and defending the sort of realist view of QFT just outlined then.Footnote 9 I want to conclude, however, by returning to the broader question of how scientific realism ought to be formulated.
What sort of general morals can be extracted from this case study? First and foremost, it offers further support for a localized response to the formulation problem. The appeal to the renormalization group framework in the discussion of the previous section exemplifies the local selective realist’s claim that local scientific resources often play a crucial role in articulating the relationship a theory bears to the world. Furthermore, we found no need for an overarching thesis about which parts of our theories get things right. The resulting analysis of the representational success of QFT models does, admittedly, chime with the intuitive picture underlying epistemic structural realism: the fundamental ontological claims of QFT models are rejected while nonfundamental, broadly structural, features are singled out for realist optimism. But the putative distinction between structural and nonstructural features does no real work in identifying appropriate targets for realist commitment and ultimately adds little to the picture furnished by the renormalization group. This all suggests that we ought to abandon as misguided any attempt to provide a fully general characterization of the approximate truth of empirically successful theories.
What really makes this case significant for the broader formulation debate, however, is that it does not turn on the explicit embedding of a superseded theory within a more fundamental successor. This has important implications for the issue of prospective applicability. The worry, remember, was that, without a general recipe for identifying the belief-worthy content of a theory, the local realist will be able only to make a highly tentative and provisional claim about the representational success of our current most fundamental theories. Local realists like Saatsi have basically accepted this conclusion but reject Stanford’s assertion that giving up on explicit prospective commitments means giving up on realism entirely. The renormalization group case, however, suggests that we do not need a global formulation of realism to sustain prospective commitments: local theoretical resources can also play a role in supporting judgments about which parts of present theories will be preserved through theory change. The information the renormalization group provides about the dependencies between the high- and low-energy properties of QFT models seems to put us in a better epistemic situation with respect to the standard model than Newton was with respect to his theory of gravity. This opens up the possibility of a localized response to Stanford’s trust argument. In some scientific contexts it may be appropriate to eschew prospective judgments entirely and adopt the sort of minimal realist position advocated by Saatsi; but where scientific arguments support it, a more full-blooded realist reading of a theory, which includes commitments about which parts of its content can be trusted to remain a part of future science, may be possible.