1. Introduction
There have been many new accounts of “deterministic chance” that purport to capture the essential features of chance (see, e.g., Strevens 1999; Loewer 2004; Hoefer 2007; Ismael 2009; Emery 2015). Unfortunately, all of these give up on many of the intuitive features of chance made popular by Lewis’s (1980) codification. For instance, on some of these views, an event can have a nontrivial chance (not zero or one), even though it is determined to occur. Additionally, some past events can have nontrivial chances. On some of these views, the frequencies of events cannot diverge arbitrarily far from the chances, and the chances play no metaphysical role in bringing about subsequent events.
I will not rehearse the well-known objections to those kinds of views here. Rather, I present an alternative view of chance that respects our intuitions about the features of genuinely chancy events. I use the ‘Mentaculus’ machinery developed by Albert (2000) and Loewer (2004, 2009) to show how in worlds that evolve deterministically (e.g., Newtonian and Bohmian worlds), all scientific probabilities can be derived from a single initial probability distribution. My account differs from Albert and Loewer’s in its fundamental, metaphysical structure because I take the initial probability distribution to represent an irreducible, metaphysically robust, initial chance event, while they take the initial probability distribution as reducible—more precisely, they take it to supervene on the entirety of the actual world (past, present, and future). This article does not attempt to explain how it is that chance events metaphysically produce or govern subsequent states, as I largely agree with those who take production and governance to be unanalyzable, fundamental relations. Nor does this article explain why it is possible to faithfully represent chance events with probability measures—a puzzle that is as interesting as it is difficult to solve. Instead, this article finds a logical and plausible space for a view that accommodates robust, metaphysical chance in a world that currently is evolving deterministically.
I argue that positing just one initial chance event can justify the usefulness and explain the ubiquity of nontrivial probabilities to epistemic agents like us, even if there are no longer any chance events in our world.Footnote 1 I will show how this account recovers the intuitive features of chance. In section 2, I present the idea of inherited chances in the case of a specific event—namely, a deterministically evolving coin toss. In section 3, I present Albert and Loewer’s hypothesis that the entire universe is no different, fundamentally, from such a coin. In section 4, I describe Albert and Loewer’s Humean interpretation of the relevant probabilities and explain why their account rejects many of the platitudes about chance. In section 5, I present an alternative interpretation that accommodates the chance platitudes. I argue that such an account can explain why nontrivial probabilities are scientifically useful and metaphysically objective, even though, in an important sense, they are merely epistemic. Finally, I speculate about the initial chance event itself.
2. Inherited Chances
Consider a very simple case of inherited chance: a coin flip in a classical, deterministic, Newtonian world.Footnote 2 The coin can begin its toss in many different ways, depending on how it is flipped. Its speed can be fast or slow, it can spin quickly or slowly, and it can be flipped from high off the ground or close to the ground. A full specification of these initial conditions constitutes the system’s microstate. And, if the dynamical evolution is deterministic, then each specific initial microstate evolves into a specific final microstate. Some of those final microstates are specific ways of landing heads, and some are specific ways of landing tails. Very tiny changes in the initial conditions can affect whether the coin lands heads or tails. For instance, a coin that lands heads would have landed tails, if only it had been thrown with slightly more speed or with a bit more rotation or from a slightly higher point. Interestingly, even though the outcome depends so sensitively on the initial conditions, roughly half of the initial conditions lead to outcomes in which the coin lands heads, and half lead to outcomes in which the coin lands tails.Footnote 3
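To make the point vivid, here is a minimal numerical sketch. The dynamical rule is an illustrative assumption of mine (not a model defended in this article): a coin flipped heads-up with launch speed v and spin rate ω is caught at its launch height and shows heads just in case its face has rotated back to within a quarter turn of ‘heads up’.

import numpy as np

g = 9.8  # gravitational acceleration, m/s^2

def lands_heads(v, omega):
    # Toy deterministic rule (an assumption for illustration): a coin flipped
    # heads-up with launch speed v (m/s) and spin rate omega (rad/s) is caught
    # at launch height after t = 2v/g seconds and shows heads iff its face has
    # rotated to within a quarter turn of "heads up," i.e., cos(omega*t) > 0.
    return np.cos(omega * 2.0 * v / g) > 0

# A fine grid of initial microstates (launch speed x spin rate).
v = np.linspace(2.0, 4.0, 401)          # launch speeds (peak heights of roughly 0.2-0.8 m)
omega = np.linspace(150.0, 300.0, 401)  # spin rates (roughly 24-48 rotations per second)
V, W = np.meshgrid(v, omega)
heads = lands_heads(V, W)

# Sensitivity: even at this fine resolution, a sizable fraction of neighboring
# microstates disagree about the outcome ...
print(np.mean(heads[:, 1:] != heads[:, :-1]))
# ... yet heads and tails each claim about half of the initial-condition space.
print(np.mean(heads))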
Now, suppose that each possible initial coin microstate has the same chance of coming about.Footnote 4 Then, those chances propagate through the deterministic evolution, with the result that each final microstate has the same chance. Since roughly half of the possible final microstates are heads and half are tails, it follows that the chance of heads is 1/2, and the chance of tails is 1/2. Furthermore, only a minuscule fraction of the final states include anything other than heads or tails, such as the coin landing on its edge. Therefore, there is a very, very high chance that the coin will land on a face side and a very, very low chance it will land on its edge.
Thus, even in a world with deterministic evolution, the existence of initial chances yields the following inherited chances:
• Chances for individual events—there is a chance of 1/2 that the next coin toss lands heads.
• Chances for sequences of events—there is a chance of 1/32 that the next five coin tosses all land heads.
• Generalizations for events—coins have a much, much higher chance of landing on their faces than on their edges.
3. The Mentaculus
Albert (2000) and Loewer (2004, 2009) defend the surprising claim that the entire universe is not relevantly different from such a coin. Thus, we ought to shift our attention to the ultimate initial conditions—those of the entire universe. So, suppose that each of the specific ways in which the universe could have begun has the same chance. Then each subsequent, deterministic evolution of those initial conditions has the same chance. If Albert and Loewer are right, then the familiar statistical patterns and generalizations at every level of scientific investigation can be derived, in principle, from this initial chance distribution over possible microstates. They call this proposal the Mentaculus.
The universe’s initial chance event can be thought of as a fundamentally chancy die with an infinite number of sides tossed at the first instant,Footnote 5 while the subsequent evolution of the universe can be thought of as a chain of deterministic dominoes—one knocks over the next, which knocks over the next, and so on. If the chancy universal ‘die’ toss had only six sides, its toss would determine which of six different chains of dominoes to set into motion. And, because each side of the die has the same chance of landing up, each domino chain has the same inherited chance of being set into motion. If an event A occurs somewhere in only one domino chain, it will have an inherited chance of one-sixth of occurring. If two of the six domino chains include event A, then the inherited chance of A’s occurrence is 2/6 or 1/3. This procedure also applies to combinations of events (conjunctions, disjunctions, sequences of events, etc.). Thus, the chance of event [A and B] occurring is just the total number of domino chains that include both event A and event B divided by the total number of domino chains.
Additionally, this procedure gives us a recipe for calculating conditional chances. Thus, the chance of event B, given event A, is just the number of domino chains in which both A and B occur divided by the number of domino chains in which A occurs. To use a concrete example, if you would like to know the chance of rain this afternoon, you must conditionalize the universe’s initial chance of rain this afternoon on the relevant facts you have learned—the existence of planet Earth, the temperature and barometric pressure at your location this morning, and the local climate and weather patterns.Footnote 6 Otherwise, given all of the ways the universe could have started out, and how few of those ways even include planet Earth at all, the chance of rain this afternoon would be really, really low. But, after conditionalizing, the derived, conditional chance is the relevant, and intuitive, chance of rain this afternoon.Footnote 7 Of course, as limited, epistemic agents, we never actually make these calculations. But, according to the Mentaculus hypothesis, the models that we use in the special sciences—themselves based largely on observations, evidence, and background information—are nevertheless consistent with such a calculation.
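As a toy illustration of this counting recipe (the assignment of events to chains below is entirely made up for the example), the inherited and conditional chances come out as simple ratios:

from fractions import Fraction

# Six equally chancy faces of the 'universal die', each setting off one
# deterministic domino chain; a chain is represented here simply by the set
# of events that occur somewhere along it (a made-up assignment).
chains = [
    {"A"},
    {"A", "B"},
    {"B"},
    {"B", "C"},
    {"C"},
    set(),
]

def chance(events):
    # Inherited chance: chains containing all the given events / all chains.
    return Fraction(sum(1 for c in chains if events <= c), len(chains))

def conditional_chance(b, a):
    # Chance of b given a: chains containing both / chains containing a.
    return Fraction(sum(1 for c in chains if (a | b) <= c),
                    sum(1 for c in chains if a <= c))

print(chance({"A"}))                     # 1/3: A occurs in two of the six chains
print(chance({"A", "B"}))                # 1/6: only one chain contains both
print(conditional_chance({"B"}, {"A"}))  # 1/2: half of the A-chains also contain B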
If the universe is Newtonian, Albert and Loewer hypothesize that the correct initial probability measure is the uniform Lebesgue measure over a restricted (low-entropy) region of phase space—the space that represents the possible positions and velocities of every particle. If the universe is Bohmian, then the uniform measure is over a restricted region of configuration space—the space that represents the possible positions of every particle.
Note that the hypothesis of equal initial chance is not an a priori claim or a claim that all logical possibilities have equal chances. Rather, it is an empirical hypothesis that gets support from the patterns we see in the actual world. As “coin-scientists,” we observe many coin tosses and begin to theorize about coin tosses in general. We make a hypothesis: each initial microstate has the same chance. Again, this is not quite right since the states are continuous, and we use a measure. In the case of coins, we hypothesize a measure that is bell-shaped rather than flat. This is because it is most likely that a coin is tossed between 1 and 3 feet in the air and very unlikely that it is tossed only a couple of inches or 10 feet in the air. Similar reasoning applies to its speed of rotation, and so on. Interestingly, both a uniform measure and a bell-shaped measure (as well as a variety of other measures) yield the result that roughly half of all coin tosses land heads and half land tails.
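A quick check of this last claim, reusing the toy deterministic coin rule sketched earlier (again, my own illustrative assumption, with arbitrary parameter ranges): a flat measure and a bell-shaped measure over the initial conditions deliver essentially the same heads frequency.

import numpy as np

rng = np.random.default_rng(0)
g, N = 9.8, 200_000

def lands_heads(v, omega):
    # Same toy deterministic rule as before: heads iff cos(omega * 2v/g) > 0.
    return np.cos(omega * 2.0 * v / g) > 0

# A flat (uniform) measure over launch speed and spin rate ...
v_flat = rng.uniform(2.0, 4.0, N)
w_flat = rng.uniform(150.0, 300.0, N)

# ... and a bell-shaped measure centered on a typical toss, clipped to the same ranges.
v_bell = np.clip(rng.normal(3.0, 0.4, N), 2.0, 4.0)
w_bell = np.clip(rng.normal(225.0, 30.0, N), 150.0, 300.0)

print(lands_heads(v_flat, w_flat).mean())  # approximately 0.5
print(lands_heads(v_bell, w_bell).mean())  # approximately 0.5 as well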
In the case of the coins, we have independent reasons to prefer the bell-shaped measure due to our background knowledge about how coins are tossed. By contrast, in the case of the initial conditions of the universe, we have no such background knowledge. Tim Maudlin (personal communication) and others have argued that it is plausible that any measure absolutely continuous with respect to the Lebesgue measure will yield the same results for most of the events for which we use scientific probabilities. These measures may, however, assign different inherited chances for single events, such as the Cubs winning the World Series in 2020. Perhaps there is one correct measure that assigns the correct chance, or perhaps such events do not properly have chances. At any rate, part of the Mentaculus hypothesis is that the uniform probability measure of the initial state is ‘carried forward’ to small subsystems of the universe. In other words, the uniform (or bell-shaped) measures we assume to be true of coin flips, weather patterns, and a shuffled deck of cards are consistent with, and in principle derivable from, the initial uniform measure.Footnote 8
Importantly, due to theoretical limitations on computation, we cannot directly evaluate the inherited chances of the universe. For one thing, there are far too many particles (around 10⁸⁰) to calculate the subsequent evolution of even a single initial state of the universe. Additionally, there are continuum-many possible initial states of the universe. Therefore, this hypothesis of Albert and Loewer’s remains a purely theoretical one. Nevertheless, I will argue that if they happen to be right, the Mentaculus provides us with a story for how we can have justified, nontrivial credences even in a world that currently is evolving deterministically.
4. Humean Chance
It is important to note that this proposal—deriving all chances from an initial chance—is a conditional proposal: if there is an initial chance distribution over states, then there is a final chance distribution over states. Chance at the beginning yields chance later on. But, if we want a complete account of chance, something must be said about the nature of those basic, initial chances. This is why many accounts of “deterministic chance” are incomplete.Footnote 9 To be complete, a theory of chance must account for all chances, not just the inherited ones.
According to Albert and Loewer, the initial chance distribution is not metaphysically any different from the inherited chances. For them, both are instances of Humean chance—something that lacks any kind of metaphysically dynamic or metaphysically fundamental features. It follows from their acceptance of the metaphysical doctrine of Humeanism: all that exists, fundamentally, is the actual distribution of categorical (nonmodal) properties in space-time—the Humean mosaic. Thus, everything else—including chance—supervenes on, or reduces to, the mosaic.Footnote 10 This contrasts sharply with other metaphysical pictures of the world, some of which hold that chance events (at least partly) determine, generally via production or governance, how the actual world turns out. The Humean claim is that the entire world—past, present, and future—generates or subvenes the laws and chances, while anti-Humeans think the laws and chances (at least partly) generate the world.
Contemporary Humean accounts of chance are extensions of David Lewis’s account. According to Lewis (1983, 1994), chances are given by the laws of nature. And the laws of nature are “deductive systems that pertain not only to what happens in history, but also to what the chances are of various outcomes in various situations” (Lewis 1994, 480). In order for chances to reduce to the Humean mosaic, the laws that include them must reduce to the mosaic. Lewis proposes that laws are merely statements that do a good job of systematizing the mosaic by balancing simplicity, strength, and fit.Footnote 11
So, let us return to the example of the Newtonian coin toss. Part of our scientific theory of coin tosses says that coins land heads about as often as they land tails and that they almost never land on their edges. If Albert and Loewer are right, then there is an incredibly simple and informative way to systematize this information about coin tosses, as well as all of the other statistical and general regularities in the world:
DDL The Deterministic Dynamical Laws of Newtonian mechanics describe how any particular initial condition evolves throughout time.
PH The Past Hypothesis constrains the initial conditions to a certain low-entropy subset of possible configurations.
SP The Statistical Postulate assigns a normalized, uniform measure to the region of mathematical space that represents those possible configurations.
Note that we could have systematized the mosaic with a description of the exact initial microstate plus DDL. This would have been maximally informative, as it would have entailed every fact about the actual mosaic. However, it would have been prohibitively complicated; it is much simpler to use SP in our systematic description of the world.Footnote 12 Additionally, the exact-microstate systematization would have left us with no explanation for the usefulness of probabilities in our scientific reasoning.
It is tempting to interpret these three statements as metaphysically explanatory. Indeed, I will argue this is exactly how we ought to interpret them. But for the Humean, DDL, PH, and SP merely summarize what actually happens. It is fairly straightforward to see how DDL and PH could be descriptions: DDL describes how the qualitative states at different times are related to one another, and PH describes qualitative features of the initial state. But giving a Humean interpretation of SP will require a bit of unpacking. Indeed, the details of SP are especially important since SP is where Albert and Loewer locate basic chance.
Technically, SP merely specifies a mathematical measure over a restricted possibility space—namely, phase space. There is no explicit mention of chance. The measure assigns numbers to regions in the space that represent initial events. Because the deterministic trajectories through phase space never intersect, branch, or merge, and because the dynamics conserves phase-space volume (Liouville’s theorem), the measure of any set of initial microstates equals the measure of the set into which it evolves. Thus, the initial measure over initial events yields a measure (and thus a number between zero and one) for every subsequent event. The measure also assigns numbers to subsets and intersections of regions. This implies that every collection (and every sequence) of events has an associated number between zero and one. For instance, the number associated with a coin toss that lands tails is just the measure of the space representing a tails landing, divided by the measure of the space representing that coin toss: 1/2.
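Schematically (the notation here is mine: μ is the normalized measure that SP specifies over the Past Hypothesis region, and the sets are picked out by how their members evolve under DDL):

\[
\mathrm{Ch}(\text{tails} \mid \text{this toss}) \;=\; \frac{\mu\bigl(\{x : x \text{ evolves into this toss landing tails}\}\bigr)}{\mu\bigl(\{x : x \text{ evolves into this toss occurring}\}\bigr)} \;=\; \frac{1}{2}.
\]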
Let us grant that these measures yield the correct numerical values for subsequent events (e.g., that coin flips landing heads are associated with a measure of 1/2). Why do Albert and Loewer think these numbers are chances? They agree with Lewis (1980) that chance is as chance does. And, they claim that these numbers in their theory satisfy the important chance roles. I agree with their standard of success, but I do not agree that their Humean account can meet that standard. I will not cover the many arguments that have been given against the variety of accounts of Humean chance. But, it is widely acknowledged that Albert and Loewer’s particular Humean account of chance allows for the following counterintuitive results:
• Past events may sometimes have nontrivial chances (e.g., if there are currently no records of an event that actually happened, then that event now has a nontrivial chance of not having occurred).
• The chance of an event depends on how much you know about the present state of the world. (Chances are conditionalized, e.g., on the currently surveyable macrostate or on the entire current microstate. Thus, there are many different, and equally correct, values for the chance of a single event.)
• Worlds that match with respect to their frequencies cannot differ with respect to their chances. (This is because the chances supervene on the frequencies. It is arguable whether this has additional problematic implications for counterfactuals.)
• The frequency of some types of chance events cannot diverge arbitrarily far from the chance of that type of event. (Otherwise, that type of event would have had a different chance that ‘matched’ the frequency more closely.)Footnote 13
• One cannot always set one’s credence to the chance, as Lewis’s (1980) Principal Principle recommends. The Principal Principle says that, barring inadmissible information (such as information from the future), we ought to set our initial credence in an event’s occurrence to that event’s chance of occurring.Footnote 14
These counterintuitive results have been developed into objections by a variety of authors. While I find the Humean responses to these objections unsatisfactory, my aim in this article is merely to present an alternative picture of chance that does not have any of these counterintuitive consequences. The account includes robust, fundamental metaphysical chance, and I argue that it can be accepted even if we currently live in a deterministic Newtonian or Bohmian world.
5. One Metaphysical Chance
In this section, I show how a robustly metaphysical account of chance is compatible with deterministic evolution, as long as there was a single chance event at or near the beginning of the universe.
5.1. The Initial Chance Event Is Metaphysically Robust
On the account presented here, genuine, metaphysical, fundamental chance is just as we always thought. Before a chance event occurs, its chance has a nontrivial value; after the time of its occurrence, the chance of that event goes to one (or zero, if the event did not occur) for all time. Additionally, we ought to set our credences to the chances, just as the Principal Principle recommends, and update those credences by conditionalization as we get new information. Finally, this account allows for metaphysically powerful chance events that partially determine (by production or governance) subsequent states.Footnote 15 This account is intuitively plausible if there are many chance events, as on some theories of quantum mechanics, or if there is only one chance event at the very beginning of our universe. Thus, this account recovers the intuitive features of chance that Humean accounts have rejected:
• Past events always have chances of one or zero.
• The chance of an event does not depend on how much you know about the present state of the world. (Only credences are conditionalized on knowledge, e.g., on the currently surveyable macrostate or on the entire current microstate. Thus, different people with different evidence can be required to have different, and equally correct, credences in a single event, although there is only one correct chance value at that time.)
• Worlds that match with respect to their frequencies can differ with respect to their chances.
• The frequency of a type of chance event can diverge arbitrarily far from the chance of that type of event.
• One should always set one’s credence to the chance, as Lewis’s (1980) Principal Principle recommends. If one does not have access to the current chances, one ought to update one’s initial credences (which ought to be set to the initial chances) by conditionalizing on new information.
I acknowledge at the outset that there is an odd feature of this account: if the universe is currently evolving deterministically, then there was only one truly genuine chance event in our world. I will argue that this is not as odd as it may seem at first.
5.2. No More Chances?
On my account, if the current dynamical evolution of our universe is deterministic, then there are no more truly chancy events. Is this a cost of my view? It depends. Certainly many people (myself included) take determinism and chance to be straightforwardly incompatible ways for the universe to evolve. But, what are we to make of the wide variety of cases that people typically take to be chance events?
I do not think we should take it for granted that most ascriptions of ‘chance’ pick out truly chancy events. (Indeed, I argue that almost none of these ascriptions are accurate.) For instance, people often assert that coins have a 50% chance of landing heads. Does this mean that any satisfactory metaphysical theory of chance must recover the result that coins really do have a 50 : 50 chance of landing heads? No. Take, for instance, the fact that certain people can flip a fair coin in such a way that the coin always lands heads. The reason, of course, has to do with the physical explanation for how coins flip. If a person (or a coin-flipping machine) is precise enough with the initial toss (height, rotation, etc.), then the coin is certain to land heads. And, once we understand how coin flips work, and that someone with enough information could always predict how any coin will land, we retract our initial ascription of chance and say instead, “I suppose coins aren’t really chancy after all.” Since we are willing, on the basis of new scientific information, to revise our opinions about which events are chancy and which are not, it cannot be part of the meaning or concept of ‘chance’ that coins, roulette wheels, and horse races are chancy events.Footnote 16 I think we can learn that events that seem chancy, in fact, are not. It may well turn out—for instance, if our universe evolves deterministically—that all of the events we thought were chancy in our world are, in fact, not chancy.
Many object to the above reasoning on the grounds that it would render probabilistic credences useless. After all, if every chance is zero or one, what good could come of setting our credences to anything else? I hope to show that using nontrivial probabilities (the mathematical objects, not metaphysical chances) can be useful and justified in a deterministic world, but only if that world contained an initial metaphysical chance event.
5.3. Inherited Chances Become Epistemic Probabilities
Recall that in cases of inherited chance, a chance over initial states plus deterministic evolution yields inherited chances. Furthermore, after that initial chance event’s occurrence, all of the chances that were inherited from that initial chance event go from their nontrivial values to one or zero. Indeed, this is precisely how we think of coins. If we think of the initial conditions as being genuinely chancy, as soon as the precise position, height, velocity, and so on, are fixed (by that chance event), the coin’s outcome is ‘already’ fixed (even if it is still flying through the air). Similarly, since the chancy big bang has already occurred, if the world is Newtonian or Bohmian, there are no longer any nontrivial, metaphysically genuine chances in the world. Indeed, there have been no nontrivial chances for roughly 13 billion years. However, as I argue below, there are still plenty of epistemically useful, nontrivial probabilities.Footnote 17
The inherited chances of events are derived from the initial chances. Even though these events now have only trivial chances (as the single chance event has already occurred), the chances they once inherited provide a ground for nontrivial, objective probabilities for subsequent events. Probabilities are mathematical entities that are distinct from metaphysical chances (although they can be used to represent chances). Our need for these probabilities comes from our epistemic limitations.Footnote 18 Recall that the initial chance yields inherited chances for all subsequent events and sequences of events. Thus, it yields an inherited chance for a long sequence of coin tosses, half of which land heads and half of which land tails. Maintaining our deterministic picture, the chance of such a sequence is now one or zero—either the actual sequence of coin tosses will contain half heads and half tails, or it will not. Nevertheless, given our state of ignorance of the microstate, we cannot actually determine such facts. Thus, it is incredibly useful to keep track of what those metaphysical, inherited chances were. This is because those chances tell us what credences we ought to have.
Consider the case of 10 coins flipped at once. There is an initial, inherited chance of 1/2¹⁰ that they all land heads. If you were considering these 10 flips from the first moment of time, and knew nothing but the initial inherited chances, by the Principal Principle, you ought to have a credence of 1/2¹⁰ that they all will land heads. And, if you have learned nothing more about those tosses, you have nothing to conditionalize on, so your credence of 1/2¹⁰ remains unchanged. This is the case even though the chance that they will all land heads is now one or zero. Furthermore, this holds even if the tosses have already occurred. This is simply because you have no additional information to use to update the initial probability (that represents the initial chance). Suppose you learn that all the coins landed the same way (although you do not know whether that was all heads or all tails). Now, you ought to conditionalize the probability measure on this new information and arrive at the credence of 1/2 that all of the coins landed heads.
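The arithmetic can be checked by brute force (a minimal sketch; the enumeration simply treats the 2¹⁰ possible outcomes as equally weighted, per the inherited chances):

from fractions import Fraction
from itertools import product

# All 2^10 possible outcomes of the 10 tosses, each with the same inherited weight.
outcomes = list(product("HT", repeat=10))
all_heads = [o for o in outcomes if set(o) == {"H"}]
all_same = [o for o in outcomes if len(set(o)) == 1]

# Initial credence that all land heads, per the Principal Principle.
print(Fraction(len(all_heads), len(outcomes)))   # 1/1024
# Credence after conditionalizing on the news that they all landed the same way.
print(Fraction(len(all_heads), len(all_same)))   # 1/2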
Importantly, these credences are objective because they are based on the initial metaphysical chances. Before the initial chance event has occurred, the inherited chance that all 10 coins land heads is 1/2¹⁰. Thus, you can do no better than to set your credence to 1/2¹⁰. However, after the initial chance event has occurred, the inherited chance that all 10 coins land heads goes to one (if they will) or zero (if not). Ideally, if we had perfect information of the initial microstate and the ability to calculate the subsequent evolution from that microstate (and if the Mentaculus is correct), we would likewise set our credence to one or zero. However, as limited epistemic agents, we can merely conditionalize on the information we do have. Thus, in a deterministically evolving world, scientific probabilities are epistemic, in the sense that an ideal agent with perfect knowledge of the initial state and infinite computational abilities would be able to accurately set all of her credences in future events to one or zero.
5.4. Deterministic Evidence of Initial Chances
Consider another example. Suppose you are given a very large stack of photos, each of which shows an image of two dice. As you go through the stack you begin to notice patterns. The die on the right side of the photo shows each face roughly one-sixth of the time, and similarly for the die on the left side of the photo—although the two do not seem to correlate with each other in any interesting way. You use these patterns to infer that the photos are of dice that, when they were rolled, had an equal and independent chance of landing on each side. Why? Because if that were true, it would be extremely likely (chance is ineliminable) that they would produce the frequencies in the images you have observed. Likewise, if there were an initial chance event at the beginning of the universe, it is overwhelmingly likely we would see the wide variety of statistical patterns we do see.
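The forward direction of this inference can be illustrated with a simple simulation (the stack size and random seed are arbitrary choices of mine): if the dice really did have equal, independent chances, a large stack of photos would almost certainly display just the patterns described.

import numpy as np

rng = np.random.default_rng(1)
N = 60_000  # number of photos in the stack (arbitrary)

# Hypothesis under test: each die landed each face with equal, independent chance.
left = rng.integers(1, 7, N)
right = rng.integers(1, 7, N)

# Each face shows up roughly one-sixth of the time on each die ...
print(np.bincount(left, minlength=7)[1:] / N)   # six values, each ~0.167
print(np.bincount(right, minlength=7)[1:] / N)
# ... and the two dice show no interesting correlation with each other.
print(np.corrcoef(left, right)[0, 1])           # ~0.0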
5.5. Deterministic Evidence and Prediction
To see how prediction works, let me highlight another feature of the dice-and-photos example. You can use the hypothesis about the fundamental chances of the dice to make predictions about future photos of the dice. For instance, you predict that each face of each die will continue to show up on the photos roughly one-sixth of the time. Note that you do all this even though the chance events are no longer metaphysically chancy. After all, the dice were tossed and photographed long before you were handed the stack of photos.
Thus, we have a picture of how fundamental, initial chances give rise to epistemic probabilities. The epistemic probabilities apply to events that may no longer be metaphysically chancy. Indeed, if the evolution in the world is deterministic, all nontrivial, scientific probabilities are purely epistemic—although not subjective, as they are objectively grounded by the initial, metaphysical, fundamental chances. Importantly, it is only via these probabilities that we have epistemic access to what the fundamental, initial chances were. And, we use these probabilities to make (objectively justified) predictions about the future.
5.6. What If There Are No Initial Chances?
Now, imagine that you are handed a similar stack of images but that in this case you are told (by a reliable, trustworthy source) that these images are not photos of chancy dice. Rather, the stack of images has been around since the first moment of the universe; that is, the stack has been around forever.Footnote 19 In this case, what could justify any prediction about the images you have not yet turned over? It seems you would have to postulate some uniformity principle that applies to stacks of images—perhaps that if a pattern exists in part of a stack of images, then that pattern will be present in the entire stack. Without such a postulate, and in the absence of information (or hypotheses) about how the stack of images came to be, there is nothing on which to base such a projection.
And yet, this is exactly the kind of situation the Humean thinks we find ourselves in with respect to the whole world. For the Humean, the statistical patterns in the world are not evidence of an initial chance event. They exist fundamentally—they are not produced by anything, let alone by the laws of nature or chance events. On the contrary, the patterns subvene, and are best summarized by, a probability measure over initial states. First come the patterns, then come the laws and probabilities.
Is the Humean worse off with respect to future predictions? According to my view, an initial chance event and deterministic subsequent evolution justify nontrivial credences in future events. Thus, we are justified in the wide variety of scientific predictions about the future because of the existence of the initial chance event. According to the Humean, we project the past patterns into the future. This requires an assumption of uniformity—we must take it for granted that the patterns in the future will resemble the past. While it certainly seems to be an additional postulate to which the Humean is committed, Barry Loewer (personal communication) has objected to this reasoning. He claims that both views have to assume that a uniform measure is the right one. The Humean assumes that the measure faithfully represents the actual patterns of events in the world, while a defender of my view assumes that the initial chance event is accurately represented by an analogous measure. For instance, there are nonuniform measures over the possible initial microstates that are likely to yield the actual past patterns but also likely to yield wildly different future patterns. Our past evidence underdetermines which of these measures is the one that accurately represents the initial chance event.Footnote 20
Thus, the question is really this: is it better to assume that the events of the universe—past, present, and future—have a kind of uniformity, or that the initial chance event has a kind of uniformity? It will come as no surprise that I think the latter is to be preferred. I find it plausible to think of the universe as having an initial state and as producing subsequent states in accordance with the laws of nature (some of which may be chancy) or fundamentally powerful properties (some of which may be propensities). If so, then it seems far more reasonable to posit a uniformity principle that applies only at that first moment. But, Humeans do not share these intuitions about the production and governance of the universe, so I cannot rely on them without begging the question against the Humean. Nevertheless, for philosophers who are antecedently sympathetic to governing laws of nature or powerful properties, I have shown how an assumption about an initial chance event justifies subsequent nontrivial epistemic probabilities.
5.7. What Is the Initial Chance Process?
It would be nice to be able to say more about the ‘initial’ chance process. While I do not have a definitive proposal, I see three promising avenues for further exploration. The first is to locate the chance event outside the universe. Thus, the initial configuration of the universe had a chance of being actualized, and alternative configurations also had chances of being actualized. This has the downside of preventing the chance process from being a physical process. Thus, the (pre-)initial chance could not have been governed by physical laws or subsumed under dispositional physical properties (because those are restricted to the actual, physical world). But, this proposal has the upside of dovetailing nicely with some of the literature on fine-tuning and the ‘likelihood’ of different worlds. And, of course, one could always posit “metalaws” that govern universe creation—perhaps a “law of the multiverse”?Footnote 21
The second option locates the initial chance event within the actual world but very near, or at, the beginning of its evolution. For instance, according to some popular current theories in cosmology, the universe began as a hot, dense collection of matter. This collection was too dense to be stable and exploded in a chance process with specific chances of ‘banging’ into different configurations. The outcome was merely one of infinitely many possible configurations of elementary particles. The details of such a process are very complicated and rely on as-yet-underspecified features of quantum gravity. Nevertheless, this kind of theory demonstrates how an early chance event (or collection of chance events) could be followed by deterministic evolution. Even though these cosmological theories posit the same laws throughout time (including the metaphysically chancy processes), it is plausible that the large-scale structure of the early universe was radically affected by early chance processes, while later evolution was roughly deterministic, with quantum indeterminacy having only restricted or limited effects. This view has the advantage of treating all chance processes on a par, as physical processes that exist within the universe. It also has the positive feature that it is open to empirical confirmation or refutation.
A third way of locating the initial chance process is to appeal to the idea that universes are themselves created within a wider multiverse.Footnote 22 One version of this view, proposed by Carroll (2010), postulates that as a universe expands, quantum fluctuations can lead to a bit of space-time becoming isolated as a kind of bubble, creating a new “bubble universe” within it. These bubbles rapidly expand, in much the same way that we think our universe expanded 13 billion years ago. Thus, there may not be a single, initial condition for the universe. Rather, our present universe may be another universe’s bubble progeny. In such a case, there would be general principles—likely involving both quantum mechanics and general relativity—that determine the chances of each bubble universe’s ‘initial’ conditions. This idea has similar costs and benefits to the previous possibility.
Note that the last two options involve taking a stand on the laws of nature or the fundamental properties. On such views, the big bang is itself a chance process; therefore, the universe could not be wholly Newtonian or Bohmian—the first moment or process would be a chancy exception to the deterministic evolution they posit. Prima facie, this does not seem to be a benefit or a cost but merely something to make explicit.
6. Conclusion
Albert and Loewer’s work on deterministic evolution and initial chance distributions is incredibly powerful and suggestive. However, I have argued that their story is only half right. It explains how inherited chances arise but gives the wrong metaphysical picture of the initial chance distribution. A metaphysically fundamental chance event can avoid the costs of a Humean interpretation while justifying our subsequent use of probabilities in everyday situations and in the special sciences, even in a world currently governed by deterministic laws of nature. Further work remains to be done on the exact nature of that initial chance process.