
Blurring Out Cosmic Puzzles

Published online by Cambridge University Press:  01 January 2022


Abstract

The Doomsday argument and anthropic reasoning are two puzzling examples of probabilistic confirmation. In both cases, a lack of knowledge apparently yields surprising conclusions. Since they are formulated within a Bayesian framework, they constitute a challenge to Bayesianism. Several attempts have been made, some successful, to avoid these conclusions within a Bayesian framework that represents credal states by single credence functions, but none of them can do so for all versions of the Doomsday argument. I show that adopting an imprecise framework of probabilistic reasoning allows for a more adequate representation of ignorance and explains away these puzzles.

Type: Confirmation Theory

Copyright © The Philosophy of Science Association

1. Introduction

The Doomsday argument and the appeal to anthropic bounds to solve the cosmological constant problem are two examples of puzzles of probabilistic confirmation. These arguments both make ‘cosmic’ predictions: the former gives us a probable end date for humanity, and the latter a probable value of the vacuum energy density of the universe. They both seem to allow one to draw unwarranted conclusions from a lack of knowledge, and yet one way of formulating them makes them a straightforward application of Bayesianism. They call for a framework of inductive logic that represents ignorance better than a Bayesian approach based on single credence functions can, so as to block these conclusions.

1.1. The Doomsday Argument

The Doomsday argument is a family of arguments about humanity’s likely survival.Footnote 1 There are mainly two versions of the argument discussed in the literature, both of which appeal to a form of Copernican principle (or principle of typicality or mediocrity). A first version of the argument, endorsed by, for example, Leslie (1990), dictates a probability shift in favor of theories that predict earlier end dates for our species, assuming that we are a typical—rather than atypical—member of the group of all humans ever born.

The other main version of the argument, referred to as the ‘delta-t argument’, was given by Gott (1993) and has provoked both outrage and genuine scientific interest.Footnote 2 It claims to allow one to make a prediction about the total duration of any process of indefinite duration on the basis of only the assumption that the moment of observation is randomly selected. A variant of this argument, which gives equivalent predictions, reasons in terms of random selection of one’s rank in a sequential process (Gott 1994).Footnote 3 The argument goes as follows:

Let r be my birth rank (i.e., I am the rth human to be born), and N the total number of humans that will ever be born.

  1. Assume that there is nothing special about my rank r. Following the principle of indifference, for all r, the probability of r conditional on N is $p(r \mid N) = 1/N$.

  2. Assume the following improper prior probability distribution for N: $p(N) = k/N$, where k is a normalizing constant whose value does not matter.

  3. This choice of distributions $p(r \mid N)$ and $p(N)$ gives us the prior distribution $p(r)$: $p(r) = \int_r^{\infty} p(r \mid N)\,p(N)\,dN = \int_r^{\infty} (k/N^2)\,dN = k/r$.

  4. Then, Bayes’s theorem gives us $p(N \mid r) = p(r \mid N)\,p(N)/p(r) = r/N^2$, which favors small N.

The choice of the Jeffreys prior for the unbounded parameter N in step 2 is such that the probability for N to be in any logarithmic interval is the same; that is, we have $\int_{N_1}^{N_2} p(N)\,dN = k \ln (N_2/N_1)$, which depends only on the ratio $N_2/N_1$. This prior is called improper because it is not normalizable, and it is sometimes argued that it is justified when it yields a normalizable posterior. Although this is a contentious assumption, we will see that no other precise distribution would allow us to avoid the conclusion of the Doomsday argument.

To find an estimate with a confidence level α, we solve $p(N \le x \mid r) = \alpha$ for x, with $p(N \le x \mid r) = \int_r^x (r/N^2)\,dN = 1 - r/x$. Upon learning r, we are able to make a prediction about N with a 95% confidence level. Here, we have $x = r/(1 - \alpha) = 20r$. That is, we have $p(N \le 20r \mid r) = 0.95$.
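To make the arithmetic concrete, here is a minimal numerical sketch (my illustration, not part of the original argument) of the posterior $p(N \mid r) = r/N^2$ and its 95% bound; the value of r is merely illustrative:

```python
import numpy as np

# Gott-style posterior p(N|r) = r/N^2 for N >= r, so p(N <= x|r) = 1 - r/x.
# The 95% bound solves 1 - r/x = 0.95, i.e., x = 20r.
r = 100.0          # illustrative birth rank (any positive value works)
alpha = 0.95

x_analytic = r / (1 - alpha)                 # 20r, Gott's bound

# Numerical check: integrate the posterior density on a truncated grid.
N = np.linspace(r, 1000 * r, 1_000_000)
posterior = r / N**2
cdf = np.cumsum(posterior) * (N[1] - N[0])   # crude Riemann-sum CDF
x_numeric = N[np.searchsorted(cdf, alpha)]

print(f"analytic 95% bound: {x_analytic:.1f}")   # 2000.0
print(f"numeric 95% bound:  {x_numeric:.1f}")    # ~2000
```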

According to that argument, we can make a prediction for N on the basis of knowing our rank r alone and of being indifferent about which value r may take conditional on N. We should be troubled by the fact that we can get so much information out of so little. If N is unbounded, an appeal to our typical position should not allow us to make any prediction at all, and yet it does.

1.2. Anthropic Reasoning in Cosmology

Another probabilistic argument that claims to allow one to make a prediction from a lack of knowledge is commonly used in cosmology, in particular to solve the cosmological constant problem (i.e., explain the value of the vacuum energy density ρV). This parameter presents physicists with two main problems:Footnote 4

  1. The time coincidence problem: we happen to live at the brief epoch—by cosmological standards—of the universe’s history when it is possible to witness the transition from the domination of matter and radiation to vacuum energy ($\rho_M \sim \rho_V$).

  2. There is a large discrepancy—of 120 orders of magnitude—between the (very small) observed values of ρV and the (very large) values suggested by particle-physics models.

Anthropic selection effects (i.e., our sampling bias as observers existing at a certain time and place and in a universe that must allow life) have been used to explain both problems. Anthropic selection effects make the coincidence less unexpected and account for the discrepancy between observations and possible expectations from available theoretical background. But there is no known reason why having $\rho_M \sim \rho_V$ should matter to the advent of life.

Steven Weinberg and his collaborators (Weinberg 1987, 2000; Martel, Shapiro, and Weinberg 1998), among others, proposed that, in the absence of satisfying explanations, anthropic considerations could play a strong, predictive role. The idea is that we should conditionalize the probability of different values of ρV on the number of observers (or a proxy, such as the number of galaxies) taken as a function of that parameter. The probability measure for ρV is then $dp(\rho_V) = p_\star(\rho_V)\,\nu(\rho_V)\,d\rho_V$, where $p_\star(\rho_V)$ is the prior probability distribution, and ν(ρV) the average number of galaxies that form for ρV.

By assuming that there is no known reason why the likelihood of ρV should be special at the observed value, and because the allowed range of ρV is very far from what we would expect from available theories, Weinberg and his collaborators argued that it is reasonable to assume that the prior probability distribution $p_\star(\rho_V)$ is constant within the anthropically allowed range, so that $dp(\rho_V)$ can be calculated as proportional to $\nu(\rho_V)\,d\rho_V$ (Weinberg 2000, 2). Weinberg then predicted that the value of ρV would be close to the mean value in that range (assumed to yield the largest number of observers). This “principle of mediocrity,” as Vilenkin (1995) called it, assumes that we are typical observers.
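The structure of the prediction can be illustrated with a toy computation (mine; the actual ν(ρV) comes from structure-formation calculations, and the Gaussian below is only a stand-in):

```python
import numpy as np

# Toy anthropic weighting: flat prior over the anthropic range, times a
# bell-shaped observer density nu(rho_V) peaked at mid-range.
rho = np.linspace(0.0, 1.0, 10_001)        # rho_V in units of the range
d_rho = rho[1] - rho[0]
prior = np.ones_like(rho)                  # Weinberg's flat prior
nu = np.exp(-0.5 * ((rho - 0.5) / 0.15) ** 2)   # stand-in for nu(rho_V)

posterior = prior * nu
posterior /= posterior.sum() * d_rho       # normalize dp = p* nu d(rho)

mean = (rho * posterior).sum() * d_rho
print(f"predicted rho_V (posterior mean): {mean:.2f}")   # ~0.5, mid-range
```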

Thus, anthropic considerations not only help establish the prior probability distribution for ρV by providing bounds, but they also allow one to make a prediction regarding its observed value. This method has yielded predictions for ρV within a few orders of magnitude of the observed value.Footnote 5 This improvement—from 120 orders of magnitude to only a few—has been seen by proponents of anthropically based approaches as vindicating them (see, e.g., Weinberg 2007).

1.3. The Problem: Ex Nihilo Nihil Fit

The Doomsday argument and anthropic reasoning share a similar structure: (1) a uniform prior probability distribution reflects an initial state of ignorance or indifference, and (2) an appeal to typicality or mediocrity is used to make a prediction. This is puzzling: these two assumptions of indifference and typicality are meant to express neutrality, and yet from them alone we seem to be getting a lot of information. But assuming neutrality alone should not allow us to learn anything.

If anthropic considerations were only able to provide us with one bound (either lower or upper), then the argument used to make a prediction about the vacuum energy density ρV would be analogous to Gott’s (1993) delta-t argument: without knowing anything about, say, a parameter’s upper bound, a uniform prior probability distribution over all possible ranges and the appeal to typicality of the observed value together favor lower values for that parameter.

I will briefly review several approaches taken to dispute the validity of the results obtained from these arguments. We will see that dropping the assumption of typicality is not enough to avoid these paradoxical conclusions. I will show that, when dealing with events we are completely ignorant or indifferent about, we can use an imprecise, Bayesian-friendly framework that better handles ignorance or indifference.

2. Typicality, Indifference, Neutrality

2.1. How Crucial to Those Arguments Is the Assumption of Typicality?

The appeal to typicality is central to Gott’s delta-t argument, Leslie’s version of the Doomsday argument, and Weinberg’s prediction. This assumption has generated much of the philosophical discussion about the Doomsday argument in particular. Bostrom (2002) offered a challenge to what he calls the Self-Sampling Assumption (SSA), according to which “one should reason as if one were a random sample from the set of all observers in one’s reference class” (57). In order to avoid the consequence of the Doomsday argument, Bostrom suggested adopting what he calls the Self-Indicating Assumption (SIA): “Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist” (66). But as he noted himself (122–26), this SIA is not acceptable as a general principle. Indeed, as Dieks summarized: “Such a principle would entail, e.g., the unpalatable conclusion that armchair philosophizing would suffice for deciding between cosmological models that predict vastly different chances for the development of human civilization. The infinity of the universe would become certain a priori” (2007, 431).

The biggest problem with Doomsday-type arguments resting on the SSA is that their conclusion depends on the choice of reference class. What constitutes “one’s reference class” seems entirely arbitrary or ill defined: Is my reference class that of all humans, mammals, philosophers, and so on? Anthropic predictions can be the object of a similar criticism: the value of the cosmological constant most favorable to the advent of life (as we know it) may not be the same as that most favorable to the existence of intelligent observers, which might be definable in different ways.

Relatedly, Neal (2006) argued that conditionalizing on nonindexical information (i.e., all the information at the disposal of the agents formulating the Doomsday argument, including all their memories) reproduces the effects of assuming both SSA and SIA. Conditionalizing on the probability that observers with all their nonindexical information exist (which is higher for a later Doomsday and highest if there is no Doomsday at all) blocks the consequence of the Doomsday argument without invoking such ad hoc principles and avoids the reference-class problem (see also Dieks 1992).

Although full nonindexical conditioning cancels out the effects of Leslie’s Doomsday argument (and, similarly, anthropic predictions), it is not clear that it also allows one to avoid the conclusion of Gott’s version of the Doomsday argument. Neal (2006, 20) dismisses Gott’s argument because it rests only on an “unsupported” assumption of typicality. There are indeed no good reasons to endorse typicality a priori (see, e.g., Hartle and Srednicki 2007). One might then hope that not assuming typicality would suffice to dissolve these cosmic puzzles. Maor, Krauss, and Starkman (2008) showed, for instance, that without it, anthropic considerations do not allow one to really make predictions about the cosmological constant, beyond just providing unsurprising boundaries, namely, that the value of the cosmological constant must be such that life is possible.

My approach in this article, however, will not be to question the assumption of typicality. Indeed, in Gott’s version of the Doomsday argument given in section 1.1, we would obtain a prediction even if we did not assume typicality. Instead of assuming a flat probability distribution for our rank r conditional on the total number of humans N, $p(r \mid N) = 1/N$, let us assume a nonuniform distribution. For instance, let us assume a distribution that favors our being born in humanity’s timeline’s first decile (i.e., one that peaks around r = 0.1 × N). We would then obtain a different prediction for N than if we had assumed one that peaks around r = 0.9 × N. This reasoning, however, yields an unsatisfying result if taken to the limit: if we assume a likelihood distribution for r conditional on N sharply peaked at r = 0, we would still obtain a prediction for N upon learning r (see fig. 1).Footnote 6

Figure 1. Posterior probability distributions for N conditional on r, obtained for r = 100 and assuming different likelihood distributions for r conditional on N (i.e., with different assumptions as to our relative place in humanity’s timeline), each of which peaks at a different value of τ = r/N. The lowermost curve corresponds to a likelihood distribution that peaks at τ → 0, that is, if we assume N → ∞.
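The computation behind figure 1 can be sketched in a few lines (my reconstruction, with an arbitrary Gaussian width for the peaked likelihoods); it shows that the posterior still peaks somewhere, whatever τ we choose:

```python
import numpy as np

# Posterior over N for r = 100 under likelihoods p(r|N) peaked at r = tau*N,
# with the same 1/N prior as in sec. 1.1.
r = 100
N = np.arange(r, 10_000)
prior = 1.0 / N

def likelihood(rank, n, tau, width=0.05):
    # p(rank|n): Gaussian in rank/n around tau, normalized over rank = 1..n
    ranks = np.arange(1, n + 1)
    w = np.exp(-0.5 * ((ranks / n - tau) / width) ** 2)
    return w[rank - 1] / w.sum()

for tau in (0.9, 0.5, 0.1):
    like = np.array([likelihood(r, n, tau) for n in N])
    post = like * prior
    post /= post.sum()
    # smaller tau pushes the prediction out, but a peak (hence a
    # prediction) remains for any tau > 0
    print(f"tau = {tau}: posterior peaks near N = {N[post.argmax()]}")
```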

Therefore, in Gott’s Doomsday argument, we would obtain a prediction at any confidence level, whatever assumption we make as to our typicality or atypicality, and we would even obtain one if we assume N → ∞. Consequently, it is toward the question of a probabilistic representation of ignorance or indifference that I will now turn my attention.

2.2. A Neutral Principle of Indifference?

One could hope that a more adequate prior probability distribution—one that better reflects our ignorance and is normalizable—may prevent the conclusion of these cosmic puzzles (especially Gott’s Doomsday argument). The idea that a uniform probability distribution is not a satisfying representation of ignorance is nothing new; this discussion is as old as the principle of indifference itself.Footnote 7 As argued by Norton (2010), a uniform probability distribution is unable to fulfill invariance requirements that one should expect of a representation of ignorance or indifference—nonadditivity, invariance under redescription, invariance under negation: if we are ignorant or indifferent as to whether α, we must be equally ignorant as to whether ¬α.Footnote 8 For instance, in the case of the cosmological constant problem, if we adopt a uniform probability distribution for the value of the vacuum energy density ρV over an anthropically allowed range of length μ, then we are committed to assert, for instance, that ρV is three times more likely to be found in any given range of length μ/3 than in any given range of length μ/9, as the check below illustrates. This is very different from indifference or ignorance, hence the requirement of nonadditivity for a representation of ignorance.
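The additivity point is elementary, but a two-line check (mine) makes the uniform distribution's commitments explicit:

```python
# A uniform density over a range of length mu assigns probability
# proportional to length: a definite commitment, not indifference.
mu = 1.0
p_third = (mu / 3) / mu    # probability of any subrange of length mu/3
p_ninth = (mu / 9) / mu    # probability of any subrange of length mu/9
print(p_third / p_ninth)   # 3.0: the "ignorant" prior takes definite sides
```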

These criteria for a representation of ignorance or indifference cast doubt on the possibility for a probabilistic logic of induction to overcome these limitations.Footnote 9 I will argue that an imprecise model of Bayesianism, in which our credences can be fuzzy, will be able to explain away these problems without abandoning Bayesianism altogether.

3. Dissolving the Puzzles with Imprecise Credence

3.1. Imprecise Credence

Bayesian probability generally operates under the assumption that an agent can represent her credence by a single sharp numerical value between 0 and 1. A common gripe against Bayesian approaches is that this assumption is psychologically unrealistic (see, e.g., Kyburg 1978). Moreover, for those who think of probabilities in terms of betting behavior, it would be more realistic to deal with an interval of betting prices (bounded by a selling price and a buying price) rather than a unique value (see Smith 1961).

In a model of imprecise credences (or ‘imprecise probabilities’ by misuse of language) developed and defended by, for example, Walley (1991) and Joyce (2010), credences are not represented merely by a range of values but rather by a family of probabilistic credence functions. In this model, an agent’s credal state can be represented by a family C of probabilistic credence functions $\{c_i\}$, whose properties are those common to all the credence functions in this credal state. On this account, one’s credal state upon learning that a certain event D obtains is the set of the updated credence functions $C_D = \{c_i(\cdot \mid D) : c_i \in C\}$.

In this model, each credal function (i.e., each member of a family of functions that represents an agent’s credal state) is treated as in a Bayesian approach that represents credal states by single credence functions. Precise probabilities are therefore a special case of the imprecise probabilities model.
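As a minimal sketch of this machinery (my illustration; the hypothesis space and numbers are arbitrary), a credal set can be coded as a list of priors, each updated by Bayes's rule, with lower and upper probabilities read off as minima and maxima over the set:

```python
import numpy as np

hypotheses = np.arange(1, 11)          # toy hypothesis space: N = 1..10

def bayes_update(prior, likelihood):
    post = prior * likelihood
    return post / post.sum()

# A credal state: three credence functions that disagree with one another.
credal_set = [np.ones(10) / 10,                 # indifferent member
              np.linspace(1, 10, 10) / 55.0,    # increasing member
              np.linspace(10, 1, 10) / 55.0]    # decreasing member

likelihood = 1.0 / hypotheses                   # some p(data | hypothesis)
updated = [bayes_update(p, likelihood) for p in credal_set]

event = hypotheses <= 3                         # an event of interest
lower = min(p[event].sum() for p in updated)    # lower probability
upper = max(p[event].sum() for p in updated)    # upper probability
print(f"credence in the event lies in [{lower:.2f}, {upper:.2f}]")
```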

Different criteria for making comparative confidence claims exist in the literature; a sketch implementing two of them follows this list. For instance, we can say that one will be more confident in an event than in another event if

  • it has maximum lower expected value (Γ-minimax criterion),

  • it has maximum upper expected value (Γ-maximax),

  • its expected value is greater according to every distribution in the credal set (maximality),

  • it has a higher expected value for at least one distribution in the credal set (E-admissibility), or

  • its lower expected value over all distributions in the credal set is greater than the other event’s upper expected value (interval dominance).Footnote 10
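As promised above, here is a sketch (mine; the events and numbers are arbitrary) of two of these criteria, Γ-minimax and interval dominance, applied to a small credal set:

```python
import numpy as np

credal_set = [np.array([0.6, 0.3, 0.1]),
              np.array([0.2, 0.5, 0.3]),
              np.array([0.1, 0.3, 0.6])]        # three credence functions

def probs(indicator):
    # expected value of an event's indicator function = its probability
    return [float(p @ indicator) for p in credal_set]

A = np.array([1.0, 1.0, 0.0])                   # event A
B = np.array([0.0, 0.0, 1.0])                   # event B
pA, pB = probs(A), probs(B)

# Gamma-minimax: compare lower expected values.
print("Gamma-minimax favors A:", min(pA) > min(pB))          # True here

# Interval dominance: A must beat B's *upper* expected value.
print("A interval-dominates B:", min(pA) > max(pB))          # False here
```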

This imprecise model is interesting when it comes to representing ignorance or indifference: it can do so with a set of functions that disagree with each other. If the agent is a committee whose members’ opinions correspond to the credal functions that constitute the agent’s credal state (i.e., the whole set), then this situation corresponds to one of indecision resulting from the disagreement between the committee members. How this indecision arises will depend on which of the above rules we adopt.

3.2. Blurring Out Gott’s Doomsday Argument: Apocalypse Not Now

Let us see how we can reframe Gott’s Doomsday argument with an imprecise prior credence for the total number of humans N or, more generally, for the length of any process of indefinite duration X. Let our prior credence in X, $C_X$, be represented by a family of credal functions $\{p_i(X)\}_{i \in I}$, each normalizable and defined on $[1, +\infty)$. Thus, we avoid improper prior distributions. All we assume is that X is finite but can be indefinitely large. We have no reason to exclude from our prior credal set $C_X$ any distribution that is monotonically decreasing and such that, for all X, $p_i(X) > 0$.Footnote 11 Let then our prior credence consist in the following set of functions, all of which decrease but not at the same rate (i.e., similar to a family of Pareto distributions),

$$C_X = \left\{ p_\gamma : p_\gamma(X) = \frac{k_\gamma}{X^\gamma} \right\},$$

with γ > 1 and $k_\gamma$ a normalizing constant such that $\int_1^\infty p_\gamma(X)\,dX = 1$ (i.e., $k_\gamma = \gamma - 1$). The limiting case γ → 1 corresponds to X → ∞, but γ = 1 must be excluded to avoid a nonnormalizable distribution.

If we do not want to assume anything about the distributions in $C_X$ (other than their being monotonically decreasing), this prior set must be such that it contains functions of decreasing rates that are arbitrarily small. That is, $\forall \varepsilon > 0$, $\exists p_i \in C_X$ such that $|dp_i/dX| < \varepsilon$ for all X. This requirement applies not to any of the functions in $C_X$ but to the set as a whole.

Following the steps of the argument given above in section 1.1, we obtain the following expression for the distributions in the credal set $C_r$ representing our prior credence in r:

$$p_\gamma(r) = \int_r^\infty p(r \mid N)\,p_\gamma(N)\,dN = \frac{k_\gamma}{\gamma\,r^\gamma}.$$

Bayes’s theorem then yields an expression for the posterior credal functions in $C_{N \mid r}$:

$$p_\gamma(N \mid r) = \frac{p(r \mid N)\,p_\gamma(N)}{p_\gamma(r)} = \frac{\gamma\,r^\gamma}{N^{\gamma+1}}.$$

For each credal function in $C_{N \mid r}$, we can find a prediction for N with a 95% confidence level by solving $p_\gamma(N \le x \mid r) = \alpha$ for x, with

$$p_\gamma(N \le x \mid r) = \int_r^x p_\gamma(N \mid r)\,dN = 1 - \left(\frac{r}{x}\right)^{\gamma}.$$

We will find a prediction for N given by our imprecise posterior credal set $C_{N \mid r}$ by determining its upper bound, that is, a prediction all distributions in $C_{N \mid r}$ can agree on. Now, as γ → 1, the posterior distributions become arbitrarily heavy tailed: the expected value of N under $p_\gamma(N \mid r)$, namely $\gamma r/(\gamma - 1)$, diverges, and with it the upper bound of the predictions the credal set licenses. In other words, this imprecise representation of prior credence in N, reflecting our ignorance or indifference about N, does not yield any prediction about N.

Choosing any one of the predictions given by the individual distributions in the credal set would be arbitrary. If my prior credence could not be represented by an infinite set of probability distributions rather than by a single probability distribution, I could not avoid obtaining an arbitrarily precise prediction. Other distributions, such as distributions that decrease at different rates, could be added to the prior credal set, as long as they fulfill the criteria listed at the beginning of this section. However, no other distribution that we could include would change this conclusion.
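A short computation (my reconstruction, under the Pareto-type family assumed above) makes the divergence visible: as γ → 1, the posterior expectation of N grows without bound, so the credal set as a whole singles out no value:

```python
# For p_gamma(N|r) = gamma * r^gamma / N^(gamma+1) on [r, inf), the
# posterior expectation is E[N|r] = gamma * r / (gamma - 1).
r = 100
for gamma in (2.0, 1.5, 1.1, 1.01, 1.001):
    expectation = gamma * r / (gamma - 1)
    print(f"gamma = {gamma:6}: E[N|r] = {expectation:12.1f}")
# gamma -> 1 drives E[N|r] to infinity: no prediction survives over the set.
```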

3.3. Blurring Out Anthropic Predictions

We are ignorant about what value of the vacuum energy density ρV we should expect from our current theories. We can see that representing our prior ignorance or indifference about the value of the vacuum energy density ρV by an imprecise credal set can limit, if not entirely nullify, the role of anthropic considerations beyond that of mere boundary conditions.

If we substitute imprecise prior and posterior credences in the formula from Weinberg (2000; see sec. 1.2 above), we have $dp_i(\rho_V) = p_{\star i}(\rho_V)\,\nu(\rho_V)\,d\rho_V$, with $C_\star = \{p_{\star i}\}$ a prior credal set that will exclude all values of ρV outside the anthropic range, and ν(ρV) the average number of galaxies that form for ρV, which as in section 1.2 peaks around the mean value of the anthropic range. In order for the prior credence to express our ignorance or indifference, it should be such that it does not favor any value of ρV.

With the imprecise model, such a state of ignorance can be expressed by a set of probability distributions $C_\star = \{p_{\star i}\}$, all of which are normalizable over the anthropic range and such that, for any value of ρV and any function $p_{\star i}$ that favors it, there is another function $p_{\star j}$ in the set such that ρV is favored by $p_{\star i}$ and not by $p_{\star j}$.Footnote 12 Such a prior credal set will not favor any value of ρV. In particular, it is in principle possible to define this prior credal set so that, for any value of ρV, the lowest expectation (with respect to our credence) among the posteriors is lower than the highest expectation among the priors. If we then adopt interval dominance as a criterion for comparative confidence claims (see sec. 3.1), then no observation of ρV will be able to lend support to our anthropic prediction.
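A toy version of this construction (mine; Gaussian shapes stand in for the Dirichlet family of n. 12) shows the prior and posterior expectation intervals overlapping, so that interval dominance licenses no confirmatory boost:

```python
import numpy as np

rho = np.linspace(0.0, 1.0, 10_001)             # the anthropic range
d_rho = rho[1] - rho[0]
nu = np.exp(-0.5 * ((rho - 0.5) / 0.15) ** 2)   # toy observer density

def density(peak, width=0.2):
    p = np.exp(-0.5 * ((rho - peak) / width) ** 2)
    return p / (p.sum() * d_rho)

priors = [density(peak) for peak in np.linspace(0.05, 0.95, 10)]
posteriors = [p * nu / ((p * nu).sum() * d_rho) for p in priors]

event = np.abs(rho - 0.5) < 0.1                 # "rho_V near the prediction"
prior_p = [p[event].sum() * d_rho for p in priors]
post_p = [p[event].sum() * d_rho for p in posteriors]

print(f"prior interval:     [{min(prior_p):.2f}, {max(prior_p):.2f}]")
print(f"posterior interval: [{min(post_p):.2f}, {max(post_p):.2f}]")
# The intervals overlap: by interval dominance, observing rho_V near the
# anthropic prediction does not make the credal state more confident in it.
```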

One may object to the adoption of interval dominance in such a case. This criterion is arguably not fine grained enough to help us with most of the inferences we are likely to encounter. However, this demanding confidence-comparison rule can be motivated by the fact that we have no plausible theoretical alternative to the anthropic argument. In this context, it can be reasonable to agree to increase one’s credence in the anthropic explanation only if it does better than any other, yet unknown, alternative might have done. Nonetheless, if we adopt other confidence-comparison rules, it is possible with the imprecise model to construct prior credal sets that define a large interval over the anthropic range, such that the confirmatory boost obtained after observing ρV does not vindicate the anthropic prediction nearly as strongly as a single uniform distribution would.

This approach does not prevent Bayesian induction altogether. Because all the functions in the credal set $C_\star$ are probability distributions, they can all be updated as in usual Bayesian inferences and, in principle, converge toward a sharper credence, provided sufficient updating.

4. Conclusion

These cosmic puzzles show that, in the absence of an adequate representation of ignorance or indifference, a logic of induction will inevitably yield unwarranted results. Our usual methods of Bayesian induction are ill equipped to allow us to address either puzzle. I have shown that the imprecise credence framework allows us to treat both arguments in a way that avoids their undesirable conclusions. The imprecise model rests on Bayesian methods, but it is expressively richer than the usual Bayesian approach that only deals with single probability distributions.

Philosophical discussions about the value of the imprecise model usually center around the difficulty of defining updating rules that do not contradict general principles of conditionalization (especially the problem of dilation). But the ability to solve such paradoxes of confirmation and avoid unwarranted conclusions should be considered as a crucial feature of the imprecise model and count in its favor.

Footnotes

This article stems from discussions at the UCSC-Rutgers Institute for the Philosophy of Cosmology in summer 2013. I am grateful to Chris Smeenk and Wayne Myrvold for discussions and comments on earlier drafts. For helpful discussions, I thank participants at the Imprecise Probabilities in Statistics and Philosophy workshop at the Munich Center for Mathematical Philosophy in June 2014 and at the Graduate Colloquium series at Western University. I used excerpts and results from this article in a subsequent paper: “The Bayesian Who Knew Too Much.”

1. See, e.g., Bostrom (2002, secs. 6 and 7) and Richmond (2006) for reviews.

2. See, e.g., Goodman (1994) for opprobrium and Griffiths and Tenenbaum (2006) and Wells (2009) for praise.

3. The latter version does not violate the reflection principle—entailed by conditionalization—according to which an agent ought to have now a certain credence in a given proposition if she is certain she will have it at a later time (Monton and Roush 2001).

4. See Carroll (2001) and Solà (2013) for an overview of the cosmological constant problem.

5. The median value of the distribution obtained by such anthropic prediction is about 20 times the observed value (Pogosian, Vilenkin, and Tegmark 2004).

6. Tegmark and Bostrom (2005) used a similar reasoning to derive an upper bound on the date of a Doomsday catastrophe.

7. See, e.g., Syversveen (1998) for a short review on the problem of representing uninformative priors.

8. For an extended discussion about criteria for a representation of ignorance—with imprecise probabilities in particular—see de Cooman and Miranda (2007, secs. 4 and 5). See also Benétreau-Dupin (2015).

9. The same goes for improper priors, as was argued, e.g., by Dawid, Stone, and Zidek (1973).

10. This list is not exhaustive; see Troffaes (2007) and Augustin et al. (2014, sec. 8) for reviews.

11. In order to avoid too sharply peaked distributions (at X → 0), constraints can be placed on the variance of the distributions (i.e., a lower bound on the variance) without its affecting my argument.

12. This can be obtained, e.g., by a family of Dirichlet distributions (preferable in order to have invariance under redescription; see de Cooman et al. 2009), each of which gives an expected value at a different point in the anthropically allowed range. As in sec. 3.2, a lower bound can be placed on the variance of all the functions in the credal set in order to avoid dogmatic functions.

References

Augustin, Thomas, Coolen, Frank P. A., de Cooman, Gert, and Troffaes, Matthias C. M., eds. 2014. Introduction to Imprecise Probabilities. Hoboken, NJ: Wiley.
Benétreau-Dupin, Yann. 2015. “The Bayesian Who Knew Too Much.” Synthese 192:1527–42.
Bostrom, Nick. 2002. Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York: Routledge.
Carroll, Sean M. 2001. “The Cosmological Constant.” Living Reviews in Relativity 4:1–56.
Dawid, A. Philip, Stone, M., and Zidek, James V. 1973. “Marginalization Paradoxes in Bayesian and Structural Inference.” Journal of the Royal Statistical Society B 35:189–233.
de Cooman, Gert, and Miranda, Enrique. 2007. “Symmetry of Models versus Models of Symmetry.” In Probability and Inference: Essays in Honour of Henry E. Kyburg Jr., ed. Harper, William L. and Wheeler, Gregory, 67–149. London: College.
de Cooman, Gert, Miranda, Enrique, and Quaeghebeur, Erik. 2009. “Representation Insensitivity in Immediate Prediction under Exchangeability.” International Journal of Approximate Reasoning 50:204–16.
Dieks, Dennis. 1992. “Doomsday; Or, The Dangers of Statistics.” Philosophical Quarterly 42:78–84.
Dieks, Dennis. 2007. “Reasoning about the Future: Doom and Beauty.” Synthese 156:427–39.
Goodman, Steven N. 1994. “Future Prospects Discussed.” Nature 368:106–7.
Gott, J. Richard. 1993. “Implications of the Copernican Principle for Our Future Prospects.” Nature 363:315–19.
Gott, J. Richard. 1994. “Future Prospects Discussed.” Nature 368:108.
Griffiths, Thomas L., and Tenenbaum, Joshua B. 2006. “Optimal Predictions in Everyday Cognition.” Psychological Science 17:767–73.
Hartle, James B., and Srednicki, Mark. 2007. “Are We Typical?” Physical Review D 75:123523.
Joyce, James M. 2010. “A Defense of Imprecise Credences in Inference and Decision Making.” Philosophical Perspectives 24:281–323.
Kyburg, Henry E. 1978. “Subjective Probability: Criticisms, Reflections, and Problems.” Journal of Philosophical Logic 7:157–80.
Leslie, John A. 1990. “Is the End of the World Nigh?” Philosophical Quarterly 40:65–72.
Maor, Irit, Krauss, Lawrence, and Starkman, Glenn. 2008. “Anthropic Arguments and the Cosmological Constant, with and without the Assumption of Typicality.” Physical Review Letters 100:041301.
Martel, Hugo, Shapiro, Paul R., and Weinberg, Steven. 1998. “Likely Values of the Cosmological Constant.” Astrophysical Journal 492:29–40.
Monton, Bradley, and Roush, Sherrilyn. 2001. “Gott’s Doomsday Argument.” Unpublished manuscript, PhilSci Archive. http://philsci-archive.pitt.edu/id/eprint/1205.
Neal, Radford M. 2006. “Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning.” Technical Report no. 0607, Department of Statistics, University of Toronto.
Norton, John D. 2010. “Cosmic Confusions: Not Supporting versus Supporting Not.” Philosophy of Science 77:501–23.
Pogosian, Levon, Vilenkin, Alexander, and Tegmark, Max. 2004. “Anthropic Predictions for Vacuum Energy and Neutrino Masses.” Journal of Cosmology and Astroparticle Physics 7:1–17.
Richmond, Alasdair. 2006. “The Doomsday Argument.” Philosophical Books 47:129–42.
Smith, Cedric A. B. 1961. “Consistency in Statistical Inference and Decision.” Journal of the Royal Statistical Society B 23:1–37.
Solà, Joan. 2013. “Cosmological Constant and Vacuum Energy: Old and New Ideas.” Journal of Physics: Conference Series 453:012015.
Syversveen, Anne Randi. 1998. “Noninformative Bayesian Priors: Interpretation and Problems with Construction and Applications.” Unpublished manuscript, Department of Mathematical Sciences, Norwegian University of Science and Technology. http://www.math.ntnu.no/preprint/statistics/1998/S3-1998.ps.
Tegmark, Max, and Bostrom, Nick. 2005. “Is a Doomsday Catastrophe Likely?” Nature 438:754.
Troffaes, Matthias C. M. 2007. “Decision Making under Uncertainty Using Imprecise Probabilities.” International Journal of Approximate Reasoning 45:17–29.
Vilenkin, Alexander. 1995. “Predictions from Quantum Cosmology.” Physical Review Letters 74:846–49.
Walley, Peter. 1991. Statistical Reasoning with Imprecise Probabilities. London: Chapman & Hall.
Weinberg, Steven. 1987. “Anthropic Bound on the Cosmological Constant.” Physical Review Letters 59:2607–10.
Weinberg, Steven. 2000. “A Priori Probability Distribution of the Cosmological Constant.” arXiv preprint. http://arxiv.org/abs/astro-ph/0002387.
Weinberg, Steven. 2007. “Living in the Multiverse.” In Universe or Multiverse?, ed. Carr, Bernard, 29–42. Cambridge: Cambridge University Press.
Wells, Willard. 2009. Apocalypse When? Calculating How Long the Human Race Will Survive. Chichester: Springer.