Introduction
In 1961 Drake introduced a multi-parameter equation to estimate the number of civilizations in the galaxy capable of interstellar communication (Drake 1961).Footnote 1 Soon after, von Hoerner (1961) and Shklovskii and Sagan (1966) concluded that the equation's precision depended principally on its parameter L – the mean lifetime of a communicating civilization – because L's value was uncertain over several orders of magnitude. While subsequent advances in astrophysics have improved the precision of several parameters in the Drake equation (Burchell 2006; Frank and Sullivan 2016; Vakoch and Dowd 2015), L remains highly uncertain (Oliver and Billingham 1971; Ambartsumian and Sagan 1973; Billingham et al. 1979; Duncan 1991; Schenkel 1999; Kompanichenko 2000; Rubin 2001; Forgan 2009; Maccone 2010).
The apparent absence of communicating civilizations (Webb 2015) in our planet-rich galaxy (Cassan et al. 2012) underscores the possibility that such civilizations have short L (Webb 2015; Bostrom and Cirkovic 2011), potentially due to factors exogenous to the civilization (e.g. nearby supernovae) and/or endogenous to the civilization (e.g. self-destruction).
On Earth, control of endogenous factors that could destroy civilization – namely, Malthusian resource exhaustion, nuclear weapons and environmental corruption – has until now rested with the very few persons who command large nuclear arsenals or steer the largest national economies. However, emerging technologies could change this. For example, biotechnology (President's Council of Advisors on Science and Technology 2016) and nanotechnology (Drexler 1987) offer the prospect of self-replicating elements able to spread autonomously and calamitously worldwide, at low cost and without heavy industrial machinery. Ultimately, thousands of individuals – having varying levels of impulse control – could wield such technologies.
Intuition suggests danger rises as potentially civilization-ending technology (‘CE technology’) becomes more widely distributed, but quantitative analyses of this effect in the context of Drake's L are rare. At the extreme of technology diffusion, Cooper (2013) modelled an entire population of $10^{10}$ individuals (growing at 2% annually), each with a $10^{-7}$ annual probability of unleashing a biological agent causing 50% mortality (with 25% standard deviation). He found a mean span of L = 8000 years before extinction, defined as a population below 4000.
This paper generalizes Cooper's work. It develops a simple two-parameter mathematical model for L that applies to most scenarios of disseminated CE technology and is mathematically indifferent to specific CE technologies. For reasons summarized below, however, biotechnology may be regarded as a universal CE technology.
Biotechnology's potential to end civilizations
On Earth, microbial pandemics have ended non-technical civilizations (McNeill 1976). Antimicrobial drugs mitigate such risks only partially. Advisors to the President of the USA have already warned that biotechnology's rapid progress may soon make possible engineered microorganisms that hold ‘serious potential for destructive use by both states and technically-competent individuals with access to modern laboratory facilities’ (President's Council of Advisors on Science and Technology 2016). Indeed, small research groups engineered proof-of-principle demonstrations years ago (Jackson et al. 2001; Herfst et al. 2012; Imai et al. 2012), while recent history provides a precedent not only for a laboratory-preserved organism causing a worldwide pandemicFootnote 2 (Wertheim 2010; Rozo and Gronvall 2015), but also for the organism's descendants circulating for 30 years in the global population (Zimmer and Burke 2009). Looking forward, medical research initiatives such as the Cancer Moonshot (National Cancer Institute 2018) may, if successful, seed thousands of hospitals with exquisitely targetable cell-killing biotechnology that could, in principle, be adapted and aimed at any genetically defined target, not just cancer cells.
Any technically-capable intelligence produced by evolution likely shares this susceptibility. ‘Genetic’ processes, defined here as those that pass information to build a succeeding generation or direct the self's use of sustaining energy, are required for evolution (Farnsworth et al. 2013). Assuming that no process can be perfect, imperfections in genetic processes equate to ‘genetic diseases’, and will spur any intelligence having self-preservation drives to develop genetic manipulation technology to ameliorate those diseases. Given this motivation to alter genetic processes, plus the biological certainty that genetic processes respond to environmental inputs (e.g. food shortages), plus a general technical capacity to control environments ever more precisely, the eventual appearance of biotechnology may be expected. Cooper (2013) expects that civilizations will typically develop biotechnology and spaceflight approximately simultaneously.
Biotechnology is inescapably threatening because it is inherently dual-use (Watson et al. 2018): curing genetic disease enables causing genetic disease. Cooper (2013) uses Cohen's (1987) theorem to assert that, under any reasonable model of computing (applied here to bio-molecular computing), no algorithm (‘medical treatment’) can stop every possible piece of invasive self-replicating software. Whether Cohen's theorem strictly applies or not, the truism that defensive technology generally lags offensive technology is relevant.
Of course, any civilization can walk away from any technology. But, because other widely available technologies with civilization-ending potential, e.g. nanotechnology, lack the a priori universal desirability of biotechnology, only biotechnology will herein be further discussed.
Model and results
The baseline model assumes that all communicating technical civilizations either continue communicating forever or go silent involuntarily due to some action arising within each civilization. Two parameters model the lifespan of such civilizations: E, the number of entities (individuals, coalitions, nation-states, etc.) in the civilization who control a means to end civilization (i.e. render it uncommunicative), and P, the uniform probability per annum per entity that an entity will trigger its civilization-ending means. Entities act independently, and civilization is assumed to end with the first trigger.
The simplest model for the probability, C(y), that the civilization will still be communicative after y years, under constant E and P, is:
$$C(y) = (1 - P)^{Ey} \quad (1)$$
Solving equation (1) for y:
$$y = \frac{\ln C(y)}{E \ln (1 - P)} \quad (2)$$
Borrowing the abbreviation LD 50 from pharmacology, where it indicates the median lethal dose of a substance, it is here re-conceptualized as ‘lethal duration 50’ to indicate the number of years, under a given E and P, before civilization's accumulated probability of being uncommunicative, 1 − C(y), is 50%. Substituting C(y) = 1 − 0.50 into equation (2) yields:
$$\mathrm{LD}_{50} = \frac{\ln (0.50)}{E \ln (1 - P)} \quad (3)$$
Similarly, the number of years before civilization has a 5% chance of becoming uncommunicative is:
$$\mathrm{LD}_{05} = \frac{\ln (0.95)}{E \ln (1 - P)}$$
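The closed forms above are easy to compute directly. A minimal sketch in Python (the function names and the example E and P values are illustrative, not taken from the paper):

```python
import math

def C(y, E, P):
    """Eq. (1): probability a civilization is still communicating after
    y years, given E entities each with annual trigger probability P."""
    return (1.0 - P) ** (E * y)

def lethal_duration(x, E, P):
    """Years until the accumulated probability of silence, 1 - C(y),
    reaches x; x = 0.50 gives LD50 (eq. (3)), x = 0.05 gives LD05."""
    return math.log(1.0 - x) / (E * math.log(1.0 - P))

# Illustrative values: 100 entities, each with a 1-in-100 000 annual risk
ld50 = lethal_duration(0.50, 100, 1e-5)   # ≈ 693 years
ld05 = lethal_duration(0.05, 100, 1e-5)   # ≈ 51 years
```

By construction, C evaluated at the LD50 returns 0.5 (up to floating-point error), since `lethal_duration` simply inverts equation (1).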
Increasing the certainty of civilizational death lengthens the lethal duration only logarithmically in the surviving fraction, as Fig. 1 shows. Thus, for any E and P, LD 95 ≈ (4.3 LD 50), LD 99.9999 ≈ (20 LD 50), and LD 100[1−C(y)] ≈ (80 LD 50) where C(y) = $10^{-24}$.
Fig. 1. Survival times in a cohort of civilizations, all created at t=0. Left: Over time, the percentage of silent civilizations, 100(1 − C(t)), asymptotically approaches 100%. For any E and P, $\mathrm{LD}_{X\%} = \frac{\ln(1 - X\#)}{\ln(1 - 0.50)}\,\mathrm{LD}_{50}$, where X% is a percentage and X# is the equivalent probability. Right: This panel modifies the left panel's axes. First, the time axis is expanded compared with the left. Second, the vertical axis has been inverted to show survival, C(t), over time. The LD 50 and LD 99 points carry over from the left panel. Remarkably, the time required to reach infinitesimal survival rates, e.g. $10^{-24}$, is less than two orders of magnitude larger than the median civilizational survival time, LD 50.
Figure 2 plots the relationship between E and LD 50 for several P, and illustrates the approximation LD 50 ≈ 0.7/(E × P), derived in equation (A3) of the Mathematical Appendix.
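That approximation follows from ln(1 − P) ≈ −P for small P, and is easy to check numerically (a sketch; the sample E and P pairs are mine):

```python
import math

def ld50_exact(E, P):
    # Eq. (3)
    return math.log(0.50) / (E * math.log(1.0 - P))

def ld50_approx(E, P):
    # ln(1 - P) ≈ -P for small P, so LD50 ≈ -ln(0.50)/(E*P) ≈ 0.7/(E*P)
    return 0.7 / (E * P)

# Relative error stays around 1% (mostly from rounding 0.693 to 0.7)
pairs = [(10, 1e-4), (1e4, 1e-7), (1e6, 1e-9)]
rel_errors = [abs(ld50_exact(E, P) - ld50_approx(E, P)) / ld50_exact(E, P)
              for E, P in pairs]
```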
Fig. 2. Technology diffusion (E) and psychosociology (P) determine the civilizational lifespan (LD 50). E is the number of entities who control a means to end civilization. P is the probability per annum per entity that the entity will trigger its civilization-ending means. Given a constant E and P, LD 50 is the median number of years before civilization is expected to end. E and LD 50 have an inverse linear relationship for any P.
To calculate the mean lifespan, it is more intuitive to first calculate the number of communicating civilizations, N(w), that exist at the end of a time window extending from year y = 0 to y = w. Assuming that zero civilizations existed at y = 0, and that communicating civilizations were born at a constant rate of B per year throughout the time window, the Mathematical Appendix shows:
$$N(w) = B \int_0^w C(y)\,{\rm d}y$$

$$N(w) = B\,\frac{(1 - P)^{Ew} - 1}{E \ln (1 - P)}$$

$$N(w) \approx \frac{B}{EP}\left(1 - {\rm e}^{-EPw}\right)$$
Figure 3 plots the exact form of N(w) from equation (A8), for multiple w and EP when B=1. It illustrates that N(w) ≤ B/(EP) for any w.
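In the baseline model, N(w) is B times the integral of C(y) over the window, which integrates in closed form. A sketch of the bound (variable names are mine; the paper's exact equation (A8) lives in its Appendix and is not reproduced here):

```python
import math

def N(w, E, P, B=1.0):
    """B * integral_0^w (1 - P)**(E*y) dy, evaluated in closed form."""
    k = E * math.log(1.0 - P)             # negative for 0 < P < 1
    return B * ((1.0 - P) ** (E * w) - 1.0) / k

E, P = 1e4, 1e-7                           # so E*P = 1e-3
bound = 1.0 / (E * P)                      # B/(E*P) with B = 1, i.e. 1000
# N(w) rises monotonically toward the bound but never exceeds it
values = [N(w, E, P) for w in (100, 1000, 10000, 100000)]
```

For very small P over very long windows, floating-point underflow is a concern; the paper used an arbitrary-precision package (mpmath) for exactly this reason.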
Fig. 3. Civilizations and time. For six different values of E × P, the plot shows two equivalent quantities for time windows of various durations w: (a) $\overline {L(w)}$ = mean lifetime of communicating civilizations over time, and (b) N(w) = number of communicating civilizations over time when B=1. For both quantities, a constant B is assumed. Zero civilizations exist at time w=0. Equation (A15) mandates $\overline {L(w)}<1/(EP)$ for all w. Per equation (A9), $\overline {L(w)}$ grows substantially until a near-steady state is reached at about w = 10/(EP) years. An arbitrary-precision software package (Johansson et al. 2013) used equation (A8) to calculate N(w) and $\overline {L(w)}$.
The parameter L in the Drake equation is reformulated herein to $\overline {L(w)}$, the mean lifespan for civilizations born during a time window of duration w. This transforms the Drake equation to:
$$N(w) = B\,\overline{L(w)}$$
Thus, $\overline {L(w)}=N(w)$ when B=1, and so Fig. 3 is also a plot of $\overline {L(w)}$.
Per Fig. 3, $\overline {L(w)}$ increases with w. However, its maximum value, at any time, is constrained. Assuming all civilizations have identical E and identical P:
$$\overline{L(w)} < \frac{1}{EP}$$
Combining these two formulae and defining N as ‘N(w) for all w’ yields the Drake equation as an inequality:
$$N < \frac{B}{EP} \quad (4)$$
or, hewing to its classical form (Drake 1961):
$$N < R_{\ast}\, f_{\rm p}\, n_{\rm e}\, f_{\rm l}\, f_{\rm i}\, f_{\rm c}\, \frac{1}{EP} \quad (5)$$
Because the model addresses only endogenous involuntary silencings, adding consideration of other causes for silencings would merely reinforce this inequality.
To produce near-term risk estimates for Earth, a PubMed search informed the value of E, as follows. With the assumption of a civilization-ending technology based on some yet-to-be-described genetic technique, the number of people authoring scientific articles indexed under ‘genetic techniques’ (one of PubMed's ≈27 000 standard index terms) can be used to estimate the number of people capable of exploiting such a technique, thereby serving as a proxy for E. Thus, the PubMed search
genetic techniques[mh] AND "2008/01/01"[PDAT]:"2015/12/31"[PDAT]
performed on 10 August 2017, yielded 594 458 publications in the most recent 8-year span of complete bibliographic coverage. After eliminating non-scientific publications (of type letter, comment, news, interview, etc.), 585 004 remained, which carried 1 555 661 unique author names. Of these authors, approximately 179 765 appeared on five or more publications. This number is a maximum because some authors publish under more than one name.
Models employing non-constant E and P are possible. The simplest posits that E grows as a population might: a fixed percentage per year. If, over y years, E grows this way from some initial value $E_0$, with the growth continuously compounded, then:
$$C(y) = (1 - P)^{E_0 ({\rm e}^{ry} - 1)/r} \quad (6)$$
where r is the growth rate (e.g. 0.02 for 2% annual growth) and e = 2.71828…. Unfortunately, the unbounded exponential term renders this ‘growth model’ nonsensical for even moderately large y. Still, some insights emerge for short time horizons, as detailed in Fig. 4, which is based on equation (A18) in the Appendix. Unsurprisingly, a growing E yields an LD 50 significantly smaller than that calculated from a constant E.
Fig. 4. Drop in LD 50 when E grows. The horizontal axis corresponds to LD 50 values calculated from equation (3) and a constant E and P. If, however, E is not constant, and instead grows at a fixed percentage annually (five growth rates are shown), then LD 50 shrinks to the corresponding value on the vertical axis, according to equation (A19). So, for example, an LD 50 of 600 years derived from Fig. 2 would be revised to approximately 190 years if E grew by 1% annually. To signal wariness about exponential explosion, each solid line changes to a dotted line when the number of entities has increased a million-fold (i.e. $E_y/E_0 \ge 10^6$).
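The worked example in the caption can be reproduced from the time re-mapping that equation (A19) implies: year y of the constant-E model corresponds to year (1/r) ln(1 + ry) when E grows at rate r. A sketch (the function name is mine):

```python
import math

def ld50_growing(ld50_constant, r):
    """Revised LD50 when E grows at continuously compounded rate r,
    via the re-mapping t = (1/r) * ln(1 + r*y) (cf. eq. (A19))."""
    return math.log(1.0 + r * ld50_constant) / r

# The caption's example: 600 years at constant E, with E growing 1% annually
revised = ld50_growing(600.0, 0.01)
```

This returns just under 195 years, consistent with the ‘approximately 190’ read off the figure; as r approaches zero, the function returns the constant-E value unchanged.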
Discussion of model
Unless explicitly noted, all discussion refers to the baseline model in which E is constant.
Equation (1) provides the probability, C(y), that a civilization survives endogenous involuntary silencing threats until some L = y. The lethal durations, LD 50 and its variants, are probabilistic statements of this L. Because the model terminates upon the first use of a civilization-ending technology, more complicated models, such as the Poisson distribution, are not required.
Thus, the model is simple, but not unreasonably so. However, with only two parameters, it is important to understand their inherent assumptions.
Model discussion: E and P (and B)
In broad terms, E characterizes a CE technology and its availability, while P characterizes the psychology and sociology of the entities who possess the technology. Although the loss of interstellar communicativeness is equated to the end of civilization, other endpoints (e.g. complete extinction) could be substituted. The only criteria are consistency of the endpoint, independence of the entities, and termination of the model upon first triggering.
Numerous subtleties attend the definitions of E and P.
First, E applies to any CE technology, be it nuclear, nano-, bio-, or another. The CE technology never fails to end civilization once triggered. The effects of ‘near miss’ extinction events on population and psychology are ignored.
Second, E includes only entities that possess (or can acquire) the ‘full stack’ of CE technology. That is, they must have the capability to make or otherwise obtain the weapon, and to deliver it in quantities that render the civilization uncommunicative. So, for example, even though designs for nuclear weapons are comparatively well known (Phillips Reference Phillips1978), E for Earth remains only ≈ 2 (representing the leaders of the USA and Russia).Footnote 3 The self-propagating nature of biological weapons would simplify, but not eliminate, the delivery challenge.
Third, to the extent that machine intelligences possess CE technology, they could also be counted in E. (Exemplar: ‘SkyNet’ from the Terminator movies.)
Fourth, E reflects a balance between offensive and defensive technologies. Thus, developing and readying defensive technology offers a straightforward, albeit challenging, path to markedly decrease E.
Fifth, P is the sum across all reasons, intended or not, that an entity might trigger the CE technology. Most are psychosocial, e.g. greed, hate, stupidity, folly, gullibility, power-lust, mental illness, ineptitude, non-fail-safe design, etc. The Bulletin of the Atomic Scientists’ ‘doomsday clock’ (Anonymous 2002) has similarities to P.
Sixth, the model assumes constant E and P throughout the time window of interest. This is unlikely to occur in a real civilization, given the dynamics of offensive/defensive technologies, population, sociopolitical stability, and technology diffusion. Simple model extensions would have E and P vary over time, or sum across subpopulations of entities each with their own $E_i$ and $P_i$, or sum across multiple CE technologies each with their own $E_j$ and $P_j$.Footnote 4
Unlike the model of Cooper (2013), population growth – and the concomitant growth in E – is omitted from the baseline model because all realistic non-zero growth rates become nonsensical when compounded (exponentiated) over eons. Over short timeframes, the effect of a growing E can be reasonably equated to a speed-up in time. For example, when E is constant and the model reaches some state at year y, a situation in which E is growing by 2% annually will attain the same state significantly earlier, at year $50\,\ln(1+y/50)$ according to equation (A19).
The model's flexibility could be improved – at the cost of great mathematical complexity – by assigning probability distributions to E and P and convolving them. However, models that assume a distribution around some mean value for P (denoted P mean) will yield lower values for C(y) and LD 50 than the present model, because of the positive exponent in the definition of C(y). Thus, this model's dispiritingly low values for LD 50 nevertheless represent a civilization's best-case outcome for a given P mean.
This is most obviously appreciated in the edge case where a single entity has its P = 1, for example, an entity who acquires the skills of a CE technology specifically to end civilization. As soon as a single qualified entity has P = 1, then the overall civilizational P is also 1, and LD 50 (in fact, all LD x) is zero.
Civilizations spanning multiple planets should be treated as multiple civilizations, each modelled separately with their own E and P. Modelling them as a single civilization assumes all the planets' civilizations die from one attack – an unnecessarily stringent requirement. Of course, P might change on planets that see a sister planet destroy itself.
Although colonization would imply a non-constant B, the model would still apply so long as B remains below some constant $B_{\rm max}$. Using $B_{\rm max}$ in the model would provide an upper bound for N(w). Geometrically increasing B would require re-working the model, but the barrenness of the galaxy weighs against this possibility: Tipler (1980) and others (Webb 2015; Jones 1981; Armstrong and Sandberg 2013) note that a single civilization colonizing at even moderate rates of geometric increase would fill the galaxy in only a few million years, and we do not observe a full galaxy.
Furthermore, assuming that the technology of interstellar colonization is far more daunting than biotechnology, and that the self-preservation drives of individual intelligences far exceed any elective desire to migrate off-planet, it is reasonable to expect that, as a rule, civilizations will develop and use sophisticated biotechnology before dispersing themselves on other planets (Cooper 2013). Thus, the experience of 20th century Earth is likely typical, i.e. the progress of medicine and public health in the era antedating genetic biotechnology creates a population explosion, so that civilization consists of a large, dense, mobile population on a single homeworld at the time that potentially CE biotechnology is developed. Because such ecological conditions are conducive to the spread of communicable agents, it is reasonable to hypothesize that all planetary civilizations will face existential threats from contagious micro-organisms – whether engineered or not – before they become vigorous interstellar colonizers (Cooper 2013).
The model could also apply to civilizations based on networked machine intelligences when epidemic malware is a possibility. Because diversity among evolution-produced organisms would likely be higher than among designed software, building CE technology against machine intelligences could be comparatively easy.
Model discussion: stability
It may be argued that a potential CE technology cannot exist for long time spans without a defensive technology being developed, i.e. that E cannot exceed zero for thousands, millions, or billions of years.
Several considerations weaken this proposition, especially as relates to biotechnology. These considerations are illustrative and necessarily speculative. Future biotechnological progress will elucidate the extent to which they hold.
First, reliance on a single CE technology is not required. Instead, multiple CE technologies may exist serially, each enabling a multitude of different attacks, with each attack requiring a different defense. This is akin to the inventory of ‘zero day exploits’ that present-day entities accumulate to penetrate computer systems.
Second, a long period of E > 0 can be viewed as the concatenation of shorter time periods having $E_i > 0$, where each $E_i$ derives from a separate CE attack possibility that is eventually countered by a defense tailored to that attack. For example, if the frailties of life allow for a million different attacks,Footnote 5 and they are arrayed sequentially, and it takes 1 year to tailor a defensive technology for each, then E > 0 for $w = 10^6$ years. If no periods of E = 0 were interspersed between the $E_i > 0$ periods, then the time window w would equal elapsed time in the universe. In scenarios having interspersed $E_i = 0$ periods, elapsed time would exceed window duration.
Third, the mere development of defensive technology is not sufficient. The technology must be fully fielded. That is, unless widespread pre-exposure vaccination is possible, an attack must be detected, the agent(s) characterized, and the remedy developed, tested, manufactured (perhaps in billions of doses), distributed and administered – all of which must succeed before the attack can take root in the population. This is a formidable challenge requiring multiple sub-technologies in the near term, or a single future technology that is currently indistinguishable from magic.
Fourth, defensive technology may be impossible on first principles. For example, every known life form adapts its gene expression to its environment. An offensive technology whose only defense necessitated extinguishing this genetic responsiveness would seem unobtainable.
Fifth, mere possession of defensive technology is not sufficient – timely and correct decisions to activate defenses on a civilizational scale must also occur. Thus, a civilization's decision-making process, be it political, machine-based, or other, is also a target for CE technologies. This means E has a small psychosociological component.Footnote 6 Decentralized decision-making, such that every individual intelligence possessed the counter-CE technology and independently decided when and if to self-medicate, would require a level of trust in the population that no government on earth has so far developed.
Sixth, generalizing the above scenario, CE technologies need not be highly lethal. To sustain itself, a densely populated world may rely on critical infrastructure and/or heavily optimized industrial processes. Direct or indirect disruption of these essential functions could cause sufficient social chaos to render a civilization uncommunicative.
Finally, if EP is large throughout the universe, then the model does not have to apply for millions or billions of years. For example, if $E = 10^3$ and $P = 10^{-3}$ then LD 50 ≈ 0.7 years and the probability of surviving to 25 years is $< 10^{-9}$.
Discussion of results
Results discussion: Earth
From equation (A3), achieving LD 50 ≥ 1000 years requires $EP \le 7 \times 10^{-4}$. Thus, with E = 2 today, $P \le 0.00035$ is required.
Given the pace of biotechnology's progress, plus the irresistible pressure to continue that progress for universally-desired medical purposes, plus the dual-use potential of the technology, plus its potential worldwide reach, many humans could soon have the capacity to end Earth's technical civilization, driving E ≫ 2. In a recent 8-year span, more than 1.5 million people participated in the ‘genetic techniques’ enterprise at a level sufficient to warrant authorship on a scientific article. Almost 180 000 of them authored five or more such articles. The number actually engineering artificial organisms today is certainly far smaller, but clearly, a large reservoir of hands-on molecular genetics competence already exists on Earth.
Although LD 50 has been our focus, planning with lower thresholds (Suskind 2006), e.g. LD 05 (≈ LD 50/13.5) or LD 01 (≈ LD 50/70), would mitigate unanticipated rapid rises in E or P. For example, comparing a CE technology's LD 01 to the anticipated time needed to develop defensive counter-technology might drive policymakers to speed such development.
Given the PubMed authorship numbers, a few new biotechnological innovations could reasonably and quickly raise E to $10^4$. If so, and $P = 10^{-7}$, then LD 01 ≈ 10 years. If E became larger, LD 01 would become smaller. The short LD 01 time span is concerning, given today's comparatively slow pace of antimicrobial innovation (the common cold and many other infections remain incurable and without vaccines), and strongly argues that defensive technology development must be expanded and must occur simultaneously with any therapeutic (offensive) development.
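These figures follow directly from equation (2); a quick check in Python using the numbers in this paragraph:

```python
import math

def lethal_duration(x, E, P):
    """Years until the accumulated probability of silence reaches x (eq. (2))."""
    return math.log(1.0 - x) / (E * math.log(1.0 - P))

# E = 10^4 entities, P = 10^-7 per entity per year
ld01 = lethal_duration(0.01, 1e4, 1e-7)   # ≈ 10 years, as stated
ld50 = lethal_duration(0.50, 1e4, 1e-7)   # ≈ 693 years
```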
An especially concerning scenario arises if, someday, hospitals employ people who routinely write patient-specific molecular-genetic programmes and package them into replicating viruses that are therapeutically administered to patients, especially cancer patients. If the world attained the European Union's per capita hospital density,Footnote 7 this could mean two hundred thousand hospitals employing perhaps 1 million people who might genetically engineer viruses every workday. Should techniques emerge for a highly communicable therapeutic virus – against which vaccination would be refused, as that would preclude future cancer therapy – and E reached $10^6$, then attaining an LD 01 of just 10 years would require $P < 10^{-9}$, perhaps an impossibility, given human nature.
Results discussion: Drake equation
By simulating an ensemble of civilizations, the present model challenges Burchell's (2006) assertion that L in the Drake equation is ‘not truly estimable [estimatable] without observation of a set of societies.’ Although estimating P from first principles cannot be done for extraterrestrial civilizations, estimating E and the product EP may be tractable within the assumptions of the model, as follows.
Lower-bound estimates for E would derive from deep understanding of the genetic mechanisms of life – all possible mechanisms, not just DNA/RNA – and from the possibilities of biotechnology as applied to those mechanisms. Thus, estimates of E would derive from understanding the gamut of intelligence-compatible biologies, an understanding that smart human biochemists could perhaps achieve ex nihilo, without interstellar travel or communication. Machine intelligences would have analogous considerations. The existence of other CE technologies might increase E further.
Because of equation (4), EP can be constrained by searching for extraterrestrial intelligence (SETI). With B increasingly well understood, constraining N in equation (4) constrains EP. Thus, if SETI efforts someday yielded a conclusion such as ‘We estimate that no more than $N_x$ communicating civilizations exist,’ then $EP < B/N_x$.
If both EP and E can be estimated, then the value of P is constrained. It is interesting to note that, given its dependence on psychological factors, possessing a constraint or estimate of P would be the first step toward a quantitative epidemiology of alien psychologies.
The model applies so long as opportunities to deploy civilization-ending means predate the ability to counter all such attacks (and accidents). That is, whenever E > 0, equation (3) produces a finite value for LD 50 and civilization is at risk, assuming P > 0. Whether any measures could achieve P = 0, short of pervasive and perfect surveillance of entities, is unknown.
The model's low values for lifespan, $\overline {L(w)}$, have implications for SETI strategy. If geometrically increasing interstellar colonization circumvents short civilizational lifespan, then, all other factors being equal, communicating civilizations would be longest-lived where such colonization is easiest, e.g. where the time and/or energy required to move between habitable planets is smallest. This consideration adds to existing reasons why SETI might target zones of densely collected habitable planets (Turnbull and Tarter Reference Turnbull and Tarter2003).
Results discussion: the Fermi paradox and the great filter
To date, in a visible universe of ≈$10^{24}$ stars and their planets, only Earth shows evidence of intelligent life. This apparent paradox, noted by Enrico Fermi and others (Webb Reference Webb2015), could be explained by a ‘Great Filter’ that all but prevents communicating civilizations from forming or surviving (Hanson Reference Hanson n.d.). The Great Filter may be technological in origin if ‘(a) virtually all sufficiently advanced civilizations eventually discover it and (b) its discovery leads almost universally to existential disaster’ (Bostrom Reference Bostrom2008).
Most remarkably, the present model supplies the quantitative 24 orders-of-magnitude winnowing required of a Great Filter, achieving it with a multiplier of only about two orders of magnitude on $LD_{50}$. For example, if E = $10^6$ and (optimistically) P = $10^{-9}$, then $LD_{50}$ ≈ 700 years, and $LD_{100[1-C(y)]} \approx 80\,LD_{50} \approx$ 56 000 years when C(y) = $10^{-24}$. That is, for this E and P, we expect only one civilization in $10^{24}$ to still be communicating after 56 000 years, and even a galactically-short 100 000-year lifespan is effectively impossible because only one civilization in $10^{42}$ remains communicative.
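These figures can be checked with a few lines of arithmetic. The sketch below uses the appendix approximation $LD_{50} = \ln 2/(EP)$; small differences from the rounded values in the text (700 years, one in $10^{42}$) come from rounding:

```python
import math

E, P = 1e6, 1e-9                       # entities, per-entity annual risk

# LD50 ~ ln(2)/(E*P), per the appendix's small-P limit.
ld50 = math.log(2) / (E * P)           # ~693 years

# Years until the surviving fraction C(y) = 0.5**(y/LD50) falls to 1e-24.
y_filter = ld50 * 24 / math.log10(2)   # ~80 * LD50, ~55,000 years

# Orders of magnitude of winnowing accumulated after 100,000 years.
oom_100k = (100_000 / ld50) * math.log10(2)
```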
Overall, therefore, I would advise advanced technical civilizations to optimize not for megascale computation (Sandberg et al. Reference Sandberg, Armstrong and Cirkovic2017), engineering (Dyson Reference Dyson1960) or energetics (Kardashev Reference Kardashev1964), but for defense against individually-possessable self-replicating existential threats, such as microbes or nanomachines.
Author ORCIDs
John G. Sotos 0000-0003-0176-2907.
Acknowledgments
I am grateful for the support and wise counsel of Jennifer Esposito, Mike Morton, Barry Hayes, and, of course, Tanya Roth. However, all errors are the author's responsibility. My thanks go to the anonymous reviewer for spurring development of Fig. 4.
Mathematical Appendix
Math 1: ln(1 − P) = −P as P → 0
To solve
$$f(P) = \frac{\ln (1-P)}{-P}$$
at P = 0, we observe that f(0) evaluates to 0/0, making the expression indeterminate. However, it also means L'Hôpital's rule applies in the second step below:
$$\lim_{P \to 0} f(P) = \lim_{P \to 0} \frac{\ln (1-P)}{-P} = \lim_{P \to 0} \frac{-1/(1-P)}{-1}$$
Hence:
$$\lim_{P \to 0} f(P) = \lim_{P \to 0} \frac{1}{1-P} = 1$$
So, when P → 0 we can use:
$$\ln (1-P) \approx -P \qquad (\text{A1})$$
For our purposes this approximation is excellent, viz. ln(1 − 0.1) = −0.105 and ln(1 − 0.001) = −0.0010005.
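A quick numerical check of the approximation (a sketch; values chosen only for illustration):

```python
import math

# Relative error of ln(1-P) ~ -P shrinks roughly as P/2.
errors = {}
for P in (0.1, 0.01, 0.001):
    exact = math.log(1 - P)
    errors[P] = abs((-P - exact) / exact)
# errors[0.1] ~ 5e-2; errors[0.001] ~ 5e-4
```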
Math 2: LD50 as P → 0
We start with equation (3) defining $LD_{50}$, then simultaneously take the limit and substitute equation (A1) into it:
$$\lim_{P \to 0} LD_{50} = \lim_{P \to 0} \frac{\ln (1/2)}{E \ln (1-P)} = \frac{\ln (1/2)}{-EP} \qquad (\text{A2})$$
$$LD_{50} \approx \frac{\ln 2}{EP} \approx \frac{0.69}{EP} \qquad (\text{A3})$$
Thinking solely in terms of exponents:
$$\log_{10} LD_{50} \approx \log_{10} 0.69 - \log_{10} E - \log_{10} P \approx -0.16 - \log_{10} E - \log_{10} P$$
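The exponent arithmetic can be sketched directly (illustrative code, not from the paper):

```python
import math

def ld50_small_p(E, P):
    """LD50 in the P -> 0 limit: ln(2)/(E*P)."""
    return math.log(2) / (E * P)

# In exponent form: with E = 10**a and P = 10**-b,
# log10(LD50) ~ b - a - 0.16, since log10(ln 2) ~ -0.16.
a, b = 6, 9
exponent = math.log10(ld50_small_p(10**a, 10.0**-b))   # ~2.84
```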
Math 3: N(w) – Exact
Recall from equation (1) that C(y) is the fraction of civilizations still communicating y years after their birth. Here, however, y will denote a calendar year within a window of the galaxy's history rather than a civilization's age.
First, define:
$$B(y) = \text{the number of communicating civilizations born in year } y$$
Next, assume we are interested in a window of time in the galaxy's history running from year 0 to year w, where no civilizations were present at y = 0. We want to know the number of communicating civilizations that exist at the end of the window, i.e. at time w.
To be considered alive at year w, any civilization born in some year y will have to communicate for w−y more years. Thus:
$$N(w) = \sum_{y=0}^{w} B(y)\, C(w-y)$$
Assuming B(y) is a constant B (having units: civ year$^{-1}$):
$$N(w) = B \sum_{y=0}^{w} C(w-y) = B \sum_{y=0}^{w} C(y) \qquad (\text{A4})$$
We can replace summation with integration:
$$N(w) = B \int_0^w C(y)\, dy \qquad (\text{A5})$$
To solve for N(w), assuming all civilizations have the same E and P, we define:
$$S \equiv (1-P)^E \qquad (\text{A6})$$
Substituting the above into equation (1) yields:
$$C(y) = (1-P)^{Ey} = S^{\,y} \qquad (\text{A7})$$
Then substituting equation (A7) into equation (A5):
$$N(w) = B \int_0^w S^{\,y}\, dy = B \left[ \frac{S^{\,y}}{\ln S} \right]_0^w$$
This yields the exact form of N(w):
$$N(w) = \frac{B \left( S^{\,w} - 1 \right)}{\ln S} \qquad (\text{A8})$$
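As a sanity check, the closed form in equation (A8) can be compared against the direct year-by-year sum it replaced (a sketch with arbitrary illustrative parameters):

```python
import math

def n_exact(B, E, P, w):
    """Equation (A8): N(w) = B*(S**w - 1)/ln(S), with S = (1-P)**E."""
    ln_s = E * math.log(1 - P)
    return B * (math.exp(ln_s * w) - 1) / ln_s

def n_sum(B, E, P, w):
    """Pre-integration form: sum the surviving fraction of each birth cohort."""
    return B * sum((1 - P) ** (E * y) for y in range(w + 1))

B, E, P, w = 0.01, 1e4, 1e-7, 2000   # arbitrary illustrative values
```

With small EP the integral and the sum agree to well under one percent.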
Math 4: N(w) – As P → 0 and w → ∞
In many scenarios for N(w), P → 0 and/or w → ∞. We here derive an approximation for such conditions.
First, expand the exact definition of N(w) in equation (A8):
$$N(w) = \frac{B \left[ (1-P)^{Ew} - 1 \right]}{E \ln (1-P)}$$
Now substitute with the results of equation (A1), namely ln(1 − P) ≈ −P when P is small and, consequently, (1 − P) ≈ $e^{-P}$:
$$N(w) \approx \frac{B \left( e^{-PEw} - 1 \right)}{-EP} = \frac{B \left( 1 - e^{-PEw} \right)}{EP}$$
As w becomes large, $e^{-PEw} \to 0$. Thus:
$$\lim_{w \to \infty} N(w) = \frac{B}{EP} \qquad (\text{A9})$$
Using Fig. 3, which was calculated using the exact form of N(w) in equation (A8), we observe the approximate value-range of w for which the limit of equation (A9) holds:
$$N(w) \approx \frac{B}{EP}, \qquad w \gg \frac{1}{EP} \qquad (\text{A10})$$
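Numerically, the plateau is reached quickly (illustrative values; a sketch, not the computation behind Fig. 3):

```python
import math

def n_approx(B, E, P, w):
    """Small-P form of N(w): B*(1 - exp(-P*E*w))/(E*P)."""
    return B * (1 - math.exp(-P * E * w)) / (E * P)

B, E, P = 0.01, 1e4, 1e-7    # so E*P = 1e-3 and 1/(E*P) = 1000 years
plateau = B / (E * P)        # the w -> infinity limit, equation (A9)
# At w = 5/(E*P), N(w) is already within ~1% of the plateau.
```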
Math 5: $\overline {L(w)}$
As many others have noted, the Drake equation can be reduced to a two-parameter form:
$$N = B\, L \qquad (\text{A11})$$
where N is the number of communicating civilizations, B is the birth rate of communicating civilizations and L is the mean lifetime of all birthed civilizations.
Applying this to our approach of examining time windows having constant B, we can rewrite equation (A11) as:
$$N(w) = B\, \overline {L(w)} \qquad (\text{A12})$$
where $\overline {L(w)}$ is the mean lifetime of a civilization born during the time window that extends from 0 to w.
Rearranging equation (A12) and then substituting from equation (A8) yields:
$$\overline {L(w)} = \frac{N(w)}{B} = \frac{S^{\,w} - 1}{\ln S} \qquad (\text{A13})$$
To derive a simple approximation for $\overline {L(w)}$, recall from equation (A10) that N(w) ≈ B/(EP). It is immediately apparent from equation (A12) that:
$$\overline {L(w)} \approx \frac{1}{EP} \qquad (\text{A14})$$
Finally, the ratio of equations (A2) to (A14) is noteworthy:
$$\frac{LD_{50}}{\overline {L(w)}} \approx \frac{\ln 2 / (EP)}{1/(EP)} = \ln 2 \approx 0.69$$
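The ratio can be confirmed numerically (a sketch with illustrative parameters):

```python
import math

def mean_lifetime(E, P, w):
    """Equation (A13) with B divided out: (S**w - 1)/ln(S), S = (1-P)**E."""
    ln_s = E * math.log(1 - P)
    return (math.exp(ln_s * w) - 1) / ln_s

E, P = 1e4, 1e-7                       # illustrative values, E*P = 1e-3
lbar = mean_lifetime(E, P, 1_000_000)  # large w: ~1/(E*P) = 1000 years
ld50 = math.log(2) / (E * P)           # ~693 years
ratio = ld50 / lbar                    # ~ln(2) ~ 0.69
```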
Math 6: Maximum $\overline {L(w)}$
To find the w where $\overline {L(w)}$ is maximal, we set the derivative of $\overline {L(w)}$ (from equation (A13)) to zero:
$$\frac{d}{dw} \left[ \frac{S^{\,w} - 1}{\ln S} \right] = \frac{S^{\,w} \ln S}{\ln S} = S^{\,w} = 0$$
Given 0 < S < 1, $S^{\,w} = 0$ only at w = ∞. So, using equations (A13) and (A1):
$$\overline {L(w)}_{\rm max} = \overline {L(\infty )} = \frac{0 - 1}{\ln S} = \frac{-1}{E \ln (1-P)} \approx \frac{1}{EP} \qquad (\text{A15})$$
Seeking to show $\overline {L(w)}_{\rm max}<1/(EP)$ for all P, we begin by observing:
$$\overline {L(w)}_{\rm max} = \frac{-1}{E \ln (1-P)} = \frac{1}{E \left[ -\ln (1-P) \right]}$$
The Mercator series is:
$$\ln (1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots \qquad (-1 < x \le 1)$$
We combine the two preceding formulae into an inequality, then set x = −P:
$$\ln (1-P) = -P - \frac{P^2}{2} - \frac{P^3}{3} - \cdots$$
$$-\ln (1-P) = P + \frac{P^2}{2} + \frac{P^3}{3} + \cdots > P$$
Substituting back into the definitions of $\overline {L(w)}_{\rm max}$ gives, for 0 < P < 1:
$$\overline {L(w)}_{\rm max} = \frac{1}{E \left[ -\ln (1-P) \right]} < \frac{1}{EP}$$
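A direct numerical check of the bound across several values of P (a sketch):

```python
import math

def lbar_max(E, P):
    """Maximum mean lifetime (attained as w -> infinity): -1/(E*ln(1-P))."""
    return -1.0 / (E * math.log(1 - P))

# Because -ln(1-P) = P + P**2/2 + ... > P, the bound 1/(E*P) always
# exceeds lbar_max, though the two converge as P -> 0.
for P in (0.5, 0.1, 1e-3, 1e-6):
    assert lbar_max(1.0, P) < 1.0 / P
```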
Math 7: Model of E Growing Over Time
We wish to model E growing over time and apply the model to time spans that do not cause exponential explosion.
First, recall equation (1), the basis for the baseline ‘Constant-E’ model:
$$C(y) = (1-P)^{Ey}$$
The exponent Ey has units of entity-years and signifies the civilization's total exposure to destruction events. Renaming E to $E_0$ to reinforce its constant nature in the Constant-E model, we can write the exposure accrued after y years as:
$$X_C(y) = E_0\, y \qquad (\text{A16})$$
Similarly, we can calculate exposure for a model in which E grows with time. Equation (6) defined $E_y$ as growing from an initial value of $E_0$ at an annual rate of r over a period of y years, with the growth continuously compounded:
$$E_y = E_0\, e^{ry}$$
In this ‘Growing-E’ model, the civilization's exposure to destruction events is:
$$X_G(y) = \int_0^y E_0\, e^{rt}\, dt = \frac{E_0}{r} \left( e^{ry} - 1 \right) \qquad (\text{A17})$$
We can equate the two exposures from the right-hand-sides of equations (A16) and (A17), taking care to distinguish the two different y:
$$E_0\, y_1 = \frac{E_0}{r} \left( e^{r y_2} - 1 \right)$$
This equation says that a civilization's destruction-exposure after $y_1$ years, as calculated by the constant model, equals the exposure after $y_2$ years, as calculated by the growth model.
Continuing, we can cancel the $E_0$ terms and express $y_2$ in terms of $y_1$:
$$y_1 = \frac{e^{r y_2} - 1}{r}$$
$$y_2 = \frac{\ln \left( r\, y_1 + 1 \right)}{r} \qquad (\text{A18})$$
Analytically, equation (A18) provides a shortcut for converting results from the constant-E model to the growing-E model. For example, we can define the $LD_{50}$ for the growing-E model as:
$$LD_{50}[G] = \frac{\ln \left( r \cdot LD_{50}[C] + 1 \right)}{r} \qquad (\text{A19})$$
where the [G] and [C] indicate the growing-E model and constant-E model, respectively. See Fig. 4.
Although equation (A18) does not have an explicit exponential term, it must still be applied carefully because it implicitly assumes that the number of entities can grow exponentially without limit, per equation (6). Alternative growing-E models may be derived, e.g. using linear growth.
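Equation (A18)'s conversion is easy to apply in code. A sketch (the growth rate r and the constant-model $LD_{50}$ below are illustrative, not values from the paper):

```python
import math

def constant_to_growing_years(y1, r):
    """Equation (A18): the growing-E duration y2 that accumulates the same
    destruction-exposure as y1 years under the constant-E model."""
    return math.log(r * y1 + 1) / r

# Illustration: a constant-E LD50 of ~693 years shrinks to ~207 years
# if the number of capable entities grows 1% per year (r = 0.01).
ld50_growing = constant_to_growing_years(693.0, 0.01)

# Sanity check: as r -> 0 the two models coincide (y2 -> y1).
```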