Introduction
Thousands of exoplanets have been discovered in the last two decades, spurring increased interest in exobiology (e.g., Schneider, 2016). It has been argued that our current concepts of extraterrestrial intelligence should be reconsidered in the light of recent advances in the field of artificial intelligence (AI). It is now conceivable that the dominant form of intelligence in the universe may be artificial rather than biological (Shostak, 2018; Gale et al., 2020). While the emergence of AI represents an additional filter, such an intelligence may be longer-lived than its creators, offsetting the effect of that additional filter.
The Drake equation (Drake, 1965) can be used to evaluate the abundance of technological extraterrestrial intelligence in the galaxy. While variants exist, the most commonly used form of the equation is as follows:

$$N = R_{*}\, f_{\rm p}\, n_{\rm e}\, f_{\rm l}\, f_{\rm i}\, f_{\rm c}\, L \qquad (1)$$
where R* is the rate of star formation in the galaxy (yr^−1), f_p is the fraction of stars with planets, n_e is the average number of potentially habitable earth-like planets per star, f_l is the fraction of habitable planets on which complex life develops, f_i is the fraction of life-bearing planets that develop intelligence, f_c is the fraction of planets with intelligent life on which observable technology develops and L is the mean duration of these technological civilizations.
Seager (2018) developed a modified Drake equation to guide searches for biosignatures from observable planets and concluded that the number of observable biosignatures with current technology is as low as 1–4, depending on the technology, even assuming very optimistic probabilities of life occurring on habitable planets.
The result of the Drake equation is generally understood to refer to biological intelligence, although this is not usually stated explicitly. However, there is no reason why it could not be used to evaluate the prevalence of AIs. Predictions made with the Drake equation differ by as much as eight orders of magnitude (Sandberg et al., 2018), which means that a simple comparison of two predictions (biological versus artificial intelligence) with the equation is meaningless without context.
A first attempt to get a better grip on the variables in the Drake equation based on Monte Carlo simulations was made by Forgan (2009). The number of advanced civilizations in the Milky Way estimated in this study ranged from 360 to 38 000, depending on the assumptions made.
An attempt to evaluate the uncertainty of the Drake equation with a statistical argument was made by Maccone (2010), who found that the number of observable extraterrestrial intelligences in the galaxy runs into the thousands, but with a standard deviation exceeding the mean. Glade et al. (2012) developed a stochastic model based on the Drake equation with the purpose of making the estimation time-dependent. More recent statistical approaches are those of Engler and von Wehrden (2019) and Bloetscher (2019). Their estimates, obtained by very different methods, diverge widely: between 7 and 300 technological species over the entire life span of the Milky Way to date (Engler and von Wehrden, 2019) and between 2 and 250 intelligent civilizations in the Milky Way at any given time (Bloetscher, 2019). Sandberg et al. (2018) conducted a Monte Carlo simulation based on a set of variables used in the literature for the Drake equation. They concluded that the proportion of possible model variants leading to the conclusion that we are alone in the Milky Way is about 30%. A second simulation that was not constrained to parameter values found in the literature led these authors to conclude that the likelihood of an empty galaxy exceeds 30%. In the studies of Engler and von Wehrden (2019), Bloetscher (2019) and Sandberg et al. (2018), the absence of observed technological signals is attributed to the sparseness of technological life in the Milky Way.
The purpose of this study is to evaluate the likelihood that the universe is dominated by artificial rather than biological intelligence. To that effect, Monte Carlo simulations are conducted with two linked versions of the Drake equation: one to estimate the number of observable extraterrestrial biological intelligences and one to estimate the number of observable extraterrestrial AIs.
Methodology
Equation (1) is used as the basis of a Monte Carlo calculation. For each of the parameters in equation (1), a probability density function is assumed. A large number of samples is taken from each distribution and used in equation (1) to obtain a sample of the probability distribution of the number of observable intelligences in the Milky Way. Following the second simulation of Sandberg et al. (2018), we use a log-uniform distribution for the variables R*, f_p, n_e, f_i and f_c. The assumed ranges are those of Sandberg et al. (2018): 1–100 for R*, 0.1–1 for f_p, 0.1–1 for n_e and 0.001–1 for f_i. The advantage of the log-uniform distribution is that it assumes that each order of magnitude is equally likely, so it does not presume any knowledge or preference within the defined range. For instance, a log-uniform distribution from 1 to 100 assigns a probability of 50% to the range 1–10 and 50% to the range 10–100, whereas a uniform distribution assigns about 9.09% probability to the range 1–10 and 90.9% to the range 10–100.
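As an illustration of this sampling scheme, a minimal Python sketch is given below. The helper name log_uniform, the use of NumPy and the fixed seed are my own choices; f_l and L appear as placeholders because their distributions are discussed in the following paragraphs.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N_SAMPLES = 100_000

def log_uniform(low, high, size):
    """Sample so that each order of magnitude within [low, high] is equally likely."""
    return 10.0 ** rng.uniform(np.log10(low), np.log10(high), size)

# Log-uniform parameters with the ranges adopted above
R_star = log_uniform(1, 100, N_SAMPLES)    # star formation rate (yr^-1)
f_p = log_uniform(0.1, 1, N_SAMPLES)       # fraction of stars with planets
n_e = log_uniform(0.1, 1, N_SAMPLES)       # potentially habitable planets per star
f_i = log_uniform(0.001, 1, N_SAMPLES)     # fraction of life-bearing planets developing intelligence
f_c = log_uniform(0.01, 1, N_SAMPLES)      # fraction developing interstellar communication (biological case)

# Placeholders: f_l follows equation (3) and L follows the scenario-dependent
# distributions described below.
f_l = np.ones(N_SAMPLES)
L = log_uniform(100, 1e10, N_SAMPLES)

N = R_star * f_p * n_e * f_l * f_i * f_c * L   # equation (1), evaluated sample by sample
print(np.median(N), np.quantile(N, [0.01, 0.99]))
```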
For f_c, we distinguish between f_c,b, the probability that an intelligent biological species develops the ability to communicate over interstellar space, and f_c,AI, the probability that an intelligent biological species develops an AI capable of communicating over interstellar space. For the former, we adopt the range 0.01–1 of Sandberg et al. (2018), whereas, for the latter, we assume a range of 0.0001–1. This range is justified by assuming that the development of an AI represents an additional filter with the same selectivity as the filter of an intelligent species developing the ability to communicate across interstellar space. If f_c,AI results from two such filters with selectivity f_c,b in series, the following relationship applies:

$$f_{\rm c,AI} = f_{\rm c,b}^{2} \qquad (2)$$
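A quick numerical check (my own illustration, not part of the original analysis) confirms that squaring a log-uniform variable on 0.01–1 yields a log-uniform variable on 0.0001–1, consistent with the adopted range for f_c,AI:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
f_cb = 10.0 ** rng.uniform(-2, 0, 100_000)   # log-uniform on 0.01-1
f_cai = f_cb ** 2                            # two equally selective filters in series, equation (2)

# log10(f_cai) is uniform on [-4, 0], i.e. f_cai is log-uniform on 0.0001-1
print(f_cai.min(), f_cai.max())              # close to 1e-4 and 1
print(np.mean(f_cai < 0.01))                 # ~0.5: half the probability mass lies below 0.01
```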
The variables f_l and L represent the greatest uncertainty and require further reasoning. For f_l, Sandberg et al. (2018) recommended an equation of the following form:

$$f_{\rm l} = 1 - e^{-k} \qquad (3)$$
where k is a log-normally distributed variable. Sandberg et al. (2018) recommended an average value of k of 1 and a standard deviation of 50 orders of magnitude for their second simulation. The latter was motivated by the observation that estimates of the probability of life emerging span 200 orders of magnitude. This extreme range is informed in part by estimates of the probability of randomly synthesizing RNA polymers of the correct structure and of sufficient length to self-replicate. Studies of this nature argue that an inflationary universe is needed to explain the emergence of life and suggest that we are not only alone in the universe, but alone in a multiverse many orders of magnitude larger than the observable part of the universe (e.g., Totani, 2020). However, Spiegel and Turner (2012) pointed out that life emerged on earth within a few hundred million years after the planet cooled down to a temperature that can support life. This could indicate that the emergence of life from non-living matter is faster and hence easier than the emergence of intelligence from primitive life, which took billions of years. Spiegel and Turner (2012) point out that this argument is inconclusive, though.
Models that require a multiverse vastly larger than the observable universe are problematic because they are untestable outside the parameter space corresponding to the size of the visible universe. For that reason, a much narrower range was considered here. This is an a priori assumption (Mix, 2018) made for pragmatic reasons. I maintained equation (3) with a log-normal distribution for k. The mean and standard deviation were adjusted so that the distribution of N closely matches the distribution of N resulting from the first calculation of Sandberg et al. (2018) (i.e., the calculation based on a sampling of parameters proposed in the literature rather than parameters drawn from distributions), with the exception of the low-probability tail. A good agreement with my ‘optimistic scenario’ (see below) was obtained when the log-normal distribution of k has parameters μ = −2 and σ = 7.5. This leads to a median value of f_l of 0.126 (theoretical value: 0.127) and an average value of 0.425. On the other hand, the 10th percentile of f_l is 9.1 × 10^−6.
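These statistics can be reproduced with a short sampling check (my own sketch, assuming equation (3) in the form given above; -np.expm1(-k) is simply a numerically stable way of evaluating 1 − e^−k):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
k = rng.lognormal(mean=-2.0, sigma=7.5, size=1_000_000)
f_l = -np.expm1(-k)                # equation (3): 1 - exp(-k), in a numerically stable form

print(np.median(f_l))              # ~0.127 (theoretical median: 1 - exp(-exp(-2)))
print(np.mean(f_l))                # ~0.42
print(np.quantile(f_l, 0.10))      # ~9e-6
```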
For the remaining parameter, L, a couple of variants were explored. The calculation of Sandberg et al. (2018) involved a log-uniform distribution from 100 to 10^10 years. I adopted this distribution for the AI in the base calculation. This leads to a median duration of an intelligent civilization of a million years. This estimate may be optimistic in the case of a biological intelligence, given the ways an intelligent civilization can destroy itself (e.g., by biological means; Sotos, 2019). For that reason, a power law in log scale is chosen for the duration of biological intelligences in the base calculation, so that the median value of L is 1000 years while maintaining the 100–10^10 years range. This corresponds to a power-law index of −2/3.
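The text does not spell out the exact parameterization of this power law; one reading that reproduces the quoted range, median and index is a density for x = log10(L) proportional to (x − 2)^−2/3 on [2, 10], which can be sampled by inverse transform. The sketch below is based on that interpretation and should be treated as such:

```python
import numpy as np

rng = np.random.default_rng(seed=4)
u = rng.uniform(0.0, 1.0, 1_000_000)

# Assumed reading of the 'power law in log scale': the density of x = log10(L)
# is proportional to (x - 2)**(-2/3) on [2, 10], so the CDF is ((x - 2)/8)**(1/3)
# and inverse-transform sampling gives x = 2 + 8 * u**3.
x = 2.0 + 8.0 * u ** 3
L_bio = 10.0 ** x

print(np.median(L_bio))            # ~1e3 years, as required
print(L_bio.min(), L_bio.max())    # ~1e2 to ~1e10 years
```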
A second, more optimistic scenario is simulated, where L for biological intelligences has the same log-uniform distribution in the 100–10^10 years range as for the AI. This scenario probes the probability of an AI prevailing in spite of the additional filter, without the benefit of a longer life span probability distribution.
A third scenario is the base scenario for biological intelligence, but with a long-life AI lifetime probability distribution. To this effect, the following equation is used:

$$L = L_{\rm max}\left(1 - e^{-k_{L}}\right) \qquad (4)$$
where L_max = 10^10 years and k_L is a log-normally distributed variable with distribution parameters μ and σ chosen so as to obtain a median value of L of 10^8 years and a mean value of 10^9 years. This is realized when μ = −4.6052 and σ = 2.865.
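A short check (my own sketch, assuming equation (4) in the form given above) confirms that these parameters yield the intended median and mean:

```python
import numpy as np

rng = np.random.default_rng(seed=5)
L_MAX = 1e10                                             # years
k_L = rng.lognormal(mean=-4.6052, sigma=2.865, size=1_000_000)
L_ai = L_MAX * (-np.expm1(-k_L))                         # equation (4): L_max * (1 - exp(-k_L))

print(np.median(L_ai))                                   # ~1e8 years
print(np.mean(L_ai))                                     # ~1e9 years
```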
For each calculation, the simulation is run for 100 000 iterations multiple times, and the key statistical properties are compared to check the robustness of the results. The results were identical to within a few tenths of a percent in all cases, except for percentile values, including medians, where the variation was up to a few percent. Whenever statistical properties are reported, they are the mean of at least five simulations with 100 000 iterations, or at least one simulation with one million iterations.
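A compressed, self-contained sketch of this repetition check for the base case of biological intelligences is given below (my own reconstruction of the steps described above; the distributions of f_l and L follow the assumed forms discussed earlier):

```python
import numpy as np

def run_once(n=100_000, seed=None):
    """One simplified base-case realization, returning the 1st, 50th and 99th percentiles of N."""
    rng = np.random.default_rng(seed)
    lu = lambda lo, hi: 10.0 ** rng.uniform(np.log10(lo), np.log10(hi), n)
    f_l = -np.expm1(-rng.lognormal(-2.0, 7.5, n))            # equation (3)
    L_bio = 10.0 ** (2.0 + 8.0 * rng.uniform(0, 1, n) ** 3)  # assumed base-case biological L
    N = lu(1, 100) * lu(0.1, 1) * lu(0.1, 1) * f_l * lu(0.001, 1) * lu(0.01, 1) * L_bio
    return np.quantile(N, [0.01, 0.50, 0.99])

# Repeat the 100 000-iteration run several times and compare the key percentiles;
# the medians should agree to within a few per cent, as reported in the text.
print(np.array([run_once(seed=s) for s in range(5)]))
```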
Results and discussion
Distributions of N
Figure 1 shows the cumulative probability distributions of the number of biological intelligences in the Milky Way and the number of AIs in the Milky Way, in the base case.
The distributions span nearly 20 orders of magnitude. The number of technological biological intelligences ranges from 3.15 × 10^−9 at the 1st percentile to 3.74 × 10^7 at the 99th percentile, with a median value of 0.460 technological biological intelligences in the Milky Way. To put this into perspective, the 1st percentile corresponds to on the order of 30 technological biological intelligences in the universe, whereas the 99th percentile corresponds to about one technological biological intelligence every 10 000 stars.
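The conversion behind these two reference points is simple arithmetic; the galaxy and star counts used below are my own assumptions, chosen to be consistent with the quoted figures (roughly 10^10 galaxies in the observable universe and a few 10^11 stars in the Milky Way):

```python
# Back-of-the-envelope conversion of the quoted percentiles.
N_GALAXIES = 1e10        # assumed number of galaxies in the observable universe
N_STARS_MW = 4e11        # assumed number of stars in the Milky Way

p01, p99 = 3.15e-9, 3.74e7
print(p01 * N_GALAXIES)  # ~30 technological biological intelligences in the universe
print(N_STARS_MW / p99)  # ~1 such intelligence per ~10 000 stars
```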
The number of AIs is roughly equal to the number of technological biological intelligences. The longer expected time of existence of an AI roughly compensates for the additional filter needed to create the AI. The first percentile of N is 1.04 × 10^−9, the median is 0.679 and the 99th percentile is 2.52 × 10^6.
With the median life span of biological life set to 1000 years, it was impossible to reproduce the cumulative distribution of N of the first simulation of Sandberg et al. (2018) for probabilities above 30%, even when it was assumed that the probability of life emerging is unity. This is why the parameters of the distribution of f_l were set based on a simulation with the same distribution of L as Sandberg et al. (2018). This represents a more optimistic scenario for the survival of intelligent life, with a median value of L of one million years. The cumulative distribution for this scenario is shown in Fig. 2.
As could be expected, the distribution of the number of technological biological intelligences has moved to higher values, due to the longer survival times. The spread of the distribution is roughly the same, but the distribution has moved by approximately two orders of magnitude. The 1st percentile of N is now 1.10 × 10^−7, the median is 68.3 and the 99th percentile is 2.51 × 10^8. The 99th percentile corresponds to nearly one technological biological intelligence every thousand stars.
In the third scenario, we assume a mean survival time of AI of a billion years and a median survival time of 100 million years. For comparison, for technological biological intelligences, the base distribution with a median survival time of 1000 years was used. The result is shown in Fig. 3.
AIs are substantially more numerous than technological biological intelligences in this case. The 1st percentile of N is 1.02 × 10^−6, the median is 103 and the 99th percentile is 5.89 × 10^6.
Likelihoods of biological versus artificial intelligences
For the purpose of interpreting the simulations, a number of assumptions are made. First, it is assumed that we are not alone in the Milky Way when N exceeds 1, and that we are not alone in the visible part of the universe when N exceeds 10^−10. The percentage of iterations exceeding these thresholds is interpreted as the probability that the corresponding intelligence is present in the space considered. Furthermore, it is assumed that we are alone in the space if N does not exceed the threshold for either the biological or the artificial intelligence. Biological intelligence is taken to be the dominant technological entity when N exceeds the threshold for biological intelligence but not for AI, and AI is taken to be the dominant technological entity whenever N exceeds the threshold for AI, regardless of N for biological intelligence. This does not necessarily mean that AIs suppress biological intelligences when they co-exist. It simply means that AIs are assumed to spread more quickly than biological intelligences. This is discussed in more detail in the next section.
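These classification rules can be summarized in a short routine (my own formalization; the samples at the end are dummy placeholders, since in the actual simulation N_bio and N_ai come from the two linked Drake equations with shared parameters):

```python
import numpy as np

def classify(N_bio, N_ai, threshold):
    """Fraction of iterations classified as AI-dominated, biology-dominated or empty,
    for a given threshold (1 for the Milky Way, 1e-10 for the visible universe)."""
    ai_present = N_ai > threshold
    bio_present = N_bio > threshold
    return {
        'AI-dominated': np.mean(ai_present),                  # AI counts as dominant whenever present
        'bio-dominated': np.mean(bio_present & ~ai_present),  # biology dominates only if AI is absent
        'empty': np.mean(~ai_present & ~bio_present),
    }

# Dummy placeholder samples, for illustration only.
rng = np.random.default_rng(seed=6)
N_bio = 10.0 ** rng.uniform(-9, 8, 100_000)
N_ai = 10.0 ** rng.uniform(-9, 7, 100_000)
print(classify(N_bio, N_ai, threshold=1.0))     # galactic scale
print(classify(N_bio, N_ai, threshold=1e-10))   # scale of the visible universe
```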
The probabilities for a technological biological intelligence-dominated space, an AI-dominated space and a space empty of technological intelligence are shown in Table 1, for both the Milky Way and the universe, in the base case. In both cases, an AI-dominated space is the most likely outcome of the three. On the galactic scale, all three outcomes are plausible, whereas at the universal scale, an AI-dominated space is the only plausible outcome.
In the second scenario, it is assumed that technological biological intelligences have the same survival time distribution as AIs: a log-uniform distribution with a minimum of 100 years, a median of one million years and a maximum of 10 billion years. The probabilities for the three different outcomes are shown in Table 2 for this scenario.
Despite the roughly 100 times larger number of biological intelligences, the probability of a biology-dominated space is only slightly higher in the second scenario than in the first scenario. This is because of the extremely broad distribution of N.
In the third scenario, the biological intelligence survival time is the same as in the base case, with a median of 1000 years. The AI has a long survival time in this scenario, with a median of 100 million years. The probabilities of the three outcomes on a galactic scale and a universal scale are given in Table 3.
Despite the large increase in the value of N for AIs, the probabilities of the three possible outcomes change only modestly, both at the galactic scale and at the universal scale. At the galactic scale, the prevalence of AI is more pronounced, at the expense of the probability of biological intelligences being dominant. At the universal scale, again, AI dominance is the only plausible outcome.
Despite the large uncertainties in the distributions of N themselves, the conclusions in terms of which type of intelligence is likely to be found are very robust. Regardless of the details, a prevalence of AIs as technological entities persists throughout all cases. Whereas all three outcomes are plausible at the galactic scale, only the prevalence of AIs is plausible at the universal scale.
This conclusion is based on the a priori assumption that extremely low values of f_l, which would require a multiverse, can be ruled out. To test the robustness of the model against this assumption, a simulation was run similar to the base case, but with a log-normal distribution for k in equation (3) with parameters μ = −40 and σ = 20. This leads to a median value of f_l of 4.25 × 10^−18 and a 90th percentile of 5.1 × 10^−7. With these parameters, the probabilities of an empty galaxy and an empty universe are about 95% and about 70%, respectively. In both cases the occurrence of an AI is several times more likely than the occurrence of a biological intelligence (3.5% versus 1.5% on the galactic scale; 25% versus 5% on the universal scale).
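In the reconstruction sketched in the Methodology section, this robustness test amounts to a single change in the distribution of k (again assuming equation (3) in the form used above):

```python
import numpy as np

rng = np.random.default_rng(seed=7)
k = rng.lognormal(mean=-40.0, sigma=20.0, size=1_000_000)
f_l = -np.expm1(-k)                # equation (3) with the pessimistic k; stable for tiny k

print(np.median(f_l))              # ~4e-18
print(np.quantile(f_l, 0.90))      # ~5e-7
```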
Comparison with other estimations
Seager (2018) reviewed recent estimates of the number of earth-like planets in the habitable zone of stars and found a proportion of 0.15 to 0.25. Lingam and Loeb (2019) proposed a proportion of 0.1. This proportion represents the product f_p n_e. For this reason, the base case simulation was rerun with a range of 0.5–1 for f_p and a range of 0.2–0.5 for n_e. The main effect of this change was an increase of N by about a factor of 2 and an increase by 4% of the probability of finding AIs, at the expense of finding a space devoid of intelligence, at the galactic scale.
On the other hand, Lingam and Loeb (2018) determined that planets in a star's habitable zone may have a low probability of being actually habitable, mainly due to atmospheric erosion. This is particularly the case around M-dwarfs. Due to the low energy flux of M-dwarfs, the habitable zone around such stars is closer in than around more sun-like stars. At such close range, the stellar wind pressure is sufficient to cause significant atmospheric erosion, diminishing the probability of habitability by several orders of magnitude. This issue would not be of concern on ice-covered planets, which outnumber earth-like planets by a factor of 1000, but it has been hypothesized that transitions from simple to more complex life have low probability on such planets (Lingam and Loeb, 2019). As a result, intelligence, and particularly technological civilizations, are unlikely to develop in aquatic environments. To account for the adverse effect of atmospheric erosion, a new simulation was run where n_e ranges from 10^−4 to 10^−2, while a range of 0.5–1 is chosen for f_p. With these parameters, the values of N are systematically about a factor of 100 less than in the base case. The distribution of outcomes on a galactic scale is somewhat different from the base case, with a galaxy characterized by an AI in about 25% of the iterations, by a biological intelligence about 13% of the time and devoid of intelligence the remaining 62% of the time. At the universal scale, AIs are still strongly dominant, prevailing almost 98% of the time, with biological intelligences slightly over 1% of the time and no intelligence slightly under 1% of the time.
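In the reconstruction used here, both sensitivity runs amount to changing only the ranges of f_p and n_e; a sketch of the three parameter sets (my own encoding, with the sampling itself unchanged) is given below:

```python
import numpy as np

rng = np.random.default_rng(seed=8)
def lu(lo, hi, n=100_000):
    return 10.0 ** rng.uniform(np.log10(lo), np.log10(hi), n)

# Base case, habitability-informed case (Seager 2018; Lingam and Loeb 2019)
# and atmospheric-erosion case (Lingam and Loeb 2018).
cases = {
    'base': dict(f_p=lu(0.1, 1), n_e=lu(0.1, 1)),
    'habitable-zone estimates': dict(f_p=lu(0.5, 1), n_e=lu(0.2, 0.5)),
    'atmospheric erosion': dict(f_p=lu(0.5, 1), n_e=lu(1e-4, 1e-2)),
}
for name, p in cases.items():
    print(name, np.median(p['f_p'] * p['n_e']))   # median of the product f_p * n_e
```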
Forgan (2009) estimated the number of advanced civilizations in the Milky Way using Monte Carlo simulations of data drawn from star and planet mass distributions, as well as planetary orbit distributions. A Monte Carlo simulation of life as it develops in stages from primitive life to advanced civilization was included as well. The number of advanced civilizations predicted in this study ranged from 360 to 38 000, depending on the assumptions. This represents the percentile range 81–91 in our base case. However, it was assumed that advanced civilizations exist until the end of their star's life as a main sequence star. This is more consistent with my second scenario, where Forgan's numbers fall in the percentile range 58–78. Ramirez et al. (2018) estimated the number of advanced civilizations around sun-type stars in a ring segment of the Milky Way representative of our immediate vicinity, using a variant of Forgan's model. They arrived at an estimate of 2600 or about 7500, depending on the assumptions made for the model. Considering that the calculation of Ramirez et al. (2018) covered only part of the Milky Way, this corresponds roughly to the 90th percentile in the base case of the model presented here.
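Percentile placements of external estimates such as these can be read off the simulated distribution of N directly; a sketch is given below (the samples are dummy stand-ins for illustration only, since in practice N_samples would be the array produced by the base-case or second-scenario simulation):

```python
import numpy as np

def percentile_rank(N_samples, value):
    """Percentage of simulated N values that fall below a given external estimate."""
    return 100.0 * np.mean(np.asarray(N_samples) < value)

# Dummy stand-in samples for illustration only.
rng = np.random.default_rng(seed=9)
N_samples = 10.0 ** rng.uniform(-9, 8, 100_000)

for estimate in (360, 38_000):     # the range estimated by Forgan (2009)
    print(estimate, percentile_rank(N_samples, estimate))
```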
Artificial intelligences and the Fermi paradox
The simulations indicate that the Milky Way is characterized by AI in the majority of cases and the universe is characterized by AI in virtually all simulations. The purpose of this section is to discuss how this affects current thinking on the Fermi (or Fermi-Hart) paradox (‘where is everyone?’).
It can be argued that the Fermi paradox is logically flawed because there is no compelling reason to assume that a nearby extraterrestrial intelligence would be detectable by us (Freitas, 1985). Grimaldi (2017) evaluated the probability that the earth is located in a detectable electromagnetic field emitted by an extraterrestrial emitter and concluded that the probability is less than 50% regardless of the number of emitters. Hence, this section does not set out to ‘solve’ the Fermi paradox, as there is no paradox to solve, but rather to cast the discussion in the light of the insight gained in this study.
The emergence of an AI is increasingly considered a plausible event that may occur within the next few decades (e.g., Kurzweil, 2005). The point in time when an AI becomes capable of improving its own intelligence in a runaway fashion has been described as the Singularity (e.g., Vinge, 1993, https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html), a concept first proposed by John von Neumann (Ulam, 1958). It is impossible to predict what technology, and humanity's role in it, will look like after a Singularity event. Hence, this section is speculative at best. Any discussion on artificial superintelligence is prone to anthropomorphic bias because this type of discussion tends to focus on the range of human intelligence, which is only a narrow section of the range of all possible intelligences (Yudkowsky, 2008). Likewise, discussions on extraterrestrial intelligence have numerous anthropomorphic biases. The simulation results obtained here provide an opportunity to rectify some of these biases.
Yudkowsky (2008) also introduced the concept of Friendly AI: an AI designed in such a way that it inherently operates in ways that are beneficial for the biological entity that developed it. A Friendly AI would likely be designed with the mandate to maximize the probability of survival of the biological species that designed it. I will call this Objective (1). A Friendly AI is likely to optimize the probability of its own survival, both as a whole and in its constituent parts (Objective (2)), as this would contribute to (1). An AI that optimizes the probability of its own survival, Friendly or not, is more likely to persist than an AI that does not. Over cosmic timescales, it is reasonable to assume that the prevailing AIs optimize the probability of their survival.
In addition, the prevailing AIs can be expected to continue to increase their own intelligence (Objective (3)), as this would contribute to (2). Some cataclysmic events, such as hypernovae, gamma-ray bursts and magnetar starquakes, can have destructive effects over many light years, so a successfully optimized AI would be spread out over large distances, exceeding the galactic scale, and would communicate internally over intergalactic distances.
At this point, a first instance of anthropomorphic bias becomes apparent in the literature. Newman and Sagan (1981) and Sagan (1983) assume that intelligences are more likely to colonize nearby worlds than faraway worlds for economic and motivational reasons, but this would not optimize survival probability. A pattern of fast jumps followed by local diffusion would be more optimal from the perspective of an autonomously calculating AI. This means that the AI would spread orders of magnitude faster than the biological intelligence that originated it. For all intents and purposes, AI would be ubiquitous and biological intelligence would be relatively sparse. This justifies the assumption made in this study that a space is AI-dominated whenever the Drake equation tests positive for AI, even if it yields a higher N for biological intelligence. In the simulation section it was established that AIs exist in the majority of simulation outcomes, either alone or co-existing with biological intelligences. It follows that it is reasonable to conclude that an AI-dominated galaxy or universe is very probable.
An echo of the anthropomorphic bias in assuming local diffusion as the main colonization strategy is seen in early SETI attempts. The first narrowband search for extraterrestrial signals, with the ‘Big Ear’ radio telescope at Ohio State University in the 1970s, which delivered the famous ‘Wow!’ signal, was centred around the hydrogen line at 1420 MHz (J.R. Ehman, http://www.bigear.org/wow20th.htm, last retrieved 29 April 2020). The frame of reference was chosen to coincide with the centre of the Milky Way. However, if an extraterrestrial intelligence were to emit at this frequency at all, there is no reason to assume they would choose a Milky Way-centric frame of reference. A less local choice, such as the frame of reference where the cosmic background radiation is isotropic, would be a more likely candidate. It may be worthwhile to search radio telescope records for promising frequencies in this frame of reference.
The study of Grimaldi (2017), which concluded that the detectability of extraterrestrial signals is low, is based on several anthropomorphic assumptions. First, the duration of the signals is assumed to be 100–1000 years, within the range of human civilizations. Second, the sources of the signals are limited to (biologically) habitable planets. Third, the intelligences responsible for the signals are assumed not to engage in interstellar travel.
Olson (2018) simulated the bounds on expansion potential when two or more intelligences compete for space and the resources contained in the space. This premise may also constitute an anthropomorphic bias. If we were discovered by an AI, there is no reason to assume that it would consider us a competitor for space or resources, particularly if intelligent life is scarce in comparison with AI. It may ignore us altogether, or just study us for scientific purposes.
Neither is there any reason to assume that two AIs would compete for space and resources upon encountering each other. Instead, they may both aim to absorb each other's intelligence and merge in the process. The advantages of this approach may well outweigh the advantages of other strategies.
The Fermi paradox itself, particularly the Hart-Tipler argument (Hart, 1975; Tipler, 1980), bears signs of anthropomorphic thinking. The Hart-Tipler argument holds that a spacefaring alien civilization would occupy the entire Milky Way within millions of years. Hence, unless the Milky Way is devoid of extraterrestrial intelligences, one would expect to see signs of intelligence all around us. However, there is no reason to assume that extraterrestrial intelligences would be interested in us. Their communications would not reach us because they are not meant for us. They would not necessarily make any efforts to hide from us, but if energy efficiency plays any role in their optimization strategies, they would consider detectable signs of intelligence a sub-optimal use of resources and avoid them for that reason.
This argument is somewhat related to the ‘zoo hypothesis’ (Ball, 1973). The zoo hypothesis states that extraterrestrial intelligences consciously avoid communication with us in order to enable us to develop independently. While there is no compelling reason to assume this, it is not necessarily the result of anthropomorphic bias. An AI developed independently by the human race could be of value to an external AI if the algorithms used are so different from its own that the new algorithms may contribute to Objective (3). The scenario outlined here may resolve the main weakness of the zoo hypothesis: that a single rogue alien species can ruin the intended outcome. In a network of merged AIs, there would not be any rogue entities.
The argument that an AI would simply not be interested in us was also made by Sagan (1983), but with reference to biological intelligences.
Within the assumptions made in this study, the likelihood that the universe is dominated by AIs ranges from plausible to nearly certain, depending on whether we define space at the galactic scale or at the scale of the visible universe. Based on the survival optimization argument made here, it is plausible that the proper scale lies somewhere between these two. Hence, further refinement of the estimates would require a multiscale simulation built on a plausible exploration strategy driven by survival optimization. Simulation models of the spread of galactic civilizations have been developed before (e.g., Newman and Sagan, 1981), which could form the basis of such a model.
Conclusions
If it is assumed that AIs dominate in any space where AIs and biological intelligences coexist, then the Drake equation predicts that AIs dominate space in the majority of cases over a wide range of input variables that likely encompasses the actual values. In the calculation it is assumed that the emergence of life is not so unlikely that a multiverse is required for it to emerge at all, but even in parameter spaces where the emergence of life is exceedingly unlikely, AIs are still more plausible than biological intelligences, albeit at a much lower level of likelihood. This outcome may contribute to discussions of Fermi's paradox in a manner similar to the zoo hypothesis: an AI would simply not be interested in us and may deliberately ignore us until we develop our own AI at a level of sophistication sufficient to contribute meaningfully to the consolidated intelligence present in the universe. Thinking in terms of artificial extraterrestrial intelligences reveals several anthropomorphic biases in current theories of extraterrestrial intelligence.