
Exploration of the local solar neighbourhood I: Fixed number of probes

Published online by Cambridge University Press:  10 April 2013

Daniel Cartin*
Affiliation:
Naval Academy Preparatory School, 440 Meyerkord Ave Newport, RI (USA) 02841-1519

Abstract

Previous work in studying interstellar exploration by one or several probes has focused primarily either on engineering models for a spacecraft targeting a single star system, or on large-scale simulations to ascertain the time required for a civilization to completely explore the Milky Way Galaxy. In this paper, a simulated annealing algorithm is used to numerically model the exploration of the local interstellar neighbourhood (i.e. within roughly ten parsecs of the Solar System) by a fixed number of probes launched from the Solar System; these simulations use the observed masses, positions and spectral classes of the targeted stars. Each probe visits a pre-determined list of target systems, maintains a constant cruise speed, and changes direction only through gravitational deflection at each target. From these simulations, it is examined how varying design choices – the maximum cruise speed, the number of probes launched, the number of target stars to be explored, and the probability of avoiding catastrophic system failure per parsec – change the completion time of the exploration programme and the expected number of stars successfully visited. In addition, it is shown that improving this success probability per parsec has diminishing returns beyond a certain point. Future improvements to the model and possible implications are discussed.

Type
Research Article
Copyright
Copyright © Cambridge University Press 2013 

Introduction

In recent decades, research devoted to interstellar travel and exploration has focused principally on two areas. The first is predominantly aimed towards the design of a spacecraft capable of going to nearby stars with current or near-future technology (Bond et al. Reference Bond1978; Forward Reference Forward1985; Beals et al. Reference Beals, Beaulieu, Dembia, Kerstiens, Kramer, West and Zito1988; Landis Reference Landis1999; Long et al. Reference Long, Obousy, Tziolas, Mann, Osborne, Presby and Fogg2009). This has resulted in a creative outpouring of possible methods to accelerate and decelerate these craft, as well as means to communicate scientific data back to the Solar System. On the other hand, there have been a number of theoretical undertakings that examine the time required to engage in an exploration of galactic scale (Bjørk Reference Bjørk2007; Cotta and Morales Reference Cotta and Morales2009; Forgan et al. Reference Forgan, Papadogiannakis and Kitching2012). The rationale behind this research, at least in part, is an attempt to better understand the Fermi paradox of SETI. Surveys of this type allow one to estimate the time to completely explore the galaxy, giving an upper bound on the frequency of visits by other civilizations to the Solar System. In particular, Bjørk (Bjørk Reference Bjørk2007) assumes that a single probe is initially sent out, which launches a small set (four or eight) of sub-probes to explore a given sector of the galaxy, with 40 000 nearby stars targeted. Once this exploration is complete, the sub-probes return to their originating probe, which moves to a different stellar neighbourhood to start again. Each sub-probe chooses its next target by picking the nearest star not already visited. This work is built upon by Cotta and Morales (Cotta & Morales Reference Cotta and Morales2009), who use the same basic exploration model, but apply various optimization techniques to reduce the total travel time of the sub-probes. After such a trial optimization, any resulting tour reducing the path time is kept, and the procedure is repeated. These heuristics give roughly a 10% decrease in the time to explore a given section of the galaxy. Finally, Forgan, Papadogiannakis and Kitching (Forgan et al. Reference Forgan, Papadogiannakis and Kitching2012) consider the use of a single probe, again choosing the nearest neighbouring star as the next target. However, unlike the other works, Forgan et al. use gravitational slingshots from the relative motion of probe and target star to progressively increase the speed of the probe in two of their simulation scenarios, resulting in a large increase in that speed.

In this paper, the focus is on the middle ground between these two extremes. It is assumed, in particular, that the technological means and ability have been developed to launch exploration probes to star systems within a few parsecs, but not so far into the future that large portions of the Milky Way Galaxy – or even our local stellar neighbourhood – have been explored. Note that Moir and Barr (Moir and Barr Reference Moir and Barr2005) develop results similar to those in this paper, although they examined the possibility of a spacecraft using a cyclic trajectory to travel to a few nearby stars and then return to the Solar System. The possibility of self-replicating probes is not examined here, so the number of probes is fixed by the quantity launched from the Solar System. To start with, this removes one parameter from consideration, namely the number of daughter probes produced at each target system by the original arriving spacecraft. This is not to make any argument for or against self-replicating probes, merely to suggest that it is certainly easier to make a probe capable of travelling to another star system than it is to build a probe that can do this and construct perfect replicas of itself upon reaching that system. Thus, it is at least plausible that the first exploration programme of the type considered here will use a fixed number of probes. An alternate programme of self-replicating probes will be treated in a companion paper to the present one.

Our exploration model makes the following additional assumptions:

  1. Probes change direction only by using gravitational deflections at each target system. In multiple star systems, the probe may use any of the stars present to optimize the resulting course correction, although only one is used for this deflection (this counts as a visit to the entire multiple star system).

  2. Each probe is accelerated up to a fixed cruise speed while leaving the Solar System, and maintains this speed throughout its assigned path. For simplicity, it is assumed that the positions of the target systems are known in advance, and their proper motions are neglected. Thus, unlike Forgan et al. (Reference Forgan, Papadogiannakis and Kitching2012), increases in probe speed due to gravitational assists are not considered.

  3. At the time of launch, a given probe will be assigned one or more systems to visit, and will travel through these systems without stopping.

Also, unless another source is cited, the stellar data used – specifically, the position coordinates, spectral types and masses – are from the Research Consortium on Nearby Stars database (Research Consortium on Nearby Stars 2012), and at most the 60 systems nearest to the Solar System are considered.

Our current lack of experience in launching and utilizing interstellar probes means that the engineering model chosen is up to the individual. There are two typical categories for such a model:

  • Large spacecraft: Examples of this include Project Daedalus, its successor Icarus and the Longshot effort (Bond et al. Reference Bond1978; Beals et al. Reference Beals, Beaulieu, Dembia, Kerstiens, Kramer, West and Zito1988; Long et al. Reference Long, Obousy, Tziolas, Mann, Osborne, Presby and Fogg2009). These are likely to be more robust under the ravages of radiation damage and collisions with interstellar matter (such as planetesimals ejected from orbits around nearby stars), but are also more expensive in time and resources to construct. On the other hand, they are self-contained, and do not require additional infrastructure beyond construction facilities.

  • Small spacecraft: This class of interstellar probes includes the ‘starwisp’ probe, launched and powered using light beams from devices remaining in the Solar System (Forward Reference Forward1985; Landis Reference Landis1999). Since the power source does not move with the craft, these probes have very low mass, and thus can be produced in great numbers. However, it is unclear how such a lightweight construction will fare over interstellar travel for a few decades, since there would be little to no allowance for shielding against impact damage.

Owing to these considerations, a conservative choice is made – the assumption is that all probes launched are ‘large’, although some of the results presented below may apply to a choice of ‘small’ probe size.

One consequence of this choice is the need to use probes as efficiently as possible, in order not to waste resources. If the construction of a single probe requires a good deal of resources, then the launch of a multi-probe exploration programme is likely to be delayed, due to the need to assemble additional resources to construct all probes. Suppose, for example, that a single prototype probe is launched to a nearby star, and the design successfully reaches its destination and returns quality data about the target system. How long would it take to build additional probes to examine other systems? If it is supposed that the resource cost per probe as a fraction of either national or world production is kept fixed, then the waiting time may be significant – with the cost at a fixed percentage, an additional N probes would not be completed until economic production has grown by the same factor of N (see Millis (Reference Millis2011) and references therein). Table 1 shows the waiting period necessary to produce a certain additional number of probes, given an annual growth rate in the appropriate measure – such as the size of a national or world economy, energy production, or other metrics. These values are obtained by a compounding-type equation, namely R_probe = R_1(1 + c)^t, where c is the annual growth rate in applicable resources, t the number of years, R_probe the resources required to construct the desired number of probes N_probe and R_1 the resources required to complete the initial probe. It is assumed that R_probe/R_1 ≃ N_probe, i.e. there are no economies of scale when building multiple probes. Note that, since this refers to the time to assemble resources for building all probes, it is possible to build these craft either sequentially (so that the launch times are staggered) or simultaneously. In either case, since the total construction times listed in Table 1 are small compared with the total travel times between stars, these construction times are not included in the mission completion times given in the Results section.
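As a concrete illustration of this compounding relation, the following minimal Python sketch solves R_probe = R_1(1 + c)^t for the waiting time t, under the stated assumption of no economies of scale; the particular probe counts and growth rates in the example are illustrative only and are not taken from Table 1.

```python
import math

def years_to_afford(n_probe, growth_rate):
    """Years until cumulative resources reach n_probe times the cost of the
    first probe, assuming R_probe = R_1 (1 + c)^t with R_probe/R_1 ~ n_probe
    (i.e. no economies of scale)."""
    return math.log(n_probe) / math.log(1.0 + growth_rate)

# Illustrative values only (not the entries of Table 1): waiting times in
# years for several additional-probe counts and annual growth rates.
for c in (0.02, 0.05, 0.10):
    row = [round(years_to_afford(n, c), 1) for n in (2, 4, 8, 16)]
    print(f"growth rate {c:.0%}: {row}")
```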

Table 1. Number of years after the launch of a single ‘large’ prototype probe to build an additional Nprobe probes, assuming a range of annual growth rates in a relevant quantity, e.g. energy production or financial means

Gravitational deflections

Now the issue of gravitational deflections is taken up, by considering hyperbolic orbits around a star (Moir & Barr Reference Moir and Barr2005). This allows us to find the relationship between the deflection angle between the incoming and outgoing legs of this orbit, and the asymptotic speed of the probe travelling this path. The following derivations are based on the well-known theory of central force orbits (see, e.g. Marion & Thornton Reference Marion and Thornton1988). For any conic section, the equation giving the distance r from the focus as a function of the angle θ is

(1)$$\displaystyle{{\rm \alpha} \over r} = 1 + {\rm \epsilon} \,\cos \,{\rm \theta} {\rm.}$$

Here, 2α is the latus rectum of the orbit and ϵ is the orbit eccentricity. Note that θ=0 is where the orbit has its closest approach to the focus, so that the perihelion distance r p is given by

(2)$$r_{\rm p} = \displaystyle{{\rm \alpha} \over {1 + {\rm \epsilon}}}.$$

For gravitational deflections, the probe is on a hyperbolic orbit through the target system, so ϵ>1. As the spacecraft leaves this system, r→∞ as the angle approaches θ→θ∞, the angular position of the probe on the outgoing leg. The value of θ∞ is found from the conic section equation (1) by

(3)$${\rm \theta} _\infty = \cos ^{ - 1} \left( { - \displaystyle{1 \over {\rm \epsilon}}} \right),$$

with the total deflection Δθ=2θ∞−π (see Fig. 1). For our purposes, it is more useful to have a relation giving the maximum cruise speed possible for a given deflection angle Δθ. To do this, two equations for the semi-major axis of a hyperbolic orbit are appropriate, namely

$$a = \displaystyle{{r_{\rm p}} \over {{\rm \epsilon} - 1}} \simeq \displaystyle{{GM_{\rm s}} \over {v_\infty ^2}},$$

where M s is the mass of the object providing the gravitational deflection (neglecting the probe's mass) and υ∞ is the probe's speed as r→∞. Combining these relations with the above equations (1) through (3) to eliminate the eccentricity gives the result

(4)$${\rm v}_\infty = \sqrt {\left( {\displaystyle{{GM_{\rm s}} \over {r_{\rm p}}}} \right)\left[ {\csc \left( {\displaystyle{{{\rm \Delta \theta}} \over 2}} \right) - 1} \right]}.$$

Thus, for a desired deflection Δθ to the next target star, the maximum possible speed of the probe depends on the geometric factor Δθ and on a constant $\sqrt {GM_{\rm s} /r_{\rm p}}$ related to the star itself. Any probe cruise speed υ⩽υ∞ will allow the spacecraft to execute the corresponding angular deflection Δθ. This will be a factor in our results – it may be technologically feasible to accelerate a probe to a wide range of cruise speeds, but the probe is limited to the maximum speed allowed by the path mapped out for it between target stars, and so the craft's cruise speed will be bounded above by the smallest value of υ∞ along its trajectory.
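As a numerical check of relation (4), the short sketch below evaluates the maximum asymptotic speed for a required deflection angle; the example uses the Sun's mass and the heat-flux-limited perihelion of roughly 0.014 AU quoted later in the text, with a 30-degree deflection chosen purely for illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C_LIGHT = 2.998e8    # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def v_max(delta_theta, m_star, r_perihelion):
    """Equation (4): maximum asymptotic speed (m/s) allowed by a deflection of
    delta_theta radians at a star of mass m_star (kg), for a perihelion
    distance r_perihelion (m)."""
    return math.sqrt((G * m_star / r_perihelion)
                     * (1.0 / math.sin(delta_theta / 2.0) - 1.0))

# A 30-degree course change at the Sun with r_p = 0.014 AU permits a cruise
# speed of roughly 1.4e-3 c; smaller deflections allow higher speeds.
v = v_max(math.radians(30.0), M_SUN, 0.014 * AU)
print(f"v_inf = {v:.3e} m/s = {v / C_LIGHT:.2e} c")
```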

Fig. 1. Plot of the hyperbolic orbit of a probe undergoing a gravitational deflection due to mass M s. The perihelion distance r p and semi-major axis a of the orbit are shown, as well as the angular deflection of the probe; the angle θ=0 is fixed by the perihelion point of the spacecraft. The probe enters the target system from the upper left, changes direction by Δθ, then leaves the system from the upper right. From the diagram, one can see that the smaller angle between the asymptotes of the hyperbola can be found in two ways – namely, 2π−2θ∞ and π−Δθ. Equating these and solving for the angular deflection gives Δθ=2θ∞−π.

Turning next to how the constant $\sqrt {GM_{\rm s} /r_{\rm p}}$ scales with different stellar properties, writing it in terms of the Sun's mass M⊙ and radius r⊙ gives

(5)$$\eqalign{\sqrt {\displaystyle{{GM_{\rm s}} \over {r_{\rm p}}}} = & \left[ {4.368 \times 10^5 \left( {\displaystyle{{M_{\rm s}} \over {M_ \odot}}} \right)^{1/2} \left( {\displaystyle{{r_{\rm p}} \over {r_ \odot}}} \right)^{ - 1/2}} \right] {\rm m\,s}^{ - 1} \cr= & [1.457 \times 10^{-3}\,c]\left( {\displaystyle{{M_{\rm s}} \over {M_ \odot}}} \right)^{1/2} \left( {\displaystyle{{r_{\rm p}} \over {r_ \odot}}} \right)^{ - 1/2}.}$$

The perihelion distance is fixed by the need to limit the amount of heat flux incident on the stellar probe due to stellar radiation, which is related to the (bolometric) luminosity L s of the star and the desired maximum heat flux H max falling on the spacecraft. As discussed in Moir & Barr (Reference Moir and Barr2005), bounding the flux incident on the probe at small distances to the star helps to limit the total heat load on the craft. Perihelion distance and maximum heat flux are related by

(6)$$r_{\rm p} = \sqrt {\displaystyle{{L_{\rm s}} \over {4{\rm \pi} H_{{\rm max}}}}}.$$

To show which types of main-sequence stars provide the most useful gravitational deflections, the empirical relation (Lang Reference Lang1999)

(7)$$\left( {\displaystyle{{L_{\rm s}} \over {L_ \odot}}} \right) = \left( {\displaystyle{{M_{\rm s}} \over {M_ \odot}}} \right)^{\rm \alpha}$$

is used between the stellar luminosity L s and mass M s of a main-sequence star, in terms of the comparable properties of the Sun; the exponent α is determined from astronomical data to be approximately α≃4. Using the value H max=7 MW m−2 of Moir and Barr, equations (5) through (7) give

(8)$$\eqalign{\sqrt {\displaystyle{{GM_{\rm s}} \over {r_{\rm p}}}} = & \left( {\displaystyle{{4{\rm \pi} G^2 H_{\max} M_ \odot ^{\rm \alpha}} \over {L_ \odot}}} \right)^{1/4} M_{\rm s}^{(2 - {\rm \alpha} )/4} \cr = & [8.525 \times 10^{ - 4} c]\left( {\displaystyle{{M_{\rm s}} \over {M_ \odot}}} \right)^{(2 - {\rm \alpha} )/4}.}$$

Since α>2 for main-sequence stars, the maximum cruise speed υ∞ increases as the stellar mass decreases. Note that – although we use the fixed value H max=7 MW m−2 for the maximum heat flux throughout this work – improvements in radiation shielding of probes during close approaches may increase υ∞, as $\sqrt {GM_{\rm s} /r_{\rm p}} \propto H_{\max} ^{1/4}$.
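The scaling of this constant with stellar mass can be reproduced with a short sketch combining equations (6) and (7); the mass–luminosity exponent α=4 and the three example masses are assumptions made for illustration, and the physical constants are standard values rather than those adopted in the paper.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C_LIGHT = 2.998e8    # m/s
M_SUN = 1.989e30     # kg
L_SUN = 3.828e26     # W

def deflection_scale(m_star, l_star, h_max=7.0e6):
    """sqrt(G M_s / r_p) in m/s, with the perihelion set by the heat-flux
    limit of equation (6): r_p = sqrt(L_s / (4 pi H_max))."""
    r_p = math.sqrt(l_star / (4.0 * math.pi * h_max))
    return math.sqrt(G * m_star / r_p)

def main_sequence_scale(m_star, alpha=4.0, h_max=7.0e6):
    """Same constant for a main-sequence star, with the luminosity taken from
    the mass-luminosity relation (7), L_s/L_sun = (M_s/M_sun)^alpha."""
    l_star = L_SUN * (m_star / M_SUN) ** alpha
    return deflection_scale(m_star, l_star, h_max)

# The Sun reproduces the ~8.5e-4 c coefficient of equation (8); lower-mass
# main-sequence stars give larger values, as argued in the text.
for m in (1.0, 0.5, 0.2):
    v = main_sequence_scale(m * M_SUN)
    print(f"M_s = {m:.1f} M_sun: sqrt(GM_s/r_p) = {v / C_LIGHT:.2e} c")
```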

Less massive main-sequence stars produce less energy from nuclear fusion for a given mass than the larger variety. Thus, redder stars serve to provide the greatest angular change in a probe trajectory from solely gravitational attraction, while bluer stars provide the least. White dwarfs are even better, since their radiant energy does not arise from nuclear fusion and they have high mass densities, as their matter is supported only by electron degeneracy. Brown dwarfs are another favourable object class – although they are not as dense as white dwarfs, they do not emit much radiation due to their lack of nuclear fusion. Specific values of the constant $\sqrt {GM_{\rm s} /r_{\rm p}}$ are given for a variety of nearby objects in Table 2. Note that the use of only gravitational assists means that some systems may be targeted solely for their ability to change a probe's course. An example of this is van Maanen's Star, a solitary white dwarf that has a relatively high constant; thus, although it may be of inherent astrophysical interest, it also serves as a useful waypoint to create rather large angular deflections. In addition, when a probe is incident on a multiple star system, the usual best choice of star to use for a gravitational deflection will be the reddest of the stars present, if not a brown or white dwarf. For example, as shown in Table 2, a much greater change results from a close pass near Sirius B, rather than the brighter Sirius A. However, this aspect can be problematic in other situations, e.g. the α Centauri star system, where the best choice for such a deflection – Proxima Centauri – lies at a relatively large distance (0.237 light years) from the other two members of this triple star system. Since this is the sole example in the selection of target stars considered here, for simplicity, we count a probe visiting Proxima Centauri as one to the entire α Centauri system.

Table 2. Values of the constant $\sqrt {GM_{\rm s} /r_{\rm p}}$ for a variety of nearby stars, brown dwarfs and white dwarfs, where Ms is the mass of the object and rp is the perihelion distance for the interstellar probe, determined by using equations (5) and (6) with a maximum desired heat flux of 7.00 MWm−2. Mass and luminosity data for each star taken from the references listed; for Procyon B, Proxima Centauri, Sirius B and van Maanen's Star, the luminosity was derived from the Stefan–Boltzmann law, using the radii and effective temperatures provided in the references. The range in the values for the brown dwarfs ε Indi Ba and Bb represent observational uncertainty in the mass of those objects

Results

An efficient exploration programme is now sought, using a fixed number of probes launched from the Solar System at the same time, by simulating the results of different design choices. By ‘efficient’, it is meant that the time to complete the programme is minimized in some sense. For example, with t a the time for probe a to complete its portion of the mission, the efficiency metric could be the largest of the values t a, or the sum of the completion times ∑ata; for the sake of definiteness, the simulations presented below minimize the Euclidean norm of all t a, i.e. the objective function

(9)$$C = \min \sum\limits_{a} t_{a}^2.$$

This measure is aimed at producing probe travel times t a that are roughly comparable in size. The possible variables of the programme include the number of probes N probe launched as part of the exploration programme, the cruise speed of each probe (which may not be the same for all spacecraft), the probability p of each probe successfully travelling a given parsec, and the total number of target systems N target. These variables are now discussed in turn. Building a substantial number of probes would require significant economic resources. Thus, an exploration programme using large spacecraft would seek to minimize N probe as much as possible, while simultaneously trying to increase the number of targets N target successfully reached in the shortest feasible time. The probe cruise speed is bounded by the path chosen for each probe; a separate consideration is whether the spacecraft can actually be accelerated up to a given velocity. Thus, the specific parameter used in the simulations is the maximum possible cruise speed υp the probes can use. The cruise speed actually used on a given path is bounded both by υp and by the smallest maximum speed υ∞ – given by relation (4) – over all angular deflections Δθ along that path. In other words, the parameter υp serves to bound the search space of possible paths. Its largest effect is on the travel time of the probe; although a probe with a large cruise speed will not be physically able to make a large angular change in its path, it can conceivably recover this by travelling faster along its trajectory.
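The following sketch shows one way such a path evaluation could be organized: the cruise speed of a candidate path is capped both by υp and by the tightest deflection limit from relation (4), and the resulting completion times feed the objective (9). The function names and input format are hypothetical, not taken from the simulation code.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
PC_IN_M = 3.086e16     # metres per parsec

def leg_speed_limit(delta_theta, m_star, r_perihelion):
    """Equation (4): maximum speed compatible with a course change delta_theta
    (radians) at a star of mass m_star (kg) and perihelion r_perihelion (m)."""
    return math.sqrt((G * m_star / r_perihelion)
                     * (1.0 / math.sin(delta_theta / 2.0) - 1.0))

def path_time(deflections, leg_lengths_pc, v_p):
    """Travel time (seconds) of one probe along a pre-assigned path.

    deflections    -- (delta_theta, m_star, r_perihelion) at each intermediate
                      system where the course changes (hypothetical input)
    leg_lengths_pc -- length of each leg of the path, in parsecs
    v_p            -- technological cap on the cruise speed, in m/s
    """
    # The cruise speed is the largest value allowed by both the cap v_p and
    # the most restrictive deflection along the path.
    v_cruise = min([v_p] + [leg_speed_limit(*d) for d in deflections])
    return sum(leg_lengths_pc) * PC_IN_M / v_cruise

def objective(completion_times):
    """Objective (9): the sum of squared completion times over all probes."""
    return sum(t * t for t in completion_times)
```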

Finally, the chance of a catastrophic failure of a probe, leading to its inability to complete its mission, is considered. For the following, it is assumed that the probability of a probe failing completely over a given parsec of its path is independent of it failing along any other parsec of that path. This is justified as follows. Since the probe has a constant cruise speed, the time of flight is directly proportional to the distance travelled. This means that intrinsically time-related failures – such as the failure of critical systems on board the probe – depend on path distance just as much as physical phenomena associated with the trajectory itself – e.g. the probability of the probe shielding failing due to a high-energy impact with cosmic dust. It is assumed that the time-dependent and distance-dependent failure modes have similar probability distributions, so that the net effect for a craft moving at constant speed can be described by a single success probability p per parsec.

Varying the parameter p will change the expected number of target systems successfully reached, and the time it takes to do so – either the time per probe, or the time to complete the entire exploration programme. These results are arrived at by using a simulated annealing algorithm (Kirkpatrick et al. Reference Kirkpatrick, Gelatt and Vecchi1983) to choose the best path for each probe. Although this does not guarantee the optimal routes for a multi-probe exploration programme, it does provide a solution that is relatively good in a short amount of computational time. Further details about the algorithm, and its effectiveness compared with the exact optimal solutions and other heuristic approaches, are given in the Appendix. An example simulation is demonstrated in Fig. 2.

Fig. 2. An example of a simulated exploration programme, for N target=20 and N probe=5, so that all star systems within 3.68 pc of the Solar System are surveyed. Lines represent the path of a probe from one system to another; the Solar System is the central node with five paths emerging from it. The first probe completes its mission after 1289 years, the last after 3931 years.

The first property of the exploration probes considered is the upper bound υp on their cruise speed. For the large probes considered here, this speed will depend on the method of launch from the Solar System – is the method of propulsion on board, or are other methods, e.g. beamed propulsion, used? Without getting into particulars, the effect that varying this maximum cruise speed has on the mean time for each probe to complete its exploration programme is examined. As seen in Table 2, the values of $\sqrt {GM_{\rm s} /r_{\rm p}}$ for various stellar types set the scale of probe cruise speeds in the range 10^{-4}c–10^{-2}c. A probe with a cruise speed above this range may have a shorter travel time, but also a more limited selection of stars to aim towards after leaving a target system, since the required course deflection may not be possible at its cruise speed. In other words, a desired exploration path for a given probe is likely to fix the maximum possible cruise speed to be of the order of 10^{-2}c. This varies according to the number of probes N probe – if more probes are launched for a fixed number of target systems, each probe has to visit fewer systems, leading to less stringent limitations on cruise speed. Thus, in Fig. 3, for N probe=3 and N probe=8, the travel time plateaus around υp=0.01c, but for N probe=20, the travel time decreases until about υp=0.02c. For the sake of definiteness, the maximum cruise speed of all probes is set to υp=0.01c in the remainder of this paper.

Fig. 3. Mean travel time for each probe in a collection of probes, as a function of the cruise speed υp for each probe. Three possible choices of the number of probes launched from the Solar System are shown; a total of N target=60 systems are visited by the collection.

Next, the effect of increasing the total number of probes N probe on the travel time of each probe is studied; for N target=60, the minimum, mean and maximum travel times for the collection of probes are shown in Fig. 4. Recall that the objective function used is the Euclidean norm (9) of all probe completion times, so various other measures of the travel times are reported in Fig. 4 as a different viewpoint on the solutions arrived at. Somewhat surprisingly, all three of these measures are roughly linear in this log–log plot, meaning that t∝N_probe^k for some power k. Note that for most simulations used to obtain a scaling law of this type, N probe<N target, since for a comparatively large number of probes, the simulated annealing algorithm would not assign any target systems to a small percentage of the spacecraft; simulations with these results were discarded. Simulations over a large variety of choices for υp, N probe and N target were run; specifically, υp ranged from 10^{-3}c to 10^{-2}c, N target took values in multiples of ten within the range [10, 60], and values of N probe were all multiples of four such that the simulation algorithm assigned paths to all probes. From these simulations, it was found that the mean travel time for each probe is given by the scaling law

(10)$$t_{{\rm mean}} \propto N_{{\rm target}}^{0.935 \pm 0.0208} \,{\rm \upsilon} _{\rm p}^{ - 0.942 \pm 0.0152} \,N_{{\rm probe}}^{ - 1.04 \pm 0.0213}.$$

For minimum and maximum times,

(11)$$t_{\min} \propto N_{{\rm target}}^{1.30 \pm 0.0262}\, {\rm \upsilon} _{\rm p}^{ - 0.921 \pm 0.0192} \,N_{{\rm probe}}^{ - 1.27 \pm 0.0268}$$

and

(12)$$t_{\max} \propto N_{{\rm target}}^{1.01 \pm 0.00974} \,{\rm \upsilon} _{\rm p}^{ - 0.929 \pm 0.00714} \,N_{{\rm probe}}^{ - 0.797 \pm 0.00998},$$

respectively. All of these relations were found by using a multiple linear regression on the logarithms of the variables; the coefficient of determination R 2 for all three equations exceeds 0.934. From these relations, all time scales are seen to be inversely proportional to the maximum speed of the probes; at least roughly, staying within our chosen range υp⩽0.01c, doubling this maximum speed causes the mean time for each probe to complete its mission to be halved. The dependence of the times on the number of target stars and probes has a wider variance. Specifically, doubling N probe reduces the minimum time for a probe to complete its mission by 58.5%, but the maximum time by only 42.4%. Finally, although it is not included in the equations listed above, the various times t scale with the heat flux H max incident on the probe at perihelion as t∝H_max^{-1/4}, as seen from the form (8) of the constant $\sqrt {GM_{\rm s} /r_{\rm p}}$ and verified by numerical simulations.
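For reference, the kind of fit used to obtain these exponents – an ordinary least-squares regression on the logarithms of the variables – can be sketched as follows. The data generated here are synthetic placeholders standing in for the simulation outputs, so the recovered exponents only illustrate the procedure, not the quoted values.

```python
import numpy as np

def fit_scaling_law(n_target, v_p, n_probe, t_mean):
    """Fit t_mean ~ N_target^a * v_p^b * N_probe^c by least squares on the
    logarithms of the variables; returns the exponents (a, b, c) and R^2."""
    X = np.column_stack([np.log(n_target), np.log(v_p), np.log(n_probe),
                         np.ones(len(t_mean))])       # final column: intercept
    y = np.log(t_mean)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    r_squared = 1.0 - np.var(y - X @ coeffs) / np.var(y)
    return coeffs[:3], r_squared

# Synthetic placeholder data (not the paper's simulation results), built with
# exponents close to those of equation (10) plus a small scatter.
rng = np.random.default_rng(0)
n_target = rng.choice([10, 20, 30, 40, 50, 60], size=200)
v_p = rng.uniform(1.0e-3, 1.0e-2, size=200)
n_probe = rng.integers(4, 33, size=200)
t_mean = (n_target ** 0.94) * (v_p ** -0.94) * (n_probe ** -1.0) \
         * rng.lognormal(0.0, 0.05, size=200)
exponents, r2 = fit_scaling_law(n_target, v_p, n_probe, t_mean)
print(exponents, r2)
```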

Fig. 4. Minimum, mean and maximum travel times for probes exploring a total of N target=60 systems, as a function of the total number of probes N probe used in the exploration program.

Fig. 5. Cumulative time to visit a given number of systems, in units of kiloyears, with either N probe=3,8, or 20 probes used to explore a total of N target=60 systems.

Up until this point, all results presented have assumed that probes successfully carry out their assigned missions without mishap. Obviously, this is a simplifying assumption, since there would be a non-zero probability of catastrophic failure for each probe. The probability of the probe successfully travelling an additional parsec along its path is denoted as p. Therefore, supposing the distance between systems i and j in parsecs is given by d ij, the probability for a probe to successfully cross this distance is given by $p^{d_{ij}}$, while the probability of catastrophic failure is $1 - p^{d_{ij}}$. Now the expected number 〈N〉 of systems visited by the collection of probes is sought. Since all probes are independent, the expected number of systems 〈N〉 visited by all probes is simply the sum ∑a〈N a〉 of systems visited by each probe a. With k a systems for probe a to visit, the expected number of systems visited is defined as

(13)$$\langle N_a \rangle = \sum\limits_{n = 0}^{k_a} \,np_n,$$

where p n is the probability that a maximum of n systems out of the k a possible are reached, with ∑n p n=1. The value p n is computed using a formal power series in a variable x, where the coefficient of the x^n term gives the probability that exactly n systems are successfully visited. The probability of the probe going from system i to system j is thus represented by the linear function

$$(1 - p^{d_{ij}} ) + p^{d_{ij}} x,$$

while the outcome of going from i→j→k is given by the combination of two such functions, namely

(14)$$\eqalign{ & (1 - p^{d_{ij}} ) + p^{d_{ij}} x[(1 - p^{d_{\,jk}} ) + p^{d_{\,jk}} x] = (1 - p^{d_{ij}} ) \cr & + p^{d_{ij}} (1 - p^{d_{\,jk}} )x + p^{d_{ij}} p^{d_{\,jk}} x^2.}$$

Again, each coefficient on the right-hand side of equation (14) gives the probability of each possible number of systems visited. In particular, the constant value $1 - p^{d_{ij}}$ gives the chance that no systems are visited, since it is the probability that the probe does not successfully travel from i to j. The term linear in x gives the probability that one system is reached, since the probe reaches system j – with chance $p^{d_{ij}}$ – but not system k – with chance $1 - p^{d_{\,jk}}$. Finally, the quadratic term shows the probability of reaching both system j (probability $p^{d_{ij}}$) and system k (probability $p^{d_{\,jk}}$). Note that this probability is the same as $p^{d_{ij} + d_{\,jk}}$, i.e. the parameter p raised to the power of the total distance travelled along the probe's path. Thus, if the distance d i is defined as the distance along the path of the probe from the Solar System (node 0) to system i, then the generating function f a(x) for the number of systems visited by probe a is

$$\eqalign{ f_a (x) =\ &(1 - p^{d_1} ) + (p^{d_1} - p^{d_2} )x + \cdots + (p^{d_{k_a - 1}} - p^{d_{k_a}} )x^{k_a - 1} \cr & + p^{d_{k_a}} x^{k_a} = \left[ {\sum\limits_{n = 0}^{k_a - 1} \,(p^{d_n} - p^{d_{n + 1}} )x^n} \right] + p^{d_{k _a}} x^{k _a},}$$

where d 0=0. The last term in the series differs from the others, since it is irrelevant whether the probe survives beyond the last targeted system on its path. To arrive back at the expected number 〈N a〉 given by (13), note that

$$\langle N_a \rangle = \displaystyle{{{\rm d}f_a} \over {{\rm d}x}}|_{x = 1}.$$

To find the expected number 〈N〉 of systems visited by all probes, the product rule for derivatives (together with the fact that f a(1)=1 for each probe) gives

$$\langle N\rangle = \displaystyle{{\rm d} \over {{\rm d}x}}\left[ {\prod f_a (x)} \right]_{x = 1}.$$
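A direct way to evaluate these expressions is to build the coefficient list of each f_a and read off the derivative at x=1; the sketch below does this for a hypothetical set of probe paths specified by their leg lengths (the numerical path data are invented for illustration).

```python
import numpy as np

def probe_polynomial(p, leg_lengths_pc):
    """Coefficients [p_0, ..., p_k] of the generating function f_a(x) for one
    probe, where leg_lengths_pc are the successive leg lengths in parsecs and
    p is the success probability per parsec."""
    d = np.concatenate([[0.0], np.cumsum(leg_lengths_pc)])  # cumulative d_n, with d_0 = 0
    survive = p ** d                                        # p^{d_n}: probability of reaching system n
    coeffs = list(survive[:-1] - survive[1:])               # exactly n systems visited, n < k
    coeffs.append(survive[-1])                              # all k targets visited
    return coeffs

def expected_systems(p, paths):
    """<N> for the whole collection: the derivative of the product of the
    f_a(x) at x = 1, which (since each f_a(1) = 1) reduces to the sum of the
    individual expectations <N_a> = f_a'(1)."""
    total = 0.0
    for legs in paths:
        coeffs = probe_polynomial(p, legs)
        total += sum(n * c for n, c in enumerate(coeffs))
    return total

# Hypothetical example: two probes, with three legs and two legs respectively.
print(expected_systems(0.90, [[1.3, 2.1, 1.8], [2.4, 3.0]]))
```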

Using this procedure, the variation in the expected number of systems visited for particular choices of N target and N probe is shown in Fig. 6. The probabilities p used in computing these results were chosen as p=0.995^k, for integer values of k. Thus, the discussion below may not capture behaviour in the range 0.995<p<1. However, for all cases calculated with p=0.995, 〈N〉 ∼ N target, so it is likely that the dependence of the expected number of visited systems on N probe is almost non-existent for such high success probabilities.

Fig. 6. Graphs of the expected number of systems 〈N〉 visited by a collection of probes launched from the Solar System, as a function of the probability p of successfully crossing each parsec without a catastrophic failure. Each plot corresponds to a different number of total systems N target to be visited; within each plot, different choices for the number of probes N probe launched are given.

One interesting result is how increasing the number N probe of probes launched has limited utility beyond a certain value. To show a particular instance of this, suppose a probe design is chosen where the probability of success per parsec is 0.90; the resulting expected number of systems visited, given a particular number of targeted systems N target, is shown in Table 3. In particular, increasing the desired number of systems actually visited requires a large number of extra probes. For example, for N target=20, one has to double the number of probes from N probe=8 to N probe=16 to gain only one additional visited system. For the choice of N target=60, the results improve somewhat – doubling the number of probes from N probe=16 to N probe=32 results in a 70% increase in the expected number of systems visited – but further increases beyond N probe=32 do not give any appreciable gain beyond 15 systems. To express this in a different form, suppose N target=60 systems are targeted, and one wishes to know what success probability p is necessary to ensure that probes arrive at a minimum of 〈N〉=10 systems; these probabilities are given in Table 4. Consider the situation where one starts with a projected programme of N probe≃28, but would rather launch fewer probes by expending engineering effort to increase the probe success probability p. Table 4 shows that, at first, only small changes in p are needed to reduce N probe, but that diminishing returns occur as N probe decreases below N probe≃16.

Table 3. Expected number 〈N〉 of systems visited as a function of the number of systems N target targeted and the number of probes N probe launched, for a fixed success probability per parsec of p=0.90

Table 4. Probability p of probe success per parsec necessary to ensure the expected number of systems 〈N〉=10 when N target=60 systems are visited

Discussion

In this paper, the characteristics of a future programme for exploring the local interstellar neighbourhood with a fixed number of probes are examined, where each probe travels at a constant cruise speed and uses only gravitational deflections to change direction. When designing such a programme, one crucial metric is the amount of time it takes for the entire programme to be completed, or at least the average time for a typical probe to finish its pre-programmed trajectory. These values depend on the maximum possible speed of the probes, the number of systems targeted, and the number of probes sent out to do the exploring. Roughly speaking, doubling the number of probes or the speed of those probes serves to halve the amount of time to finish the desired exploration, while the completion time is roughly proportional to the number of targets – doubling the targets will approximately double the time to visit them. The latter is somewhat surprising, considering that twice the number of targets fills a volume whose radius is only about 2^{1/3}≃1.26 times as large. These conclusions are codified in the scaling laws (10) through (12). A second important factor to examine is how the probability of success alters the expected number of targets visited by functioning spacecraft. Somewhat surprisingly, increasing the number of probes launched will increase this expected number only up to a certain amount; further probes will not significantly change the sample of star systems explored.

This study set out to examine how to optimize an exploration programme based on ‘large’ spacecraft, such as a fleet of Daedalus-class probes. As such, the scaling laws obtained focus on how to maximize the results of such a programme. However, it is worth commenting here on considerations for an alternate choice, using a much larger number of ‘small’ probes, such as a mass-produced collection of solar sails or other beamed-propulsion concepts. It is intuitively likely that if multiple probes are sent out along the same path, with each probe having its own success probability p per parsec, the expected number of systems reached by these probes will be higher than for a single probe. Stated differently, this means the effective success probability p eff of these multiple probes should be larger than p. Although this question has not been extensively studied, preliminary comparisons show that this intuition is true, and that the ratio (p eff−p)/p is in the range 0.01–0.1, depending on the number of extra probes sent along the same trajectory.

Next, future avenues for expanding this model are mentioned. One further direction to consider is the inclusion of the motion of stars relative to the Solar System. Looking only at radial velocities, i.e. the component of a star's relative motion towards or away from the Solar System, many of these are of the order of 10^{-4}c, comparable to the probe speeds in our mathematical model. Thus, over the course of the simulated interstellar exploration programmes, there can be significant changes in both the distances between stars and the relative order of the distances between systems (Matthews Reference Matthews1994). This has two consequences for the scenario in this paper. The first is that the relative distance between star systems may change between the time the probes are launched and when a particular system is reached by one of the spacecraft. This adds a wrinkle not taken into account in the algorithm presented here, namely that it may be advantageous to change the order in which star systems are visited in order to use a favourable alignment. Note that this refers not only to a decreasing distance between two targets, but also to whether there is a decrease in the necessary angular deflection for a transit between three targets in sequence, allowing the probe to have a larger cruise speed for its entire journey. The second result of including relative motions is the possibility of increasing the cruise speed of a probe by a fortuitous gravitational assist at a target star (Forgan et al. Reference Forgan, Papadogiannakis and Kitching2012), although a decrease may happen as well. Both of these factors would serve to increase the needed complexity of the programme design algorithm used in the simulations, since the cruise speed of all probes and the positions of the target systems would be dynamic variables in the search for optimal paths. As detailed in the Appendix, the current simulated annealing algorithm looks for good paths by sampling random choices, evaluating the cruise speed and time for completion after the choice is made. However, if the target systems move in the simulation, then the cruise speed would depend on the positions of the stars because of the necessary angular deflections, but the position of a given star system when a probe reaches it can only be calculated once the arrival time is known, which in turn depends on the probe's cruise speed. The result is that the cruise speed for a given path would have to be optimized self-consistently, leading to extra calculations in the algorithm.

Finally, the implications for the possibility of probes from extraterrestrial civilizations visiting the Solar System are mentioned. As the results obtained here focus on optimizing the exploration of a certain volume of interstellar space, there are many instances where probes move back towards the Solar System as they complete their missions. Thus, the progress of the probes does not match a wavefront always propagating away from their origin (such as that modelled by Newman & Sagan (Reference Newman, Sagan, Finney and Jones1985)), so it is somewhat difficult to extrapolate these values beyond the scenarios considered. This can be seen in Fig. 5, showing the cumulative time to complete the programme as a function of the total number of targets N target. Although these times are roughly linear for most values of N target, there is a marked change around N target≃55 for N probe=3 probes. This alteration in behaviour may also occur for larger numbers of probes, but only at greater values of N target. This question can only be decided by explicitly carrying out the required simulations. However, it is plausible to say that the exploration programme of a nearby civilization would reach the Solar System within a few tens of kiloyears. Another issue is that, for all simulations presented here, it is assumed that all systems within a certain distance threshold are targeted by probes, but this is certainly not the only choice. In particular, if a civilization is only interested in stars of a similar spectral class to their own, there is a chance that their interstellar exploration may not pick our Sun as a target. However, the Solar System may still be visited by an alien probe, solely for the possibility of a favourable angular deflection provided by the Sun between two target systems. Indeed, examining exact solutions obtained using a branch-and-bound method, for a total of 11 systems and a selection of N target<11, many of the solutions include multiple systems not on the target list (Cartin Reference Cartin2012), included only for navigational purposes. A comparison of the number of solutions with extra systems to the total number of target lists possible is given in Table 5. Using relation (6) for the perihelion r p of the probe as a function of the maximum desired heat flux H max, we find that

(15)$$r_{\rm p} = (0.037{\rm AU})\sqrt {\left( {\displaystyle{{L_s} \over {L_ \odot}}} \right)\left( {\displaystyle{{1{\rm MW} {\rm m}^{-2}} \over {H_{{\rm max}}}}} \right)}$$

Thus, for an extraterrestrial probe using the Sun for a gravitational deflection with H max=7 MW m−2, r p=0.014 AU, well within the orbit of Mercury, and thus easily observed from Earth. This raises the question of whether there are favourable routes for probes travelling through our Solar System using only gravitational deflections – in other words, whether probes of this nature moving through the Solar System between nearby stars might be more likely to travel along a small number of trajectories. Computing these paths, and targeting the resulting sectors of the celestial sphere for SETI observations, makes for an interesting future project.
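The perihelion quoted here follows directly from relation (15); a short check, using a standard assumed value for the solar luminosity, is given below.

```python
import math

AU = 1.496e11     # m
L_SUN = 3.828e26  # W, standard solar luminosity (assumed value)

def perihelion_au(l_star, h_max):
    """Equations (6) and (15): heat-flux-limited perihelion distance in AU."""
    return math.sqrt(l_star / (4.0 * math.pi * h_max)) / AU

# For the Sun with the adopted limit of 7 MW m^-2 this gives ~0.014 AU,
# well inside Mercury's orbit, as stated in the text.
print(perihelion_au(L_SUN, 7.0e6))
```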

Table 5. Number of exact solutions featuring systems not on the target list, compared with the total number of choices for the target list, for N target systems out of a possible 11 targets (Cartin Reference Cartin2012)

Table 6. Estimation errors in time for a single probe (N probe=1) to complete its mission for the simulated annealing algorithm, and for two heuristic greedy algorithms – fastest speed and shortest time – compared with the exact optimal solutions obtained using a branch-and-bound algorithm, with N target given in the table. For the simulated annealing algorithm, a total of 60 trials were used. When there are eight or fewer target star systems, the exact and simulated annealing solutions agree. Each of the three algorithms is as described in the Appendix

Acknowledgements

The author wishes to thank Duncan Forgan for helpful comments on a draft version of this work.

Appendix. Simulation algorithms

The algorithm used to compute the simulations presented in this paper is detailed here, along with a discussion of simpler heuristic methods and a comparison of these methods to exact solutions. All numerical results shown use the standard technique of simulated annealing in order to arrive at close-to-optimal solutions for large datasets in a reasonable amount of time. Obtaining exact optimal solutions requires more computational effort, for example, by a branch-and-bound search through likely candidates. However, using simpler methods than simulated annealing may allow decent solutions to be obtained even faster; note that many of the previous efforts (Bjørk Reference Bjørk2007; Cotta & Morales Reference Cotta and Morales2009; Forgan et al. Reference Forgan, Papadogiannakis and Kitching2012) use such heuristic methods to obtain results, although Cotta and Morales supplement these with various improvement techniques. Below, the simulated annealing algorithm used in this paper is detailed, as well as the two heuristic methods, and these results are compared with the exact solutions obtained by a branch-and-bound method. The interested reader may turn to Dasgupta, Papadimitriou and Vazirani (Dasgupta et al. Reference Dasgupta, Papadimitriou and Vazirani2008) for further information about these algorithmic methods.

Simulated annealing is a technique inspired by the physical process of controlled heating and cooling of a material in order to form large crystals (Kirkpatrick et al. Reference Kirkpatrick, Gelatt and Vecchi1983). The process starts with an initially random solution, and its fitness – i.e. the value of the given objective function – is evaluated. This is compared with the fitness of ‘nearby’ solutions, where the initial solution is changed slightly by a random process. If the new solution is better, it is kept in lieu of the original solution. This is analogous to the heated material, where atoms may shift around at high temperatures and improve the crystalline structure, removing defects. However, it is possible that a shift may increase the energy of the material system, provided the increase is less than a temperature-dependent threshold. The higher the temperature, the more such jumps are allowed. In a similar manner, the simulated annealing algorithm may keep marginally worse solutions, in order to probe the solution space for deeper minima of the objective function. As the temperature decreases, these possible shifts become smaller, as it is hoped that the material has reached a local minimum in its free energy. Similarly, in the simulated annealing process, the amount of change allowed to the current solution decreases, based on a decreasing parameter analogous to temperature. In practice, multiple trials of the algorithm are necessary to ensure that a reasonable swath of solution space is searched effectively.

The particular version of the algorithm used here is detailed in the pseudocode of Algorithm A.1. In order to find the best path, the simulated annealing algorithm is run for a number of trials, typically Trials=60. The objective function C used is the Euclidean norm ∑ata 2, with t a the time it takes for probe a to complete its particular mission. For n=N target target systems, each path is stored as an array giving a permutation of the integers in the range [1, n] – the order in which the probe visits each system i after it leaves the source node i=0. When more than one probe (N probe=P) is considered, these additional probes are assigned numbers in the range [n+1, n+P−1] and the path array is a permutation of [1, n+P−1]; the upper bound is n+P−1 since the source node i=0 also counts as a ‘probe’ node. The extra nodes are taken to have the same position as the source node for purposes of calculating the objective function of the overall probe collection. Since all probes are ignored after they visit the last target system on their list, there is no contribution to the objective function from portions of the path array going from a target system to a ‘probe’ node. A trial starts with a random permutation of these integers; nearby solutions are obtained with ListSwitch by switching two of the numbers in the path array, and are stored in the array tempList. The result of each individual trial is stored in the temporary list tempMinPath, and compared with the overall minimum norm path minPath; if tempMinPath has a lower objective function, it is kept as the new best solution. In this implementation, the temperature schedule used is 1/log(k+1), for step k of the algorithm, up to a maximum of (n+1)^2 steps. This choice gives reasonably good results in a short period of time.
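A minimal Python sketch of the loop just described is given below. The two-element swap neighbourhood and the 1/log(k+1) temperature schedule follow the text; the Metropolis-style acceptance rule exp(−ΔC/T) and the caller-supplied objective are assumptions, since the exact acceptance criterion of the pseudocode is not reproduced here.

```python
import math
import random

def simulated_annealing(objective, n_nodes, trials=60):
    """Sketch of the procedure described above: a path is a permutation of
    [1, n_nodes], neighbours are generated by swapping two entries, and the
    temperature schedule is 1/log(k + 1) over (n_nodes + 1)^2 steps.
    `objective` maps a path to the cost C of equation (9)."""
    best_path, best_cost = None, float("inf")
    for _ in range(trials):
        path = list(range(1, n_nodes + 1))
        random.shuffle(path)                       # random starting permutation
        cost = objective(path)
        for k in range(1, (n_nodes + 1) ** 2 + 1):
            temperature = 1.0 / math.log(k + 1)
            candidate = path[:]                    # swap two entries (cf. ListSwitch)
            i, j = random.sample(range(n_nodes), 2)
            candidate[i], candidate[j] = candidate[j], candidate[i]
            new_cost = objective(candidate)
            # Always accept improvements; accept worse moves with a
            # temperature-dependent (Metropolis-style) probability.
            if (new_cost < cost or
                    random.random() < math.exp((cost - new_cost) / temperature)):
                path, cost = candidate, new_cost
        if cost < best_cost:                       # keep the best trial overall
            best_path, best_cost = path, cost
    return best_path, best_cost
```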

As stated before, the simulated annealing algorithm is not guaranteed to find the optimal solution, but is a method to quickly arrive at one that is close to optimal. To find the exact optimal solution, one must use other techniques such as a brute-force search or a branch-and-bound method; the use of the latter in the interstellar exploration programme will be detailed in a future paper. On the other hand, one can use heuristic techniques which arrive at potential solutions even more quickly than simulated annealing, at the cost of solutions that lie further from the optimal one. Detailed below are two possible greedy heuristic algorithms each probe can use to decide which star system to visit next; as with the simulated annealing algorithm, they pre-calculate the path, and require knowledge of the coordinates and stellar characteristics of all possible targets.

  1. Fastest speed: For each leg of the journey, coming from system i to system j, the next system k after j is chosen for the largest possible cruise speed out of all possibilities i→j→k. This depends only on the deflection angle for the trajectory i→j→k.

  2. Shortest time: As the probe arrives at system j after the leg i→j, the next system k is chosen so that the leg j→k has the shortest possible travel time of all possible targets. Note that this is a function both of the deflection angle for the path i→j→k and the distance from j to k.

In both cases, as the path is assembled from each portion, the allowed cruise speed may be reduced, because the required angular deflection at a given system may decrease the speed upper bound. As these two algorithms do not test many choices, compared to the simulated annealing algorithm, they are both much faster to compute. However, there is a corresponding decrease in optimality. To show this explicitly, the simple case of N probe=1 and a small number of target stars (N target⩽14) is considered. In Table 6, the simulated annealing and heuristic algorithms detailed above are compared with the exact results, and the percentage excess in time for the single probe to complete its mission is given. One can see that both heuristics fare rather poorly compared with the optimal solution, and even compared with the simulated annealing algorithm, which works rather well for these test cases.
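A sketch of the two greedy rules follows, assuming star positions are given as coordinate tuples and that the caller supplies the deflection-limited speed bound (e.g. from relation (4)) for the star at the current system; the function names are hypothetical.

```python
import math

def deflection_angle(prev_pos, here, nxt):
    """Course change Delta-theta between the incoming direction
    (prev_pos -> here) and the outgoing direction (here -> nxt)."""
    a = [h - p for h, p in zip(here, prev_pos)]
    b = [n - h for n, h in zip(nxt, here)]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def next_by_fastest_speed(prev_pos, here, candidates):
    """'Fastest speed' rule: pick the unvisited system needing the smallest
    course change, which permits the largest cruise speed."""
    return min(candidates, key=lambda c: deflection_angle(prev_pos, here, c))

def next_by_shortest_time(prev_pos, here, candidates, speed_limit):
    """'Shortest time' rule: pick the system minimizing leg length divided by
    the speed allowed for the required deflection; `speed_limit` maps a
    deflection angle to the maximum cruise speed at the current star."""
    def leg_time(c):
        return math.dist(here, c) / speed_limit(deflection_angle(prev_pos, here, c))
    return min(candidates, key=leg_time)
```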

Algorithm A. 1. Simulated annealing $(\vec r_i )$

References

Beals, K.A., Beaulieu, M., Dembia, F.J., Kerstiens, J., Kramer, D.L., West, J.R. & Zito, J.A. (1988). U.S. Naval Academy, NASA-CR-184718.
Bjørk, R. (2007). Int. J. Astrobiol. 6, 89–93; arXiv:astro-ph/0701238.
Bond, A. et al. (1978). J. Br. Interplanet. Soc. 31 (Supplement), S37–S42.
Bruntt, H., Bedding, T.R., Quirion, P.-O., Lo Curto, G., Carrier, F., Smalley, B., Dall, T.H., Arentoft, T., Bazot, M. & Butler, R.P. (2010). Mon. Not. R. Astron. Soc. 405, 1907–1923; arXiv:1002.4268.
Burleigh, M.R. et al. (2008). Mon. Not. R. Astron. Soc. Lett. 386, L5–L9; arXiv:0801.2917.
Cartin, D. (2012). Developing routes for interstellar exploration probes. Presented at the Anacapa Society Meeting, May 2012.
Cerný, V. (1985). J. Optim. Theory Appl. 45, 41–51.
Cotta, C. & Morales, Á. (2009). J. Br. Interplanet. Soc. 62, 82–88; arXiv:0907.0345.
Dasgupta, S., Papadimitriou, C.H. & Vazirani, U.V. (2008). Algorithms. McGraw-Hill Higher Education, Boston.
Dawson, P.C. & De Robertis, M.M. (2004). Astron. J. 127, 2909–2914.
Demory, B.-O., Segransan, D., Forveille, T., Queloz, D., Beuzit, J.-L., Delfosse, X., Di Folco, E., Kervella, P., Le Bouquin, J.-B. & Perrier, C. (2009). Astron. Astrophys. 505, 205–215; arXiv:0906.0602v2.
Forgan, D.H., Papadogiannakis, S. & Kitching, T. (2012). J. Br. Interplanet. Soc., to be published; arXiv:1212.2371.
Forward, R.L. (1985). J. Spacecr. Rockets 22, 345–350.
Gatewood, G. & Russell, J. (1974). Astron. J. 79, 815–818.
Holberg, J.B., Barstow, M.A., Bruhweiler, F.C., Cruise, A.M. & Penny, A.J. (1998). Astrophys. J. 497, 935–942.
King, R.R., McCaughrean, M.J., Homeier, D., Allard, F., Scholz, R.-D. & Lodieu, N. (2009). Astron. Astrophys. 510, A99; arXiv:0911.3143.
Kirkpatrick, S., Gelatt, C.D. Jr. & Vecchi, M.P. (1983). Science 220, 671–680.
Landis, G.A. (1999). Advanced Solar- and Laser-pushed Lightsail Concepts. Final Report for the NASA Institute for Advanced Concepts, May 31.
Lang, K.R. (1999). Astrophysical Formulae, Volume II: Space, Time, Matter and Cosmology, 3rd edn. Springer-Verlag, Berlin.
Liebert, J., Young, P.A., Arnett, D., Holberg, J.B. & Williams, K.A. (2005). Astrophys. J. Lett. 630, L69–L72; arXiv:astro-ph/0507523.
Long, K.F., Obousy, R.K., Tziolas, A.C., Mann, A., Osborne, R., Presby, A. & Fogg, M. (2009). J. Br. Interplanet. Soc. 62, 403–414.
Mamajek, E.E. (2012). Astrophys. J. Lett. 754, L20; arXiv:1206.6353v2.
Marion, J.B. & Thornton, S.T. (1988). Classical Dynamics of Particles & Systems, 3rd edn. Academic Press, San Diego.
Matthews, R.A.J. (1994). Q. J. R. Astron. Soc. 35, 1–9.
Millis, M.G. (2011). Energy, incessant obsolescence, and the first interstellar missions. In 61st International Astronautical Congress, International Astronautical Federation, Prague, Czech Republic; arXiv:1101.1066.
Moir, R.W. & Barr, W.L. (2005). J. Br. Interplanet. Soc. 58, 332–341.
Newman, W.I. & Sagan, C. (1985). Nonlinear diffusion and population dynamics. In Interstellar Migration and the Human Experience, eds Finney, B.R. & Jones, E.M., University of California Press, Berkeley, pp. 302–312.
Provencal, J.L., Shipman, H.L., Koester, D., Wesemael, D. & Bergeron, P. (2002). Astrophys. J. 568, 324–334.
Research Consortium on Nearby Stars (2012). “List of the Nearest 100 Stellar Systems”, available at www.recons.org (accessed 6 January 2013).