1. Introduction
1.1. Overview of the paper
The primary motivation that fuels our work in this paper is the need to understand how an infectious disease spreads through a homogeneous population comprising intelligent, pragmatically thinking individuals who decide upon their actions (such as distancing oneself from possibly infected acquaintances via voluntary confinement to one’s home) on a day-to-day basis, with the aim of maximizing their respective utility functions. The key novelty of our work lies in being able to capture, via our model, the fact that the population we consider is made up of rational beings referred to as agents or players. We emphasize here the need to investigate the spread of contagion through a population whose members are not just helpless entities exposed to the infection at the whim of nature alone (see Section 1.2 for a brief discussion of the existing literature on models devised for studying the spread and control of epidemics, based on game theory).
Our model is firmly based on the premise of game theory, with a population $N = \{1, 2, \ldots, n\}$ of n agents, each of whom is allowed to choose from a set $A = [0,1]$ of available actions. Choosing action 0 is equivalent to the agent confining themselves to their home and coming in contact with no other agent, whereas choosing action 1 is tantamount to the agent going about their day as usual, with no restrictions imposed. An action profile $a_N=(a_1,\ldots,a_n)$ is an element of the set $A^n$ , with $a_{i}$ indicating the most recent action undertaken by agent i, for each $i \in N$ . The agents are represented by the vertices of an undirected weighted graph, and the interaction between agent i and agent j, for distinct $i, j \in N$ , is captured by the weight $g_{i,j} \in [0,1]$ of the edge connecting the vertices i and j. We further endow agent i, for each $i \in N$ , with an immunity power $\tau(i) \in (0,1)$ . We consider a discrete-time stochastic process indexed by $\mathbb{N}_{0}$ , the set of all non-negative integers. At the beginning of the tth epoch of time, for each $t \in \mathbb{N}_{0}$ , an agent $\underline{v}_{t}$ is chosen uniformly at random out of N and permitted to update their action. The chosen agent decides upon their action by taking stock of the state the process is in at the beginning of that epoch, and their own utility function, both of which are formally defined in Section 2. We mention here that the state $S_{t}$ of the process, at the start of epoch $t \in \mathbb{N}_{0}$ , is made up of two crucial components:
-
(i) the set $I(S_{t})$ comprising all the agents who are infected at the beginning of epoch t, and
-
(ii) the action profile $a_{N}(S_{t})$ of the agents at the beginning of epoch t.
Henceforth, the process mentioned above will be referred to as the stochastic virus spread process (SVSP). In addition, we shall consider, for some of our preliminary investigations of the SVSP, a deterministic virus spread process (DVSP) (see Section 2 for a more formal definition) in which the sequence $\underline{v} = (\underline{v}_{t}: t \in \mathbb{N}_{0})$ of agents is specified fully (i.e. the agent $\underline{v}_{t}$ chosen to update their action at the start of epoch t, for each $t \in \mathbb{N}_{0}$ , is predetermined, and not random).
The principal questions we aim to answer in this work are those concerning the limiting distribution of the infected set $I(S_{t})$ and the limiting distribution of the action profile $a_{N}(S_{t})$ of all agents concerned, as $t \rightarrow \infty$ , provided such limits exist. Such questions are pertinent not just theoretically, but also from a very practical perspective in that, in any country, the departments under the federal government that are tasked with overseeing the provision of healthcare for the population must be able to reliably predict the approximate proportion of citizens to get infected in the long run (i.e. when the epidemic has continued for a fairly long period). This is necessary because such knowledge can aid in the decision of the amount of resources (medicines and medical equipment, hospital beds, etc.) to set aside for the treatment of infected patients in the long run. The investigation of the limiting behavior of the action profile $a_{N}(S_{t})$ as $t \rightarrow \infty$ goes on to reveal how, when such a limit exists, individuals in a population typically tend to behave once the epidemic has prevailed for a sufficiently long time.
1.2. A brief review of pertinent literature
The classical compartmental models of epidemiology (see [Reference Brauer5] for a comprehensive survey) date as far back as the early 1900s (see [Reference Ross and Hudson34]). Some of the most notable ones out of these are the susceptible–infectious–removed (SIR) model (see [Reference Kermack and McKendrick17]), the susceptible–infectious–susceptible (SIS) model (see [Reference Hethcote13]), and the susceptible–exposed–infectious–removed (SEIR) model (see [Reference Aron and Schwartz1]). In recent years, more attention has been given to network models, in which the vertices or nodes of a network represent the individuals of the population under consideration, and the edge between any two distinct nodes denotes the relationship or interaction between the two individuals those nodes represent (for instance, see [Reference Azizi4, Reference Browne, Amchin, Schneider and Datta6, Reference Craig, Phelan, Siedlarek and Steinberg9, Reference Cui, Ni and Shen10, Reference Keeling and Eames16, Reference Lang, De Sterck, Kaiser and Miller23, Reference Maheshwari and Albert26, Reference Newman30, Reference Pastor-Satorras33, Reference Sah36, Reference Small and Tse38]).
We now begin a discussion of research articles that are closely aligned in flavor with our work in this paper. We begin with [Reference Aurell, Carmona, Dayankl and Laurière2], which investigates a game for a continuum of non-identical players evolving on a finite state space, with their heterogeneous interactions with other players represented via a graphon (viewed as the limit of a dense random graph). A player’s transition rates between the states depend on their control and the strength of their interaction with other players. Sufficient conditions for the existence of Nash equilibria are studied in [Reference Aurell, Carmona, Dayankl and Laurière2], and the existence of solutions to a continuum of fully coupled forward–backward ordinary differential equations characterizing the Nash equilibria is proved. In [Reference Vizuete, Frasca and Garin39], spectral properties of graphons are used to study stability and sensitivity to noise of deterministic SIS epidemics over large networks. In particular, the presence of additive noise in a linearized SIS model is considered, and a noise index is derived to quantify the deviation from the disease-free state due to noise.
In the next couple of paragraphs, we focus on citing a few articles out of the vast literature that concerns itself with applying the theory of mean-field games to the study of the spread of an epidemic throughout a population. In [Reference Aurell, Carmona, Dayankl and Laurière3], motivated by models of epidemic control in large populations, a Stackelberg mean-field game model between a principal and a mean field of agents evolving on a finite state space is considered, with the agents playing a non-cooperative game in which they can control their transition rates between states to minimize individual costs. An application is then proposed to an epidemic model of the SIR type in which the agents control their interaction rate and the principal is a regulator acting with non-pharmaceutical interventions. In [Reference Lee24], a mean-field game model for controlling the propagation of epidemics on a spatial domain is introduced, with the control variable being the spatial velocity (introduced first for the classical disease models, such as SIR), and fast numerical algorithms based on proximal primal–dual methods are provided. In [Reference Lee, Liu, Li and Osher25], a mean-field variational problem in a spatial domain, controlling the propagation of a pandemic by the optimal transportation strategy of vaccine distribution, is investigated. In [Reference Olmez32], an agent’s decision as to whether to be socially active in the midst of an epidemic is modeled as a mean-field game with health-related costs and activity-related rewards. By considering the fully and partially observed versions of this problem, the paper highlights the role of information in guiding an agent’s rational decisions. In [Reference Olmez31], how the evolution of an infectious disease in a large heterogeneous population is governed by the self-interested decisions (to be socially active) of individual agents is studied based on a mean-field-type optimal control model. The model is used to investigate the role of partial information in an agent’s decision-making, and to study the impact of such decisions by a large number of agents on the spread of the virus in the population.
In [Reference Cho7], a mean-field game model is proposed in which each of the agents chooses a dynamic strategy of making contacts, given the trade-off of gaining utility but also risking infection from additional contacts. Both the mean-field equilibrium strategy, which assumes that each agent acts selfishly to maximize their own utility, and the socially optimal strategy, which maximizes the total utility of the population, are computed and compared with each other. In the computation of the socially optimal strategies, an additional cost is included as an incentive to the agents to change their strategies. The price of anarchy of this system is computed to understand the conditions under which large discrepancies between the mean-field equilibrium strategies and the socially optimal strategies arise, which is when intervening public policy would be most effective. In [Reference Doncel, Gast and Gaujal11], a mean-field game model of SIR dynamics is proposed in which players choose when to get vaccinated. It is shown that this game admits a unique mean-field equilibrium that consists of vaccinating aggressively at a maximal rate for a certain amount of time and then not vaccinating, and it is shown that this equilibrium has the same structure as the vaccination strategy that minimizes the total cost. A very similar problem is studied in [Reference Gaujal, Doncel and Gast12] that focuses on virus propagation dynamics in a large population of agents, with each agent being in one of three possible states (namely, susceptible, infected, and recovered) and with each agent allowed to choose when to get vaccinated. It is shown that this system admits a unique symmetric equilibrium when the number of agents goes to infinity, and that the vaccination strategy that minimizes the social cost has the same threshold structure as the mean-field equilibrium, though the latter has a shorter threshold. The paper [Reference Hubert and Turinici15] studies newborn, non-compulsory vaccination in an SIR model with vital dynamics, with the evolution of each individual modeled as a Markov chain and their decision to vaccinate aimed at optimizing a criterion that depends on the time-dependent aggregate (societal) vaccination rate and the future epidemic dynamics. The existence of a Nash mean-field game equilibrium among all individuals in the population is established. In [Reference Laguzet and Turinici18], techniques from the theory of mean-field games are used to examine whether, in an SIR model, egocentric individuals (i.e. those whose actions are driven by self-interest when it comes to getting vaccinated) can reach an equilibrium with the rest of society, and it is shown that an equilibrium exists. The individual best vaccination strategy (with as well as without discounting) is completely characterized, a comparison is made with a strategy based only on overall societal optimization, and a situation with a non-negative price of anarchy is exhibited. In [Reference Laguzet, Turinici and Yahiaoui19], individual optimal vaccination strategies in an SIR model are analyzed. It is assumed that the individuals vaccinate according to a criterion taking into account the risk of infection, the possible side effects of the vaccine, and the overall epidemic course; that the vaccination capacity is limited; and that each individual discounts the future at a given positive rate. Under these assumptions, an equilibrium between the individual decisions and the evolution of the epidemic is shown to exist. 
In [Reference Salvarani and Turinici37], a model of an agent-based vaccination campaign against influenza with imperfect vaccine efficacy and durability of protection is considered. The existence of a Nash equilibrium is proved, and a novel numerical method is proposed to find said equilibrium. Various aspects of the model are also discussed, such as the dependence of the optimal policy on the imperfections of the vaccine, the best vaccination timing, etc.
In [Reference Hubert, Mastrolia, Possamaï and Warin14], a general mathematical formalism is introduced to study the optimal control of an epidemic via incentives to lockdown and testing, and the interplay between the government and the population, while an epidemic is spreading according to the dynamics given by a stochastic SIS model or a stochastic SIR model, is modeled as a principal–agent problem with moral hazard. Although, to limit the spread of the virus, individuals within a given population can choose to reduce interactions among themselves, this cannot be perfectly monitored by the government, and it comes with certain social and monetary costs for the population. One way to mitigate such costs and encourage social distancing, lockdown, etc. is to put in place an incentive policy in the form of a tax or subsidy. In addition, the government may also implement a testing policy in order to know more precisely the spread of the epidemic within the country, and to isolate infected individuals. It is verified via numerical results that if a tax policy is implemented, the individuals in the population are encouraged to significantly reduce interactions among themselves, and if the government also adjusts its testing policy, less effort is required on the part of the population to enforce social distancing and lockdown upon itself; the epidemic is largely contained by the targeted isolation of individuals who test positive. In [Reference Cooney8], a model for the evolution of sociality strategies in the presence of both a beneficial and a costly contagion is investigated, and a social dilemma is identified in that the evolutionarily stable sociality strategy is distinct from the collective optimum (i.e. the level of sociality that would be best for all individuals); in particular, the level of social interaction in the former is greater (respectively less) than the social optimum when the good contagion spreads more (respectively less) readily than the bad contagion. Finally, we cite [Reference Roy, Singh and Narahari35], which provides a state-of-the-art update on recent advances in the mean-field approach that can be used very effectively in analyzing a dynamical modeling framework, known as a continuous-time Markov decision process, for epidemic modeling and control.
1.3. Organization of the paper
The model that we investigate in this paper, along with all the pertinent definitions, has been described formally in Section 2, although we did allude to it briefly in Section 1. Section 2 also includes some observations and lemmas concerning the deterministic virus spread process (also mentioned previously in Section 1). The main results of this paper—namely Theorems 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10—are stated in Section 3, along with relevant discussions regarding the conclusions drawn from them. Simulations exploring the cardinality of the infected set for the first several epochs of the process, which yield good approximations to the limit that it converges to, are given in Section 6. A summary of what we have been able to achieve in this paper, along with directions of research on this as well as related topics that we wish to pursue in the future, is provided in Section 7.
We would like to emphasize to our readers that we have included the proofs of Theorem 1, Theorem 2, and Theorem 7 in the main body of the paper, immediately following their respective statements in Section 3. We have done so in order to illustrate some of the most fundamental ideas used in our proof techniques. However, we have deferred the proofs of the remaining main results to Appendices B and D, to keep the paper as uncluttered as possible for the reader. Our aim is to ensure that our readers fully understand the key steps of the analysis carried out to prove our results, without being burdened with technical details every step of the way.
2. Formal description of the model
Recall, from the second paragraph of Section 1.1, the brief introduction to the model we consider in this paper. Here, we formalize the model by providing mathematical definitions of the crucial quantities involved in it.
The process described in Section 1.1 is said to be in the state $S = (I,a_{N})$ if $I \subseteq N$ denotes the set of infected agents and $a_N$ denotes the action profile at that time. Given a state S, we denote by I(S) the corresponding set of infected agents, and by $a_{\widehat{N}}(S) = (a_{i}(S)\,:\, i \in \widehat{N})$ the tuple in which $a_{i}(S)$ represents the action of the ith agent for all $i \in \widehat{N}$ , for any subset $\widehat{N}$ of N. In particular, if $\widehat{N} = N \setminus \{j\}$ , we abbreviate the notation $a_{N \setminus \{j\}}(S)$ by $a_{-j}(S)$ , and for any $a \in A$ , we denote by $(a \vee a_{-j}(S))$ the tuple $(a_{1}(S), \ldots, a_{j-1}(S), a, a_{j+1}(S), \ldots, a_{n}(S))$ . We denote by $\mathcal{S}$ the set of all possible states.
The viral exposure $r_i(S)$ that agent i is subjected to, when the process is in state S, is defined as
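$$r_i(S) = \frac{\sum_{j \in I(S)\setminus\{i\}} g_{ij}\, a_j(S)}{\sum_{j \in N\setminus\{i\}} g_{ij}\, a_j(S)},$$
with the convention that $r_i(S)=0$ whenever the denominator is 0 (i.e. when no other agent is going out at all).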
For an intuitive understanding of viral exposure, consider the interpretation that $a_j(S)$ determines how much (say, how many units of time) the jth individual goes out at state S, and $g_{ij}$ represents the amount of interaction i has with j when both are outside. Therefore, $g_{ij}a_j(S)$ is the amount of interaction the ith individual will have with the jth individual when they both are outside. Now, the denominator captures the fact that with more uninfected people roaming around, the amount of interaction an individual has with an infected person proportionally reduces, and so does their chance of being infected. Also, one can probabilistically interpret the model by assuming that for any individual j, $a_j$ is the probability that they go out, and $\frac{g_{ij}}{\sum_{k \in N \setminus \{i\}} g_{ik}}$ is the probability that individual i interacts with individual j when they both are out. Then, $r_i(S)$ is the probability that individual i interacts with an infected person, conditional on i going out (that is, $a_i=1$).
We assume that an individual i gets infected if $a_i r_i(S) \gt \tau(i)$. For a justification of this assumption, note that the viral exposure $r_i(S)$ of an individual i does not account for the precaution (staying at home) taken by i. However, the amount of virus individual i receives will naturally depend on the amount they go out (that is, $a_i$), together with the effective amount of virus present outside in i’s network (that is, $r_i(S)$). Therefore, the product of $r_i(S)$ and $a_i$ measures the total amount of virus that i’s body receives when i chooses $a_i$. This product can also be interpreted as follows: with the maximum possible risk of exposure being $r_i(S)$, if the ith individual does not go out at all (that is, $a_i=0$) then they are not exposed at all, whereas if they choose to fully go out (that is, $a_i=1$) then they get exposed to the maximum possible amount, which is $r_i(S)$. Now, since the immunity power $\tau(i)$ measures the maximum amount of virus that an individual’s body can withstand, this implies that an individual i would be infected if $a_i r_i(S) \gt \tau(i)$.
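To make the preceding discussion concrete, the following minimal sketch evaluates an agent's viral exposure and checks the infection condition. It is ours rather than the authors'; it assumes the form of $r_i(S)$ displayed above (with exposure taken to be 0 when the denominator vanishes), and all function and variable names are illustrative.

```python
# Minimal sketch (assumptions: the exposure formula above; agents indexed 0..n-1).

def viral_exposure(i, infected, actions, g):
    """Exposure r_i(S) of agent i at state S = (infected, actions)."""
    n = len(actions)
    denom = sum(g[i][j] * actions[j] for j in range(n) if j != i)
    if denom == 0:
        return 0.0  # convention: nobody else is out, so no exposure
    numer = sum(g[i][j] * actions[j] for j in infected if j != i)
    return numer / denom

def becomes_infected(i, infected, actions, g, tau):
    """Agent i gets infected when a_i * r_i(S) strictly exceeds tau[i]."""
    if i in infected:
        return False
    return actions[i] * viral_exposure(i, infected, actions, g) > tau[i]
```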
Next, we introduce the utility function of an agent. The utility of an agent i, when the process is in state S, is defined as
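$$u_{i}(S) = u_i\left(I(S), a_{N}(S)\right) = \begin{cases} 1+f(a_{i}(S)) & \text{if } i \notin I(S) \text{ and } a_{i}(S)\, r_{i}(S) \leqslant \tau(i),\\ f(a_{i}(S)) & \text{otherwise,} \end{cases} \qquad (1)$$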
where $f\,:\,[0,1] \to [0,1]$ is a strictly increasing function. Intuitively, if agent i is neither already infected during the current epoch (which is indicated by the condition $i \notin I(S)$) nor at risk of being infected in the next epoch (which is indicated by the condition $a_i(S) r_i(S) \leqslant \tau(i)$, i.e. their action multiplied by the viral exposure they have been subjected to does not exceed their immunity power), they enjoy a ‘reward’ of amount 1 in addition to the utility $f(a_{i}(S))$ that they receive because of their chosen action (note that the strictly increasing nature of f ensures that the more they go out in society, the more utility they get). Otherwise, they are deprived of such a reward and must settle for the utility value $f(a_{i}(S))$. One can interpret the loss of this reward as the cost of the viral infection.
We now formally describe how agent i responds if they are chosen to update their action at the beginning of an epoch when the system is in state S. We call this the best response by agent i at state S, denoted by $b_i(S)$ , and it is defined as
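$$b_i(S) = \mathop{\mathrm{arg\,max}}_{a \in A}\; u_i\left(I(S), (a \vee a_{-i}(S))\right). \qquad (2)$$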
In words, this is the set of actions a by agent i that allow them to maximize their utility function (note that the utility function, as defined in (1), is a function of the state, and the state here consists of I(S) as the infected set and $(a \vee a_{-i}(S))$ as the action profile).
The following lemma summarizes the best response of an agent at a state depending on whether or not they are infected at that state. In particular, it says that the best response always exists and is unique.
Lemma 1. Let $i \in N$ be an agent and $S \in \mathcal{S}$ be a state. Then
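$$b_i(S) = \begin{cases} 1 & \text{if } i \in I(S),\\ 1 & \text{if } i \notin I(S) \text{ and } r_i(S) = 0,\\ \min\left\{1, \dfrac{\tau(i)}{r_i(S)}\right\} & \text{if } i \notin I(S) \text{ and } r_i(S) \gt 0. \end{cases}$$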
Proof. We present the proof by distinguishing the three cases that appear in the statement of the lemma.
Case 1: $i \in I(S)$ .
By (1), $u_i\left(I(S),(a,a_{-i}(S))\right)=f(a)$ for every $a \in [0,1]$. As f is strictly increasing, their best response is $b_i(S)=1$.
Case 2: $i \notin I(S)$ and $r_i(S)=0$ .
Since $r_i(S)=0$ , $ar_i(S) \leqslant \tau(i)$ for all $a\in [0,1]$ , and hence by (1), $u_i\left(I(S),(a,a_{-i}(S))\right)=1+f(a)$ for all $a\in [0,1]$ . As f is an increasing function, this implies $b_i(S)=1$ .
Case 3: $i \notin I(S)$ and $r_i(S) \gt 0$ .
Consider the quantity $\frac{\tau(i)}{r_i(S)}$. It follows from (1) that $u_i\left(I(S),(a,a_{-i}(S))\right)$ is increasing in a in both the regions $\left[0,\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\}\right]$ and $\left(\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\},1\right]$. The maximum value of $u_i\left(I(S),(a,a_{-i}(S))\right)$ when a lies in the region $\left[0,\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\}\right]$ is $1+f\left(\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\}\right)$, and that when a lies in the region $\left(\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\},1\right]$ is f(1). Because $\tau(i) \gt 0$, we have $\frac{\tau(i)}{r_i(S)} \gt 0$. This, together with the fact that f is strictly increasing, implies $f\left(\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\}\right) \gt f(0) \geqslant 0$. Hence, $1+f\left(\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\}\right) \gt 1$. Additionally, as $f(1) \leqslant 1$, we have $1+f\left(\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\}\right) \gt f(1)$. Therefore, $u_i\left(I(S),(a,a_{-i}(S))\right)$, viewed as a function of a, is uniquely maximized at $\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\}$, implying $b_i(S)=\min\left\{1,\frac{\tau(i)}{r_i(S)}\right\}$. Combining all these, we obtain the form for $b_i(S)$ displayed in the statement of the lemma.
There are two key messages to take away from Lemma 1. The first of these is that the best response of an agent i at any state S is unique, which is why we henceforth present $b_i(S)$ as an element of A (doing so is more convenient than writing it as a singleton subset of A). The second is that once an agent is infected or runs no risk of becoming infected (i.e. the viral exposure is 0), they choose to go out with no restrictions imposed on their movements.
We now summarize the stochastic process we focus on in this paper. We denote by $S_{t} = (I(S_{t}), a_{N}(S_{t}))$ the state of the process at the start of epoch t, for $t \in \mathbb{N}_{0}$. At the beginning of epoch t, an agent $\tilde{v}_t$ is chosen uniformly at random, and then the chosen agent takes the best response $b_{\tilde{v}_t}(S_t)$ as defined in (2) based on the infected set $I(S_t)$ and the action profile $a_{-\tilde{v}_t}(S_t)$ of all agents except $\tilde{v}_t$ at epoch t. The action profile is then updated from $a_N (S_t)$ to $(b_{\tilde{v}_t}(S_t) \vee a_{-\tilde{v}_t}(S_t))$. Let us define an intermediate state $\hat{S}_{t} = \left(I(S_{t}), (b_{\tilde{v}_t}(S_t) \vee a_{-\tilde{v}_t}(S_t))\right)$. As a consequence, because of the change of the action profile, the viral exposures $(r_i(S_t))_{i \in N}$ change accordingly to $(r_i(\hat{S}_{t}))_{i \in N}$, and therefore those uninfected agents j satisfying $a_{j}(\hat{S}_{t})r_{j}(\hat{S}_{t}) \gt \tau(j)$ will also be infected and added to the set of infected agents. Thus, the updated infected set becomes $I(S_{t+1}) = I(S_{t}) \cup \{j\,:\, a_{j}(\hat{S}_{t})r_{j}(\hat{S}_{t}) \gt \tau(j)\}$, and at the beginning of epoch $t+1$, the process is in state $S_{t+1} = \left(I(S_{t+1}), (b_{\tilde{v}_t}(S_t) \vee a_{-\tilde{v}_t}(S_t))\right)$ (which tells us that $a_{N}(S_{t+1}) = (b_{\tilde{v}_t}(S_t) \vee a_{-\tilde{v}_t}(S_t))$).
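The epoch just described can be summarized in code. The sketch below is ours; it again assumes the exposure formula used in the earlier sketch of this section (repeated here so that the block runs on its own): the chosen agent plays the best response of Lemma 1, and then every uninfected agent whose action times exposure exceeds their immunity power is added to the infected set.

```python
import random

def viral_exposure(i, infected, actions, g):
    # Assumed form of r_i(S); exposure is 0 when no other agent is out.
    n = len(actions)
    denom = sum(g[i][j] * actions[j] for j in range(n) if j != i)
    if denom == 0:
        return 0.0
    return sum(g[i][j] * actions[j] for j in infected if j != i) / denom

def best_response(i, infected, actions, g, tau):
    # Best response b_i(S) as characterized in Lemma 1.
    if i in infected:
        return 1.0
    r = viral_exposure(i, infected, actions, g)
    return 1.0 if r == 0 else min(1.0, tau[i] / r)

def svsp_epoch(infected, actions, g, tau, chosen=None):
    """One epoch: the chosen agent (uniform if not given) plays their best
    response; then every uninfected j with a_j * r_j > tau_j gets infected."""
    n = len(actions)
    if chosen is None:
        chosen = random.randrange(n)               # the agent v_t
    actions = list(actions)                        # intermediate state \hat{S}_t
    actions[chosen] = best_response(chosen, infected, actions, g, tau)
    newly = {j for j in range(n) if j not in infected
             and actions[j] * viral_exposure(j, infected, actions, g) > tau[j]}
    return infected | newly, actions               # state S_{t+1}
```

Supplying a predetermined value of `chosen` at every epoch reproduces the DVSP induced by a fixed agent sequence, as defined in the next paragraph.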
Although we alluded to it in Section 1, we recall here the definition of the deterministic virus spread process (DVSP). Given a (deterministic) agent sequence $\underline{v}$ , the DVSP $S = (S_{t}\,:\, t \in \mathbb{N}_{0})$ induced by $\underline{v} = (\underline{v}_{t}\,:\, t \in \mathbb{N}_{0})$ , with $S_{t}$ indicating the state of the process just before epoch t commences, is defined in a manner identical to the stochastic virus spread process (SVSP) described above, with the only difference being that, instead of an agent being chosen randomly at the start of each epoch, the agent $\underline{v}_t$ is chosen at the start of epoch t to update their action, for each $t \in \mathbb{N}_{0}$ . Whenever the agent sequence $\underline{v}$ is not clear from the context, we shall denote the DVSP S (induced by $\underline{v}$ ) by $S(\underline{v})$ to emphasize its dependence on $\underline{v}$ . In this case, $S_{t}(\underline{v})$ will denote the state of the process at the start of epoch t.
In what follows, we make a few observations about the DVSP $S(\underline{v})$ that we shall use frequently throughout the paper. Recall that $\hat{S}_t$ indicates the intermediate state of the process at the midpoint of epoch t, for each $t \in \mathbb{N}_{0}$ .
Observation 1. Let S be the DVSP induced by $\underline{v}$ . Then, for all $t\in \mathbb{N}_0$ ,
-
(i) $I(S_t)=I(\hat{S}_t)$ and $a_{N}(\hat{S}_t)=a_{N}(S_{t+1})$ ;
-
(ii) if $b_{\underline{v}_t}(S_t)=a_{\underline{v}_t}(S_t)$ , then $S_t=\hat{S}_t=S_{t+1}$ ;
-
(iii) if $I(S_t)=I(S_{t+1})$ , then $\hat{S}_t=S_{t+1}$ .
Observation 2. For any fixed $i\in N$ , if $\underline{v}_t\neq i$ for some $t \in \mathbb{N}_0$ , then $a_i(S_t)=a_i(\hat{S}_t)=a_i(S_{t+1})$ . By repeated applications of this observation, we are able to conclude the following: if $\underline{v}_t\neq i$ for all $t \in [t^{\prime},t^{\prime\prime}]$ with $t^{\prime} \lt t^{\prime\prime}$ , then this yields $a_i(S_{t})=a_i(S_{t^{\prime}})$ for all $t\in [t^{\prime},t^{\prime\prime}]$ .
Observation 3. Since the best response of an infected agent is always 1 (see Lemma 1), $i \in I(S_t)$ and $\underline{v}_t=i$ together imply that $a_{i}(S_{t^{\prime}})=1$ for all $t^{\prime} \gt t$ .
Observation 4. Since, by Lemma 1, the best response of an uninfected agent i at state $S_t$ equals $\min\left\{1,\frac{\tau(i)}{r_i(S_t)}\right\}$ when $r_i(S_t) \gt 0$ and equals 1 when $r_i(S_t)=0$, we have $b_i(S_t)\,r_i(S_t) \leqslant \tau(i)$ in either case; as $r_i$ does not involve agent i’s own action, it follows that $\underline{v}_t=i$ and $i\notin I(S_t)$ together imply that $i\notin I(S_{t+1})$ as well.
Recall that our main goal in this paper is to explore the limiting behaviors of both the cardinality of the infected set of agents and the action profile of all the agents in our population. We now show that such limits are well-defined, at the very least, for a deterministic sequence of agents.
Lemma 2. The DVSP $S(\underline{v})$ converges for each agent sequence $\underline{v} \in N^{\mathbb{N}_0}$ . In other words, both $\lim_{t \to \infty} I(S_t(\underline{v}))$ and $\lim_{t \to \infty} a_{N}(S_t(\underline{v}))$ exist.
Proof. Let $\underline{v}$ be an agent sequence and let $S = S(\underline{v})$ be the DVSP it induces. It follows from the definition of $S(\underline{v})$ that $I(S_1(\underline{v}))\subseteq I(S_2(\underline{v}))\subseteq \cdots \subseteq I(S_t(\underline{v}))$ for any $t\in \mathbb{N}_0$ . As these sets are non-decreasing and contained in the finite set N, the limit $\lim_{t \to \infty} I(S_t(\underline{v}))$ exists. Also, as $|N|$ is finite, there exists $t_0$ such that $I(S_{t_0}(\underline{v}))=I(S_t(\underline{v}))$ for all $t\geqslant t_0$ . Consider $\bar{t}\geqslant t_0+1$ . In the next claim, we show that for all $i\in N$ , $a_i(S_{\bar{t}+1})\geqslant a_i(S_{\bar{t}})$ .
Claim 1. We have $a_i(S_{\bar{t}+1})\geqslant a_i(S_{\bar{t}})$ for all $i\in N$ .
Proof of the claim. Let $\underline{v}_{\bar{t}}=j$ . By the definition of the process, for any other agent i we have $a_i(S_{\bar{t}+1})= a_i(S_{\bar{t}})$ , and hence the claim holds for them. We proceed to show that the claim holds for agent j. Recall that $a_j(\hat{S}_{{\bar{t}}})=a_j(S_{{\bar{t}+1}})$ (see Observation 1). If $j\in I(S_{\bar{t}})$ then $a_j(\hat{S}_{\bar{t}})=1$ (see Observation 3), and hence $a_j(\hat{S}_{\bar{t}})\geqslant a_j(S_{\bar{t}})$ . As $a_j(\hat{S}_{\bar{t}})=a_j(S_{\bar{t}+1})$ , this means $a_j(S_{\bar{t}+1})\geqslant a_j(S_{\bar{t}})$ . If $j\notin I(S_{\bar{t}})$ , by the definition of the process, j will choose their action as $a_j(\hat{S}_{\bar{t}})=b_j(S_{\bar{t}})$ . If $b_j(S_{\bar{t}})=1$ then there is nothing to show. Assume $b_j(S_{\bar{t}}) \lt 1$ . This implies $a_j(\hat{S}_{\bar{t}})=\frac{\tau(j)}{r_j(S_{\bar{t}})}$ . As $j\notin I(S_{\bar{t}})$ , we have $a_j(\hat{S}_{{\bar{t}-1}})r_j(\hat{S}_{\bar{t}-1}) \leqslant \tau(j)$ . Also, as $\bar{t}\geqslant t_0+1$ , it follows that $I(S_{\bar{t}-1})=I(S_{\bar{t}})$ , which implies $\hat{S}_{\bar{t}-1}=S_{\bar{t}}$ (see (iii) of Observation 1) and hence $r_j(\hat{S}_{\bar{t}-1})=r_j(S_{\bar{t}})$ . Combining this with the fact that $a_j(S_{{\bar{t}}})=a_j(\hat{S}_{{\bar{t}-1}})$ , we obtain $a_j(S_{{\bar{t}}}) \leqslant \frac{\tau(j)}{r_j(S_{\bar{t}})}$ . Since $a_j(\hat{S}_{\bar{t}})=\frac{\tau(j)}{r_j(S_{\bar{t}})}$ and $a_j(S_{{\bar{t}}}) \leqslant \frac{\tau(j)}{r_j(S_{\bar{t}})}$ , we have $a_j(\hat{S}_{{\bar{t}}})\geqslant a_j(S_{{\bar{t}}})$ . This completes the proof of the claim.
Since $a_i(S_t) \leqslant 1$ for all $i\in N$ , by Claim 1, we have the convergence of $a_{N}(S_t(\underline{v}))$ .
In view of Lemma 2, we set $S_\infty(\underline{v}) = \lim_{t \rightarrow \infty}S_{t}(\underline{v})$ .
The set ${N}^{\mathbb{N}_0}$ is the set of all agent-sequences indexed by $\mathbb{N}_0$ . We consider the probability space $(N^{\mathbb{N}_0}, \mathcal{F}, \mathbb{P})$ , where $\mathcal{F}$ is the sigma-field generated by the cylindrical sets of ${N}^{\mathbb{N}_0}$ and $\mathbb{P}$ is the probability measure under which the coordinates $\underline{v}_0, \underline{v}_1, \ldots$ are independent and uniformly distributed over N.
Remark 1. Let $N_{\infty}$ be the subset of ${N}^{\mathbb{N}_0}$ consisting of the agent-sequences where each agent moves an infinite number of times. In other words, ${N}_{\infty}= \{\underline{v} \in {N}^{\mathbb{N}_0}\,:\, \underline{v}_t=i \text{ for infinitely many } t, \text{ for all } i \in N\}$ . It is straightforward to see that the set $N_\infty$ has probability 1 under $\mathbb{P}$: for each fixed $i \in N$, the events $\{\underline{v}_t=i\}$, $t \in \mathbb{N}_0$, are independent and each has probability $\frac{1}{n}$, so by the second Borel–Cantelli lemma agent i is chosen infinitely often almost surely; the complement ${N}^{\mathbb{N}_0} \setminus N_\infty$ is therefore a finite union of null sets and has probability 0.
In view of Remark 1, for the rest of the paper, we shall work with the probability space $(N_\infty,\mathcal{F},\mathbb{P})$ . With a slight abuse of notation, we keep using the notation $\mathcal{F}$ for the induced $\sigma$ -field $\mathcal{F}\cap N_\infty$ on $N_\infty$ . Recall that in the stochastic virus spread process (SVSP), before each epoch commences, an agent is chosen randomly, following the discrete uniform distribution on the set N, and they are allowed to update their action by playing their best response (see (2)) to the current state. Consequently, the SVSP is a random variable S defined on the probability space $(N_\infty, \mathcal{F}, \mathbb{P})$ .
For an agent $i \in N$ , the random variable $t_i$ is defined as follows with respect to $(N_\infty, \mathcal{F}, \mathbb{P})$ : for $\underline{v}\in N_\infty$ , we set $t_i(\underline{v})= l$ if $l \in \mathbb{N}_0$ is such that $\underline{v}_l = i$ and $\underline{v}_k \neq i$ for all $k<l$ . Note that for any $\underline{v}\in N_\infty$ , $i\in N$ , and $t\in \mathbb{N}_0$ with $t \leqslant t_i(\underline{v})$ , we have $a_i(S_0)=a_i(S_{t}(\underline{v}))$ . Let $N_1$ be the measurable function on $(N_\infty, \mathcal{F}, \mathbb{P})$ that describes the (random) set of agents who had been chosen before agent 1 was chosen for the first time; that is, $N_1(\underline{v})= \{i\in N \mid t_i(\underline{v}) \lt t_1(\underline{v})\}$ .
We now establish that $|N_1|$ follows the uniform distribution on $\{0,1,\ldots,n-1\}$ , where $|S|$ denotes the cardinality of the set S. Lemma 3 will be used in the proofs of the main results of this paper.
Lemma 3. We have $\mathbb{P}(|N_1|=l) = \frac{1}{n}$ for all $l \in \{0,1,\ldots,n-1\}$ .
Proof. Since $\mathbb{P}$ is uniform and there are $n!$ possible orderings of the random times $t_1,\ldots, t_n$ , each ordering of $t_1,\ldots,t_n$ has an equal probability $\frac{1}{n!}$ of occurring. We can choose $m-1$ random variables from the set $\{t_2,\ldots,t_{n}\}$ of $n-1$ random times in $^{n-1}C_{m-1}$ ways, where $^{n}C_{r}= n!/(r!(n-r)!)$ denotes the number of ways to choose r objects out of n without replacement. Therefore, the number of orderings that correspond to the event $| \{i \in N \mid t_i \lt t_1\}| = m-1$ is $^{n-1}C_{m-1} \times (m-1)! \times (n-m)! $ , and so the probability of said event is
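$$\frac{^{n-1}C_{m-1} \times (m-1)! \times (n-m)!}{n!},$$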
which equals $\frac{1}{n}$ . This completes the proof of the lemma.
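As an illustrative check of Lemma 3 (not needed anywhere in the arguments), one can estimate the distribution of $|N_1|$ by drawing uniform agent choices until agent 1 appears for the first time; the sketch below is ours.

```python
import random
from collections import Counter

def sample_N1_size(n):
    """Number of distinct agents (other than agent 1) chosen before agent 1
    is chosen for the first time, with agents drawn uniformly from {1,...,n}."""
    seen = set()
    while True:
        v = random.randint(1, n)
        if v == 1:
            return len(seen)
        seen.add(v)

n, trials = 5, 100_000
freq = Counter(sample_N1_size(n) for _ in range(trials))
for l in range(n):
    print(l, freq[l] / trials)   # each relative frequency should be close to 1/n
```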
3. Main results
Before stating our main results, we formally state the assumptions for all the main results of the paper. As mentioned in Section 1, here we consider a homogeneous population and study the spread of an epidemic. To be specific, the homogeneity of the population is imposed through the following assumptions:
-
• All individuals have the same action at the beginning of the epidemic (that is, before they started deciding their actions strategically in response to the present state of the epidemic).
-
• All individuals have the same immunity power.
-
• Every pair of individuals has the same level of interaction.
We also intend to study the spread of an epidemic from the very beginning; to capture this, we assume the following:
-
• Exactly one individual is infected at the beginning of the epidemic.
Formally, these assumptions are stated as follows (recall that $S_0$ denotes the initial state at which the process starts, and $a_i(S_0)$ and $I(S_0)$ denote the action of individual i and the set of infected people, respectively, at the state $S_0$ ):
-
(i) $a_i(S_0)=a$ , $\tau(i)=\tau$ , and $g_{ij}=c$ for all $i, j\in N$ with $i \neq j$ , for some $a \in [0,1]$ , $c \in \mathbb{R}^+$ , $\tau \in (0,1]$ ;
-
(ii) $I(S_0)=\{1\}$ .
We will adopt the above assumptions throughout the paper, without specifically mentioning them every time.
A natural question is how realistic these assumptions are. From a technical point of view, the case in which every individual’s initial action and immunity power may differ is difficult to characterize. Moreover, if the action (and immunity) profiles were random, it would be natural to assume that their distributions are identical across the population; our homogeneity assumption can thus be viewed as the special case in which these identical distributions are degenerate. The assumption of a single infected individual at the beginning is also quite realistic, as that is exactly how contagious diseases start to spread. Our intuition is that the presence of more than one infected individual at the beginning will, with higher probability, lead to more infected people in the limit, and such cases can be analyzed using techniques similar to those used in this paper. Since, to the best of our knowledge, ours is the first paper to study a stochastic process of this kind, we have made the above assumptions in order to keep our findings concise and to highlight the most important insights they yield.
We adopt the following notation to state our results. For $a\in \mathbb{R}$ , we let $\lceil a\rceil=\min\{k \in \mathbb{Z}\,:\, a \leqslant k\}$ and $\lfloor a\rfloor=\max\{k \in \mathbb{Z}\,:\, a\geqslant k\}$ . For $a,b\in \mathbb{N}_0$ , [a,b] denotes the set $\{a,a+1,\ldots,b\}$ if $a \leqslant b$ , and denotes the null set if $a \gt b$ . For $m\geqslant \lfloor \tau(n-1)\rfloor +1$ , we define the set $A_m$ to be the set of all ordered tuples $\vec{x}=(x_1,\ldots,x_n)$ that satisfy the following properties:
-
(1) $x_1=1$ ,
-
(2) there are precisely $m-1$ coordinates $i \in \{2, \ldots, n\}$ such that $x_{i} = 1$ , and
-
(3) each of the remaining coordinates equals $\frac{\tau m}{(1+\tau)m-\tau(n-1)} \lt 1$ . The last inequality follows since $m>\tau(n-1).$
A couple of facts follow immediately from the above definition. The first is that $A_{n}$ is the singleton set $\{\vec{1}\}$ , where $\vec{1}$ is the n-dimensional tuple in which each coordinate equals 1. The second is that $|A_m| = ^{n-1}C_{m-1}$ for each m for which $A_m$ is well-defined, since we need only choose the $m-1$ coordinates out of $2, \ldots, n$ that equal 1.
3.1. Results when $a=0$
Here we consider the situation in which the (common) initial action a equals 0. Theorem 1 provides the limiting distribution of the infected set for arbitrary values of $\tau$ . Let
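$$\alpha = \min\left\{\left\lceil \frac{1}{\tau} \right\rceil,\, n\right\}. \qquad (3)$$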
Theorem 1. Suppose $a=0$ . Then the limiting distribution of the infected set is given by
where $\alpha$ is as defined in (3).
Proof. We complete the proof in two steps. In Step 1, we explore how the infection spreads when agents update their actions according to a fixed agent sequence, and in Step 2 we use this to explore how infection spreads when agents update their actions randomly.
Step 1. Fix an agent sequence $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$ . To shorten notation, for all $i \in N$ , let us denote $t_i(\underline{v})$ by $k_i$ . The following claim demonstrates how an agent i with $k_i<k_1$ will update their action.
Claim 1. Suppose $k_i<k_1$ for some $i\in N$ . Then $a_i(S_t)=1$ for all $t=k_i+1,\ldots, k_1$ .
Proof of the claim. By Lemma 7, $I(S_t)=\{1\}$ for all $t \leqslant k_1$ . Since $k_1 < \infty$ and $k_i<k_1$ , we have $k_i<\infty$ . Consider any time point l such that $k_i \leqslant l< k_1$ . By the definition of the process, we need to show that the claim holds for l such that $\underline{v}_{l}=i$ (see Observation 2). Since $l< k_1$ , we have $a_1(S_{l})=a_1(S_0)=0$ . This together with $I(S_{l})=\{1\}$ implies $r_i(S_{l})=0$ . Hence, by Lemma 1, agent i will update their action to 1 at $\hat{S}_{l}$ . Since $a_i(\hat{S}_l)=a_i(S_{l+1})$ , this means $a_i(S_{l+1})=1$ . This completes the proof of the claim.
Case 1: $|N_1(\underline{v})|\geqslant \alpha$ .
As $|N_1(\underline{v})| \leqslant n-1$ , the assumption of the case implies $\alpha=\left \lceil\frac{1}{\tau}\right \rceil$ . Hence, $\alpha\tau\geqslant 1$ . By Claim 1, $a_i(S_{k_1})=1$ for all $i\in N_1(\underline{v})$ . Also, by the definition of the process, $a_i(S_{k_1})=0$ for all $i\notin N_1(\underline{v})\cup \{1\}$ , as these agents have not updated their actions till the time point $k_1$ . Recall that $\hat{S}_{k_1}$ denotes the intermediate state whose only difference from $S_{k_1}$ is that agent $\underline{v}_{k_1}$ has updated their action to $b_{\underline{v}_{k_1}}(S_{k_1})$ . Since $\underline{v}_{k_1}=1$ , we have $a_i(S_{k_1})=a_i(\hat{S}_{k_1})$ for all $i\neq 1$ . Thus, $a_i(\hat{S}_{k_1})=1$ for all $i\in N_1(\underline{v})$ and $a_i(\hat{S}_{k_1})=0$ for all $i\notin N_1(\underline{v})\cup \{1\}$ .
By Lemma 1 and the definition of the process, $a_1(\hat{S}_{k_1})=1$ . Consider the time point $k_1+1$ . By the definition of the process, an agent $i\neq 1$ will be in $I(S_{k_1+1})$ if $a_i(\hat{S}_{k_1})r_i(\hat{S}_{k_1})>\tau$ . Since $I(S_{k_1})=\{1\}$ , $a_i(\hat{S}_{k_1})=1$ for all $i\in N_1(\underline{v}) \cup \{1\}$ , $a_i(\hat{S}_{k_1})=0$ for all $i\notin N_1(\underline{v})\cup \{1\}$ , and $g_{ij}=c$ for all $i\neq j$ , it follows that $r_i(\hat{S}_{k_1}) \leqslant \frac{1}{\alpha}$ for all $i\in N_1(\underline{v})$ . Because $\alpha\tau\geqslant 1$ , this implies that no agent in $N_1(\underline{v})$ gets infected at the time point $k_1+1$ . Moreover, since $a_i(\hat{S}_{k_1})=0$ for each agent $i\notin N_1(\underline{v})\cup \{1\}$ , we have $a_i(\hat{S}_{k_1})r_i(\hat{S}_{k_1})=0 \leqslant \tau$ . Thus, no new agent gets infected at the time point $k_1+1$ , and hence $I(S_{k_1+1})=\{1\}$ .
We show that no new agent will get infected after this. We first show that $I(S_{k_1+2})=\{1\}$ . Let $\underline{v}_{k_1+1}=i$ . If $i\notin I(S_{k_1+1})$ , then as $I(S_{k_1})=I(S_{k_1+1})$ by Lemma 6, we have $I(S_{k_1+1})=I(S_{k_1+2})$ . If $i\in I(S_{k_1+1})$ then $i=1$ . Moreover, $a_1(S_{k_1+1})=a_1(\hat{S}_{k_1})=1$ . Hence, by Lemma 6, $I(S_{k_1+1})=I(S_{k_1+2})$ . Therefore, $I(S_{k_1+2})=\{1\}$ . Using the same arguments repeatedly, it follows that $I(S_t)=\{1\}$ for all $t\geqslant k_1+2$ . Thus, $I(S_\infty)=\{1\}$ .
Case 2: $|N_1(\underline{v})| \leqslant \alpha-1$ .
Using similar arguments as in Case 1, we have $a_i(\hat{S}_{k_1})=1$ for all $i\in N_1(\underline{v})$ and $a_i(\hat{S}_{k_1})=0$ for all $i\notin N_1(\underline{v})\cup \{1\}$ . This, together with $g_{ij}=c$ for all $i\neq j$ , implies $r_i(\hat{S}_{k_1})\geqslant\frac{1}{\alpha-1}$ for all $i\in N_1(\underline{v})$ . As $\alpha=\min\left\{\left \lceil\frac{1}{\tau}\right \rceil,n\right\}$ , we have $(\alpha-1)\tau \lt1$ . Hence, all agents in $N_1(\underline{v})$ will get infected at time point $k_1+1$ . Moreover, as $a_i(\hat{S}_{k_1})=0$ for all $i\notin N_1(\underline{v})\cup \{1\}$ , the agents outside $N_1(\underline{v}) \cup \{1\}$ will not get infected at time point $k_1+1$ . Thus, we have $I(S_{k_1+1})=N_1(\underline{v})\cup \{1\}$ . Because $a_i(S_{k_1+1})=a_i(\hat{S}_{k_1})=1$ for all $i\in I(S_{k_1+1})$ and $a_i(S_{k_1+1})=0 \leqslant \tau$ for all $i\notin I(S_{k_1+1})$ , by Lemma 8 it follows that $I(S_{k_1+1})=I(S_\infty)$ . Hence, $I(S_\infty)=N_1(\underline{v})\cup \{1\}$ .
Step 2. Consider the probability space $(N_\infty, \mathcal{F}, \mathbb{P})$ and random variables S and $t_1,\ldots, t_n$ . Let $m \in \{2,\ldots, n\}$ be such that $m \leqslant \alpha$ . In view of Case 1 and Case 2 in the current proof, we have (i) $|I(S_\infty)| \leqslant \alpha$ , and (ii) $|I(S_\infty)| = m$ with $1\in I(S_\infty)$ if and only if $| \{i \in N \mid t_i \lt t_1\}| = m-1$ . Also, $I(S_\infty) = \{1\}$ if and only if $|\{i \in N \mid t_i \lt t_1\}|\geqslant\alpha$ or $|\{i \in N \mid t_i \lt t_1\}| = 0$ . Moreover, as $\mathbb{P}$ is uniform, any two subsets of $N \setminus \{1\}$ with the same cardinality are equally likely to equal $N_1(\underline{v})$ . These observations, together with Lemma 3, yield precisely the distribution displayed in the statement of the theorem.
This completes the proof of the theorem.
Next we explore the limiting distribution of the action profile, and this is found to be dependent on the value of $\tau$ . Accordingly, the statement of Theorem 2 is split into two parts on the basis of whether $\tau$ exceeds $(n-1)^{-1}$ or not. We introduce the quantity
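$$\beta = \min\left\{\lfloor (n-1)\tau \rfloor + 1,\; \alpha+1\right\}. \qquad (4)$$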
Theorem 2. Suppose $a=0$ . For $\tau\geqslant \frac{1}{n-1}$ , the limiting distribution of the action profile is given by
whereas for $\tau \lt \frac{1}{n-1}$ , the limiting distribution of the action profile is given by
with $\alpha$ and $\beta$ as defined in (3) and (4) respectively.
A brief discussion is in order regarding some of the startling findings that may be deduced from the two theorems of Section 3.1. Theorem 1 reveals that if we consider any two different subsets $J_{1}$ and $J_{2}$ of N, the probabilities $\mathbb{P}(I(S_{\infty}) = J_{1})$ and $\mathbb{P}(I(S_{\infty}) = J_{2})$ are the same as long as $J_{1}$ and $J_{2}$ have the same cardinality and either both of them contain agent 1 or neither contains agent 1. We note that the number of subsets J of N with $1 \in J$ and $|J| = m$ is given by $^{n-1}C_{m-1}$ , so that summing $\mathbb{P}(I(S_{\infty}) = J)$ over all such J yields $\mathbb{P}\left(\left|I(S_{\infty})\right| = m\right) = n^{-1}$ for each $m \in [2,\alpha]$ . These observations suggest a rather close resemblance between the limiting distribution of the infected set, as well as the limiting distribution of its cardinality, and certain suitably defined discrete uniform distributions. In fact, for $\tau \leqslant n^{-1}$ , we have $\alpha = n$ , reducing the distribution of $|I(S_{\infty})|$ to precisely the discrete uniform distribution on $\{ 1,2,\ldots, n \}$ . This uniform structure is somewhat marred when $\tau \gt n^{-1}$ . For example, when $n=5$ , $a=0$ , and $\tau= 0.25$ , we have
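$$\mathbb{P}\left(\left|I(S_{\infty})\right| = 1\right) = \frac{2}{5}, \qquad \mathbb{P}\left(\left|I(S_{\infty})\right| = m\right) = \frac{1}{5} \;\text{ for } m \in \{2,3,4\},$$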
whereas if $\tau=0.4$ , the probability distribution changes to
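$$\mathbb{P}\left(\left|I(S_{\infty})\right| = 1\right) = \frac{3}{5}, \qquad \mathbb{P}\left(\left|I(S_{\infty})\right| = m\right) = \frac{1}{5} \;\text{ for } m \in \{2,3\}.$$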
An intuitive explanation for this phenomenon is that with higher immunity, i.e. higher values of $\tau$ , the disease is less likely to spread to the entire community, instead having a higher probability of remaining confined to the initial infected set.
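The example above can also be checked by simulation, in the spirit of Section 6. The sketch below is ours; it assumes that the helpers `viral_exposure`, `best_response`, and `svsp_epoch` from the sketch in Section 2 are in scope, and it estimates the distribution of $\left|I(S_{\infty})\right|$ for $n=5$, $a=0$, and $\tau=0.25$ (the common interaction weight c cancels in the exposures and is set to 1).

```python
from collections import Counter

def limiting_infected_size(n=5, tau_val=0.25, a0=0.0, epochs=400):
    # Homogeneous instance: g_ij = 1 off the diagonal, common immunity tau,
    # common initial action a0, and agent 0 playing the role of agent 1.
    g = [[0.0 if i == j else 1.0 for j in range(n)] for i in range(n)]
    tau = [tau_val] * n
    infected, actions = {0}, [a0] * n
    for _ in range(epochs):                  # far more epochs than needed to settle
        infected, actions = svsp_epoch(infected, actions, g, tau)
    return len(infected)

trials = 20_000
freq = Counter(limiting_infected_size() for _ in range(trials))
for m in range(1, 6):
    # Expected limiting frequencies (see the displays above): about 0.4 for
    # m = 1 and about 0.2 each for m = 2, 3, 4.
    print(m, freq[m] / trials)
```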
Conclusions of a similar flavor can be drawn as a consequence of Theorem 2. For any two different ordered tuples $\vec{x}$ and $\vec{y}$ that belong to the same $A_m$ , the probabilities $\mathbb{P}(a_{N}(S_{\infty}) = \vec{x})$ and $\mathbb{P}(a_{N}(S_{\infty}) = \vec{y})$ are equal, for both the cases $\tau \geqslant (n-1)^{-1}$ and $\tau < (n-1)^{-1}$ . Moreover, since $|A_m| = ^{n-1}C_{m-1}$ , we obtain $\mathbb{P}(a_{N}(S_{\infty}) \in A_m) = n^{-1}$ for every $m \in [\beta,\alpha]$ when $\tau \geqslant (n-1)^{-1}$ and for every $m \in [1,n]$ when $\tau \lt (n-1)^{-1}$ . These are, once again, reminiscent of suitably defined discrete uniform distributions.
A connection may be established between Theorem 1 and Theorem 2, for the case where $\tau \geqslant (n-1)^{-1}$ , via the following fact, whose justification has been included in the proof of Theorem 2: for any DVSP $S(\underline{v})$ , if the limiting infected set has cardinality $m\in [\beta,\alpha]$ (note that $\beta\geqslant 2$ ), the limiting action profile will be a tuple in $A_m$ , with all infected agents choosing action 1 and all uninfected agents choosing action $\tau m[(1+\tau)m-\tau(n-1)]^{-1}$ . On the other hand, if the limiting infected set for the DVSP $S(\underline{v})$ has cardinality strictly less than $\beta$ , the final action profile becomes $\vec{1}$ , signifying that all agents choose action 1 in the long run.
Proof. First assume $\tau\geqslant \frac{1}{n-1}$ . We first explore the limiting actions for a fixed agent sequence, and then we use this to find the limiting probability distribution. Let $\underline{v}$ be an agent sequence and S the DVSP induced by $\underline{v}$ . Note that by Remark 1, it is enough to assume $\underline{v}\in N_\infty$ . Therefore, by Lemma 10, all the agents outside $I(S_\infty)$ have the same action limit, and all the agents in $I(S_\infty)$ have the action limit 1. Let us denote the common limit for all agents outside $I(S_\infty)$ by $\gamma$ . We distinguish two cases based on the value of $N_1(\underline{v})$ (as in the proof of Theorem 1) to find $\gamma$ . Note that by the assumption of the theorem $\alpha \leqslant n-1$ .
Case 1: $|N_1(\underline{v})|\geqslant \alpha$ .
Recall that for this case the final infected set is $\{1\}$ . Moreover, by the assumption of the theorem, $\tau(n-1)\geqslant 1$ . Therefore, by Lemma 11, $\gamma=1$ . Hence, $a_{N}(S_\infty)=\vec{1}$ .
Case 2: $|N_1(\underline{v})| \leqslant \alpha-1$ .
Recall that for this case, the final infected set is $N_1(\underline{v})\cup \{1\}$ . Therefore, by Lemma 11, if $(n-1)\tau\geqslant |N_1(\underline{v})|+1$ then $a_{N}(S_\infty)=\vec{1}$ , and if $(n-1)\tau \lt|N_1(\underline{v})|+1$ then
Recall that $\beta=\min \{\lfloor(n-1)\tau \rfloor+1, \alpha+1\}$ . Hence, combining Cases 1 and 2, we have the following:
-
(i) $|N_1(\underline{v})|+1\in [\beta,\alpha]$ implies
-
(ii) $|N_1(\underline{v})|+1\in [1,\beta-1]\cup [\alpha+1,n]$ implies $a_{N}(S_\infty)=\vec{1}$ .
Note that (i) implies $a_{N}(S_\infty)\in A_{(|N_1(\underline{v})|+1)}$ when $|N_1(\underline{v})|+1\in [\beta,\alpha]$ . Also, as $\mathbb{P}$ is uniform, any two vectors in $A_m$ , for $m\in [\beta,\alpha]$ , have the same probability. Thus, we have the following distribution:
Now assume $\tau \lt \frac{1}{n-1}$ . We follow the same structure as in the previous case; that is, we first explore the limiting actions for a fixed agent sequence, and then we use this to find the limiting probability distribution. Let $\underline{v}$ be an agent sequence and S the DVSP induced by $\underline{v}$ . Note that by Remark 1, it is enough to assume $\underline{v}\in N_\infty$ . Therefore, by Lemma 10, all the agents outside $I(S_\infty)$ have the same action limit, and all the agents in $I(S_\infty)$ have the action limit 1. Let us denote the common limit by $\gamma$ . As $(n-1)\tau \lt1$ by the assumption of the theorem, we have $\alpha=n$ , and hence $|N_1(\underline{v})| \leqslant \alpha-1$ . Moreover, for $|N_1(\underline{v})| \leqslant \alpha-1$ (shown in the proof of Theorem 1), the final infected set is $N_1(\underline{v})\cup \{1\}$ . Thus, $|I(S_\infty)|>(n-1)\tau$ . Hence, by Lemma 11, if $|N_1(\underline{v})|+1<n$ , then
and if $|N_1(\underline{v})|+1=n$ , then $a_{N}(S_\infty)=\vec{1}$ . Recall the notation $A_m$ . By the above arguments, we have $a_{N}(S_\infty)\in A_{[|N_1(\underline{v})|+1]}$ when $|N_1(\underline{v})|+1<n$ . Moreover, as $\mathbb{P}$ is uniform, any two vectors in $A_m$ , for $m\in [1,(n-1)]$ , have the same probability. Thus, by Theorem 1, we have the following distribution:
3.2. Results when $ \boldsymbol{a}=1$
Here we consider the situation where the (common) initial action a equals 1. The following theorem provides the limiting distribution of the set of infected agents.
Theorem 3. Suppose $a=1$ . If $\tau\geqslant \frac{1}{n-1}$ , the limiting distribution of the infected set is given by
whereas if $\tau \lt \frac{1}{n-1}$ , the limiting distribution is given by
The proof of this theorem can be found in Appendix B (Subsection B.1). Note that since there are $n-1$ sets J such that $1\in J \text{ and } |J|=n-1$ , the above display exhibits a valid probability distribution.
Theorem 4. Suppose $a=1$ . If $\tau\geqslant \frac{1}{n-1}$ , the limiting distribution of the action profile is given by
whereas if $\tau \lt \frac{1}{n-1}$ , the limiting distribution is given by
The proof of this theorem can be found in Appendix D (Subsection D.1). We draw the reader’s attention to the fact that the results of Section 3.2 differ quite a bit in appearance from those of Section 3.1. While the limiting distribution of the infected set, for $a=0$ , is supported on all subsets of N that contain 1 and that have sizes bounded above by $\alpha$ (Theorem 1), the infected set, for $a = 1$ , converges to the singleton $\{1\}$ when $\tau \geqslant (n-1)^{-1}$ , and its limiting distribution is supported on only those subsets of N that contain 1 and have cardinality at least $n-1$ when $\tau \lt (n-1)^{-1}$ (Theorem 3). In some sense, for $a=0$ , the limiting distribution is ‘spread out’ over wider support, while for $a=1$ it is more ‘concentrated’.
Likewise, for $a = 0$ , the limiting distribution of the action profile is supported on all $A_m$ with $m \in \{n\} \cup [\beta,\alpha]$ when $\tau \geqslant (n-1)^{-1}$ , and it is supported on all $A_m$ with $m \in \{1, \ldots, n\}$ when $\tau \lt (n-1)^{-1}$ (Theorem 2). In contrast, for $a=1$ , the action profile converges to $\vec{1}$ when $\tau \geqslant (n-1)^{-1}$ , and the limiting distribution of the action profile is supported on just $A_{n} \cup A_{n-1}$ when $\tau \lt (n-1)^{-1}$ (Theorem 4).
3.3. Results when $0 \lt \boldsymbol{a} \boldsymbol{\leqslant} \tau$ and $\boldsymbol{a}\neq 1$
In this subsection, we consider the case where the (common) initial action a lies strictly between 0 and 1, and is bounded above by $\tau$ . Let
Theorem 5. Suppose $0 \lt a \leqslant \tau$ and $a\neq 1$ . Furthermore, suppose $\tau\geqslant \frac{1}{n-1}$ . Then the limiting distribution of the infected set is given by
The proof of this theorem can be found in Appendix B (Subsection B.2). Next, we describe the limiting distribution of the action profile. We introduce the following notation in order to state our next result:
Theorem 6. Suppose $0 \lt a \leqslant \tau$ and $a\neq 1$ . Furthermore, suppose $\tau\geqslant \frac{1}{n-1}$ . Then the limiting distribution of the action profile is given by
The proof of the theorem can be found in Appendix D (Subsection D.2).
Remark 2. If one sets $a=0$ in the conclusion of Theorem 5, one gets back the conclusion of Theorem 1 for $\tau\geqslant (n-1)^{-1}$ . However, Theorem 1 is more general in terms of its coverage of the values of $\tau$ . In a similar manner, setting $a=0$ in Theorem 6 yields Theorem 2 for the case of $\tau\geqslant (n-1)^{-1}$ .
Discussions of a flavor similar to those in Section 3.1 can be included here as well. Even if $J_{1}$ and $J_{2}$ are two different subsets of N, Theorem 5 shows that the probabilities $\mathbb{P}(I(S_{\infty})=J_{1})$ and $\mathbb{P}(I(S_{\infty})=J_{2})$ are the same as long as $J_{1}$ and $J_{2}$ have the same cardinality and either both contain 1 or neither does. Summing over all subsets of N that contain 1 and are of cardinality m, we obtain $\mathbb{P}\left(\left|I(S_{\infty})\right|=m\right) = n^{-1}$ for each $2 \leqslant m \leqslant \hat{\alpha}$ . Likewise, for any two different ordered tuples $\vec{x}$ and $\vec{y}$ , Theorem 6 shows that the probabilities $\mathbb{P}(a_{N}(S_\infty)=\vec{x})$ and $\mathbb{P}(a_{N}(S_\infty)=\vec{y})$ are the same as long as both $\vec{x}$ and $\vec{y}$ belong to the same $A_m$ . Summing over all members of an $A_m$ yields $\mathbb{P}(a_{N}(S_\infty)\in A_m) = n^{-1}$ for every $\hat{\beta} \leqslant m \leqslant \hat{\alpha}$ .
3.4. Results when $\tau \lt \boldsymbol{a} \lt 1$
In this subsection, we consider the scenario where the (common) initial action a is strictly greater than $\tau$ . We introduce the following notation to facilitate the presentation of the subsequent results. Let
Note that, in this regime, $\tau \lt a \lt a/(1-a)$ and thus $a-\tau(1-a)>0$. Now, since $(n-1)a\tau/(a-\tau(1-a))$ is increasing in $\tau$, we have, for $\tau \geqslant 1/(n-1)$,
This yields $\bar{\alpha} \geqslant 2$ .
Theorem 7. Suppose $\frac{1}{n-1} \leqslant \tau \lt a \lt 1$ . If $\tilde{\alpha}+1 \leqslant \bar{\alpha}$ , the limiting distribution of the infected set is given by
whereas if $2 \leqslant \bar{\alpha} \lt \tilde{\alpha}+1$ , the limiting distribution is given by
where
and $\left\{\begin{smallmatrix} p \\ q \end{smallmatrix} \right\}$ is the Stirling number of the second kind with parameters p and q.
Proof. We start with a lemma that shows that, for an agent sequence, the infected set remains the same till agent 1 appears for the first time.
Lemma 4. Let $\underline{v}\in N_\infty$ and $\hat{t}\in \mathbb{N}_0$ be such that $\underline{v}_t\neq 1$ for all $t<\hat{t}$ . Then $I(S_t)=\{1\}$ for all $t \leqslant \hat{t}$ .
Proof. Note that if $\hat{t}=0$ then there is nothing to show. So assume $\hat{t}\geqslant 1$ . We use induction to prove the statement. As the base case, we show that $I(S_1)=\{1\}$ . Let $\underline{v}_0=i$ . Since $\hat{t}\geqslant 1$ , $i\neq 1$ . Moreover, as $g_{ij}=c$ for all $i\neq j$ , $r_i(S_0)=\frac{1}{(n-1)}$ . Hence,
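$$a_i(\hat{S}_0)\, r_i(\hat{S}_0) \leqslant r_i(\hat{S}_0) = r_i(S_0) = \frac{1}{n-1} \leqslant \tau,$$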
as by our assumption $\tau\geqslant \frac{1}{(n-1)}$ . This means agent i will not get infected. For any $j\notin \{1,i\}$ , $a_j(\hat{S}_0)=a_j(S_0)=a$ and $r_j(\hat{S}_0)=\frac{a}{(n-2)a+1} \leqslant \frac{1}{(n-1)}$ . Thus,
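$$a_j(\hat{S}_0)\, r_j(\hat{S}_0) = \frac{a^2}{(n-2)a+1} \leqslant \frac{a}{n-1} \leqslant \frac{1}{n-1} \leqslant \tau.$$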
So agent j will also not get infected at $t=1$ . Thus, $I(S_1)=\{1\}$ .
Next, we introduce the following induction hypothesis: Given $\bar{t}\in \mathbb{N}_0$ with $\hat{t}\geqslant \bar{t}>1$ , we have $I(S_1)=\cdots=I(S_{\bar{t}-1})=\{1\}$ .
We now show that $I(S_{\bar{t}})=\{1\}$ . Let $\underline{v}_{\bar{t}-1}=i$ . Since $\hat{t}\geqslant \bar{t}$ , this means $i\neq 1$ . Hence, $i\notin I(S_{\bar{t}-1})$ . As $\bar{t} \gt 1$ , we have $I(S_{\bar{t}-2})=I(S_{\bar{t}-1})$ . This, together with Lemma 6, implies $I(S_{\bar{t}})=I(S_{\bar{t}-1})=\{1\}$ . Thus, by induction, we have $I(S_{\hat{t}})=\{1\}$ . This completes the proof of the lemma.
We complete the proof of the theorem in two steps. In Step 1, we explore how the infection spreads when agents update their actions according to a fixed agent sequence, and in Step 2 we use this to explore how infection spreads when agents update their actions randomly.
Step 1: Fix an agent sequence $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$ . To shorten notation, for all $i \in N$ , let us denote $t_i(\underline{v})$ by $k_i$ .
Claim 1. For all $0 \leqslant t \lt k_1$ , $a_i(S_{t+1})=1$ where $\underline{v}_t=i$ .
Proof of the claim. Let $\underline{v}_0=i$ . As $k_1>0$ , $i\neq 1$ . Since $a_j(S_0)=a>0$ for all $j\in N$ , $I(S_0)=\{1\}$ , and $g_{ij}=c$ for all $i\neq j$ , we have $r_i(S_0)=\frac{1}{(n-1)}$ . This means
as by our assumption $\tau \geqslant \frac{1}{(n-1)}$ . Thus, $a_i(S_1)=a_i(\hat{S}_0)=1$ .
Next, we introduce the following induction hypothesis: Given $\bar{t}\in \mathbb{N}_0$ with $\bar{t} \lt k_1$ , we have for all $t<\bar{t}$ , $a_j(S_{t+1})=1$ where $\underline{v}_t=j$ .
Now, let $\underline{v}_{\bar{t}}=i^{\prime}$ ; we show that $a_{i^{\prime}}(S_{\bar{t}+1})=1$ . Note that by Lemma 4, $I(S_{\bar{t}})=\{1\}$ . Moreover, by the induction hypothesis, $a_j(S_{\bar{t}})\geqslant a$ for all $j\in N\setminus \{1\}$ . Also, as $\bar{t} \lt k_1$ , we have $a_1(S_{\bar{t}})=a$ . Combining all these observations, we have
Since $r_{i^{\prime}}(S_{\bar{t}})>0$ ,
see Lemma 1. Therefore, using (10) and the fact that $\tau \geqslant \frac{1}{(n-1)}$ , we have $b_{i^{\prime}}(S_{\bar{t}})=1$ . Thus, $a_{i^{\prime}}(S_{\bar{t}+1})=a_{i^{\prime}}(\hat{S}_{\bar{t}})=1$ .
We distinguish some cases based on $|N_1(\underline{v})|$ .
Case 1: $|N_1(\underline{v})|\geqslant \tilde{\alpha}$ .
We show that no new agent will get infected and $I(S_\infty)=\{1\}$. By Claim 1, $a_i(S_{k_1})=1$ for all $i\in N_1(\underline{v})$. By the definition of the process, $a_i(S_{k_1})=a$ for all $i\notin N_1(\underline{v})\cup \{1\}$, as these agents have not updated their actions till the time point $k_1$. Recall that $\hat{S}_{k_1}$ denotes the intermediate state whose only difference from $S_{k_1}$ is that agent $\underline{v}_{k_1}$ has updated their action to $b_{\underline{v}_{k_1}}(S_{k_1})$. Since $\underline{v}_{k_1}=1$, we have $a_i(S_{k_1})=a_i(\hat{S}_{k_1})$ for all $i\neq 1$. Thus, $a_i(\hat{S}_{k_1})=1$ for all $i\in N_1(\underline{v})$ and $a_i(\hat{S}_{k_1})=a$ for all $i\notin N_1(\underline{v})\cup \{1\}$.
By Lemma 1 and the definition of the process, $a_1(\hat{S}_{k_1})=1$ . Consider the time point $k_1+1$ . By the definition of the process, an agent $i\neq 1$ will be in $I(S_{k_1+1})$ if $a_i(\hat{S}_{k_1})r_i(\hat{S}_{k_1})>\tau$ . Since $I(S_{k_1})=\{1\}$ , $a_i(\hat{S}_{k_1})=1$ for all $i\in N_1(\underline{v}) \cup \{1\}$ , $a_i(\hat{S}_{k_1})=a$ for all $i\notin N_1(\underline{v})\cup \{1\}$ , and $g_{ij}=c$ for all $i\neq j$ , it follows that for all $i\in N_1(\underline{v})$ ,
and
Recall that by the assumption of the case, $|N_1(\underline{v})|\geqslant \tilde{\alpha}$ . This together with the fact that
implies
Combining (11) and (12), we may conclude that agent i will not be infected at the time point $k_1+1$. Similar arguments show that any agent $j\notin N_1(\underline{v})\cup \{1\}$ will not be infected at the time point $k_1+1$. Hence, $I(S_{k_1+1})=\{1\}$.
We show that no new agent will get infected after this. We first show that $I(S_{k_1+2})=\{1\}$ . Let $\underline{v}_{k_1+1}=i$ . If $i\notin I(S_{k_1+1})$ , then as $I(S_{k_1})=I(S_{k_1+1})$ by Lemma 6, we have $I(S_{k_1+1})=I(S_{k_1+2})$ . If $i\in I(S_{k_1+1})$ then $i=1$ . Moreover, $a_1(S_{k_1+1})=a_1(\hat{S}_{k_1})=1$ . Hence, by Lemma 6, $I(S_{k_1+1})=I(S_{k_1+2})$ . Therefore, $I(S_{k_1+2})=\{1\}$ . Using the same arguments repeatedly, it follows that $I(S_t)=\{1\}$ for all $t\geqslant k_1+2$ . Thus, $I(S_\infty)=\{1\}$ .
Case 2: $|N_1(\underline{v})| \leqslant \tilde{\alpha}-1$ .
In the following claim, we show that at time point $k_1+1$ , the infected set is $N_1(\underline{v})\cup \{1\}$ .
Claim 2. We have $I(S_{k_1+1})=N_1(\underline{v})\cup \{1\}$ .
Proof of the claim. Recall that $\tilde{\alpha}=\max\left\{1, \left\lceil \frac{1-(n-1)a\tau}{\tau(1-a)}\right\rceil\right\}$. First assume $\tilde{\alpha}\neq \left\lceil \frac{1-(n-1)a\tau}{\tau(1-a)}\right\rceil$, i.e., $\tilde{\alpha}=1$ and $1-(n-1)a\tau \leqslant 0$. This, together with the assumption of the case, implies $|N_1(\underline{v})|=0$. Therefore, $k_1=0$. Hence, to prove the claim, it is enough to show that $I(S_1)=\{1\}$. Note that by the definition of the process, $a_1(\hat{S}_0)=1$, $a_i(\hat{S}_{0})=a$ for all $i\neq 1$, and $g_{ij}=c$ for all $i\neq j$. Thus,
This implies $I(S_{1})=\{1\}$ . Now assume $\tilde{\alpha}=\left\lceil \frac{1-(n-1)a\tau}{\tau(1-a)}\right\rceil$ . Consider an agent $i\in N_1(\underline{v})$ . Using similar arguments as in (11), we may show that
This, together with the fact that $\tilde{\alpha}=\left\lceil \frac{1-(n-1)a\tau}{\tau(1-a)}\right\rceil$ and $|N_1(\underline{v})| \leqslant \tilde{\alpha}-1$, implies $a_i(\hat{S}_{k_1})r_i(\hat{S}_{k_1})>\tau$, and hence agent i will get infected at the time point $k_1+1$. For any $j\notin N_1(\underline{v})\cup \{1\}$,
and
Hence agent j gets infected at $k_1+1$ if
But this does not hold, as $\frac{a-\tau(n-1)a}{\tau(1-a)} \leqslant 0$ and $|N_1(\underline{v})|\geqslant 0$. So agent j does not get infected at $k_1+1$. Thus, $I(S_{k_1+1})=N_1(\underline{v})\cup \{1\}$. This completes the proof of the claim.
We now determine the final infected set. To do so we consider two sub-cases based on the value of $|N_1(\underline{v})|$ .
Case 2.1: $|N_1(\underline{v})|+1< \bar{\alpha}$ .
We show that no new agent will get infected after $k_1+1$ . We first show that $I(S_{k_1+2})=N_1(\underline{v})\cup \{1\}$ . Let $\underline{v}_{k_1+1}=i$ . If $i\in I(S_{k_1+1})$ then $i\in N_1(\underline{v})\cup \{1\}$ . Moreover, $a_i(S_{k_1+1})= a_i(\hat{S}_{k_1})=1$ . Hence, by Lemma 6, $I(S_{k_1+1})=I(S_{k_1+2})$ . If $i\notin I(S_{k_1+1})$ then since
agent i will choose $\min\left\{1,\frac{\tau}{r_i(S_{k_1+1})}\right\}$ as their action $a_i(\hat{S}_{k_1+1})$ at $\hat{S}_{k_1+1}$. This means $a_i(\hat{S}_{k_1+1})r_i(\hat{S}_{k_1+1}) \leqslant \tau$ and agent i will not get infected at $k_1+2$. To show that any agent $j\notin I(S_{k_1+1})\cup \{i\}$ will not get infected at $k_1+2$, we first prove a claim.
Claim 3. We have $a_i(\hat{S}_{k_1+1})\geqslant a$ .
Proof of the claim. Note that if $a_i(\hat{S}_{k_1+1})=1$ , then the claim holds, as $a \leqslant 1$ . If $a_i(\hat{S}_{k_1+1})=\frac{\tau}{r_i(S_{k_1+1})}$ , then
Moreover, by the assumption of the case $|N_1(\underline{v})|+1<\bar{\alpha}$ . This together with $\bar{\alpha} =\left\lfloor \frac{(n-1)a\tau}{a-\tau(1-a)}\right\rfloor +1$ and (13) implies
This completes the proof of the claim.
For any $j\notin I(S_{k_1+1})\cup \{i\}$, $a_j(\hat{S}_{k_1+1})=a$ and
Thus,
Hence agent j will not get infected at $k_1+2$ . This allows us to conclude that $I(S_{k_1+2})=N_1(\underline{v})\cup \{1\}$ . Now, using similar logic as in Case 1, we can show that no agent will get infected after this and $I(S_\infty)=N_1(\underline{v})\cup \{1\}$ .
Case 2.2: $|N_1(\underline{v})|+1\geqslant \bar{\alpha}$ .
First assume that $\underline{v}_{k_1+1}=i$ where $i\in N_1(\underline{v})\cup \{1\}$ . We show that $I(S_{\infty})=N$ . Note that as $i\in N_1(\underline{v})\cup \{1\}$ , $a_i(\hat{S}_{k_1+1})=1$ . Thus, for any $j\notin N_1(\underline{v})\cup \{1\}$ ,
and hence
Therefore, $I(S_{k_1+2})=N$ and $I(S_\infty)=N$ .
Now assume that $\underline{v}_{k_1+1}=i$ where $i\notin N_1(\underline{v})\cup \{1\}$. We show that $I(S_{k_1+2})=N\setminus \{i\}$. Since $i\notin I(S_{k_1+1})$ and $\underline{v}_{k_1+1}=i$, agent i will not get infected at $k_1+2$ (Observation 4). Consider $j\notin N_1(\underline{v})\cup \{1\}$ with $j\neq i$. We first prove a claim.
Claim 4. We have $\tau \lt a_i(\hat{S}_{k_1+1}) \lt a$ .
Proof of the claim. We show that $\tau \lt \frac{\tau}{r_i(S_{k_1+1})} \lt a$ . This together with $a<1$ proves the claim. As
we have $\tau \lt \frac{\tau}{r_i(S_{k_1+1})}$ . To see that $\frac{\tau}{r_i(S_{k_1+1})} \lt a$ , recall that by (13),
Moreover, by the assumption of the case, $|N_1(\underline{v})|+1\geqslant \bar{\alpha}$ . This, together with $\bar{\alpha} > \frac{(n-1)a\tau}{a-\tau(1-a)}$ , implies
This completes the proof of the claim.
For any $j\notin I(S_{k_1+1})\cup \{i\}$, $a_j(\hat{S}_{k_1+1})=a$ and
Thus,
Hence agent j will get infected at $k_1+2$ . This allows us to conclude that $I(S_{k_1+2})=N\setminus \{i\}$ .
To determine the final infected set, we now distinguish two cases based on whether $\underline{v}_{k_1+2}=i$ or not.
Case 2.2.1: $\underline{v}_{k_1+2}=i$ .
We show that the final infected set will be $N\setminus \{i\}$. Since by our assumption $\underline{v}_{k_1+2}=i$ and $i\notin I(S_{k_1+2})$, by Observation 4, $i\notin I(S_{k_1+3})$. Hence, $I(S_{k_1+3})=N\setminus \{i\}$. We now show that agent i will not get infected after this. At time point $k_1+2$,
Therefore, $a_i(\hat{S}_{k_1+2})=\tau$ . At $k_1+3$ , if $\underline{v}_{k_1+3}=i$ , then agent i will not get infected at $k_1+4$ (Observation 4). On the other hand, if $\underline{v}_{k_1+3}\neq i$ , then as $a_i(\hat{S}_{k_1+3})=a_i(\hat{S}_{k_1+2})=\tau$ , agent i will remain uninfected at $k_1+4$ . Continuing in this manner, we can show that agent i will not get infected after this. Thus, $I(S_\infty)=N\setminus \{i\}$ .
Case 2.2.2: $\underline{v}_{k_1+2}\neq i$ .
We show that the final infected set will be N. Since $I(S_{k_1+2})=N\setminus \{i\}$ , $r_i(\hat{S}_{k_1+2})=1$ . Moreover, as $a_i(S_{k_1+2})=a_i(\hat{S}_{k_1+1})>\tau$ (by Claim 4) and $\underline{v}_{k_1+2}\neq i$ , it follows that $a_i(\hat{S}_{k_1+2})>\tau$ . Combining these two facts, we have $a_i(\hat{S}_{k_1+2})r_i(\hat{S}_{k_1+2})>\tau$ . Thus, agent i will get infected at $k_1+3$ . Hence, $I(S_{k_1+3})=N$ and $I(S_\infty)=N$ .
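Before moving to Step 2, the following is a minimal computational sketch of the DVSP just analyzed (the code and its parameter choices are purely illustrative and not taken from the paper): on the complete graph with equal edge weights, the updating agent plays 1 if infected (Lemma 1) and $\min\{1,\tau/r_i(S_t)\}$ otherwise (action 1 when $r_i(S_t)=0$), and an uninfected agent j becomes infected exactly when $a_j(\hat{S}_t)r_j(\hat{S}_t)>\tau$. Exact rational arithmetic is used so that the threshold comparisons are not affected by rounding.

```python
from fractions import Fraction

def risk(actions, infected, i):
    """r_i: total action of infected agents over the total action of all agents other
    than i (complete graph with equal edge weights, as assumed throughout this section)."""
    denom = sum(a for j, a in enumerate(actions) if j != i)
    if denom == 0:
        return Fraction(0)
    return sum(actions[j] for j in infected if j != i) / denom

def dvsp_step(actions, infected, v, tau):
    """One epoch of the DVSP: agent v best-responds, then exposures are checked."""
    hat = list(actions)
    if v in infected:
        hat[v] = Fraction(1)                                            # infected agents play 1
    else:
        r = risk(hat, infected, v)
        hat[v] = Fraction(1) if r == 0 else min(Fraction(1), tau / r)   # b_v(S_t)
    newly = {j for j in range(len(actions))
             if j not in infected and hat[j] * risk(hat, infected, j) > tau}
    return hat, infected | newly

# Illustrative run: n = 5, a = 1/2, tau = 3/10, agent 1 (index 0) initially infected, and a
# fixed agent sequence in which agent 1 first moves at epoch 2, so that |N_1(v)| = 2.
# These values satisfy 1/(n-1) <= tau < a < 1, the regime of this subsection.
n, a, tau = 5, Fraction(1, 2), Fraction(3, 10)
actions, infected = [a] * n, {0}
for v in [1, 2, 0, 3, 4, 1, 2, 3, 4, 0]:
    actions, infected = dvsp_step(actions, infected, v, tau)
print(sorted(infected))
```

With these particular values one finds $\tilde{\alpha}=3$ and $\bar{\alpha}=2$, the run falls into Case 2.2.2, and the printed infected set is all of N, matching the case analysis above.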
Step 2: To begin with, we claim $\tilde{\alpha} \leqslant n-1$ . To see this, observe that
as $(n-1)\tau\geqslant 1$ . Hence, $\tilde{\alpha} \leqslant n-1$ . Moreover, recall that $\bar{\alpha}\geqslant 2$ . We now find the distribution of $I(S_\infty)$ . First, assume that $\tilde{\alpha}+1 \leqslant \bar{\alpha}$ . Therefore, by the above cases we have the following:
-
• $I(S_\infty)=\{1\}$ if $|N_1(\underline{v})|\in \{0,\tilde{\alpha},\tilde{\alpha}+1,\ldots,n-1\}$ , and
-
• $I(S_\infty)=N_1(\underline{v})\cup \{1\}$ if $|N_1(\underline{v})|\in \{1,2,\ldots,\tilde{\alpha}-1\}$ .
Moreover, as $\mathbb{P}$ is uniform, any two subsets of N with the same cardinality have the same probability. These observations together with Lemma 3 yield
Now assume that $\tilde{\alpha}+1> \bar{\alpha}\geqslant 2$ . By Case 1 and Case 2, we have the following:
-
(i) $I(S_\infty)=\{1\}$ if $|N_1(\underline{v})|\in \{0,\tilde{\alpha},\tilde{\alpha}+1,\ldots,n-1\},$
-
(ii) $|I(S_\infty)|=|N_1(\underline{v})|+1$ with $1\in I(S_\infty)$ if $|N_1(\underline{v})|\in \{1,2,\ldots,\bar{\alpha}-2\},$
-
(iii) $|I(S_\infty)|=n$ if $|N_1(\underline{v})|\in \{\bar{\alpha}-1,\ldots,\tilde{\alpha}-1\}$ and there is no $i\in N$ such that $k_i=k_1+1$ and $\underline{v}_{k_1+2}=i$ , and
-
(iv) $|I(S_\infty)|=n-1$ with $1\in I(S_\infty)$ if $|N_1(\underline{v})|\in \{\bar{\alpha}-1,\ldots,\tilde{\alpha}-1\}$ and there is $i\in N$ such that $k_i=k_1+1$ and $\underline{v}_{k_1+2}=i$ .
Since $|N_1|$ follows a uniform distribution on $\{0,1,\ldots,n-1\}$ and any two subsets of N with the same cardinality have the same probability, by (i) and (ii) we have
We calculate the probability of $|I(S_\infty)|=n-1$ . By (iv) we have
Note that by (i)–(iv),
Therefore,
Since any two subsets of N with the same cardinality have the same probability, combining all of the above observations yields the following distribution of the infected set:
This completes the proof of the theorem.
As in the previous subsections, we now proceed to explore the limiting distribution of the action profile. The following notation will help us present the results:
Theorem 8. Suppose $\frac{1}{n-1} \leqslant \tau \lt a \lt 1$ . If $\tilde{\alpha}+1 \leqslant \bar{\alpha}$ , the limiting distribution of the action profile is given by
whereas if $2 \leqslant \bar{\alpha} \lt \tilde{\alpha}+1$ , the limiting distribution is given by
where
and $\left\{\begin{smallmatrix} p \\[3pt]q \end{smallmatrix} \right\}$ is the Stirling number of the second kind with parameters p and q.
The proof of this theorem can be found in Appendix D (Subsection D.3). Once again, observations similar to those made in Sections 3.1 and 3.3 could be made here for Theorems 7 and 8, but to avoid unnecessary repetition, we do not elaborate upon them.
The next two theorems deal with the situation when $\tau$ is less than or equal to $\frac{a}{(n-1)}$ . Theorem 9 characterizes the limiting distribution of the infected set and Theorem 10 characterizes the limiting distribution of the action profile.
Theorem 9. Suppose $\tau \lt a \lt 1$ . If $\tau= \frac{a}{n-1}$ , the limiting distribution of the infected set is given by
whereas if $\tau \lt \frac{a}{n-1}$ , the limiting distribution is given by
The proof of this theorem can be found in Appendix B (Subsection B.3).
Theorem 10. Suppose $\tau \lt a \lt 1$ . If $\tau= \frac{a}{n-1}$ , the limiting distribution of the action profile is given by
whereas if $\tau \lt \frac{a}{n-1}$ , the limiting distribution is given by
The proof of this theorem can be found in Appendix D (Subsection D.4). The results in this final set of theorems are also consistent with our intuition, which says that if $\tau$ is small enough, the eventually infected set is either of size $n-1$ or of size n with probability 1.
Remark 3. The connection between the initial action and the limiting distribution is an important question that may come to the reader’s mind. In fact, it may intuitively seem that for every $k \leqslant n$, the probability that k or more people are infected in the long run will be weakly increasing in the initial action a. In other words, a higher value of the initial action a makes it likely that there will be more infected people in the limit. The intuition is (somewhat) natural, as a higher value of a at the initial epoch means people are roaming more freely initially, which would lead to a higher initial risk of spreading the virus, and this could multiply over time. However, this intuition does not account for the game-theoretical nature of the model. The reason is that people are strategic, and they will take more precautions (by lowering their actions) at the next epoch when they realize the virus has already spread (more) at the first epoch. For example, when $n=5$, for $\tau = 0.05$ and $a=0.2$, the disease spreads completely, i.e., all the agents are infected in the long run. However, if we keep $\tau$ at 0.05, for $a=0.35$, the probability of all the agents being infected is $0.84$. Similarly, for $\tau=0.35$, three or more agents are infected in the limit with probability $0.2$ for $a=0$, but for $a=0.8$, the same event has probability 0. Note that the opposite intuition, namely that fewer people are infected in the long run as a increases, is not true either.
4. Results under recovery
In this section, we consider situations where an infected agent recovers $\kappa$ epochs ($\kappa\in \mathbb{N}$) after getting the infection, and we analyze the corresponding stochastic process. Let the set of agents who recover from the infection at time t be denoted by $R_t$. Thus, $R_t$ consists of the agents in $I(S_{t-\kappa})$ who got the virus at time $t-\kappa$. If $t - \kappa$ is negative, $R_t$ is the empty set. Furthermore, the agents infected at the beginning (that is, at time $t=0$) recover at time $t=\kappa$. To paraphrase, we assume that the agents who are infected at $t=0$ actually did get infected at $t=0$, not at some earlier time. We additionally assume that an agent decides to stay (completely) at home (that is, chooses the action 0) on the day they recover. This is a simplifying assumption but is not unrealistic.
We now detail the changes in the stochastic process considered before under the current setup. The infected set at time $t+1$ is given by $I(S_{t+1}) = \left(I(S_{t}) \cup \{j: a_{j}(\hat{S}_{t})r_{j}(\hat{S}_{t}) > \tau(j)\}\right) \setminus R_{t+1}$, and the actions at time $t+1$ are given by $a_j(S_{t+1})=0$ for all $j \in R_{t+1}$, and $a_j(S_{t+1})=a_j(\hat{S}_t)$ for all $j \notin R_{t+1}$.
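For concreteness, here is a minimal sketch of this bookkeeping for a single epoch (the function and variable names are ours and purely illustrative): `infection_time` records, for each currently infected agent, the epoch at which they got the virus (the initially infected agents are assigned epoch 0), and `exposed` stands for the set $\{j: a_{j}(\hat{S}_{t})r_{j}(\hat{S}_{t}) > \tau(j)\}$, assumed to have been computed already from the intermediate state.

```python
def recovery_update(infection_time, exposed, hat_actions, t, kappa):
    """Compute I(S_{t+1}) and the actions at S_{t+1} under recovery.

    infection_time : dict {agent: epoch at which the agent got infected}, for agents in I(S_t)
    exposed        : set of agents j with a_j(hat S_t) * r_j(hat S_t) > tau(j)
    hat_actions    : dict {agent: action in the intermediate state hat S_t}
    """
    recovered = {j for j, s in infection_time.items() if s == t + 1 - kappa}          # R_{t+1}
    new_infection_time = {j: s for j, s in infection_time.items() if j not in recovered}
    for j in exposed - recovered - set(new_infection_time):
        new_infection_time[j] = t + 1                  # newly infected at epoch t+1
    # Recovered agents stay home (action 0) on the epoch they recover; all other agents
    # keep their action from the intermediate state.
    new_actions = {j: (0.0 if j in recovered else a) for j, a in hat_actions.items()}
    return new_infection_time, new_actions

# Example: t = 3 and kappa = 2, so the agent infected at epoch 2 recovers (and resets to
# action 0), while agent 3, whose exposure exceeded its threshold, gets infected at epoch 4.
print(recovery_update({0: 2, 2: 3}, {3}, {0: 1.0, 1: 0.4, 2: 1.0, 3: 0.4}, t=3, kappa=2))
```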
In what follows, we present our results under recovery. Except for the recovery component, we stick to the assumptions we made in Section 3 for the case of non-recovery. Recall that a denotes the (common) initial action of the agents. We provide results, for every $\tau$, for the two extreme cases $a=0$ and $a=1$. In both cases, we show that the epidemic ends in the long run (and, consequently, people eventually roam freely).
We first prove a general lemma showing that, irrespective of the initial action, if at some epoch of any DVSP all the infected agents have action 1 and no new agent gets infected at the immediately following epoch, then all agents will eventually recover.
Lemma 5. Consider $\underline{v}\in N_\infty$ and let $\hat{t} \in \mathbb{N}_0$ be such that $a_i(\hat{S}_{\hat{t}})=1 \text{ for all } i\in I(S_{\hat{t}})$ and $I(S_{\hat{t}})\supseteq I(S_{\hat{t}+1})$ . Then $I(S_\infty)=\emptyset$ .
Proof. We prove the Lemma by showing that for all $r,s\in \mathbb{N}_0$ with $r<s$ , $I(S_{\hat{t}+1+r})\supseteq I(S_{\hat{t}+1+s})$ . Assume for the sake of contradiction that there exists $p\in \mathbb{N}$ such that $I(S_{\hat{t}+1})\supseteq \cdots \supseteq I(S_{\hat{t}+p})$ but $I(S_{\hat{t}+p})\not\supseteq I(S_{\hat{t}+p+1})$ , i.e., after epoch $\hat{t}$ there was no new infection till $\hat{t}+p$ , and at $\hat{t}+p+1$ some new agents are infected. Let $i\in N$ be such that $i\in I(S_{\hat{t}+p+1})$ but $i\notin I(S_{\hat{t}+p})$ . This means $\underline{v}_{\hat{t}+p}\neq i$ and
As $\underline{v}_{\hat{t}+p}\neq i$ , we have $a_{i}(\hat{S}_{\hat{t}+p-1})=a_{i}(\hat{S}_{\hat{t}+p})$ . Thus,
Let $\underline{v}_{\hat{t}+p}=j$. As $a_k(\hat{S}_{\hat{t}})=1 \text{ for all } k\in I(S_{\hat{t}})$ and $I(S_{\hat{t}})\supseteq \cdots \supseteq I(S_{\hat{t}+p})$, (15) and the definition of the process together imply that $j\notin I(S_{\hat{t}+p-1})$ and $a_{j}(\hat{S}_{\hat{t}+p-1})>a_{j}(\hat{S}_{\hat{t}+p})$. Furthermore, as $I(S_{\hat{t}+p-1})\supseteq I(S_{\hat{t}+p})$ and $\underline{v}_{\hat{t}+p}=j$, it must be that $a_{j}(\hat{S}_{\hat{t}+p-1})r_{j}(\hat{S}_{\hat{t}+p-1}) \leq \tau$ and $a_{j}(\hat{S}_{\hat{t}+p})r_{j}(\hat{S}_{\hat{t}+p})=\tau$. Combining these two observations, we have $r_{j}(\hat{S}_{\hat{t}+p-1})<r_{j}(\hat{S}_{\hat{t}+p})$. But this is a contradiction. To see this, first note that $I(S_{\hat{t}+p-1})\supseteq I(S_{\hat{t}+p})$ and $a_h(\hat{S}_{\hat{t}+p-1})=1$ for all $h\in I(S_{\hat{t}+p-1})$. Moreover, as $\underline{v}_{\hat{t}+p}=j$, we have $a_k(\hat{S}_{\hat{t}+p-1})=a_k(\hat{S}_{\hat{t}+p})$ for all $k\in N\setminus I(S_{\hat{t}+p-1})$ and $a_l(\hat{S}_{\hat{t}+p})=0$ for all $l\in I(S_{\hat{t}+p-1})\setminus I(S_{\hat{t}+p})$. Thus, it follows that $r_{j}(\hat{S}_{\hat{t}+p-1})\geq r_{j}(\hat{S}_{\hat{t}+p})$. Hence the infected set can only shrink after epoch $\hat{t}+1$; since every agent in $I(S_{\hat{t}+1})$ recovers at most $\kappa$ epochs after getting infected, it follows that $I(S_\infty)=\emptyset$.
We are now ready to present and prove our results for the cases $a=0$ and $a=1$ .
4.1. Results when $\boldsymbol{a}=0$
In this subsection, we consider the situation where the (common) initial action a equals 0. The next theorem describes the limiting distribution of the infected set of agents and their action profiles. As we stated earlier, it shows that all the agents will recover in the long run, and the action profile will have a degenerate distribution at $\vec{1}$ .
Theorem 11. Suppose $a=0$ . Then the limiting distribution of the infected set is given by
and the limiting distribution of the action profile is given by
Proof. We first prove the distribution of the infected set. Fix an agent sequence $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$ . We show that $I(S_\infty)=\emptyset$ . Observe that Claim 1 in the proof of Theorem 1 holds in the setup of this theorem as well. We distinguish two cases based on the values of $k_1$ .
Case 1: $k_1\geqslant \kappa$ .
The assumption of the case implies that agent 1 will recover before getting a chance to update their action. As their initial action is 0 and they are the only agent infected at the beginning, this means that for any other agent i, $r_i(\hat{S}_t)=0$ for all t with $0\leq t\leq \kappa$. Therefore, no new agent will be infected till $\kappa$. At $\kappa$, agent 1 will recover as per the process, and no one further will get infected. Hence, $I(S_\infty)=\emptyset$.
Case 2: $k_1< \kappa$ .
Note that the assumption implies that agent 1 gets a chance to update their action before they recover. Thus, as in Theorem 1, we consider two sub-cases. Recall the definition of $\alpha$ as defined in (3).
Case 2.1: $|N_1(\underline{v})|\geq \alpha$ .
As in Case 1 of Theorem 1, we can show that $I(S_1)=\cdots=I(S_{k_1+1})=\{1\}$ and $a_{1}(\hat{S}_{k_1})=1$ . Hence, by Lemma 5, $I(S_\infty)=\emptyset$ .
Case 2.2: $|N_1(\underline{v})|\leq \alpha-1$ .
As $k_1< \kappa$, using similar arguments as in Case 2 of Theorem 1, we can show that $N_1(\underline{v}) \subseteq I(S_{k_1+1})\subseteq N_1(\underline{v})\cup \{1\}$ with $a_{i}(S_{k_1+1})=1$ for all $i\in I(S_{k_1+1})$ and $a_{i}(S_{k_1+1})=0$ for all $i\notin I(S_{k_1+1})$. At epoch $k_1+1$, if $\underline{v}_{k_1+1}\in I(S_{k_1+1})$ then they will choose the same action 1, and we will have $I(S_{k_1+2})\subseteq I(S_{k_1+1})$. If $\underline{v}_{k_1+1}\notin I(S_{k_1+1})$, they will choose their action $b_{\underline{v}_{k_1+1}}(S_{k_1+1})=\tau$, as all the uninfected agents have action 0 and all the infected agents have action 1. Therefore, no new agent will get infected at $k_1+2$, and we have $I(S_{k_1+2})\subseteq I(S_{k_1+1})$. As $a_i(\hat{S}_{k_1+1})=1$ for all $i\in I(S_{k_1+1})$, by Lemma 5, it follows that $I(S_\infty)=\emptyset$.
As $\underline{v}$ is an arbitrary agent sequence in $N_\infty$ and the cases are exhaustive, we have $\mathbb{P}(I(S_\infty)=\emptyset)=1$ .
For the second part of the theorem, take any agent sequence $\underline{v} \in N_\infty$. Then we have $I(S_\infty)=\emptyset$. This means there exists $\hat{t}$ such that $I(S_{\hat{t}})=I(S_{\hat{t}+1})=\cdots=\emptyset$. As there is no infected agent after epoch $\hat{t}$, any agent who updates their action after epoch $\hat{t}$ will choose their best response as 1. This, together with the fact that in $\underline{v}$ every agent appears infinitely many times, lets us conclude that $a_N(S_\infty)=\vec{1}$. Therefore, $\mathbb{P}(a_{N}(S_\infty)=\vec{1})=1$.
4.2. Results when $\boldsymbol{a}=1$
This subsection considers the situation where the (common) initial action a equals 1. As was the case for $a=0$ , here too we show that in the long run the epidemic vanishes, and people roam freely.
Theorem 12. Suppose $a=1$ . Then the limiting distribution of the infected set is given by
and the limiting distribution of the action profile is given by
Proof. We prove the theorem by showing that for any agent sequence $\underline{v}$ in $N_\infty$ , $I(S_\infty)=\emptyset$ , where S is the DVSP induced by $\underline{v}$ . Consider an agent sequence $\underline{v}$ and the DVSP S induced by $\underline{v}$ . First, assume that $\kappa\geq 2$ . Because of this assumption, using similar arguments as in the proof of Theorem 3, we can show the following:
-
(i) If $\tau\geq \frac{1}{n-1}$ , then $I(S_1)=\{1\}$ .
-
(ii) If $\tau \lt \frac{1}{n-1}$ , then one of the following holds:
-
(a) $I(S_1)=N$ and $a_j(S_1)=1$ for all $j\in N$ ,
or
-
(b) there exists $i\in N\setminus \{1\}$ with $a_j(S_1)=a_j(S_2)=1$ for all $j\in N\setminus \{i\}$ such that either $I(S_1)=I(S_2)=N\setminus \{i\}$ or $I(S_1)=N\setminus \{i\}$ and $I(S_2)=N$.
It is easy to see that if either (i) or (ii)-(a) holds, by Lemma 5, we will have $I(S_\infty)=\emptyset$. Suppose (ii)-(b) holds with $I(S_1)=I(S_2)=N\setminus \{i\}$, $a_j(S_1)=a_j(S_2)=1$ for all $j\in N\setminus \{i\}$. This means $I(S_1)=I(S_2)=N\setminus \{i\}$ and $a_j(\hat{S}_1)=1$ for all $j\in N\setminus \{i\}$. Thus, by Lemma 5, $I(S_\infty)=\emptyset$. Now, suppose (ii)-(b) holds with $I(S_1)=N\setminus \{i\}$, $I(S_2)=N$ with $a_j(S_1)=a_j(S_2)=1$ for all $j\in N\setminus \{i\}$. At epoch $\kappa\geq 2$, agent 1 recovers, i.e., $I(S_\kappa)=N\setminus \{1\}$. This means $r_1(S_\kappa)=1$. So, if agent 1 gets a chance to update their action, they will choose $a_1(\hat{S}_\kappa)=\tau$ as their best response. Otherwise, it would be $a_1(\hat{S}_\kappa)=0$. Thus, agent 1 will not get infected at $\kappa+1$. Hence $I(S_{\kappa+1})=\{i\}$, as all the agents other than i will have recovered by $\kappa+1$. At epoch $\kappa+1$, all the uninfected agents have action less than or equal to $\tau$; thus, whichever agent updates their action at $\kappa+1$, no one will get infected at $\kappa+2$. Furthermore, at epoch $\kappa+2$ agent i will recover. Therefore, $I(S_{\kappa+2})=\emptyset$. No one will get infected after this; hence $I(S_\infty)=\emptyset$. This shows that if $\kappa\geq 2$, $I(S_\infty)=\emptyset$.
Now, assume that $\kappa=1$. This means agent 1 will recover at epoch 1 and $a_1(S_1)=0$. Suppose $\tau\geq \frac{1}{n-1}$. Using similar arguments as in the proof of Theorem 3, we can show that at epoch 1, no new agent will get infected, implying $I(S_1)=\emptyset$. Therefore, $I(S_\infty)=\emptyset$. Suppose $\tau \lt \frac{1}{n-1}$. If $\underline{v}_0=1$, then, using similar arguments as in the proof of Theorem 3, it can be shown that at epoch 1, all the agents other than 1 will get infected. We claim that $I(S_2)=\emptyset$, yielding $I(S_\infty)=\emptyset$. To see this, observe that if agent 1 updates their action at epoch 1, they will change it to $\tau$, as all the other agents are infected and have non-zero actions. Otherwise, their action will remain the same, i.e., $a_1(\hat{S}_1)=0$. Thus, agent 1 will not get infected at epoch 2. Also, as $\kappa=1$, all the other agents will recover at epoch 2. Hence, $I(S_2)=\emptyset$.
If $\underline{v}_0=i\;(\neq 1)$, then at epoch 1, all the agents other than agents i and 1 will get infected (using similar arguments as in Case 2 of Theorem 3). Moreover, agent i will update their action to $(n-1)\tau$, i.e., $a_i(S_1)=(n-1)\tau$. We consider different possibilities for $\underline{v}_1$. First, consider $\underline{v}_1=i$. This means that at epoch 2, agent i will not get infected, and as $a_1(S_1)=0$, agent 1 will not get infected either. Furthermore, all the other agents recover at epoch 2. Combining these facts, we have $I(S_2)=\emptyset$. Now consider $\underline{v}_1=j\; (\notin \{1,i\})$. As $j\in I(S_1)$ and $a_j(S_1)=1$, this means $a_j(\hat{S}_1)=1$. Thus, at epoch 2, agent i will get infected (see Case 2.2 in the proof of Theorem 3). Also, as $\kappa=1$, all agents except 1 and i will recover at this epoch. This means $a_k(S_2)=0$ for all $k\neq i$ (recall that $a_1(S_1)=0$ and $\underline{v}_1\neq 1$). Therefore, none of these agents will get infected at the next epoch, i.e., at epoch 3. In addition, agent i will be recovered, implying $I(S_3)=\emptyset$.
Finally, consider $\underline{v}_1=1$ . As $I(S_1)=N\setminus \{1,i\}$ with $a_k(S_1)=1$ for all $k\in I(S_1)$ and $a_i(S_1)=(n-1)\tau$ , agent 1 will choose their action as
Also, at epoch 2, agent i will get infected (see Case 2.2 in the proof of Theorem 3), and all the agents in $N\setminus \{1,i\}$ will be recovered. Thus,
and $a_k(S_2)=0$ for all $k\in N\setminus \{1,i\}$. If $\underline{v}_2=1$, then as $I(S_2)=\{i\}$, $a_i(S_2)>0$, and $a_k(S_2)=0$ for all $k\in N\setminus \{1,i\}$, agent 1 will update their action to $\tau$ and will not get infected in the next epoch. Furthermore, agent i will be recovered at the next epoch, implying that $I(S_3)=\emptyset$. If $\underline{v}_2=i$, as $I(S_2)=\{i\}$, agent i will update their action to 1, and as a result agent 1 will get infected again at epoch 3. However, as $\kappa=1$, agent i will recover at epoch 3. This means all the uninfected agents at epoch 3 have action 0. Hence, at epoch 4, no new agent will get infected and agent 1 will recover, implying $I(S_4)=\emptyset$. If $\underline{v}_2=r\; (\notin \{1,i\})$, agent r will update their action to
We claim $I(S_3)=\emptyset$. To see this, note that agent i will be recovered at epoch 3. Among the other agents, only agents 1 and r have positive actions. So it is enough to show that agents 1 and r will not get infected. As agent r updates their action to $a_r(\hat{S}_2)=\frac{\tau}{r_r(S_2)}$, they will not get infected. For agent 1, we show that they will not get infected by showing that $a_r(\hat{S}_2)>a_1(\hat{S}_2)$. Note that $a_1(\hat{S}_2)=a_1(\hat{S}_1)$ and
Thus, $a_r(\hat{S}_2)>a_1(\hat{S}_2)$ holds if and only if
Therefore, $I(S_3)=\emptyset$ . This shows that $I(S_\infty)=\emptyset$ for $\kappa=1$ and completes the first part of the theorem. The second part of the theorem follows from the same arguments used in the proof of Theorem 11.
4.3. Evidence from simulation
Although we furnish a thorough simulation study for the model without recovery in Section 6, here we briefly mention that we also ran a simulation study for this general model with recovery, and the following are our findings:
-
• For any $0 \lt \tau \lt1$ and $\kappa>0$ , for both $a=0$ and $a=1$ , the population does indeed become completely uninfected.
-
• Consistent with our intuition, the time it takes for the population to become disease-free is monotonically increasing in $\kappa$; that is, with a higher recovery time, it takes longer for the disease to vanish.
-
• Similar results indicating a disease-free population seem to be true for the $0 \lt a<1$ cases as well; however, since we leave the proofs for these scenarios of general a to future work, we do not comment here on the time it takes for the population to become disease-free, which depends on the value of a.
5. A model under simultaneous response
In our model (as defined in Section 2), exactly one agent is chosen randomly at every epoch, who then plays their best response to the current state. In this section we discuss another model, in which agents respond simultaneously at every epoch. Before we proceed to analyze this model, we point out some notable aspects of it.
While simultaneous response by all players is considered in static games and evolutionary games, such a model may violate the assumption of common belief in rationality for dynamic games like ours. Common belief in rationality means that each player is rational (utility-maximizing), each player believes that every other player is rational, each player believes that every other player believes that every other player is rational, and so on (see Chapter 4 in [Reference Maschler, Zamir and Solan27] for a formal definition). This is because, if all agents are (best-) responding to the current state simultaneously, then each agent knows that the state will change at the next epoch, and whatever is a/the best response for the current state may not continue to be optimal at the next epoch, during which the action will be executed in practice. One can add one more level of rationality by assuming that each agent i responds to the state obtained by replacing the actions of every other agent j with their best responses. However, that will not be consistent either, as while agent i is responding to the modified state (as defined above), they believe that every other agent j is responding to the current state. This inconsistency will remain no matter how many levels of rationality we consider. So playing a/the best response to the current state is not consistent with common belief in rationality.
One reasonable way to model such a situation is to assume that the players play actions corresponding to a Nash equilibrium, whenever that exists. A collection of strategies, one for each player, is a Nash equilibrium of a game if, for each player, the corresponding strategy is a best response to the strategies of the other players given in the Nash equilibrium. It is worth noting that players do not play a best response to the ‘current state’ (in fact, there is no such state in a static game) in a Nash equilibrium; instead, they play a best response to the perceived equilibrium state. If there is a unique Nash equilibrium of the game at the initial state, then a plausible model would be to assume that the players play the corresponding actions. However, if there are multiple equilibria, different agents may play actions corresponding to different equilibria, possibly resulting in an action profile that is ‘far’ from any equilibrium. Even if we assume that agents somehow coordinate to one particular Nash equilibrium, there is no clear way to identify that equilibrium as a function of the current state. This is particularly true because a Nash equilibrium may not give equal utility (or even relatively higher utility) to every player, and there is no clear way to decide whom to favor. This also raises the question of whether the players will stick to any particular Nash equilibrium forever in the dynamic situation we are considering. Nevertheless, it is known that playing a Nash equilibrium of a static game repeatedly constitutes a Nash equilibrium of the corresponding repeated game, and therefore we compute all the Nash equilibria of the game at the initial state in this section. Needless to say, we lose the dynamic nature of the problem in this approach.
Before we proceed to characterize the Nash equilibria, we discuss the connection of the model with evolutionary games. An evolutionary game consists of a class of populations each of which has a set of actions. A strategy of a population is to choose a distribution of its mass over its actions. In best-response dynamics, each population plays a best-response strategy to the current state. Since there need not be a unique best response, utilities are perturbed to achieve the unique best response at every state. When the utilities are perturbed using a Gumbel distribution, the corresponding dynamic is called the logit best-response dynamic. One important question in evolutionary games is whether playing a/the best response to the previous state every time leads to convergence to a Nash equilibrium. The paper [Reference Lahkar and Riedel22] establishes this fact under a logit best-response dynamic, and [Reference Lahkar, Mukherjee and Roy20] does it for arbitrary lower semicontinuous, strongly convex perturbations (for example, Tsallis and Burg entropy) of the utilities. A connection between the approach we have taken in this paper and the logit dynamic in evolutionary games can be drawn from [Reference Lahkar, Mukherjee and Roy21]. That paper shows that the logit dynamic can be achieved by starting with populations of finite size, allowing one randomly chosen action to respond at each time, and then letting the population size go to infinity. Recently, [Reference Mukherjee and Roy28, Reference Mukherjee and Roy29] have considered Bayesian evolutionary games and established the convergence to a Nash equilibrium for finite and continuum strategies, respectively, under the different perturbed Bayesian best-response dynamics.
It is worth emphasizing that in evolutionary games, the whole population is considered as one player, and actions are considered as different species in that population. In particular, unlike in our model, actions cannot be treated as rational players in evolutionary games; consequently, common belief in rationality does not apply to the actions. Nevertheless, we feel that a (suitable) evolutionary approach to the virus spread model would be an interesting problem for future research.
5.1. Nash equilibria of the game at the initial state
We now investigate the Nash equilibria of the game induced at the initial state. Since we treat it as a static game (in particular, the state is not allowed to change), we consider a general setup where an arbitrary set of agents S, $S \subseteq N$ , is infected at the initial state. Following the formulation in Section 2 (and simplifying certain expressions for the present case), the game at the initial state of the infection is defined as $G=\langle N, (A_i)_{i \in N}, (v_i)_{i \in N} \rangle $ , where
-
• $N=\{1,2,\ldots,n\}$ is the set of players;
-
• $A_i =[0,1]$ is the set of actions of each player $i \in N$ ;
-
• for each action profile $\vec{a}:=(a_1,\ldots,a_n)\in [0,1]^n,$
-
• for $i \in S$ , $v_i(\vec{a})=f(a_i)$ , and
-
• for $i \in N\setminus S$ ,
(16)
\begin{align} v_i(\vec{a}) = \begin{cases} 1+f(a_i) & \text{if } a_i r_i(\vec{a}) \leqslant \tau, \\[3pt] f(a_i) & \text{if } a_i r_i(\vec{a})> \tau, \end{cases}\end{align}
where $f\,:\,[0,1]\to [0,1]$ is a strictly increasing function, and
\[ r_i(\vec{a}) = \begin{cases} \left(\frac{\sum_{j\in S}a_j}{\sum_{j\in N\setminus \{i\}}a_j}\right) & \text{if } \sum_{j\in N\setminus \{i\}}a_j \neq 0, \\[3pt] 0 & \text{if } \sum_{j\in N\setminus \{i\}}a_j = 0. \end{cases} \]
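To make the payoff structure in (16) concrete, the following is a small computational sketch; the particular choice $f(x)=x$ and the example numbers are illustrative assumptions of ours, not part of the model.

```python
def risk(a, i, S):
    """r_i(a) from (16): total action of the infected agents over the total action of all agents other than i."""
    denom = sum(a[j] for j in range(len(a)) if j != i)
    return 0.0 if denom == 0 else sum(a[j] for j in S) / denom

def payoff(a, i, S, tau, f=lambda x: x):
    """v_i(a) from (16), with f strictly increasing (here f(x) = x for illustration)."""
    if i in S:
        return f(a[i])                       # infected agents only collect f(a_i)
    bonus = 1.0 if a[i] * risk(a, i, S) <= tau else 0.0
    return bonus + f(a[i])                   # uninfected agents gain 1 if their exposure stays within tau

# Illustrative evaluation: n = 4, agent 0 infected, tau = 0.25.
n, S, tau = 4, {0}, 0.25
a = [1.0, 0.5, 0.5, 0.5]
for i in range(n):
    print(i, round(risk(a, i, S), 3), payoff(a, i, S, tau))
```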
We now define the notion of (pure) Nash equilibrium for static games.
Definition 1. An action profile $a_N=(a_1,\ldots,a_n)$ is a (pure) Nash equilibrium of a game $G = \langle N, (A_i)_{i \in N}, (v_i)_{i \in N} \rangle $ if, for all $i\in N$ and all $a_i^{\prime}\in A_i$ ,
The following theorem characterizes all Nash equilibria of the game G.
Theorem 13. For the game G, if $\tau \lt\frac{|S|}{n-1}$ , then there is a unique Nash equilibrium where agents in S play the action 1 and every other agent plays the action $\frac{\tau|S|}{|S|-(n-|S|-1)\tau}$ . If $\tau\geq \frac{|S|}{n-1}$ , then there is a unique Nash equilibrium where every agent plays the action 1.
Proof. We first show that in any Nash equilibrium of the game G, an uninfected agent will remain uninfected. Assume for the sake of contradiction that there is a Nash equilibrium $\vec{a}=(a_1,\ldots,a_n)$ where the uninfected agent i becomes infected. Since $\vec{a}$ is a Nash equilibrium, we have
As, by our assumption, agent i is infected at $\vec{a}$ , we have $u_i(\vec{a})=f(a_i)$ . Consider $a_i^{\prime}=\tau$ . This means agent i will remain uninfected at $(a_i^{\prime},a_{-i})$ , and hence $u_i(a_i^{\prime},a_{-i})=1+f(a_i^{\prime})$ . But this contradicts (17), as $f\,:\,[0,1]\to [0,1]$ is a strictly increasing function. Therefore, in any Nash equilibrium of the game G, all the agents other than agents in S will remain uninfected. Also, as agents in S have the utility function f, they will always choose the action 1 in a Nash equilibrium.
Next we show that in any Nash equilibrium of the game G, any two uninfected agents will have the same action. Assume for the sake of contradiction that there is a Nash equilibrium of the game $\vec{a}=(a_1,\ldots,a_n)$ where two uninfected agents i and j have different actions, i.e., $a_i\neq a_j$ . Without loss of generality we may assume that $a_i>a_j$ . We show that $u_j(\vec{a})<u_j(a_j^{\prime},a_{-j})$ where $a_j^{\prime}=a_i$ , a contradiction to the fact that $\vec{a}$ is a Nash equilibrium. Note that as $\vec{a}$ is a Nash equilibrium, agent j will remain uninfected, and hence $u_j(\vec{a})=1+f(a_j)$ . For the profile $(a_j^{\prime},a_{-j})$ ,
This means agent j will remain uninfected at $(a_j^{\prime},a_{-j})$. Hence, $u_j(a_j^{\prime},a_{-j})=1+f(a_j^{\prime})>1+f(a_j)=u_j(\vec{a})$. This shows that in any Nash equilibrium of the game G, all the uninfected agents will have the same action.
We are now ready to complete the proof of the theorem. Let $\vec{a}$ be a Nash equilibrium of the game G. First, assume that $\tau \lt \frac{|S|}{n-1}$. As discussed before, all the agents in S will choose their action as 1. Therefore, $a_i=1$ for all $i\in S$. Consider $i\notin S$. As agent i will remain uninfected at $\vec{a}$ and $a_i$ is their best action given the actions of the others, it must be that
This together with the fact that $a_j=a_l$ for all $j,l \in N\setminus S$ implies
Note that as $\tau \lt \frac{|S|}{n-1}$ , the following holds:
Thus, $0\leq x\leq 1$. Now assume that $\tau \geq \frac{|S|}{n-1}$. For the action profile $\vec{a}=\vec{1}$, we have
for all $i\in N\setminus S$ , so no agent in $N\setminus S$ will get infected if they choose the action 1. As 1 is the maximum possible action, the unique Nash equilibrium will be $\vec{1}$ .
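For intuition on where the first equilibrium value comes from, here is a brief sketch of the underlying algebra. If the agents in S play 1 and all uninfected agents play a common action x at which the exposure constraint binds (which, by the arguments above, is the situation when $\tau \lt \frac{|S|}{n-1}$), then each uninfected agent faces the risk $\frac{|S|}{|S|+(n-|S|-1)x}$, and
\[
x\cdot\frac{|S|}{|S|+(n-|S|-1)x}=\tau
\;\Longleftrightarrow\;
x\bigl(|S|-(n-|S|-1)\tau\bigr)=\tau|S|
\;\Longleftrightarrow\;
x=\frac{\tau|S|}{|S|-(n-|S|-1)\tau},
\]
which lies in $(0,1)$ precisely when $\tau \lt \frac{|S|}{n-1}$, in agreement with Theorem 13.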
6. Simulation studies
In this section, we corroborate our theoretical results with some evidence from simulations. Our focus remains on the following three goals:
-
• We completely enumerate the empirical distribution of the number of infected agents for a small n (total number of agents) up to a few epochs.
-
• For the same n, we obtain the reported theoretical action profile limits by increasing the number of epochs. We achieve this by selecting a large number of sequences of the particular epoch length.
-
• For large n, using the same approach of random permutations, we explore some special cases of a and $\tau$ and show that we match the reported asymptotic distribution for the cardinality of the infected set.
6.1. Empirical distribution of the number of infected agents up to a few epochs
In our paper so far, we have provided the asymptotic distribution of the cardinality of the final infected set, but it remains to understand how fast this distribution or a close approximation of it is reached. In this section, we provide a very thorough exploration of the cardinality of the infected set up to a few epochs of time. Note that, for the distribution of the total number of infected people in the population, we required an exact enumeration of all possible sequences; thus it becomes difficult to compute this beyond 10 or 11 epochs. So the number we will report for this case is an exact probability enumeration based on 10 epochs. We set $n=5$ and consider the various values of a and $\tau$ listed in Table 1.
In particular, we see that after only 10 epochs for $n=5$, we can reach very good approximations of the final distributions; in many cases, the number of epochs needed to get a close approximation is even smaller. One can see that except for the very first case of $a=0$, $\tau=0.12$, we have fairly good convergence to the actual distribution in 10 steps. Generally speaking, for smaller a and $\tau$ it takes longer to get close to the asymptotic distribution.
6.2. Empirical distribution of the action profile
A very pertinent question about some simulation evidence for the asymptotic distribution of the final action set was asked by one reviewer. First we must acknowledge that, even for $n=5$ , exactly tabulating all possible empirical distributions of a 5-dimensional vector is a challenging problem. Moreover, while we did run the simulation, we saw that for most cases the empirical distribution (with complete enumeration) after 10 or 11 epochs was somewhat far from the theoretical ones. For this reason, we decided to extend our empirical probability calculation to a larger number of epochs. As complete enumeration was computationally challenging, even for $n=5$ , we decided to randomly sample 50000 sequences of epoch lengths 50, 200, and 400. After a lot of deliberation on how to concisely report the average performance of the 5-dimensional vector after these epoch lengths, we decided to report the following:
-
• the theoretical distributions in Table 2, and
-
• the empirical distribution of the sum of the action profiles after 50, 200, and 400 epochs.
We choose the following cases:
-
• $a=0$ , with $\tau=0.1$ , $0.3$ , $0.6$ , and $0.9$ ;
-
• $a=0.4$ , with $\tau=0.05$ , $0.3$ , $0.4$ , and $0.9$ ;
-
• $a=0.7$ , with $\tau=0.1$ , $0.3$ , $0.6$ , and $0.9$ ;
-
• $a=1$ , with $\tau=0.1$ , $0.5$ , $0.6$ , and $0.9$ .
Solely for presentation purposes, we have omitted a few cases from Table 2 and presented them instead in Figures 1, 2, 3, and 4. One can see in Figure 1 that for very small $\tau$ the uniform distribution in the five possible classes was reached after 200 epochs, whereas for larger $\tau$ the theoretical distributions were reached quite quickly. For large a, however, say $a=1$ , as shown in Figure 4, the convergence is faster. This is consistent with our findings in Table 1, which shows the results of a complete enumeration.
Apart from matching the sum of actions in each case, which we present visually, we would like to draw the attention of the reader to the following cases:
-
• $a=0.4$ , $\tau=0.3$ , and
-
• $a=0.7$, $\tau=0.3$.
In each of these cases, the theoretical distribution from Table 2 includes a class with a negligible theoretical probability (of 0.006 in the first case and 0.01 in the second). One can see that even though these small classes were not prominent after 50 epochs, they became so after 200 epochs. Such empirical evidence strongly supports the accuracy of our theoretical findings. Moreover, it indicates that the time needed for the action profile to reach the limiting distribution depends on the values of a and $\tau$.
6.3. Large sample size
Lastly, we want to understand how well our results hold empirically for a large population, and, more importantly, how fast we converge to the theoretical distribution. For this as well, we have to rely on evaluating a random sample of sequences, as complete enumeration for even moderately large n, around 20 or 50, is nearly impossible to achieve. That said, we were able to achieve the theoretical distributions by evaluating a large number (50000) of permutations of the corresponding epoch length. We choose the scenario $a=0$, $\tau <1/(n-1)$ to exhibit these dynamics. Our theory in Theorem 1 says that the number of infected agents converges asymptotically to the uniform distribution on $\{1, \ldots, n\}$. In Figure 5, we choose $n=5,20,50$ and obtain the dynamics after 20, 50, and 100 epochs.
One can see that for each n, initially, the infected set remains just 1 with a somewhat significant probability, whereas the probability of having all agents infected is negligible. However, as time progresses, the distribution of the number of agents becomes more uniform. While for small n the distribution becomes almost uniform fairly quickly (e.g., after about 20 epochs), for larger n it takes significantly longer. This finding is consistent with our intuition and initial setting. Since we begin with one infected individual, initially the number of infected agents remains 1 with nontrivial probability, and the chance of the infection’s spreading to everyone is quite small. We also ran the simulation for different values of n, a, and $\tau$ and obtained similar findings. However, for large n it is very difficult to summarize the results of every possible case exhaustively as we did for $n=5$ in Table 1, and so we skip those discussions here.
7. Conclusion
In this article, we propose a graph-theoretic model to describe the spread of a contagious disease, allowing for rational interventions from agents sitting at the nodes of the graph. The agents act based on a reasonable utility function, and they may (or may not) get infected if their exposure increases. We obtain the asymptotic distribution of the cardinality of the infected set as well as that of the action profile. The results reveal several interesting patterns that exhibit proximity to uniformity, as well as results that are intuitively justifiable (such as the observation that if everyone’s immunity ( $\tau$ value) is low to begin with, then eventually the whole population gets infected). We have given an almost complete picture of how the values of $\tau$ and a affect the final distributions of the infected set and the action profile. We also observe several fascinating phase transition phenomena in our results. Through exact enumeration of all possible sequences in which the agents are picked randomly, we also show that the empirical distributions obtained mimic the corresponding theoretical distributions rather closely after only around 10 epochs from the start of the process.
A number of questions remain to be addressed that are beyond the scope of this paper. We now give a brief overview of the questions we intend to pursue for similar or related models in the future.
First, so far we have been unable to obtain, theoretically at least, the limiting distributions for the case where $a/(n-1)< \tau < 1/(n-1)$. The length of this interval, for any given a, is negligible for large n. However, our simulation studies show that there are possibly only two different distributions that could arise in this regime.
Second, we want to relax the restriction that all agents start with the same initial action a or the same initial immunity $\tau$ . Again, our numerical explorations have revealed that for fixed a and uniform $\tau$ , or for fixed $\tau$ and uniform a, the contagion tends to spread throughout the population, yielding a rather interesting phenomenon. However, our current mathematical tools fail to encompass such levels of randomness; thus, rigorously proving the occurrence of the phenomena mentioned above may require completely different mathematical machinery.
Third, we would also like to explore the situation where $g_{ij}$ is allowed to change over time or have its own model of evolution. That possibility would also significantly affect our computations and thus is left for future consideration.
Finally, from the model perspective, one could potentially incorporate a cost function for infected individuals when they go out, which would imply that $f(\cdot)$ in the utility function might not always be monotonic. It would be interesting to examine how such a cost function would affect the dynamics.
Appendix A. A few important lemmas
Lemma 6. Suppose $\underline{v}\in N_\infty$ and let $\bar{t}\in \mathbb{N}_0$ be such that either
or
Then $I(S_{\bar{t}})=I(S_{\bar{t}+1})$ .
Proof. First assume that $I(S_{\bar{t}-1})=I(S_{\bar{t}})$ and $\underline{v}_{\bar{t}}=i$ with $i\notin I(S_{\bar{t}})$ . Since $i\notin I(S_{\bar{t}})$ , i will choose their action as
If $r_i(S_{\bar{t}})=0$ then $r_i(\hat{S}_{\bar{t}})=r_i(S_{\bar{t}})=0$ , and agent i will not get infected, as $1\times r_i(\hat{S}_{\bar{t}})=0 \leqslant \tau(i)$ . Suppose $r_i(S_{\bar{t}})>0$ . Since $a_i(\hat{S}_{{\bar{t}}})=b_i(S_{{\bar{t}}})$ and $r_i(S_{\bar{t}})=r_i(\hat{S}_{\bar{t}})$ , this means agent i will not get infected at $\bar{t}+1$ . To show that any other agent $j\notin I(S_{\bar{t}})$ will not get infected at $\bar{t}+1$ , we first claim that $a_i(\hat{S}_{{\bar{t}}})\geqslant a_i(S_{{\bar{t}}})$ . If $a_i(\hat{S}_{{\bar{t}}})=1$ then there is nothing to show, so assume $a_i(\hat{S}_{{\bar{t}}})=\frac{\tau(i)}{r_{i}(S_{\bar{t}})}$ . As $i\notin I(S_{\bar{t}})$ , we have $a_i(\hat{S}_{{\bar{t}-1}})r_i(\hat{S}_{\bar{t}-1}) \leqslant \tau(i)$ . Moreover, as $I(S_{\bar{t}-1})=I(S_{\bar{t}})$ , it follows that $\hat{S}_{\bar{t}-1}=S_{\bar{t}}$ (see (iii) of Observation 1) and hence $r_i(\hat{S}_{\bar{t}-1})=r_i(S_{\bar{t}})$ . Combining this with the fact that $a_i(S_{{\bar{t}}})=a_i(\hat{S}_{{\bar{t}-1}})$ , we get $a_i(S_{{\bar{t}}})r_i(S_{\bar{t}}) \leqslant \tau(i)$ . So $a_i(\hat{S}_{{\bar{t}}})\geqslant a_i(S_{{\bar{t}}})$ .
Take $j\notin I(S_{\bar{t}})$ with $j\neq i$ . Since $j\notin I(S_{\bar{t}})$ , it follows that $a_j(\hat{S}_{\bar{t}-1})r_j(\hat{S}_{\bar{t}-1}) \leqslant \tau(j)$ . Additionally, $j\neq i$ implies $a_j(\hat{S}_{\bar{t}-1})=a_j(S_{\bar{t}})=a_j(\hat{S}_{\bar{t}})$ . Therefore, to show that $a_j(\hat{S}_{\bar{t}})r_j(\hat{S}_{\bar{t}}) \leqslant \tau(j)$ , it is enough to show that $r_j(\hat{S}_{\bar{t}-1})\geqslant r_j(\hat{S}_{\bar{t}})$ . Note that
So agent j will not get infected at $\bar{t}+1$ , and hence $I(S_{\bar{t}+1})=I(S_{\bar{t}})$ .
Now assume $i\in I(S_{\bar{t}})$ with $a_{i}(S_{\bar{t}})=1$ . This means $a_i(\hat{S}_{\bar{t}})=b_i(S_{\bar{t}})=1$ . As $b_i(S_{\bar{t}})=a_{i}(S_{\bar{t}})$ and $\underline{v}_{\bar{t}}=i$ , we have $S_{\bar{t}}=S_{\bar{t}+1}$ (see Observation 1). Hence, $I(S_{\bar{t}})=I(S_{\bar{t}+1})$ . This completes the proof of the lemma.
Lemma 7. Suppose that $I(S_0)=\{1\}$ and $a_i(S_0) \leqslant \tau(i)$ for all $i\in N$ . Let $\underline{v}\in N_\infty$ and $\hat{t}\in \mathbb{N}_0$ be such that $\underline{v}_t\neq 1$ for all $t<\hat{t}$ . Then $I(S_t)=\{1\}$ for all $t \leqslant \hat{t}$ .
Proof. Note that if $\hat{t}=0$ then there is nothing to show. So assume $\hat{t}\geqslant 1$ . We use induction to prove the statement. As the base case, we show that $I(S_1)=\{1\}$ . Let $\underline{v}_0=i$ . Since $\hat{t}\geqslant 1$ , $i\neq 1$ . Agent i will choose their action as
If $r_i(S_0)=0$ then $r_i(\hat{S}_0)=r_i(S_0)=0$ , and agent i will not get infected, as $1\times r_i(\hat{S}_0)=0 \leqslant \tau(i)$ . Suppose $r_i(S_0)>0$ . Since $a_i(\hat{S}_0)=b_i(S_0)$ , $r_i(\hat{S}_0)=r_i(S_0)$ , and $b_i(S_0) \leqslant \frac{\tau(i)}{r_i(S_0)}$ , this means agent i will not get infected at $t=1$ . For any $j\notin \{1,i\}$ , $a_j(\hat{S}_0)=a_j(S_0) \leqslant \tau(j)$ , so agent j will also not get infected at $t=1$ . Thus, $I(S_1)=\{1\}$ .
Next we introduce the following induction hypothesis: Given $\bar{t}\in \mathbb{N}_0$ with $\hat{t}\geqslant \bar{t}>1$ , we have $I(S_1)=\cdots=I(S_{\bar{t}-1})=\{1\}$ .
We now show that $I(S_{\bar{t}})=\{1\}$. Let $\underline{v}_{\bar{t}-1}=i$. Since $\hat{t}\geqslant \bar{t}$, this means $i\neq 1$. Hence, $i\notin I(S_{\bar{t}-1})$. As $\bar{t} \gt 1$, we have $I(S_{\bar{t}-2})=I(S_{\bar{t}-1})$. This, together with Lemma 6, implies $I(S_{\bar{t}})=I(S_{\bar{t}-1})=\{1\}$. Thus, by induction, we have $I(S_{\hat{t}})=\{1\}$. This completes the proof of the lemma.
Remark 4. It follows from Lemma 7 that $I(S_{t_1(\underline{v})})=\{1\}$ for all $\underline{v} \in N_\infty$ .
Lemma 8. Consider $\underline{v}\in N_\infty$ and let $\hat{t} \in \mathbb{N}_0$ be such that $a_i(S_{\hat{t}})=1 \text{ for all } i\in I(S_{\hat{t}}) \text{ and } a_i(S_{\hat{t}}) \leqslant \tau(i) \text{ for all } i\notin I(S_{\hat{t}})$ . Then $I(S_{\hat{t}})=I(S_\infty).$
Proof. We first show that $I(S_{\hat{t}+1})=I(S_{\hat{t}})$ . Let $\underline{v}_{\hat{t}}=i$ . Suppose $i\in I(S_{\hat{t}})$ . Thus, by Lemma 1, $a_i(\hat{S}_{\hat{t}})=b_i(S_{\hat{t}})=1$ . This implies $S_{\hat{t}}=S_{\hat{t}+1}$ (see Observation 1) and hence $I(S_{\hat{t}+1})=I(S_{\hat{t}})$ . Now suppose $i\notin I(S_{\hat{t}})$ . Agent i will choose their action as
If $r_i(S_{\hat{t}})=0$ then $r_i(\hat{S}_{\hat{t}})=r_i(S_{\hat{t}})=0$ , and agent i will not get infected, as $a_i(\hat{S}_{\hat{t}})\times r_i(\hat{S}_{\hat{t}})=0 \leqslant \tau(i)$ . Suppose $r_i(S_{\hat{t}})>0$ . Since $a_i(\hat{S}_{\hat{t}})=b_i(S_{\hat{t}})$ and $r_i(S_{\hat{t}})=r_i(\hat{S}_{\hat{t}})$ , this means agent i will not get infected at $\hat{t}+1$ . Take $j\notin I(S_{\hat{t}})$ and $j\neq i$ . Note that by the assumption of the lemma, $a_j(S_{\hat{t}}) \leqslant \tau(j)$ . Since $j\neq i$ , we have $a_j(S_{\hat{t}})=a_j(\hat{S}_{\hat{t}})$ . Combining these two facts, we have $a_j(\hat{S}_{\hat{t}}) \leqslant \tau(j)$ . As $r_j(\hat{S}_{\hat{t}}) \leqslant 1$ , this implies $a_j(\hat{S}_{\hat{t}})r_j(\hat{S}_{\hat{t}}) \leqslant \tau(j)$ . Thus, agent j will not get infected at $\hat{t}+1$ . Hence, $I(S_{\hat{t}+1})=I(S_{\hat{t}})$
We now show that for any $t\in \mathbb{N}_0$ with $t>\hat{t}+1$ , $I(S_{\hat{t}})=I(S_t)$ holds. Assume for the sake of contradiction that there exists $\bar{t}\in \mathbb{N}_0$ with $\bar{t} \gt \hat{t}+1$ such that $I(S_{\hat{t}})\subsetneq I(S_{\bar{t}})$ . Without loss of generality we can assume that $I(S_{\hat{t}})=I(S_{\hat{t}+1})=\cdots=I(S_{\bar{t}-1})$ . Let $\underline{v}_{\bar{t}-1}=i$ . Suppose $i\in I(S_{\bar{t}-1})$ . We first show
As $i\in I(S_{\bar{t}-1})$ and $I(S_{\bar{t}-1})=I(S_{\hat{t}})$ , we have $i\in I(S_{\hat{t}})$ . Thus, by the assumption of the lemma, $a_i(S_{\hat{t}})=1$ . Since $\hat{t} \leqslant \bar{t}-1$ , this implies $a_i(S_{\bar{t}-1})=1$ ; see Observation 3.
By (18), we have $a_i(S_{\bar{t}-1})=1$ . Since $\underline{v}_{\bar{t}-1}=i$ and $i\in I(S_{\bar{t}-1})$ , this implies $a_i(\hat{S}_{\bar{t}-1})=1$ . Thus, $S_{\bar{t}-1}=\hat{S}_{\bar{t}-1}$ , and hence $I(S_{\bar{t}-1})=I(S_{\bar{t}})$ (see Observation 1), a contradiction to the fact that $I(S_{\hat{t}})\subsetneq I(S_{\bar{t}})$ . Hence, $I(S_{\bar{t}})=I(S_{\hat{t}})$ .
Now suppose $i\notin I(S_{\bar{t}-1})$ . As $\bar{t} \gt \hat{t}+1$ , we have $I(S_{\bar{t}-1})=I(S_{\bar{t}-2})$ . This together with Lemma 6 implies $I(S_{\bar{t}-1})=I(S_{\bar{t}})$ , a contradiction to the fact that $I(S_{\hat{t}})\subsetneq I(S_{\bar{t}})$ . Hence, $I(S_{\bar{t}})=I(S_{\hat{t}})$ . This completes the proof of the lemma.
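Before turning to the theorem proofs, the following is a minimal Python sketch (not the authors' code) of a single epoch of the DVSP manipulated in the lemmas above. For concreteness it assumes that the exposure $r_i(S)$ is the ratio of the total weighted action of agent i's infected neighbours to that of all of i's neighbours (consistent with the values $r_i=\frac{1}{n-1}$ and $r_i=1$ computed in these appendices), that an uninfected agent best-responds with $\min\{\tau(i)/r_i(S),1\}$ (and with 1 when $r_i(S)=0$), that an infected agent plays 1, and that agent j becomes infected exactly when $a_j r_j>\tau(j)$ at the intermediate state; all identifiers below are ours.

```python
from dataclasses import dataclass
from typing import List, Set


@dataclass
class State:
    actions: List[float]   # a_i(S) for agents i = 0, ..., n-1
    infected: Set[int]     # I(S)


def exposure(state: State, g: List[List[float]], i: int) -> float:
    """r_i(S): weighted action of i's infected neighbours over that of all neighbours."""
    num = sum(g[i][j] * state.actions[j] for j in state.infected if j != i)
    den = sum(g[i][j] * state.actions[j] for j in range(len(state.actions)) if j != i)
    return 0.0 if den == 0 else num / den


def step(state: State, g: List[List[float]], tau: List[float], v: int) -> State:
    """One epoch: agent v updates its action, then new infections are determined."""
    actions = list(state.actions)
    if v in state.infected:
        actions[v] = 1.0                      # an infected agent plays 1
    else:
        r = exposure(state, g, v)
        actions[v] = 1.0 if r == 0 else min(tau[v] / r, 1.0)
    inter = State(actions, set(state.infected))       # the intermediate state \hat S_t
    newly = {j for j in range(len(actions)) if j not in inter.infected
             and actions[j] * exposure(inter, g, j) > tau[j]}
    return State(actions, inter.infected | newly)


if __name__ == "__main__":
    # Complete graph with equal weights, all actions initially 1, agent 0 infected
    # (the setting of Theorem 3), with tau < 1/(n-1).
    n, c, t = 4, 1.0, 0.2
    g = [[0.0 if i == j else c for j in range(n)] for i in range(n)]
    s = State([1.0] * n, {0})
    for v in [1, 1, 2, 3, 0]:                 # a deterministic agent sequence
        s = step(s, g, [t] * n, v)
    print(sorted(s.infected))                 # prints [0, 2, 3], i.e. N \ {1}
```

In this small run the agent sequence repeats the first mover (code agent 1, an initially uninfected agent) right after its first turn, and the printed final infected set excludes exactly that agent, matching the outcome $N\setminus \{i\}$ of Case 2.1 in the proof of Theorem 3 below.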
Appendix B. Proofs of Theorem 3, Theorem 5, and Theorem 9
B.1. Proof of Theorem 3
Proof. We follow the same structure that we used in the proof of Theorem 1.
Step 1. Fix an agent sequence $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$ . To shorten notation, for all $i \in N$ , let us denote $t_i(\underline{v})$ by $k_i$ . Recall the set $N_1(\underline{v})$ . We distinguish two cases based on the value of $|N_1(\underline{v})|$ .
Case 1: $|N_1(\underline{v})|=0$ .
First assume $\tau\geqslant \frac{1}{n-1}$ . We show that no agent will get infected under this assumption, i.e., $I(S_\infty)=\{1\}$ . Note that by the assumption of the case, $\underline{v}_0=1$ . Also, as $a=1$ , $a_i(S_{0})=1$ for all $i\in N$ . Recall that $\hat{S}_{0}$ denotes the intermediate state whose only difference from $S_{0}$ is that agent $\underline{v}_{0}$ has updated their action to $b_{\underline{v}_{0}}(S_{0})$ . Since $\underline{v}_{0}=1$ , we have $a_i(S_{0})=a_i(\hat{S}_{0})$ for all $i\neq 1$ . Thus, $a_i(\hat{S}_{0})=1$ for all $i\in N\setminus \{1\}$ . Moreover, by Lemma 1 and the definition of the process, $a_1(\hat{S}_{0})=1$ . Consider the time point 1. By the definition of the process, an agent $i\neq 1$ will be in $I(S_{1})$ if $a_i(\hat{S}_{0})r_i(\hat{S}_{0})>\tau$ . Since $I(S_{0})=\{1\}$ , $a_i(\hat{S}_{0})=1$ for all $i\in N$ , and $g_{ij}=c$ for all $i\neq j$ , it follows that $r_i(\hat{S}_{0})= \frac{1}{n-1}$ for all $i\in N\setminus \{1\}$ . Because $\tau\geqslant \frac{1}{n-1}$ , this implies that no agent in $N\setminus \{1\}$ gets infected at the time point 1. Hence, $I(S_{1})=\{1\}$ .
We now show that no new agent will get infected after this. We first show that $I(S_{2})=\{1\}$ . Let $\underline{v}_{1}=i$ . If $i\notin I(S_{1})$ , then as $I(S_{0})=I(S_{1})$ by Lemma 6, we have $I(S_{1})=I(S_{2})$ . If $i\in I(S_{1})$ then $i=1$ . Moreover, $a_1(S_{1})=a_1(\hat{S}_{0})=1$ . Hence, by Lemma 6, $I(S_{1})=I(S_{2})$ . Therefore, $I(S_{2})=\{1\}$ . Using the same arguments repeatedly, it follows that $I(S_t)=\{1\}$ for all $t\geqslant 2$ . Thus, $I(S_\infty)=\{1\}$ .
Now assume $\tau \lt \frac{1}{n-1}$ . We show that all the agents get infected under this assumption. Using similar arguments as before, we get $r_i(\hat{S}_{0})= \frac{1}{n-1}$ for all $i\in N\setminus \{1\}$ . As $\tau \lt \frac{1}{n-1}$ , this means each $i\in N\setminus \{1\}$ will get infected at time point 1. Therefore, $I(S_\infty)=N$ .
Case 2: $|N_1(\underline{v})|\geqslant 1$ .
This means $\underline{v}_0\neq 1$. Let $\underline{v}_0=i\in N\setminus \{1\}$. Then, by the definition of the process, agent i will choose their action as $b_i(S_0)$ at the intermediate state $\hat{S}_0$. As $a_j(S_0)=1$ for all $j\in N$ and $I(S_0)=\{1\}$, it follows that $r_i(S_0)=\frac{1}{n-1}\neq 0$. Therefore,
$$a_i(\hat{S}_0)=b_i(S_0)=\min\left\{\frac{\tau}{r_i(S_0)},\,1\right\}=\min\{(n-1)\tau,\,1\}. \tag{19}$$
Since by our assumption $\underline{v}_{0}=i$ and $i\notin I(S_{0})$ , by Observation 4, $i\notin I(S_{1})$ . For any other uninfected agent j,
This together with the fact that $a_j(\hat{S}_0)=1$ implies the following:
(1) if $\tau\geqslant \frac{1}{n-1}$, then $b_i(S_0)=1$ and hence $a_j(\hat{S}_0)r_j(\hat{S}_0)=\frac{1}{n-1} \leqslant \tau$;
(2) if $\tau \lt \frac{1}{n-1}$, then $b_i(S_0)<1$ and hence $a_j(\hat{S}_0)r_j(\hat{S}_0)>\frac{1}{n-1}>\tau$.
Combining the above observations, we may conclude that if $\tau\geqslant \frac{1}{n-1}$ then agent j will not get infected at time point 1, and if $\tau \lt \frac{1}{n-1}$ then agent j will get infected at time point 1. Hence, we have
If $\tau\geqslant \frac{1}{n-1}$ , then using similar arguments as in Case 1, we can show that $I(S_\infty)=\{1\}$ . If $\tau \lt \frac{1}{n-1}$ , then to identify the final infected set, we distinguish two sub-cases.
Case 2.1: $\underline{v}_1=i$ .
We show that the final infected set will be $N\setminus \{i\}$. Since by our assumption $\underline{v}_{1}=i$ and $i\notin I(S_{1})$, by Observation 4, $i\notin I(S_{2})$. Hence, $I(S_{2})=N\setminus \{i\}$. We now show that agent i will not get infected after this. At time point 2,
Therefore, $a_i(\hat{S}_{2})=\tau$ (see Observation 4). At time point 3, if $\underline{v}_{3}=i$ , then agent i will not get infected at time point 4 (Observation 4). On the other hand, if $\underline{v}_{3}\neq i$ , then as $a_i(\hat{S}_{3})=a_i(\hat{S}_{2})=\tau$ , it follows that $a_i(\hat{S}_{3})r_i(\hat{S}_3) \leqslant \tau$ . Hence agent i will remain uninfected at time point 4. Continuing in this manner, we can show that agent i will not get infected after this. Thus, $I(S_\infty)=N\setminus \{i\}$ .
Case 2.2: $\underline{v}_{1}\neq i$ .
We show that the final infected set will be N. Since $I(S_{1})=N\setminus \{i\}$ , $r_i(\hat{S}_{1})=1$ . Moreover, as $a_i(S_{1})=a_i(\hat{S}_{0})=b_i(S_0)=(n-1)\tau>\tau$ (see (19)) and $\underline{v}_{1}\neq i$ , it follows that $a_i(\hat{S}_{1})>\tau$ . Combining these two facts, we have $a_i(\hat{S}_{1})r_i(\hat{S}_{1})>\tau$ . Thus, agent i will get infected at time point 2. Hence, $I(S_{2})=N$ and $I(S_\infty)=N$ .
Step 2. First assume $\tau\geqslant \frac{1}{n-1}$ . Then, in view of Case 1 and Case 2 of the current proof, we have $I(S_\infty)=\{1\}$ .
Now assume $\tau \lt\frac{1}{n-1}$ . By Case 1 and Case 2 above, we have the following:
(i) $|I(S_\infty)|=n-1$ with $1\in I(S_\infty)$ if $|N_1(\underline{v})|\geqslant 1$ and there is $i\in N\setminus \{1\}$ such that $k_i=0$ and $\underline{v}_{1}=i$;
(ii) $I(S_\infty)=N$ if either $|N_1(\underline{v})|=0$, or $|N_1(\underline{v})|\geqslant 1$ and there is no $i\in N\setminus \{1\}$ such that $k_i=0$ and $\underline{v}_{1}=i$.
We calculate the probability of $|I(S_\infty)|=n-1$ . By (i) we have
Note that by (i) and (ii),
Therefore,
Since any two subsets of N with cardinality $n-1$ have the same probability, combining all of the above observations yields the following distribution of the infected set:
This completes the proof of the theorem.
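To make the case analysis of Step 1 concrete, consider the following illustration (the specific numbers are ours): take $n=4$, $g_{ij}=c$, every agent starting at action 1, and $\tau=0.2<\frac{1}{n-1}=\frac{1}{3}$. If $\underline{v}_0=1$, then every agent $i\neq 1$ faces $r_i(\hat{S}_0)=\frac{1}{3}>\tau$ and gets infected at time point 1, so $I(S_\infty)=N$. If instead $\underline{v}_0=i\neq 1$, agent i lowers its action to $b_i(S_0)=(n-1)\tau=0.6$ and escapes infection at time point 1 while the remaining two susceptible agents get infected; after that, the final infected set is $N\setminus \{i\}$ if $\underline{v}_1=i$ (agent i, now facing $r_i=1$, immediately lowers its action to $\tau$ and stays uninfected) and $N$ otherwise, exactly as in Cases 2.1 and 2.2 above.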
B.2. Proof of Theorem 5
Proof. Note that
and
Also, as $\tau\geqslant \frac{1}{n-1}$ , we have $\hat{\alpha} \leqslant n-1$ . We follow the same structure that we used in the proof of Theorem 1.
Step 1. Fix an agent sequence $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$. To shorten notation, for all $i \in N$, let us denote $t_i(\underline{v})$ by $k_i$. We first prove a claim similar to Claim 1 from Step 1 of the proof of Theorem 1.
Claim 1. For all $0 \leqslant t \lt k_1$ , $a_i(S_{t+1})=1$ where $\underline{v}_t=i$ .
Proof of the claim. Let $\underline{v}_0=i$ . As $k_1>0$ , $i\neq 1$ . Since $a_j(S_0)=a>0$ for all $j\in N$ , $I(S_0)=\{1\}$ , and $g_{ij}=c$ for all $i\neq j$ , we have $r_i(S_0)=\frac{1}{(n-1)}$ . This means
as, by the assumption of the theorem, $\tau\geqslant \frac{1}{n-1}$. Thus, $a_i(S_1)=a_i(\hat{S}_0)=1$.
Next we introduce the following induction hypothesis: Given $\bar{t}\in \mathbb{N}_0$ with $\bar{t} \lt k_1$ , we have for all $t<\bar{t}$ , $a_j(S_{t+1})=1$ where $\underline{v}_t=j$ .
Let $\underline{v}_{\bar{t}}=i^{\prime}$ ; we show that $a_{i^{\prime}}(S_{\bar{t}+1})=1$ . Note that by Lemma 7, $I(S_{\bar{t}})=\{1\}$ . Moreover, by the induction hypothesis, $a_j(S_{\bar{t}})\geqslant a$ for all $j\in N\setminus \{1\}$ . Also, as $\bar{t} \lt k_1$ , we have $a_1(S_{\bar{t}})=a$ . Combining all these observations, we have
Since $r_{i^{\prime}}(S_{\bar{t}})>0$ ,
see Lemma 1. Therefore, using (22) and the fact that $\tau\geqslant \frac{1}{(n-1)}$ , we have $b_{i^{\prime}}(S_{\bar{t}})=1$ . Thus, $a_{i^{\prime}}(S_{\bar{t}+1})=a_{i^{\prime}}(\hat{S}_{\bar{t}})=1$ . This completes the proof of the claim.
We now determine the final infected set. Note that by Claim 1, $a_i(S_{k_1})=1$ for all $i\in N_1(\underline{v})$ . Also, by the definition of the process, $a_i(S_{k_1})=a$ for all $i\notin N_1(\underline{v})\cup \{1\}$ , as these agents have not updated their actions till the time point $k_1$ . Recall that $\hat{S}_{k_1}$ denotes the intermediate state whose only difference from $S_{k_1}$ is that agent $\underline{v}_{k_1}$ has updated their action to $b_{\underline{v}_{k_1}}(S_{k_1})$ . Since $\underline{v}_{k_1}=1$ , we have $a_i(S_{k_1})=a_i(\hat{S}_{k_1})$ for all $i\neq 1$ . Thus, $a_i(\hat{S}_{k_1})=1$ for all $i\in N_1(\underline{v})$ and $a_i(\hat{S}_{k_1})=a$ for all $i\notin N_1(\underline{v})\cup \{1\}$ .
Moreover, by Remark 1 and the definition of the process, $a_1(\hat{S}_{k_1})=1$. Consider the time point $k_1+1$. By the definition of the process, an agent $i\neq 1$ will be in $I(S_{k_1+1})$ if $a_i(\hat{S}_{k_1})r_i(\hat{S}_{k_1})>\tau$. For any $i\notin N_1(\underline{v})\cup \{1\}$, $a_i(\hat{S}_{k_1})=a \leqslant \tau$. Thus, $a_i(\hat{S}_{k_1})r_i(\hat{S}_{k_1}) \leqslant \tau$, and hence no agent outside $N_1(\underline{v})\cup \{1\}$ will get infected at $k_1+1$. For agents in $N_1(\underline{v})$, we distinguish two cases.
Case 1: $|N_1(\underline{v})|\geqslant \hat{\alpha}$ .
Since $I(S_{k_1})=\{1\}$ , $a_i(\hat{S}_{k_1})=1$ for all $i\in N_1(\underline{v}) \cup \{1\}$ , $a_i(\hat{S}_{k_1})=a$ for all $i\notin N_1(\underline{v})\cup \{1\}$ , and $g_{ij}=c$ for all $i\neq j$ , it follows that
for all $i\in N_1(\underline{v})$ . This implies that no agent in $N_1(\underline{v})$ gets infected at the time point $k_1+1$ .
We show that no new agent will get infected after this. We first show that $I(S_{k_1+2})=\{1\}$ . Let $\underline{v}_{k_1+1}=i$ . If $i\notin I(S_{k_1+1})$ , then as $I(S_{k_1})=I(S_{k_1+1})$ by Lemma 6, we have $I(S_{k_1+1})=I(S_{k_1+2})$ . If $i\in I(S_{k_1+1})$ then $i=1$ . Moreover, $a_1(S_{k_1+1})=a_1(\hat{S}_{k_1})=1$ . Hence, by Lemma 6, $I(S_{k_1+1})=I(S_{k_1+2})$ . Therefore, $I(S_{k_1+2})=\{1\}$ . Using the same arguments repeatedly, it follows that $I(S_t)=\{1\}$ for all $t\geqslant k_1+2$ . Thus, $I(S_\infty)=\{1\}$ .
Case 2: $|N_1(\underline{v})| \leqslant \hat{\alpha}-1$ .
By the assumption of the case, $\hat{\alpha}\geqslant 1$. First, assume $\hat{\alpha}=1$. This, together with $|N_1(\underline{v})| \leqslant \hat{\alpha}-1$, implies $|N_1(\underline{v})|=0$. Therefore, $k_1=0$. We show that $I(S_\infty)=\{1\}$. Note that by the definition of the process, $a_i(\hat{S}_{0})=a$ for all $i\neq 1$. As $a \leqslant \tau$, this means no agent in the set $\{2,\ldots,n\}$ will get infected at the time point 1. Hence, $I(S_1)=\{1\}$. Moreover, as $I(S_1)=\{1\}$ with $a_1(S_1)=1$ and $a_i(S_1)=a \leqslant \tau$ for all $i\neq 1$, by Lemma 8 it follows that $I(S_{1})=I(S_\infty)$. Hence, $I(S_\infty)=\{1\}$.
Now assume $\hat{\alpha}\geqslant 2$ . Thus, by the definition of $\hat{\alpha}$ , we have
As $I(S_{k_1})=\{1\}$, $a_i(\hat{S}_{k_1})=1$ for all $i\in N_1(\underline{v})\cup \{1\}$, $a_i(\hat{S}_{k_1})=a$ for all $i\notin N_1(\underline{v})\cup \{1\}$, and $g_{ij}=c$ for all $i\neq j$, we have for all $i\in N_1(\underline{v})$
This implies that all agents in $N_1(\underline{v})$ will get infected at time point $k_1+1$ . Thus, we have $I(S_{k_1+1})=N_1(\underline{v})\cup \{1\}$ . Furthermore, as $a_i(S_{k_1+1})=a_i(\hat{S}_{k_1})=1$ for all $i\in I(S_{k_1+1})$ and $a_i(S_{k_1+1})=a \leqslant \tau$ for all $i\notin I(S_{k_1+1})$ , by Lemma 8 it follows that $I(S_{k_1+1})=I(S_\infty)$ . Hence, $I(S_\infty)=N_1(\underline{v})\cup \{1\}$ .
Step 2. Consider the probability space $(N_\infty, \mathcal{F}, \mathbb{P})$ and random variables S and $t_1,\ldots, t_n$ . Let $m \in \{2,\ldots, n\}$ be such that $m \leqslant \hat{\alpha}$ . In view of Case 1 and Case 2 of the current proof, we have (i) $|I(S_\infty)| \leqslant \hat{\alpha}$ , and (ii) $|I(S_\infty)| = m$ with $1\in I(S_\infty)$ if and only if $| \{i \in N \mid t_i < t_1\}| = m-1$ . Also, $|I(S_\infty)| = 1$ if and only if either $\{i \in N \mid t_i < t_1\}=\emptyset$ or $| \{i \in N \mid t_i < t_1\}| \geqslant \hat{\alpha}$ . Moreover, as $\mathbb{P}$ is uniform, any two subsets of N with the same cardinality have the same probability. These observations together yield
This completes the proof of the theorem.
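For intuition behind the uniformity used in Step 2 (a sketch, assuming that the agents $\underline{v}_t$ are drawn i.i.d. uniformly from N, as in the SVSP): by exchangeability, the order in which the n agents make their first appearance in $\underline{v}$ is a uniformly random permutation of N, so the count $|\{i \in N \mid t_i < t_1\}|$ of agents whose first turn precedes agent 1's is uniformly distributed over $\{0,1,\ldots,n-1\}$, each value having probability $\frac{1}{n}$.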
B.3. Proof of Theorem 9
Proof. We follow the same structure that we used in the proof of Theorem 1.
Step 1. Fix an agent sequence $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$ . To shorten notation, for all $i \in N$ , let us denote $t_i(\underline{v})$ by $k_i$ . Recall the set $N_1(\underline{v})$ . We distinguish two cases based on the value of $|N_1(\underline{v})|$ .
Case 1: $|N_1(\underline{v})|=0$ .
We show that, for $\tau \leqslant \frac{a}{n-1}$, all the agents will get infected in this case, i.e., $I(S_\infty)=N$. Note that by the assumption of the case, $\underline{v}_0=1$. Recall that $\hat{S}_{0}$ denotes the intermediate state whose only difference from $S_{0}$ is that agent $\underline{v}_{0}$ has updated their action to $b_{\underline{v}_{0}}(S_{0})$. Since $\underline{v}_{0}=1$, we have $a_i(S_{0})=a_i(\hat{S}_{0})=a$ for all $i\neq 1$. Moreover, by Remark 1 and the definition of the process, $a_1(\hat{S}_{0})=1$. Consider the time point 1. By the definition of the process, an agent $i\neq 1$ will be in $I(S_{1})$ if $a_i(\hat{S}_{0})r_i(\hat{S}_{0})>\tau$. Since $I(S_{0})=\{1\}$, $a_i(\hat{S}_{0})=a$ for all $i\in N\setminus \{1\}$, and $g_{ij}=c$ for all $i\neq j$, it follows that for all $i\in N\setminus \{1\}$,
Because $\tau \leqslant \frac{a}{n-1}$ , this implies that all the agents in $N\setminus \{1\}$ get infected at the time point 1. Hence $I(S_{1})=N$ . Therefore, by the definition of the process, $I(S_\infty)=N$ .
Case 2: $|N_1(\underline{v})|\geqslant 1$ .
This means $\underline{v}_0\neq 1$. Let $\underline{v}_0=i\in N\setminus \{1\}$. Hence, by the definition of the process, agent i will choose their action as $b_i(S_0)$ at the intermediate state $\hat{S}_0$. As $a_j(S_0)=a> 0$ for all $j\in N$ and $I(S_0)=\{1\}$, it follows that $r_i(S_0)=\frac{1}{n-1}\neq 0$. Therefore,
$$a_i(\hat{S}_0)=b_i(S_0)=\min\left\{\frac{\tau}{r_i(S_0)},\,1\right\}=\min\{(n-1)\tau,\,1\}. \tag{23}$$
Since by our assumption $\underline{v}_{0}=i$ and $i\notin I(S_{0})$ , by Observation 4, $i\notin I(S_{1})$ . For any other uninfected agent j,
This together with the fact that $a_j(\hat{S}_0)=a$ implies the following:
(1) if $\tau = \frac{a}{n-1}$ then $a_j(\hat{S}_0)r_j(\hat{S}_0)=\frac{a}{n-1}= \tau$;
(2) if $\tau \lt \frac{a}{n-1}$ then $a_j(\hat{S}_0)r_j(\hat{S}_0)>\frac{a}{n-1}>\tau$.
Combining the above observations, we may conclude that if $\tau= \frac{a}{n-1}$ then agent j will not get infected at time point 1, and if $\tau \lt \frac{a}{n-1}$ then agent j will get infected at time point 1. Hence, we have
To decide the final outcome, we first assume $\tau = \frac{a}{n-1}$ . Note that by (23), $b_i(S_0)=a$ . This means $a_i(S_1)=a$ . Moreover, as $\underline{v}_0=i$ , we have $a_j(S_1)=a$ for all $j\neq i$ . Using similar arguments, we can show that $a_k(S_{k_1})=a$ for all $k\in N$ and $I(S_{k_1})=\{1\}$ . By the definition of the process, $a_1(\hat{S}_{k_1})=1$ and $a_k(\hat{S}_{k_1})=a$ for all $k\neq 1$ . Therefore, for any $k\neq 1$ ,
Thus, all the agents other than agent 1 will get infected at $k_1+1$ . Hence, $I(S_\infty)=N$ .
Now assume $\tau \lt \frac{a}{n-1}$ . We distinguish two sub-cases.
Case 2.1. $\underline{v}_1=i$ .
We show that the final infected set will be $N\setminus \{i\}$. Since by our assumption $\underline{v}_{1}=i$ and $i\notin I(S_{1})$, by Observation 4, $i\notin I(S_{2})$. Hence, $I(S_{2})=N\setminus \{i\}$. We now show that agent i will not get infected after this. At time point 2,
Therefore, $a_i(\hat{S}_{2})=\tau$ (see Observation 4). At time point 3, if $\underline{v}_{3}=i$ , then agent i will not get infected at time point 4 (Observation 4). On the other hand, if $\underline{v}_{3}\neq i$ , then as $a_i(\hat{S}_{3})=a_i(\hat{S}_{2})=\tau$ , it follows that $a_i(\hat{S}_{3})r_i(\hat{S}_3) \leqslant \tau$ . Hence agent i will remain uninfected at time point 4. Continuing in this manner, we can show that agent i will not get infected after this. Thus, $I(S_\infty)=N\setminus \{i\}$ .
Case 2.2: $\underline{v}_{1}\neq i$ .
We show that the final infected set will be N. Since $I(S_{1})=N\setminus \{i\}$ , $r_i(\hat{S}_{1})=1$ . Moreover, as $a_i(S_{1})=a_i(\hat{S}_{0})=b_i(S_0)=(n-1)\tau>\tau$ (see (23)) and $\underline{v}_{1}\neq i$ , it follows that $a_i(\hat{S}_{1})>\tau$ . Combining these two facts, we have $a_i(\hat{S}_{1})r_i(\hat{S}_{1})>\tau$ . Thus, agent i will get infected at time point 2. Hence, $I(S_{2})=N$ and $I(S_\infty)=N$ .
Step 2. First assume $\tau= \frac{a}{n-1}$ . Then, in view of Case 1 and Case 2 of the current proof, we have $I(S_\infty)=N$ .
Now assume $\tau \lt \frac{a}{n-1}$ . By Case 1 and Case 2 above, we have the following:
(i) $I(S_\infty)=N\setminus \{i\}$ with $1\in I(S_\infty)$ if $|N_1(\underline{v})|\geqslant 1$ and there is $i\in N\setminus \{1\}$ such that $k_i=0$ and $\underline{v}_{1}=i$;
(ii) $I(S_\infty)=N$ if either $|N_1(\underline{v})|=0$, or $|N_1(\underline{v})|\geqslant 1$ and there is no $i\in N\setminus \{1\}$ such that $k_i=0$ and $\underline{v}_{1}=i$.
We calculate the probability of $|I(S_\infty)|=n-1$ . By (i) we have
Note that by (i) and (ii),
Therefore,
Since any two subsets of N with cardinality $n-1$ have the same probability, combining all of the above observations yields the following distribution of the infected set:
This completes the proof of the theorem.
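As a concrete instance paralleling the illustration given after the proof of Theorem 3 (again, the numbers are ours): take $n=4$, $a=0.6$, and $\tau=0.1<\frac{a}{n-1}=0.2$. If $\underline{v}_0=1$, every other agent gets infected at time point 1 and $I(S_\infty)=N$; if $\underline{v}_0=i\neq 1$, agent i lowers its action to $b_i(S_0)=(n-1)\tau=0.3<a$, escapes infection at time point 1 while the other two susceptible agents get infected, and the final infected set is $N\setminus \{i\}$ or $N$ according as $\underline{v}_1=i$ or not, exactly as in Cases 2.1 and 2.2 above.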
Appendix C. A few important lemmas
Lemma 9. Let $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$ . Suppose $t_0$ is such that $I(S_{t_0})=I(S_t)$ for all $t\geqslant t_0$ and $a_k(S_{t_0})=1$ for all $k\in I(S_{t_0})$ . Then for $i\notin I(S_{t_0})$ and $\bar{t} \gt t_0$ with $\underline{v}_{\bar{t}}=i$ , we have
Proof. We use induction on $\bar{t}$ to prove the lemma. Note that for the base case, that is, for $\bar{t}=t_0+1$ , the Lemma holds vacuously.
Next we introduce the following induction hypothesis: Given $\bar{t}\in \mathbb{N}_0$ with $\bar{t} \gt t_0+1$ , the Lemma holds for all t with $t_0+1 \leqslant t<\bar{t}$ .
We now show that the statement in the lemma holds for $\bar{t}$. Suppose $\underline{v}_{\bar{t}}=i$ where $i\notin I(S_{t_0})$. If there is no $t\in (t_0,\bar{t})$ such that $\underline{v}_t\notin I(S_{t_0})$, the lemma holds vacuously. So assume that $\hat{t}$ is the last time point before $\bar{t}$ such that $\underline{v}_{\hat{t}}=j$ for some $j\notin I(S_{t_0})$. This, together with the induction hypothesis, implies $a_j(\hat{S}_{\hat{t}})\geqslant a_k(\hat{S}_{\hat{t}})$ for all $k\notin I(S_{t_0})$ with $\underline{v}_t=k$ for some $t\in (t_0,\hat{t})$. Also, by the definition of the process, $a_l(\hat{S}_{\hat{t}})=a_l(\hat{S}_{\bar{t}})$ for all $l\notin I(S_{t_0})$ with $l\neq i$. Therefore, to prove the lemma it is enough to show that $a_i(\hat{S}_{\bar{t}})\geqslant a_j(\hat{S}_{\bar{t}})$. Additionally, as $a_j(\hat{S}_{\bar{t}}) \leqslant 1$, we may assume that $a_i(\hat{S}_{\bar{t}})=\frac{\tau}{r_i(\hat{S}_{\bar{t}})}$ (if instead $a_i(\hat{S}_{\bar{t}})=1$, then $a_i(\hat{S}_{\bar{t}})\geqslant a_j(\hat{S}_{\bar{t}})$ holds trivially). Moreover, as $j\notin I(S_{t_0})$, $a_j(\hat{S}_{\hat{t}}) \leqslant\frac{\tau}{r_j(\hat{S}_{\hat{t}})}$. Now,
Equation (24) together with the fact that $a_i(\hat{S}_{\bar{t}})=\frac{\tau}{r_i(\hat{S}_{\bar{t}})}$ and $a_j(\hat{S}_{\hat{t}}) \leqslant \frac{\tau}{r_j(\hat{S}_{\hat{t}})}$ implies $a_i(\hat{S}_{\bar{t}})\geqslant a_j(\hat{S}_{\hat{t}})$ . Hence $a_i(\hat{S}_{\bar{t}})\geqslant a_j(\hat{S}_{\bar{t}})$ . This completes the proof of the lemma.
The following Lemma provides an important property of the final action limit for both infected and uninfected agents. It shows that an infected agent will have the action limit 1, whereas any two uninfected agents will have the same action limit; that is, for $i,j\notin I(S_\infty)$ , $a_i(S_\infty)=a_j(S_\infty)$ .
Lemma 10. Let $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$ . Then
and
Proof. Let $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$ . Consider $k\in I(S_\infty)$ . As $\underline{v} \in N_\infty$ , agent k appears infinitely many times in $\underline{v}$ . Moreover, after getting infected, whenever they update their action, they will choose it to be 1. Thus, $a_k(S_\infty)=1$ . Now consider $i,j \notin I(S_\infty)$ . Let $b=a_i(S_\infty)$ and consider $\epsilon>0$ . This means there exists $t_0$ such that $a_i(S_t)\geqslant b-\epsilon$ for all $t\geqslant t_0$ . Note that as N is a finite set and $I(S_\infty)$ exists, there exists $\tilde{t}\in \mathbb{N}_0$ such that $I(S_{\tilde{t}})=I(S_\infty)$ . In view of this, we may assume that $I(S_{t_0})=I(S_\infty)$ . Consider a time point $\hat{t}$ such that
(1) $\hat{t}> t_0$ and $\underline{v}_{\hat{t}}=j$, and
(2) there exists $\bar{t}\in [t_0,\hat{t}]$ such that $\underline{v}_{\bar{t}}=i$.
Such a time point $\hat{t}$ exists, since $\underline{v} \in N_\infty$ . Therefore, by Lemma 9, $a_j(S_{\hat{t}})\geqslant a_i(S_{\hat{t}})$ . As $\hat{t}> t_0$ , this means $a_j(S_{\hat{t}})\geqslant b-\epsilon$ . Furthermore, as $I(S_{t_0})=I(S_\infty)$ and $\hat{t}> t_0$ , by Claim 1 in Lemma 2, $a_j(S_t)\geqslant a_j(S_{\hat{t}})$ for all $t\geqslant \hat{t}$ . Thus, $a_j(S_t)\geqslant b-\epsilon$ for all $t\geqslant \hat{t}$ . Since $\epsilon$ is arbitrary, this gives $a_j(S_{\infty})\geqslant b$ . Similarly, we can show that $a_i(S_{\infty})\geqslant a_j(S_{\infty})$ . Hence, $a_i(S_{\infty})= a_j(S_{\infty})$ .
The next Lemma determines the common action limit of the uninfected agents.
Lemma 11. Let $\underline{v}\in N_\infty$ and let S be the DVSP induced by $\underline{v}$ . Furthermore, let $\gamma$ be the common action limit of the uninfected agents. Then
and
Proof. Let $t_0\in \mathbb{N}_0$ be such that $I(S_{t_0})=I(S_\infty)$ and $a_k(S_{t_0})=1$ for all $k\in I(S_{t_0})$ . First assume that $(n-1)\tau \lt|I(S_\infty)|$ . This implies $\frac{\tau}{|I(S_\infty)|}<\frac{1}{n-1}$ . We first show that for any time point $\bar{t}\geqslant t_0$ , if $\underline{v}_{\bar{t}}\notin I(S_\infty)$ then $a_{\underline{v}_{\bar{t}}}(\hat{S}_{\bar{t}})<1$ . Let $\underline{v}_{\bar{t}}=i$ . Since $a_{i}(\hat{S}_{\bar{t}})=\min\{\frac{\tau}{r_i(\hat{S}_{\bar{t}})},1\}$ , it is enough to show that $\frac{\tau}{r_i(\hat{S}_{\bar{t}})} \lt 1$ . We have
Since $\bar{t}$ is arbitrary, it follows that $a_{i}(\hat{S}_{t})=\frac{\tau}{r_i(\hat{S}_{t})}$ for all $t\geqslant t_0$ with $\underline{v}_t=i$ . Hence,
Taking the limit on both sides of (25), we have
Now assume $(n-1)\tau\geqslant |I(S_\infty)|$ . We have to show that $\gamma=1$ . Assume $\gamma<1$ . Consider $i\notin I(S_\infty)$ . Since, by Claim 1 in Lemma 2, $a_i(S_t)$ is an increasing sequence for $t>t_0$ , $\gamma<1$ implies $a_i(S_t)<1$ for all $t>t_0$ . This means $a_i(\hat{S}_t)=\frac{\tau}{r_i(\hat{S}_t)}$ for $t>t_0$ with $\underline{v}_t=i$ . Therefore, using similar arguments as before, we have
But this is a contradiction to the fact that $\gamma<1$ . Therefore, $\gamma=1$ .
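To see heuristically where the common limit comes from in the complete-graph setting of Appendices B and D (a back-of-the-envelope computation under our assumed form of $r_i$, namely the ratio of the total weighted action of the infected neighbours to that of all neighbours, which matches the values $r_i=\frac{1}{n-1}$ and $r_i=1$ computed above): in the limit every infected agent plays 1 and every uninfected agent plays $\gamma$, so an uninfected agent i sees
$$r_i=\frac{|I(S_\infty)|}{|I(S_\infty)|+(n-1-|I(S_\infty)|)\gamma},$$
and, when $(n-1)\tau<|I(S_\infty)|$, the fixed-point relation $\gamma=\tau/r_i$ gives
$$\gamma=\frac{\tau\,|I(S_\infty)|}{|I(S_\infty)|-\tau\,(n-1-|I(S_\infty)|)}<1.$$
In particular, for $|I(S_\infty)|=n-1$ this reduces to $\gamma=\tau$, the value used in the proofs of Theorems 4 and 10.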
Appendix D. Proofs of Theorem 4, Theorem 6, Theorem 8, and Theorem 10
D.1. Proof of Theorem 4
Proof. We first explore the limiting actions for a fixed agent sequence, and then we use this to find the limiting probability distribution. Let $\underline{v}$ be an agent sequence and S the DVSP induced by $\underline{v}$ . Note that by Remark 1, it is enough to assume $\underline{v}\in N_\infty$ . Therefore, by Lemma 10, all the agents outside $I(S_\infty)$ have the same action limit, and all the agents in $I(S_\infty)$ have the action limit 1. Let us denote the common limit by $\gamma$ . First assume $\tau \geqslant \frac{1}{n-1}$ . By Theorem 3, $I(S_\infty)=\{1\}$ . Therefore, $(n-1)\tau\geqslant |I(S_\infty)|$ , and hence, by Lemma 11, $\gamma=1$ . Thus, $a_{N}(S_\infty)=\vec{1}$ .
Now assume that $\tau \lt \frac{1}{n-1}$ . We distinguish two cases based on the value of $N_1(\underline{v})$ (as in the proof of Theorem 3) to find $\gamma$ .
Case 1: $|N_1(\underline{v})|=0$ .
Recall that for this case the final infected set is N. Hence, $a_{N}(S_\infty)=\vec{1}$ .
Case 2: $|N_1(\underline{v})|\geqslant 1$ .
Recall that for this case the final infected set has cardinality either n or $n-1$ . If the cardinality is n then $a_{N}(S_\infty)=\vec{1}$ . If the cardinality is $n-1$ , then as $(n-1)\tau \lt 1$ , by Lemma 11, $\gamma=\tau$ . Hence
Note that this implies $a_{N}(S_\infty)\in A_{n-1}$ . Also, as $\mathbb{P}$ is uniform, any two vectors in $A_{n-1}$ have the same probability. Thus, by Theorem 3, we have the following distribution:
D.2. Proof of Theorem 6
Proof. We first explore the limiting actions for a fixed agent sequence, and then we use this to find the limiting probability distribution. Let $\underline{v}$ be an agent sequence and S the DVSP induced by $\underline{v}$ . Note that by Remark 1, it is enough to assume $\underline{v}\in N_\infty$ . Therefore, by Lemma 10, all the agents outside $I(S_\infty)$ have the same action limit, and all the agents in $I(S_\infty)$ have the action limit 1. Let us denote the common limit by $\gamma$ . We distinguish two cases based on the value of $N_1(\underline{v})$ (as in the proof of Theorem 5) to find $\gamma$ .
Case 1: $|N_1(\underline{v})|\geqslant \hat{\alpha}$ .
Recall that for this case the final infected set is $\{1\}$. Moreover, by the assumption of the theorem, $(n-1)\tau\geqslant 1=|I(S_\infty)|$. Therefore, by Lemma 11, $\gamma=1$. Hence, $a_{N}(S_\infty)=\vec{1}$.
Case 2: $|N_1(\underline{v})| \leqslant \hat{\alpha}-1$ .
Recall that for this case the final infected set is $N_1(\underline{v})\cup \{1\}$. Note that as $\hat{\alpha} \leqslant n-1$, $|N_1(\underline{v})\cup \{1\}| \leqslant n-1$. Therefore, by Lemma 11, if $(n-1)\tau\geqslant |N_1(\underline{v})|+1$ then $a_{N}(S_\infty)=\vec{1}$, and if $(n-1)\tau \lt|N_1(\underline{v})|+1$ then
Recall that $\hat{\beta}=\min \{\lfloor(n-1)\tau \rfloor+1,\hat{\alpha}+1\}$ . Thus, combining Cases 1 and 2, we have the following:
(i) $|N_1(\underline{v})|+1\in [\hat{\beta},\hat{\alpha}]$ implies
Note that (i) implies $a_{N}(S_\infty)\in A_{|N_1(\underline{v})|+1}$ when $|N_1(\underline{v})|+1\in [\hat{\beta},\hat{\alpha}]$. Also, as $\mathbb{P}$ is uniform, any two vectors in $A_m$, for $m\in [\hat{\beta},\hat{\alpha}]$, have the same probability. Thus, we have the following distribution:
D.3. Proof of Theorem 8
Proof. We first explore the limiting actions for a fixed agent sequence, and then we use this to find the limiting probability distribution. Let $\underline{v}$ be an agent sequence and S the DVSP induced by $\underline{v}$. Note that by Remark 1, it is enough to assume $\underline{v}\in N_\infty$. Therefore, by Lemma 10, all the agents outside $I(S_\infty)$ have the same action limit, and all the agents in $I(S_\infty)$ have the action limit 1. Let us denote the common limit by $\gamma$. First assume $\tilde{\alpha}+1 \leqslant \bar{\alpha}$. As shown in the proof of Theorem 7, the final infected set is either $\{1\}$ or $N_1(\underline{v})\cup \{1\}$, the latter having cardinality $|N_1(\underline{v})|+1$ with $|N_1(\underline{v})|\in [1,\tilde{\alpha}]$. Since, by the assumption of the theorem, $(n-1)\tau\geqslant 1$, we have $(n-1)\tau\geqslant |I(S_\infty)|$ when the infected set is $\{1\}$. Therefore, by Lemma 11, $\gamma=1$, and hence $a_{N}(S_\infty)=\vec{1}$. On the other hand, if the final infected set is $N_1(\underline{v})\cup \{1\}$, the limiting action depends on $|N_1(\underline{v})|$. By Lemma 11, if $(n-1)\tau\geqslant |N_1(\underline{v})|+1$ then $a_{N}(S_\infty)=\vec{1}$, and if $(n-1)\tau \lt|N_1(\underline{v})|+1$ then
Recall that the following was shown in the proof of Theorem 7 when $\tilde{\alpha}+1 \leqslant \bar{\alpha}$ :
• $|I(S_\infty)|=1$ if $|N_1(\underline{v})|\in \{0,\tilde{\alpha},\tilde{\alpha}+1,\ldots,n-1\}$;
• $|I(S_\infty)|=|N_1(\underline{v})|+1$ if $|N_1(\underline{v})|\in \{1,2,\ldots,\tilde{\alpha}-1\}$.
Recall that $\tilde{\beta}=\min \{\lfloor(n-1)\tau \rfloor+1,\tilde{\alpha}+1\}$. Therefore, we have the following:
(i) $|N_1(\underline{v})|+1\in [\tilde{\beta},\tilde{\alpha}]$ implies
(ii) $|N_1(\underline{v})|+1\in [1,\tilde{\beta}-1]\cup [\tilde{\alpha}+1,n]$ implies $a_{N}(S_\infty)=\vec{1}$.
Note that (i) implies $a_{N}(S_\infty)\in A_{|N_1(\underline{v})|+1}$ when $|N_1(\underline{v})|+1\in [\tilde{\beta},\tilde{\alpha}]$. Also, as $\mathbb{P}$ is uniform, any two vectors in $A_m$, for $m\in [\tilde{\beta},\tilde{\alpha}]$, have the same probability. Thus, we have the following distribution:
Now assume $2 \leqslant \bar{\alpha}< \tilde{\alpha}+1$ . Recall that the following was shown in the proof of Theorem 7 when $2 \leqslant \bar{\alpha}< \tilde{\alpha}+1$ :
(i) $|I(S_\infty)|=1$ if $|N_1(\underline{v})|\in \{0,\tilde{\alpha},\tilde{\alpha}+1,\ldots,n-1\}$,
(ii) $|I(S_\infty)|=|N_1(\underline{v})|+1$ if $|N_1(\underline{v})|\in \{1,2,\ldots,\bar{\alpha}-2\}$,
(iii) $|I(S_\infty)|=n$ if $|N_1(\underline{v})|\in \{\bar{\alpha}-1,\ldots,\tilde{\alpha}-1\}$ and there is no $i\in N$ such that $k_i=k_1+1$ and $\underline{v}_{k_1+2}=i$, and
(iv) $|I(S_\infty)|=n-1$ if $|N_1(\underline{v})|\in \{\bar{\alpha}-1,\ldots,\tilde{\alpha}-1\}$ and there is $i\in N$ such that $k_i=k_1+1$ and $\underline{v}_{k_1+2}=i$.
By the assumption of the theorem, $(n-1)\tau\geqslant 1$ and $\tau \lt 1$ . Thus, if $|I(S_\infty)|=1$ we have $(n-1)\tau\geqslant |I(S_\infty)|$ , and if $|I(S_\infty)|= (n-1)$ we have $(n-1)\tau \lt |I(S_\infty)|$ . Recall that $\bar{\beta}=\min \{\lfloor(n-1)\tau \rfloor+1,\bar{\alpha}\}$ . Combining all these observations, we may write the following:
(i) $|I(S_\infty)|\in [\bar{\beta},\bar{\alpha}-1]\cup \{n-1\}$ implies
(ii) $|I(S_\infty)|\in [1,\bar{\beta}-1]\cup \{n\}$ implies $a_{N}(S_\infty)=\vec{1}$.
Note that (i) implies $a_{N}(S_\infty)\in A_{|I(S_\infty)|}$ when $|I(S_\infty)|\in [\bar{\beta},\bar{\alpha}-1]\cup \{n-1\}$. Also, as $\mathbb{P}$ is uniform, any two vectors in $A_m$, for $m\in [\bar{\beta},\bar{\alpha}-1]\cup \{n-1\}$, have the same probability. Therefore, using Theorem 7, we have the following distribution:
D.4. Proof of Theorem 10
Proof. We first explore the limiting actions for a fixed agent sequence and then use this to find the limiting probability distribution. Let $\underline{v}$ be an agent sequence and S the DVSP induced by $\underline{v}$ . Note that by Remark 1, it is enough to assume $\underline{v}\in N_\infty$ . Therefore, by Lemma 10, all the agents outside $I(S_\infty)$ have the same action limit, and all the agents in $I(S_\infty)$ have the action limit 1. Let us denote the common limit by $\gamma$ . First assume $\tau = \frac{a}{n-1}$ . By Theorem 9, $I(S_\infty)=N$ . Therefore, by Lemma 10, $a_{N}(S_\infty)=\vec{1}$ .
Now assume that $\tau \lt \frac{a}{n-1}$ . We distinguish two cases based on the value of $N_1(\underline{v})$ (as in the proof of Theorem 9) to find $\gamma$ .
Case 1: $|N_1(\underline{v})|=0$ .
Recall that for this case the final infected set is N. Hence, $a_{N}(S_\infty)=\vec{1}$ .
Case 2: $|N_1(\underline{v})|\geqslant 1$ .
Recall that for this case, the final infected set has cardinality either n or $n-1$. If the cardinality is n then $a_{N}(S_\infty)=\vec{1}$. If the cardinality is $n-1$, then as $(n-1)\tau <1$, by Lemma 11, $\gamma=\tau$. Hence,
Note that this implies $a_{N}(S_\infty)\in A_{n-1}$ . Also, as $\mathbb{P}$ is uniform, any two vectors in $A_{n-1}$ have the same probability. Thus, by Theorem 9, we have the following distribution:
Acknowledgements
We are thankful to the anonymous referees and the editors for their valuable feedback, which helped improve the paper significantly.
Funding information
S. Karmakar’s research is partially supported by NSF grant DMS 2124222. M. Podder has been supported by Grant No. SERB CRG/2021/006785, awarded by the Science and Engineering Research Board (SERB).
Competing interests
There were no competing interests to declare during the preparation or publication of this article.