
On the steady state of continuous-time stochastic opinion dynamics with power-law confidence

Published online by Cambridge University Press:  16 September 2021

François Baccelli*
Affiliation:
The University of Texas at Austin and INRIA Paris
Sriram Vishwanath*
Affiliation:
The University of Texas at Austin
Jae Oh Woo*
Affiliation:
The University of Texas at Austin
*Postal address: Department of Mathematics, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA.
***Postal address: Department of Electrical and Computer Engineering, The University of Texas at Austin, 2501 Speedway, Austin, TX 78712, USA.

Abstract

This paper introduces a non-linear and continuous-time opinion dynamics model with additive noise and state-dependent interaction rates between agents. The model features interaction rates which are proportional to a negative power of the opinion distances. We establish a non-local partial differential equation for the distribution of opinion distances and use Mellin transforms to provide an explicit formula for the stationary solution of the latter, when it exists. Our approach leads to new qualitative and quantitative results on this type of dynamics. To the best of our knowledge these Mellin transform results are the first quantitative results on the equilibria of opinion dynamics with distance-dependent interaction rates. The closed-form expressions for this class of dynamics are obtained for the two-agent case. However, the results can be used in mean-field models featuring several agents whose interaction rates depend on the empirical average of their opinions. The technique also applies to linear dynamics, namely with a constant interaction rate, on an interaction graph.

Type
Original Article
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

It is now well recognized that the polarization of opinions in societies is linked to the fact that opinion updates are predominantly due to interactions between persons already having close opinions. Several models of opinion dynamics incorporate this fact, for instance the bounded-confidence model, also known as the Hegselmann–Krause model [Reference Hegselmann and Krause22], where interactions take place between nodes with opinions less than a given confidence radius. However, none of these models offers an analytical framework allowing us to quantitatively predict how opinions cluster under such dynamics.

The present paper proposes a new model that captures the predominance of interactions between close opinions through a power-law confidence mechanism. This model is stochastic in two ways: first, interactions between any pair of agents take place at epochs of a random point process with a stochastic intensity that decreases with the opinion discrepancy between the two agents; second, the opinion of each agent evolves as a diffusion process. The main result of the present paper is that when the interaction rate decreases as a power law of the opinion discrepancy, the model is analytically tractable. This tractability goes with several qualitative results, such as the conditions for stability, those for the lack of accumulation of interactions, or the effect of parameter changes on the stationary distribution when it exists.

The analytical framework considered in this paper covers both the setting where interactions between agents happen as an autonomous point process and that where the rates of interactions depend on the discrepancies between opinions. In both settings, we represent opinions by points of the real line, and we describe the geometry of opinions by the pairwise differences between opinions. In the first setting, interactions take place independently of opinion values, whereas in the second, they are governed by the current geometry of opinions in addition to governing the evolution of this geometry. Models in the first class are in fact linear, and there is a rich analytical literature about scaling limits, convergence rate, and stationary solutions on the matter – see, e.g., [Reference Acemoğlu, Como, Fagnani and Ozdaglar1, Reference Acemoğlu and Ozdaglar3, Reference Fagnani and Zampieri20, Reference Ghaderi and Srikant21, Reference Olshevsky and Tsitsiklis36, Reference Toscani38, Reference Yildiz, Ozdaglar, Acemoglu, Saberi and Scaglione41]. As mentioned above, the second class is much more difficult to analyze and the most noticeable results concern the convergence of the dynamics to fixed points and estimates on the speed of this convergence [Reference Acemoğlu, Mostagir and Ozdaglar2, Reference Baccelli, Chatterjee and Vishwanath5, Reference Blondel, Hendrickx and Tsitsiklis12, Reference Brugna and Toscani14, Reference Como and Fagnani15, Reference Lobel, Ozdaglar and Feijer29, Reference Mirtabatabaei and Bullo33].

Most papers on the matter concentrate on the deterministic and discrete-time case. The present paper is focused on a stochastic, continuous-time model. The stochasticity first comes from the presence of an additive noise featuring self-beliefs and represented as an independent and identically distributed (i.i.d.) sequence in [Reference Baccelli, Chatterjee and Vishwanath5]. The second source of stochasticity lies in the fact that interactions happen at the epochs of a random point process with a stochastic intensity function of the opinion distances. In the power-law confidence model proposed here, this interaction rate is proportional to a negative power of the opinion distance. For this parametric setting we get a representation of the dynamics of opinion distances in terms of a stochastic differential equation, and a non-local partial differential equation for the distribution of this stochastic process. The main analytical novelty lies in the use of Mellin transforms to solve the stationary equation in a closed form. This continuous-time parametric model is used for tractability reasons in the first place; it allows us to use the powerful analytical framework of diffusion processes, and then of Mellin transforms. The closed forms obtained are hence directly linked to the specific parametric assumptions made. 
However, this model has some practical motivations as well: (i) the additive Brownian term can be seen as a diffusive force that is known to be sufficient for causing opinion polarization [Reference Baccelli, Chatterjee and Vishwanath5]; (ii) the power-law confidence model proposed here is in a sense more natural than the bounded-confidence case in that it rarely accepts interactions between distant opinions, as observed in real opinion making; (iii) in the opinion-dependent interaction case, the fact that the dynamics is in continuous time leads to interesting new phenomena such as the possibility of a fusion of opinions (which does not happen in discrete time) despite the presence of diffusive forces, as shown in the present paper.

While the main result of the present paper is the closed form for the steady state of this dynamics, the analysis also reveals further interesting qualitative properties. For instance, for small enough exponents, the power-law confidence model forbids opinion fusions despite interaction rates tending to infinity when the opinions get close; it also leads to weak consensus (namely to a stabilization of the distribution of opinion differences) despite interaction rates vanishing when the distance between opinions tends to infinity. The last property is in contrast with the polarization of opinions which always arises in the bounded-confidence case.

On the more technical side, we already mentioned the role of Mellin transforms. Such transforms have already been used to solve transport equations describing the TCP/IP protocol [Reference Baccelli, Kim and De Vleeschauwer6-Reference Baccelli, McDonald and Reynier8, Reference Dumas, Guillemin and Robert19, Reference Hollot, Misra, Towsley and Gong24]. If we consider instantaneous throughput as the analog of opinion distance, and packet losses as the analog of interactions, we get a natural connection between the halving of instantaneous throughput in the case of packet loss and the halving of opinion distance in the case of agent interactions. The main novelty regarding this work is twofold: first, we replace the transport operator (describing the linear increase of TCP) by a diffusion operator (representing the additive noise); second, whereas TCP features loss rates that increase with the value of instantaneous throughput, opinion dynamics feature interactions with a rate that decreases with opinion distance. In spite of these differences, we could extend the mathematical machinery from one case to the other.
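As a reminder of the tool invoked here, the Mellin transform of a function f is $(\mathcal{M}f)(s)=\int_0^\infty x^{s-1}f(x)\,{\textrm d}x$. A minimal numerical sketch of this definition (the test function $f(x)=\textrm{e}^{-x}$, truncation point, and grid size are illustrative choices, not taken from the paper) recovers the classical identity $(\mathcal{M}f)(s)=\Gamma(s)$:

```python
import math

# The Mellin transform of f is (Mf)(s) = \int_0^\infty x^{s-1} f(x) dx.
# Classical sanity check: for f(x) = exp(-x) it equals the Gamma function.
# The truncation point and grid size are illustrative numerical choices.
def mellin(f, s, upper=50.0, n=200_000):
    """Midpoint-rule approximation of the Mellin transform of f at s."""
    h = upper / n
    return sum(((i + 0.5) * h) ** (s - 1) * f((i + 0.5) * h) * h
               for i in range(n))

approx = mellin(lambda x: math.exp(-x), 3.0)   # close to Gamma(3) = 2
```

The same quadrature applies to any integrable density, which is how a closed-form transform can be checked against a candidate stationary distribution.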

Section 2 of this paper describes the general framework of the continuous-time model; this characterizes the dynamics in terms of diffusion with jumps. It encompasses both the case where the rates of interactions and jumps remain fixed, and the case where the rates are opinion dependent. Section 2 also presents the main results and discusses their implications on opinion dynamics. Section 3 studies the simplest case, namely the two-agent case with real-valued opinions, and discusses the notions of weak consensus and opinion fusion. The connections between opinion-dependent interactions and the bounded-confidence model are discussed at the beginning of Section 3. Section 3.2 discusses the pathwise representation of these diffusion processes with jumps. Mellin transforms are introduced in Section 3.4. Sections 3.5 and 3.9 contain the derivations of the closed-form stationary solutions. We discuss various multidimensional extensions (either more than two agents, or opinions taking their values in the Euclidean space, or interactions constrained by a graph) of the basic framework in Section 4. In particular, we show how to leverage our analysis of both the opinion-dependent and opinion-independent cases through a natural mean-field model featuring interactions between a large number of agents.

2. Main results on the opinion dynamics model

2.1. Setting for the two-agent problem

The opinion of an agent at time t will be denoted by X(t) and is assumed to take values in $\mathbb{R}$ (except in Section 4, where the case of ${\mathbb R}^d$-valued opinions is discussed). This assumption of continuous time and space is in part made for mathematical tractability, as already explained. The assumption of opinions with values in a non-compact state space appears in several papers [Reference Baccelli, Chatterjee and Vishwanath5, Reference Blondel, Hendrickx and Tsitsiklis12].

There are two types of agents: stubborn agents, with a constant opinion, and regular agents. In the absence of interaction, the opinion X(t) of a regular agent satisfies the following stochastic differential equation:

\begin{align*} \textrm{d} X(t) = \mu \textrm{d} t + \sigma \textrm{d} W(t),\end{align*}

where W(t) is a standard Brownian motion. We assume that the parameters $\mu$ and $\sigma$ of this diffusion are constant. In this opinion dynamics setting, the drift parameter $\mu$ can be interpreted as the bias of the agent and the diffusion parameter $\sigma \ge 0$ as its self-belief coefficient. In some cases, we assume that $\mu=0$.
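Between interactions, this free dynamics can be simulated by a standard Euler–Maruyama discretisation. The sketch below is illustrative only; the parameter values and step count are arbitrary choices, not from the paper.

```python
import numpy as np

# Euler-Maruyama discretisation of dX = mu dt + sigma dW
# (the free dynamics of a regular agent, between interactions).
# All parameter values are illustrative, not taken from the paper.
def simulate_free_opinion(mu=0.0, sigma=1.0, x0=0.0, T=1.0, n=1000, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    dt = T / n
    # Each increment is the drift plus a Gaussian diffusion term.
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return x0 + np.concatenate(([0.0], np.cumsum(increments)))

path = simulate_free_opinion()
```

With $\sigma=0$ the path reduces to the deterministic drift $x_0+\mu t$, matching the interpretation of $\mu$ as a bias and $\sigma$ as the self-belief coefficient.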

We focus on pairwise interactions, i.e. interactions following the gossip model of [Reference Acemoğlu, Como, Fagnani and Ozdaglar1, Reference Acemoğlu, Mostagir and Ozdaglar2, Reference Baccelli, Chatterjee and Vishwanath5, Reference Mobilia, Petersen and Redner34]. We first solve the simplest problem, which is the two-agent case comprising one regular agent and one stubborn agent. We consider multi-agent extensions in a second step.

The assumptions of the two-agent model are the following:

  1. The stubborn agent S has a fixed opinion (say 0) at all times regardless of interactions.

  2. The opinion of the regular agent X has a diffusive dynamics, together with jumps/updates of its opinion at each of the interaction epochs with the stubborn agent.

  3. The interaction times are determined by a point process N(t) with stochastic intensity $\Lambda(X(t))$ which is a function of the opinion difference, X(t), of the two agents. Here, the stochastic intensity is defined with respect to the natural filtration of $\{X(s)\}$ (see [Reference Baccelli and Brémaud4]); in the power-law model, the interaction rate decreases with distance, as in, e.g., the bounded-confidence model.

  4. At an interaction event taking place at time t, the regular agent at X(t) incorporates the opinion of the stubborn agent S by updating its current opinion from X(t) to $X(t)/\theta$, which is the weighted average of its opinion X(t) and that of S, where $\theta \ge 1$.

  5. The diffusion represents the autonomous evolution of the agent’s opinion in the absence of interactions (the so-called self-beliefs of, e.g., [Reference Baccelli, Chatterjee and Vishwanath5]).

We broadly consider two types of interaction functions $\Lambda(X(t))$, according to whether the interaction clocks depend on opinions:

  • Opinion-independent interactions: the interaction rate is constant, with $\Lambda(X(t))=\lambda \ge 0$.

  • Opinion-dependent interactions: the interaction rate depends on the current geometry of opinions. Several explicit results will be based on the power-law model, where $\Lambda(X(t))=\frac{\lambda}{|X(t)|^{\alpha}}$ for some $\alpha \ge 0$.

We can see opinion-independent interactions as domination-type interactions, where the regular agent has to incorporate the opinion of the stubborn agent regardless of the discrepancy between their opinions. We can relate opinion-dependent interactions to free will. The free will of the regular agent is modeled by the stochastic intensity $\Lambda(X(t))$. Assume, for instance, that $\Lambda(X(t))\le \lambda$. Then one can interpret $\lambda$ as the rate of interaction offers and $\Lambda(X(t))$ as the rate of accepted interactions. In this free-will reading, the regular agent is more likely to incorporate the opinion of the stubborn agent if this opinion has some proximity to its own, and may ignore it otherwise. The connections with the Hegselmann–Krause model [Reference Hegselmann and Krause22] are discussed in the next section.
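The offer/acceptance reading above is exactly a thinning construction: offers arrive at the epochs of a rate-$\lambda$ Poisson process, and an offer seen at opinion distance $x$ is accepted with probability $\Lambda(x)/\lambda$. A minimal sketch (the helper names and sample inputs are illustrative, not from the paper):

```python
import numpy as np

# Thinning ("free will") sketch: interaction offers arrive at rate lam_max;
# an offer seen at opinion distance x is accepted with probability
# Lambda(x) / lam_max.  This requires Lambda(x) <= lam_max everywhere,
# as in the bounded-intensity case.  Names and inputs are illustrative.
def accepted_offers(offer_times, opinion_at, Lambda, lam_max, rng):
    accepted = []
    for t in offer_times:
        x = opinion_at(t)                      # opinion distance just before t
        if rng.random() < Lambda(x) / lam_max:  # free-will acceptance
            accepted.append(t)
    return accepted
```

When $\Lambda(x)/\lambda$ is close to 1 (opinions close), almost all offers are accepted; when it is close to 0 (opinions far apart), almost all are ignored.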

2.2. Related models and results

When there are no random additive self-beliefs, the opinion-independent, discrete-time case was studied by Fagnani and Zampieri [Reference Fagnani and Zampieri20], where the stationary solution was analyzed through a Markov process approach. In the opinion-dependent interaction case, such as the bounded-confidence model [Reference Acemoğlu, Mostagir and Ozdaglar2, Reference Baccelli, Chatterjee and Vishwanath5, Reference Como and Fagnani15, Reference Hegselmann and Krause22], there are, to the best of our knowledge, no known explicit stationary solutions. In the discrete-time case with i.i.d. additive self-beliefs, it was shown in [Reference Baccelli, Chatterjee and Vishwanath5] that, for $\alpha>2$, the dynamics with such self-beliefs is unstable. This means that the distribution of opinion distance is not tight, a phenomenon that can be interpreted as opinion polarization. If $0 < \alpha < 2$, then there is a unique stationary regime for opinion distances, a situation which is interpreted in [Reference Baccelli, Chatterjee and Vishwanath5] as a weak form of consensus.

2.3. Summary of our results on the two-agent model

The main analytical result of the present paper is the characterization of the stationary distribution of the weak consensus (Theorem 4) that arises in the general power-law interaction model for all $\alpha$ in the range $0 < \alpha < 2$ (which includes the opinion-independent interaction case as the special case $\alpha=0$). More precisely, the partial differential equation satisfied by the density of the solution of the stochastic differential equation admits a unique probabilistic solution given in closed form.

Interestingly, we can formally extend the stationary solution to this partial differential equation to the whole range $0\le \alpha<2$ and give a probabilistic interpretation of this solution. However, in the range $1\leq \alpha <2$, the stochastic differential equation is ill-defined. The physical interpretation of this solution is unclear to the authors at this stage. We leave this interpretation as an open problem. We state these different behaviors depending on the range of $\alpha$ in Theorem 2.

In the continuous-time case considered here, several interesting new phenomena appear depending on the value range of $\alpha > 0$. For $\alpha >2$, an accumulation point of interactions can lead to some fusion of opinions. Namely, with a positive probability, there is an infinite number of interactions in a finite time with an accumulation point at a random time T, and the limit of the opinion of the regular agent tends to that of the stubborn agent as $t\to T$. After this fusion time, the solution of the stochastic differential equation is ill-defined. This is why we use the term fusion rather than a strong consensus. The result of [Reference Baccelli, Chatterjee and Vishwanath5] also suggests that when $\alpha>2$ the polarization can also take place with positive probability. For $0 < \alpha < 1$, there is no such finite accumulation point of interactions, and the solution of the stochastic differential equation exists for all times. For $1 < \alpha < 2$, we have no understanding of the pathwise solution of the stochastic differential equation, as already mentioned.

2.4. Summary of results on multi-agent models

In the opinion-independent interaction case, there is no fundamental difficulty extending the results of the two-agent problem to general multi-agent scenarios, for instance to interactions on a graph. Depending on the types of interactions [Reference DeGroot18, Reference Holley23], the stationary distribution on a general graph can be represented as a mixture of distributions or an independent sum of distributions involving hitting probability on a graph. Therefore, we shall focus on opinion-dependent cases.

For the general power-law interaction case, we show in Section 4.3 that our results can be used to analyze a mean-field version of the opinion-dependent interaction model. This version features a single stubborn agent and a collection of d regular agents; all regular agents have pairwise interactions with the stubborn agent. All regular agents have the same rate of interaction with the stubborn agent; this rate is the negative moment of order $\alpha$ of the empirical average of the opinions of the d regular agents; each agent uses this common stochastic intensity to trigger (conditionally) independent interactions with the stubborn agent. When d tends to infinity, we show numerically that the model tends to a mean-field limit. We also show that the properties of the limit can be analyzed in terms of the opinion-independent model and a consistency equation that is shown to have a single solution for $\alpha$ in the range $0 < \alpha <1$. Interestingly, there is no solution to the consistency equation of this mean-field limit for $\alpha > 1$.

3. The two-agent stochastic interaction model

We consider the two-agent model with one regular agent X and one stubborn agent S. At each interaction, the regular agent updates its opinion to the average of its opinion and that of the stubborn agent. Without loss of generality, we assume that $S(t)=\textbf{s}=0$ for all $t\geq 0$. Fix $\theta>1$ and $\theta'>1$ such that $\frac{1}{\theta} + \frac{1}{\theta'}=1$. Then, X(t) satisfies the stochastic differential equation

(1) \begin{align}{\textrm d}X(t) = \mu {\textrm d}t + \sigma {\textrm d}W(t) - \frac{X(t-)}{\theta'}N({\textrm d}t),\end{align}

where $X({t-})$ is the left limit of $\{X(s)\}$ at t, and N is a point process on the real line with stochastic intensity $\Lambda(X({t-}))$ (with respect to the natural filtration of $\{X(s)\}$ [Reference Baccelli and Brémaud4]). The parameter $\theta$ (or $\theta'$) reflects the weighted average used when an interaction occurs.

For example, if $\theta'=3$, X(t) updates its opinion by weighting its own opinion by $\frac{1}{\theta}=1-\frac{1}{\theta'}=\frac{2}{3}$ and the other (stubborn) opinion by $\frac{1}{\theta'}$; so, $X(t)=\frac{2}{3}X({t-})+\frac{1}{3}S(t)=\frac{2}{3}X({t-})$ at the moment of an interaction. The path of X(t) is almost surely continuous except at interaction times, i.e. at the epochs of $N(\cdot)$.
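A crude Euler scheme makes dynamics (1) concrete: in each small interval of length ${\textrm d}t$, an interaction fires with probability approximately $\Lambda(X(t-))\,{\textrm d}t$, at which point $X$ jumps to $X(t-)/\theta=(1-1/\theta')X(t-)$; otherwise $X$ diffuses. The sketch below uses a bounded intensity (in the spirit of model (C3) introduced below) so that the per-step jump probability is well controlled; all parameter values are illustrative, not from the paper.

```python
import numpy as np

# Crude Euler scheme for dynamics (1): diffusion steps, plus an interaction
# in [t, t+dt) with probability ~ Lambda(X(t-)) dt, at which X jumps to
# X(t-)/theta = (1 - 1/theta') X(t-).  Bounded (capped) power-law intensity;
# all parameter values are illustrative choices, not taken from the paper.
def simulate_two_agent(x0=1.0, mu=0.0, sigma=1.0, lam=5.0, alpha=1.75,
                       L=0.5, theta_prime=3.0, T=10.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x, path, n_jumps = x0, [x0], 0
    for _ in range(int(T / dt)):
        # power-law intensity, capped at L (bounded-intensity case)
        rate = min(lam / max(abs(x), 1e-12) ** alpha, L)
        if rng.random() < rate * dt:            # interaction fires
            x *= 1.0 - 1.0 / theta_prime        # jump: x -> x / theta
            n_jumps += 1
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        path.append(x)
    return np.array(path), n_jumps

path, n_jumps = simulate_two_agent()
```

With $\theta'=3$, each accepted interaction multiplies the opinion distance by $2/3$, exactly as in the worked example above.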

In order to discuss our analytical solution in a clear form, we will consider three specific forms of interaction rate $\Lambda(x)$:

  1. (C1) $\Lambda(x)=\lambda$ (opinion-independent case);

  2. (C2) $\Lambda(x)=\frac{\lambda}{|x|^\alpha}$ for $\alpha>0$ (opinion-dependent case, power-law confidence, unbounded intensity);

  3. (C3) $\Lambda(x)=\min\big\{\frac{\lambda}{|x|^\alpha}, L \big\}$ for a large constant $L>0$ (opinion-dependent case, power-law confidence, bounded intensity).

Note that (C1) is a special case of (C2) (when $\alpha=0$). It is, however, qualitatively very different. The dependent models (C2) and (C3) differ in that the former gives an unbounded intensity and the latter a bounded one. When the intensity is unbounded, it is not guaranteed that the dynamics (1) is well-defined. For instance, if $X(0)=0$, the solution of this stochastic differential equation is ill-defined. This is further discussed in Theorem 1.
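The three intensity functions are elementary to write down; the sketch below uses the parameter values of Figure 1 ($\lambda=0.5$ for (C1); $\lambda=5$, $\alpha=1.75$, $L=0.5$ for (C2) and (C3)) and shows that (C2) blows up near $x=0$ while (C3) saturates at $L$:

```python
# The three intensity functions (C1)-(C3), with the parameters of Figure 1
# (lam = 0.5 for (C1); lam = 5, alpha = 1.75, L = 0.5 for (C2) and (C3)).
def rate_C1(x, lam=0.5):
    return lam                              # constant, opinion-independent

def rate_C2(x, lam=5.0, alpha=1.75):
    return lam / abs(x) ** alpha            # unbounded: blows up as x -> 0

def rate_C3(x, lam=5.0, alpha=1.75, L=0.5):
    return min(lam / abs(x) ** alpha, L)    # bounded: capped at L

# Near x = 0, (C2) is unbounded while (C3) saturates at L:
vals = [(x, rate_C2(x), rate_C3(x)) for x in (10.0, 1.0, 0.01)]
```

For large $|x|$ the (C2) and (C3) rates coincide and vanish, while (C1) keeps its constant value.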

3.1. Relationship with the bounded-confidence model

The models discussed above differ from the bounded-confidence model [Reference Deffuant17, Reference Hegselmann and Krause22]. The latter assumes that interactions between two agents only occur when the distance between their opinions is less than some fixed (confidence) range. In the continuous-time setting, each agent has an exponential clock with a constant rate; when its clock ticks, it averages its opinion with that of all opinions at a distance less than the confidence range. In contrast, model (C1) assumes a constant interaction rate, but has no confidence range limitation. Model (C2) assumes a gradual decrease of the interaction rate with the opinion distance but is never zero, even for large distances. As illustrated by the example in Figure 1, in terms of interaction rate, model (C3) can be seen as the closest to the bounded-confidence model as it also features a constant interaction rate within a given range. However, like model (C2), model (C3) allows for interactions at all distances.

Figure 1. The bounded-confidence model with interaction rate function equal to $0.5$ on $[-5, 5]$ vs. the rate functions of the proposed model. The parameters are $\lambda= 0.5$ for (C1), and $\lambda = 5$, $\alpha=1.75$, and $L= 0.5$ for (C2) and (C3).

In spite of the proximity of the interaction models, in the presence of Brownian self-beliefs, the bounded-confidence model and the power-law confidence model differ in a fundamental way. As we shall see below, the latter is stable (positive recurrent) for small enough values of the interaction exponent $\alpha$, whereas the former is never stable, whatever the finite confidence range. This last fact was already observed in [Reference Baccelli, Chatterjee and Vishwanath5] for the discrete-time model and immediately follows from the null recurrence of Brownian motion here. Hence, in spite of the proximity of the interaction functions, bounded confidence always leads to polarization whereas power-law confidence models can lead to weak consensus.

3.2. Stability and construction of the opinion-dependent dynamics

In this section we prove that the solution of the stochastic differential equation associated with model (C2) is well-defined when $0\leq \alpha < 1$. For this, we use the Engelbert–Schmidt 0–1 law. Next, we show that, almost surely, no accumulation points of interactions can appear in a finite time horizon, which allows one to give a pathwise construction of the process X(t) for all $t\geq 0$. We then discuss the behavior of the dynamics when $1\leq \alpha <2$ and $\alpha\geq 2$.

Lemma 1. (Engelbert–Schmidt 0–1 law.) Let $\{W(t)\}$ be a standard Brownian motion. Assume that $W(0)=0$. For any $t>0$,

\begin{align*} \mathbb{P}\bigg[\int_{0}^{t} \frac{1}{|W(s)|^\alpha}\,{\textrm d}s < +\infty \ \text{for all } 0\leq t <\infty \bigg] = \begin{cases} 1 & \quad \text{if }\ 0\leq \alpha < 1,\\ 0 & \quad \text{if }\ \alpha\geq 1.\\ \end{cases} \end{align*}

Proof. See [Reference Karatzas and Shreve26] or [Reference Yen and Yor40].

We now show that Lemma 1 implies the integrability of the stochastic intensity when $0< \alpha <1$. The first step is the following theorem.

Theorem 1. Consider model (C2) with $\mu=0$ and $0\leq \alpha <1$. Let $X(0)=x_0$, and let $\gamma_{1}$ be the first interaction time of X(t). For all $t>0$, there exists a function $\chi(t)>0$ which does not depend on $x_0$ such that (i) $\mathbb{P}\left[ \gamma_1 \geq t \right] \geq \chi(t)$, (ii) the map $t\to \chi(t)$ is non-increasing, and (iii) $\chi(t)\to 1$ as $t\to 0$.

Proof. Without loss of generality, we may assume that $x_0\geq 0$ (when $x_0\leq 0$, we can apply the symmetry of the Brownian motion). From the definition of the interaction point process,

\begin{align*}\mathbb{P}\big[ \gamma_1 \geq t \big] =\mathbb{E} \bigg[ \exp \left\{ -\int_{0}^{t} \frac{\lambda}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \right\}\bigg].\end{align*}

Below, we fix $t>0$ and $\delta>0$, and consider three cases:

Case I $\left[X(0)=x_0=0 \right]$. By Lemma 1, $\int_{0}^{t} \frac{1}{|W(s)|^{\alpha}}\,{\textrm d}s < + \infty$ almost surely (a.s.). Hence, $\lim_{t\to 0}\int_{0}^{t} \frac{1}{|W(s)|^{\alpha}}\,{\textrm d}s = 0$ a.s. Let $\epsilon_1(t):=\mathbb{E}\left[ \exp\left\{-\int_{0}^{t} \frac{\lambda}{|W(s)|^{\alpha}}\,{\textrm d}s \right\} \right] \leq 1$. Since $\mathbb{P}\left[\int_{0}^{t} \frac{1}{|W(s)|^\alpha}\,{\textrm d}s < +\infty \right] =1$, we have $\epsilon_1(t)>0$. Note that, by dominated convergence,

\begin{align*}\lim_{t\to 0}\epsilon_1(t) =\mathbb{E}\left[ \exp\left\{- \lim_{t\to 0} \int_{0}^{t} \frac{\lambda}{|W(s)|^{\alpha}}\,{\textrm d}s \right\} \right] = 1.\end{align*}

Case II $\left[X(0)=x_0\geq \delta\right]$. It is well known that, for $\delta>0$, we can find $\epsilon_2(t, \delta)>0$ such that

\begin{align*}\epsilon_2(t, \delta):=\mathbb{P}\bigg[ \inf_{0\leq s\leq t} W(s) \geq -\frac{\delta}{2}\bigg] = \Phi \Big( \frac{\delta}{2\sqrt{t}\,} \Big) - \Phi \Big( -\frac{\delta}{2\sqrt{t}\,} \Big)>0,\end{align*}

where $\Phi(\cdot)$ is the cumulative normal distribution function [Reference Borodin and Salminen13]. So $\epsilon_2(t,\delta)> 0$ and $\epsilon_2(t,\delta)\to 1$ as $t\to 0$. Since $|x+x_0|\geq \frac{x_0}{2}\geq\frac{\delta}{2}$ for $x\geq -\frac{x_0}{2}$, conditioned on $\left\{\inf_{0\leq s\leq t} W(s) \geq -\frac{\delta}{2}\right\}$,

\begin{align*}\int_{0}^{t} \frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \leq \left(\frac{2}{\delta}\right)^{\alpha}t.\end{align*}

Hence, if $x_0\ge \delta$,

\begin{align*}\mathbb{P}\left[ \gamma_1 \geq t \right] &= \mathbb{E}\left[ \exp\left\{-\int_{0}^{t} \frac{\lambda}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \right\} \right] \\[3pt] & \geq \mathbb{E}\left[ \exp\left\{-\int_{0}^{t} \frac{\lambda}{|W(s)+x_0|^{\alpha}}{\textrm d}s \right\} \, \Big| \, \inf_{0\leq s\leq t} W(s) \geq -\frac{\delta}{2} \right] \mathbb{P}\left[ \inf_{0\leq s\leq t} W(s) \geq -\frac{\delta}{2} \right] \\[3pt] & \geq \exp\left\{-\left(\frac{2}{\delta}\right)^{\alpha}\lambda t \right\} \epsilon_2(t, \delta):=\epsilon_3(t, \delta).\end{align*}

So $\epsilon_3(t,\delta)> 0$.
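The reflection-principle probability used in Case II can be sanity-checked by Monte Carlo. A minimal sketch (the values of $t$, $\delta$, and the discretisation are illustrative choices; the discretised minimum slightly overestimates the true probability):

```python
import math
import numpy as np

# Monte Carlo check of the hitting probability from Case II:
#   P[ inf_{0<=s<=t} W(s) >= -delta/2 ]
#     = Phi(delta/(2 sqrt(t))) - Phi(-delta/(2 sqrt(t))).
def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

t, delta = 1.0, 1.0
exact = Phi(delta / (2 * math.sqrt(t))) - Phi(-delta / (2 * math.sqrt(t)))

rng = np.random.default_rng(0)
n_paths, n_steps = 5000, 1000
# Discretised Brownian paths: cumulative sums of Gaussian increments.
incs = math.sqrt(t / n_steps) * rng.standard_normal((n_paths, n_steps))
path_min = np.cumsum(incs, axis=1).min(axis=1)
estimate = float(np.mean(path_min >= -delta / 2))
```

The agreement (up to discretisation bias and Monte Carlo noise) confirms the formula $\epsilon_2(t,\delta)=\Phi\big(\frac{\delta}{2\sqrt{t}}\big)-\Phi\big({-}\frac{\delta}{2\sqrt{t}}\big)$.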

Case III $\left[ 0<X(0)=x_0< \delta\right]$. Let $\tau_{0,\delta}$ be the first hitting time of $\{0,\delta\}$ by W(t). Then, by using the Green’s function formula [Reference Kallenberg25, Lemma 20.10], [Reference Ramanan37, Lemma 5.4],

\begin{align*}\mathbb{E} \left[ \int_{0}^{\tau_{0,\delta}} \frac{\lambda}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \right] = \int_{0}^{\delta} \frac{\lambda}{y^{\alpha}} g(y)\,{\textrm d}y,\end{align*}

where

\begin{equation*}g(y)=\frac{2(\min\{x_0, y \} )(\delta - \max\{x_0, y\})}{\delta}.\end{equation*}

By an elementary calculation,

\begin{align*}\mathbb{E} \left[ \int_{0}^{\tau_{0,\delta}} \frac{\lambda}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \right] =2\lambda\left( \frac{1}{1-\alpha} -\frac{1}{2-\alpha} \right)\left( x_0\delta^{1-\alpha} - x_0^{2-\alpha}\right).\end{align*}

This function is maximized at $x_0=\delta \left(2-\alpha\right)^{-\frac{1}{1-\alpha}}$. Hence,

\begin{align*}&\mathbb{E} \left[ \int_{0}^{\tau_{0,\delta}} \frac{\lambda}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \right] \leq \\[3pt] &\quad 2\lambda\left( \frac{1}{1-\alpha} -\frac{1}{2-\alpha} \right) \left( (2-\alpha)^{-\frac{1}{1-\alpha}}-(2-\alpha)^{-\frac{2-\alpha}{1-\alpha}}\right)\delta^{2-\alpha} =: \epsilon_4(\delta).\end{align*}

Notice that for $\delta$ small enough, $\epsilon_4(\delta)<1$. Then we consider two sub-cases, depending on the order of t and $\tau_{0,\delta}$.

When $t\geq \tau_{0,\delta}$,

\begin{align*}\int_{0}^{t} \frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s = \int_{0}^{\tau_{0,\delta}} \frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s + \int_{\tau_{0,\delta}}^{t} \frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s =:\xi.\end{align*}

By the strong Markov property of Brownian motion, we can rewrite

\begin{align*}\xi & = \int_{0}^{\tau_{0,\delta}} \frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s + \int_{0}^{t-\tau_{0,\delta}} \frac{1}{|W'(s)+z'|^{\alpha}}\,{\textrm d}s \\[3pt] & \leq \int_{0}^{\tau_{0,\delta}} \frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s + \int_{0}^{t} \frac{1}{|W'(s)+z'|^{\alpha}}\,{\textrm d}s,\end{align*}

where $z'=0$ if $W(\tau_{0,\delta})=0$ and $z'=\delta$ otherwise. Here, $W'(t)$ is an independent Brownian motion with $W'(0)=0$.

When $ t<\tau_{0,\delta}$,

\begin{align*}\int_{0}^{t} \frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \leq \int_{0}^{\tau_{0,\delta}} \frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s.\end{align*}

Therefore in both sub-cases,

\begin{align*}\int_{0}^{t}\frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \leq \int_{0}^{\tau_{0,\delta}} \frac{1}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s + \int_{0}^{t} \frac{1}{|W'(s)+z'|^{\alpha}}\,{\textrm d}s .\end{align*}

We can summarize Case III by

\begin{eqnarray*}\mathbb{P}[ \gamma_1 \geq t] & = &\mathbb{E}\bigg[ \exp\left\{-\int_{0}^{t} \frac{\lambda}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \right\} \bigg]\\[5pt] &\geq & \mathbb{E}\bigg[ \exp\left\{ -\int_{0}^{\tau_{0,\delta}} \frac{\lambda}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s \right\} \bigg]\mathbb{E}\bigg[ \exp\left\{ - \int_{0}^{t} \frac{\lambda}{|W'(s)+z'|^{\alpha}}\,{\textrm d}s \right\} \bigg]\\[5pt] &\geq & \left( 1- \epsilon_4(\delta) \right) \min\left\{ \epsilon_1(t), \epsilon_3(t,\delta) \right\} =:\epsilon_5(t,\delta),\end{eqnarray*}

where we used $\mathbb{E}[ \textrm{e}^{-Y} ] \geq 1-\mathbb{E}\left[Y\right]$ for the last inequality. Putting all three cases together, we conclude that $\mathbb{P}\left[ \gamma_1 \geq t \right] \geq \min\{ \epsilon_1(t),\epsilon_3(t,\delta),\epsilon_5(t,\delta) \} =:\widetilde \epsilon(t,\delta) >0$, where $\widetilde \epsilon(t,\delta)$ does not depend on $x_0$ and is strictly positive if $\delta$ is small enough.

Let

\begin{equation*} \chi(t)=\begin{cases}\widetilde\epsilon(t,t^{1/3}) & \text{if } \epsilon_4(t^{1/3}) <1 , \\ 1 & \text{otherwise}.\end{cases}\end{equation*}

Therefore, we conclude that $\chi(t)$ is strictly positive, non-increasing, and that in addition $\chi(t)\to 1$ as $t\to 0$.

Corollary 1. Under the assumptions of Theorem 1, the random variable $\gamma_1$ is stochastically larger than a random variable $\eta$ with distribution $H(t):=1-\chi(t)$ on $\mathbb{R}^{+}_0$ which does not depend on $x_0$.

Proof. The function $1-\chi(t)$ constructed in the theorem can be taken as the cumulative distribution function of a random variable $\eta$ on $\mathbb{R}^+\cup \{\infty\}$. The properties established in the theorem show that (i) $\gamma_1$ is stochastically larger than $\eta$, (ii) the distribution of $\eta$ does not depend on $x_0$, and (iii) $\eta$ is strictly positive a.s.

When $0\leq \alpha<1$, we can build the process in a pathwise sense by induction on the stopping times $\gamma_n$, $n=1,2,\ldots$, of interactions, the general idea being that the time that elapses between $\gamma_n$ and the first interaction time after $\gamma_n$ is stochastically larger than a random variable $\eta_n$ with distribution H. This, together with the strong Markov property, then implies that $\gamma_n\to \infty$ a.s. as $n\to \infty$. More precisely, assume that $W(0)=0$. Let $\gamma_0=0$ and $\gamma_1$ be the first interaction time. Let $\mathcal{F}_t$ be the filtration of W(t) and N(t). Conditioning on $\left\{ X(0)=x_0 \right\}$, $X(t)=W(t)+x_0$ for all $0\leq t<\gamma_1$, and $\gamma_1$ is an $\mathcal{F}_t$-stopping time. Theorem 1 implies that $\gamma_1$ is a strictly positive random variable a.s. In addition, there exists a random variable $\eta_1$ with distribution H such that $\gamma_1\ge \eta_1$ a.s. On the event $\gamma_1<+\infty$, we define $X({\gamma_1}):=\frac{W({\gamma_1})+x_0}{\theta}$. Again conditioning on $\left\{ W({\gamma_1})=x_1 \right\}$, by the strong Markov property, $W({\gamma_1+t})\,{\buildrel \textrm{d} \over =}\, W(t) + x_1$. Then the second interaction time $\gamma_2>\gamma_1$ is well-defined, and we can set

\begin{align*}X(\gamma_1+t) := X({\gamma_1}) + W({\gamma_1+t}) - W({\gamma_1})=\frac{x_0}{\theta}-\frac{x_1}{\theta'}+W({\gamma_1+t}) ,\end{align*}

for all $0\leq t<\gamma_2-\gamma_1$. By the same argument as above, there exists a random variable $\eta_2$ with distribution H such that $\gamma_2-\gamma_1\ge \eta_2$ a.s. By the strong Markov property, we can take $\eta_2$ independent of $\eta_1$. More generally, we can prove by induction the existence of the stopping times $\gamma_1<\gamma_2<\cdots$ and i.i.d. random variables $\eta_1,\eta_2,\ldots$ such that we can construct $X(\gamma_n+t)$ by the formula $X(\gamma_n+t) := X({\gamma_n})+W({\gamma_n+t})-W({\gamma_n})$ for $0\leq t < \gamma_{n+1}-\gamma_{n}$, and

\begin{align*}X({\gamma_{n+1}}):=\frac{1}{\theta}X({\gamma_{n+1}-})=\lim_{t\uparrow (\gamma_{n+1}-\gamma_{n})} \frac{1}{\theta} X(\gamma_n+t).\end{align*}

It remains to prove that $\gamma_n$ tends to infinity as $n\to \infty$. Since the sequence $\{\eta_n\}$ is i.i.d., by the strong law of large numbers $\gamma_n \ge \sum_{i=0}^{n-1} \eta_i \to \infty$ as $n\to\infty$. Therefore, the process X(t) is a.s. well-defined for all times t.

Theorem 2. Assume that $\Lambda(X(t))=\frac{\lambda}{|X(t)|^{\alpha}}$. For $0\leq \alpha<1$, almost surely the interaction times have no finite accumulation point, and the stochastic process $\{X(t)\}$ is well-defined over the whole time horizon.
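For $0\leq \alpha<1$, the pathwise construction can also be mimicked numerically. The sketch below is our own illustration (all parameter values are arbitrary choices): it uses a crude Euler-type discretization in which the opinion diffuses over each small step and an interaction fires with probability $\min(1,\Lambda(x)\,\Delta t)$, dividing the opinion by $\theta$. Capping the probability at 1 makes this only an approximation near $x=0$, where the true rate is unbounded.

```python
import math
import random

def simulate_opinion(T=50.0, dt=1e-3, lam=2.0, alpha=0.5,
                     mu=0.0, sigma=3.0, theta=2.0, x0=1.0, seed=1):
    """Euler-type approximation of dX = mu dt + sigma dW, with X divided by
    theta at interaction epochs of (approximate) rate lam/|X|^alpha."""
    rng = random.Random(seed)
    x, jumps = x0, 0
    for _ in range(int(T / dt)):
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)  # diffusion step
        rate = lam / max(abs(x), 1e-9) ** alpha                     # Lambda(x), capped
        if rng.random() < min(1.0, rate * dt):                      # interaction fires
            x /= theta
            jumps += 1
    return x, jumps

x_final, n_jumps = simulate_opinion()
print(x_final, n_jumps)
```

With a state-dependent rate, interactions cluster when the opinion is near the stubborn opinion at 0, which is the qualitative behaviour discussed later for case (C2).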

When $\alpha \geq 1$, the proof of Theorem 1 cannot be adapted. For instance, the dynamics are always ill-defined when starting from $x_0=0$ since, by Lemma 1, for all $t>0$,

\begin{align*}\mathbb{P}\left[\int_{0}^{t} \frac{\lambda}{|W(s)|^{\alpha}}\,{\textrm d}s = +\infty\right] =1.\end{align*}

For $1\leq \alpha < 2$, the process has no finite accumulation point of interactions almost surely until the first hitting time of 0, and is pathwise ill-defined after the hitting time. For $\alpha \geq 2$, the dynamics are ill-defined even when starting from $x_0\ne 0$ by the law of the iterated logarithm for Brownian motion. See [Reference Mörters and Peres35, Theorem 5.1 and Corollary 5.3]. Note that when $x_0=0$, the process is ill-defined by Lemma 1. The law of the iterated logarithm for Brownian motion implies that, for any $t>0$, there exists a constant $C>1$ such that $|W(t+h)-W(t)|\leq C|2h\log\log(1/h)|^{\frac{1}{2}}$ almost surely for all $0\leq h \leq \epsilon$ with some $\epsilon>0$. Then, for $\alpha\geq 2$ and given the first hitting time $\gamma_1$,

\begin{align*} |W(\gamma_1)+x_0-W(\gamma_1-h)-x_0| = |W(\gamma_1-h)+x_0| \leq C|2h\log\log(1/h)|^{\frac{1}{2}}.\end{align*}

Therefore, when $x_0\neq 0$, by choosing $\epsilon'<\min\{\epsilon,\gamma_1, 1/\textrm{e} \}$ (where $\textrm{e}$ is the base of the natural logarithm),

\begin{align*} \int_{0}^{\gamma_1} \frac{\lambda}{|W(s)+x_0|^{\alpha}}\,{\textrm d}s &\geq \int_{\gamma_1-\epsilon'}^{\gamma_1} \frac{\lambda}{C^{\alpha}|2(\gamma_1-s)\log\log (1/(\gamma_1-s))|^{\frac{\alpha}{2}}}\,{\textrm d}s\\ &\geq \int_{0}^{\epsilon'} \frac{\lambda}{C^2|2s\log\log(1/s)|}\,{\textrm d}s = +\infty \quad \text{almost surely.}\end{align*}

Hence, when $\alpha\geq 2$, finite accumulations of interactions occur almost surely and the process is pathwise ill-defined in the vicinity of zero.

3.3. Fokker–Planck evolution equation

We now establish the Kolmogorov forward equation (also referred to as the Fokker–Planck evolution equation) of the probability density of X(t). This will be done under the following assumptions.

Assumption 1.

  1. (i) The function $x\mapsto \Lambda(x)$ is measurable.

  2. (ii) For all $0\leq a \le b < +\infty$, $\int_{a}^{b} \Lambda(X(t)) \, {\textrm d}t < +\infty$ almost surely.

Then, $\Lambda(X(t))$ is predictable and X(t) under (1) satisfies [Reference Björk11, Assumption 6.1.1]. Note that conditions (C1) and (C3) imply Assumption 1. Under (C2), Assumption 1 holds when $0\le \alpha \le 1$. For the next theorem, we use the smoothness of the density of X(t) proved in Appendix A.

Theorem 3. Assume $\theta>1$. Under Assumption 1, the density p(t,x) of X(t) satisfies the non-local partial differential equation

\begin{equation*} \frac{\partial p(t,x)}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2 p(t,x)}{\partial x^2} - \mu \frac{\partial p(t,x)}{\partial x} - \Lambda(x) p(t,x) +\theta \Lambda(\theta x) p(t,\theta x).\end{equation*}

Proof. We follow the approach described in [Reference Björk11, Propositions 6.2.1, 6.2.2]. Under Assumption 1, the stochastic intensity $\Lambda(X(t-))$ is locally integrable and predictable in the sense of [Reference Björk11]. For any function $g:\mathbb{R}\times\mathbb{R}\to \mathbb{R}$ which is in $C^{1,2}$, we have, from Itô's formula,

\begin{align*}{\textrm d}g(t,X(t)) =& \bigg\{\frac{\partial g}{\partial t}(t,X(t))+\mu\frac{\partial g}{\partial x}(t,X(t)) + \frac{\sigma^2}{2}\frac{\partial^2 g}{\partial x^2}(t,X(t)) \bigg\}{\textrm d}t + \sigma\frac{\partial g}{\partial x}(t,X(t)){\textrm d}W(t)\\& + \left(g\left(t,\frac{X(t)}{\theta}\right) - g(t,X(t)) \right)N({\textrm d}t).\end{align*}

Then, the infinitesimal generator can be described as follows. For any function $f:\mathbb{R}\to \mathbb{R}$ which is in $C^{2}$, we have

\begin{align*}\mathcal{A}f = \mu\frac{\textrm{d} f}{\textrm{d} x}(x) + \frac{\sigma^2}{2}\frac{\textrm{d}^2 f}{\textrm{d} x^2}(x)+ \left(f\left(\frac{1}{\theta}x\right)- f(x)\right)\Lambda(x).\end{align*}

The adjoint operator $\mathcal{A}^*$ is given by

\begin{align*}\mathcal{A}^*f = -\mu\frac{\textrm{d} f}{\textrm{d} x}(x) +\frac{\sigma^2}{2}\frac{\textrm{d}^2 f}{\textrm{d} x^2}(x) +\theta f\left(\theta x\right)\Lambda\left(\theta x\right)- f(x)\Lambda(x),\end{align*}

since $\int f\big(\frac{1}{\theta}x\big)h(x)\, {\textrm d}x = \int \theta f(x)h(\theta x)\,{\textrm d}x$ for all h. By Lemma 5 in Appendix A, the probability density function $p(t,x) \in C^{1,2}$. So, the probability density function p(t,x) satisfies $\mathcal{A}^*p(t,x) = \frac{\partial p(t,x)}{\partial t} $. Therefore, we have the forward evolution equation

\begin{align*} \frac{\partial p(t,x)}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2 p(t,x)}{\partial x^2} - \mu \frac{\partial p(t,x)}{\partial x} - \Lambda(x) p(t,x) +\theta \Lambda(\theta x) p(t,\theta x). \tag*{$\square$}\end{align*}

In steady state, the density p(t,x) is invariant in the time t, so that $\frac{\partial p(t,x)}{\partial t}=0$.

Corollary 2. The stationary distribution, when it exists, satisfies the non-local ordinary differential equation (ODE)

(2) \begin{align} \sigma^2\frac{{\textrm d}^2 p(x)}{{\textrm d} x^2} - 2\mu \frac{{\textrm d} p(x)}{{\textrm d} x} = 2\Lambda(x) p(x) -2\theta \Lambda(\theta x) p(\theta x). \end{align}

3.4. Some tools

To find the stationary solution of the stochastic differential equation (1) or the ordinary differential equation (2), we will rely on PASTA and Mellin transforms.

3.4.1. PASTA

The ‘Poisson arrivals see time averages’ (PASTA) property of stochastic processes is well known in queuing theory [Reference Baccelli and Brémaud4, Reference Melamed and Whitt30, Reference Melamed and Whitt31, Reference Wolff39]. Let N(t) be a stationary point process with points $\{t_n\}$. Let $\{\mathcal{F}_t\}$ be a filtration such that N(t) is ${\mathcal F}_t$-measurable for all t.

Lemma 2. Let X(t) be an $\mathcal{F}_t$-predictable, stationary stochastic process. The sequence $Y(n) := X(t_n-)$, $n= \ldots,-2,-1,0,1,2,\ldots$, is stationary. If the $\mathcal{F}_t$ stochastic intensity of N is constant, then the stationary distribution of Y(n) coincides with the stationary distribution of X(t).

Proof. See [Reference Baccelli and Brémaud4, Theorem 3.3.1].

3.4.2. Mellin transform

The Mellin transform of a non-negative function f(x) on $\mathbb{R}_+=(0,\infty)$ is defined by

(3) \begin{align}\mathcal{M}(f;s)=\int_{0}^{\infty} x^{s-1}f(x)\,{\textrm d}x ,\end{align}

when the integral exists. So the Mellin transform is an extended moment transform of a function f(x).

The integral (3) defines a transform in a vertical strip of the complex s plane. Assuming that $\mathcal{M}(f;s)$ is finite for $a \le \text{Re}(s) \le b$, the inversion of the Mellin transform is given by

\begin{align*} f(x)=\frac{1}{2\pi \textrm{i}}\int_{c-\infty \textrm{i}}^{c+\infty \textrm{i}} x^{-s}\mathcal{M}(f;s)\,{\textrm d}s \qquad \text{for } a<c<b.\end{align*}
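As a sanity check on definition (3), one can verify numerically that the Mellin transform of $f(x)=\textrm{e}^{-x}$ is $\Gamma(s)$, a classical transform pair. The quadrature below is a rough midpoint rule of our own choosing, not part of the paper:

```python
import math

def mellin(f, s, upper=60.0, n=200000):
    """Midpoint-rule approximation of the Mellin transform (3) on (0, upper)."""
    h = upper / n
    return h * sum(((k + 0.5) * h) ** (s - 1) * f((k + 0.5) * h) for k in range(n))

# Classical pair: the Mellin transform of exp(-x) is the Gamma function.
for s in (1.0, 2.5, 4.0):
    print(s, mellin(lambda x: math.exp(-x), s), math.gamma(s))
```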

We will also leverage the following Euler type identity [Reference Baccelli, Kim and McDonald7].

Lemma 3. For any $\theta > 1$,

\begin{align*} \prod_{k=0}^{\infty}\bigg( 1-\frac{1}{\theta^{s+k}} \bigg) = \sum_{n= 0}^{\infty} \frac{1}{\theta^{sn}}\prod_{k=1}^{n} \bigg( \frac{\theta}{1-\theta^k} \bigg), \end{align*}

where by convention $\prod_{k=1}^{0} \big( \frac{\theta}{1-\theta^k}\big) = 1$.
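Lemma 3 is easy to check numerically. The snippet below (the truncation depths and the test values $\theta=2$, $s\in\{0.5,1,3\}$ are arbitrary choices of ours) compares truncations of the two sides:

```python
def euler_lhs(theta, s, K=200):
    """Left side of Lemma 3: truncated infinite product."""
    prod = 1.0
    for k in range(K):
        prod *= 1.0 - theta ** (-(s + k))
    return prod

def euler_rhs(theta, s, N=60):
    """Right side of Lemma 3; coeff holds prod_{k=1}^{n} theta/(1-theta^k)."""
    total, coeff = 0.0, 1.0
    for n in range(N):
        total += coeff / theta ** (s * n)
        coeff *= theta / (1.0 - theta ** (n + 1))
    return total

for s in (0.5, 1.0, 3.0):
    print(s, euler_lhs(2.0, s), euler_rhs(2.0, s))
```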

3.5. Probabilistic representation of the solution under (C1)

In this section we assume that (C1) holds and that the bias term $\mu$ in (1) can be non-zero. A sample path is plotted in Figure 2 for illustration.

Figure 2. Evolution of the opinion value when $\mu=0$, $\lambda = 2.0$, $\sigma=3.0$, $\alpha=0$, and $\theta=\theta'=2.0$.

Let $T=\{t_1,t_2,\ldots\}$, where $t_1\leq t_2\leq \cdots$ denotes the set of epochs of the interaction Poisson point process of intensity $\lambda$. At time $t_n$, the regular agent X interacts with the stubborn agent S. For each n, let $Y(n) := \lim_{t\uparrow t_n} X(t)= X({t_n^{-}})$ denote the state just prior to the interaction time $t_n$. Let $\Delta t_n := t_{n+1}-t_{n}$. The sequence $\{\Delta t_n\}$ is an i.i.d. sequence of Exp$(\lambda)$ random variables. The following stochastic recurrence equation holds:

(4) \begin{align}Y({n+1}) = \frac{1}{\theta} Y(n) + W'(n),\end{align}

where the sequence $\{W'(n)\}$ is again an i.i.d. sequence of random variables with density

\begin{equation*} h(x)= \int_{0}^\infty\frac 1 {\sqrt{2\pi \sigma^2 t}\,} \exp\left\{-\frac{(x-\mu t)^2}{2 \sigma^2 t}\right\}\lambda \textrm{e}^{-\lambda t}\, {\textrm d}t.\end{equation*}

The sequence $\{Y(n)\}$ is called an embedded chain of X(t). From the recurrence equation (4), we can represent the stationary solution Y as

\begin{equation*} Y = \sum_{n=0}^{\infty} \frac{W'(n)}{\theta^n}.\end{equation*}

Consider a stationary Poisson point process with i.i.d. marks. The mark of point $t_n$ is a standard Brownian motion starting from 0 (only the restriction of this Brownian motion from time 0 to time $t_{n+1}-t_n$ is useful). Let $\{{\mathcal F}_t\}$ be the $\sigma$-algebra generated by this marked point process. The stochastic process X(t) is ${\mathcal F}_t$-adapted and also ${\mathcal F}_t$-predictable when assuming its paths are left-continuous. The ${\mathcal F}_t$-intensity of N is the constant $\lambda$. It then follows from the PASTA property in Lemma 2 that the stationary distribution of X(t) coincides with the distribution of Y. Hence, the stationary distribution of X is that of a geometrically weighted sum of i.i.d. mixtures of Gaussian random variables.

Proposition 1. Under (C1) the stationary distribution of X is that of the sum

(5) \begin{align}\sum_{j=0}^{\infty} \frac{V({j})}{\theta^j},\end{align}

where the V(j) are i.i.d. mixtures of Gaussians with density h.

Corollary 3. The characteristic function of the stationary distribution of X is

\begin{align*} \mathbb{E}\big[\textrm{e}^{\textrm{i}\xi X({+\infty})}\big] = \prod_{j=0}^{\infty}\bigg[ \frac{\lambda}{\lambda-\dfrac{\textrm{i}\mu \xi}{\theta^j} +\dfrac{\sigma^2 \xi^2}{2\theta^{2j}}} \bigg].\end{align*}

Note that each of the summands in (5) (before scaling by $\theta^j$) follows a geometric stable (Linnik) distribution, so that the stationary distribution is a geometrically weighted sum of i.i.d. geometric stable random variables. Figure 3 plots the density of this distribution.
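A Monte Carlo sketch (our own check, with the parameter values of Figure 3 and an arbitrary truncation of the series at 40 terms) compares the empirical mean and variance of the representation (5) with the moments obtained by differentiating the product of Corollary 3 at $\xi=0$, namely $\mathbb{E}[X(+\infty)]=\frac{\mu/\lambda}{1-1/\theta}$ and $\operatorname{Var}[X(+\infty)]=\frac{\sigma^2/\lambda+(\mu/\lambda)^2}{1-1/\theta^2}$:

```python
import math
import random

def sample_Y(lam, mu, sigma, theta, rng, J=40):
    """One draw of sum_j V(j)/theta^j, V(j) = mu*T + sigma*sqrt(T)*Z, T ~ Exp(lam)."""
    y = 0.0
    for j in range(J):
        t = rng.expovariate(lam)
        y += (mu * t + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0)) / theta ** j
    return y

lam, mu, sigma, theta = 3.0, 0.1, 0.02, 2.0
rng = random.Random(0)
N = 20000
ys = [sample_Y(lam, mu, sigma, theta, rng) for _ in range(N)]
emp_mean = sum(ys) / N
emp_var = sum((y - emp_mean) ** 2 for y in ys) / N

# Moments implied by Corollary 3
th_mean = (mu / lam) / (1.0 - 1.0 / theta)
th_var = (sigma ** 2 / lam + (mu / lam) ** 2) / (1.0 - 1.0 / theta ** 2)
print(emp_mean, th_mean, emp_var, th_var)
```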

Figure 3. Simulated histogram when $\lambda = 3.0$, $\sigma=0.02$, $\mu=0.1$, $\alpha=0$, and $\theta=\theta'=2.0$.

Figure 4. Evolution of the opinion when $\lambda =2.0$, $\sigma=3.0$, $\alpha=0.5$, and $\theta=\theta'=2.0$.

3.6. Analytical solution of (C2) without bias term

In this section we consider case (C2) under the assumption that the bias term is 0. We solve the ordinary differential equation (2) by leveraging Mellin transforms. While the process is only guaranteed to be well-defined when $0\leq \alpha <1$, we nevertheless consider the range $0\leq \alpha < 2$ when solving (2).

For comparison to the opinion-independent dynamics, we plot a sample of the opinion-dependent process in Figure 4. The main qualitative difference with the opinion-independent process is that interactions are more frequent around the zero opinion value (the stubborn agent opinion), so that the process can hardly escape the vicinity of the stubborn agent opinion.

Theorem 4. Consider case (C2) with $\theta> 1$, $\mu=0$, and $0\leq \alpha < 2$. The unique $C^2$ density p(x) on $\mathbb R$ which solves the ordinary differential equation (2) is

\begin{align*} p(x) &= 2\phi (2-\alpha) \sum_{n=0}^{\infty} \frac{a_n}{\theta^{n}} \bigg\{\bigg(\frac{2\lambda }{\sigma^2(2-\alpha)^2}\bigg)^{\frac{1}{2-\alpha}} \theta^{n} |x| \bigg\}^{1/2} \\ &\quad \times \textrm{BesselK}\bigg(\frac{1}{2-\alpha},2 \bigg\{\frac{2\lambda \theta^{n(2-\alpha)}|x|^{2-\alpha}}{\sigma^2(2-\alpha)^2}\bigg\}^{1/2} \bigg),\end{align*}

where

\begin{align*} a_0 & = 1, \qquad a_n=\prod_{k=1}^{n} \bigg(\frac{\theta^{2-\alpha}}{1-\theta^{k(2-\alpha)}} \bigg) ; \\[3pt] \phi & = \frac{\left( \frac{2\lambda}{\sigma^2(2-\alpha)^2} \right)^{\frac{1}{2-\alpha}}}{2\Gamma\big(\frac{1}{2-\alpha}\big)\Gamma\left(\frac{2}{2-\alpha}\right)} \bigg(\prod_{k=0}^{\infty}\bigg( 1- \frac{1}{\theta^{2+k(2-\alpha)}} \bigg)\bigg)^{-1} ; \\[3pt] \textrm{BesselK}(\nu,z) & = \frac{\Gamma\left(\nu+\frac{1}{2}\right)(2z)^\nu}{\sqrt{\pi}}\int_{0}^{\infty}\frac{\cos t}{(t^2+z^2)^{\nu+\frac{1}{2}}}\,{\textrm d}t.\end{align*}

Proof. We divide p(x) into two components, $p(x)=p_+(x)+p_-(x)$, where $p_+(x)=p(x)\textbf{1}_{\{x\geq 0 \}}$ and $p_-(x)=p(x)\textbf{1}_{\{x< 0 \}}$. It is easy to see that each component satisfies the same equation, and $p_+(x)=p_-(-x)$ for $x\le 0$. So, by symmetry, it suffices to solve the equation for $p_+(x)$, that is:

\begin{align*} \sigma^2x^{\alpha}\frac{\textrm{d}^2 p_+(x)}{\textrm{d} x^2} = 2\lambda p_+(x) -2\theta^{1-\alpha}\lambda p_+(\theta x). \end{align*}

Let $M(s):=\mathcal{M}\left( p_+(x) ;s \right)$, where $\mathcal{M}\left(p_+(x);s\right)$ is the Mellin transform with respect to s. By transforming the equation, we have

\begin{align*} \sigma^2 (s+\alpha)(s+1+\alpha)M(s+\alpha)=2\lambda \Big(1-\frac{1}{\theta^{s+1+\alpha}}\Big) M(s+2). \end{align*}

Let $M(s)=f(s)\Gamma\left( \frac{s}{2-\alpha} \right)\Gamma\left( \frac{s+1}{2-\alpha}\right)\left(\frac{\sigma^2(2-\alpha)^2}{2\lambda}\right)^{\frac{s}{2-\alpha}}$ by introducing f(s) as yet another analytic function. The above equation can be rewritten as

\begin{align*} f(s)=\big( 1-\frac{1}{\theta^{s+1}}\big) f(s+2-\alpha). \end{align*}

Since $2-\alpha> 0$, we may iterate this relation in the direction of increasing s indefinitely. By introducing a constant $\phi \ge 0$, f(s) can be given the form

\begin{align*} f(s) = \phi \prod_{k=0}^{\infty} \big( 1- \frac{1}{\theta^{s+1+k(2-\alpha)}} \big). \end{align*}

By plugging f(s) into M(s) above, we have

(6) \begin{align} M(s)=\phi\Gamma\Big( \frac{s}{2-\alpha} \Big)\Gamma\Big( \frac{s+1}{2-\alpha}\Big)\Big(\frac{\sigma^2(2-\alpha)^2}{2\lambda}\Big)^{\frac{s}{2-\alpha}} \prod_{k=0}^{\infty} \Big( 1- \frac{1}{\theta^{s+1+k(2-\alpha)}} \Big). \end{align}

Since p(x) is a probability density and $p_+(x)=p_-(-x)$ for $x \ge 0$, $M(1)=\frac{1}{2}$. This implies that

\begin{align*} \phi = \frac{\left( \frac{2\lambda}{\sigma^2(2-\alpha)^2} \right)^{\frac{1}{2-\alpha}}}{2\Gamma\left(\frac{1}{2-\alpha}\right)\Gamma\left(\frac{2}{2-\alpha}\right)} \left(\prod_{k=0}^{\infty}\left( 1- \frac{1}{\theta^{2+k(2-\alpha)}} \right)\right)^{-1}. \end{align*}

Applying the inverse Mellin transform to (6), expanding the infinite product with Lemma 3, and evaluating each resulting term leads to the solution:

\begin{align*} p_+(x) & = \frac{1}{2\pi \textrm{i}}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty} x^{-s}\phi\Gamma\Big( \frac{s}{2-\alpha} \Big)\Gamma\Big( \frac{s+1}{2-\alpha}\Big)\bigg(\frac{\sigma^2(2-\alpha)^2}{2\lambda}\bigg)^{\frac{s}{2-\alpha}} \\[3pt] & \qquad \qquad \qquad \times \prod_{k=0}^{\infty} \bigg( 1- \frac{1}{\theta^{s+1+k(2-\alpha)}} \bigg) \textrm{d} s \\[3pt] & = \frac{1}{2\pi \textrm{i}}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty} x^{-s}\phi\Gamma\Big( \frac{s}{2-\alpha} \Big)\Gamma\Big( \frac{s+1}{2-\alpha}\Big)\bigg(\frac{\sigma^2(2-\alpha)^2}{2\lambda}\bigg)^{\frac{s}{2-\alpha}} \\[3pt] & \qquad \qquad \qquad \times \sum_{n=0}^{\infty} \frac{1}{\theta^{n\left(s+1\right)}}\prod_{k=1}^{n} \bigg(\frac{\theta^{2-\alpha}}{1-\theta^{k(2-\alpha)}} \bigg) \textrm{d} s \\[3pt] & = \sum_{n=0}^{\infty} \frac{2\phi (2-\alpha)a_n}{\theta^{n}} \bigg\{\bigg(\frac{2\lambda }{\sigma^2(2-\alpha)^2}\bigg)^{\frac{1}{2-\alpha}} \theta^{n} |x| \bigg\}^{1/2} \\[3pt] & \qquad \qquad \qquad \times \text{BesselK}\bigg(\frac{1}{2-\alpha},2 \bigg\{\frac{2\lambda \theta^{n(2-\alpha)}|x|^{2-\alpha}}{\sigma^2(2-\alpha)^2}\bigg\}^{1/2} \bigg), \end{align*}

where $a_n=\prod_{k=1}^{n} \big(\frac{\theta^{2-\alpha}}{1-\theta^{k(2-\alpha)}}\big)$. In the last step we used the change of variable $s'=\frac{s}{2-\alpha}$ in each term of the inverse Mellin transform, along with the following observation for $a \ge 0$:

\begin{align*} \frac{1}{2\pi \textrm{i}}\int_{c-\textrm{i}\infty}^{c+\textrm{i}\infty} x^{-s}\Gamma(s)\Gamma(s+a) \, \textrm{d} s = 2x^{\frac{1}{2}a}\text{BesselK}\left(a,2\sqrt{x}\right). \tag*{$\square$} \end{align*}

As expected, p(x) does not depend on the initial state X(0). The function p(x) is non-negative and bounded when $0\leq \alpha < 2$. Non-negativity can be easily seen as the $\text{BesselK}(\nu,z)$ function is non-negative. Boundedness follows from the fact that, for $\theta > 1$, since $a_n\to 0$ when $n\to \infty$,

\begin{align*} p(x) \leq \sum_{n=0}^{\infty} \frac{C}{\theta^{n/2}} < +\infty,\end{align*}

where C is a universal constant.

Since $M(1) = \frac{1}{2}$ from (6), we also have $\int p(x) \, \textrm{d} x = 1$. A final remark is that $|x|^\frac{1}{2}\text{BesselK}(\frac{1}{2-\alpha},|x|^\frac{2-\alpha}{2})$ has an exponential tail as $|x|\to \infty$. Figure 5 shows an example of the simulated histogram and the solution derived from (2).
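The normalization $\int p(x)\,\textrm{d} x = 1$ can also be confirmed numerically. The sketch below uses the parameter values of Figure 5; the quadrature rules and truncation depths are rough choices of our own, and $\text{BesselK}$ is evaluated through the standard integral representation $\text{BesselK}(\nu,z)=\int_0^\infty \textrm{e}^{-z\cosh t}\cosh(\nu t)\,\textrm{d} t$ rather than the one displayed in Theorem 4:

```python
import math

def besselk(nu, z, tmax=30.0, n=3000):
    """K_nu(z) via int_0^inf exp(-z*cosh t)*cosh(nu*t) dt (midpoint rule)."""
    h = tmax / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        e = -z * math.cosh(t)
        if e < -700.0:          # exp underflows; the remaining tail is negligible
            break
        total += math.exp(e) * math.cosh(nu * t)
    return total * h

def p_density(x, lam=2.0, sigma=3.0, alpha=0.5, theta=2.0, nterms=8):
    """Stationary density of Theorem 4 (series truncated at nterms)."""
    b = 2.0 - alpha
    c = 2.0 * lam / (sigma ** 2 * b ** 2)
    prod = 1.0
    for k in range(200):        # infinite product in the constant phi, truncated
        prod *= 1.0 - theta ** (-(2.0 + k * b))
    phi = c ** (1.0 / b) / (2.0 * math.gamma(1.0 / b) * math.gamma(2.0 / b) * prod)
    total, a_n = 0.0, 1.0
    for n in range(nterms):
        pref = (c ** (1.0 / b) * theta ** n * abs(x)) ** 0.5
        arg = 2.0 * (c * theta ** (n * b) * abs(x) ** b) ** 0.5
        total += (a_n / theta ** n) * pref * besselk(1.0 / b, arg)
        a_n *= theta ** b / (1.0 - theta ** ((n + 1) * b))
    return 2.0 * phi * b * total

# Total mass over the real line (by symmetry, twice the integral over (0, 60)).
h = 0.2
mass = 2.0 * h * sum(p_density((i + 0.5) * h) for i in range(300))
print(mass)
```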

Figure 5. Simulation vs. explicit solution when $\lambda=2.0$, $\sigma=3.0$, and $\alpha=0.5$.

Figure 6. Sensitivity of parameters $\alpha$, $\lambda$, $\sigma$, and $\theta$.

3.7. Parameter sensitivity

In this section we analyze the sensitivity of the analytic solution proposed in Theorem 4 with respect to the parameters of the model. Figure 6 shows a collection of plots of the density p(x) when varying each given parameter and fixing the other parameters. The baseline parameters are $\alpha=0.5$, $\lambda=2.0$, $\sigma=3.0$, and $\theta=2.0$. We can see the following from Figure 6 concerning the parameters:

$\alpha$ (top left): When the regular opinion is sufficiently far away from the stubborn opinion, interactions are less likely, as already discussed. This is clearly reinforced when $\alpha$ is larger. This intuitively explains why the tail of p(x) is heavier as $\alpha$ increases. We see that the peak point p(0) is also decreasing as $\alpha$ increases.

$\lambda$ (top right): As $\lambda$ increases, the interaction rate increases proportionally, which implies more interactions, and a density p(x) which is more concentrated around the opinion of the stubborn agent.

$\sigma$ (bottom left): As $\sigma$ increases, the strength of self-belief increases, which naturally results in a widening of the shape of p(x).

$\theta$ (bottom right): As $\theta$ increases, the weight given to the stubborn agent's opinion increases, which forces the regular opinion to be closer to the stubborn opinion.

3.8. More on case (C2) with a bias term

We gather here partial results on the ordinary differential equation for $p_+(x)$ in the presence of a non-zero bias term. In this case,

\begin{align*} \sigma^2\frac{{\textrm d}^2 p_+(x)}{{\textrm d} x^2} -2 \mu \frac{{\textrm d} p_+(x)}{{\textrm d} x} = 2\lambda p_+(x) -2\theta\lambda p_+(\theta x).\end{align*}

The Mellin transform yields the recurrence equation

\begin{align*} \sigma^2 s(s+1)M(s) +2\mu (s+1)M(s+1) =2\lambda \Big(1-\frac{1}{\theta^{s+1}}\Big) M(s+2).\end{align*}

Let f(s) be defined by

\begin{equation*}M(s)=f(s)\Gamma(s)\Big(\frac{\sigma^2}{2\lambda}\Big)^{s/2}.\end{equation*}

Then f(s) satisfies the relation

\begin{align*} f(s)+\frac{\sqrt{2}\mu}{\sigma\sqrt{\lambda}\,}f(s+1)=\Big( 1-\frac{1}{\theta^{s+1}}\Big)\, f(s+2).\end{align*}

Letting $\omega:=\frac{\sqrt{2}\mu}{\sigma\sqrt{\lambda}\,}$, we can simplify this to

\begin{align*} f(s)+\omega f(s+1)=\Big( 1-\frac{1}{\theta^{s+1}}\Big)\, f(s+2).\end{align*}

By introducing $\Psi(s):=\frac{f(s+1)}{f(s)}$, the equation can be rewritten as

(7) \begin{align} \frac 1 {\Psi(s)} + \omega = a(s+1)\Psi(s+1),\end{align}

where $a(s):= 1- \theta^{-s}$. On the other hand, we can find a unique positive solution of the equation (by introducing $\gamma(s)$):

\begin{equation*} \frac 1 {\gamma(s)} + \omega = a(s) \gamma(s) .\end{equation*}

It is easy to see that this function $\gamma(s)$ is non-decreasing. Let $\Sigma(s)= \frac{\Psi(s)}{\gamma(s)} -1$. From (7), we may further rewrite the equation as

\begin{align*} \Sigma(s) = \frac{\xi(s)}{\zeta(s)+\Sigma(s+1)},\end{align*}

where

\begin{eqnarray*}\xi(s):= \frac{1}{a(s)\gamma(s)\gamma(s+1)}, \qquad \zeta(s):= 1-\frac{\gamma(s)}{\gamma(s+1)}\ge 0.\end{eqnarray*}

Hence, $\Sigma(s)$ admits the continued fraction expansion

\begin{eqnarray*}\Sigma(s) = \frac {\xi(s)} {\zeta(s) + \frac {\xi(s+1)} {\zeta(s+1) +\frac{\xi(s+2)} {\zeta(s+2) + \frac{\xi(s+3)}{\zeta(s+3)+\cdots} } } }.\end{eqnarray*}

From the definition of $\Psi(s)$ above, it follows that $f(s+1)=f(s)\gamma(s)(1+\Sigma(s))$. Therefore, we may expand f(s) as

\begin{align*}f(s)= \phi \prod_{k=0}^\infty \frac 1{\gamma(s+k)(1+\Sigma(s+k))},\end{align*}

where $\phi$ is a constant. Then we have a representation of M(s).
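Two of the ingredients above are easy to check numerically. The positive root $\gamma(s)=\big(\omega+\sqrt{\omega^2+4a(s)}\big)/(2a(s))$ of $a(s)\gamma^2-\omega\gamma-1=0$ can be tested against its defining equation, and for $\omega=0$ the recurrence for f is solved by the infinite product appearing in (8), which can be verified by truncation. The values $\theta=2$, $\omega=0.3$ and the test points below are arbitrary choices of ours:

```python
import math

THETA = 2.0

def a(s):
    return 1.0 - THETA ** (-s)

def gamma_fn(s, omega):
    """Positive root of a(s)*g**2 - omega*g - 1 = 0, i.e. 1/g + omega = a(s)*g."""
    return (omega + math.sqrt(omega * omega + 4.0 * a(s))) / (2.0 * a(s))

def f0(s, K=200):
    """Product solution of the recurrence for omega = 0, as in (8)."""
    prod = 1.0
    for k in range(K):
        prod *= 1.0 - THETA ** (-(s + 1 + 2 * k))
    return prod

for s in (0.5, 1.0, 4.0):
    g = gamma_fn(s, 0.3)
    print(abs(1.0 / g + 0.3 - a(s) * g))                          # fixed-point residual
    print(abs(f0(s) - (1.0 - THETA ** (-(s + 1))) * f0(s + 2)))   # recurrence residual
```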

3.9. Summary of the results and questions on the two-agent model

Here we list the results obtained so far, some properties of interest, and some open problems.

For (C1) with any bias term $\mu$, the stochastic process X(t) admits a unique stationary regime. This stationary regime is ergodic (as a factor of a marked Poisson point process). We give a probabilistic representation of the stationary distribution in Proposition 1. For $\mu=0$, we also give an analytical solution in Theorem 4 for $\alpha=0$, namely

\begin{align*} p\left(x \right)=\phi \sum_{n=0}^{\infty} \frac{a_n}{\theta^n} \exp\bigg\{-\frac{\sqrt{2\lambda}\,\theta^n}{\sigma}|x|\bigg\},\end{align*}

where $a_n=\prod_{k=1}^{n} \big(\frac{\theta^2}{1-\theta^{2k}} \big)$ with $a_0=1$, and $\phi = \frac{\sqrt{2\lambda}}{2\sigma}\big(\prod_{k=0}^{\infty}( 1- \frac{1}{\theta^{2(1+k)}})\big)^{-1}$. Moreover, (6) can be simplified to

(8) \begin{align} M(s)=\phi \Gamma(s)\Big( \frac{\sigma^2}{2\lambda}\Big)^{s/2} \prod_{k=0}^{\infty} \Big( 1- \frac{1}{\theta^{s+1+2k}} \Big).\end{align}
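The $\alpha=0$ closed form can be checked for total mass term by term, since a term with decay rate $\sqrt{2\lambda}\,\theta^{n}/\sigma$ integrates to $2\sigma/(\sqrt{2\lambda}\,\theta^{n})$ over the real line; Lemma 3 then makes the sum collapse to 1. A quick numerical confirmation (truncation depths are arbitrary choices of ours):

```python
import math

def check_alpha0(lam=2.0, sigma=3.0, theta=2.0, nterms=60):
    """Total mass of the alpha = 0 density, integrating each exponential term exactly."""
    prod = 1.0
    for k in range(200):        # infinite product in phi, truncated
        prod *= 1.0 - theta ** (-2.0 * (1 + k))
    phi = math.sqrt(2.0 * lam) / (2.0 * sigma) / prod
    total, a_n = 0.0, 1.0
    for n in range(nterms):
        total += phi * a_n / theta ** n * 2.0 * sigma / (math.sqrt(2.0 * lam) * theta ** n)
        a_n *= theta ** 2 / (1.0 - theta ** (2 * (n + 1)))
    return total

print(check_alpha0())
```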

For (C2) with $\mu=0$ and $0\leq \alpha \le 1$, there exists a unique stationary regime for X(t) and this stationary process is ergodic. This follows from the fact that the process sampled at jump times is a $\phi$-irreducible Markov chain [Reference Meyn and Tweedie32]. The stationary distribution of this stationary regime is given in Theorem 4. We also have an analytical solution to the ODE characterizing the stationary regimes when $1\leq \alpha < 2$ (see Theorem 4). However, we cannot connect this solution to a dynamics defined pathwise. An interesting open question concerns the meaning of this analytic solution for $\alpha$ in this range.

For (C3), X(t) is pathwise well-defined since the stochastic intensity is bounded. The Markov analysis is of the same nature as that alluded to above. The associated process is ergodic when $\alpha \le 2$. This leads to a natural (open) question. Consider a solution $p_L(x)$ of (2) with $\lambda_L(x)$. Do we have $\lim_{L\to\infty} p_L(x) = p(x)$ in Theorem 4, e.g. when $1\leq \alpha < 2$?

4. Extension to multi-agent and multi-dimensional models

This section focuses on some extensions of the models to multiple agents and to multi-dimensional opinions. For the first extension, we can come up with many interesting scenarios, e.g. based on social interaction graphs. We illustrate the flexibility of our opinion-independent approach by solving a specific scenario with three agents. We then discuss higher-dimensional opinions, as well as a natural mean-field model where the opinion-independent and opinion-dependent interaction rate models are connected through a single model.

4.1. A three-agent interaction model

Consider a scenario with three agents $X_1$, $X_2$, and $X_3$. Agents $X_1$ and $X_3$ are stubborn with opinion values $X_1(t)=s_1\in\mathbb{R}$ and $X_3(t)=s_3\in \mathbb R$, whereas Agent $X_2$ is regular (see Figure 7).

Figure 7. Three-agent interaction model.

Without loss of generality, we assume that $s_1 \le s_3$. Agent $X_2$’s diffusion has bias $\mu$ and variance $\sigma^2$. Agent $X_2$ interacts with the stubborn agent $X_1$ with intensity $\Lambda_1(x)$ and with the stubborn agent $X_3$ with intensity $\Lambda_3(x)$. For example, let $\Lambda_1(x)=\frac{\lambda_1}{|x-s_1|^{\alpha_1}}$ and $\Lambda_3(x)=\frac{\lambda_3}{|x-s_3|^{\alpha_3}}$. The interactions of $X_2$ take place at the epochs of a point process $N_2(t)$ which is the superposition of two point processes $N_{1}(t)$ and $N_{3}(t)$ with stochastic intensities $\Lambda_1(X_2(t-))$ and $\Lambda_3(X_2(t-))$ for interactions with $X_1$ and $X_3$, respectively. At each interaction, $X_2$ updates its opinion by averaging it with the appropriate stubborn opinion. We assume $\theta=\theta'=2$.

If both $\Lambda_1(X_2(t-))$ and $\Lambda_3(X_2(t-))$ are almost surely locally integrable, we obtain the following non-local partial differential equation result.

Proposition 2. Assume that, for $i=1$ and $i=3$,

\begin{align*} \mathbb{P}\bigg[\int_{a}^{b} \Lambda_i\left(X_2(t-)\right) {\textrm d}t < +\infty ,\quad \text{ for any}\ 0\leq a \le b< +\infty \bigg]=1.\end{align*}

Then, under the above assumptions, the density $p(t,x)\in C^{1,2}$ of $X_2(t)$ satisfies the non-local partial differential equation

\begin{align*} \frac{\partial p(t,x)}{\partial t} &= \frac{\sigma^2}{2}\frac{\partial^2 p(t,x)}{\partial x^2} - \mu \frac{\partial p(t,x)}{\partial x} - \left[\Lambda_1(x) + \Lambda_3(x)\right] p(t,x) \\[3pt] &\quad +2\Lambda_1(2x-s_1) p(t,2x-s_1) + 2\Lambda_3(2x-s_3) p(t,2x-s_3).\end{align*}

Sketch of the proof. The proof follows the same lines of thought as for Theorem 3; the two interaction terms add up because the interactions with the two stubborn agents are conditionally independent.

By letting $\frac{\partial p(t,x)}{\partial t} =0$, we have the ordinary differential equation for the stationary distribution.

Corollary 4. Under the assumptions of Proposition 2, the stationary density p(x) of agent $X_2$ satisfies the non-local ordinary differential equation

(9) \begin{align} &\sigma^2\frac{{\textrm d}^2 p(x)}{{\textrm d} x^2} -2 \mu \frac{{\textrm d} p(x)}{{\textrm d} x} = 2\left[\Lambda_1(x) +\Lambda_3(x) \right] p(x)\nonumber\\[3pt] &\quad -4\Lambda_1(2x-s_1) p(2x-s_1) - 4\Lambda_3(2x-s_3) p(2x-s_3). \end{align}

We have no solution in the general power-law case. However, when $\mu=0$, $\Lambda_1(x)=q\lambda$, and $\Lambda_3(x)=(1-q)\lambda$ with $q\in[0,1]$, i.e. in the opinion-independent interaction rate case, by following the same approach as in Section 3.5, the following probabilistic representation of the solution can be obtained:

\begin{align*} X_2({+\infty}) = \bigg[ \sum_{j=0}^{\infty} \frac{V({j})}{2^j} + (s_3-s_1) \sum_{j=0}^{\infty} \frac{U(j)}{2^{j+1}} \bigg] + s_1,\end{align*}

where the V(j) are independent $N(\mu\Delta t_j, \sigma^2\Delta t_j)$ Gaussian random variables, $\Delta {t_j}\sim \text{Exp}(\lambda)$, and $U(j)\sim \text{Ber}(1-q)$. Let $A= \sum_{j=0}^{\infty} \frac{V({j})}{2^j}$ and $B= (s_3-s_1) \sum_{j=0}^{\infty} \frac{U(j)}{2^{j+1}}$. Hence, the stationary solution of $X_2(t)$ admits the following representation with two independent components: $X_2(+\infty)=A+B + s_1$. We discussed the distribution of A in Section 3.5. Bhati et al. studied the distribution of B for $q\in[0,1]$ [Reference Bhati, Kgosi and Rattihalli10]. The general form of B is non-trivial. When $q=0.5$, $B \sim U[0,s_3-s_1]$: each realization of $\sum_{j= 0}^{\infty} U(j)2^{-(j+1)}$ is the binary representation of a real value in [0,1], and multiplying by $s_3-s_1$ gives the uniform distribution on $[0,s_3-s_1]$, denoted by $U[0,s_3-s_1]$. So $B+s_1\sim U[s_1,s_3]$.
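The $q=0.5$ case rests on the fact that i.i.d. Bernoulli(1/2) binary digits, entering at weights $2^{-(j+1)}$, generate a uniform random variable on [0,1]. A quick seeded Monte Carlo check of the first two moments (truncation at 40 digits, our own illustration):

```python
import random

def sample_B(rng, J=40, q=0.5):
    """One draw of sum_{j>=0} U(j)/2^(j+1), U(j) ~ Bernoulli(1-q)."""
    return sum((rng.random() < 1.0 - q) / 2.0 ** (j + 1) for j in range(J))

rng = random.Random(42)
N = 50000
bs = [sample_B(rng) for _ in range(N)]
mean = sum(bs) / N
var = sum((b - mean) ** 2 for b in bs) / N
print(mean, var)  # uniform on [0,1] has mean 1/2 and variance 1/12
```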

Proposition 3. Let $g(x)=\frac{1}{s_3-s_1}$ be the density function of a uniform distribution on $[s_1,s_3]$. When $\Lambda_1(x)=q\lambda$, $\Lambda_3(x)=(1-q)\lambda$, $\mu=0$, and $q=0.5$, the solution of (9) is $p(x) = p^*(x) \star g(x)$, where $\star$ denotes convolution and $p^*(x)$ is the solution obtained in Theorem 4 when $\alpha=0$.
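Proposition 3 can be cross-checked by simulating the embedded chain of $X_2$, which by PASTA (Lemma 2) has the stationary distribution. Since $p^*(x)$ is symmetric around 0 and g is uniform on $[s_1,s_3]$, the stationary mean must be $(s_1+s_3)/2$. The sketch below (seed, horizon, and parameters are arbitrary choices of ours) verifies this for $s_1=0$, $s_3=1$:

```python
import math
import random

def embedded_chain(N=30000, lam=3.0, sigma=2.0, q=0.5, s1=0.0, s3=1.0, seed=7):
    """Opinion values just before each interaction (constant total rate lam, mu = 0)."""
    rng = random.Random(seed)
    x, out = 0.5 * (s1 + s3), []
    for _ in range(N):
        dt = rng.expovariate(lam)
        x += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)   # diffusion between interactions
        out.append(x)                                      # value just before the jump
        s = s1 if rng.random() < q else s3                 # which stubborn agent is met
        x = 0.5 * (x + s)                                  # averaging, theta = theta' = 2
    return out

ys = embedded_chain()
burn = 1000
avg = sum(ys[burn:]) / (len(ys) - burn)
print(avg)  # should be near (s1 + s3)/2 = 0.5
```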

Figure 8 compares a simulated solution and the solution established in Proposition 3.

Figure 8. Three-body simulation vs. analytically derived solution when $\lambda = 3.0$, $\sigma=2.0$, $q=0.5$, $\alpha=0$.

4.2. Multi-dimensional continuous-time opinion dynamics

Let us return to the two-agent model. Assume that the regular agent X has an opinion vector $\textbf{X}(t) = \left(X_{1}(t), X_{2}(t),\ldots,X_{d}(t) \right)\in\mathbb{R}^d$ and updates those opinions by interacting with the stubborn agent. Assume that each component $X_{i}(t)$ follows an independent diffusion process with parameters $\mu_i$ and $\sigma_i$. The stubborn agent Z has the opinion vector $\textbf{Z}(t) =\left(0,0,\ldots, 0\right)\in\mathbb{R}^d$. The stochastic interactions are in terms of a multi-dimensional point process $\textbf{N}(t)=(N_1(t), \ldots, N_d(t))$, with stochastic intensity $\Lambda_i(\textbf{X}(t))$ for each $X_i$. Let $\Theta'=(\theta_1',\ldots,\theta_d')$ and write $1/\Theta'=(1/\theta_1',\ldots,1/\theta_d')$. Let $\odot$ denote componentwise multiplication. Then

\begin{align*} {\textrm d}\textbf{X}(t) = \boldsymbol{\mu} {\textrm d}t + \boldsymbol{\sigma}\odot {\textrm d}\textbf{W}(t) - \frac{1}{\Theta'} \odot \textbf{X}({t-}) \odot \textbf{N}({\textrm d} t),\end{align*}

where $\boldsymbol{\mu}=\left(\mu_1,\ldots,\mu_d \right)$ and $\boldsymbol{\sigma}=\left(\sigma_1,\ldots,\sigma_d\right)$. Note that $\textbf{X}({t-})$ denotes the left limit of $\textbf{X}$ at time t.

As above, the only case that can be solved at this stage is the opinion-independent case. For instance, when $\Lambda_i(\textbf{X}(t))=\lambda_i$ and the components of $\textbf{N}(t)$ are independent, the components $X_i(t)$ are independent. Therefore, the distribution of $X_i(t)$ follows the partial differential equation described in Theorem 3, with $\mu_i$, $\sigma_i$, $\theta_i$, and $\Lambda_i(x)=\lambda_i$. So the stationary distribution of $\textbf{X}(t)$ is a product form with marginals given by the stationary distributions obtained in the opinion-independent case.

When $\Lambda_i(\textbf{X}(t))=\lambda$ and the components of $\textbf{N}(t)$ are dependent Poisson point processes (for instance the very same point process pathwise), then the marginal stationary distributions of the components of $\textbf{X}(t)$ are available for each component, but we have no result on the d-dimensional stationary distribution of $\textbf{X}(t)$.

4.3. A mean-field interaction model

In this subsection we consider a $(d+1)$-agent model based on mean-field interactions. This model features one stubborn agent with zero-valued opinion (without loss of generality) and d regular agents (see Figure 9). All regular agents are assumed to have the same dynamics in distribution.

Figure 9. $(d+1)$-agent mean-field limit model.

Let $\textbf{X}(t)=(X_1(t),\ldots, X_d(t))$ and $\textbf{N}(t)=(N_1(t),\ldots, N_d(t))$. The dynamics of the regular agent $X_i(t)$ consists of an independent diffusion with bias $\mu$ and diffusion coefficient $\sigma$. The main novelty is the assumption that $\{N_i(t)\}$ is a collection of conditionally independent point processes with a common stochastic intensity $\Lambda(\textbf{X}(t))$ of the form

\begin{align*} \Lambda(\textbf{X}(t))= \frac{1}{d} \sum_{i=1}^d \frac {\widetilde \lambda}{|X_i(t)|^{\alpha}},\end{align*}

where $\{N_i(t)\}$ are conditionally independent given $\textbf{X}(t)$, and where $\widetilde \lambda$ is a positive constant. Hence, the regular agents are only coupled by this shared stochastic intensity.

The model can be seen as a multi-dimensional variant of the model introduced in the previous section, where the interaction rate of agent $X_i$ with the stubborn agent is proportional to the empirical moment of order $-\alpha$ of the opinions of the regular agents. For $i=1,\ldots,d$, the dynamics of $X_i$ is governed by the stochastic differential equation

\begin{align*} {\textrm d}X_i(t) = \mu {\textrm d}t+ \sigma {\textrm d}W_i(t) - \frac{X_i(t-)}{\theta'} {\textrm d}N_i(t).\end{align*}

Assume that when d tends to infinity, the steady state of $\Lambda(\textbf{X}(t))$ tends to a positive and finite constant, say $\kappa$, so that the regular agents become asymptotically independent when d tends to infinity. This assumption will be referred to as the mean-field limit hypothesis below.
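The dynamics above can be simulated directly. The following is a minimal sketch under assumed, illustrative parameter values: it uses an Euler-Maruyama discretization for the diffusion and a thinning step for the jumps. The floor on $|X_i|$ and the cap on the rate are numerical safeguards for the discretization, not part of the model.

```python
import math
import random

def simulate_mean_field(d=50, T=5.0, dt=0.01, lam_tilde=2.0,
                        sigma=3.0, alpha=0.5, theta_p=2.0, seed=1):
    """Euler-Maruyama / thinning sketch of the mean-field dynamics (mu = 0).

    Between events each X_i diffuses; jumps X_i -> X_i * (1 - 1/theta_p)
    occur for each agent at the shared empirical rate
    (1/d) * sum_i lam_tilde / |X_i|^alpha.  The floor on |X_i| and the
    cap on the rate keep the discretization stable; neither device is
    part of the model itself.
    """
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(d)]
    for _ in range(int(T / dt)):
        lam = min(10.0,
                  sum(lam_tilde / max(abs(xi), 1e-9) ** alpha for xi in x) / d)
        for i in range(d):
            x[i] += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if rng.random() < lam * dt:   # thinning step for N_i
                x[i] *= 1.0 - 1.0 / theta_p
    return x

final = simulate_mean_field()
```

Increasing d in such a simulation is what produces the histograms of Figure 10.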

For the following analytical derivation, we assume that $\mu=0$. Assuming that the mean-field limit exists, the constant $\kappa$ should coincide with the steady-state moment of order $-\alpha$ of the opinion of the regular agent in the two-agent state-independent model analyzed in Section 3.5. When taking $\alpha=0$ and $\lambda=\kappa$ in the result of Theorem 4, the Mellin transform of the stationary $X_i$ becomes, by (8),

(10) \begin{align} M_{\kappa}(s)=\phi \Gamma(s)\Big( \frac{\sigma^2}{2\kappa}\Big)^{s/2} \prod_{k=0}^{\infty} \Big( 1- \frac{1}{\theta^{s+1+2k}} \Big),\end{align}

where

\begin{align*} \phi = \frac{\sqrt{2\kappa}}{2\sigma}\bigg(\prod_{k=0}^{\infty}\Big( 1- \frac{1}{\theta^{2(1+k)}} \Big)\bigg)^{-1}.\end{align*}
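The infinite products in (10) converge geometrically for $\theta>1$, so $M_{\kappa}(s)$ is easy to evaluate numerically by truncation. The sketch below uses illustrative parameter values; a useful sanity check is that (10) gives $M_{\kappa}(1)=1/2$ identically, since the two products cancel at $s=1$.

```python
import math

def mellin_M(s, kappa, sigma=3.0, theta=2.0, n_terms=200):
    """Truncated numerical evaluation of the Mellin transform in (10).

    For theta > 1 both infinite products converge geometrically, so a
    few hundred factors are ample at theta = 2.  The parameter values
    here are illustrative defaults, not taken from the paper.
    """
    inv_prod = math.prod(1.0 - theta ** (-2.0 * (1 + k)) for k in range(n_terms))
    phi = math.sqrt(2.0 * kappa) / (2.0 * sigma) / inv_prod
    prod = math.prod(1.0 - theta ** (-(s + 1 + 2 * k)) for k in range(n_terms))
    return phi * math.gamma(s) * (sigma ** 2 / (2.0 * kappa)) ** (s / 2.0) * prod
```

For example, `mellin_M(1.0, kappa=2.0)` returns $1/2$ up to floating-point rounding, for any $\kappa>0$.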

When it exists, the mean-field limit should hence satisfy some consistency equation: the moment of order $-\alpha$ of the density p(x) given in Theorem 4 for the parameters $\theta$, $\sigma$, and $\lambda=\kappa$ should be such that

(11) \begin{align} 2\widetilde \lambda M_{\kappa}(1-\alpha)=\kappa.\end{align}

In view of (10), when $0 \le \alpha < 1$, (11) can be rewritten as $c\cdot \kappa^{{\alpha}/{2}}=\kappa$ for some constant $0 < c < +\infty$ (the finiteness of this constant requires the assumption that $\alpha < 1$ because of the singularity of the $\Gamma$ function). Hence, for all $0 < \alpha < 1$, the only positive and finite solution of this consistency equation is $\kappa=c^{2/(2-\alpha)}$.
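The consistency argument can be checked numerically. Since (10) scales as $M_{\kappa}(s)=\kappa^{(1-s)/2}M_{1}(s)$, the constant is $c=2\widetilde\lambda M_{1}(1-\alpha)$, and plugging $\kappa^{*}=c^{2/(2-\alpha)}$ back into (11) should give a vanishing residual. The sketch below uses assumed, illustrative parameter values.

```python
import math

def mellin_M(s, kappa, sigma, theta, n_terms=200):
    # Truncated evaluation of (10); both products converge geometrically.
    inv_prod = math.prod(1.0 - theta ** (-2.0 * (1 + k)) for k in range(n_terms))
    phi = math.sqrt(2.0 * kappa) / (2.0 * sigma) / inv_prod
    prod = math.prod(1.0 - theta ** (-(s + 1 + 2 * k)) for k in range(n_terms))
    return phi * math.gamma(s) * (sigma ** 2 / (2.0 * kappa)) ** (s / 2.0) * prod

sigma, theta, alpha, lam_tilde = 3.0, 2.0, 0.5, 2.0   # illustrative values

# Since M_kappa(s) = kappa^{(1-s)/2} M_1(s), the constant c is:
c = 2.0 * lam_tilde * mellin_M(1.0 - alpha, 1.0, sigma, theta)
kappa_star = c ** (2.0 / (2.0 - alpha))

# Self-consistency: kappa_star should solve (11) up to rounding error.
residual = abs(2.0 * lam_tilde * mellin_M(1.0 - alpha, kappa_star, sigma, theta)
               - kappa_star)
```

The residual is zero up to floating-point rounding, confirming the closed-form fixed point for these parameters.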

We conclude that this mean-field limit, when it holds, has a unique and well defined solution for all $0 < \alpha < 1$. It is beyond the scope of the present paper to prove that the mean-field limit holds. However, let us stress that there is numerical evidence that the mean-field hypothesis holds for all $0 < \alpha < 1$.

In contrast, when $\alpha > 1$, there is no non-degenerate solution to the self-consistency equation, and there is, in addition, numerical evidence that the mean-field hypothesis does not hold. That is, when d tends to infinity, the empirical moment of order $-\alpha$ of the regular agent opinions does not tend to a finite limit.

The statements on the case $0 < \alpha < 1$ are illustrated in Figure 10, which plots the empirical histogram of the opinions of the d regular agents for various choices of d when $\alpha=1/2$. When $d\geq 10\,000$, the simulated histogram is very close to the explicit solution. Note that the mean-field limit is already a very good approximation for much smaller values of d.

Figure 10. Simulation histograms and analytically computed density of mean-field limit when $d=10, 100, 1000, 10\,000, 100\,000$ and $\lambda' =2.0$, $\sigma=3.0$, $\alpha=0.5$, and $\theta=\theta'=2.0$.

5. Conclusion

We have introduced a new continuous-time model for opinion dynamics that features power-law confidence between agents subject to diffusive forces. The steady-state behavior of this type of dynamics was shown to satisfy non-local partial differential equations and was characterized using Mellin transforms. We first solved the two-agent problem and then proposed some extensions to multi-agent cases, including a mean-field model that captures the essence of the rate-dependent model. In the presence of diffusive self-beliefs, this model fundamentally differs from the bounded confidence model in that it leads to weak consensus for small enough interaction exponents. It also qualitatively differs from discrete-time models due to the possibility of accumulation of interaction events. Our analysis leads to a good understanding of the case where the interaction exponent is less than 1. In this case there is no such accumulation of interaction events; we give a pathwise construction of the dynamics and obtain an explicit formula for the stationary distribution in question. The case where this exponent is between 1 and 2 remains mysterious: there is no pathwise construction of the dynamics, and yet the non-local partial differential equation admits distributional solutions.

Appendix A. Proof of smoothness of the density

Lemma 4. Let $\mathcal{F}_X(\xi)$ denote the characteristic function of the random variable X on $\mathbb{R}$. If the characteristic function of X satisfies

\begin{align*} \int_{\mathbb{R}} |\xi|^m|\mathcal{F}_X(\xi)| \, \textrm{d}\xi < +\infty ,\end{align*}

for some $m\in\mathbb{N}$, then the density f(x) of X is of class $C^m$ and the derivatives of orders $0,\ldots, m$ of f(x) converge to 0 as $|x|\to \infty$, where $f(x)\in C^m$ means the mth-order derivative $f^{(m)}(x)$ exists for all $x\in \mathbb{R}$.

Proof. See [27, Proposition 28.1].

Lemma 5. Assume $\theta>1$. Under Assumption 1 from Section 3.3, X(t) has a smooth density p(t,x) which satisfies $p(t,x)\in C^{1,\infty}\left(\mathbb{R}\right)$ and, for all $m\geq 1$, $\frac{\partial^m p}{\partial x^m}(t,x)\to 0$ as $|x| \to\infty$, where $p(t,x)\in C^{1,\infty}$ means p(t,x) is differentiable with respect to t, and p(t,x) is differentiable with respect to x infinitely many times.

Proof. Under Assumption 1, the point process N exists and does not have accumulation points, as proved in Theorem 1. Conditionally on $N_n(t)=\delta_{t_1}+\cdots+\delta_{t_n}$ with $0 < t_1 < \cdots < t_n \leq t$,

\begin{align*} X(t) = \sigma\bigg[ W(t)-W(t_n) + \frac{W(t_n)-W(t_{n-1})}{\theta} + \cdots + \frac{W(t_1) }{\theta^{n}}\bigg] + \frac{x_0}{\theta^{n}},\end{align*}

so that the characteristic function of X(t) is

\begin{align*} \mathcal{F}_n(\xi) & := \mathbb{E}\Big[ \textrm{e}^{\textrm{i}\xi X(t)} \,\Big|\, N_n(t)=\delta_{t_1}+\cdots+\delta_{t_n} \Big]\\[3pt] & = \exp{\bigg[ \textrm{i}\frac{x_0}{\theta^n}\xi -\frac{\sigma^2\xi^2}{2}\left(t-t_{n} + \frac{t_n-t_{n-1}}{\theta^2} + \cdots + \frac{t_{1}}{\theta^{2n}}\right) \bigg]}.\end{align*}

Since $\theta > 1$,

\begin{align*} |\mathcal{F}_n(\xi)| \leq \exp{\Big(-\frac{\sigma^2\xi^2 t}{2\theta^{2n}}\Big)}.\end{align*}

Denote by $\mathcal{F}_{X(t)}(\xi)$ the characteristic function of X(t) and by $\nu_n(\cdot)$ the Janossy measure of N[0,t] (see [9, 16]). Then,

\begin{align*} |\mathcal{F}_{X(t)}(\xi)| &\leq \sum_{n=0}^{\infty} \int_{\{0< t_1 < \cdots < t_n < t\} }|\mathcal{F}_n(\xi)| \, \nu_n ({\textrm d}t_1\cdots {\textrm d}t_n) \\ &\leq \sum_{n=0}^{\infty} \exp{\Big(-\frac{\sigma^2\xi^2 t}{2\theta^{2n}}\Big)} \mathbb{P} [ N(t)=n ] < +\infty.\end{align*}

Hence, for each $m\in \mathbb{N}$,

\begin{align*} \int_{\mathbb{R}} |\xi|^{m} |\mathcal{F}_{X(t)}(\xi)| \, {\textrm d}\xi &\leq \int_{\mathbb{R}} |\xi|^{m}\sum_{n=0}^{\infty}\exp{\left(-\frac{\sigma^2\xi^2 t}{2\theta^{2n}}\right)} \mathbb{P} \left[ N(t)=n \right]{\textrm d}\xi \\ &\leq \sum_{n=0}^{\infty} \frac{C(\sigma, m, t)}{\theta^{n(m-1)}} \mathbb{P} \left[ N(t)=n \right] \leq \sum_{n=0}^{\infty} \frac{C(\sigma, m, t)}{\theta^{n(m-1)}} < + \infty ,\end{align*}

for some constant $C(\sigma, m, t)$ [28, Section 11.2]. Applying Lemma 4 yields the mth-order differentiability in x and the decay at infinity of each partial derivative with respect to x.

By [11, Proposition 6.1.1], X(t) is a Markov process, so it satisfies the Chapman–Kolmogorov equation. Differentiability with respect to t then follows by applying the Chapman–Kolmogorov equation.

Acknowledgements

We thank the editor and anonymous reviewers for their constructive comments, which helped us to improve the manuscript. This research was funded by the Department of Defense #W911NF1510225 and by a Math+X award from the Simons Foundation #197982 to The University of Texas at Austin. The work of François Baccelli was supported by ERC grant 788851.

References

[1] Acemoğlu, D., Como, G., Fagnani, F. and Ozdaglar, A. (2013). Opinion fluctuations and disagreement in social networks. Math. Operat. Res. 38, 1–27.
[2] Acemoğlu, D., Mostagir, M. and Ozdaglar, A. (2014). State-dependent opinion dynamics. In Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), pp. 4773–4777.
[3] Acemoğlu, D. and Ozdaglar, A. (2011). Opinion dynamics and learning in social networks. Dynamic Games Appl. 1, 3–49.
[4] Baccelli, F. and Brémaud, P. (2003). Elements of Queueing Theory, 2nd edn. Springer, Berlin.
[5] Baccelli, F., Chatterjee, A. and Vishwanath, S. (2015). Pairwise stochastic bounded confidence opinion dynamics: Heavy tails and stability. In Proc. IEEE Conf. Computer Communications (INFOCOM), pp. 1831–1839.
[6] Baccelli, F., Kim, K. B. and De Vleeschauwer, D. (2005). Analysis of the competition between wired, DSL and wireless users in an access network. In Proc. 24th Ann. Joint Conf. IEEE Comp. Commun. Soc., Vol. 1, pp. 362–373.
[7] Baccelli, F., Kim, K. B. and McDonald, D. R. (2007). Equilibria of a class of transport equations arising in congestion control. Queueing Systems 55, 1–8.
[8] Baccelli, F., McDonald, D. R. and Reynier, J. (2002). A mean-field model for multiple TCP connections through a buffer implementing RED. Performance Evaluation 49, 77–97.
[9] Baccelli, F. and Woo, J. O. (2016). On the entropy and mutual information of point processes. In Proc. 2016 IEEE Int. Symp. Information Theory (ISIT), pp. 695–699.
[10] Bhati, D., Kgosi, P. and Rattihalli, R. N. (2011). Distribution of geometrically weighted sum of Bernoulli random variables. Appl. Math. 2, 1382–1386.
[11] Björk, T. (2011). An introduction to point processes from a martingale point of view.
[12] Blondel, V. D., Hendrickx, J. M. and Tsitsiklis, J. N. (2010). Continuous-time average-preserving opinion dynamics with opinion-dependent communications. SIAM J. Control Optim. 48, 5214–5240.
[13] Borodin, A. N. and Salminen, P. (2012). Handbook of Brownian Motion: Facts and Formulae. Birkhäuser, Basel.
[14] Brugna, C. and Toscani, G. (2015). Kinetic models of opinion formation in the presence of personal conviction. Phys. Rev. E 92, 052818.
[15] Como, G. and Fagnani, F. (2011). Scaling limits for continuous opinion dynamics systems. Ann. Appl. Prob. 21, 1537–1567.
[16] Daley, D. J. and Vere-Jones, D. (2007). An Introduction to the Theory of Point Processes, Vol. II. Springer, New York.
[17] Deffuant, G., Neau, D., Amblard, F. and Weisbuch, G. (2000). Mixing beliefs among interacting agents. Adv. Complex Syst. 3, 87–98.
[18] DeGroot, M. H. (1974). Reaching a consensus. J. Am. Statist. Assoc. 69, 118–121.
[19] Dumas, V., Guillemin, F. and Robert, P. (2002). A Markovian analysis of additive-increase multiplicative-decrease algorithms. Adv. Appl. Prob. 34, 85–111.
[20] Fagnani, F. and Zampieri, S. (2008). Randomized consensus algorithms over large scale networks. IEEE J. Sel. Areas Commun. 26, 634–649.
[21] Ghaderi, J. and Srikant, R. (2013). Opinion dynamics in social networks: A local interaction game with stubborn agents. In Proc. IEEE American Control Conf. (ACC), pp. 1982–1987.
[22] Hegselmann, R. and Krause, U. (2002). Opinion dynamics and bounded confidence models, analysis, and simulation. J. Artif. Soc. Social Simul. 5.
[23] Holley, R. A. and Liggett, T. M. (1975). Ergodic theorems for weakly interacting infinite systems and the voter model. Ann. Prob. 3, 643–663.
[24] Hollot, C. V., Misra, V., Towsley, D. and Gong, W.-B. (2001). A control theoretic analysis of RED. In Proc. 20th Ann. Joint Conf. IEEE Comp. Commun. Soc., Vol. 3, pp. 1510–1519.
[25] Kallenberg, O. (2006). Foundations of Modern Probability. Springer, New York.
[26] Karatzas, I. and Shreve, S. (2012). Brownian Motion and Stochastic Calculus. Springer, New York.
[27] Ken-Iti, S. (1999). Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press.
[28] Krishnamoorthy, K. (2016). Handbook of Statistical Distributions with Applications. CRC Press, Boca Raton.
[29] Lobel, I., Ozdaglar, A. and Feijer, D. (2011). Distributed multi-agent optimization with state-dependent communication. Math. Program. 129, 255–284.
[30] Melamed, B. and Whitt, W. (1990). On arrivals that see time averages. Operat. Res. 38, 156–172.
[31] Melamed, B. and Whitt, W. (1990). On arrivals that see time averages: A martingale approach. J. Appl. Prob. 27, 376–384.
[32] Meyn, S. P. and Tweedie, R. L. (2012). Markov Chains and Stochastic Stability. Springer, New York.
[33] Mirtabatabaei, A. and Bullo, F. (2012). Opinion dynamics in heterogeneous networks: Convergence conjectures and theorems. SIAM J. Control Optim. 50, 2763–2785.
[34] Mobilia, M., Petersen, A. and Redner, S. (2007). On the role of zealotry in the voter model. J. Statist. Mech. Theory Exp. 2007, P08029.
[35] Mörters, P. and Peres, Y. (2010). Brownian Motion. Cambridge University Press.
[36] Olshevsky, A. and Tsitsiklis, J. N. (2009). Convergence speed in distributed consensus and averaging. SIAM J. Control Optim. 48, 33–55.
[37] Ramanan, K. (2006). Reflected diffusions defined via the extended Skorokhod map. Electron. J. Prob. 11, 934–992.
[38] Toscani, G. (2006). Kinetic models of opinion formation. Commun. Math. Sci. 4, 481–496.
[39] Wolff, R. W. (1982). Poisson arrivals see time averages. Operat. Res. 30, 223–231.
[40] Yen, J.-Y. and Yor, M. (2014). Local Times and Excursion Theory for Brownian Motion. Springer, New York.
[41] Yildiz, E., Ozdaglar, A., Acemoglu, D., Saberi, A. and Scaglione, A. (2013). Binary opinion dynamics with stubborn agents. ACM Trans. Econom. Comput. 1, 19.
Figure 1. The bounded-confidence model with interaction rate function equal to $0.5$ on $[-5, 5]$ vs. the rate functions of the proposed model. The parameters are $\lambda= 0.5$ for (C1), and $\lambda = 5$, $\alpha=1.75$, and $L= 0.5$ for (C2) and (C3).

Figure 2. Evolution of the opinion value when $\mu=0$, $\lambda = 2.0$, $\sigma=3.0$, $\alpha=0$, and $\theta=\theta'=2.0$.

Figure 3. Simulated histogram when $\lambda = 3.0$, $\sigma=0.02$, $\mu=0.1$, $\alpha=0$, and $\theta=\theta'=2.0$.

Figure 4. Evolution of the opinion when $\lambda =2.0$, $\sigma=3.0$, $\alpha=0.5$, and $\theta=\theta'=2.0$.

Figure 5. Simulation vs. explicit solution when $\lambda=2.0$, $\sigma=3.0$, and $\alpha=0.5$.

Figure 6. Sensitivity of parameters $\alpha$, $\lambda$, $\sigma$, and $\theta$.

Figure 7. Three-agent interaction model.