
Mean reflected stochastic differential equations with jumps

Published online by Cambridge University Press:  15 July 2020

Philippe Briand*
Affiliation:
Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA
Abir Ghannoum*
Affiliation:
Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA and Univ. Libanaise, LaMA-Liban
Céline Labart*
Affiliation:
Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA
*Postal address: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chambéry, France
**Postal address: 73000 Chambéry, France and P.O. Box 37, Tripoli, Liban
***Postal address: 73000 Chambéry, France. Email address: celine.labart@univ-smb.fr

Abstract

In this paper, a reflected stochastic differential equation (SDE) with jumps is studied for the case where the constraint acts on the law of the solution rather than on its paths. In the absence of jumps, such reflected SDEs were approximated by Briand et al. (2016) using a numerical scheme based on particle systems. The main contribution of this paper is to prove the existence and uniqueness of solutions to this kind of reflected SDE with jumps and to generalize the results obtained by Briand et al. (2016) to this context.

Type
Original Article
Copyright
© Applied Probability Trust 2020

1. Introduction

Reflected stochastic differential equations (SDEs) were introduced in the pioneering work of Skorokhod (see [Sko61]), and their numerical approximation by Euler schemes has been widely studied (see [Slo94], [Slo01], [Lep95], [Pet95], [Pet97]). Reflected stochastic differential equations driven by a Lévy process have also been studied in the literature (see [MR85], [KH92]). More recent works have introduced and studied reflected backward stochastic differential equations with jumps (see [HO03], [EHO05], [HH06], [Ess08], [CM08], [QS14]), as well as their numerical approximation (see [DL16a] and [DL16b]). The main particularity of our work comes from the fact that the constraint acts on the law of the process X rather than on its paths. The study of such equations is linked to the theory of mean field games, introduced by Lasry and Lions (see [LL07a], [LL07b], [LL06b], [LL06a]), whose probabilistic point of view is studied in [CD18a] and [CD18b]. Stochastic differential equations with mean reflection were introduced, in their backward form, by Briand, Elie, and Hu in [BEH18]. In that work, the authors show that mean reflected stochastic processes exist and are uniquely defined by the associated system of equations of the following form:

(1.1) \begin{equation} \begin{cases} \begin{split} & X_t =X_0+\int_0^t b(X_{s}) ds + \int_0^t \sigma(X_{s}) dB_s + K_t,\quad t\geq 0, \\[3pt] & \mathbb{E}[h(X_t)] \geq 0, \quad \int_0^t \mathbb{E}[h(X_s)] \, dK_s = 0, \quad t\geq 0. \end{split} \end{cases} \end{equation}

Because the reflection process K depends on the law of the position, the authors of [BCdRGL16], inspired by mean field games, study the convergence of a numerical scheme based on particle systems to compute solutions to (1.1) numerically.

In this paper, we extend previous results to the case of jumps, i.e. we study existence and uniqueness of solutions to the following mean reflected stochastic differential equation (MR-SDE in the sequel):

(1.2) \begin{equation} \begin{cases} \begin{split} & X_t =X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz) + K_t,\quad t\geq 0, \\[3pt] & \mathbb{E}[h(X_t)] \geq 0, \quad \int_0^t \mathbb{E}[h(X_s)] \, dK_s = 0, \quad t\geq 0, \end{split} \end{cases}\end{equation}

where ${E=\mathbb{R}^*}$ , ${\tilde{N}}$ is a compensated Poisson measure ${\tilde{N}(ds,dz)=N(ds,dz)-\lambda(dz)ds}$ , and B is a Brownian motion independent of N. We also propose a numerical scheme based on a particle system to compute solutions to (1.2) numerically, and we study the rate of convergence of this scheme.

Our main motivation for studying (1.2) comes from financial problems subject to risk measure constraints. Given any position X, its risk measure ${\rho(X)}$ can be seen as the amount of own funds needed by the investor to hold the position. For example, we can consider the risk measure ${\rho(X) = \inf\{m\,{:}\,\mathbb{E}[u(m+X)]\geq p\}}$ , where u is a utility function (concave and increasing) and p is a given threshold (we refer the reader to [ADEH99] and [FS02] for more details on risk measures). Suppose that we are given a portfolio X of assets whose dynamics, in the absence of any constraint, follow the jump diffusion model

\begin{equation*}d X_t = b(X_t) d t + \sigma(X_t) d B_t +\int_E F(X_{t-},z) \tilde{N}(dt,dz), \qquad t\geq 0.\end{equation*}

Given a risk measure ${\rho}$ , one can ask that ${X_t}$ remain at an acceptable position at each time t. The constraint can be rewritten as ${\mathbb{E} \left[h(X_t)\right] \geq 0}$ for ${t\geq 0}$ , where ${h=u-p}$ .
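To make this concrete, here is a minimal Monte Carlo sketch (our own illustration, not from the paper; the utility, threshold, and function names are hypothetical) of how ${\rho(X)}$ can be computed by bisection, using the fact that ${m\mapsto\mathbb{E}[u(m+X)]}$ is non-decreasing because u is increasing. Note that the constraint ${\mathbb{E}[h(X)]\geq 0}$ with ${h=u-p}$ corresponds to ${\rho(X)\leq 0}$ .

```python
import numpy as np

def rho(samples, u, p, lo=-100.0, hi=100.0, tol=1e-8):
    """Monte Carlo estimate of rho(X) = inf{m : E[u(m + X)] >= p}.

    Bisection is valid because m -> E[u(m + X)] is non-decreasing
    (u is increasing); we assume the root lies in [lo, hi]."""
    def acceptable(m):
        return np.mean(u(m + samples)) >= p

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if acceptable(mid):
            hi = mid
        else:
            lo = mid
    return hi

rng = np.random.default_rng(0)
X = rng.normal(0.5, 1.0, size=100_000)   # hypothetical position
u = lambda x: 1.0 - np.exp(-x)           # concave, increasing utility
p = 0.2                                  # acceptance threshold
print(rho(X, u, p))                      # cash needed to make X acceptable
```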

In order to satisfy this constraint, the agent has to add some cash to the portfolio over time, and the dynamic of the wealth of the portfolio becomes

\begin{equation*} d X_t = b(X_t) d t + \sigma(X_t) d B_t +\int_E F(X_{t-},z) \tilde{N}(dt,dz)+d K_t, \qquad t\geq 0,\end{equation*}

where ${K_t}$ is the amount of cash added to the portfolio up to time t to balance the ‘risk’ associated to ${X_t}$ . Of course, the agent wants to cover the risk in a minimal way, adding cash only when needed: this leads to the Skorokhod condition ${\mathbb{E}[h(X_t)] d K_t = 0}$ . Putting all the conditions together, we end up with dynamics of the form (1.2) for the portfolio.

The paper is organized as follows. In Section 2, we show that, under Lipschitz assumptions on b, ${\sigma}$ , and F and a bi-Lipschitz assumption on h, the system admits a unique strong solution, i.e. there exists a unique pair of processes (X, K) satisfying the system (1.2) almost surely, the process K being an increasing and deterministic process. Then we show that, by adding some regularity on the function h, the Stieltjes measure dK is absolutely continuous with respect to the Lebesgue measure, and we obtain the explicit expression of its density. In Section 3 we show that the system (1.2) can be seen as the limit of an interacting particle system with oblique reflection of mean field type. This result allows us to define in Section 4 an algorithm based on this interacting particle system, together with a classical Euler scheme, which gives a strong approximation of the solution of (1.2). When h is bi-Lipschitz, this leads to an approximation error in the ${L^2}$ sense proportional to ${n^{-1}+ N^{-\frac{1}{2}}}$ , where n is the number of points of the discretization grid and N is the number of particles. When h is smooth, we get an approximation error proportional to ${n^{-1}+ N^{-1}}$ . In passing, we improve the speed of convergence obtained in [BCdRGL16]. Finally, we illustrate these results numerically in Section 5.

2. Existence, uniqueness and properties of the solution

In this paper, ${(\Omega,\mathcal{F},\mathbb{P})}$ is a complete probability space endowed with a standard Brownian motion ${B=\{B_t\}_{0\leq t\leq T}}$ and an independent Poisson random measure N, and ${\{\mathcal{F}_t\}_{0\leq t\leq T}}$ is the usual augmented filtration generated by B and N. Before moving on, we state the assumptions needed in the sequel.

Assumption 1. (A.1)

  1. (i) Lipschitz assumption: for each ${p>0}$ there exists a constant ${C_p > 0}$ such that for all ${x,x'\in\mathbb{R}}$ we have

    \begin{equation*} |b(x)-b(x')|^p+|\sigma(x)-\sigma(x')|^p+\int_E|F(x,z)-F(x',z)|^p\lambda(dz)\leq C_p |x-x'|^p. \end{equation*}
  2. (ii) The random variable ${X_0}$ is square integrable and independent of B and N.

Assumption 2. (A.2)

  1. (i) The function ${h\,{:}\,\mathbb{R} \longrightarrow \mathbb{R}}$ is increasing and bi-Lipschitz: there exist ${0 < m\leq M}$ such that

    \begin{equation*}\forall\ x\in\mathbb{R},\forall y\in\mathbb{R}, \quad m|x-y|\leq|h(x)-h(y)|\leq M|x-y|.\end{equation*}
  2. (ii) The initial condition ${X_0}$ satisfies ${\mathbb{E} [h(X_0)]\geq0}$ .

Assumption 3. (A.3) There exists ${p>4}$ such that ${X_0\in\mathbb{L}^p}$ (i.e. ${\mathbb{E} [|X_0|^p]<\infty}$ ).

Assumption 4. (A.4) The function h is twice continuously differentiable with bounded derivatives.

2.1. Preliminary results

Consider the function

(2.1) \begin{equation}{H} \,{:}\, \mathbb{R} \times \mathcal{P}_1(\mathbb{R}) \ni (x,\nu) \mapsto \int h(x+z) \nu(dz),\end{equation}

where ${\mathcal{P}_1(\mathbb{R})}$ is the set of probability measures with a finite first-order moment. Let ${\bar{G}_0}$ be the generalized inverse, in the space variable, of H evaluated at 0,

(2.2) \begin{equation}\bar{G}_0 \,{:}\, \mathcal{P}_1(\mathbb{R}) \ni \nu \mapsto \inf \{x \in \mathbb{R} \,{:}\, H(x,\nu) \ge 0 \} ,\end{equation}

and let ${{G}_0}$ denote the positive part of ${\bar{G}_0}$ ,

(2.3) \begin{equation}{G}_0 \,{:}\, \mathcal{P}_1(\mathbb{R}) \ni \nu \mapsto \inf \{x \ge 0 \,{:}\, H(x,\nu) \ge 0 \}.\end{equation}

We start by studying some properties of H and ${G_0}$ .

Lemma 1. Under Assumption 2, we have the following:

  1. (i) For all ${\nu}$ in ${\mathcal{P}_1(\mathbb{R})}$ , the function ${H(\cdot,\nu)\,{:}\, \mathbb{R}\ni x \mapsto H(x,\nu)}$ is bi-Lipschitz:

    (2.4) \begin{equation} \forall x,y \in \mathbb{R}, \quad m|x-y| \le |H(x,\nu)-H(y,\nu)|\le M|x-y|. \end{equation}
  2. (ii) For all x in ${\mathbb{R}}$ , the function ${H(x,\cdot)\,{:}\, \mathcal{P}_1(\mathbb{R})\ni \nu \mapsto H(x,\nu)}$ satisfies the following Lipschitz inequality:

    (2.5) \begin{equation} \forall \nu,\nu' \in \mathcal{P}_1(\mathbb{R}), \quad |H(x,\nu)-H(x,\nu')|\le \left|\int h(x+\cdot) (d\nu-d\nu')\right|. \end{equation}

Proof. Lemma 1 follows directly from the definition (2.1) of H.

Let ${\nu}$ and ${\nu'}$ be two probability measures. The Wasserstein-1 distance between ${\nu}$ and ${\nu'}$ is defined by

\begin{equation*}W_1(\nu,\nu')=\sup_{\varphi\ 1\text{-Lipschitz}}\bigg|\int\varphi\,(d\nu-d\nu')\bigg|=\inf_{X\sim\nu,\ Y\sim\nu'}\mathbb{E}[|X-Y|].\end{equation*}

Thus

(2.6) \begin{equation}\forall\nu,\nu'\in\mathcal{P}_1(\mathbb{R}), \quad |H(x,\nu)-H(x,\nu')|\leq MW_1(\nu,\nu').\end{equation}

According to the Monge–Kantorovitch theorem, the assertion (2.5) implies that for all x in ${\mathbb{R}}$ , the function ${H(x,\cdot)}$ is Lipschitz continuous with respect to the Wasserstein-1 distance. The regularity of ${G_0}$ is then given in the following lemma.

Lemma 2. Under Assumption 2, the function ${{G}_0 \,{:}\,\mathcal{P}_1(\mathbb{R}) \ni \nu \mapsto {G}_0(\nu)}$ is Lipschitz continuous:

(2.7) \begin{equation}|{G}_0(\nu)-{G}_0( \nu') | \le {\frac{1}{m}}\left|\int h({\bar{G}}_0(\nu)+\cdot) (d\nu-d\nu')\right| ,\end{equation}

where ${\bar{G}_0(\nu)}$ is the inverse of ${H(\cdot,\nu) }$ at point 0. In particular,

(2.8) \begin{equation}|{G}_0(\nu)-{G}_0( \nu') | \le {\frac{M}{m}}W_1(\nu, \nu').\end{equation}

Proof. The proof is given in [BCdRGL16, Lemma 2.5].
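In dimension one, the Wasserstein-1 distance between two empirical measures with the same number of atoms has a simple closed form: the optimal coupling is monotone, so it suffices to match the order statistics. The following sketch (our own illustration, with hypothetical names, not part of the paper) makes the estimate (2.8) directly computable for empirical measures:

```python
import numpy as np

def w1_empirical(x, y):
    """Wasserstein-1 distance between the empirical measures of two 1-D
    samples of equal size; in dimension one the optimal coupling matches
    the sorted samples."""
    assert len(x) == len(y)
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(1)
nu  = rng.normal(0.0, 1.0, size=10_000)
nup = rng.normal(0.2, 1.0, size=10_000)
print(w1_empirical(nu, nup))  # close to 0.2: the measures differ by a translation
```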

2.2. Existence and uniqueness of the solution of (1.2)

The set of Assumptions 1–4 will be used as follows:

  1. The existence and uniqueness results are stated under Assumption 1 (the standard assumption for SDEs) and Assumption 2 (the assumption used in [BEH18]).

  2. The convergence of particle systems is proved under Assumption 3.

  3. Some of the results will be improved under the smoothness assumption, Assumption 4.

Firstly, we recall the existence and uniqueness result of [BEH18] in the case of SDEs.

Definition 1. A couple of processes (X, K) is said to be a flat deterministic solution to (1.2) if (X, K) satisfies (1.2) with K a non-decreasing continuous deterministic function such that ${K_0=0}$ .

Given this definition we have the following result.

Theorem 1. Under Assumptions 1 and 2, the mean reflected SDE (1.2) has a unique deterministic flat solution (X, K). Moreover,

(2.9) \begin{equation}\forall t\geq 0, \quad K_t=\sup_{s\leq t} \inf\{x\geq0\,{:}\,\mathbb{E}[h(x+U_s)] \geq 0\}=\sup\limits_{s\le t} {G}_0(\mu_{s}),\end{equation}

where ${(U_t)_{0\leq t\leq T}}$ is the process defined by

(2.10) \begin{equation}U_t=X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz)\end{equation}

and ${(\mu_{t})_{0\le t\le T}}$ is the family of marginal laws of ${(U_{t})_{0\le t\le T}}$ .
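Before turning to the proof, here is a simple illustration (ours, not the paper's): take the hypothetical linear constraint ${h(x)=x-c}$ for some level ${c\in\mathbb{R}}$ with ${\mathbb{E}[X_0]\geq c}$ , so that Assumption 2 holds with ${m=M=1}$ . Then ${H(x,\mu_s)=x+\mathbb{E}[U_s]-c}$ , and (2.9) becomes

\begin{equation*}\bar{G}_0(\mu_s)=c-\mathbb{E}[U_s], \qquad K_t=\sup_{s\le t}\big(c-\mathbb{E}[U_s]\big)^+,\end{equation*}

i.e. the reflection pushes the mean of the position upwards exactly when the mean of the unreflected process U falls below the level c.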

Proof. We refer to [Reference Briand, Elie and HuBEH18] for the proof in the case of continuous backward SDEs. We present here the proof of the forward case with jumps.

Let us consider the set ${\mathcal{C}^2=\{X\ \mathcal{F}\mbox{-adapted}\ \mbox{c\`{a}dl\`{a}g},\ \mathbb{E}(\sup_{t\leq T}|X_t|^2)<\infty\}}$ , and let ${\hat{X}\in \mathcal{C}^2}$ be a given process. We define

\begin{equation*}\hat U_t=X_0+\int_0^t b(\hat X_{s^-}) ds + \int_0^t \sigma(\hat X_{s^-}) dB_s + \int_0^t\int_E F(\hat X_{s^-},z) \tilde{N}(ds,dz),\end{equation*}

and the function K given by

(2.11) \begin{equation}K_t=\sup_{s\leq t} \inf\{x\geq0\,{:}\,\mathbb{E}[h(x+\hat U_s)] \geq 0\}=\sup\limits_{s\le t} {G}_0(\hat\mu_{s}).\end{equation}

Let us introduce the process X:

\begin{equation*}X_t=X_0+\int_0^t b(\hat X_{s^-}) ds + \int_0^t \sigma(\hat X_{s^-}) dB_s + \int_0^t\int_E F(\hat X_{s^-},z) \tilde{N}(ds,dz)+K_t,\end{equation*}

where K is given by (2.11). We check that (X, K) is the solution to (1.2) with U replaced by ${\hat{U}}$ . First, by the definition of K, we have ${\mathbb{E}[h(X_t)] \geq 0}$ , ${K_t=G_0(\hat{\mu}_t)}$ ${dK_t}$ -a.e., and ${G_0(\hat{\mu}_t)>0}$ ${dK_t}$ -a.e. Then we obtain

\begin{align*}\int_0^t \mathbb{E}[h(X_s)] dK_s&=\int_0^t \mathbb{E}[h(\hat U_s+K_s)] dK_s\\[3pt] &= \int_0^t \mathbb{E}[h(\hat U_s+G_0(\hat{\mu}_s))] dK_s\\[3pt] &= \int_0^t \mathbb{E}[h(\hat U_s+G_0(\hat{\mu}_s))] \textbf{1}_{G_0(\hat{\mu}_s)>0} dK_s.\end{align*}

Moreover, since h is continuous, we have ${\mathbb{E}[h(\hat U_s+G_0(\hat{\mu}_s))]=0}$ as soon as ${G_0(\hat{\mu}_s)>0}$ , so that

\begin{equation*}\int_0^t \mathbb{E}[h(X_s)] dK_s=0.\end{equation*}

Second, consider the map ${\Phi\,{:}\,\mathcal{C}^2\longrightarrow \mathcal{C}^2}$ which associates to ${\hat X}$ the process X defined above. Let us prove that ${\Phi}$ is a contraction. Using the same Brownian motion and Poisson measure, we consider ${\hat X, \hat X'\in \mathcal{C}^2}$ and the associated K, K′ defined by (2.11). From Assumption 1, and by using the Cauchy–Schwarz and Doob inequalities, we get

\begin{equation*} \begin{aligned}&\mathbb{E}\bigg[\sup_{t\leq T}|X_t-X'_t|^2\bigg] \\[3pt] &\quad \leq 4 \mathbb{E}\bigg[\sup_{t\leq T}\Bigg\{\bigg|\int_0^t \bigg(b(\hat X_{s^-})-b(\hat X'_{s^-})\bigg) ds\bigg|^2 + \bigg|\int_0^t \bigg(\sigma(\hat X_{s^-})-\sigma(\hat X'_{s^-})\bigg) dB_s\bigg|^2 \\[3pt]&\qquad + \bigg|\int_0^t\int_E \bigg(F(\hat X_{s^-},z)-F(\hat X'_{s^-},z)\bigg) \tilde{N}(ds,dz)\bigg|^2 + |K_t-K'_t|^2\Bigg\}\bigg]\\[3pt]&\quad \leq 4 \Bigg\{\mathbb{E}\bigg[\sup_{t\leq T}t\int_0^t \Big|b(\hat X_{s^-})-b(\hat X'_{s^-})\Big|^2 ds\bigg] + \mathbb{E}\bigg[\sup_{t\leq T}\bigg|\int_0^t \bigg(\sigma(\hat X_{s^-})-\sigma(\hat X'_{s^-})\bigg) dB_s\bigg|^2\bigg] \\[3pt]&\qquad + \mathbb{E}\bigg[\sup_{t\leq T}\bigg|\int_0^t\int_E \bigg(F(\hat X_{s^-},z)-F(\hat X'_{s^-},z)\bigg) \tilde{N}(ds,dz)\bigg|^2\bigg] + \sup_{t\leq T}|K_t-K'_t|^2\Bigg\} \\[3pt]&\quad \leq C \Bigg\{T\mathbb{E}\bigg[\int_0^T \Big|b(\hat X_{s^-})-b(\hat X'_{s^-})\Big|^2 ds\bigg]+ \mathbb{E}\bigg[\int_0^T \Big|\sigma(\hat X_{s^-})-\sigma(\hat X'_{s^-})\Big|^2ds\bigg] \\[3pt]&\qquad + \int_0^T\int_E\mathbb{E}\bigg[\Big|F(\hat X_{s^-},z)-F(\hat X'_{s^-},z)\Big|^2\bigg] \lambda(dz)ds + \sup_{t\leq T}|K_t-K'_t|^2\Bigg\}\\[3pt]&\quad \leq C\Bigg\{T^2C_1\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t^-}-\hat X'_{t^-}|^2\bigg]+ TC_1\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t^-}-\hat X'_{t^-}|^2\bigg] \\[3pt]&\qquad + TC_1\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t^-}-\hat X'_{t^-}|^2\bigg] + \sup_{t\leq T}|K_t-K'_t|^2\Bigg\}\\[3pt]&\quad \leq C\bigg(T^2C_1+TC_2\bigg)\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t}-\hat X'_{t}|^2\bigg]+C\sup_{t\leq T}|K_t-K'_t|^2. \end{aligned}\end{equation*}

From the representation (2.11) of the process K and Lemma 2, we have that

\begin{equation*} \begin{aligned} \sup_{t\leq T}|K_t-K'_t|^2 & \leq \frac{M^2}{m^2} \mathbb{E}\bigg[\sup_{t\leq T}|\hat U_t-\hat U'_t|^2\bigg]\\[3pt] &\leq C(T^2C_1+TC_2)\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t}-\hat X'_{t}|^2\bigg]. \end{aligned}\end{equation*}

This leads to

\begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{t\leq T}|X_t-X'_t|^2\bigg] & \leq C(1+T)T\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t}-\hat X'_{t}|^2\bigg]. \end{aligned}\end{equation*}

Therefore, there exists a positive ${\mathcal{T}}$ , depending only on b, ${\sigma}$ , F, and h, such that for all ${T <\mathcal{T}}$ , the map ${\Phi}$ is a contraction. Consequently, we get the existence and uniqueness of a solution on ${[0, \mathcal{T}]}$ , and by iterating the construction the result is extended to ${\mathbb{R}^+}$ .

2.3. Regularity results on K, X, and U

Remark 1. In view of this construction, we derive that for all ${0\leq s< t}$ ,

\begin{equation*} \begin{aligned} &K_{t}-K_{s}\\ &= \sup\limits_{s\le r\le t} \inf \left\{x\ge 0 \,{:}\, \mathbb{E}\left[ h \left( x+X_s+\int_s^{r} b(X_{u^-}) du +\int_s^{r} \sigma(X_{u^-}) dB_u\right.\right.\right.\\ &\quad\qquad \left.\left.\left.+ \int_s^{r}\int_E F(X_{u^-},z) \tilde{N}(du,dz) \right)\right] \geq 0\right\}\!. \end{aligned}\end{equation*}

Proof. From the representation (2.9) of the process K, we have

\begin{equation*}\begin{aligned}K_t&=\sup_{r\leq t} G_0(U_r)=\max\bigg\{\sup_{r\leq s}G_0(U_r),\sup_{s\leq r\leq t} G_0(U_r)\bigg\}\\[3pt] &=\max\bigg\{K_s,\sup_{s\leq r\leq t} G_0(U_r)\bigg\}\\[3pt] &=\max\bigg\{K_s,\sup_{s\leq r\leq t} G_0(X_s-K_s+U_r-U_s)\bigg\}\\[3pt] &=\max\bigg\{K_s,\sup_{s\leq r\leq t} \bigg[\bar{G_0}(X_s-K_s+U_r-U_s)^+\bigg]\bigg\}.\end{aligned}\end{equation*}

By the definition of ${\bar{G_0}}$ , we observe that for all ${y\in\mathbb{R}}$ , ${\bar{G_0}(X+y)=\bar{G_0}(X)-y}$ , so we get

\begin{equation*}\begin{aligned}K_t&=\max\bigg\{K_s,\sup_{s\leq r\leq t} \bigg[\bigg(K_s+\bar{G_0}(X_s+U_r-U_s)\bigg)^+\bigg]\bigg\}\\[3pt] &=K_s+\max\bigg\{0,\sup_{s\leq r\leq t} \bigg[\bigg(K_s+\bar{G_0}(X_s+U_r-U_s)\bigg)^+-K_s\bigg]\bigg\}.\end{aligned}\end{equation*}

Note that ${\sup_r (f(r)^+)=(\sup_r f(r))^+=\max(0,\sup_r f(r))}$ for any function f, so that

\begin{equation*}\begin{aligned}K_t&=K_s+\sup_{s\leq r\leq t} \bigg[\bigg\{\bigg(K_s+\bar{G_0}(X_s+U_r-U_s)\bigg)^+-K_s\bigg\}^+\bigg]\\[3pt] &=K_s+\sup_{s\leq r\leq t}\bigg[\bigg(\bar{G_0}(X_s+U_r-U_s)\bigg)^+\bigg]\\[3pt] &=K_s+\sup_{s\leq r\leq t}G_0(X_s+U_r-U_s),\end{aligned}\end{equation*}

so

\begin{equation*}\begin{aligned}K_t-K_s=\sup_{s\leq r\leq t}G_0(X_s+U_r-U_s).\end{aligned}\end{equation*}

Proposition 1. Suppose that Assumptions 1 and 2 hold. Then, for every ${p\geq 2}$ , there exists a positive constant ${K_p}$ , depending on T, b, ${\sigma}$ , F, and h, such that

  1. (i) ${\mathbb{E}\big[\sup_{t\leq T}|X_t|^p\big]\leq K_p\big(1+\mathbb{E}\big[|X_0|^p\big]\big)}$ , and

  2. (ii) for all ${0\leq s\leq t\leq T,\quad \mathbb{E}\big[\sup_{s\leq u\leq t}|X_u|^p|\mathcal{F}_s\big]\leq C\big(1+|X_s|^p\big).}$

Remark 2. Under the same conditions, we conclude that

\begin{equation*}\mathbb{E}\big[\sup_{t\leq T}|U_t|^p\big]\leq K_p\big(1+\mathbb{E}\big[|X_0|^p\big]\big).\end{equation*}

Proof of (i). We have

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\sup_{t\leq T}|X_t|^p\bigg]&\leq 5^{p-1}\Bigg\{\mathbb{E}|X_0|^p+\mathbb{E} \sup_{t\leq T}\bigg(\int_0^t |b(X_{s^-})| ds\bigg)^p + \mathbb{E} \sup_{t\leq T}\bigg|\int_0^t \sigma(X_{s^-}) dB_s\bigg|^p \\[3pt] &\quad +\mathbb{E} \sup_{t\leq T}\bigg|\int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz)\bigg|^p + K_T^p\Bigg\}.\end{aligned}\end{equation*}

The last term ${K_T =\sup_{t\leq T}G_0(\mu_t)}$ is studied first. By using the Lipschitz property of ${G_0}$ from Lemma 2 and the definition of the Wasserstein metric, we have

\begin{equation*}\forall t\geq 0, \quad |G_0(\mu_t)|\leq \frac{M}{m}\mathbb{E}[|U_t-U_0|],\end{equation*}

since ${G_0(\mu_0)=0}$ (as ${\mathbb{E}[h(X_0)]\geq 0}$ ), where U is defined by (2.10). Therefore

\begin{equation*}\begin{aligned}|K_T|^p=|\sup_{t\leq T}G_0(\mu_t)|^p&\leq 3^{p-1}\bigg(\frac{M}{m}\bigg)^p\Bigg\{\mathbb{E} \sup_{t\leq T}\bigg(\int_0^t |b(X_{s^-})| ds\bigg)^p + \mathbb{E} \sup_{t\leq T}\bigg|\int_0^t \sigma(X_{s^-}) dB_s\bigg|^p \\[3pt] &\quad + \mathbb{E} \sup_{t\leq T}\bigg|\int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz)\bigg|^p\Bigg\},\end{aligned}\end{equation*}

and so

\begin{equation*}\begin{aligned}\mathbb{E}\big[\sup_{t\leq T}|X_t|^p\big]&\leq C(p,M,m)\mathbb{E}\bigg[|X_0|^p+\sup_{t\leq T}\bigg(\int_0^t |b(X_{s^-})| ds\bigg)^p + \sup_{t\leq T}\bigg|\int_0^t \sigma(X_{s^-}) dB_s\bigg|^p \\[3pt] &\quad + \sup_{t\leq T}\bigg|\int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz)\bigg|^p\bigg].\end{aligned}\end{equation*}

Hence, using Assumption 1 and the Cauchy–Schwarz, Doob, and Burkholder–Davis–Gundy (BDG) inequalities yields

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\sup_{t\leq T}|X_t|^p\bigg]&\le C\Bigg\{\mathbb{E}\bigg[|X_0|^p\bigg] + T^{p-1}\mathbb{E}\bigg[\int_0^T (1+|X_{s^-}|)^p ds\bigg] + C_1\mathbb{E}\bigg[\bigg(\int_0^T (1+|X_{s^-}|)^2 ds\bigg)^{p/2}\bigg] \\[3pt]&\quad + C_2\mathbb{E}\bigg[\int_0^T (1+|X_{s^-}|)^p ds\bigg]\Bigg\}\\[3pt] &\le C_1\bigg(1+\mathbb{E}|X_0|^p\bigg)+C_2\int_{0}^{T}\mathbb{E}\bigg[\sup_{t\leq r}|X_t|^p\bigg]dr,\end{aligned}\end{equation*}

and from Gronwall’s lemma, we can conclude that for all ${p\geq 2}$ , there exists a positive constant ${K_p}$ , depending on T, b, ${\sigma}$ , F, and h, such that

\begin{equation*}\mathbb{E}\big[\sup_{t\leq T}|X_t|^p\big]\leq K_p\big(1+\mathbb{E}\big[|X_0|^p\big]\big).\end{equation*}

Proof of (ii). We start from the decomposition

\begin{equation*}\begin{aligned} X_u &=U_u+K_u\\[2pt]&=X_s+(U_u-U_s)+(K_u-K_s)\\[2pt]&=X_s+\int_s^u b(X_{r^-}) dr + \int_s^u \sigma(X_{r^-}) dB_r + \int_s^u\int_E F(X_{r^-},z) \tilde{N}(dr,dz)\\[2pt]&\quad + (K_u-K_s).\end{aligned}\end{equation*}

Let us define ${\mathbb{E}_s[\cdotp]=\mathbb{E}[\cdotp|\mathcal{F}_s]}$ . Then we get

\begin{equation*}\begin{aligned}&\mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^p\bigg]\\[2pt]&\quad \leq 5^{p-1}\Bigg\{\mathbb{E}_s\bigg[|X_s|^p\bigg]+\mathbb{E}_s\bigg[ \sup_{s\leq u\leq t}\bigg|\int_s^u b(X_{r^-}) dr\bigg|^p\bigg] + \mathbb{E}_s \bigg[\sup_{s\leq u\leq t}\bigg|\int_s^u \sigma(X_{r^-}) dB_r\bigg|^p\bigg] \\[2pt]&\qquad+ \mathbb{E}_s \bigg[\sup_{s\leq u\leq t}\bigg|\int_s^u\int_E F(X_{r^-},z) \tilde{N}(dr,dz)\bigg|^p\bigg] +\bigg|K_t-K_s\bigg|^p\Bigg\}\\[2pt]&\quad\leq C\Bigg\{|X_s|^p+T^{p-1}\int_s^t \mathbb{E}_s\bigg[\bigg|b(X_{r^-})\bigg|^p\bigg] dr+ \int_s^t \mathbb{E}_s\bigg[\bigg|\sigma(X_{r^-})\bigg|^p\bigg] dr \\[2pt] &\qquad+ \int_s^t\int_E \mathbb{E}_s\bigg[\bigg|F(X_{r^-},z)\bigg|^p\bigg]\lambda(dz) dr +2\bigg|K_T\bigg|^p\Bigg\}\\[2pt] &\quad\leq C(T)\Bigg\{|X_s|^p+C_1\int_s^t \mathbb{E}_s\bigg[1+|X_{r^-}|^p\bigg]dr+2\bigg|K_T\bigg|^p\Bigg\}\\[3pt]& \quad\leq C_1(1+|X_s|^p)+C_2\int_s^t \mathbb{E}_s[\sup_{s\leq u \leq r}|X_{u^-}|^p]dr.\end{aligned}\end{equation*}

Finally, from Gronwall’s lemma, we deduce that for all ${0\leq s\leq t\leq T}$ , there exists a constant C, depending on p, T, b, ${\sigma}$ , F, and h, such that

\begin{equation*}\mathbb{E}\big[\sup_{s\leq u\leq t}|X_u|^p|\mathcal{F}_s\big]\leq C\big(1+|X_s|^p\big).\end{equation*}

Proposition 2. Let ${p\geq2}$ and let Assumptions 1, 2, and 3 hold. There exists a constant C depending on p, T, b, ${\sigma}$ , F, and h such that the following hold:

  1. (i) ${\forall\ 0\leq s < t\leq T,\quad |K_t-K_s|\leq C|t-s|^{1/2}.}$

  2. (ii) ${\forall\ 0\leq s\leq t\leq T,\quad \mathbb{E}\big[|U_t-U_s|^p\big]\leq C|t-s|.}$

  3. (iii) ${\forall\ 0\leq r < s < t\leq T,\quad \mathbb{E}[|U_s-U_r|^p|U_t-U_s|^p]\leq C|t-r|^2.}$

Remark 3. Under the same conditions, we conclude that

\begin{equation*}\forall\ 0\leq s\leq t\leq T,\quad \mathbb{E}\big[|X_t-X_s|^p\big]\leq C|t-s|.\end{equation*}

Proof of (i). Let us recall that, for any process X,

\begin{equation*}\bar{G_0}(X)=\inf\{x\in\mathbb{R}\,{:}\,\mathbb{E}[h(x+ X)] \geq 0\},\end{equation*}
\begin{equation*}G_0(X)=(\bar{G_0}(X))^+=\inf\{x\geq 0\,{:}\,\mathbb{E}[h(x+X)] \geq 0\}.\end{equation*}

From Remark 1, we have

(2.12) \begin{equation}\begin{aligned}K_t-K_s=\sup_{s\leq r\leq t}G_0(X_s+U_r-U_s).\end{aligned}\end{equation}

Hence, from the previous representation of ${K_t-K_s}$ , we deduce the ${\frac{1}{2}}$ -Hölder property of the function ${t\longmapsto K_t}$ . Indeed, since by definition ${G_0(X_s)=0}$ , if ${s<t}$ , by using Lemma 2, we have

\begin{equation*}\begin{aligned}|K_t-K_s|&=\sup_{s\leq r\leq t}G_0(X_s+U_r-U_s)\\[3pt]&=\sup_{s\leq r\leq t}[G_0(X_s+U_r-U_s)-G_0(X_s)]\\[3pt]&\leq\frac{M}{m}\sup_{s\leq r\leq t}\mathbb{E}[|U_r-U_s|],\end{aligned}\end{equation*}

and so

\begin{equation*}\begin{aligned}|K_t-K_s|&\leq C\Bigg\{\mathbb{E} \bigg[\sup_{s\leq r\leq t}\bigg|\int_s^r b(X_{u^-}) du\bigg|\bigg] + \bigg(\mathbb{E} \bigg[\sup_{s\leq r\leq t}\bigg|\int_s^r \sigma(X_{u^-}) dB_u\bigg|^2\bigg]\bigg)^{1/2}\\[3pt]&\qquad + \bigg(\mathbb{E} \bigg[\sup_{s\leq r\leq t}\bigg|\int_s^r\int_E F(X_{u^-},z) \tilde{N}(du,dz)\bigg|^2\bigg]\bigg)^{1/2}\Bigg\}\\[3pt]&\quad\leq C\Bigg\{\int_s^t\mathbb{E}\bigg[\bigg|b(X_{u^-})\bigg|\bigg]du+\bigg(\mathbb{E}\bigg[\int_s^t\bigg|\sigma(X_{u^-})\bigg|^2du\bigg]\bigg)^{1/2}\\[3pt]&\qquad +\bigg(\mathbb{E} \bigg[\int_s^t\int_E \bigg|F(X_{u^-},z) \bigg|^2\lambda(dz)du\bigg]\bigg)^{1/2}\Bigg\}\\[3pt]&\quad \leq C\Bigg\{|t-s|\mathbb{E}\bigg[1+\sup_{u\leq T}|X_u|\bigg]+|t-s|^{1/2}\bigg(\mathbb{E}\bigg[1+\sup_{u\leq T}|X_u|^2\bigg]\bigg)^{1/2}\Bigg\}.\end{aligned}\end{equation*}

Therefore, if ${X_0\in \mathbb{L}^p}$ for some ${p\geq2}$ , it follows from Proposition 1 that

\begin{equation*}\begin{aligned}|K_t-K_s|\leq C|t-s|^{1/2}.\end{aligned}\end{equation*}

Proof of (ii).

\begin{align*} \mathbb{E}\bigg[|U_t-U_s|^p\bigg]&\leq 4^{p-1}\mathbb{E}\Bigg[\bigg(\int_s^t |b(X_{r^-})| dr\bigg)^p + \bigg|\int_s^t \sigma(X_{r^-}) dB_r\bigg|^p \\[3pt]&\quad +\bigg|\int_s^t\int_E F(X_{r^-},z) \tilde{N}(dr,dz)\bigg|^p \Bigg]\\[3pt] & \leq C\sup\limits_{0\le r\le t}\mathbb{E}\bigg[\bigg(\int_s^r|b(X_{u^-})| du\bigg)^p + \bigg|\int_s^r \sigma(X_{u^-}) dB_u\bigg|^p \\[3pt]&\quad +\bigg|\int_s^r\int_E F(X_{u^-},z) \tilde{N}(du,dz)\bigg|^p\bigg]\\[3pt] & \leq C\Bigg\{|t-s|^{p-1}\mathbb{E}\bigg[\int_s^t (1+|X_{u^-}|)^p du\bigg] + C_1\mathbb{E}\bigg[\bigg(\int_s^t (1+|X_{u^-}|)^2 du\bigg)^{p/2}\bigg] \\[3pt] &\quad + C_2\mathbb{E}\bigg[\int_s^t (1+|X_{u^-}|)^p du\bigg]\Bigg\}\\[3pt] &\leq C_1\mathbb{E}\bigg[1+\sup_{t\leq T}|X_t|^p\bigg]|t-s|^p+C_2\mathbb{E}\bigg[\bigg(1+\sup_{t\leq T}|X_t|^2\bigg)^{p/2}\bigg]|t-s|^{p/2}\\[3pt]&\quad +C_3\mathbb{E}\bigg[1+\sup_{t\leq T}|X_t|^p\bigg]|t-s|. \end{align*}

Finally, if ${X_0\in \mathbb{L}^p}$ for some ${p\geq2}$ , we conclude that there exists a constant C, depending on p, T, b, ${\sigma}$ , F, and h, such that

\begin{equation*}\forall\ 0\leq s\leq t\leq T,\quad \mathbb{E}\big[|U_t-U_s|^p\big]\leq C|t-s|.\end{equation*}

Proof of (iii). Let ${0\leq r<s<t\leq T}$ . We have

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[|U_s-U_r|^p|U_t-U_s|^p\bigg]&\leq \mathbb{E}\bigg[|U_s-U_r|^p\mathbb{E}_s[|U_t-U_s|^p]\bigg]\\[3pt]&\leq C\mathbb{E}\Bigg[|U_s-U_r|^p\Bigg\{\mathbb{E}_s\bigg[\bigg|\int_s^t b(X_{u^-}) du\bigg|^p\bigg]+\mathbb{E}_s\bigg[\bigg|\int_s^t \sigma(X_{u^-}) dB_u\bigg|^p\bigg]\\[3pt]& \quad +\mathbb{E}_s\bigg[\bigg|\int_s^t\int_E F(X_{u^-},z) \tilde{N}(du,dz)\bigg|^p\bigg]\Bigg\}\Bigg].\end{aligned}\end{equation*}

Then, from the Burkholder–Davis–Gundy inequality, we get

\begin{equation*}\begin{aligned}&\mathbb{E}\bigg[|U_s-U_r|^p|U_t-U_s|^p\bigg]\\[3pt] &\quad \leq C\mathbb{E}\Bigg[|U_s-U_r|^p\Bigg\{\mathbb{E}_s\bigg[\bigg|\int_s^t b(X_{u^-}) du\bigg|^p\bigg]+\bigg(\mathbb{E}_s\bigg[\int_s^t \bigg|\sigma(X_{u^-})\bigg|^2 du\bigg]\bigg)^{p/2}\\[3pt]&\qquad +\mathbb{E}_s\bigg[\int_s^t\int_E \bigg|F(X_{u^-},z)\bigg|^p \lambda(dz)du\bigg]\Bigg\}\Bigg]\\[3pt]&\quad\leq C\mathbb{E}\Bigg[|U_s-U_r|^p\Bigg\{|t-s|^p\bigg(1+\mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^p\bigg]\bigg)\\[3pt]&\quad\quad+|t-s|^{p/2}\bigg(1+\mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^2\bigg]^{p/2}\bigg)+|t-s|\bigg(1+\mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^p\bigg]\bigg)\Bigg\}\Bigg]\\[3pt]&\quad \leq C\mathbb{E}\Bigg[|U_s-U_r|^p\Bigg\{|t-s|\bigg(1+\mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^p\bigg]\bigg)\Bigg\}\Bigg];\end{aligned}\end{equation*}

thus, from (i) and Proposition 1, we obtain

\begin{equation*}\begin{aligned}\mathbb{E}&\Big[|U_s-U_r|^p|U_t-U_s|^p\Big]\\[3pt]&\quad\leq C_1|t-s|\mathbb{E}\Big[|U_s-U_r|^p\Big]+C_2|t-s|\mathbb{E}\Big[|U_s-U_r|^p|X_s|^p\Big]\\[3pt]&\quad\leq C_1|t-s||s-r|+C_2|t-s|\mathbb{E}\Big[|U_s-U_r|^p\Big(|X_s-X_r|^p+|X_r|^p\Big)\Big]\\[3pt]&\quad\leq C_1|t-r|^2+C_2|t-s|\mathbb{E}\Big[2^{p-1}|U_s-U_r|^{p}\Big(|U_s-U_r|^p+|K_s-K_r|^p\Big)\Big]\\[3pt]&\qquad+C_3|t-s|\mathbb{E}\Big[|U_s-U_r|^p|X_r|^p\Big]\\[3pt]&\quad\leq C_1|t-r|^2+C_2|t-s|\mathbb{E}\Big[|U_s-U_r|^{2p}\Big]+C_3|t-s||s-r|^{p/2}\mathbb{E}\Big[|U_s-U_r|^{p}\Big]\\[3pt]&\qquad+C_4|t-s|\mathbb{E}\Big[|U_s-U_r|^p|X_r|^p\Big]\\[3pt]&\quad\leq C_1|t-r|^2+C_4|t-s|\mathbb{E}\Big[|X_r|^p\mathbb{E}_r[|U_s-U_r|^p]\Big].\end{aligned}\end{equation*}

Following the proof of (ii), we can also get

\begin{equation*}\begin{aligned}\mathbb{E}_r[|U_s-U_r|^p]\leq C|s-r|\bigg(1+\mathbb{E}_r\bigg[\sup_{r\leq u\leq s}|X_u|^p\bigg]\bigg).\end{aligned}\end{equation*}

Then

\begin{equation*}\begin{aligned}\mathbb{E}\big[|U_s-U_r|^p|U_t-U_s|^p\big]&\leq C_1|t-r|^2+C_2|t-s||s-r|\mathbb{E}\bigg[|X_r|^p\Big(1+\mathbb{E}_r\Big[\sup_{r\leq u\leq s}|X_u|^p\Big]\Big)\bigg]\\[3pt]&\leq C_1|t-r|^2+C_2|t-r|^2\mathbb{E}\Bigg[|X_r|^p\Big(1+\sup_{r\leq u\leq s}|X_u|^p\Big)\Bigg].\end{aligned}\end{equation*}

Under Assumption 3, we conclude that

\begin{equation*}\mathbb{E}[|U_s-U_r|^p|U_t-U_s|^p]\leq C|t-r|^2 \qquad \forall\ 0\leq r<s<t\leq T.\end{equation*}

2.4. Density of K

Consider the second-order linear partial differential operator ${\mathcal{L}}$ defined by

(2.13) \begin{align}\mathcal{L}f(x)&:= b(x)\frac{\partial}{\partial x}f(x)+\frac{1}{2}\sigma\sigma^*(x)\frac{\partial^2}{\partial x^2}f(x) \nonumber\\[3pt]&\quad +\int_E\big(\,f\big(x+F(x,z)\big)-f(x)-F(x,z)f'(x)\big)\lambda(dz),\end{align}

for any twice continuously differentiable function f.

Proposition 3. Suppose Assumptions 1, 2, and 4 hold. Let (X, K) be the unique deterministic flat solution to (1.2). Then the process K is Lipschitz continuous, and the Stieltjes measure dK has the following density:

(2.14) \begin{equation}k\,{:}\,\mathbb{R}^+\ni t\longmapsto\frac{(\mathbb{E}[\mathcal{L}h(X_{t^-})])^-}{\mathbb{E}[h'(X_{t^-})]}{\bf{1}}_{\mathbb{E}[h(X_t)]=0}.\end{equation}

Let us admit for the moment the following results that will be useful for our proof.

Lemma 3. The functions ${t\longmapsto\mathbb{E}\left[h(X_t)\right]}$ and ${t\longmapsto\mathbb{E} \left[\mathcal{L}h(X_t)\right]}$ are continuous.

Lemma 4. If ${\varphi}$ is a continuous function such that, for some ${C\geq 0}$ and ${p\geq 1}$ ,

\begin{equation*}\forall\ x\in\mathbb{R}, \quad |\varphi(x)|\leq C(1+|x|^p),\end{equation*}

then the function ${t\longmapsto\mathbb{E} [\varphi(X_t)]}$ is continuous.

The proof of Lemma 3 is given later in this section, and that of Lemma 4 is given in Appendix A. We may now proceed to the proof of Proposition 3.

Proof. Firstly, we prove that K is Lipschitz continuous. In order to do this, we first prove that ${s\longmapsto\bar{G}_0(\mu_{s})}$ is Lipschitz continuous on [0,T]. From the definition of ${\bar{G}_0}$ , we have ${H(\bar{G}_0(\mu_{t}),\mu_t)=0}$ , and by using (2.4), if ${s<t}$ , we get

\begin{equation*}\begin{aligned}&|\bar{G}_0(\mu_{s})-\bar{G}_0(\mu_{t})|\\[3pt]&\quad \leq \frac{1}{m}|H(\bar{G}_0(\mu_{s}),\mu_t)-H(\bar{G}_0(\mu_{t}),\mu_t)|\\[3pt]&\quad =\frac{1}{m}|H(\bar{G}_0(\mu_{s}),\mu_t)|\\[3pt]&\quad =\frac{1}{m}|\mathbb{E}[h(\bar{G}_0(\mu_{s})+U_t)]|\\[3pt]&\quad=\frac{1}{m}\bigg|\mathbb{E}\bigg[h\bigg(\bar{G}_0(\mu_{s})+U_s+\int_{s}^{t}b(X_{r^-})dr+\int_{s}^{t}\sigma(X_{r^-})dB_r+\int_{s}^{t}\int_E F(X_{r^-},z)\tilde{N}(dr,dz)\bigg)\bigg]\bigg|.\end{aligned}\end{equation*}

From Itô’s formula, we obtain

\begin{equation*}\begin{aligned}&h(\bar{G}_0(\mu_{s})+U_t)\\[3pt]&\quad =h(\bar{G}_0(\mu_{s})+U_s)+\int_s^tb\big(X_{r^-}\big)h'\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)dr+\int_s^t\sigma\big(X_{r^-}\big)h'\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)dB_r\\[3pt]&\qquad +\int_s^t\int_E F\big(X_{r^-},z\big)h'\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)\tilde{N}(dr,dz)+\frac{1}{2}\int_s^t\sigma^2\big(X_{r^-}\big)h''\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)dr\\[3pt]&\qquad +\int_s^t\int_E m(r,z)\lambda(dz)dr+\int_s^t\int_E m(r,z) \tilde{N}(dr,dz),\end{aligned}\end{equation*}

with

\begin{equation*}m(r,z)=\big(h\big(\bar{G}_0(\mu_{s})+U_{r^-}+F(X_{r^-},z)\big)-h\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)-F\big(X_{r^-},z\big)h'\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)\big).\end{equation*}

This yields

\begin{equation*}\begin{aligned}h(\bar{G}_0(\mu_{s})+U_t)&=h(\bar{G}_0(\mu_{s})+U_s)+\int_s^t\bar{\mathcal{L}}_{X_{r^-}}h(\bar{G}_0(\mu_{s})+U_{r^-})dr\\[3pt] &\quad+\int_s^t\sigma(X_{r^-})h'(\bar{G}_0(\mu_{s})+U_{r^-})dB_r\\[3pt]&\quad+\int_s^t\int_E\bigg(h\big(\bar{G}_0(\mu_{s})+U_{r^-}+F(X_{r^-},z)\big)-h\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)\bigg)\tilde{N}(dr,dz),\end{aligned}\end{equation*}

where

\begin{equation*}\bar{\mathcal{L}}_yf(x):= b(y)\frac{\partial}{\partial x}f(x)+\frac{1}{2}\sigma\sigma^*(y)\frac{\partial^2}{\partial x^2}f(x)+\int_E\big(f\big(x+F(y,z)\big)-f(x)-F(y,z)f'(x)\big)\lambda(dz).\end{equation*}

Therefore,

\begin{equation*}\begin{aligned}\mathbb{E}[h(\bar{G}_0(\mu_{s})+U_t)]&=\mathbb{E}[h(\bar{G}_0(\mu_{s})+U_s)]+\int_s^t\mathbb{E}[\bar{\mathcal{L}}_{X_{r^-}}h(\bar{G}_0(\mu_{s})+U_{r^-})]dr\\[3pt]&=H(\bar{G}_0(\mu_{s}),\mu_s)+\int_s^t\mathbb{E}[\bar{\mathcal{L}}_{X_{r^-}}h(\bar{G}_0(\mu_{s})+U_{r^-})]dr\\[3pt]&=\int_s^t\mathbb{E}[\bar{\mathcal{L}}_{X_{r^-}}h(\bar{G}_0(\mu_{s})+U_{r^-})]dr.\end{aligned}\end{equation*}

Consequently, the result immediately follows from the fact that h has bounded derivatives and ${\sup_{s\leq T}|X_s|}$ is a square integrable random variable for each ${T > 0}$ (see Proposition 1).

Finally, we deduce that K is Lipschitz continuous and so has a bounded density on [0, T] for each ${T > 0}$ (see Proposition 2.7 in [BCdRGL16] for more details).

Secondly, let us find the density of the measure dK. For all ${0\leq s\leq t\leq T}$ , we have

\begin{equation*}\begin{aligned}X_{t} &=X_s+\int_s^{t} \big(b(X_{r^-})-\int_E F(X_{r^-},z)\lambda(dz)\big) dr + \int_s^{t} \sigma(X_{r^-}) dB_r + \int_s^{t}\int_E F(X_{r^-},z) N(dr,dz) \\&\quad + K_{t}-K_s.\end{aligned}\end{equation*}

Under Assumption 4 and thanks to Itô’s formula we get

\begin{equation*}\begin{aligned}&h(X_{t})-h(X_s)\\ &\quad=\int_s^{t}b(X_{r^-})h'(X_{r^-}) dr+\int_s^{t} \sigma(X_{r^-})h'(X_{r^-}) dB_r+\int_s^{t}\int_E F(X_{r^-},z)h'(X_{r^-}) \tilde{N}(dr,dz)\\[3pt] &\qquad+\int_s^{t} h'(X_{r^-}) dK_r+\frac{1}{2}\int_s^{t} \sigma^2(X_{r^-})h''(X_{r^-}) dr\\[3pt] &\qquad +\int_s^{t}\int_E\big(h\big(X_{r^-}+F(X_{r^-},z)\big)-h(X_{r^-})-F(X_{r^-},z)h'(X_{r^-})\big)N(dr,dz)\\[3pt]&= \int_s^{t}b(X_{r^-})h'(X_{r^-}) dr+\int_s^{t} \sigma(X_{r^-})h'(X_{r^-}) dB_r+\int_s^{t}\int_E F(X_{r^-},z)h'(X_{r^-}) \tilde{N}(dr,dz)\\[3pt]&\qquad+\int_s^{t} h'(X_{r^-}) dK_r+\frac{1}{2}\int_s^{t} \sigma^2(X_{r^-})h''(X_{r^-}) dr\\[3pt]&\qquad+\int_s^{t}\int_E\big(h\big(X_{r^-}+F(X_{r^-},z)\big)-h(X_{r^-})-F(X_{r^-},z)h'(X_{r^-})\big)\lambda(dz)dr\\[3pt]&\qquad+\int_s^{t}\int_E\big(h\big(X_{r^-}+F(X_{r^-},z)\big)-h(X_{r^-})-F(X_{r^-},z)h'(X_{r^-})\big)\tilde{N}(dr,dz)\\[3pt]&= \int_s^{t}\mathcal{L}h(X_{r^-}) dr+\int_s^{t} h'(X_{r^-}) dK_r+\int_s^{t} \sigma(X_{r^-})h'(X_{r^-}) dB_r\\[3pt]&\qquad+\int_s^{t}\int_E\big(h\big(X_{r^-}+F(X_{r^-},z)\big)-h(X_{r^-})\big)\tilde{N}(dr,dz),\end{aligned}\end{equation*}

where ${\mathcal{L}}$ is given by (2.13). Thus, we obtain

(2.15) \begin{equation}\mathbb{E}\bigg(\int_s^{t} h'(X_{r^-}) dK_r\bigg)=\mathbb{E} h(X_{t})-\mathbb{E} h(X_s)-\int_s^{t}\mathbb{E}\mathcal{L}h(X_{r^-}) dr.\end{equation}

As a conclusion, using (2.15), Lemma 3, and the proof of Proposition 2.7 in [BCdRGL16], we deduce that the measure dK has the following density:

\begin{equation*}k_t=\frac{(\mathbb{E}[\mathcal{L}h(X_{t^-})])^-}{\mathbb{E}[h'(X_{t^-})]}{\bf{1}}_{\mathbb{E}[h(X_t)]=0}.\end{equation*}
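As a quick sanity check of this formula, consider again the hypothetical linear constraint ${h(x)=x-c}$ : then ${h'\equiv 1}$ , ${h''\equiv 0}$ , and the integrand in (2.13) vanishes, so ${\mathcal{L}h=b}$ and the density reduces to

\begin{equation*}k_t=\big(\mathbb{E}[b(X_{t^-})]\big)^-\,{\bf{1}}_{\mathbb{E}[X_t]=c},\end{equation*}

i.e. cash is injected at exactly the rate needed to offset a negative average drift while the constraint ${\mathbb{E}[X_t]\geq c}$ is saturated.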

Proof of Lemma 3. Under Assumption 2, and by using Lemma 4, we obtain the continuity of the function ${t\longmapsto\mathbb{E} h(X_t)}$ .

Under Assumptions 1, 2, and 4, we observe that ${x\longmapsto\mathcal{L}h(x)}$ is a continuous function such that, for some constants ${C_1, C_2, C_3 > 0}$ and all ${x\in\mathbb{R}}$ ,

\begin{equation*}|b(x)h'(x)|\leq C_1(1+|x|),\end{equation*}
\begin{equation*}|\sigma^2(x)h''(x)|\leq C_2(1+|x|^2),\end{equation*}

and

\begin{equation*}\begin{aligned}&\bigg|\int_E\big(h\big(x+F(x,z)\big)-h(x)-F(x,z)h'(x)\big)\lambda(dz)\bigg|\\[3pt]&\quad\leq C_3\int_E|F(x,z)|\lambda(dz)\\[3pt]&\quad \leq C_3\bigg(\int_E|F(x,z)-F(0,z)|\lambda(dz)+\int_E|F(0,z)|\lambda(dz)\bigg)\\[3pt]&\quad\leq C_3C_1|x|+C'_3\\[3pt]&\quad\leq C''_3(1+|x|),\end{aligned}\end{equation*}

where the penultimate inequality uses Assumption 1 with ${p=1}$ .

Finally, by using Lemma 4, we conclude that ${t\longmapsto\mathbb{E} \mathcal{L}h(X_t)}$ is continuous.

3. Approximation of mean reflected SDEs by an interacting reflected particle system

Using the notation introduced at the beginning of Section 2, in particular Equation (2.9), the unique solution of the SDE (1.2) can be written as

(3.1) \begin{equation}X_t =X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz)+\sup\limits_{s\le t} {G}_0(\mu_{s}),\end{equation}

where ${\mu_t}$ stands for the law of

\begin{equation*}U_t =X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz).\end{equation*}

Let us consider the particle approximation of the above system. In order to do this, let us introduce the particles: for ${1\leq i\leq N}$ ,

(3.2) \begin{equation}X^i_t = \bar{X}^i_0+\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz)+\sup\limits_{s\le t} {G}_0(\mu_{s}^N),\end{equation}

where ${(B^i)_{1\leq i\leq N}}$ are independent Brownian motions, ${(\tilde{N}^i)_{1\leq i\leq N}}$ are independent compensated Poisson measures, ${(\bar{X}^i_0)_{1\leq i\leq N}}$ are independent copies of ${X_0}$ , and ${\mu_s^N}$ represents the empirical distribution at time s of the particles

\begin{equation*}U^i_t =\bar{X}^i_0+\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz),\quad 1\leq i\leq N,\end{equation*}

namely ${\displaystyle\mu^N_s=\frac{1}{N}\sum^N_{i=1}\delta_{U^i_s}}$ . Note that

\begin{equation*}G_0(\mu_s^N)=\inf\Big\{x\geq0\,{:}\,\frac{1}{N}\sum_{i=1}^{N}h(x+U^i_s) \geq 0\Big\},\end{equation*}
\begin{equation*}K^N_t=\sup_{s\leq t}G_0(\mu_s^N).\end{equation*}
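In practice, since h is increasing, the map ${x\mapsto\frac{1}{N}\sum_{i=1}^{N}h(x+U^i_s)}$ is non-decreasing and ${G_0(\mu_s^N)}$ can be computed by a simple bisection. A minimal sketch (the function names are ours, and the bracket [0, hi] is assumed to contain the root):

```python
import numpy as np

def G0_empirical(h, U, hi=1e6, tol=1e-10):
    """G0 of the empirical measure of U = (U^1_s, ..., U^N_s):
    inf{x >= 0 : (1/N) sum_i h(x + U^i_s) >= 0}.

    x -> mean(h(x + U)) is non-decreasing because h is increasing;
    we assume mean(h(hi + U)) >= 0 for the chosen bracket."""
    if np.mean(h(U)) >= 0:
        return 0.0
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.mean(h(mid + U)) >= 0:
            hi = mid
        else:
            lo = mid
    return hi

# K^N_t is then the running maximum of G0_empirical over the observed times s <= t.
```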

We can now prove a propagation of chaos result. In order to do this, let us introduce the following independent copies of X:

\begin{align*}\bar X^i_t &=\bar X^i_0+\int_0^t b(\bar X^i_{s^-}) ds + \int_0^t \sigma(\bar X^i_{s^-}) dB^i_s + \int_0^t\int_E F(\bar X^i_{s^-},z) \tilde{N}^i(ds,dz)+\sup\limits_{s\le t} {G}_0(\mu_{s}),\\&\quad 1\leq i\leq N,\end{align*}

where the Brownian motions and the Poisson processes are the ones used in (3.2).

In addition, we introduce the decoupled particles ${\bar U^i}$ , ${1\leq i\leq N}$ :

\begin{equation*}\bar U^i_t =\bar X^i_0+\int_0^t b(\bar X^i_{s^-}) ds + \int_0^t \sigma(\bar X^i_{s^-}) dB^i_s + \int_0^t\int_E F(\bar X^i_{s^-},z) \tilde{N}^i(ds,dz).\end{equation*}

It is worth noting that the particles ${(\bar U^i_t)_{1\leq i\leq N}}$ are independent and identically distributed (i.i.d.). Furthermore, we introduce ${\bar\mu^N}$ as the empirical measure associated to this system of particles.

Remark 4.

  1. (i) Under our assumptions, we have ${\mathbb{E}\left[h\left(\bar{X}^i_0\right)\right]=\mathbb{E}\left[h(X_0)\right]\geq 0}$ . However, there is no reason to have

    \begin{equation*} \frac{1}{N}\, \sum_{i=1}^N h\left(\bar{X}^i_0\right) \geq 0,\end{equation*}
    even if N is large. As a consequence,
    \begin{equation*} {G}_0(\mu_{0}^N)=\inf\bigg\{x\geq 0\,{:}\,\frac{1}{N}\sum_{i=1}^{N}h\left(x+\bar{X}^i_0\right) \geq 0\bigg\}\end{equation*}
    is not necessarily equal to 0. As a byproduct, we have ${X^i_0 = \bar{X}^i_0 + {G}_0(\mu_{0}^N)}$ , and the non-decreasing process ${\sup_{s\le t} {G}_0(\mu_{s}^N)}$ is not equal to 0 at time ${t=0}$ . Written in this way, the particles defined by (3.2) cannot be interpreted as the solution of a reflected SDE. To view the particles as the solution of a reflected SDE, instead of (3.2) one has to solve
    \begin{align*} X^i_t &= \bar{X}^i_0 + {G}_0(\mu_{0}^N) +\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s \\[3pt] &\quad + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz)+ K^N_t, \\[3pt] \frac{1}{N}\sum_{i=1}^{N} &h\left(X^i_t\right) \geq 0, \qquad \int_0^t \frac{1}{N}\sum_{i=1}^{N}h\left(X^i_s\right)\, dK^N_s=0,\end{align*}
    with ${K^N}$ non-decreasing and ${K^N_0=0}$ . Since we do not use this point in the sequel, we will work with the form (3.2).
  2. (ii) Following the proof of Theorem 1, it is easy to demonstrate existence and uniqueness of a solution for the particle-approximated system (3.2).

We have the following result concerning the approximation of (1.2) by an interacting particle system.

Theorem 2. Let ${T>0}$ and suppose that Assumptions 1 and 2 hold.

  1. (i) Under Assumption 3, there exists a constant C depending on b, ${\sigma}$ , and F such that, for each ${j\in\{1,\ldots,N\}}$ ,

    \begin{equation*}\mathbb{E}\bigg[\sup_{s\leq T}|X^j_s-\bar X^j_s|^2\bigg]\leq C\exp\Big(C\Big(1+\frac{M^2}{m^2}\Big)(1+T^2)\Big)\frac{M^2}{m^2}N^{-1/2}.\end{equation*}
  2. (ii) Under Assumption 4, there exists a constant C depending on b, ${\sigma}$ , and F such that, for each ${j\in\{1,\ldots,N\}}$ ,

    \begin{equation*}\mathbb{E}\Big[\sup_{s\leq T}|X^j_s-\bar X^j_s|^2\Big]\leq C\exp\Big(C\Big(1+\frac{M^2}{m^2}\Big)(1+T^2)\Big)\frac{1+T^2}{m^2}\Big(1+\mathbb{E}\Big[\sup_{s\leq T}|X_s|^2\Big]\Big)N^{-1}.\end{equation*}

Proof. Let ${t>0}$ . We have, for ${r\leq t}$ ,

\begin{equation*}\begin{aligned}\big|X^j_r-\bar{X}^j_r\big|& \leq \bigg|\int_0^r \bigg(b(X^j_{s^-})- b(\bar{X}^j_{{s}^-})\bigg)ds\bigg| + \bigg|\int_0^r \bigg(\sigma(X^j_{s^-})-\sigma(\bar{X}^j_{{s}^-})\bigg) dB^j_s\bigg| \\[3pt] &\quad + \bigg|\int_0^r\int_E\bigg(F(X^j_{s^-},z) -F(\bar{X}^j_{{s}^-},z)\bigg) \tilde{N}^j(ds,dz)\bigg| + \big|\sup_{s\le r}{G}_0(\mu_s^N)-\sup_{s\le r}{G}_0({\mu}_{s})\big|.\end{aligned}\end{equation*}

From the inequality

\begin{equation*}\begin{aligned}\big|\sup_{s\le r}{G}_0(\mu_s^N)-\sup_{s\le r}{G}_0({\mu}_{s})\big|&\leq \sup_{s\le r}\big|{G}_0(\mu_s^N)-{G}_0({\mu}_{s})\big|\leq \sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\mu}_{s})\big|\\[3pt] &\leq \sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big|+\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|,\end{aligned}\end{equation*}

we obtain

(3.3) \begin{equation}\begin{aligned}\sup_{r\le t}\big|X^j_r-\bar{X}^j_r\big|& \leq I_1+\sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big|+\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|,\end{aligned}\end{equation}

where ${I_1}$ is defined by

\begin{equation*}\begin{aligned}I_1&=\int_0^t \big|b(X^j_{s^-})- b(\bar{X}^j_{{s}^-})\big|ds + \sup_{r\le t}\Big|\int_0^r \Big(\sigma(X^j_{s^-})-\sigma(\bar{X}^j_{{s}^-})\Big) dB^j_s\Big|\\[3pt] &\quad + \sup_{r\le t}\Big|\int_0^r\int_E\Big(F(X^j_{s^-},z) -F(\bar{X}^j_{{s}^-},z)\Big) \tilde{N}^j(ds,dz)\Big|.\end{aligned}\end{equation*}

Firstly, due to Assumption 1 and the Doob and Cauchy–Schwarz inequalities, we have

\begin{equation*}\begin{aligned}\mathbb{E}\big[\big|I_1\big|^2\big]&\leq C\Bigg\{\mathbb{E} \bigg[t\int_0^t\bigg|b(X^j_{s^-})- b(\bar{X}^j_{{s}^-}) \bigg|^2ds\bigg] + \mathbb{E}\bigg[\int_0^t \bigg|\sigma(X^j_{s^-})-\sigma(\bar{X}^j_{{s}^-})\bigg|^2 ds\bigg]\\[3pt]&\quad +\mathbb{E} \bigg[\int_0^t\int_E\bigg|F(X^j_{s^-},z) -F(\bar{X}^j_{{s}^-},z)\bigg|^2 \lambda(dz)ds\bigg]\Bigg\} \\[3pt]&\leq C\Bigg\{tC_1\int_0^t \mathbb{E}\Big[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\Big] ds+ C_1\int_0^t \mathbb{E}\Big[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\Big] ds\\[3pt]&\quad +C_1\int_0^t \mathbb{E}\Big[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\Big] ds\Bigg\}\\[3pt]&\leq C(1+t)\int_0^t \mathbb{E}\Big[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\Big] ds,\end{aligned}\end{equation*}

where C is a constant that depends only on b, ${\sigma}$ , and F. Note that C may change from line to line.

Secondly, in view of Lemma 2,

\begin{equation*}\sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big|\leq \frac{M}{m}\sup_{s\le t}\frac{1}{N}\sum_{i=1}^{N}\big|U^i_s-\bar{U}^i_{{s}}\big|\leq \frac{M}{m}\frac{1}{N}\sum_{i=1}^{N}\sup_{s\le t}\big|U^i_s-\bar{U}^i_{{s}}\big|.\end{equation*}

Moreover, taking into account that the variables are exchangeable, the Cauchy–Schwarz inequality implies

\begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big|^2\bigg]\leq \frac{M^2}{m^2}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\bigg[\sup_{s\le t}\big|U^i_s-\bar{U}^i_{{s}}\big|^2\bigg]= \frac{M^2}{m^2}\mathbb{E}\bigg[\sup_{s\le t}\big|U^j_s-\bar{U}^j_{{s}}\big|^2\bigg].\end{equation*}

Since

\begin{align*}U^j_s-\bar{U}^j_{{s}}&=\int_0^s (b(X^j_{r^-})-b(\bar X^j_{r^-})) dr + \int_0^s (\sigma(X^j_{r^-})-\sigma(\bar X^j_{r^-})) dB^j_r\\[3pt] &\quad+ \int_0^s\int_E (F(X^j_{r^-},z)-F(\bar X^j_{r^-},z)) \tilde{N}^j(dr,dz),\end{align*}

and following the previous computations, we get

\begin{equation*}\mathbb{E}\Big[\sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big|^2\Big]\leq C \frac{M^2}{m^2}(1+t)\int_0^t \mathbb{E}\Big[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\Big] ds.\end{equation*}

Consequently, combining the previous estimates with Equation (3.3) gives

\begin{equation*}\begin{aligned}\mathbb{E}\Big[\sup_{r\le t}\big|X^j_r-\bar{X}^j_r\big|^2\Big]&\leq K\int_0^t \mathbb{E}\Big[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\Big] ds+4\mathbb{E}\Big[\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|^2\Big]\\&\leq K\int_0^t \mathbb{E}\Big[\sup_{r\le s}\big|X^j_r-\bar{X}^j_{{r}}\big|^2\Big] ds+4\mathbb{E}\Big[\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|^2\Big],\end{aligned}\end{equation*}

where ${K=C(1+t)(1+M^2/m^2)}$ . According to Gronwall’s lemma, we get

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\sup_{r\le t}\big|X^j_r-\bar{X}^j_r\big|^2\bigg]&\leq Ce^{Kt}\mathbb{E}\bigg[\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|^2\bigg].\end{aligned}\end{equation*}

In view of Lemma 2, we have

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|^2\bigg]\leq {\frac{1}{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)\right|^2\bigg],\end{aligned}\end{equation*}

which leads to

(3.4) \begin{equation}\begin{aligned}\mathbb{E}\bigg[\sup_{r\le t}\big|X^j_r-\bar{X}^j_r\big|^2\bigg]&\leq Ce^{Kt}{\frac{1}{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)\right|^2\bigg].\end{aligned}\end{equation}

Proof of (i). Since h is at least a Lipschitz function, the rate of convergence is given by the convergence of the empirical measure of i.i.d. diffusion processes. As we consider uniform convergence in time, obtaining the usual rate of convergence is not straightforward. If we only suppose that Assumption 2 holds, we obtain

\begin{equation*}\begin{aligned}{\frac{1}{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)\right|^2\bigg]\leq \frac{M^2}{m^2}\mathbb{E}\bigg[\sup_{s\le t}W_1^2(\bar{\mu}_s^N,\mu_s)\bigg].\end{aligned}\end{equation*}

According to the additional Assumption 3, and in view of the proof of Part (i) of [BCdRGL16, Theorem 3.2], we have

\begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}W_1^2(\bar{\mu}_s^N,\mu_s)\bigg] \leq CN^{-1/2}.\end{equation*}

Proof of (ii). Under Assumption 4, we can get rid of the supremum in time by directly estimating the quantity

(3.5) \begin{equation}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)\right|^2\bigg].\end{equation}

In view of the proof of Proposition 3, the function ${s\longmapsto\bar{G}_0(\mu_s)}$ is Lipschitz continuous; let ${\psi}$ denote its almost-everywhere derivative, so that ${d\bar{G}_0(\mu_r)=\psi_r dr}$ . Since ${(\bar{U}^i)_{1\leq i\leq N}}$ are independent copies of U, we have

\begin{equation*}\begin{aligned}R_N(s):= \int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)&=\frac{1}{N}\sum_{i=1}^N h\big(\bar{G}_0(\mu_s)+\bar{U}_s^i\big)-\mathbb{E}\big[h\big(\bar{G}_0(\mu_s)+{U}_s\big)\big]\\&=\frac{1}{N}\sum_{i=1}^N \big\{h\big(\bar{G}_0(\mu_s)+\bar{U}_s^i\big)-\mathbb{E}\big[h\big(\bar{G}_0(\mu_s)+\bar{U}_s^i\big)\big]\big\}\\&=\frac{1}{N}\sum_{i=1}^N \big\{h\big(V^i_s\big)-\mathbb{E}\big[h\big(V_s^i\big)\big]\big\},\end{aligned}\end{equation*}

where ${V^i}$ is the semi-martingale ${s\longmapsto \bar{G}_0(\mu_s)+\bar{U}_s^i}$ .

It follows from Itô’s formula that

\begin{equation*}\begin{aligned}h\big(V^i_s\big)&=h\big(V^i_0\big)+\int_{0}^{s}h'\big(V^i_{r^-}\big)d\bar{G}_0\big(\mu_r\big)+\int_{0}^{s}b\big(\bar{X}^i_{r^-}\big)h'\big(V^i_{r^-}\big)dr+\int_{0}^{s}\sigma\big(\bar{X}^i_{r^-}\big)h'\big(V^i_{r^-}\big)dB^i_r\\[2pt]&\quad+\int_{0}^{s}\int_EF\big(\bar{X}^i_{r^-},z\big)h'\big(V^i_{r^-}\big)\tilde{N}^i(dr,dz)+\frac{1}{2}\int_{0}^{s}\sigma^2\big(\bar{X}^i_{r^-}\big)h''\big(V^i_{r^-}\big)dr\\[2pt]&\quad+\int_0^{s}\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h(V^i_{r^-})-F(\bar{X}^i_{r^-},z)h'(V^i_{r^-})\Big)\lambda(dz)dr\\[2pt]&\quad+\int_0^s\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h(V^i_{r^-})-F(\bar{X}^i_{r^-},z)h'(V^i_{r^-})\Big)\tilde{N}^i(dr,dz)\\[2pt]&=h\big(V^i_0\big)+\int_{0}^{s}h'\big(V^i_{r^-}\big)\psi_r dr+\int_{0}^{s}b\big(\bar{X}^i_{r^-}\big)h'\big(V^i_{r^-}\big)dr+\frac{1}{2}\int_{0}^{s}\sigma^2\big(\bar{X}^i_{r^-}\big)h''\big(V^i_{r^-}\big)dr\\[2pt]&\quad+\int_0^{s}\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h(V^i_{r^-})-F(\bar{X}^i_{r^-},z)h'(V^i_{r^-})\Big)\lambda(dz)dr\\[2pt]&\quad+\int_{0}^{s}\sigma\big(\bar{X}^i_{r^-}\big)h'\big(V^i_{r^-}\big)dB^i_r+\int_0^s\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h(V^i_{r^-})\Big)\tilde{N}^i(dr,dz)\\[2pt]&=h\big(V^i_0\big)+\int_{0}^{s}h'\big(V^i_{r^-}\big)\psi_r dr+\int_{0}^{s}\bar{\mathcal{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)dr+\int_{0}^{s}h'\big(V^i_{r^-}\big)\sigma\big(\bar{X}^i_{r^-}\big)dB^i_r\\[2pt]&\quad+\int_{0}^{s}\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)\tilde{N}^i(dr,dz)\\[2pt]&=h\big(V^i_0\big)+\int_{0}^{s}\big\{h'\big(V^i_{r^-}\big)\psi_r +\bar{\mathcal{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big\}dr+\int_{0}^{s}h'\big(V^i_{r^-}\big)\sigma\big(\bar{X}^i_{r^-}\big)dB^i_r\\[2pt]&\quad+\int_{0}^{s}\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)\tilde{N}^i(dr,dz).\end{aligned}\end{equation*}

Taking the expectation gives

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[h\big(V^i_s\big)\bigg]&=\mathbb{E}\bigg[h\big(V^i_0\big)\bigg]+\int_{0}^{s}\mathbb{E}\big[h'\big(V^i_{r^-}\big)\psi_r +\bar{\mathcal{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big]dr\\&=H(\bar{G}_0(\mu_{0}),\mu_0)+\int_{0}^{s}\mathbb{E}\big[h'\big(V^i_{r^-}\big)\psi_r +\bar{\mathcal{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big]dr\\&=0+\int_{0}^{s}\mathbb{E}\big[h'\big(V^i_{r^-}\big)\psi_r +\bar{\mathcal{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big]dr.\end{aligned}\end{equation*}

We immediately deduce that

\begin{equation*}\begin{aligned}R_N(s)&=\frac{1}{N}\sum_{i=1}^N h(V^i_0)+\frac{1}{N}\sum_{i=1}^N \int_{0}^{s}C^i(r) dr +M_N(s)+L_N(s)\\[3pt]&=\frac{1}{N}\sum_{i=1}^N h(V^i_0)+ \int_{0}^{s}\bigg(\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg) dr +M_N(s)+L_N(s),\end{aligned}\end{equation*}

where

\begin{equation*}C^i(r)=h'\big(V^i_{r^-}\big)\psi_r +\bar{\mathcal{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)-\mathbb{E}\big[h'\big(V^i_{r^-}\big)\psi_r +\bar{\mathcal{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big],\end{equation*}
\begin{equation*}M_N(s)=\frac{1}{N}\sum_{i=1}^N\int_{0}^{s}h'\big(V^i_{r^-}\big)\sigma\big(\bar{X}^i_{r^-}\big)dB^i_r,\end{equation*}
\begin{equation*}L_N(s)=\frac{1}{N}\sum_{i=1}^N\int_{0}^{s}\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)\tilde{N}^i(dr,dz).\end{equation*}

Then,

\begin{equation*}\begin{aligned}\sup_{s\le t}|R_N(s)|&\leq\bigg|\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg|+ \sup_{s\le t}\int_{0}^{s}\bigg|\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg| dr +\sup_{s\le t}|M_N(s)|+\sup_{s\le t}|L_N(s)|\\[3pt]&\leq \bigg|\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg|+\int_{0}^{t}\bigg|\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg| dr +\sup_{s\le t}|M_N(s)|+\sup_{s\le t}|L_N(s)|.\end{aligned}\end{equation*}

Since ${(\bar{U}^i)_{1\leq i\leq N}}$ and ${(\bar{X}^i)_{1\leq i\leq N}}$ are i.i.d., and by using the Cauchy–Schwarz inequality, we obtain

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\sup_{s\le t}|R_N(s)|^2\bigg]&\leq 4\Bigg\{\mathbb{V}\bigg[\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg]+\mathbb{E}\bigg[\bigg(\int_{0}^{t}\bigg|\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg| dr\bigg)^2\bigg] \\[3pt]&\quad+\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg]\Bigg\}\\[3pt]&\leq 4\Bigg\{\mathbb{V}\bigg[\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg]+t\mathbb{E}\bigg[\int_{0}^{t}\bigg|\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg|^2 dr\bigg] \\[3pt]&\quad+\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg]\Bigg\}\\[3pt]&= 4\Bigg\{\mathbb{V}\bigg[\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg]+t\int_{0}^{t}\mathbb{V}\bigg(\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg) dr \\[3pt]&\quad+\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg]\Bigg\}.\end{aligned}\end{equation*}

Hence, we get

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\sup_{s\le t}|R_N(s)|^2\bigg]&\leq \frac{4}{N} \mathbb{V}[h(V_0)]+\frac{4t}{N}\int_{0}^{t}\mathbb{V}(C(r)) dr +4\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+4\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg]\\&=\frac{4}{N} \mathbb{V}[h(V_0)]+\frac{4t}{N}\int_{0}^{t}\mathbb{V}\big(h'\big(V_{r^-}\big)\psi_r +\bar{\mathcal{L}}_{\bar{X}_{r^-}}h\big(V_{r^-}\big)\big) dr \\[3pt]&\quad+4\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+4\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg].\end{aligned}\end{equation*}

Since ${M_N}$ is a martingale with

\begin{equation*}\langle M_N\rangle_t=\frac{1}{N^2}\sum_{i=1}^N\int_{0}^t\big(h'(V^i_{r^-})\sigma(\bar{X}^i_{r^-})\big)^2dr,\end{equation*}

Doob’s inequality leads to

\begin{equation*}\begin{aligned}\mathbb{E}\Big[\sup_{s\le t}|M_N(s)|^2\Big]&\leq 4\mathbb{E}\big[|M_N(t)|^2\big]\\[3pt]&=\frac{4}{N^2}\sum_{i=1}^N\int_{0}^t\mathbb{E}\Big[\big(h'(V^i_{r^-})\sigma(\bar{X}^i_{r^-})\big)^2\Big]dr\\[3pt]&=\frac{4}{N}\int_{0}^t\mathbb{E}\Big[\big(h'(V_{r^-})\sigma(\bar{X}_{r^-})\big)^2\Big]dr.\end{aligned}\end{equation*}

Then, using Doob’s inequality for the martingale ${L_N}$ , we obtain

\begin{equation*}\begin{aligned}\mathbb{E}\Big[\sup_{s\le t}|L_N(s)|^2\Big]&\leq 4\mathbb{E}\big[|L_N(t)|^2\big]\\[2pt]&=\frac{4}{N^2}\sum_{i=1}^N\mathbb{E}\bigg[\bigg(\int_{0}^t\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)\tilde{N}^i(dr,dz)\bigg)^2\bigg]\\[2pt]&\quad +\frac{8}{N^2}\sum_{1\leq i<j\leq N}\mathbb{E}\bigg[\int_{0}^t\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)\tilde{N}^i(dr,dz)\\[2pt]&\qquad \int_{0}^t\int_E\Big(h\big(V^j_{r^-}+F(\bar{X}^j_{r^-},z)\big)-h\big(V^j_{r^-}\big)\Big)\tilde{N}^j(dr,dz)\bigg]\\[2pt]&=\frac{4}{N^2}\sum_{i=1}^N\int_{0}^t\int_E\mathbb{E}\bigg[\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)^2\bigg]\lambda(dz)dr\\[2pt]&=\frac{4}{N}\int_{0}^t\int_E\mathbb{E}\bigg[\Big(h\big(V_{r^-}+F(\bar{X}_{r^-},z)\big)-h\big(V_{r^-}\big)\Big)^2\bigg]\lambda(dz)dr,\end{aligned}\end{equation*}

where the cross terms vanish because the integrals against the independent compensated measures ${\tilde{N}^i}$ and ${\tilde{N}^j}$ are independent centered random variables.

Finally, using the fact that h has bounded derivatives and that b, ${\sigma}$ , and F are Lipschitz, we get

\begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}|R_N(s)|^2\bigg]\leq C(1+t^2)\bigg(1+\mathbb{E}\bigg[\sup_{s\le t}|X_s|^2\bigg]\bigg)N^{-1}.\end{equation*}

Plugging this bound back into (3.4) gives the result.

4. Numerical approximation for the MR-SDE, and its performance

In this section, the numerical approximation of the SDE (1.2) on [0,T] is studied. Let ${0=T_0<T_1<\cdots<T_n=T}$ be a subdivision of [0,T], and define the mapping ‘ ${\_}$ ’ by ${s\mapsto\underline{s}=T_k}$ if ${s\in[T_k,T_{k+1})}$ , ${k\in \{0,\cdots ,n-1\}}$ . Let us consider the case of regular subdivisions: for a given integer n, ${T_k=kT/n}$ , ${k=0,\ldots,n}$ .

In the previous section, we have shown that the particle system given, for ${1\leq i\leq N}$ , by

\begin{equation*}X^i_t=\bar{X}^i_0+\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz) + \sup_{s\le t} {G}_0(\mu^N_{s}),\end{equation*}

where

\begin{equation*}\mu^N_t =\frac{1}{N}\sum^N_{i=1}\delta_{U^i_t}\end{equation*}

with

\begin{equation*}U^i_t =\bar{X}^i_0+\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz),\quad 1 \leq i\leq N,\end{equation*}

${B^i}$ being independent Brownian motions, ${N^i}$ being independent Poisson processes, and ${\bar{X}^i_0}$ being independent copies of ${X_0}$ , converges to the solution of (1.2). Hence, to determine the numerical approximation, we apply an Euler scheme to this particle system. The discrete version of the particle system is as follows: for ${1\leq i\leq N}$ ,

\begin{equation*}\tilde{X}^i_t=\bar{X}^i_0+\int_0^t b(\tilde{X}^i_{\underline{s}^-}) ds + \int_0^t \sigma(\tilde{X}^i_{\underline{s}^-}) dB^i_s + \int_0^t\int_E F(\tilde{X}^i_{\underline{s}^-},z) \tilde{N}^i(ds,dz) + \sup_{s\le t} {G}_0(\tilde{\mu}^N_{\underline{s}}),\end{equation*}

where

\begin{align*} \tilde{\mu}^N_{\underline{t}} & =\frac{1}{N}\sum^N_{i=1}\delta_{\tilde{U}^i_t}, \\[3pt] \tilde{U}^i_t & =\bar{X}^i_0+\int_0^t b(\tilde{X}^i_{\underline{s}^-}) ds + \int_0^t \sigma(\tilde{X}^i_{\underline{s}^-}) dB^i_s + \int_0^t\int_E F(\tilde{X}^i_{\underline{s}^-},z) \tilde{N}^i(ds,dz),\quad 1 \leq i\leq N.\end{align*}

4.1. Scheme

In view of the above notation, and combining the convergence result for the interacting system of mean reflected particles of Section 3 with Remark 1, we deduce the following algorithm (Algorithm 1) for the numerical approximation of the MR-SDE.

Remark 5. It should be pointed out that, at each step k of the algorithm, the increment of the reflection process K is approximated by the increment of the following approximation:

(4.1) \begin{equation}\Delta_k \hat{K}^N := \sup_{l\leq k}G_0(\tilde{\mu}^N_{T_l})-\sup_{l\leq k-1}G_0(\tilde{\mu}^N_{T_l}).\end{equation}

First, we consider the special case when the SDE is defined by

\begin{equation*} \begin{cases} \begin{split} & X_t =X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t F(X_{s^-}) d\tilde{N}_s + K_t,\quad t\geq 0, \\[3pt] & \mathbb{E}[h(X_t)] \geq 0, \quad \int_0^t \mathbb{E}[h(X_s)] \, dK_s = 0, \quad t\geq 0, \end{split} \end{cases}\end{equation*}

where N is a Poisson process with intensity ${\lambda}$ , and ${\tilde{N}_t=N_t-\lambda t.}$

By Remark 1, the increment (4.1) can be estimated by

\begin{equation*}\begin{aligned}\widehat{\Delta_k K}^N := \inf\bigg\{x\geq 0:\ &\frac{1}{N}\sum_{i=1}^N h\bigg(x+\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^i+\frac{T}{n} \bigg(b\Big(\big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\big)^i\Big)-\lambda F\Big(\big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\big)^i\Big) \bigg)\\ &+\sqrt{\frac{T}{n}}\,\sigma\Big(\big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\big)^i\Big)G^i+F\Big(\big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\big)^i\Big)H^i\bigg)\geq0\bigg\},\end{aligned}\end{equation*}

where ${G^i\sim \mathcal{N}(0,1)}$ and ${H^i\sim \mathcal{P}(\lambda(T/n))}$, and the ${(G^{i})_{i=1,\ldots,N}}$ and ${(H^{i})_{i=1,\ldots,N}}$ are i.i.d.

In addition, procedures similar to those in the proof of Theorem 1 can be used to verify that the increments of the approximated reflection process are equal to the approximation of the increments:

\begin{equation*}\forall\ k\in\{1,\ldots,n\}, \quad \widehat{\Delta_k K}^N=\Delta_k\hat{K}^N.\end{equation*}
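For concreteness, here is a minimal Python sketch of how the infimum defining ${\widehat{\Delta_k K}^N}$ can be computed in practice. It assumes, as in our standing assumptions, that h is nondecreasing, so that ${x\mapsto\frac{1}{N}\sum_i h(x+\cdot)}$ is nondecreasing and the infimum can be located by bisection; the function name and its interface are ours, introduced for illustration only.

```python
import numpy as np

def reflection_increment(h, candidates, tol=1e-10):
    """inf{x >= 0 : (1/N) * sum_i h(x + candidates[i]) >= 0}, computed by
    bisection; `candidates` holds the N unreflected Euler predictions and
    h is assumed vectorized and nondecreasing."""
    mean_h = lambda x: np.mean(h(x + candidates))
    if mean_h(0.0) >= 0.0:
        return 0.0                      # constraint already satisfied
    lo, hi = 0.0, 1.0
    while mean_h(hi) < 0.0:             # geometric search for a bracket
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if mean_h(mid) >= 0.0 else (mid, hi)
    return hi
```

Note that for the linear constraint ${h(x)=x-p}$ the infimum is explicit: the bisection returns (up to the tolerance) ${\big(p-\frac{1}{N}\sum_i \text{candidates}_i\big)^+}$.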

Returning to the general case (1.2), recall from [YS12] that ${N(t):= N(E\times[0,t])}$ defines a counting process with intensity ${\lambda}$ which counts the number of jumps up to time t. If ${\lambda<\infty}$, the Poisson random measure N(dz, dt) generates, on a given finite horizon T, a sequence of pairs ${\{(\iota_i,\xi_i),\ i\in \{1,2,\ldots,N(T)\}\}}$. Here ${\{\iota_i,\ i\in \{1,2,\ldots,N(T)\}\}}$ is an increasing sequence of nonnegative random variables representing the jump times of a standard Poisson process with intensity ${\lambda}$, and ${\{\xi_i,\ i\in \{1,2,\ldots,N(T)\}\}}$ is a sequence of i.i.d. random variables distributed according to the density f(z), with ${\lambda(dz)dt=\lambda f(z)dzdt}$. The numerical approximation can equivalently be written in the following form:

\begin{align*} \bar{X}^j_{T_k} &= \bar{X}^j_{T_{k-1}} + \frac{T}{n}\bigg(b\big(\bar{X}^j_{T_{k-1}}\big) - \int_E \lambda F\big(\bar{X}^j_{T_{k-1}},z\big)f(z)dz\bigg) + \sqrt{\frac{T}{n}}\,\sigma\big(\bar{X}^j_{T_{k-1}}\big)G^j \\ &\quad + \sum_{i=H^j_{T_{k-1}}+1}^{H^j_{T_k}} F\big(\bar{X}^j_{T_{k-1}},\xi_i\big) + \Delta_k\hat{K}^N, \\ \Delta_k\hat{K}^N &= \widehat{\Delta_k K}^N = \inf\bigg\{x\geq 0 : \frac{1}{N}\sum_{j=1}^N h\bigg(x+\bar{X}^j_{T_{k-1}} + \frac{T}{n}\bigg(b\big(\bar{X}^j_{T_{k-1}}\big) - \int_E \lambda F\big(\bar{X}^j_{T_{k-1}},z\big)f(z)dz\bigg) \\ &\qquad\qquad + \sqrt{\frac{T}{n}}\,\sigma\big(\bar{X}^j_{T_{k-1}}\big)G^j + \sum_{i=H^j_{T_{k-1}}+1}^{H^j_{T_k}} F\big(\bar{X}^j_{T_{k-1}},\xi_i\big)\bigg)\geq 0\bigg\},\end{align*}

where ${G^j\sim \mathcal{N}(0,1)}$ and ${H^j\sim \mathcal{P}(\lambda(T/n))}$, and the ${(G^{j})_{j=1,\ldots,N}}$ and ${(H^{j})_{j=1,\ldots,N}}$ are i.i.d.
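The scheme above can be summarized by the following Python sketch, which reuses the function reflection_increment from the previous block. It is a sketch under simplifying assumptions rather than a definitive implementation: the initial condition is taken deterministic, the coefficients b, ${\sigma}$ are assumed vectorized over the particles, and the compensator ${x\mapsto\int_E\lambda F(x,z)f(z)dz}$ and the jump-mark sampler are user-supplied callables that we introduce here for illustration.

```python
import numpy as np

def particle_euler_mrsde(b, sigma, F, comp, h, x0, lam, xi_sampler,
                         T, n, N, seed=0):
    """Particle Euler scheme for the MR-SDE: N particles, n steps on [0,T].
    comp(x) stands for int_E lam*F(x,z) f(z) dz; xi_sampler(m, rng) draws
    m i.i.d. jump marks with density f (F must accept an array of marks).
    Returns the terminal particles and the grid values of the
    approximated reflection process K."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.full(N, float(x0))                 # copies of X_0
    K = np.zeros(n + 1)
    for k in range(1, n + 1):
        G = rng.standard_normal(N)            # Gaussian increments
        H = rng.poisson(lam * dt, size=N)     # jump counts per step
        jump = np.array([F(X[j], xi_sampler(H[j], rng)).sum()
                         for j in range(N)])  # sum of F over the step's marks
        cand = X + dt * (b(X) - comp(X)) + np.sqrt(dt) * sigma(X) * G + jump
        dK = reflection_increment(h, cand)    # see the previous sketch
        X = cand + dK
        K[k] = K[k - 1] + dK
    return X, K
```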

4.2. Scheme error

Proposition 4.

  1. (i) Let ${T>0}$, let N and n be two positive integers, and suppose that Assumptions 1, 2, and 3 hold. There exists a constant C, depending on T, b, ${\sigma}$, F, h, and ${X_0}$ but independent of N, such that for all ${t\leq T}$ and all ${i=1,\ldots,N}$,

    \begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1/2}\bigg).\end{equation*}
  2. (ii) Moreover, if Assumption 4 is in force, there exists a constant C, depending on T, b, ${\sigma}$, F, h, and ${X_0}$ but independent of N, such that for all ${t\leq T}$ and all ${i=1,\ldots,N}$,

    \begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1}\bigg).\end{equation*}

Proof. Let us fix ${i\in \{1,\ldots,N\}}$ and ${T>0}$. We have, for ${s\leq t\leq T}$,

\begin{equation*}\begin{aligned}\bigg|X^i_s-\tilde{X}^i_s\bigg|& \leq \bigg|\int_0^s b(X^i_{r^-})- b(\tilde{X}^i_{\underline{r}^-})dr \bigg| + \bigg|\int_0^s \bigg(\sigma(X^i_{r^-})-\sigma(\tilde{X}^i_{\underline{r}^-})\bigg) dB^i_r\bigg| \\[5pt] &\quad + \bigg|\int_0^s\int_E\bigg(F(X^i_{r^-},z) -F(\tilde{X}^i_{\underline{r}^-},z)\bigg) \tilde{N}^i(dr,dz)\bigg| + \sup_{r\le s} \big|{G}_0(\mu_r^N)-{G}_0(\tilde{\mu}^N_{\underline{r}})\big|.\end{aligned}\end{equation*}

Hence, using Assumption 1 and the Cauchy–Schwarz, Doob, and BDG inequalities gives

(4.2) \begin{equation}\begin{aligned}\mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg] & \leq 4\mathbb{E}\bigg[\sup_{s\le t}\Bigg\{\bigg|\int_0^s\bigg(b(X^i_{r^-})- b(\tilde{X}^i_{\underline{r}^-})\bigg) dr\bigg|^2 + \bigg|\int_0^s \bigg(\sigma(X^i_{r^-})-\sigma(\tilde{X}^i_{\underline{r}^-})\bigg) dB^i_r\bigg|^2 \\[3pt] &\quad + \bigg|\int_0^s\int_E\bigg(F(X^i_{r^-},z) -F(\tilde{X}^i_{\underline{r}^-},z)\bigg) \tilde{N}^i(dr,dz)\bigg|^2\\[3pt] & \quad + \sup_{r\le s} \big|{G}_0(\mu_r^N)-{G}_0(\tilde{\mu}^N_{\underline{r}})\big|^2\Bigg\}\Bigg] \\[3pt] &\leq C\Bigg\{\mathbb{E} \bigg[t\int_0^t\bigg|b(X^i_{s^-})- b(\tilde{X}^i_{\underline{s}^-}) \bigg|^2ds\bigg] + \mathbb{E}\bigg[\int_0^t \bigg|\sigma(X^i_{s^-})-\sigma(\tilde{X}^i_{\underline{s}^-})\bigg|^2 ds\bigg]\\[3pt] &\quad +\mathbb{E} \bigg[\int_0^t\int_E\bigg|F(X^i_{s^-},z) -F(\tilde{X}^i_{\underline{s}^-},z)\bigg|^2 \lambda(dz)ds\bigg]\\[3pt] & \quad + \mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]\Bigg\} \\[3pt] &\leq C\Bigg\{TC_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds+ C_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds\\[3pt] & \quad +C_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds+ \mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]\Bigg\}\\[3pt] &\leq C\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds +4\mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg].\\ \end{aligned}\end{equation}

Denoting by ${(\mu^i_{t})_{0\le t\le T}}$ the family of marginal laws of ${(U^i_{t})_{0\le t\le T}}$ and by ${(\tilde{\mu}^i_{\underline{t}})_{0\le t\le T}}$ the family of marginal laws of ${(\tilde{U}^i_{t})_{0\le t\le T}}$ , we have

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]&\leq3\Bigg\{\mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\mu_s^i)\big|^2\bigg]+\sup_{s\le t} \big|{G}_0(\mu_s^i)-{G}_0(\tilde{\mu}^i_{\underline{s}})\big|^2\\[4pt] &\quad +\mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\tilde{\mu}^i_{\underline{s}})-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]\Bigg\},\end{aligned}\end{equation*}

and from Lemma 2,

\begin{align*} &\leq3\Bigg\{{\frac{1}{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu^i_s)+\cdot) (d\mu_s^N-d\mu_s^i)\right|^2\bigg]+\bigg(\frac{M}{m}\bigg)^2\sup_{s\le t} W_1^2(\mu_s^i,\tilde{\mu}^i_{\underline{s}})\\ &\quad +{\frac{1}{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\tilde{\mu}^i_{\underline{s}})+\cdot) (d\tilde{\mu}^N_{\underline{s}}-d\tilde{\mu}^i_{\underline{s}})\right|^2\bigg]\Bigg\}\\ &\leq C\Bigg\{\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu^i_s)+\cdot) (d\mu_s^N-d\mu_s^i)\right|^2\bigg]+\sup_{s\le t} \mathbb{E}\bigg[\bigg|U_s^i-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]\\[8pt] &\quad +\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\tilde{\mu}^i_{\underline{s}})+\cdot) (d\tilde{\mu}^N_{\underline{s}}-d\tilde{\mu}^i_{\underline{s}})\right|^2\bigg]\Bigg\}.\end{align*}

Proof of (i). Following the proof of (i) in Theorem 2, we obtain

\begin{align*} & \mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu^i_s)+\cdot) (d\mu_s^N-d\mu_s^i)\right|^2\bigg]\leq C\mathbb{E}\bigg[\sup_{s\le t} W_1^2(\mu_s^N,\mu_s^i)\bigg]\leq CN^{-1/2},\\[8pt] & \mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\tilde{\mu}^i_{\underline{s}})+\cdot) (d\tilde{\mu}^N_{\underline{s}}-d\tilde{\mu}^i_{\underline{s}})\right|^2\bigg]\leq C\mathbb{E}\bigg[\sup_{s\le t} W_1^2(\tilde{\mu}^i_{\underline{s}},\tilde{\mu}^N_{\underline{s}})\bigg]\leq CN^{-1/2},\end{align*}

from which we can derive the inequality

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]&\leq C_1\sup_{s\le t} \mathbb{E}\bigg[\bigg|U_s^i-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]+C_2N^{-1/2}\\[8pt] &\leq C_1\bigg\{\sup_{s\le t} \mathbb{E}\bigg[\bigg|U_s^i-\tilde{U}^i_{s}\bigg|^2\bigg]+\sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]\bigg\}+C_2N^{-1/2}.\end{aligned}\end{equation*}

For the first term of the right-hand side, we can observe that

\begin{equation*}\begin{aligned}\sup_{s\le t}\mathbb{E}\bigg[\bigg|U_s^i-\tilde{U}^i_{s}\bigg|^2\bigg]&\leq \mathbb{E}\bigg[\sup_{s\le t}\bigg|U_s^i-\tilde{U}^i_{s}\bigg|^2\bigg]\\[5pt] &\leq 3\mathbb{E}\bigg[\sup_{s\le t}\Bigg\{\bigg|\int_0^s\bigg(b(X^i_{r^-})- b(\tilde{X}^i_{\underline{r}^-})\bigg) dr\bigg|^2 + \bigg|\int_0^s \bigg(\sigma(X^i_{r^-})-\sigma(\tilde{X}^i_{\underline{r}^-})\bigg) dB^i_r\bigg|^2 \\[5pt] & \quad + \bigg|\int_0^s\int_E\bigg(F(X^i_{r^-},z) -F(\tilde{X}^i_{\underline{r}^-},z)\bigg) \tilde{N}^i(dr,dz)\bigg|^2\Bigg\}\bigg] \\[5pt] &\leq C\Bigg\{\mathbb{E} \bigg[t\int_0^t\bigg|b(X^i_{s^-})- b(\tilde{X}^i_{\underline{s}^-}) \bigg|^2ds\bigg] + \mathbb{E}\bigg[\int_0^t \bigg|\sigma(X^i_{s^-})-\sigma(\tilde{X}^i_{\underline{s}^-})\bigg|^2 ds\bigg]\\[5pt] & \quad +\mathbb{E} \bigg[\int_0^t\int_E\bigg|F(X^i_{s^-},z) -F(\tilde{X}^i_{\underline{s}^-},z)\bigg|^2 \lambda(dz)ds\bigg]\Bigg\} \\[5pt] &\leq C\Bigg\{TC_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds+ 2C_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds\Bigg\}\\[5pt] &\leq C\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds.\end{aligned}\end{equation*}

Using Assumption 1, the second term ${\sup_{s\le t} \mathbb{E}\big[\big|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\big|^2\big]}$ can be bounded as follows:

\begin{align*}\sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]&\leq 3\sup_{s\le t}\Bigg\{\mathbb{E}\bigg[\bigg|\int_{\underline{s}}^s b(\tilde{X}^i_{\underline{r}^-}) dr\bigg|^2+ \bigg|\int_{\underline{s}}^s \sigma(\tilde{X}^i_{\underline{r}^-}) dB^i_r\bigg|^2\\[3pt] & \quad + \bigg|\int_{\underline{s}}^s \int_E F(\tilde{X}^i_{\underline{r}^-},z) \tilde{N}^i(dr,dz)\bigg|^2\bigg]\Bigg\}\\[3pt] &\leq 3\sup_{s\le t}\Bigg\{\mathbb{E}\bigg[\bigg| b(\tilde{X}^i_{\underline{s}}) \bigg|^2 \big|s-\underline{s}\big|^2+ \bigg|\sigma(\tilde{X}^i_{\underline{s}}) \bigg|^2 \big|B^i_s-B^i_{\underline{s}}\big|^2\\[3pt] & \quad + \int_{\underline{s}}^s \int_E \bigg|F(\tilde{X}^i_{\underline{r}^-},z)\bigg|^2 \lambda(dz)dr\bigg]\Bigg\}\\[3pt] &\leq 3\sup_{s\le t}\Bigg\{\mathbb{E}\bigg[\bigg| b(\tilde{X}^i_{\underline{s}}) \bigg|^2 \big|s-\underline{s}\big|^2+ \bigg|\sigma(\tilde{X}^i_{\underline{s}}) \bigg|^2 \big|B^i_s-B^i_{\underline{s}}\big|^2+ C\!\int_{\underline{s}}^s\!(1+|\tilde{X}^i_{\underline{r}^-}|^2) dr\bigg]\Bigg\}\\[3pt] &\leq 3\sup_{s\le t}\Bigg\{\bigg(\frac{T}{n}\bigg)^2\mathbb{E}\bigg[\big|\sup_{\underline{s}\leq r\leq s} b(\tilde{X}^i_{\underline{r}^-}) \big|^2\bigg]+ \mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg] \mathbb{E}\bigg[\big|\sigma(\tilde{X}^i_{\underline{s}}) \big|^2\bigg]\\[3pt] &\quad +C\bigg(\frac{T}{n}\bigg)\mathbb{E}\bigg[\sup_{\underline{s}\leq r\leq s} (1+|\tilde{X}^i_r|^2)\bigg]\Bigg\}\\[3pt] &\leq C_1\bigg(\frac{T}{n}\bigg)^2\mathbb{E}\bigg[\sup_{s\le T} \big|b(\tilde{X}^i_s) \big|^2\bigg]+ C_2 \sup_{s\le t}\mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg] \mathbb{E}\bigg[\sup_{s\le T}\big|\sigma(\tilde{X}^i_s) \big|^2\bigg]\\[3pt] &\quad + C_3 \bigg(\frac{T}{n}\bigg)\mathbb{E}\bigg[\sup_{s\le T} (1+|\tilde{X}^i_s|^2)\bigg]\\[3pt] &\leq C_1\bigg(\frac{T}{n}\bigg)^2\bigg(1+\mathbb{E}\bigg[\sup_{s\le T} \big|\tilde{X}^i_s \big|^2\bigg]\bigg)\\[3pt] & \quad + C_2 \sup_{s\le t}\mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg] \bigg(1+\mathbb{E}\bigg[\sup_{s\le T} \big|\tilde{X}^i_s \big|^2\bigg]\bigg)\\[3pt] &\quad + C_3 \bigg(\frac{T}{n}\bigg)\bigg(1+\mathbb{E}\bigg[\sup_{s\le T} \big|\tilde{X}^i_s \big|^2\bigg]\bigg),\end{align*}

and from Proposition 1, we get

\begin{equation*}\begin{aligned}\sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]&\leq C_1\bigg(\frac{T}{n}\bigg)+ C_2 \sup_{s\le t}\mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg].\end{aligned}\end{equation*}

Then, by Itô's isometry, we obtain

\begin{equation*}\begin{aligned}\sup_{s\le t}\mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg]=\sup_{s\le t}\mathbb{E}\bigg[\bigg(\int_{\underline{s}}^s dB^i_u\bigg)^2\bigg]\leq\sup_{s\le t}|s-\underline{s}|\leq \frac{T}{n}.\end{aligned}\end{equation*}

Therefore, we conclude

(4.3) \begin{equation}\begin{aligned}\sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]&\leq C_1n^{-1}+ C_2 n^{-1}\\&\leq Cn^{-1},\end{aligned}\end{equation}

from which we derive the inequality

(4.4) \begin{equation}\begin{aligned}\mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]&\leq C\Bigg\{n^{-1}+N^{-1/2}+\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds\Bigg\},\end{aligned}\end{equation}

and taking into account (4.2) we get

(4.5) \begin{equation}\begin{aligned}\mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg] & \leq C\Bigg\{n^{-1}+N^{-1/2}+\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds\Bigg\}.\end{aligned}\end{equation}

Since

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg]&\leq 2\mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{s}\big|^2\bigg]+2\mathbb{E}\bigg[\big|\tilde{X}^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg]\\[8pt] &=2\mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{s}\big|^2\bigg]+2\mathbb{E}\bigg[\big|\tilde{U}^i_s-\tilde{U}^i_{\underline{s}}\big|^2\bigg],\end{aligned}\end{equation*}

it follows from (4.3) and (4.5) that

\begin{equation*}\begin{aligned}\mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg] & \leq C\Bigg\{n^{-1}+N^{-1/2}+\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_s\big|^2\bigg] ds\Bigg\}.\end{aligned}\end{equation*}

Finally, we conclude the proof of (i) with Gronwall’s lemma.

Proof of (ii). Following the proof of (ii) in Theorem 2, we obtain

\begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu^i_s)+\cdot) (d\mu_s^N-d\mu_s^i)\right|^2\bigg]\leq CN^{-1},\end{equation*}
\begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\tilde{\mu}^i_{\underline{s}})+\cdot) (d\tilde{\mu}^N_{\underline{s}}-d\tilde{\mu}^i_{\underline{s}})\right|^2\bigg]\leq CN^{-1}.\end{equation*}

By the same strategy as the one applied in the proof of part (i) above, the result follows:

\begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1}\bigg).\end{equation*}

Theorem 3. Let ${T > 0}$, let N and n be two positive integers, and suppose that Assumptions 1, 2, and 3 hold.

  1. (i) There exists a constant C, depending on T, b, ${\sigma}$ , F, h, and ${X_0}$ but independent of N, such that for all ${i=1,\ldots,N}$ ,

    \begin{equation*}\mathbb{E}\bigg[\sup_{t\le T}\big|\bar{X}^i_t-\tilde{X}^i_t\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1/2}\bigg).\end{equation*}
  2. (ii) If in addition Assumption 4 holds, there exists a positive constant C, depending on T, b, ${\sigma}$ , F, h, and ${X_0}$ but independent of N, such that for all ${i=1,\ldots,N}$ ,

    \begin{equation*}\mathbb{E}\bigg[\sup_{t\le T}\big|\bar{X}^i_t-\tilde{X}^i_t\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1}\bigg).\end{equation*}

Proof. The proof is straightforward, writing

\begin{equation*}\big|\bar{X}^i_t-\tilde{X}^i_t\big|\leq \big|\bar{X}^i_t-{X}^i_t\big|+\big|{X}^i_t-\tilde{X}^i_t\big|\end{equation*}

and using Theorem 2 and Proposition 4.

5. Numerical examples

In this section, we study, on [0, T], processes of the following form:

(5.1) \begin{equation} \begin{cases} \begin{split} & X_t =X_0-\int_0^t (\beta_s+a_s X_{s^-}) ds + \int_0^t (\sigma_s+\gamma_s X_{s^-}) dB_s\\[5pt] & \quad + \int_0^t \int_E c(z)(\eta_s+\theta_s X_{s^-}) \tilde{N}(ds,dz) + K_t,\quad t\geq 0, \\[8pt] & \mathbb{E}[h(X_t)] \geq 0, \quad \int_0^t \mathbb{E}[h(X_s)] \, dK_s = 0, \quad t\geq 0, \end{split} \end{cases}\end{equation}

where ${(\beta_t)_{t\geq0}}$, ${(a_t)_{t\geq0}}$, ${(\sigma_t)_{t\geq0}}$, ${(\gamma_t)_{t\geq0}}$, ${(\eta_t)_{t\geq0}}$, and ${(\theta_t)_{t\geq0}}$ are bounded adapted processes. This class of processes is chosen because it allows some explicit computations, which we use to test the algorithm. Different diffusions and functions h are considered in order to illustrate our results.

Linear constraint. Firstly, we consider the case where ${h\,{:}\,\mathbb{R}\ni x \longmapsto x-p\in\mathbb{R}.}$

Case (i). Drifted Brownian motion and compensated Poisson process: ${\beta_t=\beta>0}$ , ${a_t=\gamma_t=\theta_t=0}$ , ${\sigma_t=\sigma>0}$ , ${\eta_t=\eta>0}$ , ${X_0=x_0\geq p}$ , ${c(z)=z}$ , and

\begin{equation*}f(z)=\frac{1}{\sqrt{2\pi z}}\exp\bigg(-\frac{(\ln z)^2}{2}\bigg)\bf{1}_{\{0<z\}}.\end{equation*}

We have

\begin{equation*}K_t=(p+\beta t-x_0)^+,\end{equation*}

and

\begin{equation*}X_t=X_0-(\beta+\lambda\eta\sqrt{e})t+\sigma B_t+\sum_{i=1}^{N_t}\eta \xi_i+K_t,\end{equation*}

where ${N_t\sim \mathcal{P}(\lambda t)}$ and ${\xi_i\sim lognormal(0,1).}$
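Since every term in this expression has a known law, Case (i) can be simulated exactly on the time grid. The following Python sketch (our own illustration, not part of the original algorithm) does so, using the closed-form ${K_t=(p+\beta t-x_0)^+}$:

```python
import numpy as np

def exact_case_i(beta, sigma, eta, lam, x0, p, T, n, seed=0):
    """Exact grid simulation of Case (i): drifted Brownian motion plus a
    compensated compound Poisson process with lognormal(0,1) marks, and
    K_t = (p + beta*t - x0)^+ in closed form."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n + 1)
    dt = T / n
    B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))
    J = np.concatenate(([0.0], np.cumsum(
        [eta * rng.lognormal(0.0, 1.0, rng.poisson(lam * dt)).sum()
         for _ in range(n)])))             # compound Poisson part
    K = np.maximum(p + beta * t - x0, 0.0)
    X = x0 - (beta + lam * eta * np.sqrt(np.e)) * t + sigma * B + J + K
    return t, X, K
```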

Case (ii). Black–Scholes process: ${\beta_t=\sigma_t=\eta_t=0}$ , ${a_t=a>0}$ , ${\gamma_t=\gamma>0}$ , ${\theta_t=\theta>0}$ , ${c(z)=\delta_1(z)}$ . Then

\begin{equation*}K_t=ap(t-t^*)1_{t\geq t^*},\end{equation*}

where ${t^*=\frac{1}{a}(\ln(x_0)-\ln(p))}$ , and

\begin{equation*}X_t=Y_t+Y_t\int_0^tY_s^{-1}dK_s,\end{equation*}

where Y is the process defined by

\begin{equation*}Y_t=X_0\exp\Big(-(a+\gamma^2/2+\lambda\theta)t+\gamma B_t\Big)(1+\theta)^{N_t}.\end{equation*}
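Case (ii) can likewise be simulated from the closed-form expressions above; the only quantity without a closed form is the integral ${\int_0^t Y_s^{-1}dK_s}$, which the sketch below (ours, for illustration) approximates by a Riemann sum on the grid:

```python
import numpy as np

def exact_case_ii(a, gamma, theta, lam, x0, p, T, n, seed=0):
    """Grid simulation of Case (ii): exact draw of Y, closed-form
    K_t = a*p*(t - t*)^+ with t* = (ln x0 - ln p)/a, and
    X_t = Y_t * (1 + int_0^t Y_s^{-1} dK_s), the integral being
    approximated by a Riemann sum on the grid."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n + 1)
    dt = T / n
    B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))
    Nt = np.concatenate(([0], np.cumsum(rng.poisson(lam * dt, n))))
    Y = x0 * np.exp(-(a + gamma**2 / 2 + lam * theta) * t + gamma * B) \
        * (1.0 + theta)**Nt
    tstar = (np.log(x0) - np.log(p)) / a
    K = a * p * np.maximum(t - tstar, 0.0)
    dK = np.diff(K, prepend=0.0)
    X = Y * (1.0 + np.cumsum(dK / Y))      # Riemann-sum quadrature
    return t, X, K
```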

Nonlinear constraint. Secondly, we consider the case of a nonlinear function h:

\begin{equation*}h\,{:}\,\mathbb{R}\ni x\longmapsto x+\alpha \sin(x)-p\in\mathbb{R}, \quad -1<\alpha<1.\end{equation*}

We illustrate this case with the following.

Case (iii). Ornstein–Uhlenbeck process: ${\beta_t=\beta>0}$ , ${a_t=a>0}$ , ${\gamma_t=\theta_t=0}$ , ${\sigma_t=\sigma>0}$ , ${\eta_t=\eta>0}$ , ${X_0=x_0}$ with ${x_0>|\alpha|+p}$ , ${c(z)=\delta_1(z)}$ . We obtain

\begin{equation*}\textrm{d}K_t=e^{-at}\textrm{d}\sup_{s\leq t}(F^{-1}_s(0))^+,\end{equation*}

where for all t in [0, T],

\begin{equation*}\begin{aligned} & F_t\,{:}\,\mathbb{R}\ni x\longmapsto \Bigg\{e^{-at}\bigg(x_0-\beta\bigg(\frac{e^{at}-1}{a}\bigg)+x\bigg)+\alpha\exp\bigg(-e^{-at}\frac{\sigma^2}{2a}\sinh(at)\bigg)\\[4pt] &\times\Bigg[\frac{1}{2}\big(\exp(\lambda t(e^{i\eta}-1))+\exp(\lambda t(e^{-i\eta}-1))\big)\sin\bigg(e^{-at}\bigg(x_0-(\beta+\lambda\eta)\bigg(\frac{e^{at}-1}{a}\bigg)+x\bigg)\bigg)\\[4pt] &+\frac{1}{2i}\big(\exp(\lambda t(e^{i\eta}-1))-\exp(\lambda t(e^{-i\eta}-1))\big)\cos\bigg(e^{-at}\bigg(x_0-(\beta+\lambda\eta)\bigg(\frac{e^{at}-1}{a}\bigg)+x\bigg)\bigg)\Bigg]\\[3pt] & -p\Bigg\}.\end{aligned}\end{equation*}

Remark 6. We choose these examples in order to obtain an analytic form of the ‘true’ reflecting process K which can be compared numerically with its empirical approximation ${\hat{K}}$ . Having the exact simulation of the underlying process, we can verify the efficiency of our algorithm.

5.1. Proofs of the numerical illustrations

In order to have a closed, or almost closed, expression for the compensator K we introduce the process Y solving the non-reflected SDE

\begin{equation*}Y_t =X_0-\int_0^t (\beta_s+a_s Y_{s^-}) ds + \int_0^t (\sigma_s+\gamma_s Y_{s^-}) dB_s + \int_0^t\int_E c(z)(\eta_s+\theta_s Y_{s^-}) \tilde{N}(ds,dz).\end{equation*}

By letting ${A_t=\int_0^t a_s\,ds}$ and applying Itô’s formula to ${e^{A_t}X_t}$ and ${e^{A_t}Y_t}$, we get

\begin{equation*}\begin{aligned}e^{A_t}X_t&=X_0+\int_0^t e^{A_s}X_s a_s ds+\int_0^t e^{A_s}(-\beta_s-a_s X_{s^-}) ds + \int_0^t e^{A_s}(\sigma_s+\gamma_s X_{s^-}) dB_s \\&\quad + \int_0^t\int_E e^{A_s} c(z)(\eta_s+\theta_s X_{s^-}) \tilde{N}(ds,dz)+\int_0^te^{A_s}dK_s\\&=X_0-\int_0^t e^{A_s}\beta_s ds + \int_0^t e^{A_s}(\sigma_s+\gamma_s X_{s^-}) dB_s\\&\quad + \int_0^t\int_E e^{A_s} c(z)(\eta_s+\theta_s X_{s^-}) \tilde{N}(ds,dz)+\int_0^te^{A_s}dK_s.\end{aligned}\end{equation*}

In the same way,

\begin{equation*}\begin{aligned}e^{A_t}Y_t=X_0-\int_0^t e^{A_s}\beta_s ds + \int_0^t e^{A_s}(\sigma_s+\gamma_s Y_{s^-}) dB_s + \int_0^t\int_E e^{A_s} c(z)(\eta_s+\theta_s Y_{s^-}) \tilde{N}(ds,dz),\end{aligned}\end{equation*}

and so

\begin{equation*}\begin{aligned}X_t&=Y_t+e^{-A_t}\int_0^te^{A_s}dK_s+ e^{-A_t}\int_0^t e^{A_s}\gamma_s (X_{s^-}-Y_{s^-}) dB_s\\[5pt] &\quad +e^{-A_t} \int_0^t\int_E e^{A_s} c(z)\theta_s(X_{s^-}-Y_{s^-}) \tilde{N}(ds,dz).\end{aligned}\end{equation*}

Remark 7. In all cases, we have ${a_t=a}$ , i.e. ${A_t=at}$ , so we get

\begin{equation*}\begin{aligned}\mathbb{E}[Y_t]&=\mathbb{E}\bigg[e^{-at}\bigg(x_0-\int_0^t e^{as}\beta ds+ \int_0^t e^{as}(\sigma_s+\gamma_s Y_{s^-}) dB_s\\[5pt] & \qquad\qquad + \int_0^t\int_E e^{as} c(z)(\eta_s+\theta_s Y_{s^-}) \tilde{N}(ds,dz)\bigg)\bigg]\\[5pt] &=e^{-at}\bigg(x_0-\int_0^t e^{as}\beta ds\bigg)\\[5pt] &=e^{-at}\bigg(x_0-\beta \bigg(\frac{e^{at}-1}{a}\bigg)\bigg).\end{aligned}\end{equation*}

Proof of assertions in Case (i). From Proposition 3 and Remark 7, we have

\begin{equation*}\begin{aligned}k_t&=\beta \mathbf{1}_{\mathbb{E}[X_t]=p}\\[3pt] &=\beta \mathbf{1}_{\mathbb{E}[Y_t]+K_t-p=0}\\[3pt] &=\beta \mathbf{1}_{x_0-\beta t+K_t-p=0},\end{aligned}\end{equation*}

so we obtain that

\begin{equation*}\begin{aligned}K_t&=\int_0^t k_s ds\\[5pt] &=\int_0^t \beta \mathbf{1}_{K_s=p+\beta s-x_0} ds,\end{aligned}\end{equation*}

and as ${K_t\geq 0}$ , we conclude that

\begin{equation*}K_t=(p+\beta t-x_0)^+.\end{equation*}

Next, we have

\begin{equation*}f(z)=\frac{1}{\sqrt{2\pi z}}\exp\bigg(-\frac{(\ln z)^2}{2}\bigg),\end{equation*}

the density function of a lognormal random variable, so we can obtain

\begin{equation*}\int_E \eta z \lambda(dz)=\lambda\eta\int_E z f(z) dz=\lambda\eta \mathbb{E}(\xi)\end{equation*}

where ${\xi\sim lognormal(0,1)}$ , and we conclude that

\begin{equation*}\int_E \eta z \lambda(dz)=\lambda\eta\sqrt{e}.\end{equation*}

Finally, we deduce the exact solution

\begin{equation*}X_t=X_0-(\beta+\lambda\eta\sqrt{e}) t+\sigma B_t+\sum_{i=1}^{N_t}\eta \xi_i+K_t,\end{equation*}

where ${N_t\sim \mathcal{P}(\lambda t)}$ and ${\xi_i\sim lognormal(0,1).}$

Proof of assertions in Case (ii). In this case, using the same Proposition and Remark, we have

\begin{equation*}k_t=\big(\mathbb{E}[-aX_t]\big)^- \mathbf{1}_{\mathbb{E}[X_t]=p},\end{equation*}

which implies

\begin{equation*}\begin{aligned}\mathbb{E}[X_t]=p&\Longleftrightarrow \mathbb{E}[Y_t]-p+e^{-at}\int_0^t e^{as}dK_s=0\\[5pt] &\Longleftrightarrow -x_0e^{-at}+p=e^{-at}\int_0^t e^{as}dK_s\\[5pt] &\Longleftrightarrow k_t=ap,\end{aligned}\end{equation*}

and

\begin{equation*}\begin{aligned}K_t\geq0&\Longleftrightarrow -x_0e^{-at}+p\geq0\\&\Longleftrightarrow e^{-at}\leq\frac{p}{x_0}\\&\Longleftrightarrow t\geq\frac{1}{a}(\ln(x_0)-\ln(p))=: t^*.\end{aligned}\end{equation*}

So we conclude that ${K_t=ap(t-t^*)1_{t\geq t^*}}$ , where ${t^*=\frac{1}{a}(\ln(x_0)-\ln(p))}$ .

Next, by the definition of the process ${Y_t}$ ,

\begin{equation*}dY_t =-a Y_{t^-} dt + \gamma Y_{t^-} dB_t + \theta Y_{t^-} d\tilde{N}_t,\end{equation*}

we have

\begin{equation*}Y_t=X_0\exp\Big(-(a+\gamma^2/2+\lambda\theta)t+\gamma B_t\Big)(1+\theta)^{N_t}.\end{equation*}

Thanks to Itô’s formula we get

\begin{equation*}\begin{aligned}d\bigg(\frac{1}{Y_t}\bigg)&=-\frac{1}{Y_t^2}dY_t+\frac{1}{2}\bigg(\frac{2}{Y_t^3}\bigg)\gamma^2Y_t^2dt+d\sum_{s\leq t}\bigg(\frac{1}{Y_{s^-}+\Delta Y_s}-\frac{1}{Y_{s^-}}+\frac{1}{Y_{s^-}^2}\Delta Y_s\bigg)\\[5pt] &=\frac{a}{Y_t}dt-\frac{\gamma}{Y_t}dB_t-\frac{\theta}{Y_{t^-}}d\tilde{N}_t+\frac{\gamma^2}{Y_t}dt+d\sum_{s\leq t}\bigg(\frac{1}{(1+\theta)Y_{s^-}}-\frac{1}{Y_{s^-}}+\frac{\theta}{Y_{s^-}}\bigg),\end{aligned}\end{equation*}

and so

\begin{equation*}\begin{aligned}dY_t^{-1}&=(a+\gamma^2)Y_t^{-1}dt-\gamma Y_t^{-1}dB_t-\theta Y_{t^-}^{-1}d\tilde{N}_t+\bigg(\frac{\theta^2}{1+\theta}\bigg)d\sum_{s\leq t}Y_{s^-}^{-1}\\&=\bigg(a+\gamma^2+\frac{\lambda\theta^2}{1+\theta}\bigg)Y_t^{-1}dt-\gamma Y_t^{-1}dB_t-\bigg(\frac{\theta}{1+\theta}\bigg) Y_{t^-}^{-1}d\tilde{N}_t.\end{aligned}\end{equation*}

Then, using integration by parts, we obtain

\begin{equation*}\begin{aligned}d(X_t Y_t^{-1})&=X_{t^-} dY_t^{-1}+Y_{t^-}^{-1}dX_t+d[X,Y^{-1}]_t \\[3pt] &=(a+\gamma^2)X_tY_t^{-1}dt-\gamma X_tY_t^{-1}dB_t-\theta X_{t^-}Y_{t^-}^{-1}d\tilde{N}_t+\bigg(\frac{\theta^2}{1+\theta}\bigg)d\sum_{s\leq t}X_{s^-}Y_{s^-}^{-1}\\[3pt] & \quad -aX_t Y_t^{-1}dt+\gamma X_t Y_t^{-1}dB_t+\theta X_{t^-} Y_{t^-}^{-1}d\tilde{N}_t+Y_t^{-1}dK_t\\[3pt] & \quad -\gamma^2X_tY_t^{-1}dt-\bigg(\frac{\theta^2}{1+\theta}\bigg)d\sum_{s\leq t}X_{s^-}Y_{s^-}^{-1}\\[3pt] &=Y_t^{-1}dK_t.\end{aligned}\end{equation*}

Finally, we deduce that

\begin{equation*}X_t=Y_t+Y_t\int_0^tY_s^{-1}dK_s.\end{equation*}

Proof of assertions in Case (iii). In this case, we have

\begin{equation*}\begin{aligned}Y_t&=e^{-at}\bigg(x_0-\beta \bigg(\frac{e^{at}-1}{a}\bigg)\bigg)+ \sigma e^{-at}\int_0^t e^{as}dB_s + e^{-at}\int_0^t \eta e^{as} d\tilde{N}_s\\[5pt] &=e^{-at}\bigg(x_0-(\beta+\lambda\eta) \bigg(\frac{e^{at}-1}{a}\bigg)\bigg)+ \sigma e^{-at}\int_0^t e^{as}dB_s + e^{-at}\int_0^t \eta e^{as} dN_s\\[5pt]&=: f_t+G_t+F_t,\end{aligned}\end{equation*}

and

\begin{equation*}X_t=Y_t+e^{-at}\bar{K}_t,\quad \bar{K}_t=\int_0^te^{as}dK_s.\end{equation*}

Hence

\begin{equation*}\begin{aligned}h(X_t)&=Y_t+e^{-at}\bar{K}_t+\alpha\sin(Y_t+e^{-at}\bar{K}_t)-p\\[3pt] &=Y_t+e^{-at}\bar{K}_t+\alpha\big(\sin(Y_t)\cos(e^{-at}\bar{K}_t)+\cos(Y_t)\sin(e^{-at}\bar{K}_t)\big)-p\\[3pt] &=Y_t+e^{-at}\bar{K}_t+\alpha\big[\cos(e^{-at}\bar{K}_t)\big\{\sin(\,f_t)\cos(G_t)\cos(F_t)+\cos(\,f_t)\sin(G_t)\cos(F_t)\\[3pt] &\quad +\cos(\,f_t)\cos(G_t)\sin(F_t)-\sin(\,f_t)\sin(G_t)\sin(F_t)\big\}\\[3pt] &\quad +\sin(e^{-at}\bar{K}_t)\big\{\cos(\,f_t)\cos(G_t)\cos(F_t)\\[3pt] &\quad -\sin(\,f_t)\sin(G_t)\sin(F_t)-\sin(\,f_t)\cos(G_t)\sin(F_t)-\cos(\,f_t)\sin(G_t)\sin(F_t)\big\}\big]-p.\end{aligned}\end{equation*}

On the one hand, since ${G_t}$ is a centered Gaussian random variable with variance

\begin{equation*}V=\sigma^2\frac{1-e^{-2at}}{2a}=\sigma^2e^{-at}\frac{\sinh(at)}{a},\end{equation*}

we obtain that

\begin{equation*}\mathbb{E}[e^{iG_t}]=e^{-V/2},\end{equation*}
\begin{equation*}\mathbb{E}[\sin(G_t)]=\mathbb{E}\bigg[\frac{e^{iG_t}-e^{-iG_t}}{2i}\bigg]=0,\end{equation*}

and

\begin{equation*}\mathbb{E}[\cos(G_t)]=\mathbb{E}\bigg[\frac{e^{iG_t}+e^{-iG_t}}{2}\bigg]=\mathbb{E}[e^{iG_t}]=\exp\bigg(-e^{-at}\frac{\sigma^2}{2a}\sinh(at)\bigg)=: g(t).\end{equation*}

On the other hand,

\begin{equation*}\begin{aligned}\mathbb{E}[e^{iF_t}]=\mathbb{E}\bigg[\exp\bigg(i\eta e^{-at}\int_0^t e^{as}dN_s\bigg)\bigg],\end{aligned}\end{equation*}

by taking a small (so that ${e^{as}\approx 1}$ for ${s\leq t}$), we get

\begin{equation*}\begin{aligned}\mathbb{E}[e^{iF_t}]&\approx\mathbb{E}\bigg[\exp\bigg(i\eta \int_0^t dN_s\bigg)\bigg]\\[5pt] &\approx\mathbb{E}\Big[\exp\Big(i\eta N_t\Big)\Big]\\[5pt] &\approx\exp\Big(\lambda t(e^{i\eta}-1)\Big),\end{aligned}\end{equation*}

and so

\begin{equation*}\mathbb{E}[\sin(F_t)]\approx\frac{\exp\Big(\lambda t(e^{i\eta}-1)\Big)-\exp\Big(\lambda t(e^{-i\eta}-1)\Big)}{2i}=: m(t),\end{equation*}

\begin{equation*}\mathbb{E}[\cos(F_t)]\approx\frac{\exp\Big(\lambda t(e^{i\eta}-1)\Big)+\exp\Big(\lambda t(e^{-i\eta}-1)\Big)}{2}=: n(t).\end{equation*}

Using Remark 7, we conclude that, for small a,

\begin{equation*}\begin{aligned}\mathbb{E}[h(X_t)]&\approx\mathbb{E}[Y_t]+e^{-at}\bar{K}_t+\alpha\Big(g(t)m(t)\cos(f_t+e^{-at}\bar{K}_t)+g(t)n(t)\sin(f_t+e^{-at}\bar{K}_t)\Big)-p\\&=: F_t(\bar{K}_t).\end{aligned}\end{equation*}

Therefore,

\begin{equation*}\bar{K}_t=\sup_{s\leq t}\Big(F_s^{-1}(0)\Big)^+\quad\text{and}\quad dK_t=e^{-at}\,d\sup_{s\leq t}\Big(F_s^{-1}(0)\Big)^+.\end{equation*}
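Numerically, ${F_s^{-1}(0)}$ can be obtained by a one-dimensional root search: since ${|\alpha|<1}$, the map ${x\mapsto F_s(x)}$ is increasing, so bisection applies. A minimal Python sketch (ours, assuming a callable F(t, x) implementing the formula above and an a priori bracket containing the root):

```python
import numpy as np

def kbar_path(F, ts, bracket=50.0, tol=1e-8):
    """Return Kbar_t = sup_{s <= t} (F_s^{-1}(0))^+ on the grid ts,
    inverting each F(s, .) by bisection; F(s, .) is assumed increasing
    with a root inside [-bracket, bracket]."""
    roots = []
    for s in ts:
        lo, hi = -bracket, bracket
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if F(s, mid) >= 0.0 else (mid, hi)
        roots.append(max(hi, 0.0))       # positive part of the root
    return np.maximum.accumulate(np.array(roots))
```

The reflection process itself is then recovered through ${dK_t=e^{-at}d\bar{K}_t}$, i.e. by differencing the returned path and multiplying by ${e^{-at}}$.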

5.2. Illustrations

The computation works as follows. Let ${0 = T_0 < T_1 <\cdots< T_n = T }$ be a subdivision of [0, T] of step size ${T/n}$, n being a positive integer, let X be the unique solution of the MR-SDE (5.1), and let ${(\tilde{X}^i_{T_k})_{0\leq k\leq n}}$, for a given i, be its numerical approximation given by Algorithm 1. For a given integer L, we draw ${(\bar{X}^l)_{1\leq l\leq L}}$ and ${(\tilde{X}^{i,l})_{1\leq l\leq L}}$, L independent copies of X and ${\tilde{X}^i}$. Then we approximate the ${\mathbb{L}^2}$-error of Theorem 3 by

(5.2) \begin{equation}\hat{E}=\frac{1}{L}\sum_{l=1}^{L}\max_{0\leq k\leq n}\Big|\bar{X}^l_{T_k}-\tilde{X}^{i,l}_{T_k}\Big|^2.\end{equation}
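In Python, the estimator (5.2) amounts to a couple of lines; the sketch below assumes the exact and approximated paths are stored as arrays of shape (L, n+1):

```python
import numpy as np

def l2_error_hat(exact_paths, approx_paths):
    """Monte Carlo estimate (5.2): mean over the L runs of the squared
    maximal deviation along the time grid."""
    return np.mean(np.max(np.abs(exact_paths - approx_paths), axis=1) ** 2)
```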

Figure 1 illustrates the evolution in time of the true K (full line) and the estimated K (dotted line for the particle method, dashed line for the density method) in Case (i). The approximation of K is visually almost indistinguishable from the exact solution. The evolution of ${\log(\hat{E})}$ with respect to ${\log(N)}$ is depicted in Figure 2. The regression slope is approximately ${0.9}$, which is consistent with the statement of Theorem 3.

Algorithm 1 Particle approximation

Figure 1: Case (i). ${n = 500,\ N = 100000,\ T = 1,\ \beta = 2,\ \sigma = 1,\ \lambda=5,\ x_0 =1,\ p = 1/2}$ .

Figure 2: Case (i). Regression of ${\log(\hat{E})}$ w.r.t. ${\log(N)}$ . Data: ${\hat{E}}$ when N varies from 100 to 2200 with step size 300. Parameters: ${n = 100}$ , ${T = 1}$ , ${\beta =2}$ , ${\sigma= 1}$ , ${\lambda=5}$ , ${x_0 = 1}$ , ${p = 1/2}$ , ${L = 1000}$ .

Figure 3 illustrates the evolution in time of the true K (full line) and the estimated K (dotted line for the particle method, dashed line for the density method) in Case (ii). As in the previous example, the approximation of K is very close to the exact solution. The evolution of ${\log(\hat{E})}$ with respect to ${\log(N)}$ is depicted in Figure 4. The regression slope is approximately ${0.9}$, which is consistent with the statement of Theorem 3.

Figure 3: Case (ii). Parameters: ${n = 500}$ , ${N = 10000}$ , ${T = 1}$ , ${\beta=0}$ , ${a = 3}$ , ${\gamma=1}$ , ${\eta =1}$ , ${\lambda=2}$ , ${x_0 = 4}$ , ${p = 1}$ .

Figure 4: Case (ii). Regression of ${\log(\hat{E})}$ w.r.t. ${\log(N)}$ . Data: ${\hat{E}}$ when N varies from 100 to 800 with step size 100. Parameters: ${n = 1000}$ , ${T = 1}$ , ${\beta =0}$ , ${a=3}$ , ${\gamma=1}$ , ${\eta= 1}$ , ${\lambda=2}$ , ${x_0 = 4}$ , ${p = 1}$ , ${L = 1000}$ .

Figure 5 illustrates the evolution in time of the true K (full line) and the estimated K (dotted line for particle method, dashed line for density method) in Case (iii). Moreover, we notice that the approximation of K using the particle method is closer to the exact K than the one using the density method.

Figure 5: Case (iii). Parameters: ${n = 1000}$ , ${N = 100000}$ , ${T = 15}$ , ${\beta =10^{-2}}$ , ${\sigma = 1}$ , ${p =\pi/2}$ , ${\alpha = 0.9}$ , ${a=10^{-2}}$ , ${x_0}$ is the unique solution of ${x+\alpha\sin (x)-p=0}$ plus ${10^{-1}}$ .

Appendix A. Proof of Lemma 4

Let s and t be in [0, T] with ${s\leq t}$.

Firstly, we suppose that ${\varphi}$ is a continuous function with compact support. In this case, there exists a sequence of Lipschitz continuous functions ${\varphi_n}$ with compact support which converges uniformly to ${\varphi}$ . Therefore, by using Proposition 2, we get

\begin{equation*}\begin{aligned}|\mathbb{E}[\varphi(X_t)]-\mathbb{E}[\varphi(X_s)]|&\leq |\mathbb{E}[\varphi(X_t)]-\mathbb{E}[\varphi_n(X_t)]|+|\mathbb{E}[\varphi_n(X_t)]-\mathbb{E}[\varphi_n(X_s)]|\\ &\quad +|\mathbb{E}[\varphi_n(X_s)]-\mathbb{E}[\varphi(X_s)]|\\ &\leq \mathbb{E}[|(\varphi-\varphi_n)(X_t)|]+C_n\mathbb{E}[|X_t-X_s|]+|\mathbb{E}[(\varphi_n-\varphi)(X_s)]|\\ &\leq 2\|\varphi_n-\varphi\|_\infty+C_n(\mathbb{E}[|X_t-X_s|^2])^{1/2}\\ &\leq 2\|\varphi_n-\varphi\|_\infty+C_n|t-s|^{1/2},\end{aligned}\end{equation*}

where ${C_n}$ denotes the Lipschitz constant of ${\varphi_n}$.

Thus, we obtain that

\begin{equation*}\limsup_{t\rightarrow s}|\mathbb{E}[\varphi(X_t)]-\mathbb{E}[\varphi(X_s)]|\leq 2\|\varphi_n-\varphi\|_\infty.\end{equation*}

This result is true for all ${n\geq1}$ , so we deduce that

\begin{equation*}\limsup_{t\rightarrow s}|\mathbb{E}[\varphi(X_t)]-\mathbb{E}[\varphi(X_s)]|=0,\end{equation*}

and we conclude that the function ${t\longmapsto\mathbb{E} [\varphi(X_t)]}$ is continuous.

Secondly, we consider the case where ${\varphi}$ is a continuous function such that

\begin{equation*}\exists\, C>0\ \text{such that}\ \forall\, x\in\mathbb{R}, \quad |\varphi(x)|\leq C(1+|x|^p).\end{equation*}

We define a sequence of functions ${\varphi_n}$ such that for all ${n\geq1}$ and ${x\in\mathbb{R}}$ ,

\begin{equation*}\varphi_n(x)=\varphi(x)\theta_n(x)\end{equation*}

with

\begin{equation*} \theta_n(x)= \begin{cases} 1 & \text{if}\ |x|\leq n, \\ n+1-|x| & \text{if}\ n<|x|\leq n+1, \\ 0 & \text{if}\ |x|> n+1. \end{cases} \end{equation*}

Based on this definition, ${\varphi_n}$ is a continuous function with compact support. Then we get

\begin{equation*} \begin{aligned} |\mathbb{E}[\varphi(X_t)]-\mathbb{E}[\varphi(X_s)]| &\leq \Big|\mathbb{E}\Big[(\varphi-\varphi_n)(X_t)({\bf 1}_{|X_t|\leq n}+{\bf 1}_{|X_t|> n})\Big]\Big|+\Big|\mathbb{E}[\varphi_n(X_t)]-\mathbb{E}[\varphi_n(X_s)]\Big|\\&\quad +\Big|\mathbb{E}\Big[(\varphi_n-\varphi)(X_s)({\bf 1}_{|X_s|\leq n}+{\bf 1}_{|X_s|> n})\Big]\Big|\\ & \leq \Big|\mathbb{E}\Big[(\varphi-\varphi_n)(X_t){\bf 1}_{|X_t|> n}\Big]\Big|+\Big|\mathbb{E}[\varphi_n(X_t)]-\mathbb{E}[\varphi_n(X_s)]\Big| \\ &\quad +\Big|\mathbb{E}\Big[(\varphi_n-\varphi)(X_s){\bf 1}_{|X_s|> n}\Big]\Big|\\ & \leq 2\mathbb{E}\Big[|\varphi(X_t)|{\bf 1}_{|X_t|> n}\Big]+\Big|\mathbb{E}[\varphi_n(X_t)]-\mathbb{E}[\varphi_n(X_s)]\Big|+2\mathbb{E}\Big[|\varphi(X_s)|{\bf 1}_{|X_s|> n}\Big]\\ & \leq C\mathbb{E}\Big[(1+|X_t|^p){\bf 1}_{|X_t|> n}\Big]+\Big|\mathbb{E}[\varphi_n(X_t)]-\mathbb{E}[\varphi_n(X_s)]\Big| \\ &\quad +C\mathbb{E}\Big[(1+|X_s|^p){\bf 1}_{|X_s|> n}\Big]\\ & \leq C\,\mathbb{E}\Big[\big(1+\sup_{t\leq T}|X_t|^p\big){\bf 1}_{\sup_{t\leq T}|X_t|> n}\Big]+\Big|\mathbb{E}[\varphi_n(X_t)]-\mathbb{E}[\varphi_n(X_s)]\Big|. \end{aligned} \end{equation*}

Thus, by using the first part of this lemma, we obtain that

\begin{equation*}\limsup_{t\rightarrow s}|\mathbb{E}[\varphi(X_t)]-\mathbb{E}[\varphi(X_s)]|\leq C\mathbb{E}\Big[(1+\sup_{t\leq T}|X_t|^p)\bf{1}_{\sup_{t\leq T}|X_t|> n}\Big].\end{equation*}

This result is true for all ${n\geq1}$ ; by using the dominated convergence theorem, we deduce that

\begin{equation*}\limsup_{t\rightarrow s}|\mathbb{E}[\varphi(X_t)]-\mathbb{E}[\varphi(X_s)]|=0,\end{equation*}

and we conclude the continuity of the function ${t\longmapsto\mathbb{E} [\varphi(X_t)]}$ .

References

Artzner, P., Delbaen, F., Eber, J.-M. and Heath, D. (1999). Coherent measures of risk. Math. Finance 9, 203–228.
Briand, P., Chaudru de Raynal, P.-E., Guillin, A. and Labart, C. (2016). Particles systems and numerical schemes for mean reflected stochastic differential equations. Submitted.
Briand, P., Elie, R. and Hu, Y. (2018). BSDEs with mean reflexion. Ann. Appl. Prob. 28, 482–510.
Carmona, R. and Delarue, F. (2018). Probabilistic Theory of Mean Field Games with Applications I. Springer, New York.
Carmona, R. and Delarue, F. (2018). Probabilistic Theory of Mean Field Games with Applications II. Springer, New York.
Crépey, S. and Matoussi, A. (2008). Reflected and doubly reflected BSDEs with jumps: a priori estimates and comparison. Ann. Appl. Prob. 18, 2041–2069.
Dumitrescu, R. and Labart, C. (2016). Numerical approximation of doubly reflected BSDEs with jumps and RCLL obstacles. J. Math. Anal. Appl. 442, 206–243.
Dumitrescu, R. and Labart, C. (2016). Reflected scheme for doubly reflected BSDEs with jumps and RCLL obstacles. J. Comput. Appl. Math. 296, 827–839.
Essaky, E. H., Harraj, N. and Ouknine, Y. (2005). Backward stochastic differential equation with two reflecting barriers and jumps. Stoch. Anal. Appl. 23, 921–938.
Essaky, E. H. (2008). Reflected backward stochastic differential equation with jumps and RCLL obstacle. Bull. Sci. Math. 132, 690–710.
Föllmer, H. and Schied, A. (2002). Convex measures of risk and trading constraints. Finance Stoch. 6, 429–447.
Fournier, N. and Guillin, A. (2015). On the rate of convergence in Wasserstein distance of the empirical measure. Prob. Theory Relat. Fields 162, 707–738.
Hamadène, S. and Hassani, M. (2006). BSDEs with two reflecting barriers driven by a Brownian motion and an independent Poisson noise and related Dynkin game. Electron. J. Prob. 11, 121–145.
Hamadène, S. and Ouknine, Y. (2003). Reflected backward stochastic differential equation with jumps and random obstacle. Electron. J. Prob. 8, 1–20.
Kohatsu-Higa, A. (1992). Reflecting stochastic differential equations with jumps. Tech. Rep. 92-30, University of Puerto Rico.
Lasry, J.-M. and Lions, P.-L. (2006). Jeux à champ moyen. I. Le cas stationnaire. C. R. Acad. Sci. Paris 343, 619–625.
Lasry, J.-M. and Lions, P.-L. (2006). Jeux à champ moyen. II. Horizon fini et contrôle optimal. C. R. Acad. Sci. Paris 343, 679–684.
Lasry, J.-M. and Lions, P.-L. (2007). Large investor trading impacts on volatility. Ann. Inst. H. Poincaré, Anal. Non Linéaire 24, 311–323.
Lasry, J.-M. and Lions, P.-L. (2007). Mean field games. Japanese J. Math. 2, 229–260.
Lepingle, D. (1995). Euler scheme for reflected stochastic differential equations. Math. Comput. Simul. 38, 119–126.
Menaldi, J.-L. and Robin, M. (1985). Reflected diffusion processes with jumps. Ann. Prob. 13, 319–341.
Pettersson, R. (1995). Approximations for stochastic differential equations with reflecting convex boundaries. Stoch. Process. Appl. 59, 295–308.
Pettersson, R. (1997). Penalization schemes for reflecting stochastic differential equations. Bernoulli 3, 403–414.
Quenez, M. C. and Sulem, A. (2014). Reflected BSDEs and robust optimal stopping for dynamic risk measures with jumps. Stoch. Process. Appl. 124, 3031–3054.
Rüschendorf, L. and Rachev, S. T. (1998). Mass Transportation Problems. Vol. 1: Theory. Vol. 2: Applications. Springer, New York.
Skorokhod, A. V. (1961). Stochastic equations for diffusion processes in a bounded region. Theory Prob. Appl. 6, 264–274.
Slominski, L. (1994). On approximation of solutions of multidimensional SDEs with reflecting boundary conditions. Stoch. Process. Appl. 50, 197–219.
Slominski, L. (2001). Euler’s approximations of solutions of SDEs with reflecting boundary. Stoch. Process. Appl. 92, 317–337.
Yu, H. and Song, M. (2012). Numerical solutions of stochastic differential equations driven by Poisson random measure with non-Lipschitz coefficients. J. Appl. Math. 2012.