1. Introduction
Reflected stochastic differential equations (SDEs) were introduced in the pioneering work of Skorokhod (see [Sko61]), and their numerical approximation by Euler schemes has been widely studied (see [Slo94], [Slo01], [Lep95], [Pet95], [Pet97]). Reflected SDEs driven by a Lévy process have also been studied in the literature (see [MR85], [KH92]). More recent works have introduced and studied reflected backward stochastic differential equations with jumps (see [HO03], [EHO05], [HH06], [Ess08], [CM08], [QS14]), as well as their numerical approximation (see [DL16a] and [DL16b]). The main particularity of our work is that the constraint acts on the law of the process X rather than on its paths. The study of such equations is linked to the theory of mean field games, introduced by Lasry and Lions (see [LL07a], [LL07b], [LL06b], [LL06a]), whose probabilistic point of view is studied in [CD18a] and [CD18b]. Stochastic differential equations with mean reflection were introduced, in their backward form, by Briand, Elie, and Hu in [BEH18]. In that work, the authors show that mean reflected stochastic processes exist and are uniquely defined by an associated system of equations of the form
(1.1) \begin{equation*} X_t = X_0 + \int_0^t b(X_s)\,ds + \int_0^t \sigma(X_s)\,dB_s + K_t, \qquad \mathbb{E}[h(X_t)]\geq 0, \qquad \int_0^t \mathbb{E}[h(X_s)]\,dK_s = 0, \quad t\geq 0. \end{equation*}
Because the reflection process K depends on the law of the position, the authors of [BCdRGL16], inspired by mean field games, study the convergence of a numerical scheme based on particle systems to compute solutions to (1.1) numerically.
In this paper, we extend the previous results to the case of jumps; i.e., we study existence and uniqueness of solutions to the following mean reflected stochastic differential equation (MR-SDE in the sequel):
(1.2) \begin{equation*} \begin{cases} X_t = X_0 + \int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t\int_E F(X_{s^-},z)\,\tilde{N}(ds,dz) + K_t, & t\geq 0,\\[3pt] \mathbb{E}[h(X_t)]\geq 0, \qquad \displaystyle\int_0^t \mathbb{E}[h(X_s)]\,dK_s = 0, & t\geq 0, \end{cases} \end{equation*}
where ${E=\mathbb{R}^*}$ , ${\tilde{N}}$ is a compensated Poisson measure, ${\tilde{N}(ds,dz)=N(ds,dz)-\lambda(dz)ds}$ , and B is a Brownian motion independent of N. We also propose a numerical scheme based on a particle system to compute solutions to (1.2) numerically, and we study the rate of convergence of this scheme.
Our main motivation for studying (1.2) comes from financial problems subject to risk measure constraints. Given any position X, its risk measure ${\rho(X)}$ can be seen as the amount of own funds needed by the investor to hold the position. For example, we can consider the risk measure ${\rho(X) = \inf\{m\,{:}\,\mathbb{E}[u(m+X)]\geq p\}}$ , where u is a utility function (concave and increasing) and p is a given threshold (we refer the reader to [ADEH99] and [FS02] for more details on risk measures). Suppose that we are given a portfolio X of assets whose dynamics, in the absence of any constraint, follow the jump diffusion model
\begin{equation*} X_t = X_0 + \int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t\int_E F(X_{s^-},z)\,\tilde{N}(ds,dz), \quad t\geq 0. \end{equation*}
Given a risk measure ${\rho}$ , one can require that ${X_t}$ remain an acceptable position at each time ${t\geq 0}$ . This constraint can be rewritten as ${\mathbb{E} \left[h(X_t)\right] \geq 0}$ for ${t\geq 0}$ , where ${h=u-p}$ .
In order to satisfy this constraint, the agent has to add cash to the portfolio over time, and the dynamics of the wealth of the portfolio become
\begin{equation*} X_t = X_0 + \int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t\int_E F(X_{s^-},z)\,\tilde{N}(ds,dz) + K_t, \quad t\geq 0, \end{equation*}
where ${K_t}$ is the amount of cash added to the portfolio up to time t to balance the ‘risk’ associated to ${X_t}$ . Of course, the agent wants to cover the risk in a minimal way, adding cash only when needed: this leads to the Skorokhod condition ${\mathbb{E}[h(X_t)] d K_t = 0}$ . Putting together all conditions, we end up with a dynamic of the form (1.2) for the portfolio.
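As a simple illustration of this constraint, the quantity ${\rho(X)}$ can be estimated by Monte Carlo combined with bisection, since ${m\mapsto\mathbb{E}[u(m+X)]}$ is non-decreasing. The following Python sketch is ours and purely illustrative; the utility function, the threshold, and the sample of X are hypothetical choices, not taken from the paper.

```python
import numpy as np

def rho(x_samples, u, p, m_max=100.0, tol=1e-6):
    """Estimate rho(X) = inf{m : E[u(m + X)] >= p} by bisection.

    Monotonicity of m -> E[u(m + X)] (u is increasing) makes the
    bisection valid; the root is assumed to lie in [-m_max, m_max].
    """
    lo, hi = -m_max, m_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.mean(u(mid + x_samples)) >= p:
            hi = mid      # position acceptable: try adding less cash
        else:
            lo = mid      # position not acceptable: add more cash
    return hi

# Hypothetical example: exponential utility and a Gaussian position.
rng = np.random.default_rng(0)
X = rng.normal(-1.0, 2.0, size=100_000)
u = lambda x: 1.0 - np.exp(-0.5 * x)   # concave and increasing
print(rho(X, u, p=0.2))
```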
The paper is organized as follows. In Section 2, we show that, under Lipschitz assumptions on b, ${\sigma}$ , and F and a bi-Lipschitz assumption on h, the system admits a unique strong solution, i.e. there exists a unique pair of processes (X, K) satisfying the system (1.2) almost surely, the process K being increasing and deterministic. Then we show that, under additional regularity on the function h, the Stieltjes measure dK is absolutely continuous with respect to the Lebesgue measure, and we obtain the explicit expression of its density. In Section 3 we show that the system (1.2) can be seen as the limit of an interacting particle system with oblique reflection of mean field type. This result allows us to define in Section 4 an algorithm based on this interacting particle system, together with a classical Euler scheme, which gives a strong approximation of the solution of (1.2). When h is bi-Lipschitz, this leads to an approximation error in the ${L^2}$ sense proportional to ${n^{-1}+ N^{-{1}/{2}}}$ , where n is the number of points of the discretization grid and N is the number of particles. When h is smooth, we get an approximation error proportional to ${n^{-1}+ N^{-1}}$ . In passing, we improve the speed of convergence obtained in [BCdRGL16]. Finally, we illustrate these results numerically in Section 5.
2. Existence, uniqueness and properties of the solution
In this paper, ${(\Omega,\mathcal{F},\mathbb{P})}$ is a complete probability space endowed with a standard Brownian motion ${B=\{B_t\}_{0\leq t\leq T}}$ and an independent Poisson random measure N, and ${\{\mathcal{F}_t\}_{0\leq t\leq T}}$ is the associated usual augmented filtration. Before moving on, we state the assumptions needed in the sequel.
Assumption 1. (A.1)
(i) Lipschitz assumption: for each ${p>0}$ there exists a constant ${C_p > 0}$ such that for all ${x,x'\in\mathbb{R}}$ , we have
\begin{equation*} |b(x)-b(x')|^p+|\sigma(x)-\sigma(x')|^p+\int_E|F(x,z)-F(x',z)|^p\lambda(dz)\leq C_p |x-x'|^p. \end{equation*}(ii) The random variable ${X_0}$ is square integrable and independent of B and N.
Assumption 2. (A.2)
(i) The function ${h\,{:}\,\mathbb{R} \longrightarrow \mathbb{R}}$ is increasing and bi-Lipschitz: there exist ${0 < m\leq M}$ such that
\begin{equation*}\forall\ x\in\mathbb{R},\forall y\in\mathbb{R}, \quad m|x-y|\leq|h(x)-h(y)|\leq M|x-y|.\end{equation*}(ii) The initial condition ${X_0}$ satisfies ${\mathbb{E} [h(X_0)]\geq0}$ .
Assumption 3. (A.3) There exists ${p>4}$ such that ${X_0\in\mathbb{L}^p}$ (i.e. ${\mathbb{E} [|X_0|^p]<\infty}$ ).
Assumption 4. (A.4) The function h is twice continuously differentiable with bounded derivatives.
2.1. Preliminary results
Consider the function
(2.1) \begin{equation*} H\,{:}\,\mathbb{R}\times\mathcal{P}_1(\mathbb{R})\ni (x,\nu) \longmapsto \int_{\mathbb{R}} h(x+z)\,\nu(dz), \end{equation*}
where ${\mathcal{P}_1(\mathbb{R})}$ is the set of probability measures with a finite first-order moment. Let ${\bar{G}_0}$ be the inverse function in space of H evaluated at 0,
(2.2) \begin{equation*} \bar{G}_0\,{:}\,\mathcal{P}_1(\mathbb{R})\ni \nu \longmapsto \inf\{x\in\mathbb{R}\,{:}\,H(x,\nu)\geq 0\}, \end{equation*}
and let ${{G}_0}$ denote the positive part of ${\bar{G}_0}$ ,
(2.3) \begin{equation*} {G}_0\,{:}\,\mathcal{P}_1(\mathbb{R})\ni \nu \longmapsto \inf\{x\geq 0\,{:}\,H(x,\nu)\geq 0\} = \big(\bar{G}_0(\nu)\big)^+. \end{equation*}
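In practice, ${G_0}$ is easy to evaluate for an empirical measure ${\nu = \frac{1}{N}\sum_i \delta_{u_i}}$ : since ${x\mapsto H(x,\nu)=\frac{1}{N}\sum_i h(x+u_i)}$ is increasing and bi-Lipschitz under Assumption 2, a bracketed bisection applies. A minimal Python sketch (the helper is our own and is reused in the sketches of the later sections):

```python
import numpy as np

def G0(h, atoms, tol=1e-8):
    """G_0 of the empirical measure (1/N) sum_i delta_{atoms[i]}:
    inf{x >= 0 : mean(h(x + atoms)) >= 0}, computed by bisection.

    Valid because x -> H(x, nu) = mean(h(x + atoms)) is increasing and
    bi-Lipschitz (Assumption 2), so it crosses 0 exactly once.
    """
    H = lambda x: np.mean(h(x + atoms))
    if H(0.0) >= 0.0:          # constraint already satisfied
        return 0.0
    lo, hi = 0.0, 1.0
    while H(hi) < 0.0:         # bracket the root; terminates since H(x) >= H(0) + m*x
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if H(mid) >= 0.0 else (mid, hi)
    return hi

# Example with the linear constraint h(x) = x - 1, i.e. E[X] >= 1:
atoms = np.array([-1.0, 0.5, 2.0])        # empirical mean 0.5
print(G0(lambda x: x - 1.0, atoms))       # H(x) = x - 0.5, so G_0 = 0.5
```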
We start by studying some properties of H and ${G_0}$ .
Lemma 1. Under Assumption 2, we have the following:
(i) For all ${\nu}$ in ${\mathcal{P}_1(\mathbb{R})}$ , the function ${H(\cdot,\nu)\,{:}\, \mathbb{R}\ni x \mapsto H(x,\nu)}$ is bi-Lipschitz:
(2.4) \begin{equation} \forall x,y \in \mathbb{R}, \quad m|x-y| \le |H(x,\nu)-H(y,\nu)|\le M|x-y|. \end{equation}(ii) For all x in ${\mathbb{R}}$ , the function ${H(x,\cdot)\,{:}\, \mathcal{P}_1(\mathbb{R})\ni \nu \mapsto H(x,\nu)}$ satisfies the following Lipschitz inequality:
(2.5) \begin{equation} \forall \nu,\nu' \in \mathcal{P}_1(\mathbb{R}), \quad |H(x,\nu)-H(x,\nu')|\le \left|\int h(x+\cdot) (d\nu-d\nu')\right|. \end{equation}
Proof. Lemma 1 follows from the definition of H (see (2.1)) and the bi-Lipschitz property of h.
Let ${\nu}$ and ${\nu'}$ be two probability measures. The Wasserstein-1 distance between ${\nu}$ and ${\nu'}$ is defined by
\begin{equation*} W_1(\nu,\nu')=\inf\big\{\mathbb{E}[|X-Y|]\,{:}\,X\sim\nu,\ Y\sim\nu'\big\}. \end{equation*}
Thus, since h is M-Lipschitz,
\begin{equation*} \forall \nu,\nu' \in \mathcal{P}_1(\mathbb{R}), \quad |H(x,\nu)-H(x,\nu')|\leq M\,W_1(\nu,\nu'). \end{equation*}
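On the real line, the Wasserstein-1 distance between two empirical measures with the same number of atoms has a simple expression: the optimal coupling is the monotone one, so ${W_1}$ is the average gap between the sorted samples. This is how the distance appearing in the particle estimates below can be evaluated numerically; the sketch is ours:

```python
import numpy as np

def w1_empirical(x, y):
    """W_1 between (1/N) sum_i delta_{x_i} and (1/N) sum_i delta_{y_i}
    on R: the monotone coupling is optimal in dimension one, hence
    W_1 = mean |x_(i) - y_(i)| over the order statistics."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=10_000)
b = rng.normal(0.5, 1.0, size=10_000)
print(w1_empirical(a, b))   # close to the mean shift 0.5
```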
According to the Monge–Kantorovich duality theorem, the assertion (2.5) implies that for all x in ${\mathbb{R}}$ , the function ${H(x,\cdot)}$ is Lipschitz continuous with respect to the Wasserstein-1 distance. The regularity of ${G_0}$ is then given in the following lemma.
Lemma 2. Under Assumption 2, the function ${{G}_0 \,{:}\,\mathcal{P}_1(\mathbb{R}) \ni \nu \mapsto {G}_0(\nu)}$ is Lipschitz continuous:
\begin{equation*} \forall \nu,\nu'\in\mathcal{P}_1(\mathbb{R}), \quad |G_0(\nu)-G_0(\nu')|\leq \frac{1}{m}\left|\int h\big(\bar{G}_0(\nu')+\cdot\big)\,(d\nu-d\nu')\right|, \end{equation*}
where ${\bar{G}_0(\nu')}$ is the inverse of ${H(\cdot,\nu')}$ at point 0. In particular,
\begin{equation*} \forall \nu,\nu'\in\mathcal{P}_1(\mathbb{R}), \quad |G_0(\nu)-G_0(\nu')|\leq \frac{M}{m}\,W_1(\nu,\nu'). \end{equation*}
Proof. The proof is given in [BCdRGL16, Lemma 2.5].
2.2. Existence and uniqueness of the solution of (1.2)
The set of Assumptions 1–4 will be used as follows:
• The existence and uniqueness results are stated under Assumption 1 (the standard assumption for SDEs) and Assumption 2 (the assumption used in [BEH18]).
• The convergence of particle systems is proved under Assumption 3.
• Some of the results will be improved under the smoothness assumption, Assumption 4.
Firstly, we recall the existence and uniqueness result of [BEH18] in the case of SDEs.
Definition 1. A couple of processes (X, K) is said to be a flat deterministic solution to (1.2) if (X, K) satisfies (1.2) with K a non-decreasing continuous deterministic function such that ${K_0=0}$ .
Given this definition we have the following result.
Theorem 1. Under Assumptions 1 and 2, the mean reflected SDE (1.2) has a unique deterministic flat solution (X, K). Moreover,
(2.9) \begin{equation*} K_t = \sup_{s\leq t} G_0(\mu_s), \quad 0\leq t\leq T, \end{equation*}
where ${(U_t)_{0\leq t\leq T}}$ is the process defined by
\begin{equation*} U_t = X_0 + \int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t\int_E F(X_{s^-},z)\,\tilde{N}(ds,dz), \end{equation*}
and ${(\mu_{t})_{0\le t\le T}}$ is the family of marginal laws of ${(U_{t})_{0\le t\le T}}$ .
Proof. We refer to [BEH18] for the proof in the case of continuous backward SDEs. We present here the proof for the forward case with jumps.
Let us consider the set ${\mathcal{C}^2=\{X\ \mathcal{F}\mbox{-adapted}\ \mbox{c\`{a}dl\`{a}g},\ \mathbb{E}(\sup_{t\leq T}|X_t|^2)<\infty\}}$ , and let ${\hat{X}\in \mathcal{C}^2}$ be a given process. We define
\begin{equation*} \hat{U}_t = X_0 + \int_0^t b(\hat{X}_{s^-})\,ds + \int_0^t \sigma(\hat{X}_{s^-})\,dB_s + \int_0^t\int_E F(\hat{X}_{s^-},z)\,\tilde{N}(ds,dz), \quad 0\leq t\leq T, \end{equation*}
with ${(\hat{\mu}_t)_{0\leq t\leq T}}$ the family of marginal laws of ${\hat U}$ , and the function K given by
(2.11) \begin{equation*} K_t = \sup_{s\leq t} G_0(\hat{\mu}_s), \quad 0\leq t\leq T. \end{equation*}
Let us introduce the process X:
\begin{equation*} X_t = \hat{U}_t + K_t, \quad 0\leq t\leq T, \end{equation*}
where K is given by (2.11). We check that (X, K) satisfies the constraints of (1.2), with U replaced by ${\hat{U}}$ . First, based on the definition of K, we have ${\mathbb{E}[h(X_t)] \geq 0}$ , and ${K_t=G_0(\hat{\mu}_t)}$ and ${G_0(\hat{\mu}_t)>0}$ ${dK_t}$ -a.e. Then we obtain
Moreover, since h is continuous, we have ${\mathbb{E}[h(\hat U_s+G_0(\hat{\mu}_s))]=0}$ as soon as ${G_0(\hat{\mu}_s)>0}$ , so that
Second, consider the map ${\Phi\,{:}\,\mathcal{C}^2\longrightarrow \mathcal{C}^2}$ which associates to ${\hat X}$ the process X constructed above. Let us prove that ${\Phi}$ is a contraction. Using the same Brownian motion and Poisson random measure, we consider ${\hat X, \hat X'\in \mathcal{C}^2}$ and the associated functions K, K′ defined by (2.11). From Assumption 1, and by using the Cauchy–Schwarz and Doob inequalities, we get
From the representation (2.11) of the process K and Lemma 2, we have that
This leads to
Therefore, there exists a positive ${\mathcal{T}}$ , depending only on b, ${\sigma}$ , F, and h, such that for all ${T <\mathcal{T}}$ , the map ${\Phi}$ is a contraction. Consequently, we get existence and uniqueness of a solution on ${[0, T]}$ for ${T<\mathcal{T}}$ , and by iterating the construction, the result extends to ${\mathbb{R}^+}$ .
2.3. Regularity results on K, X, and U
Remark 1. In view of this construction, we derive that for all ${0\leq s< t}$ ,
\begin{equation*} K_t - K_s = \sup_{s\leq u\leq t}\Big(\bar{G}_0\big(X_s + (U_u - U_s)\big)\Big)^+, \end{equation*}
where, by an abuse of notation, ${\bar{G}_0(Z)}$ stands for ${\bar{G}_0}$ applied to the law of the random variable Z.
Proof. From the representation (2.9) of the process K, we have
By the definition of ${\bar{G_0}}$ , we observe that for all ${y\in\mathbb{R}}$ , ${\bar{G_0}(X+y)=\bar{G_0}(X)-y}$ , so we get
Note that ${\sup_r (f(r)^+)=(\sup_r f(r))^+=\max(0,\sup_r f(r))}$ for any function f, and obviously
so
Proposition 1. Suppose that Assumptions 1 and 2 hold. Then, for every ${p\geq 2}$ , there exists a positive constant ${K_p}$ , depending on T, b, ${\sigma}$ , F, and h, such that
(i) ${\mathbb{E}\big[\sup_{t\leq T}|X_t|^p\big]\leq K_p\big(1+\mathbb{E}\big[|X_0|^p\big]\big)}$ , and
(ii) for all ${0\leq s\leq t\leq T,\quad \mathbb{E}\big[\sup_{s\leq u\leq t}|X_u|^p|\mathcal{F}_s\big]\leq C\big(1+|X_s|^p\big).}$
Remark 2. Under the same conditions, we conclude that
Proof of (i). We have
The last term ${K_T =\sup_{t\leq T}G_0(\mu_t)}$ is studied first. By using the Lipschitz property of ${G_0}$ from Lemma 2 and the definition of the Wasserstein metric, we have
since ${G_0(\mu_0)=0}$ as ${\mathbb{E}[h(X_0)]\geq 0}$ , and where U is the process defined in Theorem 1. Therefore
and so
Hence, using Assumption 1 and the Cauchy–Schwarz, Doob, and BDG inequalities yields
and from Gronwall’s lemma, we can conclude that for all ${p\geq 2}$ , there exists a positive constant ${K_p}$ , depending on T, b, ${\sigma}$ , F, and h, such that
Proof of (ii). For the first part, we have
Let us define ${\mathbb{E}_s[\cdotp]=\mathbb{E}[\cdotp|\mathcal{F}_s]}$ . Then we get
Finally, from Gronwall’s lemma, we deduce that for all ${0\leq s\leq t\leq T}$ , there exists a constant C, depending on p, T, b, ${\sigma}$ , F, and h, such that
Proposition 2. Let ${p\geq2}$ and let Assumptions 1, 2, and 3 hold. There exists a constant C depending on p, T, b, ${\sigma}$ , F, and h such that the following hold:
(i) ${\forall\ 0\leq s < t\leq T,\quad |K_t-K_s|\leq C|t-s|^{1/2}.}$
(ii) ${\forall\ 0\leq s\leq t\leq T,\quad \mathbb{E}\big[|U_t-U_s|^p\big]\leq C|t-s|.}$
(iii) ${\forall\ 0\leq r < s < t\leq T,\quad \mathbb{E}\big[|U_s-U_r|^p|U_t-U_s|^p\big]\leq C|t-r|^2.}$
Remark 3. Under the same conditions, we conclude that
Proof of (i). Let us recall that, for any process X,
From Remark 1, we have
Hence, from the previous representation of ${K_t-K_s}$ , we deduce the ${\frac{1}{2}}$ -Hölder property of the function ${t\longmapsto K_t}$ . Indeed, since by definition ${G_0(X_s)=0}$ , if ${s<t}$ , by using Lemma 2, we have
and so
Therefore, if ${X_0\in \mathbb{L}^p}$ for some ${p\geq2}$ , it follows from Proposition 1 that
Proof of (ii).
Finally, if ${X_0\in \mathbb{L}^p}$ for some ${p\geq2}$ , we conclude that there exists a constant C, depending on p, T, b, ${\sigma}$ , F, and h, such that
Proof of (iii). Let ${0\leq r<s<t\leq T}$ . We have
Then, from the Burkholder–Davis–Gundy inequality, we get
thus, from (i) and Proposition 1, we obtain
Following the proof of (ii), we can also get
Then
Under Assumption 3, we conclude that
2.4. Density of K
Consider the second-order linear partial integro-differential operator ${\mathcal{L}}$ given by
(2.13) \begin{equation*} \mathcal{L}f(x) = b(x)f'(x) + \tfrac{1}{2}\sigma^2(x)f''(x) + \int_E \big(f(x+F(x,z)) - f(x) - F(x,z)f'(x)\big)\,\lambda(dz) \end{equation*}
for any twice continuously differentiable function f.
Proposition 3. Suppose Assumptions 1, 2, and 4 hold. Let (X, K) be the unique deterministic flat solution to (1.2). Then the process K is Lipschitz continuous, and the Stieltjes measure dK has the density
\begin{equation*} \frac{dK_t}{dt} = \mathbf{1}_{\{\mathbb{E}[h(X_t)]=0\}}\,\frac{\big(\mathbb{E}[\mathcal{L}h(X_t)]\big)^-}{\mathbb{E}[h'(X_t)]} \end{equation*}
with respect to the Lebesgue measure.
Let us admit for the moment the following results that will be useful for our proof.
Lemma 3. The functions ${t\longmapsto\mathbb{E}\left[h(X_t)\right]}$ and ${t\longmapsto\mathbb{E} \left[\mathcal{L}h(X_t)\right]}$ are continuous.
Lemma 4. If ${\varphi}$ is a continuous function such that, for some ${C\geq 0}$ and ${p\geq 1}$ ,
\begin{equation*} \forall x\in\mathbb{R}, \quad |\varphi(x)|\leq C(1+|x|^p), \end{equation*}
then the function ${t\longmapsto\mathbb{E} [\varphi(X_t)]}$ is continuous.
The proof of Lemma 3 is given later in this section, and that of Lemma 4 is given in Appendix A. We may now proceed to the proof of Proposition 3.
Proof. Firstly, we prove that K is Lipschitz continuous. In order to do this, we first prove that ${s\longmapsto\bar{G}_0(\mu_{s})}$ is Lipschitz continuous on [0,T]. From the definition of ${\bar{G}_0}$ , we have ${H(\bar{G}_0(\mu_{t}),\mu_t)=0}$ , and by using (2.4), if ${s<t}$ , we get
From Itô’s formula, we obtain
with
This yields
where
Therefore,
Consequently, the result immediately follows from the fact that h has bounded derivatives and ${\sup_{s\leq T}|X_s|}$ is a square integrable random variable for each ${T > 0}$ (see Proposition 1).
Finally, we deduce that K is Lipschitz continuous and so has a bounded density on [0,T] for each ${T > 0}$ (see Proposition 2.7 in [BCdRGL16] for more details).
Secondly, let us find the density of the measure dK. For all ${0\leq s\leq t\leq T}$ , we have
Under Assumption 4 and thanks to Itô’s formula we get
where ${\mathcal{L}}$ is given by (2.13). Thus, we obtain
As a conclusion, using (2.15), Lemma 3, and the proof of Proposition 2.7 in [BCdRGL16], we deduce that the measure dK has the density given in the statement of Proposition 3.
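This density suggests a second way of approximating K numerically (the 'density method' used in the illustrations of Section 5): replace the two expectations by empirical means over simulated particles and integrate in time. The sketch below is ours and assumes the density has exactly the form stated above; the tolerance deciding when the constraint ${\mathbb{E}[h(X_t)]=0}$ is active is a hypothetical numerical parameter.

```python
import numpy as np

def K_density_method(h, Lh, dh, X, dt, tol=1e-3):
    """Approximate K by integrating its density in time:
    dK_t = 1_{E[h(X_t)] = 0} (E[Lh(X_t)])^- / E[h'(X_t)] dt,
    with the expectations replaced by empirical means over N particles.

    h, Lh, dh : vectorized callables for h, Lh (the generator applied
                to h) and h';
    X         : array of shape (n_steps + 1, N) of particle positions;
    tol       : threshold deciding when E[h(X_t)] = 0 is deemed active.
    """
    K = np.zeros(X.shape[0])
    for k in range(1, X.shape[0]):
        active = abs(np.mean(h(X[k]))) < tol
        psi = max(0.0, -np.mean(Lh(X[k]))) / np.mean(dh(X[k]))  # (E[Lh])^- / E[h']
        K[k] = K[k - 1] + (psi * dt if active else 0.0)
    return K
```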
Proof of Lemma 3. Under Assumption 2, and by using Lemma 4, we obtain the continuity of the function ${t\longmapsto\mathbb{E}[h(X_t)]}$ .
Under Assumptions 1, 2, and 4, we observe that ${x\longmapsto\mathcal{L}h(x)}$ is a continuous function, and that there exist constants ${C_1, C_2, C_3 > 0}$ such that, for all ${x\in\mathbb{R}}$ ,
and
Finally, by using Lemma 4, we conclude that ${t\longmapsto\mathbb{E}[\mathcal{L}h(X_t)]}$ is continuous.
3. Approximation of mean reflected SDEs by an interacting reflected particle system
Using the notation introduced at the beginning of Section 2, in particular the representation (2.9), the unique solution of the SDE (1.2) can be written as
(3.1) \begin{equation*} X_t = U_t + \sup_{s\leq t} G_0(\mu_s), \end{equation*}
where ${\mu_t}$ stands for the law of
\begin{equation*} U_t = X_0 + \int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t\int_E F(X_{s^-},z)\,\tilde{N}(ds,dz). \end{equation*}
Let us consider the particle approximation of this system. To this end, we introduce the particles: for ${1\leq i\leq N}$ ,
(3.2) \begin{equation*} X^i_t = U^i_t + \sup_{s\leq t} G_0(\mu^N_s), \qquad U^i_t = \bar{X}^i_0 + \int_0^t b(X^i_{s^-})\,ds + \int_0^t \sigma(X^i_{s^-})\,dB^i_s + \int_0^t\int_E F(X^i_{s^-},z)\,\tilde{N}^i(ds,dz), \end{equation*}
where ${(B^i)_{1\leq i\leq N}}$ are independent Brownian motions, ${(\tilde{N}^i)_{1\leq i\leq N}}$ are independent compensated Poisson measures, ${(\bar{X}^i_0)_{1\leq i\leq N}}$ are independent copies of ${X_0}$ , and ${\mu_s^N}$ represents the empirical distribution at time s of the particles ${(U^i_s)_{1\leq i\leq N}}$ , namely ${\displaystyle\mu^N_s=\frac{1}{N}\sum^N_{i=1}\delta_{U^i_s}}$ . Note that the non-decreasing process ${\sup_{s\leq t} G_0(\mu^N_s)}$ is the empirical counterpart of the reflection process K of (1.2).
Now we can prove the propagation of chaos effect. To this end, let us introduce the following independent copies of X:
\begin{equation*} \bar{X}^i_t = \bar{X}^i_0 + \int_0^t b(\bar{X}^i_{s^-})\,ds + \int_0^t \sigma(\bar{X}^i_{s^-})\,dB^i_s + \int_0^t\int_E F(\bar{X}^i_{s^-},z)\,\tilde{N}^i(ds,dz) + K_t, \quad 1\leq i\leq N, \end{equation*}
where K is the reflection process of (1.2) and the Brownian motions and Poisson measures are the ones used in (3.2).
In addition, we introduce the decoupled particles ${\bar U^i}$ , ${1\leq i\leq N}$ :
\begin{equation*} \bar{U}^i_t = \bar{X}^i_t - K_t = \bar{X}^i_0 + \int_0^t b(\bar{X}^i_{s^-})\,ds + \int_0^t \sigma(\bar{X}^i_{s^-})\,dB^i_s + \int_0^t\int_E F(\bar{X}^i_{s^-},z)\,\tilde{N}^i(ds,dz). \end{equation*}
It is worth noting that the particles ${(\bar U^i_t)_{1\leq i\leq N}}$ are independent and identically distributed (i.i.d.). Furthermore, we introduce ${\bar\mu^N}$ as the empirical measure associated to this system of particles.
Remark 4. (i) Under our assumptions, we have ${\mathbb{E}\left[h\left(\bar{X}^i_0\right)\right]=\mathbb{E}\left[h(X_0)\right]\geq 0}$ . However, there is no reason to have
\begin{equation*} \frac{1}{N}\, \sum_{i=1}^N h\left(\bar{X}^i_0\right) \geq 0,\end{equation*}even if N is large. As a consequence,\begin{equation*} {G}_0(\mu_{0}^N)=\inf\bigg\{x\geq 0\,{:}\,\frac{1}{N}\sum_{i=1}^{N}h\left(x+\bar{X}^i_0\right) \geq 0\bigg\}\end{equation*}is not necessarily equal to 0. As a byproduct, we have ${X^i_0 = \bar{X}^i_0 + {G}_0(\mu_{0}^N)}$ , and the non-decreasing process ${\sup_{s\le t} {G}_0(\mu_{s}^N)}$ is not equal to 0 at time ${t=0}$ . Written in this way, the particles defined by (3.2) cannot be interpreted as the solution of a reflected SDE. To view the particles as the solution of a reflected SDE, instead of (3.2) one has to solve\begin{align*} X^i_t &= \bar{X}^i_0 + {G}_0(\mu_{0}^N) +\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s \\[3pt] &\quad + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz)+ K^N_t, \\[3pt] \frac{1}{N}\sum_{i=1}^{N} &h\left(X^i_t\right) \geq 0, \qquad \int_0^t \frac{1}{N}\sum_{i=1}^{N}h\left(X^i_s\right)\, dK^N_s=0,\end{align*}with ${K^N}$ non-decreasing and ${K^N_0=0}$ . Since we do not use this point in the sequel, we will work with the form (3.2).
(ii) Following the proof of Theorem 1, it is easy to establish existence and uniqueness of a solution of the particle system (3.2).
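Numerically, the reflection term of the particle system is the running maximum of ${G_0}$ along the empirical measures of the ${U^i}$ . When the coefficients do not depend on the reflected position (as in Case (i) of Section 5), the reflection can even be applied pathwise after simulating the unreflected trajectories; in general it is the per-step operation inside the Euler loop of Section 4. A minimal sketch, reusing the G0 bisection helper from the sketch of Section 2.1:

```python
import numpy as np

def reflect_particles(h, U):
    """From unreflected trajectories U (shape (n_steps + 1, N)), build
    K^N_t = max_{s <= t} G_0(mu^N_s) and X^i_t = U^i_t + K^N_t as in (3.2).
    G0 is the bisection helper sketched in Section 2.1."""
    K = np.maximum.accumulate(np.array([G0(h, row) for row in U]))
    return U + K[:, None], K
```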
We have the following result concerning the approximation of (1.2) by an interacting particle system.
Theorem 2. Let ${T>0}$ and suppose that Assumptions 1 and 2 hold.
(i) Under Assumption 3, there exists a constant C depending on b, ${\sigma}$ , and F such that, for each ${j\in\{1,\ldots,N\}}$ ,
\begin{equation*}\mathbb{E}\Big[\sup_{s\leq T}|X^j_s-\bar X^j_s|^2\Big]\leq C\exp\Big(C\Big(1+\frac{M^2}{m^2}\Big)(1+T^2)\Big)\frac{M^2}{m^2}N^{-1/2}.\end{equation*}(ii) Under Assumption 4, there exists a constant C depending on b, ${\sigma}$ , and F such that, for each ${j\in\{1,\ldots,N\}}$ ,
\begin{equation*}\mathbb{E}\Big[\sup_{s\leq T}|X^j_s-\bar X^j_s|^2\Big]\leq C\exp\Big(C\Big(1+\frac{M^2}{m^2}\Big)(1+T^2)\Big)\frac{1+T^2}{m^2}\Big(1+\mathbb{E}\Big[\sup_{s\leq T}|X_T|^2\Big]\Big)N^{-1}.\end{equation*}
Proof. Let ${t>0}$ . We have, for ${r\leq t}$ ,
From the inequality
we obtain
where ${I_1}$ is defined by
Firstly, due to Assumption 1 and the Doob and Cauchy–Schwarz inequalities, we have
where C is a constant that depends only on b, ${\sigma}$ , and F. Note that C may change from line to line.
Secondly, in view of Lemma 2,
Moreover, taking into account that the variables are exchangeable, the Cauchy–Schwarz inequality implies
Since
and following the previous computations, we get
Consequently, combining the previous estimates with Equation (3.3) gives
where ${K=C(1+t)(1+M^2/m^2)}$ . According to Gronwall’s lemma, we get
In view of Lemma 2, we have
which leads to
Proof of (i). Since h is only assumed Lipschitz here, the rate of convergence is given by the convergence of the empirical measure of i.i.d. diffusion processes. As we consider uniform convergence in time, obtaining the usual rate of convergence is not straightforward. If we only suppose that Assumption 2 holds, we obtain that
According to the additional Assumption 3, and in view of the proof of Part (i) of [BCdRGL16, Theorem 3.2], we have
Proof of (ii). Under Assumption 4, we can get rid of the supremum in time by using the sharp estimate
According to Proposition 3, the function ${s\longmapsto \bar{G}_0(\mu_s)}$ is Lipschitz continuous; let ${\psi}$ be its Radon–Nikodym derivative with respect to the Lebesgue measure. Since ${(\bar{U}^i)_{1\leq i\leq N}}$ are independent copies of U, we have
where ${V^i}$ is the semi-martingale ${s\longmapsto \bar{G}_0(\mu_s)+\bar{U}_s^i}$ .
It follows from Itô’s formula that
Taking the expectation gives
We immediately deduce that
where
Then,
Since ${(\bar{U}^i)_{1\leq i\leq N}}$ and ${(\bar{X}^i)_{1\leq i\leq N}}$ are i.i.d., and by using the Cauchy–Schwarz inequality, we obtain
Hence, we get
Since ${M_N}$ is a martingale with
Doob’s inequality leads to
Then, using Doob’s inequality for the martingale ${L_N}$ , we obtain
Finally, using the fact that h has bounded derivatives and that b, ${\sigma}$ , and F are Lipschitz, we get
This gives the result coming back to (3.4).
4. Numerical approximation for MR-SDE, and its performance
In this section, the numerical approximation of the SDE (1.2) on [0,T] is studied. Let ${0=T_0<T_1<\cdots<T_n=T}$ be a subdivision of [0,T], and define the mapping ‘ ${\_}$ ’ by ${s\mapsto\underline{s}=T_k}$ if ${s\in[T_k,T_{k+1})}$ , ${k\in \{0,\cdots ,n-1\}}$ . Let us consider the case of regular subdivisions: for a given integer n, ${T_k=kT/n}$ , ${k=0,\ldots,n}$ .
In the previous section, we have shown that the particle system given, for ${1\leq i\leq N}$ , by
\begin{equation*} X^i_t = U^i_t + \sup_{s\leq t} G_0(\mu^N_s), \end{equation*}
where
\begin{equation*} U^i_t = \bar{X}^i_0 + \int_0^t b(X^i_{s^-})\,ds + \int_0^t \sigma(X^i_{s^-})\,dB^i_s + \int_0^t\int_E F(X^i_{s^-},z)\,\tilde{N}^i(ds,dz), \end{equation*}
with ${\mu^N_s}$ the empirical measure of ${(U^i_s)_{1\leq i\leq N}}$ , ${B^i}$ being independent Brownian motions, ${\tilde{N}^i}$ being independent compensated Poisson measures, and ${\bar{X}^i_0}$ being independent copies of ${X_0}$ , converges to the solution of (1.2). Hence, to obtain a numerical approximation, we apply an Euler scheme to this particle system. The discrete version of the particle system is as follows: for ${1\leq i\leq N}$ ,
\begin{equation*} \tilde{X}^i_t = \tilde{U}^i_t + \sup_{s\leq t} G_0(\tilde{\mu}^N_{\underline{s}}), \end{equation*}
where
\begin{equation*} \tilde{U}^i_t = \bar{X}^i_0 + \int_0^t b(\tilde{X}^i_{\underline{s}})\,ds + \int_0^t \sigma(\tilde{X}^i_{\underline{s}})\,dB^i_s + \int_0^t\int_E F(\tilde{X}^i_{\underline{s}},z)\,\tilde{N}^i(ds,dz), \end{equation*}
and ${\tilde{\mu}^N_s}$ denotes the empirical measure of ${(\tilde{U}^i_s)_{1\leq i\leq N}}$ .
4.1. Scheme
In view of the above notation, and taking into account the result on the interacting system of mean reflected particles of Section 3 and Remark 1, we deduce the following algorithm (Algorithm 1) for the numerical approximation of the MR-SDE.
Remark 5. It should be pointed out that, at each step k of the algorithm, the increment of the reflection process K is approximated by
(4.1) \begin{equation*} K_{T_{k+1}} - K_{T_k} \approx G_0\big(\tilde{\mu}^N_{T_{k+1}}\big), \end{equation*}
where ${\tilde{\mu}^N_{T_{k+1}}}$ is the empirical measure of the particles evolved from time ${T_k}$ without reflection.
First, we consider the special case where the SDE is driven by a scalar Poisson process:
\begin{equation*} X_t = X_0 + \int_0^t b(X_{s^-})\,ds + \int_0^t \sigma(X_{s^-})\,dB_s + \int_0^t F(X_{s^-})\,d\tilde{N}_s + K_t, \end{equation*}
where N is a Poisson process with intensity ${\lambda}$ and ${\tilde{N}_t=N_t-\lambda t}$ .
By Remark 1, the increment (4.1) can be estimated by
\begin{equation*} \tilde{K}_{T_{k+1}} - \tilde{K}_{T_k} = G_0\bigg(\frac{1}{N}\sum_{i=1}^N \delta_{\tilde{U}^i_{T_{k+1}}}\bigg), \qquad \tilde{U}^i_{T_{k+1}} = \tilde{X}^i_{T_k} + b\big(\tilde{X}^i_{T_k}\big)\frac{T}{n} + \sigma\big(\tilde{X}^i_{T_k}\big)\sqrt{\frac{T}{n}}\,G^i + F\big(\tilde{X}^i_{T_k}\big)\Big(H^i - \lambda\frac{T}{n}\Big), \end{equation*}
where ${G^i\sim \mathcal{N}(0,1)}$ and ${H^i\sim \mathcal{P}(\lambda T/n)}$ , and the ${(G^{i})_{i=1,\ldots,N}}$ and ${(H^{i})_{i=1,\ldots,N}}$ are i.i.d.
In addition, procedures similar to those in the proof of Theorem 1 can be used to verify that the increments of the approximated reflection process are equal to the approximation of the increments:
Returning to the general case (1.2), we note, following [YS12], that ${N_t := N(E\times[0,t])}$ is a Poisson process with intensity ${\lambda := \lambda(E)}$ which counts the number of jumps up to time t. Provided ${\lambda<\infty}$ , the Poisson random measure N(dz, ds) generates, on a given horizon [0, T], a sequence of pairs ${\{(\iota_l,\xi_l),\ l\in \{1,2,\ldots,N_T\}\}}$ . Here ${\{\iota_l\}}$ is the increasing sequence of jump times of the Poisson process ${(N_t)}$ , and ${\{\xi_l\}}$ is a sequence of i.i.d. random variables with density f, where ${\lambda(dz)=\lambda f(z)\,dz}$ . The numerical approximation can equivalently be written in the following form:
\begin{equation*} \tilde{U}^{j}_{T_{k+1}} = \tilde{X}^{j}_{T_k} + b\big(\tilde{X}^{j}_{T_k}\big)\frac{T}{n} + \sigma\big(\tilde{X}^{j}_{T_k}\big)\sqrt{\frac{T}{n}}\,G^{j} + \sum_{l=1}^{H^{j}} F\big(\tilde{X}^{j}_{T_k},\xi^{j}_l\big) - \frac{T}{n}\int_E F\big(\tilde{X}^{j}_{T_k},z\big)\,\lambda(dz), \end{equation*}
where ${G^{j}\sim \mathcal{N}(0,1)}$ and ${H^{j}\sim \mathcal{P}(\lambda T/n)}$ , the ${(G^{j})_{j=1,\ldots,N}}$ and ${(H^{j})_{j=1,\ldots,N}}$ are i.i.d., and the ${\xi^j_l}$ are i.i.d. with density f.
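For concreteness, here is a minimal Python sketch of the resulting particle Euler scheme, in the spirit of Algorithm 1; it reuses the G0 bisection helper of Section 2.1. All names are ours: b and sigma are assumed vectorized over NumPy arrays, comp_F(x) must return the compensator ${\int_E F(x,z)\lambda(dz)}$ , and sample_mark draws one mark ${\xi}$ with density f. These are assumptions about the caller, not part of the paper's statement.

```python
import numpy as np

def euler_particle_scheme(b, sigma, F, comp_F, h, x0, lam, sample_mark,
                          T=1.0, n=100, N=1000, seed=0):
    """One run of the particle Euler scheme for (1.2).

    Returns the particle paths X, of shape (n + 1, N), and the discrete
    reflection process K, of shape (n + 1,). G0 (bisection helper from
    Section 2.1) is applied to the empirical measure of the predicted,
    unreflected positions, as in Remark 5.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.empty((n + 1, N))
    K = np.zeros(n + 1)
    X[0] = x0(rng, N)
    K[0] = G0(h, X[0])        # nonzero only if the empirical constraint fails at t = 0
    X[0] += K[0]
    for k in range(n):
        G = rng.standard_normal(N)          # Gaussian increments G^i
        H_cnt = rng.poisson(lam * dt, N)    # jump counts H^i ~ P(lambda T / n)
        jumps = np.array([sum(F(x, sample_mark(rng)) for _ in range(int(m)))
                          for x, m in zip(X[k], H_cnt)])
        U = (X[k] + b(X[k]) * dt + sigma(X[k]) * np.sqrt(dt) * G
             + jumps - comp_F(X[k]) * dt)   # predicted unreflected positions
        dK = G0(h, U)                       # increment of K, as in (4.1)
        X[k + 1] = U + dK
        K[k + 1] = K[k] + dK
    return X, K
```

For the linear constraint ${h(x)=x-p}$ , G0 reduces to ${(p-\text{mean}(U))^+}$ , which provides a quick sanity check of the bisection.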
4.2. Scheme error
Theorem 4. (i) Let ${T>0}$ , let N and n be two nonnegative integers, and suppose that Assumptions 1, 2, and 3 hold. There exists a constant C, depending on T, b, ${\sigma}$ , F, h, and ${X_0}$ but independent of N and n, such that for all ${i=1,\ldots,N}$ ,
\begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1/2}\bigg).\end{equation*}(ii) Moreover, if Assumption 4 is in force, there exists a constant C, depending on T, b, ${\sigma}$ , F, h, and ${X_0}$ but independent of N, such that for all ${i=1,\ldots,N}$ ,
\begin{equation*}\mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1}\bigg).\end{equation*}
Proof. Let us fix ${i\in \{1,\ldots,N\}}$ and ${T>0}$ . We have, for ${t\leq T}$ ,
Hence, using Assumption 1 and the Cauchy–Schwarz, Doob, and BDG inequalities gives
Denoting by ${(\mu^i_{t})_{0\le t\le T}}$ the family of marginal laws of ${(U^i_{t})_{0\le t\le T}}$ and by ${(\tilde{\mu}^i_{\underline{t}})_{0\le t\le T}}$ the family of marginal laws of ${(\tilde{U}^i_{t})_{0\le t\le T}}$ , we have
and from Lemma 2,
Proof of (i). Following the proof of (i) in Theorem 2, we obtain
from which we can derive the inequality
For the first term of the right-hand side, we can observe that
Using Assumption 1, the second term ${\sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]}$ becomes
and from Proposition 1, we get
Then, by using the BDG inequality, we obtain
Therefore, we conclude
from which we derive the inequality
and taking into account (4.2) we get
Since
it follows from (4.3) and (4.5) that
Finally, we conclude the proof of (i) with Gronwall’s lemma.
Proof of (ii). Following the proof of (ii) in Theorem 2, we obtain
By the same strategy as the one applied in the proof of (i), the result follows easily:
Theorem 3. Let ${T > 0}$ , let N and n be two nonnegative integers, and suppose that Assumptions 1, 2, and 3 hold.
(i) There exists a constant C, depending on T, b, ${\sigma}$ , F, h, and ${X_0}$ but independent of N, such that for all ${i=1,\ldots,N}$ ,
\begin{equation*}\mathbb{E}\bigg[\sup_{t\le T}\big|\bar{X}^i_t-\tilde{X}^i_t\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1/2}\bigg).\end{equation*}(ii) If in addition Assumption 4 holds, there exists a positive constant C, depending on T, b, ${\sigma}$ , F, h, and ${X_0}$ but independent of N, such that for all ${i=1,\ldots,N}$ ,
\begin{equation*}\mathbb{E}\bigg[\sup_{t\le T}\big|\bar{X}^i_t-\tilde{X}^i_t\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1}\bigg).\end{equation*}
Proof. The proof is straightforward: writing
\begin{equation*} \mathbb{E}\Big[\sup_{t\le T}\big|\bar{X}^i_t-\tilde{X}^i_t\big|^2\Big] \leq 2\,\mathbb{E}\Big[\sup_{t\le T}\big|\bar{X}^i_t-X^i_t\big|^2\Big] + 2\,\mathbb{E}\Big[\sup_{t\le T}\big|X^i_t-\tilde{X}^i_t\big|^2\Big], \end{equation*}
the result follows from Theorem 2 and Theorem 4.
5. Numerical examples
In this section, we study on [0, T] processes of the following type:
(5.1) \begin{equation*} X_t = X_0 + \int_0^t (\beta_s - a_s X_s)\,ds + \int_0^t (\sigma_s + \gamma_s X_s)\,dB_s + \int_0^t\int_E (\eta_s + \theta_s X_{s^-})\,c(z)\,\tilde{N}(ds,dz) + K_t, \end{equation*}
where ${(\beta_t)_{t\geq0}}$ , ${(a_t)_{t\geq0}}$ , ${(\sigma_t)_{t\geq0}}$ , ${(\gamma_t)_{t\geq0}}$ , ${(\eta_t)_{t\geq0}}$ , and ${(\theta_t)_{t\geq0}}$ are bounded adapted processes. This class of processes is chosen because it allows some explicit computations, which we use to test the algorithm. Different diffusions and functions h are considered in order to illustrate our results.
Linear constraint. Firstly, we consider the cases where ${h\,{:}\,\mathbb{R}\ni x \longmapsto x-p\in\mathbb{R}.}$
Case (i). Drifted Brownian motion and compensated Poisson process: ${\beta_t=\beta>0}$ , ${a_t=\gamma_t=\theta_t=0}$ , ${\sigma_t=\sigma>0}$ , ${\eta_t=\eta>0}$ , ${X_0=x_0\geq p}$ , ${c(z)=z}$ , and ${\lambda(dz)=\lambda f(z)\,dz}$ , where f is the density of the lognormal(0,1) distribution.
We have
and
where ${N_t\sim \mathcal{P}(\lambda t)}$ and ${\xi_i\sim lognormal(0,1).}$
Case (ii). Black–Scholes process: ${\beta_t=\sigma_t=\eta_t=0}$ , ${a_t=a>0}$ , ${\gamma_t=\gamma>0}$ , ${\theta_t=\theta>0}$ , ${c(z)=\delta_1(z)}$ . Then
where ${t^*=\frac{1}{a}(\ln(x_0)-\ln(p))}$ , and
where Y is the process defined by
Nonlinear constraint. Secondly, we consider the case of a nonlinear function h, namely ${h\,{:}\,\mathbb{R}\ni x \longmapsto x+\alpha\sin(x)-p\in\mathbb{R}}$ with ${|\alpha|<1}$ , so that h is increasing and bi-Lipschitz.
We illustrate this case with the following.
Case (iii). Ornstein–Uhlenbeck process: ${\beta_t=\beta>0}$ , ${a_t=a>0}$ , ${\gamma_t=\theta_t=0}$ , ${\sigma_t=\sigma>0}$ , ${\eta_t=\eta>0}$ , ${X_0=x_0}$ with ${x_0>|\alpha|+p}$ , ${c(z)=\delta_1(z)}$ . We obtain
where for all t in [0, T],
Remark 6. We choose these examples in order to obtain an analytic form of the 'true' reflection process K, which can be compared numerically with its empirical approximation ${\hat{K}}$ . Since the underlying process can be simulated exactly, we can verify the efficiency of our algorithm.
5.1. Proofs of the numerical illustrations
In order to have a closed, or almost closed, expression for the compensator K, we introduce the process Y solving the non-reflected version of (5.1):
\begin{equation*} Y_t = X_0 + \int_0^t (\beta_s - a_s Y_s)\,ds + \int_0^t (\sigma_s + \gamma_s Y_s)\,dB_s + \int_0^t\int_E (\eta_s + \theta_s Y_{s^-})\,c(z)\,\tilde{N}(ds,dz). \end{equation*}
By letting ${A_t=\int_0^t a_s\,ds}$ and applying Itô's formula to ${e^{A_t}X_t}$ and ${e^{A_t}Y_t}$ , we get
In the same way,
and so
Remark 7. In all cases, we have ${a_t=a}$ , i.e. ${A_t=at}$ , so we get
Proof of assertions in Case (i). From Proposition 3 and Remark 7, we have
so we obtain that
and as ${K_t\geq 0}$ , we conclude that
Next, we have
the density function of a lognormal random variable, so we can obtain
where ${\xi\sim lognormal(0,1)}$ , and we conclude that
Finally, we deduce the exact solution
where ${N_t\sim \mathcal{P}(\lambda t)}$ and ${\xi_i\sim lognormal(0,1).}$
Proof of assertions in Case (ii). In this case, using Proposition 3 and Remark 7, we have
which implies
and
So we conclude that ${K_t=ap(t-t^*)1_{t\geq t^*}}$ , where ${t^*=\frac{1}{a}(\ln(x_0)-\ln(p))}$ .
Next, by the definition of the process ${Y_t}$ ,
we have
Thanks to Itô’s formula we get
and so
Then, using integration by parts, we obtain
Finally, we deduce that
Proof of assertions in Case (iii). In this case, we have
and
Hence
On one side, since ${G_t}$ is a centered Gaussian random variable with variance
we obtain that
and
On the other side,
by taking the parameter a small, we get
and so
Using Remark 7, we conclude that, for small a,
Therefore,
5.2. Illustrations
This computation works as follows. Let ${0 = T_0 < T_1 <\cdots< T_n = T }$ be a subdivision of [0, T] of step size ${T/n}$ , n being a positive integer, let X be the unique solution of the MR-SDE (5.1), and let ${(\tilde{X}^i_{T_k})_{0\leq k\leq n}}$ , for a given i, be its numerical approximation given by Algorithm 1. For a given integer L, we draw ${(\bar{X}^l)_{1\leq l\leq L}}$ and ${(\tilde{X}^{i,l})_{1\leq l\leq L}}$ , L independent copies of X and ${\tilde{X}^i}$ . Then we approximate the ${\mathbb{L}^2}$ -error of Theorem 3 by
\begin{equation*} \hat{E} = \frac{1}{L}\sum_{l=1}^{L}\max_{0\leq k\leq n}\big|\bar{X}^l_{T_k}-\tilde{X}^{i,l}_{T_k}\big|^2. \end{equation*}
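Assuming ${\hat{E}}$ is the Monte Carlo estimator written above, it can be computed from the simulated path pairs as follows (a sketch of ours, with hypothetical array shapes):

```python
import numpy as np

def l2_error(X_exact, X_approx):
    """Monte Carlo estimate of E-hat = (1/L) sum_l max_k |X - X~|^2:
    X_exact and X_approx have shape (L, n + 1), one row per copy."""
    return np.mean(np.max((X_exact - X_approx) ** 2, axis=1))
```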
Figure 1 illustrates the evolution in time of the true K (solid line) and the estimated K (dotted line for the particle method, dashed line for the density method) in Case (i). The approximation of K is very close to the exact solution. The evolution of ${\log(\hat{E})}$ with respect to ${\log(N)}$ is depicted in Figure 2; the slope is equal to 0.9, which is consistent with the statement of Theorem 3.
Figure 3 illustrates the evolution in time of the true K (solid line) and the estimated K (dotted line for the particle method, dashed line for the density method) in Case (ii). As in the previous example, the approximation of K is very close to the exact solution. The evolution of ${\log(\hat{E})}$ with respect to ${\log(N)}$ is depicted in Figure 4; again the slope is equal to 0.9, which is consistent with the statement of Theorem 3.
Figure 5 illustrates the evolution in time of the true K (solid line) and the estimated K (dotted line for the particle method, dashed line for the density method) in Case (iii). Moreover, we notice that the approximation of K using the particle method is closer to the exact K than the one using the density method.
Appendix A. Proof of Lemma 4
Let s and t in [0, T] be such that ${s\leq t}$ .
Firstly, we suppose that ${\varphi}$ is a continuous function with compact support. In this case, there exists a sequence of Lipschitz continuous functions ${\varphi_n}$ with compact support which converges uniformly to ${\varphi}$ . Therefore, by using Proposition 2, we get
Thus, we obtain that
This result is true for all ${n\geq1}$ , so we deduce that
and we conclude the continuity of the function ${t\longmapsto\mathbb{E} [\varphi(X_t)]}$ .
Secondly, we consider the case where ${\varphi}$ is a continuous function such that, for some ${C\geq 0}$ and ${p\geq 1}$ , ${|\varphi(x)|\leq C(1+|x|^p)}$ for all ${x\in\mathbb{R}}$ .
We define a sequence of functions ${\varphi_n}$ such that for all ${n\geq1}$ and ${x\in\mathbb{R}}$ ,
with
Based on this definition, ${\varphi_n}$ is a continuous function with compact support. Then we get
Thus, by using the first part of this lemma, we obtain that
This result is true for all ${n\geq1}$ ; by using the dominated convergence theorem, we deduce that
and we conclude the continuity of the function ${t\longmapsto\mathbb{E} [\varphi(X_t)]}$ .