
Self-exciting multifractional processes

Published online by Cambridge University Press:  25 February 2021

Salvador Ortiz-Latorre*
Affiliation:
University of Oslo
*Postal address: Postboks 1053 Blindern, 0316 Oslo.

Abstract

We propose a new multifractional stochastic process which allows for self-exciting behavior, similar to what is observed, for example, in earthquakes and other self-organizing phenomena. The process can be seen as an extension of a multifractional Brownian motion, where the Hurst function depends on the past of the process. We define the process by means of a stochastic Volterra equation, prove existence and uniqueness of its solution, and give bounds on the p-order moments for all $p\geq1$. We show convergence of an Euler–Maruyama scheme for the process, and also give the rate of convergence, which depends on the self-exciting dynamics of the process. Moreover, we discuss various applications of this process, and give examples of different functions to model self-exciting behavior.

Type
Research Papers
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction and notation

In recent years, increased computing power and improved statistical tools have shown that many natural phenomena do not follow the standard normal distribution, but rather exhibit various types of memory, sometimes with properties that change over time. Several extensions of standard stochastic processes have therefore been proposed in order to give a more realistic picture of what we observe in nature. A number of stochastic processes are popular today for modeling the various features of memory associated with a random phenomenon. The most elementary example in the continuous case is the so-called fractional Brownian motion, where a fractional kernel is introduced in a way that allows us to describe memory in the process. Another way of introducing memory, in the discontinuous case, is to allow the intensity of a compound Poisson process to depend on time; see e.g. [10]. These processes, generally known as Hawkes processes, are point processes which allow for self-exciting behavior by letting the conditional intensity depend on the past events of the process. It is worth noting that there exists a close relationship between fractional stochastic differential equations and the limit of certain Hawkes processes; see [8].

In the mid-1990s Peltier and Véhel introduced a new way to model memory in a continuous stochastic process, achieved by allowing the strength of the memory of the fractional kernel in a fractional Brownian motion to be a function of time; see e.g. [13]. In this note we consider a continuous type of process inspired by the multifractional Brownian motion, obtained by letting the strength of the memory of this fractional kernel depend not only on time but also on the history of the process itself. This allows for greater flexibility in the modeling of phenomena with a complex memory structure. A similar type of process was introduced by Barrière, Echelard and Véhel [2], who allowed the memory at a certain time to be fully determined by the value of the process at that particular time. Our approach represents an alternative way of addressing this memory issue, by allowing the fractional kernel to depend on the whole path of the process. In particular, this is done by constructing the process as the solution to a singular stochastic Volterra equation.

Multifractional Brownian motion is interesting in its own right, being a non-stationary Gaussian process whose regularity properties change in time. A simple version of this process is known as the Riemann–Liouville multifractional Brownian motion and can be represented by the integral

(1.1) \begin{equation}B_{t}^{h}=\int_{0}^{t} (t-s) ^{h (t) -{{1}/{2}}}\,{\textrm{d}} B_{s},\end{equation}

where $ \{ B_{t}\} _{t\in[0,T]}$ is a Brownian motion and h is a deterministic function. Interestingly, if we restrict the process to a small interval, say $ [t-\epsilon,t+\epsilon] $, the local $\alpha$-Hölder regularity of the process on that interval is of order $\alpha\sim h(t)$ when $\epsilon$ is sufficiently small. Thus the regularity of the process depends on time. Applications of such processes have been found in fields ranging from Internet traffic and image synthesis to finance; see e.g. [3], [4], [5], [7], [12], [14], [15], and [16].

In 2010 Sornette and Filimonov proposed a self-excited multifractal process for the modeling of earthquakes and financial market crashes; see [17]. By a self-excited process, they mean a process whose future state depends directly on all the past states of the process. The model they proposed was defined in discrete time. They also suggested a possible continuous-time version of their model, but did not study its existence rigorously. This article is therefore intended as an attempt to propose a continuous-time version of a model similar to that of Sornette and Filimonov, and to study its mathematical properties.
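For readers who wish to visualize (1.1), the following is a minimal simulation sketch based on a left-endpoint Riemann-sum approximation of the Wiener integral on a uniform grid; the function name rl_mbm and the particular Hurst function in the usage lines are illustrative choices only, not part of the construction above.

```python
import numpy as np

def rl_mbm(h, T=1.0, n=1000, rng=None):
    """Riemann-sum sketch of the Riemann-Liouville multifractional
    Brownian motion (1.1): B^h_t = int_0^t (t - s)^{h(t) - 1/2} dB_s,
    where h: [0, T] -> (0, 1) is a deterministic Hurst function."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    dB = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
    t = np.linspace(0.0, T, n + 1)
    X = np.zeros(n + 1)
    for k in range(1, n + 1):
        # left-endpoint evaluation of the kernel (t_k - t_i)^{h(t_k) - 1/2}
        kernel = (t[k] - t[:k]) ** (h(t[k]) - 0.5)
        X[k] = kernel @ dB[:k]
    return t, X

if __name__ == "__main__":
    # illustrative Hurst function taking values between 0.2 and 0.8
    t, X = rl_mbm(lambda u: 0.5 + 0.3 * np.sin(2 * np.pi * u))
    print(X[-1])
```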

We will first consider an extension of a multifractional Brownian process, which is found as the solution to the stochastic differential equation

(1.2) \begin{equation}X_{t}^{h,f}=\int_{0}^{t}\exp\big(\!-f\big(t,X_{s}^{h,f}\big)(t-s)\big)(t-s)^{h\big(t,X_{s}^{h,f}\big)-{{1}/{2}}}\,{\textrm{d}} B_{s},\end{equation}

where $ \{ B_{t}\} _{t\in[0,T]}$ is a general d-dimensional Brownian motion, h is bounded and takes values in (0, 1), and f is a non-negative, sufficiently regular function. Already at this point one might expect that the local regularity of the process $X^{h,f}$ depends on the history of $X^{h,f}$ via h and f, in a manner similar to the multifractional Brownian motion in equation (1.1). As we can see, the process is formulated via a stochastic Volterra equation with a possibly singular kernel. We will therefore show existence and uniqueness for this equation, and call its solution a self-exciting multifractional gamma (SEM-gamma) process $X^{h,f}$, due to its resemblance to the gamma kernel; see e.g. [1]. We will study its probabilistic properties, and discuss examples of functions h and f which give different dynamics for the process $X^{h,f}$. The process is neither stationary nor Gaussian in general, and is therefore mathematically challenging to apply in any standard model, for example in finance. Nevertheless, the process has some interesting properties in its own right. The study of such processes could also shed some light on natural phenomena whose behavior is outside the scope of standard stochastic processes, such as the self-excited dynamics of earthquakes, as argued in [17].

We will first show the existence and uniqueness of equation (1.2) and then study probabilistic and path properties such as the variance and Hölder regularity of the process. We will introduce an Euler–Maruyama scheme to approximate the process, and show its strong convergence as well as estimate its rate of convergence. Finally, we will discuss a particular case of the process, where $f=0$, for which we find that, under certain parametrizations of h, one can also preserve some of the desired properties observed in the time series generated by the process itself.

1.1. Notation and preliminaries

Let $T>0$ be a fixed constant. We will use the standard notation $L^{\infty} ( [0,T] ) $ for essentially bounded functions on the interval [0, T]. Furthermore, let $\triangle^{(m)} ( [a,b] ) $ denote the m-simplex, defined by

\begin{equation*}\triangle^{(m)} ([a,b]) \coloneqq \{ (s_{m},\ldots,s_{1})\colon a\leq s_{1}<\cdots<s_{m}\leq b\} .\end{equation*}

We will consider functions $k\colon \triangle^{ (2) } ( [0,T] ) \rightarrow\mathbb{R}_{+}$, which will be used as a kernel in an integral operator, in the sense that we consider integrals of the form

\begin{equation*}\int_{0}^{t}k (t,s) f (s) \,{\textrm{d}} s\end{equation*}

whenever the integral is well-defined. We call these functions Volterra kernels.

Definition 1.1. Let $k\colon \triangle^{(2)} ( [0,T] ) \rightarrow\mathbb{R}_{+}$ be a Volterra kernel. If k satisfies

\begin{equation*}t\mapsto\int_{0}^{t}k (t,s) \,{\textrm{d}} s\in L^{\infty} ( [0,T] )\end{equation*}

and

\begin{equation*}\limsup_{\epsilon\downarrow0}\biggl\|\int_{\cdot}^{\cdot+\epsilon}k (\cdot+\epsilon,s) \,{\textrm{d}} s\biggr\|_{L^{\infty} ( [0,T] ) }=0,\end{equation*}

then we say that $k\in\mathcal{K}_{0}.$
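For example, the fractional kernel $k (t,s) =C (t-s) ^{\alpha-1}$ with $\alpha>0$ and $C>0$ (which appears below with $\alpha=2h_{*}$) belongs to $\mathcal{K}_{0}$, since a direct computation gives

\begin{equation*}\int_{0}^{t}k (t,s) \,{\textrm{d}} s=\dfrac{Ct^{\alpha}}{\alpha}\leq\dfrac{CT^{\alpha}}{\alpha}\quad\text{and}\quad\int_{t}^{t+\epsilon}k (t+\epsilon,s) \,{\textrm{d}} s=\dfrac{C\epsilon^{\alpha}}{\alpha},\end{equation*}

and the latter tends to zero as $\epsilon\downarrow0$, uniformly in $t\in [0,T] $.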

We will frequently use the constant C to denote a general constant, which might vary throughout the text. When it is important, we will use a subscript to denote what the constant depends upon, e.g. $C=C_{T}$ to denote dependence on T.

2. Zhang’s existence and uniqueness of stochastic Volterra equations

In this section we will assume that $ \{ B_{t}\} _{t\in [0,T] }$ is a d-dimensional Brownian motion defined on a filtered probability space $(\Omega,\mathcal{F}, \{ \mathcal{F}_{t}\} _{t\in [0,T] },\mathbb{P})$. Consider the Volterra equation

(2.1) \begin{equation}X_{t}=g_{t}+\int_{0}^{t}\sigma (t,s,X_{s}) \,{\textrm{d}} B_{s},\quad0\leq t\leq T,\end{equation}

where g is a progressively measurable stochastic process and $\sigma\colon \triangle^{(2)} ( [0,T] ) \times\mathbb{R}^{n}\rightarrow\mathcal{L} (\mathbb{R}^{d},\mathbb{R}^{n}) $ is a measurable function, where $\mathcal{L} (\mathbb{R}^{d},\mathbb{R}^{n}) $ is the linear space of $(n\times d)$-matrices.

Next we write a simplified version of the hypotheses for $\sigma$ and g, introduced previously by Zhang in [20], which will be used to prove that there exists a unique solution to (2.1).

H1. There exists $k_{1}\in\mathcal{K}_{0}$ such that the function $\sigma$ satisfies the following linear growth inequality for all $(t,s)\in\triangle^{(2)} ([0,T]) $ and $x\in\mathbb{R}^{n}$:

\begin{equation*} |\sigma (t,s,x) | ^{2}\leq k_{1} (t,s) \big(1+ |x| ^{2}\big) .\end{equation*}

H2. There exists $k_{2}\in\mathcal{K}_{0}$ such that the function $\sigma$ satisfies the following Lipschitz inequality for all $(t,s)\in\triangle^{(2)} ([0,T]) $ and $x,y\in\mathbb{R}^{n}$:

\begin{equation*} |\sigma (t,s,x) -\sigma (t,s,y) | ^{2}\leq k_{2} (t,s) |x-y| ^{2}.\end{equation*}

H3. For some $p\geq2$, we have

\begin{equation*}\sup_{t\in [0,T] }\int_{0}^{t} [k_{1} (t,s) +k_{2} (t,s) ] \cdot\mathbb{E} [ |g_{s}| ^{p}]\,{\textrm{d}} s<\infty,\end{equation*}

where $k_{1}$ and $k_{2}$ satisfy H1 and H2.

Based on the above hypotheses, we can use the following tailor-made version of the theorem on existence and uniqueness found in [20] to show that there exists a unique solution to (2.1).

Theorem 2.1. (X. Zhang.) Assume that $\sigma\colon \triangle^{(2)} ([0,T]) \times\mathbb{R}^{n}\rightarrow\mathcal{L} (\mathbb{R}^{d},\mathbb{R}^{n}) $ is measurable, that g is an $\mathbb{R}^{n}$-valued, progressively measurable process, and that H1–H3 are satisfied. Then there exists a unique $\mathbb{R}^{n}$-valued, progressively measurable process $X_{t}$ satisfying, for all $t\in[0,T]$, the equation

\begin{equation*}X_{t}=g_{t}+\int_{0}^{t}\sigma (t,s,X_{s}) \,{\textrm{d}} B_{s}.\end{equation*}

Furthermore, for some $C_{T,p,k_{1}}>0$ we have

\begin{equation*}\mathbb{E}[ |X_{t}| ^{p}]\leq C_{T,p,k_{1}} \biggl(1+\mathbb{E}[ |g_{t}| ^{p}]+\sup_{t\in[0,T]}\int_{0}^{t}k_{1} (t,s) \mathbb{E}[ |g_{s}| ^{p}]\,{\textrm{d}} s\biggr),\end{equation*}

where p is from H3, and $k_{1}$ is as given in H1.

It will also be useful in future sections to consider the following additional hypothesis.

H4. The process g is continuous and satisfies, for some $\delta>0$ and for any $p\geq2$,

\begin{equation*}\mathbb{E}\biggl[\sup_{t\in [0,T] } |g_{t}| ^{p}\biggr]<\infty\end{equation*}

and

\begin{equation*}\mathbb{E}[ |g_{t}-g_{s}| ^{p}]\leq C_{T,p} |t-s| ^{\delta p}.\end{equation*}

Remark 2.1. Hypothesis H4 is equivalent to saying that $g\in C^{\delta}$ $\mathbb{P}$-a.s.

3. Self-exciting multifractional gamma processes

In this section we will build a continuous-time process in a way that allows us to capture the intermittency effect suggested in [17], which is often observed in turbulent data. In order to do so we need to introduce some basic definitions that will serve as building blocks to introduce such processes properly. All proofs of the results stated in this section are given in the appendix.

Definition 3.1. Consider a non-negative function $f\colon [0,T] \times\mathbb{R}\rightarrow\mathbb{R}_{+}$ satisfying the Lipschitz conditions

\begin{align*} &|\,f (t,x) -f (t,y) | \leq C |x-y| ,\\[3pt] &|\,f (t,x) -f (t^{\prime},x) | \leq C |t-t^{\prime}| ,\end{align*}

for all $x,y\in\mathbb{R}$ and $t,t^{\prime}\in [0,T] $. We say that f is a dampening function if it also satisfies the linear growth condition

\begin{equation*} |\,f (t,x) | \leq C (1+ |x| ) ,\end{equation*}

for all $x\in\mathbb{R}$, $t\in [0,T] $ and some constant $C>0$.

Definition 3.2. Consider a function $h\colon [0,T] \times\mathbb{R}\rightarrow\mathbb{R}_{+}$ with parameters $ (h_{*},h^{*}) $, such that $h_{*}\leq h^{*}$. We say that h is a Hurst function if h (t, x) takes values in $ [h_{*},h^{*}] \subset (0,1) $ for all $x\in\mathbb{R}$ and $t\in [0,T] $, and h satisfies the Lipschitz conditions

\begin{align*} &|h (t,x) -h (t,y) | \leq C |x-y| ,\\[3pt] &|h (t,x) -h (t^{\prime},x) | \leq C |t-t^{\prime}| ,\end{align*}

for all $x,y\in\mathbb{R}$, $t,t^{\prime}\in [0,T] $ and some $C>0$.

Consider the stochastic process given formally by the Volterra equation

\begin{equation*}X_{t}^{h,f}=g_{t}+\int_{0}^{t}\exp \big(\!-f \big(t,X_{s}^{h,f}\big) (t-s) \big) (t-s) ^{h \big(t,X_{s}^{h,f}\big) -{{1}/{2}}}\,{\textrm{d}} B_{s},\end{equation*}

where g is a progressively measurable, one-dimensional process, $h\colon [0,T] \times\mathbb{R}\rightarrow\mathbb{R}_{+}$, $f\colon [0,T] \times\mathbb{R}\rightarrow\mathbb{R}_{+}$, and B is a one-dimensional Brownian motion. The following lemma shows the existence and uniqueness of the above equation by means of Theorem 2.1. We will discuss the continuity properties of the solution later in this section.

Lemma 3.1. Let $\sigma(t,s,x)=\exp (\!-f (t,x) (t-s) ) (t-s) ^{h(t,x)-{{1}/{2}}}$, where f is a dampening function and h is a Hurst function with parameters $ (h_{*},h^{*}) $ according to Definitions 3.1 and 3.2 respectively. Then we obtain

(3.1) \begin{equation} |\sigma (t,s,x) | ^{2}\leq k (t,s) \big(1+ |x| ^{2}\big) ,\end{equation}

where

\begin{equation*}k (t,s) =C_{T} (t-s) ^{2h_{*}-1}\end{equation*}

and

(3.2) \begin{equation} |\sigma (t,s,x) -\sigma (t,s,y) | ^{2}\leq C_{T}k (t,s) |\log (t-s) | ^{2} |x-y| ^{2}.\end{equation}

It follows that $\sigma$ satisfies H1 and H2.

Since the proposed kernel $\sigma$ satisfies H1 and H2, and provided g satisfies H3, applying Zhang's theorem (Theorem 2.1) yields existence and uniqueness of what we call a self-exciting multifractional gamma (SEM-gamma) process, given as the solution to the equation

(3.3) \begin{equation}X_{t}^{h,f}=g_{t}+\int_{0}^{t}\exp \big(\!-f \big(t,X_{s}^{h,f}\big) (t-s) \big) (t-s) ^{h \big(t,X_{s}^{h,f}\big) -{{1}/{2}}}\,{\textrm{d}} B_{s},\end{equation}

as well as bounds on p-moments of the following type:

\begin{equation*}\mathbb{E}\Big[ \big|X_{t}^{h,f}\big| ^{p}\Big]\leq C_{T,p,k_{1}} \biggl(1+\mathbb{E}[ |g_{t}| ^{p}]+\sup_{t\in [0,T] }\int_{0}^{t} (t-s) ^{h_{*}-{{1}/{2}}}\mathbb{E}[ |g_{s}| ^{p}]\,{\textrm{d}} s\biggr).\end{equation*}

Next we will show the Hölder regularity for the solution of (3.3). In order to do so we need the following preliminary lemmas, Lemmas 3.2 and 3.3.

Lemma 3.2. Let $T>u>v>0.$ Then, for any $\alpha\leq0$ and $\beta\in [0,1] $, we have

\begin{equation*} |u^{\alpha}-v^{\alpha}| \leq2^{1-\beta} |\alpha| ^{\beta} |u-v| ^{\beta} |v| ^{\alpha-\beta},\end{equation*}

and for $\alpha\in (0,1) $

\begin{equation*} |u^{\alpha}-v^{\alpha}| \leq |\alpha| \, |u-v| ^{\alpha+\beta (1-\alpha) } |v| ^{-\beta (1-\alpha) }.\end{equation*}

Lemma 3.3. Let $\sigma(t,s,x)=\exp (\!-f (t,x) (t-s) ) (t-s) ^{h(t,x)-{{1}/{2}}}$, where f is a dampening function and h is a Hurst function with parameters $ (h_{*},h^{*}) $ according to Definitions 3.1 and 3.2 respectively. Then, for any $0<\gamma<2h_{*}$, there exists $\lambda_{\gamma}\colon \triangle^{(3)} ( [0,T] ) \rightarrow\mathbb{R}$ such that

(3.4) \begin{equation} |\sigma (t,s,x) -\sigma (t^{\prime},s,x) | ^{2}\leq\lambda_{\gamma} (t,t^{\prime},s) ,\end{equation}

and

(3.5) \begin{equation}\int_{0}^{t^{\prime}}\lambda_{\gamma} (t,t^{\prime},s) \,{\textrm{d}} s\leq C_{T,\gamma} |t-t^{\prime}| ^{\gamma},\end{equation}

for some constant $C_{T,\gamma}>0.$

We are now in a position to state the Hölder continuity properties of the solution to the SEM-gamma equation (3.3), via the following proposition.

Proposition 3.1. Let $ \{ X_{t}^{h,f}\} _{t\in [0,T] }$ be a SEM-gamma process defined as in Lemma 3.1, and assume that g satisfies H4 for some $\delta>0$. Then $X^{h,f}$ admits a version with $\alpha$-Hölder trajectories for any $\alpha<h_{*}\wedge\delta$. In particular, we have

\begin{equation*} \big|X_{t}^{h,f}-X_{s}^{h,f}\big| \leq C |t-s| ^{\alpha}\quad\text{for all}\ t,s\in [0,T] .\end{equation*}

4. Simulation of self-exciting multifractional gamma processes

The aim of this section is to study a discretization scheme for the SEM-gamma processes proposed in the previous sections. In particular, we will consider an Euler-type discretization and prove that it converges strongly to the original process at a rate depending on $h_{*}$. We end the section by providing some examples of numerical simulations using the Euler discretization.

4.1. Euler–Maruyama approximation scheme

Consider a time discretization of the interval [0, T], using a step size $\Delta t={{T}/{N}}>0$. The Euler–Maruyama scheme (EM) yields a discrete-time approximation $\bar{X}_{k}^{h,f}$ of the process $X_{t_{k}}^{h,f}$ for $t_{k}=k\Delta t$ with $k\in \{ 0,\ldots,N\} ,$ given by

(4.1) \begin{align}\bar{X}_{0}^{h,f} & =X_{0}^{h,f}=0,\end{align}
(4.2) \begin{align}\bar{X}_{k}^{h,f} & =\sum_{i=0}^{k-1}\exp \big(\!-f \big(t_{k},\bar{X}_{i}^{h,f}\big) (t_{k}-t_{i}) \big) (t_{k}-t_{i}) ^{h \big(t_{k},\bar{X}_{i}^{h,f}\big) -{{1}/{2}}}\Delta B_{i},\end{align}

for all $k\in \{ 1,\ldots,N\} $, where $\Delta B_{i}=B (t_{i+1}) -B (t_{i}) $.
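The scheme (4.1)–(4.2) is straightforward to implement. The following is a minimal Python sketch, assuming that h and f accept a time point and a vector of past values of the approximation; the function name simulate_sem_gamma and the particular choices of h and f in the usage lines are illustrative and mimic Example 4.1 below.

```python
import numpy as np

def simulate_sem_gamma(h, f, T=1.0, N=100, rng=None):
    """Euler-Maruyama sketch of the SEM-gamma scheme (4.1)-(4.2).

    h(t, x): Hurst function with values in (0, 1).
    f(t, x): non-negative dampening function.
    Returns the grid t_k = k*T/N and the approximation X_k."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    dB = rng.normal(0.0, np.sqrt(dt), size=N)   # Delta B_i = B(t_{i+1}) - B(t_i)
    X = np.zeros(N + 1)                         # X_0 = 0 as in (4.1)
    for k in range(1, N + 1):
        lag = t[k] - t[:k]                      # t_k - t_i, i = 0, ..., k-1
        kernel = np.exp(-f(t[k], X[:k]) * lag) * lag ** (h(t[k], X[:k]) - 0.5)
        X[k] = kernel @ dB[:k]
    return t, X

if __name__ == "__main__":
    # illustrative choices in the spirit of Example 4.1: h(x) = 1/(1+x^2), f = 0.5
    h = lambda t, x: 1.0 / (1.0 + x ** 2)
    f = lambda t, x: 0.5 * np.ones_like(x)
    t, X = simulate_sem_gamma(h, f, T=1.0, N=100)
    print(X[-5:])
```

Since each step re-evaluates the kernel over all past grid points, the cost of this sketch grows quadratically in N.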

In order to study the approximation error, it is convenient to consider the continuous-time interpolation of $ \{ \bar{X}_{k}^{h,f}\} _{k\in \{ 0,\ldots,N\} }$ given by

(4.3) \begin{equation}\bar{X}_{t}^{h,f}=\int_{0}^{t}\exp \big(\!-f \big(t,\bar{X}_{\eta (s) }^{h,f}\big) (t-\eta (s) ) \big) (t-\eta (s) ) ^{h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -{{1}/{2}}}\,{\textrm{d}} B_{s},\end{equation}

for all $t\in [0,T] $, where $\eta\coloneqq \sum_{i=0}^{N-1}t_{i}\cdot\boldsymbol{1}_{[t_{i},t_{i+1}) }$.

Lemma 4.1. Let f be a dampening function, and let h be a Hurst function with parameters $ (h_{*},h^{*}) $ according to Definitions 3.1 and 3.2 respectively. Let $\bar{X}^{h,f}= \{ \bar{X}_{t}^{h,f}\} _{t\in [0,T] }$ be given as in (4.3). Then

\begin{equation*} \mathbb{E}\bigl[ \big|\bar{X}_{t}^{h,f}\big| ^{2}\bigr]\leq C_{T},\quad0\leq t\leq T,\end{equation*}

and

(4.4) \begin{equation}\mathbb{E}\bigl[ \big|\bar{X}_{t}^{h,f}-\bar{X}_{t^{\prime}}^{h,f}\big| ^{2}\bigr]\leq C_{T,\gamma} |t-t^{\prime}| ^{\gamma},\quad0\leq t^{\prime}\leq t\leq T,\end{equation}

for any $\gamma<2h_{*}$, where $C_{T}$ and $C_{T,\gamma}$ are positive constants.

The following result is a combination of Theorem 1 and Corollary 2 in [18].

Theorem 4.1. Suppose that $\beta>0$, that a(t) is a non-negative function, locally integrable on $0\leq t<T<+\infty$, that g(t) is a non-negative, non-decreasing continuous function defined on $0\leq t<T$ with $g(t) \leq M$ for some constant M, and that u(t) is non-negative and locally integrable on $0\leq t<T$ with

\begin{equation*}u (t) \leq a (t) +g (t) \int_{0}^{t} (t-s) ^{\beta-1}u (s) \,{\textrm{d}} s,\end{equation*}

on this interval. Then

\begin{equation*}u (t) \leq a (t) +\int_{0}^{t}\biggl(\sum_{n=1}^{\infty}\dfrac{ (g (t) \Gamma (\beta) ) ^{n}}{\Gamma (n\beta) } (t-s) ^{n\beta-1}a (s) \biggr)\,{\textrm{d}} s,\quad 0\leq t<T.\end{equation*}

If in addition a(t) is a non-decreasing function on [0,T), then

\begin{equation*}u (t) \leq a (t) E_{\beta} (g (t) \Gamma (\beta) t^{\beta}) ,\end{equation*}

where $E_{\beta}$ is the Mittag–Leffler function defined by

\begin{equation*}E_{\beta} (z) =\sum_{k=0}^{\infty}\dfrac{z^{k}}{\Gamma (k\beta+1) }.\end{equation*}
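When evaluating the error bound of Theorem 4.2 below numerically, $E_{\beta}$ can be approximated by truncating this series. A minimal sketch (the function name mittag_leffler and the truncation level are illustrative, and no control of the truncation error is attempted):

```python
import math

def mittag_leffler(beta, z, n_terms=100):
    """Truncated-series approximation of the Mittag-Leffler function
    E_beta(z) = sum_{k >= 0} z^k / Gamma(k * beta + 1)."""
    return sum(z ** k / math.gamma(k * beta + 1) for k in range(n_terms))

# sanity checks: E_1(z) = exp(z) and E_beta(0) = 1
print(mittag_leffler(1.0, 2.0), math.exp(2.0))
print(mittag_leffler(0.5, 0.0))
```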

Using Lemma 4.1 and Theorem 4.1, we can show the order of convergence for the approximating scheme.

Theorem 4.2. Let f be a dampening function, and let h be a Hurst function with parameters $ (h_{*},h^{*}) $ according to Definitions 3.1 and 3.2 respectively. Let $\bar{X}^{h,f}= \{ \bar{X}_{t}^{h,f}\} _{t\in [0,T] }$ be given as in (4.3). Then the Euler–Maruyama scheme (4.3) satisfies

\begin{equation*} \sup_{0\leq t\leq T}\mathbb{E}\bigl[ \big|X_{t}^{h,f}-\bar{X}_{t}^{h,f}\big| ^{2}\bigr]\leq C_{T,\gamma,h_{*}}E_{h_{*}} (C_{T,\gamma,h_{*}}\Gamma (h_{*}) T^{h_{*}}) |\Delta t| ^{\gamma},\end{equation*}

where $\gamma\in (0,2h_{*}) $ and $C_{T,\gamma,h_{*}}$ is a positive constant which does not depend on N.

Remark 4.1. In [19], Zhang introduced an Euler-type scheme for stochastic differential equations of Volterra type and showed that his scheme converges at a certain positive rate, without giving a precise value for that rate. A direct application of his result to our case provides a worse rate than the one we obtain in Theorem 4.2; the reason is that the particular form of our kernel allows us to use a fractional Grönwall lemma, which provides sharper estimates in our proof.

4.2. Examples

The following examples provide some numerical simulations of the SEM-gamma process and also discuss the observed properties in the time series generated for different Hurst and dampening functions.

Example 4.1. Let $h (x) ={{1}/{{(1+x^{2})}}}\in (0,1) ,$ $f=0.5,$ and let $ \{ B_{t}\} _{t\in[0,T]}$ be a one-dimensional Brownian motion. Assume $X_{t}^{h,f}$ starts at zero and define the process as in (1.2). Figure 1(a) shows the plot of h and Figure 1(b) shows a sample path of the process resulting from the implementation of the EM approximation given by (4.1) and (4.2). The step size used in all simulations is $\Delta t=1/100$.

Figure 1. Numerical simulation of a trajectory of a SEM-gamma process for $f=0.5$ and $h(x)={{1}/{{(1+x^{2})}}}$.

In [17], Sornette and Filimonov suggested a class of self-excited processes that may exhibit all the stylized facts found in financial time series, such as heavy tails (asset return distributions display heavy tails with positive excess kurtosis), absence of autocorrelation (autocorrelations in asset returns are negligible, except for very short time scales $\simeq20$ minutes), volatility clustering (absolute returns display a positive, significant and slowly decaying autocorrelation function), and the leverage effect (volatility measures of an asset are negatively correlated with its returns), among others listed in [6]. The SEM-gamma process exhibits similar features for any reasonable choice of Hurst function h taking values in (0,1), given a good choice for the mean reversion, i.e. a controlled dampening function f. SEM-gamma processes may also be useful for modeling the log-volatility of financial assets of various kinds, in order to capture the roughness of the underlying volatility, as suggested by Gatheral, Jaisson, and Rosenbaum in [9]. In fact, squaring or taking the exponential of the SEM-gamma process in Figures 2 and 3 seems to capture the varying Hurst index and the clustering phenomena observed for realized volatility time series, as Gatheral et al. [9] suggest.

Example 4.2. Figure 2 shows the change in the behavior of the Hurst exponent (a transition from rougher values to smoother values, i.e. $h\approx0$ to $h\approx1$) as we progressively change f from lower values to higher values. In particular we compare $f (x) \equiv c$, where $c\in \{ 0,0.5,1,5\} $.

Figure 2. Numerical simulations of trajectories of SEM-gamma processes for $h (x) ={{1}/{{(1+x^{2})}}}$ and $f\in \{ 0,0.5,1,5\} $.

Figure 3. Scale comparison of the SEM-gamma process with $f (x) =5$ and $h (x) ={{1}/{{(1+x^{2})}}}$.

Remark 4.2. Note from Figure 2 that one can control the clustering effect of the increments and the varying regularity of the process by controlling the parameter f, regardless of the Hurst function chosen as h. This type of effect is easy to achieve in the SEM-gamma process for any h given $f\neq0$, but it can also be achieved for $f=0$ with the right choice of the Hurst function h. The clustering effect is desirable in numerous fields, for example in the modeling of financial markets, when trying to capture shocks in asset prices. Figure 3 shows the rough nature of the process hidden at a smaller scale; this is seen by zooming in sufficiently on the case $f (x) =5$.

It also makes sense to let f(x) be a function of x rather than a constant. In particular, if we take $f (x) =h (x) ={{1}/{{(1+x^{2})}}}$, we can see in Figure 4 how the regime switch in the Hurst exponent is less abrupt, favoring more sustained differences of roughness over time.

Figure 4. Numerical simulation of a trajectory of a SEM-gamma process when the Hurst function is $h(x)=f (x) ={{1}/{{(1+x^{2})}}}$.

5. The SEM process: a particular case of the SEM-gamma process

Consider the stochastic process given by the particular case where $f=0$ in (1.2):

\begin{equation*}X_{t}^{h}=\int_{0}^{t} (t-s) ^{h \big(t,X_{s}^{h}\big) -{{1}/{2}}}\,{\textrm{d}} B_{s}.\end{equation*}

This process is a continuous-time version of the discrete-time model proposed by Sornette and Filimonov in [17]. All the previous results on existence and uniqueness of the solution hold, as does the discretization scheme. It is interesting to note that regime switches in the Hurst exponent are observed only for certain choices of the parametrization of the Hurst function. In order to illustrate this fact we give a series of examples.

5.1. Examples

Let us discuss the SEM processes resulting from considering different functions $h\colon \mathbb{R}\rightarrow (0,1) $.

Example 5.1. Let

\begin{equation*} h(x)=\dfrac{1}{2}+\dfrac{1/2}{1+x^{2}}\in\biggl(\dfrac{1}{2},1\biggr)\subset(0,1), \end{equation*}

$f=0$, and let $ \{ B_{t}\} _{t\in[0,T]}$ be a one-dimensional Brownian motion. Assume $X_{t}^{h}$ starts at zero and define the process as in (1.2). Figure 5(a) shows the plot of h and Figure 5(b) shows a sample path of the process resulting from the implementation of the EM approximation. Note that this process is smoother than a Brownian motion at the origin and rapidly converges to the classical Brownian motion as the process departs from zero. This means that the value of h quickly approaches $1/2$ and goes back to 1 whenever the process $X_{t}^{h}$ crosses the x-axis. Note that the Hurst exponent, as a function of time, has a very low frequency of regime changes, as observed from the red line plot.

Figure 5. Numerical simulation of a trajectory of a SEM process when the Hurst function is $h (x) ={{1}/{2}}+{{{(1/2)}}/{{(1+x^{2})}}}$.

Figure 6. Numerical simulation of a trajectory of a SEM process when the Hurst function is $h (x) ={{1}/{2}}-{{{(1/2)}}/{{(1+x^{2})}}}$.

Example 5.2. Let

\begin{equation*} h (x) =\dfrac{1}{2}-\dfrac{1/2}{1+x^{2}}\in\biggl(0,\dfrac{1}{2}\biggr)\subset (0,1) , \end{equation*}

$f=0$, and let $ \{ B_{t}\} _{t\in[0,T]}$ be a one-dimensional Brownian motion. Assume $X_{t}^{h}$ starts at zero and define the process as in (1.2). Figure 6(a) shows the plot of h and Figure 6(b) shows a sample path of the process. It is interesting to note that in this case, contrary to the previous example, we have a process that is rougher than a Brownian motion at the origin, which temporarily resembles the classical Brownian motion as the sample path departs from zero and becomes rougher again whenever the process crosses the x-axis. This makes the process move away from zero even faster, due to the increased roughness, producing a high frequency of regime changes in the Hurst exponent as a function of time.

Example 5.3. Let $h (x) ={{1}/{{(1+x^{2})}}}\in (0,1) $. This is exactly the case examined in the previous section with $f=0$. Note that in that case the Hurst exponent, as a function of time, quickly collapsed to zero, making the process as rough as possible and preventing any regime switch in the values of the Hurst function. This illustrates the fact that regime switches in the Hurst functions of SEM processes can also be achieved, provided that we make the right choice in the parametrization of the function h. SEM-gamma processes represent the opposite case: regardless of the choice of Hurst function, the frequency of regime switches in h is, in the long run, controlled entirely by the dampening function $f>0$.

Remark 5.1. The plots in Figure 7 show the autocorrelation function of the absolute values of the increments of the SEM process from Example 5.3 (Figure 7(a)) and of the SEM-gamma process with $f (x) =0.1$ (Figure 7(b)). Note that the autocorrelation in the second case is clearly much higher.

Figure 7. SEM and SEM-gamma processes’ autocorrelation functions.
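The autocorrelation functions in Figure 7 can be reproduced from simulated paths by computing the sample autocorrelation of the absolute increments. A minimal sketch, assuming a path sampled on a uniform grid and using a standard biased sample autocorrelation estimator (the function name abs_increment_acf is illustrative):

```python
import numpy as np

def abs_increment_acf(path, max_lag=50):
    """Sample autocorrelation of the absolute increments |X_{k+1} - X_k|
    of a discretely observed path, for lags 0, ..., max_lag."""
    a = np.abs(np.diff(np.asarray(path, dtype=float)))
    a = a - a.mean()
    var = np.dot(a, a)
    return np.array([np.dot(a[:len(a) - k], a[k:]) / var
                     for k in range(max_lag + 1)])

# usage sketch: compare a SEM path (f = 0) with a SEM-gamma path
# (f = 0.1), both simulated e.g. with the Euler-Maruyama sketch of
# Section 4.1, and plot the two autocorrelation functions side by side.
```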

Appendix A. Proofs

Proofs of all the results from previous sections are given in this appendix.

A.1 Proof of Lemma 3.1

Proof. We prove the claims in the order they are stated in the lemma, starting with (3.1). Recall that f is a non-negative function and

\begin{equation*}h (t,x) \in [h_{*},h^{*}] \subset (0,1) ,\end{equation*}

for all $t\in [0,T] $ and $x\in\mathbb{R}$, so elementary computations show that

(A.1) \begin{align} |\sigma (t,s,x) | ^{2} & ={\textrm{e}}^{-2f (t,x) (t-s) } (t-s) ^{2h (t,x) -1}\notag \\ &\leq (t-s) ^{2h (t,x) -1}\nonumber \\ & \leq (t-s) ^{2 (h (t,x) -h_{*}) +2h_{*}-1}\notag \\ &\leq T^{2 (h^{*}-h_{*}) } (t-s) ^{2h_{*}-1},\end{align}

which yields (3.1) with $k (t,s) =C_{T} (t-s) ^{2h_{*}-1}$. Next we consider (3.2), and using $x=\exp (\log (x) ) $ we write

\begin{equation*}\sigma (t,s,x) =\exp\biggl(\!-f (t,x) (t-s) +\log (t-s) \biggl(h (t,x) -\dfrac{1}{2}\biggr)\biggr),\end{equation*}

where $ (t,s) \in\triangle^{ (2) } ( [0,T] ) $, and consider the following inequality derived from the fundamental theorem of calculus:

(A.2) \begin{equation} |{\textrm{e}}^{x}-{\textrm{e}}^{y}| \leq {\textrm{e}}^{\max(x,y)} |x-y| ,\quad x,y\in\mathbb{R}.\end{equation}

Using the Lipschitz assumption on h and the above inequality, we can write the following upper bound:

(A.3) \begin{align} & |\sigma (t,s,x) -\sigma (t,s,y) | \nonumber \\[3pt] & \quad\leq\exp (\!\min (\,f (t,x) ,f (t,y) ) (t-s) ) \exp\biggl(\biggl[\max (h (t,x) ,h (t,y) ) -\dfrac{1}{2}\biggr]\cdot\log (t-s) \biggr)\nonumber \\[3pt] & \quad\quad\, \times ( |\,f (t,x) -f (t,y) |\, |t-s| + |\log (t-s) |\, |h (t,x) -h (t,y) | ) .\end{align}

Recalling that $ |{\textrm{e}}^{-f (t,x) (t-s) }| \leq1$ and that f and h are uniformly Lipschitz, it is readily checked that (A.3) implies

\begin{equation*} |\sigma (t,s,x) -\sigma (t,s,y) | ^{2} \leq C_{T} |t-s| ^{2h_{*}-1} |\log (t-s) | ^{2} |x-y| ^{2}.\end{equation*}

We will now define the kernels $k_{1}$ and $k_{2}$ which satisfy hypotheses H1 and H2. To this end, let $k_{1}$ and $k_{2}$ be given by

\begin{align*}k_{1} (t,s) \equiv h_{1} (t-s) & \coloneqq C (t-s) ^{2h_{*}-1},\\[3pt] k_{2} (t,s) \equiv h_{2} (t-s) & \coloneqq h_{1} (t-s) |\log (t-s) | ^{2}.\end{align*}

It is now readily seen that $k_{1}$ and $k_{2}$ automatically lie in $\mathcal{K}_{0}$, due to the fact that both $h_{1}$ and $h_{2}$ are functions in $L_{loc}^{1} (\mathbb{R}_{+}) $; see e.g. Section 2.1 in [20]. This concludes the proof.

A.2 Proof of Lemma 3.2

Proof. For $\alpha=0$ the claim is clear. For $\alpha<1$ and $\alpha\neq0,$ writing the difference via the fundamental theorem of calculus we get

(A.4) \begin{align} |u^{\alpha}-v^{\alpha}| & =\biggl| (u-v) \int_{0}^{1}\alpha (v+\theta (u-v) ) ^{\alpha-1} \,{\textrm{d}} \theta\biggr|\nonumber \\[3pt] & \leq |\alpha|\, |u-v| \int_{0}^{1} |v+\theta (u-v) | ^{\alpha-1}\,{\textrm{d}} \theta \notag \\[3pt] &\leq |\alpha|\, |u-v|\, |v| ^{\alpha-1},\end{align}

where we have used $ |v+\theta (u-v) | ^{\alpha-1}\leq |v| ^{\alpha-1}$. On the other hand, using $ |v+\theta (u-v) | ^{\alpha-1}\leq\theta^{\alpha-1} |u-v| ^{\alpha-1}$ and assuming that $\alpha\in (0,1) $, we obtain

(A.5) \begin{equation} |u^{\alpha}-v^{\alpha}| \leq\alpha |u-v| ^{\alpha}\int_{0}^{1}\theta^{\alpha-1}\,{\textrm{d}} \theta= |u-v| ^{\alpha}.\end{equation}

In what follows we will use the interpolation inequality $a\wedge b\leq a^{\beta}b^{1-\beta}$ for any $a,b>0$ and $\beta\in [0,1] $. Consider the case $\alpha<0.$ Using the interpolation inequality with the simple bound $ |u^{\alpha}-v^{\alpha}| \leq2 |v| ^{\alpha}$ and the bound (A.4), we get

\begin{equation*} |u^{\alpha}-v^{\alpha}| \leq2^{1-\beta} |\alpha| ^{\beta} |u-v| ^{\beta} |v| ^{\beta (\alpha-1) } |v| ^{(1-\beta)\alpha}=2^{1-\beta} |\alpha| ^{\beta} |u-v| ^{\beta} |v| ^{\alpha-\beta}.\end{equation*}

Consider the case $\alpha\in (0,1) $. Using the interpolation inequality with the bounds (A.4) and (A.5), we can write

\begin{equation*} |u^{\alpha}-v^{\alpha}| \leq |\alpha|\, |u-v| ^{\beta} |v| ^{\beta (\alpha-1) } |u-v| ^{ (1-\beta) \alpha}= |\alpha|\, |u-v| ^{\alpha+\beta (1-\alpha) } |v| ^{-\beta (1-\alpha) }.\end{equation*}

This concludes the proof.

A.3 Proof of Lemma 3.3

Proof. In order to prove (3.4), note that

\begin{equation*}\sigma (t,s,x) -\sigma (t^{\prime},s,x) ={\textrm{e}}^{-f (t,x) (t-s) } (t-s) ^{h (t,x) -{{1}/{2}}}-{\textrm{e}}^{-f (t^{\prime},x) (t^{\prime}-s) } (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}.\end{equation*}

Furthermore, note that for all $t>t^{\prime}>s>0$ we can add and subtract the term

\begin{equation*}{\textrm{e}}^{-f (t,x) (t-s) } (t-s) ^{h(t^{\prime},x)-{{1}/{2}}},\end{equation*}

to get

\begin{equation*}\sigma (t,s,x) -\sigma (t^{\prime},s,x) =J^{1} (t,t^{\prime},s,x) +J^{2} (t,t^{\prime},s,x) ,\end{equation*}

where

\begin{align*}J^{1} (t,t^{\prime},s,x) & \coloneqq {\textrm{e}}^{-f (t,x) (t-s) } ( (t-s) ^{h (t,x) -{{1}/{2}}}- (t-s) ^{h(t^{\prime},x)-{{1}/{2}}}) ,\\[3pt] J^{2} (t,t^{\prime},s,x) & \coloneqq {\textrm{e}}^{-f (t,x) (t-s) } (t-s) ^{h(t^{\prime},x)-{{1}/{2}}}-{\textrm{e}}^{-f (t^{\prime},x) (t^{\prime}-s) } (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}.\end{align*}

We start by finding an upper bound for the term $J^{1}$. Using (A.2), the fact that ${\textrm{e}}^{-f (t,x) (t-s) }\leq1,$ and that h is Lipschitz in the time argument, by arguments similar to those in Lemma 3.1 we obtain, for any $\delta\in (0,1) $ and $s<t^{\prime}<t$,

\begin{align*} |J^{1} (t,t^{\prime},s,x) | & \leq C_{T} |t-t^{\prime}|\, |t-s| ^{h_{*}-{{1}/{2}}} |\log (t-s) | \\[3pt] & \leq C_{T,\delta} |t-t^{\prime}|\, |t'-s| ^{h_{*}-{{1}/{2}}-\delta}.\end{align*}

Next, in order to bound the term $J^{2}$, we add and subtract the quantity ${\textrm{e}}^{-f (t,x) (t-s) }\break (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}$, to obtain

\begin{align*}|J^{2} (t,t^{\prime},s,x)| & \leq \big|{\textrm{e}}^{-f(t,x)(t-s)} \big((t-s)^{h(t^{\prime},x)-{{1}/{2}}}-(t^{\prime}-s)^{h(t^{\prime},x)-{{1}/{2}}} \big)\\[3pt] & \quad\, - (t^{\prime}-s)^{h (t^{\prime},x)-{{1}/{2}}} \big({\textrm{e}}^{-f (t,x)(t-s)}-{\textrm{e}}^{-f(t^{\prime},x)(t^{\prime}-s)}\big) \big|\\[3pt] & \leq \big|(t-s)^{h(t^{\prime},x)-{{1}/{2}}}-(t^{\prime}-s)^{h(t^{\prime},x)-{{1}/{2}}}\big|\\[3pt] & \quad\, + \big|(t^{\prime}-s)^{h(t^{\prime},x)-{{1}/{2}}}\big|\,\big|{\textrm{e}}^{-f(t,x)(t-s)}-{\textrm{e}}^{-f(t^{\prime},x)(t^{\prime}-s)}\big|.\end{align*}

Using (A.2), we can rewrite the above expression as

\begin{align*} |J^{2} (t,t^{\prime},s,x) | & \leq | (t-s) ^{h(t^{\prime},x)-{{1}/{2}}}- (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}| + | (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}| \\[3pt] & \quad\, \times |{\textrm{e}}^{\max (\!-f (t,x) (t-s) ,-f (t^{\prime},x) (t^{\prime}-s) ) }|\, |\,f (t^{\prime},x) (t^{\prime}-s) -f (t,x) (t-s) | \\[3pt] & \leq | (t-s) ^{h(t^{\prime},x)-{{1}/{2}}}- (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}| \\[3pt] & \quad\, +C_{T} |t^{\prime}-s| ^{h (t^{\prime},x) -{{1}/{2}}} |\,f (t^{\prime},x) (t^{\prime}-s) -f (t,x) (t-s) | .\end{align*}

Then, adding and subtracting $f (t,x) (t'-s) $, and using the linear growth and Lipschitz conditions on f, we obtain

\begin{align*} |\,f (t^{\prime},x) (t^{\prime}-s) -f (t,x) (t-s) | & \leq |\,f (t,x) |\, |t'-t| + |t'-s|\, |\,f (t',x) -f (t,x) | \\[3pt] & \leq C |t'-t| +C |t'-s|\, |t'-t| \\[3pt] & \leq C_{T} |t'-t| ,\end{align*}

and we have

\begin{align*} |J^{2} (t,t^{\prime},s,x) | & \leq | (t-s) ^{h(t^{\prime},x)-{{1}/{2}}}- (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}| +C_{T} |t^{\prime}-s| ^{h (t^{\prime},x) -{{1}/{2}}} |t-t^{\prime}| \\[3pt] & \leq | (t-s) ^{h(t^{\prime},x)-{{1}/{2}}}- (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}| +C_{T} |t^{\prime}-s| ^{h_{*}-{{1}/{2}}} |t-t^{\prime}| .\end{align*}

Now we apply Lemma 3.2 with $u=t-s$, $v=t^{\prime}-s$ and $\alpha=h (t^{\prime},x) -\frac{1}{2}$ to upper-bound the above inequality. Note that since $h (t^{\prime},x) \in [h_{*},h^{*}] \subset (0,1) ,$

\begin{equation*}\alpha\in\biggl[h_{*}-\dfrac{1}{2},h^{*}-\dfrac{1}{2}\biggr]\subset\biggl(\!-\dfrac{1}{2},\dfrac{1}{2}\biggr).\end{equation*}

Hence, if $\alpha=h (t^{\prime},x) -\frac{1}{2}\leq0$ (this implies $h_{*}\leq1/2$ and $\alpha\in (\!-\frac{1}{2},0) $), we get

\begin{align*} | (t-s) ^{h(t^{\prime},x)-{{1}/{2}}}- (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}| & \leq2 |t-t^{\prime}| ^{\beta_{1}} |t^{\prime}-s| ^{h(t^{\prime},x)-{{1}/{2}}-\beta_{1}}\\[3pt] & \leq C_{T} |t-t^{\prime}| ^{\beta_{1}} |t^{\prime}-s| ^{h_{*}-{{1}/{2}}-\beta_{1}},\end{align*}

for any $\beta_{1}\in (0,1) $. If $\alpha=h(t^{\prime},x)-\frac{1}{2}>0$ (this implies $h^{*}>1/2$ and $\alpha\in (0,\frac{1}{2}) $), we get

\begin{align*} | (t-s) ^{h(t^{\prime},x)-{{1}/{2}}}- (t^{\prime}-s) ^{h (t^{\prime},x) -{{1}/{2}}}| & \leq\dfrac{1}{2} |t-t^{\prime}| ^{\alpha+\beta_{2} (1-\alpha) } |t^{\prime}-s| ^{-\beta_{2} (1-\alpha) }\\[3pt] & \leq\dfrac{1}{2} |t-t^{\prime}| ^{\alpha+{{1}/{2}}-\varepsilon (1-\alpha) } |t^{\prime}-s| ^{-{{1}/{2}}+\varepsilon (1-\alpha) }\\[3pt] & \leq\dfrac{1}{2} |t-t^{\prime}| ^{h_{*}-\varepsilon} |t^{\prime}-s| ^{-{{1}/{2}}+{{{\varepsilon}}/{2}}},\end{align*}

where in the second inequality we have chosen $\beta_{2}=\dfrac{1}{2 (1-\alpha) }-\varepsilon$ for some $\varepsilon>0$, and in the third inequality we used $(1-\alpha)\in (\frac{1}{2},1) $. Therefore we can write the following bound:

\begin{align*} |\sigma (t,s,x) -\sigma (t^{\prime},s,x) | ^{2} & \leq2 ( |J^{1} (t,t^{\prime},s,x) | ^{2}+ |J^{2} (t,t^{\prime},s,x) | ^{2}) \\[3pt] & \leq2\biggl(C_{T,\delta} |t-t^{\prime}| ^{2} |t'-s| ^{2h_{*}-1-2\delta}\\[3pt] & \quad\, +C_{T} |t-t^{\prime}| ^{2\beta_{1}} |t^{\prime}-s| ^{2h_{*}-1-2\beta_{1}}+\dfrac{1}{2} |t-t^{\prime}| ^{2h_{*}-2\varepsilon} |t^{\prime}-s| ^{-1+\varepsilon}\biggr)\\[3pt] & \leq C_{T,\beta_{1}} |t-t^{\prime}| ^{2\beta_{1}} |t'-s| ^{-1+h_{*}-\beta_{1}},\end{align*}

where to get the last inequality we chose $\delta=\beta_{1}$ and $\varepsilon=h_{*}-\beta_{1}$. Therefore, defining

\begin{equation*}\lambda_{\gamma} (t,t^{\prime},s) \coloneqq C_{T,\gamma} (t-t^{\prime}) ^{\gamma} (t^{\prime}-s) ^{-1+h_{*}-{{{\gamma}}/{2}}},\end{equation*}

and choosing $\gamma$ such that $0<\gamma<2h_{*},$ we can compute

\begin{equation*}\int_{0}^{t^{\prime}}\lambda_{\gamma} (t,t^{\prime},s) \,{\textrm{d}} s\leq C_{T,\gamma} (t^{\prime}) ^{h_{*}-{{{\gamma}}/{2}}} (t-t^{\prime}) ^{\gamma},\end{equation*}

which concludes the proof.

A.4. Proof of Proposition 3.1

Proof. By Lemma 3.1 there exists a unique solution $X_{t}^{h,f}$ to (3.3) with bounded p-order moments. We will show that $X^{h,f}$ also has Hölder-continuous paths. To this end, we will show that for any $p\in\mathbb{N}$ there exist a constant $C_{p}>0$ and an exponent $\alpha_{p}$, both depending on p, such that

\begin{equation*}\mathbb{E}\bigl[ \big|X_{s,t}^{h,f}\big| ^{2p}\bigr]\leq C_{p} |t-s| ^{\alpha_{p}},\end{equation*}

where we use the increment notation $q_{s,t}=q_{t}-q_{s}.$ From this we apply Kolmogorov's continuity theorem (e.g. Theorem 2.8 in [11, page 53]) in order to obtain the claim. Note that $X_{s,t}^{h,f}-g_{s,t}$ satisfies

\begin{align*}X_{s,t}^{h,f}-g_{s,t} & =\int_{s}^{t}\exp \big(\!-f \big(t,X_{r}^{h,f}\big) (t-r) \big) (t-r) ^{h \big(r,X_{r}^{h,f}\big) -{{1}/{2}}}\,{\textrm{d}} B_{r}\\[3pt] & \quad\, +\int_{0}^{s}\exp \big(\!-f \big(t,X_{r}^{h,f}\big) (t-r) \big) (t-r) ^{h \big(r,X_{r}^{h,f}\big) -{{1}/{2}}}\,{\textrm{d}} B_{r}\\[3pt] & \quad\, -\int_{0}^{s}\exp \big(\!-f \big(s,X_{r}^{h,f}\big) (s-r) \big) (s-r) ^{h \big(r,X_{r}^{h,f}\big) -{{1}/{2}}}\,{\textrm{d}} B_{r},\end{align*}

and thus using that

(A.6) \begin{equation} |a+b| ^{q}\leq2^{q-1} ( |a| ^{q}+ |b| ^{q}) ,\end{equation}

for any $q\in\mathbb{N},$ we obtain

\begin{align*} & \mathbb{E}\bigl[\big|X_{s,t}^{h,f}-g_{s,t}\big|^{2p}\bigr]\\[3pt] & \quad \leq C_{p}\mathbb{E}\biggl[\biggl|\int_{s}^{t}\exp\big(\!-f\big(t,X_{r}^{h,f}\big)(t-r)\big)(t-r)^{h\big(r,X_{r}^{h,f}\big)-{{1}/{2}}}\,{\textrm{d}} B_{r}\biggr|^{2p}\biggr]\\[3pt] & \quad\quad\, +C_{p}\mathbb{E}\biggl[\biggl|\int_{0}^{s}\Bigl(\exp\big(\!-f\big(t,X_{r}^{h,f}\big)(t-r)\big)(t-r)^{h\big(r,X_{r}^{h,f}\big)-{{1}/{2}}}\\[3pt] & \quad \quad\, -\exp\big(\!-f \big(s,X_{r}^{h,f}\big)(s-r)\big)(s-r)^{h \big(r,X_{r}^{h,f}\big)-{{1}/{2}}}\Bigr)\,{\textrm{d}} B_{r}\biggr|^{2p}\biggr]\\[3pt] & \quad\eqqcolon C_{p}\big(J_{s,t}^{1}+J_{s,t}^{2}\big).\end{align*}

Clearly, as $h (t,x) \in [h_{*},h^{*}] \subset (0,1) $, by the Burkholder–Davis–Gundy (BDG) inequality we have

(A.7) \begin{align}J_{s,t}^{1} & \leq C_{p}\mathbb{E}\biggl[\biggl|\int_{s}^{t}(t-r)^{2h (t,X_{r}) -1}\,{\textrm{d}} r\biggl|^{p}\biggr]\notag \\[3pt]& \leq C_{p,T}\biggl|\int_{s}^{t} (t-r) ^{2h_{*}-1}\,{\textrm{d}} r\biggr|^{p}\notag \\[3pt]&=C_{p,T,h_{*}} |t-s| ^{2ph_{*}}.\end{align}

Now consider the term $J_{s,t}^{2}$. Again applying the BDG inequality, together with the bounds (3.4) and (3.5) from Lemma 3.3, we have, for any $\gamma<2h_{*}$,

(A.8) \begin{align}J_{s,t}^{2} & \leq C_{p}\mathbb{E}\biggl[\biggl|\int_{0}^{s}\Bigl({\textrm{e}}^{-f \big(t,X_{r}^{h,f}\big) (t-r) } (t-r) ^{h (t,X_{r}) -{{1}/{2}}} -{\textrm{e}}^{-f \big(s,X_{r}^{h,f}\big) (s-r) } (s-r) ^{h (t,X_{r}) -{{1}/{2}}}\Bigr)^{2}\,{\textrm{d}} r\biggr|^{p}\biggr]\notag \\[3pt]& \leq C_{p}\mathbb{E}\biggl[\biggl|\int_{0}^{s}\lambda_{\gamma} (t,s,r) \,{\textrm{d}} r\biggr|^{p}\biggr]\notag \\[3pt]&\leq C_{p,T,\gamma} |t-s| ^{p\gamma},\end{align}

Combining (A.7) and (A.8), we can see that

\begin{equation*}\mathbb{E}\bigl[\big|X_{s,t}^{h,f}-g_{s,t}\big|^{2p}\bigr]\leq C_{p,T,\gamma} |t-s| ^{p\gamma}.\end{equation*}

Furthermore, again using (A.6) we see that

\begin{equation*}\mathbb{E}\bigl[\big|X_{s,t}^{h,f}\big|^{2p}\bigr]\leq2^{2p-1}\bigl(\mathbb{E}\bigl[\big|X_{s,t}^{h,f}- (g_{t}-g_{s})\big|^{2p}\bigr]+\mathbb{E}[|g_{s,t}|^{2p}]\bigr).\end{equation*}

Thus, invoking the bounds from H4 on g, we obtain

\begin{equation*}\mathbb{E}\bigl[\big|X_{s,t}^{h,f}\big|^{2p}\bigr]\leq C_{p,T,\gamma} |t-s| ^{2p ({{{\gamma}}/{2}}\wedge\delta) },\end{equation*}

and it follows from Kolmogorov’s continuity theorem that $X^{h,f}$ has $\mathbb{P}$-a.s. $\alpha$-Hölder-continuous trajectories with $\alpha\in (0,h_{*}\wedge\delta) $.

A.5. Proof of Lemma 4.1

Proof. Recall that $k (t,s) =C_{T} (t-s) ^{2h_{*}-1}$ and, since $\eta (s) \leq s$, we have the inequality

(A.9) \begin{equation}k (t,\eta (s) ) \leq k (t,s) .\end{equation}

Using the Itô isometry, that ${\textrm{e}}^{-2f (t,x) }\leq1$, and (A.1) and (A.9), we obtain

\begin{align*}\mathbb{E}\bigl[\big|\bar{X}_{t}^{h,f}\big|^{2}\bigr] & =\mathbb{E}\biggl[\int_{0}^{t}\,{\textrm{e}}^{-2f \big(t,\bar{X}_{\eta (s) }^{h,f}\big) (t-\eta (s) ) } (t-\eta (s) ) ^{2h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -1}\,{\textrm{d}} s\biggr]\\[3pt] & \leq\mathbb{E}\biggl[\int_{0}^{t} (t-\eta (s) ) ^{2h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -1}\,{\textrm{d}} s\biggr]\\[3pt] & \leq\int_{0}^{t}k (t,\eta (s) ) \,{\textrm{d}} s\\[3pt]&\leq\int_{0}^{t}k (t,s) \,{\textrm{d}} s\\[3pt]&\leq C_{T}.\end{align*}

To prove the bound (4.4), note that

\begin{align*}\bar{X}_{t}^{h,f}-\bar{X}_{t^{\prime}}^{h,f} & =\int_{t^{\prime}}^{t}\,{\textrm{e}}^{-f \big(t,\bar{X}_{\eta (s) }^{h,f}\big) (t-\eta (s) ) } (t-\eta (s) ) ^{h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -{{1}/{2}}}\,{\textrm{d}} B_{s}\\[3pt] & \quad\, +\int_{0}^{t^{\prime}}\bigg\{ {\textrm{e}}^{-f \big(t,\bar{X}_{\eta (s) }^{h,f}\big) (t-\eta (s) ) } (t-\eta (s) ) ^{h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -{{1}/{2}}}\\[3pt] & \quad\, -{\textrm{e}}^{-f \big(t^{\prime},\bar{X}_{\eta (s) }^{h,f}\big) (t^{\prime}-\eta (s) ) } (t^{\prime}-\eta (s) ) ^{h \big(t^{\prime},\bar{X}_{\eta (s) }^{h,f}\big) -{{1}/{2}}}\bigg\} \,{\textrm{d}} B_{s}\\[3pt] & \eqqcolon J_{1}+J_{2}.\end{align*}

Due to the Itô isometry, that ${\textrm{e}}^{-2f (t,x) }\leq1$, and (A.1) and (A.9), we obtain the bounds

\begin{align*}\mathbb{E}[ |J_{1}| ^{2}] & =\mathbb{E}\biggl[\int_{t^{\prime}}^{t}\,{\textrm{e}}^{-2f \big(t,\bar{X}_{\eta (s) }^{h,f}\big) (t-\eta (s) ) } (t-\eta (s) ) ^{2h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -1}\,{\textrm{d}} s\biggr]\\[3pt] & \leq\int_{t'}^{t}k (t,\eta (s) ) \,{\textrm{d}} s\\[3pt]&\leq\int_{t'}^{t}k (t,s) \,{\textrm{d}} s\\[3pt]&=C_{T} |t-t^{\prime}| ^{2h_{*}}.\end{align*}

Again using the Itô isometry and (3.4) and (3.5), for any $\gamma<2h_{*}$ we can write that

\begin{equation*}\mathbb{E}[ |J_{2}| ^{2}] \leq\int_{0}^{t^{\prime}}\lambda_{\gamma} (t,t^{\prime},\eta (s) ) \bigl(1+\mathbb{E}\bigl[ \big|\bar{X}_{\eta (s) }^{h,f}\big| ^{2}\bigr]\bigr)\,{\textrm{d}} s\leq C_{T}\int_{0}^{t^{\prime}}\lambda_{\gamma} (t,t^{\prime},s) \,{\textrm{d}} s\leq C_{T,\gamma} |t-t^{\prime}| ^{\gamma},\end{equation*}

where in the second inequality we used $\lambda_{\gamma} (t,t^{\prime},\eta (s) ) \leq\lambda_{\gamma} (t,t^{\prime},s) $, which holds because $\lambda_{\gamma}$ is essentially a negative fractional power of $(t^{\prime}-s)$ and $\eta (s) \leq s$, together with the bound $\mathbb{E}[ |\bar{X}_{t}^{h,f}| ^{2}]\leq C_{T}$, $0\leq t\leq T$, which we have just proved above. Combining the bounds for $\mathbb{E}[ |J_{1}| ^{2}]$ and $\mathbb{E}[ |J_{2}| ^{2}]$, the result follows.

A.6. Proof of Theorem 4.2

Proof. Define

\begin{equation*}\delta_{t}\coloneqq X_{t}^{h,f}-\bar{X}_{t}^{h,f},\quad\varphi (t) \coloneqq \sup_{0\leq s\leq t}\mathbb{E}[ |\delta_{s}| ^{2}],\quad t\in [0,T] .\end{equation*}

For any $t\in [0,T] $, we can write

\begin{align*}\delta_{t} & =\int_{0}^{t}\Bigl({\textrm{e}}^{-f \big(t,X_{s}^{h,f}\big) (t-s) } (t-s) ^{h\big(t,X_{s}^{h,f}\big)-{{1}/{2}}}- {\textrm{e}}^{-f \big(t,\bar{X}_{\eta (s) }^{h,f}\big) (t-\eta (s) ) } (t-\eta (s) ) ^{h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -{{1}/{2}}}\Bigr)\,{\textrm{d}} B_{s}\\[3pt] & =\int_{0}^{t}\Bigl({\textrm{e}}^{-f \big(t,X_{s}^{h,f}\big) (t-s) } (t-s) ^{h\big(t,X_{s}^{h,f}\big)-{{1}/{2}}}- {\textrm{e}}^{-f \big(t,X_{s}^{h,f}\big) (t-s) } (t-s) ^{h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -{{1}/{2}}}\Bigr)\,{\textrm{d}} B_{s}\\[3pt] & \quad\, +\int_{0}^{t}\Bigl({\textrm{e}}^{-f \big(t,X_{s}^{h,f}\big) (t-s) } (t-s) ^{h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -{{1}/{2}}}- {\textrm{e}}^{-f \big(t,\bar{X}_{\eta (s) }^{h,f}\big) (t-\eta (s) ) } (t-\eta (s) ) ^{h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) -{{1}/{2}}}\Bigr)\,{\textrm{d}} B_{s}\\[3pt] & \eqqcolon I_{1} (t) +I_{2} (t) .\end{align*}

First we bound the second moment of $I_{1} (t) $ in terms of a certain integral of $\varphi$. Using the Itô isometry, (3.2), and the Lipschitz property of h, we get

\begin{align*}\mathbb{E}[ |I_{1} (t) | ^{2}] & \leq\int_{0}^{t}k (t,s) (\log (t-s) ) ^{2}\mathbb{E}\bigl[ \big(h \big(t,X_{s}^{h,f}\big) -h \big(t,\bar{X}_{\eta (s) }^{h,f}\big) \big) ^{2}\bigr]\,{\textrm{d}} s\\[3pt] & \leq C_{T,\delta}\int_{0}^{t} (t-s) ^{2 (h_{*}-\delta) -1}\mathbb{E}\bigl[ \big|X_{s}^{h,f}-\bar{X}_{\eta (s) }^{h,f}\big| ^{2}\bigr]\,{\textrm{d}} s,\end{align*}

for $\delta>0,$ arbitrarily small. By adding and subtracting $\bar{X}_{s}^{h,f}$, elementary computations show that

\begin{equation*}\mathbb{E}\bigl[ \big|X_{s}^{h,f}-\bar{X}_{\eta (s) }^{h,f}\big| ^{2}\bigr]\leq2\varphi (s) +2\mathbb{E}\bigl[ \big|\bar{X}_{s}^{h,f}-\bar{X}_{\eta (s) }^{h,f}\big| ^{2}\bigr].\end{equation*}

Moreover, the bound (4.4) of Lemma 4.1 yields

\begin{equation*}\int_{0}^{t} (t-s) ^{2 (h_{*}-\delta) -1}\mathbb{E}\bigl[ \big|\bar{X}_{s}^{h,f}-\bar{X}_{\eta (s) }^{h,f}\big| ^{2}\bigr]\,{\textrm{d}} s\leq C_{T}\dfrac{T^{2 (h_{*}-\delta) }}{2 (h_{*}-\delta) } |\Delta t| ^{\gamma}.\end{equation*}

Therefore, choosing $\delta={{{h_{*}}}/{2}}$, we have

(A.10) \begin{equation}\mathbb{E}[ |I_{1}| ^{2}]\leq C_{T,h_{*}}\biggl\{ \int_{0}^{t} (t-s) ^{h_{*}-1}\varphi (s) \,{\textrm{d}} s+ |\Delta t| ^{\gamma}\biggr\} .\end{equation}

Next we find a bound for the second moment of $I_{2} (t) $. Again using the Itô isometry, (3.4) and (3.5), and Lemma 4.1, we can write

(A.11) \begin{equation}\mathbb{E}[ |I_{2}| ^{2}]\leq\int_{0}^{t}\lambda_{\gamma} (t+ (s-\eta (s) ) ,t,s) \bigl(1+\mathbb{E}\bigl[ \big|\bar{X}_{\eta (s) }^{h,f}\big| ^{2}\bigr]\bigr)\,{\textrm{d}} s\leq C_{T,\gamma} |\Delta t| ^{\gamma},\end{equation}

for any $\gamma<2h_{*}$, and where we have used that

\begin{equation*}\mathbb{E}[ |\bar{X}_{s}^{h,f}| ^{2}]\leq C_{T},\quad0\leq s\leq T.\end{equation*}

Combining (A.10) and (A.11), we obtain

\begin{equation*}\varphi (t) \leq C_{T,\gamma,h_{*}}\biggl\{ \int_{0}^{t} (t-s) ^{h_{*}-1}\varphi (s) \,{\textrm{d}} s+ |\Delta t| ^{\gamma}\biggr\} .\end{equation*}

Using Theorem 4.1 with $a (t) =C_{T,\gamma,h_{*}} |\Delta t| ^{\gamma}$, $g (t) =C_{T,\gamma,h_{*}}$ and $\beta=h_{*}$, we can conclude that

\begin{align*}\varphi (T) \leq C_{T,\gamma,h_{*}}E_{h_{*}} (C_{T,\gamma,h_{*}}\Gamma (h_{*}) T^{h_{*}}) |\Delta t| ^{\gamma}.\end{align*}

References

[1] Barndorff-Nielsen, O. E., Benth, F. E. and Veraart, A. E. D. (2018). Ambit Stochastics (Probability Theory and Stochastic Modelling). Springer, Cham.
[2] Barrière, O., Echelard, A. and Véhel, J. L. (2012). Self-regulating processes. Electron. J. Prob. 17, 1–30.
[3] Benassi, A., Jaffard, S. and Roux, D. (1997). Elliptic Gaussian random processes. Rev. Mat. Iberoamericana 13, 19–90.
[4] Bianchi, S. and Pianese, A. (2015). Asset price modeling: from fractional to multifractional processes. In Future Perspectives in Risk Models and Finance (International Series in Operations Research & Management Science 211), pp. 247–285. Springer, Cham.
[5] Bianchi, S., Pantanella, A. and Pianese, A. (2013). Modeling stock prices by multifractional Brownian motion: an improved estimation of the pointwise regularity. Quant. Finance 13, 1317–1330.
[6] Cont, R. and Tankov, P. (2004). Financial Modelling with Jump Processes (Chapman & Hall/CRC Financial Mathematics Series). Chapman & Hall/CRC, Boca Raton.
[7] Corlay, S., Lebovits, J. and Véhel, J. L. (2014). Multifractional stochastic volatility models. Math. Finance 24, 364–402.
[8] El Euch, O. and Rosenbaum, M. (2019). The characteristic function of rough Heston models. Math. Finance 29, 3–38.
[9] Gatheral, J., Jaisson, T. and Rosenbaum, M. (2018). Volatility is rough. Quant. Finance 18, 933–949.
[10] Hawkes, A. G. (2018). Hawkes processes and their applications to finance: a review. Quant. Finance 18, 193–198.
[11] Karatzas, I. and Shreve, S. E. (1998). Brownian Motion and Stochastic Calculus, 2nd edn. Springer.
[12] Lebovits, J. and Véhel, J. L. (2014). White noise-based stochastic calculus with respect to multifractional Brownian motion. Stochastics 86, 87–124.
[13] Peltier, R.-F. and Véhel, J. L. (1995). Multifractional Brownian motion: definition and preliminary results. Research Report RR-2645, INRIA, Projet FRACTALES.
[14] Pesquet-Popescu, B. and Véhel, J. L. (2002). Stochastic fractal models for image processing. IEEE Signal Process. Magazine 19, 48–62.
[15] Pianese, A., Bianchi, S. and Palazzo, A. M. (2018). Fast and unbiased estimator of the time-dependent Hurst exponent. Chaos 28, 031102.
[16] Riedi, R. and Véhel, J. L. (1997). Multifractal properties of TCP traffic: a numerical study. Research Report RR-3129, INRIA, Projet FRACTALES.
[17] Sornette, D. and Filimonov, V. (2011). Self-excited multifractal dynamics. Europhys. Lett. 94, 46003.
[18] Ye, H., Gao, J. and Ding, Y. (2007). A generalized Gronwall inequality and its application to a fractional differential equation. J. Math. Anal. Appl. 328, 1075–1081.
[19] Zhang, X. (2008). Euler schemes and large deviations for stochastic Volterra equations with singular kernels. J. Differential Equat. 244, 2226–2250.
[20] Zhang, X. (2010). Stochastic Volterra equations in Banach spaces and stochastic partial differential equation. J. Funct. Anal. 258, 1361–1425.