
Martingale decomposition of an L2 space with nonlinear stochastic integrals

Published online by Cambridge University Press:  11 December 2019

Clarence Simard*
Affiliation:
Université du Québec à Montréal
* Postal address: Département de Mathématiques, Université du Québec à Montréal, C.P. 8888, succ. Centre-ville, Montréal (Québec), H3C 3P8, Canada.

Abstract

This paper generalizes the Kunita–Watanabe decomposition of an $L^2$ space. The generalization comes from using nonlinear stochastic integrals where the integrator is a family of continuous martingales bounded in $L^2$. This result is also the solution of an optimization problem in $L^2$. First, martingales are assumed to be stochastic integrals. Then, to get the general result, it is shown that the regularity of the family of martingales with respect to its spatial parameter is inherited by the integrands in the integral representation of the martingales. Finally, an example shows how the results of this paper, combined with the Clark–Ocone formula, can be applied to polynomial functions of Brownian integrals.

Type
Research Papers
Copyright
© Applied Probability Trust 2019 

1. Introduction

Nonlinear stochastic integrals, sometimes called stochastic line integrals, are stochastic integrals where the integrator is a family of semimartingales. Let $\{M(x);\, x\in E\subset \mathbb{R} \}$ be a family of semimartingales indexed by x (x is often called the spatial parameter). Let $\xi$ be a predictable process with values in E; $\int M({\rm d} t, \xi_t)$ denotes the nonlinear stochastic integral of $\xi$ with respect to M. To get an intuitive understanding, let $\xi$ be a simple predictable process defined by $\xi_t = \xi_{t_k}$ for $t_{k-1} <t\leq t_k$, where $\xi_{t_k}$ is $\mathcal{F}_{t_{k-1}}$-measurable, $t_0=0<t_1<t_2<\cdots$, and $\{\mathcal{F}_t\}_{t\geq0}$ is a filtration; then $\int_0^TM({\rm d} s, \xi_s)=\sum_{k\geq 1}\{ M(t_k\wedge T, \xi_{t_k}) - M(t_{k-1}\wedge T, \xi_{t_k})\}$. For continuous predictable processes the integral is defined like the standard stochastic integral; see [8].
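To make the definition concrete, here is a minimal numerical sketch (ours, not from the paper): it approximates $\int_0^T M({\rm d} t, \xi_t)$ by the Riemann sum above for a simple predictable $\xi$, using the illustrative family $M(t,x) = {\rm e}^{xW_t - tx^2/2}-1$ that reappears in Example 4.1. The grid, the block structure of $\xi$, and all parameter values are assumptions made for the demonstration.

```python
# A minimal numerical sketch (not from the paper): approximate the nonlinear
# stochastic integral of a simple predictable process by the Riemann sum above,
# using the illustrative family M(t, x) = exp(x*W_t - t*x**2/2) - 1 (the family
# of Example 4.1). Grid, block structure, and parameters are all assumptions.
import numpy as np

rng = np.random.default_rng(0)

T, n_steps = 1.0, 1000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# One Brownian path W on the grid.
dW = rng.normal(0.0, np.sqrt(dt), n_steps)
W = np.concatenate([[0.0], np.cumsum(dW)])

def M(k, x):
    """M(t_k, x) = exp(x W_{t_k} - t_k x^2 / 2) - 1 on the grid."""
    return np.exp(x * W[k] - t[k] * x**2 / 2.0) - 1.0

# Simple predictable integrand: xi is constant on blocks of 100 steps and each
# block's value uses only information available at the start of the block.
block = 100
integral = 0.0
for k in range(1, n_steps + 1):
    k0 = ((k - 1) // block) * block       # index where the current block starts
    xi = 1.0 + 0.5 * np.sign(W[k0])       # F_{t_{k0}}-measurable choice
    integral += M(k, xi) - M(k - 1, xi)   # increment of s -> M(s, xi) over step

print(f"Riemann-sum approximation of int_0^T M(dt, xi_t): {integral:.4f}")
```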

This stochastic integral has been defined for different families of semimartingales in [4]. Those results can also be found in [5, 6]. More recently, [7] extended the integral to a family of semimartingales which is not necessarily continuous with respect to the spatial parameter. Detailed constructions of nonlinear stochastic integrals can also be found, under different assumptions, in [2, 8].

Applications of nonlinear stochastic integrals can be found in mathematical finance, more specifically in the modeling of illiquid markets. In those models, a nonlinear stochastic integral defines the value of the trading portfolio when the cash flow of a transaction is a nonlinear function of the number of shares traded. See [1, 3] for examples of financial applications. The generalization of the Kunita–Watanabe decomposition obtained in this paper could lead to an extension of the theory of quadratic hedging developed in [11].

In this paper we look at the problem of approximating a random variable with nonlinear stochastic integrals. This problem is written as

$$\begin{equation*} \inf_{\theta \in \mathcal{I}^M}\mathbb{E}\bigg[ \bigg( H - \int M({\rm d} t,\theta_t)\bigg)^2\bigg] , \end{equation*}$$

where H is a square-integrable random variable, $\{M(x);\, x\in \mathbb{R}\}$ is a family of martingales, and $\mathcal{I}^M$ is a set of integrands. Without additional assumptions on $\{M(x)\}$, this is usually an optimization problem over a non-convex set; therefore, neither the uniqueness nor even the existence of a solution is guaranteed. The main result of this paper characterizes the solution of this problem, when it exists, and discusses sufficient conditions for existence and uniqueness. It will be shown that this characterization is a generalization of the Kunita–Watanabe decomposition [9].

The remainder of this paper is organized in the following way. The next two sections establish the conditions of the probability space, introduce some notations, and state the optimizing problem to be solved. Section 4 presents the solution of the optimization problem under the simplifying assumption that the martingale family is defined by a known family of stochastic integrals. The generalization of the Kunita–Watanabe decomposition is also discussed in this section. Section 5 provides the preliminary results required for the general version of the main theorem, which is found in Section 5.2. Finally, Section 6 contains an example.

2. Probability space and notations

Let ${(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})}$ be a filtered probability space, where $\mathcal{F}\,:\!=\{\mathcal{F}_t; \, t\geq 0\}$ is a right-continuous filtration, complete with respect to $\mathbb{P}$, and where $\mathbb{F} = \lim_{t\to \infty} \mathcal{F}_t$ is the smallest $\sigma$-algebra containing all the $\mathcal{F}_t$. The results of this paper require that all adapted processes are continuous; therefore, it is assumed that for any stopping time T and any sequence of stopping times $\{T_n\}$ with $T_n \uparrow T$, we have ${\vee_n \mathcal{F}_{T_n} = \mathcal{F}_T}$.

First, define

$$\begin{equation*}\mathcal{M}=\{ X \colon [0,\infty)\times \Omega \rightarrow \mathbb{R} ; \, X \text{ is a } \text{continuous martingale adapted to } \mathcal{F}\};\end{equation*}$$

that is, for $X\in \mathcal{M}$ , $t\rightarrow X_t$ is continuous almost surely (a.s.), $\mathbb{E}[|X_t|]<\infty $ for all $t \geq 0$ , and $\mathbb{E}[X_t \mid \mathcal{F}_s] = X_s$ for all $0\leq s \leq t$ . Then, let

$$\begin{equation*}L^2(\mathbb{P}) = \{Y \colon \Omega \rightarrow \mathbb{R}; \, Y \text{ is } \mathbb{F}\text{-measurable and } ||Y||_{L^2}<\infty \}\end{equation*}$$

be the space of square-integrable random variables where $||Y||_{L^2}^2 = \mathbb{E}[Y^2]$ . In this paper it is assumed that martingales are continuous and bounded in $L^2(\mathbb{P})$ ; that is, they belong to the set

$$\begin{equation*}\mathcal{M}^2=\{X\in \mathcal{M}; \, \mathbb{E}[\sup_{t\geq 0}(X_t)^2]<\infty \}.\end{equation*}$$

However, without loss of generality, the results will be stated for

$$\begin{equation*}\mathcal{M}_0^2=\{X\in \mathcal{M}^2; \, X_0 \equiv 0 \}\end{equation*}$$

and the space $L^2_0(\mathbb{P}) = \{X \in L^2(\mathbb{P}); \, \mathbb{E}[X] = 0\}$. The general case is recovered by translating processes and random variables. The following convention is also established: since there is an isometry between random variables in $L^2(\mathbb{P})$ (resp. $L^2_0(\mathbb{P})$) and martingales in $\mathcal{M}^2$ (resp. $\mathcal{M}_0^2$), the same notation will be used to denote an element of $L^2(\mathbb{P})$ (resp. $L^2_0(\mathbb{P})$) and the corresponding element of $\mathcal{M}^2$ (resp. $\mathcal{M}^2_0$). For instance, for $X\in L^2(\mathbb{P})$, X denotes the random variable as well as the almost sure limit, $X=\lim_{t\to \infty}X_t$, of the martingale $X\,:\!=\{X_t; \, t\geq 0\}\in \mathcal{M}^2$. Finally, it is assumed that ${(\Omega, \mathbb{F})}$ is separable, and therefore there exists a countable basis for $L^2(\mathbb{P})$.

3. Optimization problem

In this section the family of martingales $\{M(x); \, x\in \mathbb{R}\}$ used as an integrator is defined and the optimizing problem is stated.

Following the assumptions on the probability space ${(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})}$ , every martingale in $\mathcal{M}^2$ can be written as a stochastic integral. For this part of the paper it is assumed that the integral representation of the family of martingales is known. Consequently, conditions can be directly imposed on the integrands, which simplifies the presentation of the main result.

The family of martingales is defined as

$$\begin{align*} M \colon [0,\infty) \times \mathbb{R} \times \Omega &\rightarrow \mathbb{R} , \\ (t,x,\omega) &\mapsto M(t,x,\omega) , \end{align*}$$

such that M(x) is in $\mathcal{M}^2_0$ for each $x \in \mathbb{R}$ . Following the previous convention, $M(x)\in L^2_0(\mathbb{P})$ where $M(x)=\lim_{t\to \infty} M(t,x)$ a.s.

The problem solved in what follows is the approximation of a square-integrable random variable $H\in L^2_0(\mathbb{P})$ by the stochastic integral $\int M({\rm d} s,\theta_s)$. In other words, the problem to solve is

(3.1) $$\inf_{\theta \in \mathcal{I}^M}\bigg|\bigg| H - \int M({\rm d} s,\theta_s) \bigg|\bigg|_{L^2},$$

where $\mathcal{I}^M$ is a set of suitable integrands which is yet to be defined. Intuitively, this problem can be seen as minimizing the distance from a point H to a curve $\int M({\rm d} s,\theta_s)$ in a linear space.

From the conditions imposed on ${(\Omega, \mathcal{F},\mathbb{F},\mathbb{P})}$ , there exists a martingale $B\in \mathcal{M}^2_0$ and a family of continuous predictable processes $\{\mu(x); \, x\in \mathbb{R}\}$ ,

$$\begin{align} \mu\,:\, [0,\infty)\times \mathbb{R}\times \Omega &\rightarrow \mathbb{R} , \notag \\ (t,x,\omega) &\mapsto \mu(t,x,\omega), \notag \end{align}$$

such that

(3.2) $$\begin{equation} M(x) =\int \mu(t,x) \, {\rm d} B_t. \label{eqn2} \end{equation}$$

The representation in (3.2) is a direct application of the Kunita–Watanabe decomposition and the existence of a countable basis for $\mathcal{M}^2$ . The latter is a consequence of the assumption of separability of the space ${(\Omega,\mathbb{F})}$ . Following the integral representation of $M \,:\!= \{M(x); \, x\in \mathbb{R}\}$ , we find a more tractable expression for the nonlinear stochastic integral with respect to M:

$$\begin{equation*} \int M({\rm d} t,\theta_t) = \int \mu(t,\theta_t) \, {\rm d} B_t. \end{equation*}$$
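As a sanity check (this verification is ours; the paper states the identity directly), for a simple predictable $\theta$ with $\theta_t=\theta_{t_k}$ on $(t_{k-1},t_k]$ the identity is immediate:

$$\begin{equation*} \int_0^T M({\rm d} t,\theta_t) = \sum_{k\geq 1}\{ M(t_k\wedge T, \theta_{t_k}) - M(t_{k-1}\wedge T, \theta_{t_k})\} = \sum_{k\geq 1}\int_{t_{k-1}\wedge T}^{t_k\wedge T} \mu(s,\theta_{t_k})\,{\rm d} B_s = \int_0^T \mu(s,\theta_s)\,{\rm d} B_s , \end{equation*}$$

and the general case follows by approximating $\theta$ with simple processes.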

4. Main results

The main result requires the function $x\mapsto \mu(t,x)$ in the representation (3.2) to be smooth enough. While it is a simple condition to impose, it is less so to deduce the properties of $\mu(x)$ from M(x). In this section, in order to provide a clearer presentation, the main results are established under the assumption that the integrand $\mu(x)$ in the integral representation (3.2) is known. However, this assumption is not satisfactory in the general case since the Kunita–Watanabe decomposition only gives the existence of the integrand in the integral representation. The general case requires results that are provided in Section 5.1.

Let $\mathcal{I}^M$ be defined as

$$\begin{equation*}\mathcal{I}^M = \bigg\{\theta; \, \theta \text{ is predictable and } \int M({\rm d} s,\theta_s), \int \frac{\partial}{\partial x}M({\rm d} s,\theta_s) \in L^2(\mathbb{P})\bigg\}\end{equation*}$$

and let

$$\begin{equation*}\mathcal{H}^M = \bigg\{\!\int M({\rm d} s,\theta_s) ; \, \theta \in \mathcal{I}^M \bigg\}.\end{equation*}$$

The next theorem characterizes the solution to (3.1).

A simple condition for the existence of a solution is that $\mathcal{H}^M$ is closed and, since it is assumed that the integral representation (3.2) is known, we can simply assume that $\mu(t,\mathbb{R})$ is closed a.s. for all $t\geq 0$ . Note that in the following, $\mathcal{C}^{m}$ is the set of m-times continuously differentiable functions.
Theorem 4.1. (Main theorem.) Let $\mu\,:\!=\{\mu(t,x); \, t\geq 0, \, x\in \mathbb{R}\}$ be a family of continuous predictable processes such that $x\mapsto \mu(t,x)$ is a.s. in $\mathcal{C}^1$ for all $t\geq 0$, and let $H\in L^2_0(\mathbb{P})$. Assume that for all $x\in \mathbb{R}$, $M(x)\,:\!=\int\mu(t,x) \, {\rm d} B_t \in L^2_0(\mathbb{P})$ and $\frac{\partial}{\partial x}M(x) \,:\!= \int \frac{\partial}{\partial x}\mu(t,x) \, {\rm d} B_t \in L^2(\mathbb{P})$, and that $\mu(t,\mathbb{R})$ is closed a.s. for all $t\geq 0$. Then there exists $\theta^H \in \mathcal{I}^M$ such that

$$\begin{equation*} H = \int M({\rm d} t,\theta^H_t) + L^H , \end{equation*}$$

where $L^H = H-\int M({\rm d} t,\theta^H_t)$ is orthogonal to $\int \frac{\partial}{\partial x}M({\rm d} t,\theta^H_t)$ , i.e.

(4.1) $$\begin{equation} \mathbb{E}\bigg[ \bigg( H - \int M({\rm d} t,\theta^H_t)\bigg)\int \frac{\partial}{\partial x} M({\rm d} t,\theta^H_t) \bigg] =0. \label{eqn3} \end{equation}$$

Moreover,

$$\bigg|\bigg| H - \int M({\rm d} t,\theta^H_t) \bigg|\bigg|_{L^2} = \inf_{\theta \in \mathcal{I}^M} \bigg|\bigg| H - \int M({\rm d} t,\theta_t) \bigg|\bigg|_{L^2} .$$
Remark 4.1. In Theorem 4.1, the condition that $\mu(t,\mathbb{R})$ is closed is only required to assess the existence of the solution in the general setting. For an explicit problem, if the characterization in Theorem 4.1 gives rise to a minimizing integrand in $\mathcal{I}^M$, then the closedness is irrelevant.

Proof. First, take a sequence $\{X^n\}\subset \mathcal{H}^M$ such that

$$\| H - X^n \|_{L^2} \rightarrow \inf_{X\in \mathcal{H}^M} \| H - X \|_{L^2} .$$

Thanks to the parallelogram law [12, Chapter 6], the minimizing sequence $\{X^n\}$ is Cauchy, so there exists $X^H$ such that $X^n \rightarrow X^H$ in $L^2(\mathbb{P})$. From the assumption that $\mu(t,\mathbb{R})$ is closed, hence that $\mathcal{H}^M$ is closed, there exist $\theta^n,\theta^H\in \mathcal{I}^M$ such that $\int M({\rm d} t, \theta^n_t) = X^n$ and $\int M({\rm d} t,\theta^H_t)=X^H$.

Let $F(\varepsilon) = \big|\big| H - \int M({\rm d} t,\theta^H_t+\varepsilon) \big|\big|_{L^2}^2$; since $\theta^H$ is a minimizer, F attains its minimum at $\varepsilon=0$, so $\frac{{\rm d}}{{\rm d}\varepsilon}F(\varepsilon)\big|_{\varepsilon = 0} = 0$. We now need to show that we get the same result by differentiating inside the norm.

Define the sequence of stopping times

$$\begin{equation*} \tau_n = \inf\bigg\{t>0 ; \sup_{\{\phi\}, |\phi_s|<1}\bigg|\int_0^t\frac{\partial}{\partial x}M({\rm d} s,\theta^H_s + \phi_s)- \int_0^t \frac{\partial}{\partial x}M({\rm d} s,\theta^H_s) \bigg|\geq n \bigg\}, \end{equation*}$$

where the supremum is taken over predictable processes $\{\phi\}$ with $|\phi_s|<1$ for all $s\geq 0$ . For each n,

(4.2) $$\begin{align} &\lim_{\varepsilon \to 0}\mathbb{E}\bigg[ \int_0^{t\wedge \tau_n} \bigg( \frac{\mu(s,\theta^H_s +\varepsilon) - \mu(s,\theta^H_s)}{\varepsilon} - \frac{\partial }{\partial x}\mu(s,\theta^H_s) \bigg)^2{\rm d} [B]_s\bigg] \notag \\ & \quad = \lim_{\varepsilon \to 0}\mathbb{E}\bigg[ \int_0^{t\wedge \tau_n} \bigg( \frac{\partial}{\partial x}\mu(s,\theta^H_s +\phi_s(\varepsilon)) - \frac{\partial }{\partial x}\mu(s,\theta^H_s) \bigg)^2{\rm d} [B]_s\bigg] \notag \\ & \quad =\mathbb{E}\bigg[ \lim_{\varepsilon \to 0} \bigg(\int_0^{t\wedge \tau_n}\frac{\partial}{\partial x}M({\rm d} s,\theta^H_s +\phi_s(\varepsilon)) - \int_0^{t\wedge \tau_n} \frac{\partial}{\partial x}M({\rm d} s,\theta^H_s) \bigg)^2\bigg] =0, \label{eqn4}\end{align}$$

where the process $\phi(\varepsilon)$ is predictable and $\phi_s(\varepsilon) \in (\theta^H_s -\varepsilon, \theta^H_s+\varepsilon)$. We see that, for each n, $\varepsilon \mapsto \int_0^{t\wedge \tau_n}M({\rm d} s,\theta^H_s+\varepsilon)$ is differentiable at $\varepsilon = 0$ in $L^2(\mathbb{P})$.

Now, from the Kunita–Watanabe decomposition, there exists a predictable process h and an $L^2(\mathbb{P})$-martingale $\lambda^H$ such that $H = \int h_s\,{\rm d} B_s + \lambda^H$. Since $\lambda^H$ is strongly orthogonal to B, it is also strongly orthogonal to $\int M({\rm d} s, \theta_s)$ and $\int \frac{\partial }{\partial x}M({\rm d} s,\theta_s)$ for any $\theta \in \mathcal{I}^M$. Therefore, we can set $\lambda^H \equiv 0$ without loss of generality.

Let $F_n(t,\varepsilon) = \big|\big| \int_0^{t\wedge \tau_n}(h_s - \mu(s,\theta^H_s + \varepsilon) )\,{\rm d} B_s \big|\big|^2_{L^2}$. Then, using (4.2), we have

(4.3) $$\begin{equation} \frac{\partial}{\partial \varepsilon}F_n(t,\varepsilon)\bigg|_{\varepsilon = 0} = (-2)\mathbb{E}\bigg[\int_0^{t\wedge \tau_n}(h_s - \mu(s,\theta^H_s) )\frac{\partial }{\partial x}\mu(s,\theta^H_s) \, {\rm d} [B]_s \bigg] = 0. \label{eqn5} \end{equation}$$

Finally, from the Cauchy–Schwarz inequality and the fact that $\theta^H\in \mathcal{I}^M$,

$$\begin{align} \mathbb{E}\bigg[\bigg(H-\int M({\rm d} s,\theta^H_s)\bigg) \int \frac{\partial}{\partial x}M({\rm d} s,\theta^H_s) \bigg] \leq \bigg\| H-\int M({\rm d} s,\theta^H_s) \bigg\|_{L^2} \bigg\| \int \frac{\partial}{\partial x}M({\rm d} s,\theta^H_s) \bigg\|_{L^2} <\infty . \notag \end{align}$$

This inequality allows us to take the limit with respect to n and t in (4.3) to show that

$$\mathbb{E}\bigg[\bigg(H-\int M({\rm d} s,\theta^H_s)\bigg) \int \frac{\partial}{\partial x}M({\rm d} s,\theta^H_s) \bigg] =0,$$

which concludes the proof.

As opposed to the Kunita–Watanabe decomposition, the characterization of $\theta^H$ in Theorem 4.1 does not guarantee that a process satisfying (4.1) is the minimizing integrand. Since the convexity of $\mathcal{H}^M$ is not assumed, a process satisfying (4.1) could give a local minimum. A simple condition for the set $\mathcal{H}^M$ to be convex is that the function $x\mapsto \mu(t,x)$ is a.s. convex for all $t\geq 0$ .

In the case where the Kunita–Watanabe decomposition of H is known, the next corollary shows that the characterization of $\theta^H$ becomes an almost sure characterization.

Corollary 4.1. Assume the conditions of Theorem 4.1 are satisfied and that $\mathcal{I}^B\subset \mathcal{I}^M$ , where $\mathcal{I}^B = \{ \theta \colon \theta \text{ is predictable and uniformly bounded}\}$ . Let H have the Kunita–Watanabe decomposition $H = \int h_s \, {\rm d} B_s + \lambda^H$ . Then

$$\begin{equation*} ( h_s- \mu(s,\theta^H_s)) \frac{\partial}{\partial x}\mu(s,\theta^H_s) \equiv 0 \end{equation*}$$

for all $s \notin \Gamma$ , where $\int \mathbf{1}_{\Gamma}(t) \, {\rm d} [B]_t = 0$ .
Proof. It is clear that

$$\mathcal{H}^\prime = \bigg\{\!\int \theta_s \frac{\partial}{\partial x}M({\rm d} s, \theta^H_s); \, \theta \in \mathcal{I}^B\bigg\} \subset L^2(\mathbb{P}),$$

since $\theta$ is bounded and $\int \frac{\partial}{\partial x}M({\rm d} s, \theta^H_s)$ is in $L^2(\mathbb{P})$ by assumption. By using the definition of the orthogonal projection on a linear space, we have that

$$\begin{align} &\bigg\| \int h_s \, {\rm d} B_s - \bigg(\int M({\rm d} s,\theta^H_s) +\int \alpha_s \frac{\partial }{\partial x}M({\rm d} s,\theta^H_s) \bigg) \bigg\|^2_{L^2} \notag \\ &\quad = \bigg\| \int h_s \, {\rm d} B_s - \bigg(\int M({\rm d} s,\theta^H_s) -\int \alpha_s \frac{\partial }{\partial x}M({\rm d} s,\theta^H_s) \bigg) \bigg\|^2_{L^2} \notag \end{align}$$

for all $\alpha \in \mathcal{I}^B$ . This equality is equivalent to

$$\begin{align} \mathbb{E}\bigg[\int (h_s - \mu(s, \theta^H_s) )\,\alpha_s \frac{\partial}{\partial x}\mu(s,\theta^H_s) \, {\rm d}[B]_s \bigg] &= \mathbb{E}\bigg[ \int (h_s - \mu(s,\theta^H_s) )\frac{\partial }{\partial x}\mu(s,\theta^H_s) \, {\rm d} B_s \int \alpha_s \, {\rm d} B_s\bigg] \notag \\ &= 0 \notag \end{align}$$

for all $\alpha \in \mathcal{I}^B$ . Since the set $\{\int \alpha_s \, {\rm d} B_s; \, \alpha \in \mathcal{I}^B\}$ is dense in $\{ X\in L^2(\mathbb{P}); \, X = \int \theta_s \, {\rm d} B_s\}$ with respect to the $L^2(\mathbb{P})$ norm, we have $\int (h_s -\mu(s,\theta^H_s) )\frac{\partial }{\partial x}\mu(s,\theta^H_s) \, {\rm d} B_s \equiv 0$ . The proof is completed by taking the norm,

$$\begin{align} \bigg\| \int \{h_s -\mu(s,\theta^H_s) \}\frac{\partial }{\partial x}\mu(s,\theta^H_s)\,{\rm d} B_s \bigg\|^2_{L^2} = \mathbb{E}\bigg[\int \bigg\{(h_s -\mu(s,\theta^H_s)) \frac{\partial }{\partial x}\mu(s,\theta^H_s)\bigg\}^2{\rm d}[B]_s \bigg] = 0. \qquad \square \notag \end{align}$$

Theorem 4.1 and Corollary 4.1 give a generalization of the Kunita–Watanabe decomposition. Indeed, the result of Theorem 4.1 can be written as

$$\begin{equation*} H = \int M({\rm d} t,\theta^H_t) + L^H , \end{equation*}$$

where $L^H = H-\int \mu(t,\theta^H_t) \, {\rm d} B_t$ satisfies

$$\begin{equation*} \mathbb{E}\bigg[L^H\int \alpha_t \frac{\partial }{\partial x}M({\rm d} t,\theta^H_t) \bigg] = 0 \end{equation*}$$

for all bounded predictable processes $\alpha$. This last assertion, which is part of the proof of Corollary 4.1, shows that $L^H$ is strongly orthogonal to $\int \frac{\partial }{\partial x}M({\rm d} t,\theta^H_t)$. Finally, if we put $M(t,x)= xB_t$, that is, $\mu(t,x)=x$, then applying the above results leads to

$$\begin{equation*} H = \int \theta^H_t \, {\rm d} B_t + L^H , \end{equation*}$$

where $L^H$ is strongly orthogonal to B, which is the Kunita–Watanabe decomposition of H.

The following example is an application of Theorem 4.1 and Corollary 4.1 in a case where the minimizing integrand is known explicitly.

Example 4.1. Let $\mathcal{F}_t = \sigma\{W^{(1)}_s,W^{(2)}_s; \, 0\leq s \leq t\}$ , where $W^{(i)}$ , $i=1,2$ , are two independent Brownian motions and $\mathbb{F} = \mathcal{F}_T$ for some $T>0$ , and define $W_t = \rho W^{(1)}_t + \sqrt{1-\rho^2}W^{(2)}_t$ for all $t\geq 0$ with $\rho \in (0,1)$ . Then, define the family of martingales $\{M(x)\}\subset \mathcal{M}_0^2$ by $M(t,x) = {\rm e}^{xW_t - t\frac{x^2}{2}}-1$ for $0 \leq t \leq T$ . Finally, let $H_T = \big(W^{(1)}_T\big)^2 -T= 2 \int_0^TW^{(1)}_s \, {\rm d} W^{(1)}_s$ . In the following, we find the predictable process $\theta^H$ which solves

(4.4) $$\begin{equation} \inf_{\theta \in \mathcal{I}^M}\mathbb{E}\bigg[\bigg(H_T - \int_0^TM({\rm d} s,\theta_s) \bigg)^2 \bigg]. \label{eqn6} \end{equation}$$

Using Itô’s formula we find that $M(t,x) = \int_0^t\{ M(s,x)+1 \}x\,{\rm d} W_s$ . Therefore, $\mu(t,x)=( M(t,x)+1 )x$ in the integral representation and we can check that $\mu(t,x)$ satisfies the conditions in Theorem 4.1.
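For completeness, here is the short computation behind this identity (our verification; the paper states the result directly). Write $M(t,x) = f(t,W_t)-1$ with $f(t,w) = {\rm e}^{xw - tx^2/2}$, so that

$$\begin{equation*} \frac{\partial f}{\partial t} = -\frac{x^2}{2}f, \qquad \frac{\partial f}{\partial w} = xf, \qquad \frac{\partial^2 f}{\partial w^2} = x^2f, \end{equation*}$$

and Itô's formula gives

$$\begin{equation*} {\rm d} M(t,x) = \bigg(\frac{\partial f}{\partial t} + \frac{1}{2}\frac{\partial^2 f}{\partial w^2}\bigg)(t,W_t)\,{\rm d} t + \frac{\partial f}{\partial w}(t,W_t)\,{\rm d} W_t = \{M(t,x)+1\}\,x\,{\rm d} W_t. \end{equation*}$$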

Differentiating with respect to x, we find that

$$\begin{equation*}\frac{\partial}{\partial x} M(t,x) = \int_0^t\{M(s,x)+1\}(xW_s-x^2s+1) \, {\rm d} W_s,\end{equation*}$$

which leads to

$$\begin{equation*} \int_0^T M({\rm d} s,\theta_s) = \int_0^T \{ M(s,\theta_s)+1 \}\,\theta_s \, {\rm d} W_s \end{equation*}$$

and

$$\begin{equation*} \int_0^T\frac{\partial}{\partial x}M({\rm d} s,\theta_s) = \int_0^T\{M(s,\theta_s)+1\}(\theta_s W_s - (\theta_s)^2s+1 ) \, {\rm d} W_s. \end{equation*}$$

At this point, we find the Kunita–Watanabe decomposition of $H_T$ with respect to W. This is done by performing an orthogonal projection on the linear space $\big\{ \int_0^T\theta_s \,{\rm d} W_s\big\} \subset L^2(\mathbb{P})$; since ${\rm d}\langle W^{(1)}, W\rangle_t = \rho \,{\rm d} t$, we find that $H_T = \int_0^T2 \rho W^{(1)}_s \, {\rm d} W_s + \lambda^H_T$. With the Kunita–Watanabe decomposition of $H_T$ known, we can use Corollary 4.1 to find that

(4.5) $$\begin{equation} (2\rho W^{(1)}_s - \theta^H_s \{M(s,\theta^H_s)+1\} )(\theta^H_sW_s-(\theta^H_s)^2s+1)\,\{M(s,\theta^H_s)+1\} \equiv 0. \label{eqn7} \end{equation}$$

Defining $\theta^H$ by $\theta^H_s \{M(s,\theta^H_s)+1\} = 2 \rho W^{(1)}_s$, that is, $\mu(s,\theta^H_s) = h_s$ for all s, then

$$\begin{equation*} \mathbb{E}\bigg[ \bigg(H_T - \int_0^TM({\rm d} s,\theta^H_s) \bigg)^2 \bigg] = \mathbb{E}[(\lambda^H_T)^2] , \end{equation*}$$

which shows that $\theta^H$ is the minimizer. As mentioned before, since we can verify directly that the solution to (4.5) solves the optimization problem, there is no need to check whether $\mathcal{H}^M$ is closed.
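As a numerical companion to this example (our sketch, not part of the paper), the following code simulates one path of $(W^{(1)}, W)$ and recovers $\theta^H_s$ pointwise by minimizing $|\mu(s,x) - h_s|$ over a grid; the grid argmin is the exact root of $\mu(s,\theta) = 2\rho W^{(1)}_s$ whenever $h_s$ lies in the range of $\mu(s,\cdot)$. All parameter values and the grid-search device are assumptions made for the demonstration.

```python
# Numerical sketch of Example 4.1 (ours; all parameter values are assumptions).
# Along one simulated path we recover theta^H_s from
#   mu(s, theta) = h_s,   mu(s, x) = {M(s, x) + 1} x = x exp(x W_s - s x^2/2),
#   h_s = 2 rho W1_s,
# by a pointwise grid search: the argmin below is the exact root whenever h_s
# lies in the range of mu(s, .).
import numpy as np

rng = np.random.default_rng(1)
T, n, rho = 1.0, 500, 0.6
dt = T / n
s = np.linspace(dt, T, n)

dW1 = rng.normal(0.0, np.sqrt(dt), n)
dW2 = rng.normal(0.0, np.sqrt(dt), n)
W1 = np.cumsum(dW1)
W = rho * W1 + np.sqrt(1.0 - rho**2) * np.cumsum(dW2)

def mu(t, w, x):
    # mu(t, x) = {M(t, x) + 1} x  with  M(t, x) = exp(x*w - t*x**2/2) - 1
    return x * np.exp(x * w - t * x**2 / 2.0)

xs = np.linspace(-10.0, 10.0, 8001)   # search grid for theta
theta_H = np.empty(n)
for k in range(n):
    gap = np.abs(mu(s[k], W[k], xs) - 2.0 * rho * W1[k])
    theta_H[k] = xs[np.argmin(gap)]

# Wherever the target is attainable, mu(s, theta^H_s) = h_s exactly, so the
# residual H_T - int M(ds, theta^H_s) reduces to lambda^H_T.
resid = np.abs(mu(s, W, theta_H) - 2.0 * rho * W1)
print(f"median |mu(s, theta^H_s) - h_s| along the path: {np.median(resid):.2e}")
```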

5. General case

In the first part of the paper it was assumed that the family of processes $\mu$ in the integral representation of the martingales $\{M(x); \, x\in \mathbb{R}\}$ was known. However, in general we only have the existence of the integrand. Therefore, the fact that the assumptions required in Theorem 4.1 are in terms of $\mu$ lessens the practicality of the result.

In this part of the paper conditions are stated in terms of $M\,:\!=\{M(x); \, x\in \mathbb{R}\}$ . Indeed, using the Kunita–Watanabe decomposition, we still have the existence of $\mu$ in the integral representation, but some properties are required to get the result. In order to generalize Theorem 4.1, the next section is devoted to establishing different relationships between the properties of $\{M(x); \, x\in \mathbb{R}\}$ and the properties of the integrand in the integral representation. With those properties, the main results of Section 4 are recovered.

5.1. Regularity conditions

Recall that the family of martingales is defined by

$$\begin{align*} M \colon [0,\infty) \times \mathbb{R} \times \Omega &\rightarrow \mathbb{R} , \\ (t,x,\omega) &\mapsto M(t,x,\omega) \end{align*}$$

such that M(x) is in $\mathcal{M}^2_0$ for each $x \in \mathbb{R}$ . Using the separability assumption of $(\Omega, \mathbb{F})$ , there exists $B \in \mathcal{M}_0^2$ and a family of predictable processes $\{\mu(x); \, x \in \mathbb{R}\}$ such that

$$\begin{equation*}M(x) = \int \mu(s,x)\,{\rm d} B_s.\end{equation*}$$

The existence of $\mu(x)$ is a simple application of the Kunita–Watanabe decomposition for each x.

To establish the results of Theorem 4.1 the function $x\mapsto \mu(t,x)$ needs to be continuously differentiable. Here, we only have the existence of $\mu$ . Fortunately, our next results link the differentiability of $x\mapsto \mu(t,x)$ to the differentiability of $x\mapsto M(x)$ .
Theorem 5.1. Let $\{M(x)\}$ be a family of random variables in $L^2_0(\mathbb{P})$ such that $x\mapsto M(x)$ is a.s. in $\mathcal{C}^{0}$. Then, for each $t\geq 0$, $x\mapsto M(t,x)$ is a.s. in $\mathcal{C}^0$ and we can take $x\mapsto \mu (t,x)$ to be a.s. continuous for all $t\geq 0$ in the representation $M(x) = \int \mu(t,x)\,{\rm d} B_t$.

Proof. Given a compact subset $K\subset \mathbb{R}$, let

$$A_n = \big\{\sup_{x,y\in K}|M(x)-M(y)|\leq n \big\}$$

and define $M^n(x) = M(x)\mathbf{1}_{A_n}$ for all $x\in \mathbb{R}$ . We have that $M^n(x)\rightarrow M(x)$ a.s., and that $|M^n(x)|\leq |M(x)|$ for all n and each $x\in \mathbb{R}$ . By the dominated convergence theorem, we have that

$$\begin{equation*}\mathbb{E}[(M^n(x)-M(x))^2]\rightarrow 0.\end{equation*}$$

Now, let $\{\mu(x)\}$ be such that $M(x) = \int \mu(t,x)\,{\rm d} B_t$ . For $x,y\in K$ , we find that

$$\begin{align} \mathbb{E}\bigg[\!\int (\mu(s,x) - \mu(s,y))^2\,{\rm d}[B]_s \bigg] &= \mathbb{E}[(M(x)-M(y))^2 ] \notag \\ &\leq \mathbb{E}[(M(x) - M^n(x))^2]+ \mathbb{E}[(M^n(x)- M^n(y))^2] + \mathbb{E}[(M^n(y) - M(y))^2] . \notag \end{align}$$

Using the dominated convergence theorem and the fact that M(x) is a.s. in $\mathcal{C}^0$ , we have that $\lim_{x \to y}\mathbb{E}[(M^n(x)-M^n(y))^2]=0$ for all n. Finally, from the convergence of $M^n(x)$ to M(x) in $L^2(\mathbb{P})$ , for all $\varepsilon >0$ we can find $N_\varepsilon$ such that $\mathbb{E}[(M(x)- M^n(x))^2]< \varepsilon$ and $\mathbb{E}[(M(y)- M^n(y))^2]<\varepsilon$ for all $n>N_\varepsilon$ .

We see that, for a fixed $\omega \in \Omega$, we must have $\mu(s,x+h)\rightarrow \mu(s,x)$ except if $s\in \Gamma(\omega)\in \mathcal{B}(\mathbb{R})$, where $\int_{\Gamma(\omega)}{\rm d}[B]_s(\omega)=0$. But since $\Gamma$ is the set of times where the martingale B is constant, we can choose $x\mapsto \mu(s,x)$ to be continuous for $s\in \Gamma$. Finally, for $s \notin \Gamma$ we have that $\mu(s,x+h)\rightarrow \mu(s,x)$ a.s.

In the representation $M(x) = \int \mu(s,x)\,{\rm d} B_s$, the process $\{\mu(t,x)\}_{t\geq 0}$ is unique for each x. However, it is not clear if the family of processes $\{\mu(x); \, x\in \mathbb{R}\}$ is unique, which could lead to M being ill-defined. The next result shows that if $x\mapsto M(x)$ is almost surely continuous, then the representation is unique up to an equivalence class of processes. This class consists of processes equal up to a null set with respect to the measure $\beta \colon \mathcal{B}([0,\infty))\times \mathbb{F}\rightarrow [0,\infty)$ given by $\beta(A) = \int \int \mathbf{1}_{A}(s,\omega) \,{\rm d}[B]_s\,\mathbb{P}({\rm d}\omega)$, where $\mathcal{B}([0,\infty ))$ is the Borel $\sigma$-algebra on $[0,\infty)$ and $\mathbf{1}_{A}$ is the indicator function of the set A.
Theorem 5.2. If $x\mapsto M(x)$ is a.s. in $\mathcal{C}^0$, then the representation $M(x) = \int \mu(s,x)\,{\rm d} B_s$ is unique except on a set $A \in \mathcal{B} ([0,\infty)) \times \mathbb{F}$ where $\mathbb{E}[\int_{A} 1\,{\rm d}[B]_s] = 0$.

Proof. Suppose that $M(x) = \int\mu(s,x)\,{\rm d} B_s = \int \nu(s,x)\,{\rm d} B_s$; then

$$\begin{equation*} \mathbb{E}\bigg[\!\int (\mu(s,x)-\nu(s,x))^2{\rm d}[B]_s\bigg] = \mathbb{E}[(M(x) - M(x))^2]=0. \end{equation*}$$

Therefore, $\mu(s,x) =\nu(s,x)$ except maybe on a set $\Gamma_x\in \mathcal{B}([0,\infty))\times \mathbb{F}$ such that $\beta(\Gamma_x) = 0$.

Let $\{x_i\}_{i\in \mathcal{I}}$ be a dense subset of $\mathbb{R}$, where $\mathcal{I}$ is a countable index set. For each $i\in \mathcal{I}$, there exists $A_i \in \mathcal{B}([0,\infty))\times \mathbb{F}$ such that $\beta(A_i) = 0$ and $\mu(s,x_i) = \nu(s,x_i)$ on $(A_i)^{\rm c}$. Since $x\mapsto \mu(s,x)$ is a.s. continuous, $\mu(s,x) = \nu(s,x)$ except maybe on $\cup_{i\in \mathcal{I}} A_i$, where $\beta(\cup_{i\in \mathcal{I}} A_i) = 0$.

From the previous theorem, we can say that the integrand in the representation of M is almost surely unique with respect to the $\sigma$-finite measure $\beta$. We nonetheless get that $M(x) = \int \mu(s,x)\,{\rm d} B_s$ is well defined. For a fixed $x\in \mathbb{R}$, let $\hat{\mu}(x)$ be such that $\mu(t,x,\omega)\neq \hat{\mu}(t,x,\omega)$ on $A\in \mathcal{B}([0,\infty))\times \mathbb{F}$, where $\beta(A) = 0$, and $\mu(t,x,\omega) = \hat{\mu}(t,x,\omega)$ on $A^{\rm c}$. Assume that $\mathbb{P}(\bigcup_{t\geq 0}A(t,\cdot))>0$, where $A(t,\cdot) = \{\omega \in \Omega; \, (t,\omega)\in A\}$ is the cross section of A at time t. For each $\omega$, we must have $\int_{A(\cdot, \omega)}{\rm d}[B]_s(\omega) = 0$, which means that $A(\cdot, \omega)$ is a set of times where the martingale B is constant; hence, the value of $\mu(t,x,\omega)$ for $t\in A(\cdot, \omega)$ has no impact on the value of M(x). On the other hand, if $\int_{A(\cdot, \omega)}{\rm d}[B]_s(\omega)>0$, then we must have $\mathbb{P}(\bigcup_{t\geq 0}A(t,\cdot))=0$ and $\hat{\mu}(x)$ is actually a modification of $\mu(x)$.

To establish conditions for the differentiability of $\mu(x)$ we first define

$$\begin{equation*} \mathcal{C}^{m,\delta}(K) = \bigg\{f \colon \mathbb{R}\rightarrow \mathbb{R};\, f\in \mathcal{C}^m \text{ and } \bigg|\frac{\partial^m }{\partial x^m}f(x) - \frac{\partial^m }{\partial x^m}f(y) \bigg| \leq K|x-y|^\delta \text{ for all } x,y\in\mathbb{R} \bigg\} , \end{equation*}$$

which is the set of m-times continuously differentiable functions with $\delta$ -Hölder mth derivative. In the following theorem we need M(x) to be in $\mathcal{C}^{1,\delta}(K)$ , but we can allow $\delta$ and K to depend on $\omega$ which gives a more general result than requiring M(x) to be uniformly in $\mathcal{C}^{1,\delta}(K)$ .
Theorem 5.3. Let $\{M(x); \, x\in \mathbb{R}\}$ be a family of $L^2_0(\mathbb{P})$ random variables such that M(x) is a.s. in $\mathcal{C}^{1,\delta}(K)$, where K and $\delta$ are positive and $\mathbb{F}$-measurable and K is in $L^2(\mathbb{P})$. Then $\{\frac{{\rm d}}{{\rm d} x}M(x); \, x\in \mathbb{R} \}$ is a family of $L^2(\mathbb{P})$ random variables and there exists a family of predictable processes $\{\mu(x); \, x\in \mathbb{R}\}$ such that $x\mapsto \mu(x)$ is a.s. in $\mathcal{C}^1$,

$$\begin{equation*} M(x) = \int \mu(s,x) \, {\rm d} B_s , \text{ and } \frac{\partial }{\partial x}M(x) = \int\frac{\partial}{\partial x}\mu(s,x)\,{\rm d} B_s. \end{equation*}$$

Proof. First, for $h\in \mathbb{R}$ we find, by the mean value theorem, that

$$\begin{align*} \bigg|\frac{M(x+h)-M(x)}{h}-\frac{{\rm d} }{{\rm d} x}M(x) \bigg| = \bigg|\frac{{\rm d} }{{\rm d} x}M(\alpha(x,h)) -\frac{{\rm d} }{{\rm d} x}M(x) \bigg|<K|h|^\delta, \end{align*}$$

where $\alpha(x,h)$ is a random variable in ${(x-h,x+h)}$ . Moreover, if $|h|<1$ we have that $\mathbb{E}[K^2|h|^{2\delta}]<\infty $ . Then,

(5.1) $$\begin{align} \lim_{h\to 0}\mathbb{E}\bigg[\bigg(\frac{M(x+h)-M(x)}{h} - \frac{{\rm d}}{{\rm d} x}M(x) \bigg)^2 \bigg] & = \lim_{h \to 0}\mathbb{E}\bigg[ \bigg(\frac{{\rm d} }{{\rm d} x}M(\alpha(x,h)) -\frac{{\rm d} }{{\rm d} x}M(x)\bigg)^2 \bigg] \notag \\ & \leq \lim_{h\to 0}\mathbb{E}[K^2|h|^{2\delta}] = 0. \label{eqn8}\end{align}$$

Since $L^2(\mathbb{P})$ is complete, we then conclude that $\frac{{\rm d}}{{\rm d} x}M(x)$, as the $L^2$-limit of the difference quotients, is in $L^2(\mathbb{P})$.

Now, from Theorem 5.1, there exists a family of predictable processes $\nu$ such that $\frac{{\rm d}}{{\rm d} x}M(x) = \int \nu(s,x)\,{\rm d} B_s$ . Using the same argument as above, we have that

$$\begin{align} &\lim_{h\to 0}\mathbb{E}\bigg[\bigg(\frac{M(x+h)-M(x)}{h} - \frac{{\rm d}}{{\rm d} x}M(x) \bigg)^2 \bigg] \notag \\ &\quad = \lim_{h \to 0}\mathbb{E}\bigg[ \int \bigg(\frac{\mu(s,x+h)-\mu(s,x)}{h} -\nu(s,x)\bigg)^2{\rm d}[B]_s \bigg] = 0. \notag \end{align}$$

We conclude that $x\mapsto \mu(s,x)$ is almost surely continuously differentiable almost everywhere with respect to ${\rm d}[B]_s$ . As in Theorem 5.1, we can take $\mu(s,x)$ to be a.s. continuously differentiable for all $s\geq 0$ .

5.2. General version of the main result

The last theorem provides conditions on M(x) so that the integrand $\mu(x)$ in the integral representation has the regularity required in Theorem 4.1. Therefore, it is possible to recover the main result without assuming that the integral representation is known.

Theorem 5.4. (Main theorem, general version.) Let $\{M(x); \, x\in \mathbb{R}\}$ be a family of martingales in $\mathcal{M}^2_0$ where $x\mapsto M(x)$ is a.s. in $\mathcal{C}^{1,\delta}(K)$, $\delta$ and K are positive and $\mathbb{F}$-measurable, and where K is in $L^2(\mathbb{P})$. Let $H\in L^2_0(\mathbb{P})$ and assume that $\mathcal{H}^M$ is closed in $L^2(\mathbb{P})$. Then there exists $\theta^H \in \mathcal{I}^M$ such that

$$\begin{equation*} H = \int M({\rm d} s, \theta^H_s) + L^H, \end{equation*}$$

where $L^H = H - \int M({\rm d} s,\theta^H_s)$ is orthogonal to $\int \frac{\partial}{\partial x}M({\rm d} s,\theta^H_s)$ , i.e.

(5.2) $$\begin{equation} \mathbb{E}\bigg[\bigg( H - \int M({\rm d} s,\theta^H_s)\bigg) \int \frac{\partial }{\partial x}M({\rm d} s,\theta^H_s)\bigg]=0. \label{eqn9} \end{equation}$$

Moreover,

$$\bigg\| H - \int M({\rm d} s,\theta^H_s) \bigg\|_{L^2} = \inf_{X\in \mathcal{H}^M} \|H- X\|_{L^2}.$$

Proof. From Theorem 5.3, $M(x) = \int \mu(t,x) \,{\rm d} B_t$, where $x\mapsto \mu(t,x)$ is a.s. in $\mathcal{C}^1$. Finally, since $\mathcal{H}^M$ is assumed to be closed, we can apply the results of Theorem 4.1. $\square$

In conclusion, the next section shows how the above results, in conjunction with the Clark–Ocone formula [10], can be applied to martingales derived from polynomial functions of Brownian martingales.

6. Polynomial functions of Brownian martingales

Theorem 5.4 extends the result of Theorem 4.1 to the more general case where the integral representation of the martingales is not known. However, for modeling and application purposes, the canonical probability space of the Brownian motion is often rich enough. On this space, all continuous martingales are Brownian integrals and, in many cases, the integrand is either determined by the model or can be identified.

In the following, we show how we can build a family of martingales from a polynomial function of Brownian integrals and get an explicit expression for the integrand in the integral representation using the Clark–Ocone formula. Then, the solution to the minimization problem (3.1) is expressed as the zero of a polynomial function in one variable.

Let $\mathcal{P}_{n,k}$ be the set of real homogeneous polynomials of degree k in n variables. To denote such a polynomial, let $\{\delta^{(i)}\}_{i=1}^{N_{n,k}}$ be a sequence where $\delta^{(i)} \in \{0,1,\dots, k\}^n$, $N_{n,k} = {{k+n-1}\choose{n-1}}$, and $\sum_{j=1}^n \delta_j^{(i)} = k$ for all i. Then, for $y=(y_1,\dots, y_n)$, let $y^{\delta^{(i)}} =\prod_{j=1}^n y_j^{\delta_j^{(i)}}$ and define $p_{\{\delta\}}(y) = \sum_{i=1}^{N_{n,k}}d_{i} y^{\delta^{(i)}} \in \mathcal{P}_{n,k}$, where $\{d_i\}_{i=1}^{N_{n,k}}$ are real coefficients.
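For instance (an illustration of the notation, not taken from the paper), with $n=2$ and $k=2$ we have $N_{2,2} = \binom{3}{1} = 3$, the exponent vectors can be listed as $\delta^{(1)}=(2,0)$, $\delta^{(2)}=(1,1)$, $\delta^{(3)}=(0,2)$, and

$$\begin{equation*} p_{\{\delta\}}(y_1,y_2) = d_1 y_1^2 + d_2 y_1 y_2 + d_3 y_2^2 . \end{equation*}$$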

Now, assume that $({\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P}})$ is the canonical probability space of the Brownian motion B. Let $\Theta = (\Theta_1,\dots, \Theta_n)$ be a vector of Brownian integrals, that is, $\Theta_i = \int \theta_i(t)\,{\rm d} B_t$ for some predictable processes $\theta_i$ , and assume that for each i, $\mathbb{E}[|\Theta_i|^m]<\infty$ for $m=2,\dots, k$ . The family of martingales $\{M(x); \, x\in \mathbb{R}\}$ is constructed from the polynomial function

$$M(x) = \sum_{i=0}^k a_ix^i p^{(i)}_{\{\delta\}} (\Theta) ,$$

where $ p^{(i)}_{\{\delta\}} (y) \in \mathcal{P}_{n,k-i}$ and $\{a_i\}_{i=0}^k$ are real coefficients. Without loss of generality, it is assumed that $\mathbb{E}[M(x)] = 0$ for all x. Otherwise, by defining $\tilde{M}(x) = M(x) -\mathbb{E}[M(x)]$ , we get that $\tilde{M}(x)$ is centered and is still a polynomial function of the variable x. From the Clark–Ocone formula, we find that

$$\begin{equation*} M(x) = \int \mu(t,x)\,{\rm d} B_t , \end{equation*}$$

where $\mu(t,x) = \sum_{i=0}^k a_i x^i\mathbb{E}[D_t p^{(i)}_{\{\delta\}} (\Theta) \mid \mathcal{F}_t]$ , and where

$$\begin{equation*}D_tp^{(i)}_{\{\delta\}}(\Theta) = \sum_{j=1}^n\frac{\partial}{\partial y_j} p^{(i)}_{\{\delta\}}(\Theta)\theta_j(t)\end{equation*}$$

is the Malliavin derivative of $p^{(i)}_{\{\delta\}}(\Theta)$ at time $t\geq 0$ .
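As a concrete check (our computation, assuming the $\theta_j$ are deterministic so that the chain rule above applies directly), take $p(y) = y_1y_2$; then

$$\begin{equation*} D_t (\Theta_1\Theta_2) = \Theta_2\,\theta_1(t) + \Theta_1\,\theta_2(t), \qquad \mathbb{E}[D_t(\Theta_1\Theta_2) \mid \mathcal{F}_t] = \theta_1(t)\,\mathbb{E}[\Theta_2\mid \mathcal{F}_t] + \theta_2(t)\,\mathbb{E}[\Theta_1\mid \mathcal{F}_t]. \end{equation*}$$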

Now, restrict H to a polynomial function of $\Theta$ , i.e. $H = q_{\gamma}(\Theta)$ where $q_{\gamma}(y) \in \mathcal{P}_{n,l}$ . Assuming again that $\mathbb{E}[H] = 0$ , we have that

$$\begin{equation*} H = \int h_t \, {\rm d} B_t , \end{equation*}$$

where $h_t = \mathbb{E}[D_tq_{\gamma}(\Theta) \mid \mathcal{F}_t] = \sum_{j=1}^n \theta_j(t)\mathbb{E}\big[\frac{\partial}{\partial y_j}q_{\gamma}(\Theta) \mid \mathcal{F}_t\big]$. Finally, using Theorem 4.1 and Corollary 4.1, there exists $\theta^H$ such that

$$\begin{equation*} H = \int M({\rm d} t,\theta^H_t) + L^H , \end{equation*}$$

where $\theta^H$ satisfies

$$\begin{align} & ( h_t - \mu(t,\theta^H_t) )\frac{\partial}{\partial x}\mu(t,\theta^H_t) \equiv 0 \notag \\ \Longleftrightarrow \quad & \bigg( \sum_{j=1}^n\theta_j(t)\mathbb{E}\bigg[\frac{\partial}{\partial y_j}q_{\gamma}(\Theta) \,\Big|\, \mathcal{F}_t\bigg] - \sum_{i=0}^k\tilde{a}_i (\theta^H_t)^i \bigg)\sum_{i=1}^k i\tilde{a}_i (\theta^H_t)^{i-1} \equiv 0 , \notag \end{align}$$

where $\tilde{a}_i = a_i \sum_{j=1}^n\theta_j(t)\mathbb{E}\big[\frac{\partial}{\partial y_j} p^{(i)}_{\{\delta\}}(\Theta) \mid \mathcal{F}_t\big]$ .

In the simpler case where $p^{(i)}_{\{\delta\}} (y) = p_{\{\delta\}}(y)$ for all $i = 0,\dots, k$, $\theta^H$ satisfies

$$\begin{equation*} \sum_{i=0}^ka_i (\theta_t^H)^i = \frac{ \sum_{j=1}^n\theta_j(t)\mathbb{E}\big[\frac{\partial}{\partial y_j}q_{\gamma}(\Theta) \mid \mathcal{F}_t\big]}{ \sum_{j=1}^n\theta_j(t)\mathbb{E}\big[\frac{\partial}{\partial y_j}p_{\{\delta\}}(\Theta) \mid \mathcal{F}_t\big]} , \end{equation*}$$

or

$$\begin{equation*} \sum_{i=1}^k i a_i (\theta^H_t)^{i-1}=0. \end{equation*}$$
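As a small numerical sketch of this last step (ours; the coefficients and the value of the conditional-expectation ratio are made-up placeholders), $\theta^H_t$ at a fixed time t is a root of one of two univariate polynomials, which can be computed directly:

```python
# Illustrative sketch (made-up coefficients): at a fixed time t, theta^H_t is a
# root of   sum_i a_i x^i - r_t = 0   or of   sum_{i>=1} i a_i x^(i-1) = 0,
# where r_t stands for the ratio of conditional expectations above.
import numpy as np

a = np.array([0.0, 1.5, -0.4, 0.2])   # hypothetical a_0, ..., a_3
r_t = 0.8                             # hypothetical value of the ratio at time t

# numpy.roots expects coefficients ordered from the highest degree down.
p1 = a[::-1].copy()
p1[-1] -= r_t                          # constant term becomes a_0 - r_t
roots1 = np.roots(p1)

p2 = (np.arange(len(a)) * a)[1:][::-1] # coefficients of sum_{i>=1} i a_i x^(i-1)
roots2 = np.roots(p2)

candidates = np.concatenate([roots1, roots2])
real = candidates[np.abs(candidates.imag) < 1e-10].real
print("real candidates for theta^H_t:", np.round(real, 4))
```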

In the above, the conditional distribution of the n-dimensional Gaussian vector $\Theta\mid_{\mathcal{F}_t}$ is $\mathcal{N}_n(\Theta_t,\Sigma_t)$, where $\Theta_t = \big(\int_{[0,t]}\theta_1(s)\,{\rm d} B_s,\dots,\int_{[0,t]}\theta_n(s)\,{\rm d} B_s\big)$ and $[\Sigma_t]_{ij} = \int_{[t,\infty)}\mathbb{E}[\theta_i(s)\theta_j(s)]\,{\rm d} s$. Hence, the integrands $h_t$ and $\mu(t,x)$ are determined by the first k moments of the n-dimensional normal distribution $\mathcal{N}_n(\Theta_t,\Sigma_t)$.

Acknowledgements

I would like to thank my colleagues at UQAM, Jean-François Renaud and Anne Mackay, for all the discussions. I also thank Frédéric Godin, Cody Hyndman, and the graduate students at Concordia for their time and comments on this work.

References

[1] Bank, P. and Baum, D. (2004). Hedging and portfolio optimization in illiquid financial markets with a large trader. Math. Finance 14, 1–18.
[2] Carmona, R. and Nualart, D. (1990). Nonlinear Stochastic Integrators, Equations and Flows (Stochastics Monographs 6). Gordon and Breach, London.
[3] Cetin, U., Jarrow, R. and Protter, P. (2004). Liquidity risk and arbitrage pricing theory. Finance Stoch. 8, 311–341.
[4] Chitashvili, R. (1983). Martingale ideology in the theory of controlled stochastic processes. In Probability and Mathematical Statistics, eds J. V. Prokhorov and K. Itô, Springer, New York, pp. 73–92.
[5] Chitashvili, R. and Mania, M. (1996). Characterization of a regular family of semimartingales by line integrals. Georgian Math. J. 3, 525–542.
[6] Chitashvili, R. and Mania, M. (1999). Stochastic line integrals with respect to local martingales and semimartingales. Proc. Tbilisi A. Razmadze Math. Inst. 120, 1–26.
[7] Kühn, C. (2012). Nonlinear stochastic integration with a non-smooth family of integrators. Stochastics 84, 37–53.
[8] Kunita, H. (1990). Stochastic Flows and Stochastic Differential Equations (Cambridge Studies in Advanced Mathematics 24). Cambridge University Press.
[9] Kunita, H. and Watanabe, S. (1967). On square integrable martingales. Nagoya Math. J. 30, 209–245.
[10] Nualart, D. (2006). The Malliavin Calculus and Related Topics, 2nd edn. Springer, Berlin.
[11] Schweizer, M. (1994). Approximating random variables by stochastic integrals. Ann. Prob. 22, 1536–1575.
[12] Williams, D. (1991). Probability with Martingales. Cambridge University Press.