1 Introduction
The elliptical family of distributions is well studied in the literature (Cambanis et al. 1981; Fang et al. 1990; Landsman & Neslehova 2008; Ignatieva & Landsman 2015; Landsman et al. 2016, 2018) and commonly used by practitioners in fields ranging from Finance and Actuarial Science to Engineering. One of the major disadvantages of the elliptical family is that the univariate marginals of a multivariate elliptical distribution are elliptical with the same characteristic or density generator; that is, they preserve the same form up to location and scale transformations. For example, the marginal distributions of the multivariate normal and the multivariate Student’s t-distributions are, respectively, normal and Student’s t with the same degrees of freedom. Thus, there is no room for different types of marginals, which can be very important when dealing with real data. Motivated by this problem, in this paper we introduce a novel multivariate family of distributions that captures the different behaviours of the marginals but still preserves the symmetry and the convenient geometric representation inherent to the elliptical family. The elliptical copula is another candidate for modelling a random vector with different elliptical marginals; however, it is relatively cumbersome for calculations involving risk measures.
In section 2, we present the multi-spherical distribution, which is a natural extension of the spherical distributions for spheres with different radii. In section 3, we extend the presented distributions into the family of multi-elliptical (ME) distributions that contain a vector of location parameters and a scale matrix that leads to a specific covariance matrix of the family. Section 4 is devoted to the special members of the proposed distributions and section 5 provides calculations of risk measures in the context of this family as well as Tail Conditional Expectation (TCE) based capital allocation. A numerical illustration is given in section 6. Section 7 offers a conclusion to the paper.
A number of authors have also suggested generalisations of elliptical families. In section 3.1, we discuss these generalisations and compare them with the ME family of distributions.
2 Extension of the Multivariate Elliptical Distributions with Different Elliptical Marginals
Let
$\textbf{U}^{\left( n\right) }$
be an n-variate random vector uniformly distributed on the unit sphere
$S_{n-1}$
, and let
$\textbf{r}=(r_{1},...,r_{n})^{T}$
be a vector of non-negative random variables.
Define
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn1.png?pub-status=live)
where vectors
$\textbf{U}^{\left( n\right) }$
and
$\textbf{r}$
are assumed to be independent and the symbol
$\circ$
denotes the Hadamard product. Notice that the stochastic representation (1) is more general than that suggested in Fang et al. (1990), equation (2), where it is supposed that all the random variables
$r_{1},r_{2},...,r_{n}$
are equal. Considering (1) further, we choose, without loss of generality, the first component
$r_{1}$
as a base component and represent the other components of the vector
$\textbf{r}$
as scale mixtures of the first, as follows:
$r_{i}=r_{1}\frac{r_{i}}{r_{1}}=r_{1}V_{i-1},$
$i=2,...,n.$
Then,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn2.png?pub-status=live)
where
$\textbf{V}=(V_{1},V_{2},...,V_{n-1})^{T}$
is a vector of
$(n-1)$
non-negative random variables. Notice that neither
$r_{1}$
nor
$\textbf{V}$
depends on the vector
$\textbf{U}^{\left( n\right) }$
.
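For concreteness, representation (2) is straightforward to sample. The following minimal sketch (ours, not part of the original derivation) assumes Python with NumPy; the chi-distributed base radius $r_{1}$ and the choice of $\textbf{V}$ are arbitrary illustrations. With $r_{1}\sim\chi_{n}$ and $\textbf{V}\equiv\textbf{1}$, the draw reduces to a standard normal vector:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_multispherical(n, n_draws, sample_r1, sample_v):
    """Draw from representation (2): Z = r_1 (1, V_1, ..., V_{n-1})^T o U^(n)."""
    g = rng.standard_normal((n_draws, n))
    u = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform on S_{n-1}
    r1 = sample_r1(n_draws)                            # base radius r_1 >= 0
    v = sample_v(n_draws)                              # shape (n_draws, n-1)
    radii = r1[:, None] * np.column_stack([np.ones(n_draws), v])
    return radii * u                                   # Hadamard product

# Sanity check: with r_1 ~ chi_n and V = 1, Z reduces to Z* ~ N_n(0, I_n)
n = 3
z = sample_multispherical(
    n, 200_000,
    sample_r1=lambda k: np.sqrt(rng.chisquare(n, k)),
    sample_v=lambda k: np.ones((k, n - 1)),
)
```

Any non-negative law for $\textbf{V}$ may be substituted for the constant vector of ones, yielding different radii per component.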
Let
$\Psi_{n}$
be a class of functions
$\psi(t):$
$[0,\infty){\rightarrow}\mathbb{R}$
such that the function
$\psi(\sum_{i=1}^{n}t_{i}^{2})$
is an n-dimensional characteristic function (Fang et al. 1990; Landsman & Valdez 2003; and others). It is clear that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU1.png?pub-status=live)
A function from this class is called a characteristic generator.
We emphasise again that the choice of the first component as the base component is arbitrary. In the special case when the vector
$\textbf{V}$
is the vector of
$(n-1)$
ones, that is,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU2.png?pub-status=live)
we conclude that
$\textbf{Z}^{\ast}$
has a characteristic function:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU3.png?pub-status=live)
where
$\boldsymbol{\lambda}$
is a vector in
$\mathbb{R}^{n}.$
We can say that
$\textbf{Z}^{\ast}$
has a spherical distribution with characteristic generator
$\psi(t)$
and write
$\textbf{Z}^{\ast}\sim S_{n}(\psi).$
In fact, the random radius
$r_{1}$
acts on all components of
$\textbf{U}^{(n)}$
equally.
A different picture emerges for
$\textbf{Z}$
, defined by (2). The components of vector
$\textbf{U}^{(n)}$
are multiplied by different random radii
$r_{i}=r_{1}V_{i-1},i=2,...,n,$
where the random variables
$V_{i},i=1,...,n-1$
may be independent, dependent, or even equal to a constant.
Definition 1 The distribution of
$\textbf{Z,}$
having the stochastic representation (2), is called a multi-spherical distribution with characteristic generator
$\psi$
and denoted by
$\textbf{Z}\sim \texttt{MS}_{n}(\psi,f_{\textbf{V}}),$
where
$f_{\textbf{V}}$
is the density of vector
$\textbf{V}.$
Theorem 1 The characteristic function of the multi-spherical distribution
$\textbf{Z}\sim\texttt{MS}_{n}(\psi,f_{\textbf{V}})$
takes the following explicit form:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU4.png?pub-status=live)
where
$\Omega_{\textbf{V}}$
is a diagonal matrix:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU5.png?pub-status=live)
Proof. Using the properties of the conditional expectation and taking into account that
$\psi(t)$
is the characteristic generator of the elliptical random vector
$\textbf{Z}^{\ast}$
, we can write
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU6.png?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU7.png?pub-status=live)
which leads to the equality
$\boldsymbol{\lambda}_{\textbf{V}}^{T}\boldsymbol{\lambda}_{\textbf{V}}=\boldsymbol{\lambda}^{T}\Omega_{\textbf{V}}\boldsymbol{\lambda},$
with the diagonal matrix
$\Omega_{\textbf{V}}=\texttt{diag}( 1,\break V_{1}^{2},...,V_{n-1}^{2}) .$
Finally, we conclude that
$\varphi_{\textbf{Z}}\left( \boldsymbol{\lambda}\right)=\mathbb{E}_{\textbf{V}}\left( \psi\left( \boldsymbol{\lambda}^{T}\Omega_{\textbf{V}}\boldsymbol{\lambda}\right) \right) .$
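For the normal characteristic generator $\psi(t)=e^{-t/2}$, Theorem 1 can be checked by Monte Carlo: conditionally on $\textbf{V}$, $\textbf{Z}\sim N_{n}(\textbf{0},\Omega_{\textbf{V}})$, so the empirical characteristic function should match $\mathbb{E}_{\textbf{V}}\left(\psi(\boldsymbol{\lambda}^{T}\Omega_{\textbf{V}}\boldsymbol{\lambda})\right)$. A sketch (ours; the lognormal law for $\textbf{V}$ and the test point $\boldsymbol{\lambda}$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_draws = 3, 400_000
lam = np.array([0.5, 0.3, 0.2])              # an arbitrary test point lambda

# V: any non-negative (n-1)-vector; lognormal is an arbitrary choice
v = rng.lognormal(mean=0.0, sigma=0.5, size=(n_draws, n - 1))
scales = np.column_stack([np.ones(n_draws), v])      # diag of Omega_V^{1/2}

# Normal generator psi(t) = exp(-t/2):  Z | V ~ N(0, Omega_V)
z = scales * rng.standard_normal((n_draws, n))

# Empirical characteristic function (imaginary part vanishes by symmetry)
cf_empirical = np.mean(np.cos(z @ lam))
# Theorem 1:  E_V[ psi(lambda^T Omega_V lambda) ]
cf_theorem = np.mean(np.exp(-0.5 * (scales**2 @ lam**2)))
```

The two estimates agree up to Monte Carlo error.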
If the density generator
$g^{(n)}$
of
$\textbf{Z}^{\ast}$
exists and, moreover, the condition
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU8.png?pub-status=live)
holds, the pdf of
$\textbf{Z}^{\ast}$
is given by:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU9.png?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn3.png?pub-status=live)
For more details, see Landsman & Valdez (2003). So,
$\ f_{\textbf{Z}}( \textbf{z}|\textbf{V}) =c_{n}g^{(n)}( \textbf{z}^{T}\Omega_{\textbf{V}}^{-1}\textbf{z}) /{\displaystyle\prod\limits_{i=1}^{n-1}}V_{i},$
where
$\Omega_{\textbf{V}}^{-1}=\texttt{diag}( 1,V_{1}^{-2},...,V_{n-1}^{-2}) .$
Then, the density is given by:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU10.png?pub-status=live)
and we write
$\textbf{Z}\sim \texttt{MS}_{n}(g^{(n)},f_{\textbf{V}}).$
Theorem 2 Assume
$\textbf{Z}\sim \texttt{MS}_{n}(\psi,f_{\textbf{V}})$
. Then, the first component
$Z_{1}$
is independent of
$\textbf{Z}_{-1}=(Z_{2},...,Z_{n})$
iff
$\psi(t)=\exp(-ct),$
$c>0$
. This implies that
$Z_{1}$
is normal. In the latter case, if
$V_{1},...,V_{n-1}$
are independent, the vector
$\textbf{Z}$
consists of independent components.
Proof. As
$\boldsymbol{\lambda}^{T}\Omega_{\textbf{V}}\boldsymbol{\lambda}=\lambda_{1}^{2}+\sum_{j=2}^{n}V_{j-1}^{2}\lambda_{j}^{2},$
we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn4.png?pub-status=live)
iff
$Z_{1}$
is independent of
$\textbf{Z}_{-1}.$
On the other hand, equation (4) is the well-known second Cauchy functional equation, whose only continuous solutions are
$\psi(t)=\exp(\pm ct),c>0$
(see Efthimiou 2010, Section 5.2), but as
$\psi(\frac{ct^{2}}{2})$
is a characteristic function,
$|\psi(\frac{ct^{2}}{2})|\leq1$
and so
$\psi(t)=\exp(-ct),$
that is,
$Z_{1}$
is normally distributed. Now assume that
$\psi(t)=\exp(-ct)$
and
$V_{1},...,V_{n-1}$
are independent, then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU11.png?pub-status=live)
2.1 Marginal characteristic functions and density distributions
The marginal behaviour of a multivariate distribution is an essential property to consider. For example, it is well known that the marginals of a normal distribution are also normal. To obtain a marginal one-dimensional characteristic function, consider
$\boldsymbol{\lambda}=(0,...0,\lambda,0,...,0)^{T},$
where
$\lambda$
stands in the i-th place. Then,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU12.png?pub-status=live)
We observe that the characteristic function of
$Z_{i}$
depends on
$\lambda^{2}$
, that is,
$Z_{i}$
has a unidimensional spherical distribution, but its characteristic generator depends on the distribution of
$V_{i-1}.$
The density of
$Z_{i},$
which, recall, is a marginal of the vector
$\textbf{Z},$
equals
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU13.png?pub-status=live)
where
$g^{(1)}$
is the density generator of a one-dimensional elliptical variable corresponding to the density generator
$g^{(n)}$
and
$c_{1}$
can be obtained from (3) setting
$n=1.$
From these expressions, it follows that the one-dimensional marginals are elliptical but have different characteristic and density generators, being scale mixtures of the primary marginal elliptical distribution.
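This effect is easy to see by simulation with the normal generator: $Z_{1}$ remains exactly Gaussian, while $Z_{2}=V_{1}\cdot(\text{standard normal})$ is a scale mixture with heavier tails. A sketch (ours; the lognormal $V_{1}$ is an arbitrary choice, for which the mixture kurtosis equals $3e^{4\sigma^{2}}\approx 8.15$ versus 3 for the normal marginal):

```python
import numpy as np

rng = np.random.default_rng(2)
n_draws, s_v = 500_000, 0.5

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

g = rng.standard_normal((n_draws, 2))
v1 = rng.lognormal(mean=0.0, sigma=s_v, size=n_draws)
z1 = g[:, 0]          # first marginal: exactly standard normal
z2 = v1 * g[:, 1]     # second marginal: a scale mixture of normals

k1, k2 = kurtosis(z1), kurtosis(z2)
# theory: k1 = 3, k2 = 3 * E(V^4) / E(V^2)^2 = 3 * exp(4 * s_v**2)
```

The two marginals thus have visibly different generators even though they come from one multi-spherical vector.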
3 The ME Family of Distributions
Let
$\Sigma$
be a positive definite matrix. Applying the scale and location transformation to the standard MS random vector
$\textbf{Z}$
:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn5.png?pub-status=live)
the characteristic function and the pdf of
$\,\textbf{X}$
take the following forms, respectively:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn6.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn7.png?pub-status=live)
where
${\mu\in\mathbb{R}}^{n},$
and
$|\Sigma|$
is the determinant of matrix
$\Sigma.$
Definition 2 The distribution of a vector
$\textbf{X}$
having characteristic function (6) and density function (7) is called an ME distribution and denoted by
$\textbf{X}\sim \texttt{ME}_{n}({\mu},\Sigma_{\textbf{V}},\psi,f_{\textbf{V}})$
(in terms of characteristic generator) or
$\textbf{X}\sim \texttt{ME}_{n}({\mu},\Sigma_{\textbf{V}},g^{(n)},f_{\textbf{V}})$
(in terms of density generator). Here
$\Sigma_{\textbf{V}}=\Sigma^{1/2}\Omega_{\textbf{V}}\Sigma^{1/2}$
and
$f_{\textbf{V}}$
is the pdf of the vector
$\textbf{V}$
.
Theorem 3 A linear transformation of an ME random vector
$\textbf{X}$
is again an ME random vector; that is, this closure property of the elliptical family is preserved for the ME family.
Proof. Let
$\textbf{X}\sim \texttt{ME}_{n}(\boldsymbol{\mu},\Sigma_{\textbf{V}},\psi,f_{\textbf{V}}).$
Then, its characteristic function has the form (6), and thus for
$\textbf{Y} = B\textbf{X}+\textbf{b}$
, where B is an
$m\times n$
rectangular matrix and
$\textbf{b}$
is an m-dimensional vector:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU14.png?pub-status=live)
This implies that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU15.png?pub-status=live)
or, in terms of the density generator:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn8.png?pub-status=live)
Denote by
$\Omega_{\textbf{0}}$
the diagonal matrix of expectations of the vector
$\textbf{V}$
:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU16.png?pub-status=live)
Theorem 4 The expectation and covariance matrix of
$\textbf{X}\sim \texttt{ME}_{n}(\boldsymbol{\mu},\Sigma_{\textbf{V}},\psi,f_{\textbf{V}})$
are given by:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU17.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU18.png?pub-status=live)
respectively.
Proof. From the expressions of the characteristic and density functions of
$\textbf{X}$
provided in (6) and (7), it follows that
$E\left( \textbf{X}|\textbf{V}\right) = {\mu,}$
and cov
$\left( \textbf{X}|\textbf{V}\right) =-2\psi^{\prime}(0) \cdot \Sigma^{1/2}\Omega_{\textbf{V}}\Sigma^{1/2}.$
Taking the expectation with respect to random
$\textbf{V}$
provides
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU19.png?pub-status=live)
and concludes the proof.
For
$\psi(t)=\exp(-\frac{1}{2}t),$
we have a normal distribution of
$\textbf{X}|\textbf{V,}$
and
$2\psi^{\prime}(0)=-1.$
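In this normal case, conditioning as in the proof of Theorem 4 gives $\mathrm{cov}(\textbf{X})=\Sigma^{1/2}\,\mathrm{diag}(1,E(V_{1}^{2}),...,E(V_{n-1}^{2}))\,\Sigma^{1/2}$, which can be checked by simulation. A sketch (ours; the particular $\Sigma$ and lognormal $f_{\textbf{V}}$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_draws, s_v = 3, 400_000, 0.25

sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.1],
                  [0.2, 0.1, 1.0]])
w, q = np.linalg.eigh(sigma)
sigma_half = q @ np.diag(np.sqrt(w)) @ q.T          # symmetric square root

v = rng.lognormal(mean=0.0, sigma=s_v, size=(n_draws, n - 1))
scales = np.column_stack([np.ones(n_draws), v])     # diag of Omega_V^{1/2}
z = scales * rng.standard_normal((n_draws, n))      # Z | V ~ N(0, Omega_V)
x = z @ sigma_half                                  # X = Sigma^{1/2} Z, mu = 0

cov_empirical = np.cov(x, rowvar=False)
ev2 = np.exp(2 * s_v**2)                            # E(V_i^2) for lognormal(0, s_v)
cov_theory = sigma_half @ np.diag([1.0, ev2, ev2]) @ sigma_half
```

The empirical covariance matches the theoretical one up to Monte Carlo error.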
In Landsman & Valdez (2003), the authors observed that
$-\psi^{\prime}(0)=\sigma_{Z_{1}^{\ast}}^{2}$
is the variance of univariate marginal of the elliptical vector
$\textbf{Z}^{\ast}.$
There, the authors provided
$\sigma_{Z_{1}^{\ast}}^{2}$
for many different members of an elliptical family. Suppose now that
$\textbf{X}$
is partitioned, that is,
$\textbf{X}^{T}=(\textbf{X}_{1}^{T}\;\ \textbf{X}_{2}^{T}),$
where
$\textbf{X}_{1}^{T}$
and
$\textbf{X}_{2}^{T}$
are subvectors of
$\textbf{X}^{T}.$
Then, vector
$\boldsymbol{\mu}^{T}=(\boldsymbol{\mu}_{\textbf{1}}^{T},\boldsymbol{\mu}_{\textbf{2}}^{T})$
and matrix:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU20.png?pub-status=live)
have the corresponding partitions. Although the ME family does not preserve the regression property as the elliptical family does, its representation nevertheless yields an attractive form for the regression expectation vector.
Theorem 5 The expectation of the regression vector $\textbf{X}_{1}|\textbf{X}_{2}=\textbf{x}_{2}$ is given by:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU21.png?pub-status=live)
where
$R_{+}^{n-1}=[0,\infty)\times\cdots\times\lbrack0,\infty)$
($n-1$ times).
Proof. Define
$R\overset{D}{=}\textbf{X}_{1}|\textbf{X}_{2}=\textbf{x}_{2}.$
Then, from Theorem 2.18 of Fang et al. (1990), it follows that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU22.png?pub-status=live)
where
$q(\textbf{x}_{2})=(\textbf{x}_{2}-\boldsymbol{\mu}_{2})^{T}\Sigma_{\textbf{v,(22)}}^{-1}(\textbf{x}_{2}-\boldsymbol{\mu}_{2})$
. The definition of
$\psi_{a_{2}}( \cdot )$
can be found in equation (2.5) on p. 29 of the cited book; see also the discussion after equation (2.42), p. 45. Then, the characteristic function of R is given by:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU23.png?pub-status=live)
From this, it immediately follows that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn9.png?pub-status=live)
Then, we can write
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU24.png?pub-status=live)
We note that neither
$\boldsymbol{\mu}_{1}$
nor
$\boldsymbol{\mu}_{2}$
depends on
$\textbf{v}$
.
Remark 1 Sometimes, instead of transformation (5), it is better to use the transformation
$\textbf{X}=\boldsymbol{\mu}+L\textbf{Z},$
where, instead of the square root of the matrix
$\Sigma,$
one uses the lower triangular factor of the Cholesky decomposition of
$\Sigma$
, that is,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn10.png?pub-status=live)
This is because the Cholesky decomposition is easier to compute than the matrix square root. The previous results are then adjusted as follows:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU25.png?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU26.png?pub-status=live)
where we take into account that
$|L|=|L^{T}|=|\Sigma|^{1/2}.$
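The determinant identity is immediate to verify numerically (the positive definite matrix below is an arbitrary example):

```python
import numpy as np

# An arbitrary positive definite example matrix
sigma = np.array([[4.0, 1.2, 0.6],
                  [1.2, 3.0, 0.8],
                  [0.6, 0.8, 2.0]])
L = np.linalg.cholesky(sigma)          # lower triangular, L L^T = Sigma

# det of a triangular matrix is the product of its diagonal entries
det_L = np.prod(np.diag(L))
```

Here `det_L` coincides with $|\Sigma|^{1/2}$, as stated.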
Remark 2 Unlike the elliptical-copula approach, in which we also have elliptical marginals, the approach proposed in this paper provides important properties similar to those of the standard elliptical family, such as closure under affine transformations, and gives a natural extension of the elliptical family to a multivariate symmetric distribution with different elliptical density generators. We point out that there is no direct link between the copula approach and the ME distributions, since the distribution function of the latter is based on expectations of the elliptical density function, unlike a copula, which is a function of the elliptical distribution functions.
Remark 3 An ME distribution can be simulated straightforwardly. First, given the density
$f_{\textbf{V}}$
, we simulate a realisation of the random vector
$\textbf{V}=(V_{1},...,V_{n-1})^{T}.$
Then, given the realisation of
$\textbf{V}$
, we simulate a realisation of the elliptical vector
$\textbf{X}\sim E_{n}({\boldsymbol{\mu}},\Sigma_{\textbf{V}},g^{(n)}).$
Repeating this process N times, we obtain N realisations of the ME vector
$\texttt{ME}_{n}({\boldsymbol{\mu}},\Sigma_{\textbf{V}},g^{(n)},f_{\textbf{V}}).$
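This two-step scheme can be sketched as follows (our illustration, not the authors' code; the conditional elliptical draw uses a multivariate Student generator with m degrees of freedom, and the lognormal $f_{\textbf{V}}$ is an arbitrary choice; $\Sigma_{\textbf{V}}^{1/2}=\Sigma^{1/2}\Omega_{\textbf{V}}^{1/2}$ is used as a square root of $\Sigma_{\textbf{V}}$):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_me_student(mu, sigma, m, sample_v, n_draws):
    """Two-step ME simulation: draw V ~ f_V, then X | V elliptical.

    Here the conditional elliptical law is multivariate Student with m
    degrees of freedom and scale matrix Sigma_V = Sigma^{1/2} Omega_V Sigma^{1/2}.
    """
    n = len(mu)
    w, q = np.linalg.eigh(sigma)
    sigma_half = q @ np.diag(np.sqrt(w)) @ q.T       # symmetric square root
    v = sample_v(n_draws)                            # step 1: V ~ f_V
    scales = np.column_stack([np.ones(n_draws), v])  # diag of Omega_V^{1/2}
    g = rng.standard_normal((n_draws, n))            # step 2: X | V elliptical
    t = g * np.sqrt(m / rng.chisquare(m, n_draws))[:, None]  # spherical Student
    return mu + (scales * t) @ sigma_half

mu = np.array([1.0, -0.5, 2.0])
x = sample_me_student(mu, np.eye(3), m=6,
                      sample_v=lambda k: rng.lognormal(0.0, 0.3, (k, 2)),
                      n_draws=300_000)
```

Any other elliptical sampler may be substituted in step 2, matching the chosen density generator $g^{(n)}$.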
3.1 A brief review of existing generalisations of the elliptical family
Frahm (2004) introduces the generalised elliptical distributions, which take the traditional form:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU27.png?pub-status=live)
where, unlike in the elliptical case, the random variable R may also take negative values. Here
$\boldsymbol{U}^{(k)}$
is a k-variate,
$1\leq k\leq n,$
random vector that is uniformly distributed, and
$\Lambda$
is some
$n\times k$
real matrix. Note that Frahm’s generalised elliptical distributions share the main disadvantage of the classical elliptical distributions: their marginal distributions preserve the same form up to location and scale transformations.
In Fang et al. (2002), the authors introduced the meta-elliptical distributions, which are defined by a copula of elliptically distributed random variables and generate, for example, multivariate non-symmetric distributions. We note that ME distributions differ essentially from meta-elliptical distributions. The latter use the copula approach, which is notoriously cumbersome, and do not preserve the geometric representation inherent to elliptical distributions. In particular, meta-elliptical distributions do not preserve important properties of elliptical distributions such as closure under linear transformations (see Theorem 3), which holds for ME distributions but not for meta-elliptical ones.
In Ollila & Koivunen (2004), the authors introduce complex random vectors possessing elliptical distributions, extending standard elliptical random vectors to the complex plane.
In Gomez et al. (2007), the authors extend the multivariate slash normal distribution to the multivariate slash elliptical distributions, defined by a stochastic representation as a scale mixture of an elliptically distributed random vector with respect to a power of a uniform random variable. In particular, an n-variate random vector
$\boldsymbol{X}$
has a slash elliptical distribution if it can be represented as
$\boldsymbol{X}=\boldsymbol{\mu}+\Sigma^{1/2}\frac{\boldsymbol{Z}}{\zeta^{1/q}}$
where
$\zeta\sim\exp(2),$
$q>0$
and
$\boldsymbol{Z}$
is an n-variate spherical random vector with density generator
$g^{(n)}.$
The slash elliptical distributions are an interesting generalisation of the normal distribution. However, unlike the proposed ME distribution, the slash elliptical distribution is not closed under affine transformations. It also shares the main disadvantage of elliptical distributions: its univariate marginals preserve the same form up to location and scale transformations.
For arranging flexible tail behaviour in any multivariate radial direction, an interesting generalisation of elliptical distributions was suggested in Kring et al. (2009). However, this distribution is also not closed under linear transformations (cf. Theorem 3), which is important when dealing with financial returns and risks.
Comparing the proposed ME family with other extensions of the elliptical family, we observe that our proposal provides a suitable scheme for modelling loss distributions: when dealing with symmetric marginal loss distributions, which appear quite often in financial losses, we retain a multivariate symmetric distribution, unlike previously suggested families such as the generalised elliptical distributions, which also include skewed distributions such as the asymmetric Student’s t-distribution. Moreover, the proposed family is obtained explicitly from the marginal distributions, without additional assumptions about their dependence structure, as appear in copulas or copula-like distribution functions. We suggest an alternative approach with convenient properties that allow explicit formulas for the tail value at risk (TVaR) of the components and of their sum. The proposal is based on a different distribution of the radius for each elliptical component.
4 The Normal-ME Distribution
Assume
$g^{(n)}(u)=e^{-u};$
then the characteristic function is
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU28.png?pub-status=live)
and the pdf is given by:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn11.png?pub-status=live)
Then,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU29.png?pub-status=live)
meaning that
$X_{1}\sim N(\mu_{1},\sigma_{11}).$
The first univariate marginal distribution is normal. However, the other univariate marginals have different distributions that are distinct from the normal. The characteristic function of
$X_{i},i=2,...,n$
is
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU30.png?pub-status=live)
and the pdf is
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn12.png?pub-status=live)
It can be seen from (12) that the distribution of each marginal
$X_{i}$
is elliptical, with a density generator that is a scale mixture of the normal density generator, and the mixing distributions differ across
$i=2,...,n$
. It can also be noticed that the class of such mixtures is rather rich.
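The classical example of such a mixture is Student’s t: if $V^{-2}\sim\chi_{m}^{2}/m$, then the mixture $VG$ of a standard normal $G$ is exactly $t_{m}$. A sketch (ours) verifying its variance $m/(m-2)$:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n_draws = 5, 500_000

g = rng.standard_normal(n_draws)
v = np.sqrt(m / rng.chisquare(m, n_draws))   # V^{-2} ~ chi^2_m / m
z = v * g                                    # scale mixture of normals = t_m

var_empirical = z.var()
var_theory = m / (m - 2)                     # variance of Student's t_m
```

Other mixing laws for V yield other heavy- or light-tailed symmetric marginals from the same normal base.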
The normal-ME family of distributions admits an attractive representation of its density via the moment generating function (mgf) of an
$(n-1)$
-variate vector with positive components. To show this, consider the
$(n-1)$
-variate vector
$\textbf{U}=(V_{1}^{-2},...,V_{n-1}^{-2})^{T}=(U_{1},...,U_{n-1})^{T}$
with density of the form:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn13.png?pub-status=live)
where
$f_{\textbf{U}^{\ast}}( \cdot )$
is some
$(n-1)$
-variate density function such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU31.png?pub-status=live)
Theorem 6 Suppose, for simplicity,
$\boldsymbol{\mu}=\textbf{0},\Sigma=I_{n},$
where
$I_{n}$
is the
$n$
-dimensional identity matrix. Then,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU32.png?pub-status=live)
where
$\textbf{z}_{-1}=(z_{2},...,z_{n})^{T},$
and
$M_{\textbf{U}^{\ast}}( \cdot )$
is a mgf of vector
$\textbf{U}^{\ast}.$
Proof. From (7), we write
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn14.png?pub-status=live)
where
$\textbf{u}_{n-1}=(u_{1},...,u_{n-1})^{T},d\textbf{u}_{n-1}=du_{1} \cdot\cdot\cdot du_{n-1}.$
4.1 The multi-Student family of distributions
Let us construct an n-variate ME distribution in which the first univariate marginal is normal and the other univariate marginals are Student’s t-distributions with different degrees of freedom.
We choose
$\textbf{U}^{\ast}$
as having the multivariate gamma-type distribution introduced in Krishnamoorthy & Parthasarathy (1951), Section 4. This distribution has the mgf:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn15.png?pub-status=live)
where
$v_{1}>0,...,v_{n-1}>0,v>0,$
$\beta_{i}=s_{i}/(1-s_{i}),i=1,...,n-1$
and
$g(\beta_{1},...,\beta_{n-1})$
is the determinant:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn16.png?pub-status=live)
constructed from
$\rho_{ij},i,j=1,...,n-1,$
the elements of some
$(n-1)\times(n-1)$
correlation matrix. From expression (15), it immediately follows that the mgf of each univariate marginal is
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn17.png?pub-status=live)
that is, the marginal element
$U_{i}^{\ast}$
actually has a gamma distribution with a shape parameter
$v_{i}.$
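This marginal statement is easy to check by Monte Carlo for a gamma distribution (a sketch assuming unit scale, which is what a marginal mgf of the form $(1-s)^{-v_{i}}$ corresponds to; the values of $v_{i}$ and s below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
v_i, s, n_draws = 2.5, 0.3, 400_000          # arbitrary shape and argument, s < 1

u = rng.gamma(shape=v_i, scale=1.0, size=n_draws)
mgf_empirical = np.mean(np.exp(s * u))
mgf_theory = (1.0 - s) ** (-v_i)             # mgf of a unit-scale gamma(v_i)
```

The empirical mgf agrees with the gamma mgf up to Monte Carlo error.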
In the following theorem, we clarify the role of the coefficients
$\rho_{ij},i,j=1,...,n-1,$
of the matrix in (16).
Theorem 7 The correlation between elements
$U_{i}^{\ast},U_{j}^{\ast}$
is equal to:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn18.png?pub-status=live)
Proof. We recall the well-known Leibniz formula for the determinant of a matrix
$A=\left( a_{ij}\right) _{i,j=1}^{n-1}$
:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU33.png?pub-status=live)
where the sum is computed over all permutations
$\alpha$
of the set
$\{1,...,n-1\}$
. Applying this formula to the matrix in (16), we can write that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn19.png?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU34.png?pub-status=live)
Recalling that
$\beta_{i}=s_{i}/(1-s_{i}),$
we can write from (19) that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn20.png?pub-status=live)
Then, using equality (20):
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU35.png?pub-status=live)
Taking into account that (17) implies
$Var(U_{i}^{\ast})=v_{i},i=1,...,n-1,$
we immediately obtain (18).
Let us now return to calculating the normal-multi-Student density based on the dependence structure generated by the multivariate distribution of the vector
$\textbf{U}^{\ast}.$
Substituting (15) into equation (14) of Theorem 6, we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn21.png?pub-status=live)
One can see that
$Z_{1}$
is a normal random variable, which is independent of
$(Z_{2},...,Z_{n})^{T},$
and
$(Z_{2},...,Z_{n})^{T}$
is an
$(n-1)$
-variate dependent random vector with univariate marginals having Student’s t-distributions with
$m_{1}=2v_{1}-1,...,$
$m_{n-1}=2v_{n-1}-1$
degrees of freedom, respectively. The statement about the marginal distribution of
$Z_{i}$
follows after substituting
$-\frac{1}{2}z_{i}^{2}$
into the marginal mgf of
$\textbf{U}^{\ast}$
given in (17) for any
$i=2,...,n$
, respectively. When all
$\rho_{ij}=0,i,j=1,...,n-1$
,
$Z_{2},...,$
$Z_{n}$
are independent.
4.1.1 Special case, $n=3$
The determinant in (16) is
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn22.png?pub-status=live)
and the moment generating function of a generalised bivariate gamma-type distribution from (15) has the form:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU36.png?pub-status=live)
recall that
$v_{1},v_{2},v>0.$
From Theorem 7, it follows that the correlation coefficient is:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU37.png?pub-status=live)
This distribution was considered in Van Dan Berg et al. (Reference Van Dan Berg, Roux and Bekker2013). Now from (21) and (22), we can write the three-variate symmetric density of
$\textbf{Z}$
:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn23.png?pub-status=live)
One can see again that
$Z_{1}$
is a normal random variable, which is independent of
$(Z_{2},Z_{3})^{T},$
and
$(Z_{2},Z_{3})^{T}$
is a dependent bivariate random vector, whose marginals are Student’s t-distributions with
$m_{1}=2v_{1}-1$
and
$m_{2}=2v_{2}-1$
degrees of freedom, respectively, and
$0\leq\rho_{12}^{2}<1$
is a measure of dependence. The statement about the marginal Student’s t-distribution of
$Z_{i}$
follows after substituting
$-\frac{1}{2}z_{i}^{2}$
into the marginal mgf of
$\textbf{U}^{\ast}$
given in (17) for
$i=2,3.$
When
$\rho_{12}=0,$
$Z_{2}$
and
$Z_{3}$
are independent.
Now we show how to calculate the constant c using the alternative method proposed in (13). In fact, we have to evaluate
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU38.png?pub-status=live)
After changing variables:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU39.png?pub-status=live)
we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn24.png?pub-status=live)
where
$_{3}F_{2}(a_{1},a_{2},a_{3};\ b_{1},b_{2};\ z)$
is the generalised hypergeometric function, and
$p=v_{1}-3/2,q=v_{2}-3/2,r=v.$
The integral in (24) was evaluated using the WolframAlpha computational package. Therefore, the constant c in density (23) is equal to:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU40.png?pub-status=live)
Recall that
$_{p}F_{q}(a_{1},...,a_{p};\ b_{1},...,b_{q};\ z)$
is a generalised hypergeometric function of z with p parameters of type 1 and q parameters of type 2 (see, e.g. Masjed-Jamei & Koepf Reference Masjed-Jamei and Koepf2019, equation (2)). In Figure 1, we provide a graph of the densities of a “multi-Student” vector
$\textbf{Z}_{-1}=(Z_{2},Z_{3})^{T}$
, with different and equal marginal degrees of freedom, respectively.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_fig1.png?pub-status=live)
Figure 1. Density of the multi-Student family of distributions with different and equal marginal degrees of freedom.
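The generalised hypergeometric value entering the constant c can also be evaluated numerically rather than with WolframAlpha; a minimal sketch using mpmath, with illustrative parameters (not the paper’s p, q, r), cross-checked against a direct truncated series:

```python
from mpmath import mp, hyp3f2, rf, factorial, mpf

mp.dps = 30  # working precision (decimal digits)

def hyp3f2_series(a1, a2, a3, b1, b2, z, terms=60):
    """Direct truncated series for 3F2, as a cross-check on mpmath."""
    s = mpf(0)
    for k in range(terms):
        s += (rf(a1, k) * rf(a2, k) * rf(a3, k)
              / (rf(b1, k) * rf(b2, k)) * mpf(z)**k / factorial(k))
    return s

# illustrative arguments; the paper uses p = v1 - 3/2, q = v2 - 3/2, r = v
val = hyp3f2(0.5, 1.0, 1.5, 2.0, 2.5, 0.3)
ref = hyp3f2_series(0.5, 1.0, 1.5, 2.0, 2.5, 0.3)

assert abs(val - ref) < mpf(10) ** -18
```

At $z=0$ any $_{p}F_{q}$ equals 1, which gives a quick sanity check on the argument ordering (numerator parameters first, then denominator parameters, then z).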
4.1.2 Special case,
$n=4$
We consider the case of a normal-multi-Student distribution when
$n=4.$
The determinant in (16) is equal to:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU41.png?pub-status=live)
Then, from (21), we can write the four-variate symmetric density of
$\textbf{Z}$
:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU42.png?pub-status=live)
Again we see that
$Z_{1}$
is a normal random variable, which is independent of
$(Z_{2},Z_{3},Z_{4})^{T},$
and
$(Z_{2},Z_{3},Z_{4})^{T}$
is a three-variate dependent random vector, whose marginals are Student’s t-distributions with
$m_{1}=2v_{1}-1,$
$m_{2}=2v_{2}-1$
and
$m_{3}=2v_{3}-1$
degrees of freedom, respectively. From Theorem 7, it follows that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU43.png?pub-status=live)
When
$\rho_{12}=0,\rho_{13}=0$
and
$\rho_{23}=0,$
$Z_{2},$
$Z_{3}$
and
$Z_{4}$
are independent. When only
$\rho_{13}=0$
and
$\rho_{23}=0$
,
$Z_{4}$
is independent of the vector
$(Z_{2},$
$Z_{3}).$
4.2 Examples of application of Theorem 6 not related to distribution (21)
Example 1 Suppose that
$\textbf{U}^{\ast}$
has the bivariate exponential distribution proposed in Balakrishna & Shiji (2014), where the moment generating function is
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU44.png?pub-status=live)
where
$\lambda>0$
and
$\ \beta>0$
are scale parameters of the marginal exponential distributions,
$0<\alpha<1$
is a measure of dependence. Then, using (14), we can write the density of
$\textbf{Z}$
:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU45.png?pub-status=live)
One can see that
$Z_{1}$
is again a normal random variable, which is independent of
$(Z_{2},Z_{3})^{T},$
and
$(Z_{2},Z_{3})^{T}$
is a dependent bivariate Cauchy random vector, whose marginals are Cauchy distributions with different scale parameters
$\lambda$
and
$\beta.$
When
$\alpha\downarrow0$
,
$(Z_{2},Z_{3})^{T}$
has independent components.
Example 2 Suppose that
$\textbf{U}^{\ast}$
has a bivariate
$\chi^{2}$
distribution with m degrees of freedom proposed in Natarajah (2010), where the moment generating function of
$U^{\ast}$
is
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU46.png?pub-status=live)
Then, again using (14), we write the density of
$\textbf{Z}\ $
:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU47.png?pub-status=live)
One can see that
$Z_{1}$
is a normal random variable, which is independent of
$(Z_{2},Z_{3})^{T},$
and
$(Z_{2},Z_{3})^{T}$
is a dependent bivariate
$\chi^{2}$
random vector, whose marginals are
$\chi^{2}$
distributions with m degrees of freedom, and
$\rho$
is a measure of dependence,
$0\leq\rho^{2}\leq1.$
When
$\rho=0,$
$Z_{2}$
and
$Z_{3}$
are independent. For the case
$\rho^{2}\rightarrow1,$
the density of
$(Z_{2},Z_{3})^{T}$
degenerates to a constant.
5 TVaR and Allocation of the Aggregate Loss
Landsman & Valdez (Reference Landsman and Valdez2003) investigated the TVaR risk measure in the context of the elliptical family. They considered various examples of TVaR models and obtained a closed-form expression for a natural allocation of the aggregate loss. We provide a brief review of their results.
Consider a random variable X representing the loss of a portfolio or the risk of a stock. We denote by
$F_{X}(x)$
the cumulative distribution function (cdf) and by
$\bar{F}_{X}(x)=1-F_{X}(x)$
the distributional tail function. TVaR is defined as:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn25.png?pub-status=live)
where
$x_{q}$
represents the q-level quantile of the loss distribution, which means that
$F_{X}(x_{q})=q.$
The TVaR defined in equation (25) represents the expected loss that potentially occurs in unfavourable scenarios.
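For a concrete instance, for a normal loss $X\sim N(\mu,\sigma^{2})$ the definition (25) evaluates in closed form to $TVaR_{q}(X)=\mu+\sigma\,\varphi(z_{q})/(1-q)$, where $\varphi$ is the standard normal density. A quick numerical check of this standard special case (illustrative parameter values):

```python
import numpy as np
from scipy import integrate, stats

mu, sigma, q = 5.0, 2.0, 0.95

# closed form for a normal loss: TVaR_q = mu + sigma * phi(z_q) / (1 - q)
z_q = stats.norm.ppf(q)
tvar_closed = mu + sigma * stats.norm.pdf(z_q) / (1 - q)

# definition (25) as a tail expectation: E[X | X > x_q] with F_X(x_q) = q
x_q = mu + sigma * z_q                       # q-level quantile (VaR)
num, _ = integrate.quad(lambda x: x * stats.norm.pdf(x, loc=mu, scale=sigma),
                        x_q, np.inf)
tvar_numeric = num / (1 - q)

assert abs(tvar_closed - tvar_numeric) < 1e-6
assert tvar_closed > x_q                     # TVaR always exceeds VaR
```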
Assume now that the random vector
$\textbf{X}^{\ast}\sim E_{n}({\boldsymbol{\mu},\boldsymbol{\Sigma},}g^{(n)})$
, that is, elliptical with density generator
$g^{(n)}$
. Then, for the aggregate loss
$S^{\ast}=\sum_{k=1}^{n}X_{k}^{\ast}=\textbf{1}^{T}\textbf{X}^{\ast}\sim E_{1}(\mu_{S^{\ast}},\sigma_{S^{\ast}}^{2},g^{(1)})$
, where
$\textbf{1}$
is the vector with n ones,
$\mu_{S^{\ast}}=\textbf{1}^{T}\boldsymbol{\mu},$
and
$\sigma_{S^{\ast}}^{2}=\textbf{1}^{T}\Sigma\textbf{1}$
are the expectation and the scale values of
$S^{\ast}$
Assume that the expected value of the components of the vector
$\textbf{X}^{\ast}$
exists, that is,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn26.png?pub-status=live)
in Landsman & Valdez (Reference Landsman and Valdez2003), it was shown that:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn27.png?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn28.png?pub-status=live)
with
$z_{s_{q}^{\ast}}=\dfrac{s_{q}^{\ast}-\mu_{S^{\ast}}}{\sqrt{\sigma_{S^{\ast}S^{\ast}}}}$
. Here, the following function
$\overline{G}(x)=c_{1}\int_{x}^{\infty}g^{(1)}(u)du$
was introduced in the cited paper, whose complement
$G(x)=c_{1}\int_{0}^{x}g^{(1)}(u)du$
was called the cumulative generator, and
$s_{q}^{\ast}=VaR_{q}(S^{\ast}).$
We note that the condition (26) guarantees the finiteness of
$\overline{G}(x).$
TVaR provides the following natural capital allocation of the aggregate loss
$S^{\ast}$
:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU48.png?pub-status=live)
In Landsman & Valdez (Reference Landsman and Valdez2003), the components of this allocation were also derived as follows:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU49.png?pub-status=live)
where
$\sigma_{k,S^{\ast}}={\displaystyle\sum_{j=1}^{n}}\sigma_{kj}.\ $
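For the multivariate normal member of the elliptical family, the cumulative-generator factor reduces to $\varphi(z_{s_{q}^{\ast}})/(1-q)$, so the cited allocation takes the explicit form $\mu_{k}+(\sigma_{k,S^{\ast}}/\sigma_{S^{\ast}})\,\varphi(z_{s_{q}^{\ast}})/(1-q)$, and the components sum to $TVaR_{q}(S^{\ast})$. A sketch of this normal special case with illustrative parameters:

```python
import numpy as np
from scipy import stats

# illustrative 3-component normal portfolio (not the paper's parameters)
mu = np.array([5.0, 3.0, 7.0])
Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 9.0, 2.1],
                  [0.8, 2.1, 16.0]])
q = 0.99

mu_S = mu.sum()
sigma_S = np.sqrt(Sigma.sum())          # sqrt(1^T Sigma 1)
sigma_kS = Sigma.sum(axis=1)            # sigma_{k,S} = sum_j sigma_{kj}

z_q = stats.norm.ppf(q)
lam = stats.norm.pdf(z_q) / (1 - q)     # normal cumulative-generator factor

tvar_S = mu_S + sigma_S * lam           # TVaR of the aggregate loss
alloc = mu + (sigma_kS / sigma_S) * lam # TVaR-based allocation per component

# full allocation: components sum to the aggregate TVaR
assert np.isclose(alloc.sum(), tvar_S)
```

The full-allocation property follows because $\sum_{k}\sigma_{k,S^{\ast}}=\sigma_{S^{\ast}}^{2}$.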
Our goal is to provide the capital allocation of aggregate sum
$S=$
${\displaystyle\sum_{j=1}^{n}}X_{j}$
of elements of ME random vector
$\textbf{X}$
$\sim\texttt{ME}_{n}(\boldsymbol{\mu},\Sigma_{\textbf{V}},g^{(n)},f_{\textbf{V}}).$
Denote
$s_{q}=VaR_{q}(S)$
and notice that
$\mu_{S}=\mu_{S^{\ast}}=\sum_{i=1}^{n}\mu_{i}.$
Assume
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU50.png?pub-status=live)
Then,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU51.png?pub-status=live)
where
$v_{0}=1.$
Define
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU52.png?pub-status=live)
Note that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU53.png?pub-status=live)
and recall that
$\Sigma_{\textbf{V}}=\Sigma^{1/2}\Omega_{\textbf{V}}\Sigma^{1/2}.$
Theorem 8 Assume
$\textbf{X}$
$\sim\texttt{ME}_{n}(\boldsymbol{\mu},\Sigma_{\textbf{V}},g^{(n)},f_{\textbf{V}}).$
Then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU54.png?pub-status=live)
Proof. For the proof, please see the supplementary material.
Remark 4 If, instead of the square-root decomposition of the matrix
$\Sigma$
, we use the Cholesky decomposition (10) as in Remark 1, with the matrix:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU55.png?pub-status=live)
Then, using the same method shown in the proof of Theorem 8, one can obtain the TVaR-based decomposition:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU56.png?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU57.png?pub-status=live)
To illustrate the obtained results, consider the case
$n=3.$
Then,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU58.png?pub-status=live)
As the symbolic form of
$\Sigma^{1/2}$
is rather cumbersome, we use the Cholesky decomposition, which is more elegant:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn29.png?pub-status=live)
Then, the non-zero elements of the matrix L are as follows:
$d_{11}=\sigma_{1},d_{21}=\sigma_{2}\rho_{12},d_{22}=\sigma_{2}\sqrt{1-\rho_{12}^{2}},d_{31}=\sigma_{3}\rho_{13},d_{32}=\sigma_{3}\frac{\rho_{23}-\rho_{12}\rho_{13}}{\sqrt{1-\rho_{12}^{2}}}$
,
$d_{33}=\sigma_{3}\sqrt{1-\frac{\rho_{13}^{2}+\rho_{23}^{2}-2\rho_{12}\rho_{13}\rho_{23}}{1-\rho_{12}^{2}}},$
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU59.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU60.png?pub-status=live)
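The closed-form entries $d_{ij}$ above can be checked against a numerical Cholesky factorisation, together with the determinant identity $|L|=\sigma_{1}\sigma_{2}\sigma_{3}\sqrt{1-\rho_{123}^{2}}=|\Sigma|^{1/2}$ used in section 6; a sketch with illustrative $\sigma$’s and $\rho$’s:

```python
import numpy as np

s1, s2, s3 = 2.0, 3.0, 4.0          # illustrative sigma_1, sigma_2, sigma_3
r12, r13, r23 = 0.3, 0.2, 0.5       # illustrative correlations

Sigma = np.array([[s1*s1,     s1*s2*r12, s1*s3*r13],
                  [s1*s2*r12, s2*s2,     s2*s3*r23],
                  [s1*s3*r13, s2*s3*r23, s3*s3]])

L = np.linalg.cholesky(Sigma)       # lower triangular, Sigma = L L^T

# closed-form entries from (29)
d11 = s1
d21, d22 = s2*r12, s2*np.sqrt(1 - r12**2)
d31 = s3*r13
d32 = s3*(r23 - r12*r13)/np.sqrt(1 - r12**2)
d33 = s3*np.sqrt(1 - (r13**2 + r23**2 - 2*r12*r13*r23)/(1 - r12**2))

assert np.allclose(L, [[d11, 0, 0], [d21, d22, 0], [d31, d32, d33]])

# determinant identity: |L| = s1*s2*s3*sqrt(1 - rho_123^2) = |Sigma|^(1/2)
rho123_sq = r12**2 + r13**2 + r23**2 - 2*r12*r13*r23
assert np.isclose(d11*d22*d33, s1*s2*s3*np.sqrt(1 - rho123_sq))
assert np.isclose((d11*d22*d33)**2, np.linalg.det(Sigma))
```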
6 Numerical Illustration
In this section, we show how the portfolio decomposition with TVaR can be computed numerically in the case of the ME family. We assume that
$\textbf{X}\sim \texttt{ME}_{3}(\boldsymbol{\mu},\Sigma_{\textbf{V}},g^{(3)},f_{\textbf{V}})$
, where
$g^{(3)}(u)=\exp(-u)$
and
$f_{\textbf{V}}$
is as in the special case of section 4.1.1. In other words, the density has the form:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqn30.png?pub-status=live)
where
$f_{\textbf{Z}}(\textbf{z})$
is given in (23). Recall that the lower triangle matrix of Cholesky decomposition has form (29), we can write that determinant of L,
$|L|=\sigma_{1}\sigma_{2}\sigma_{3}\sqrt{1-\rho_{123}^{2}}=|\Sigma|^{1/2}$
, and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU61.png?pub-status=live)
where
$\rho_{123}^{2}=\rho_{12}^{2}-2\rho_{12}\rho_{13}\rho_{23}+\rho_{13}^{2}+\rho_{23}^{2}.$
Then,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU62.png?pub-status=live)
where
$z_{2}=\Big(-\rho_{12}\frac{(x_{1}-\mu_{1})}{\sigma_{1}}+\frac{(x_{2}-\mu_{2})}{\sigma_{2}}\Big)/\sqrt{1-\rho_{12}^{2}}$
and
$z_{3}=\Big(-\frac{(\rho_{13}-\rho_{12}\rho_{23})}{\sqrt{1-\rho_{12}^{2}}}\frac{(x_{1}-\mu_{1})}{\sigma_{1}}-\frac{(\rho_{23}-\rho_{12}\rho_{13})}{\sqrt{1-\rho_{12}^{2}}}\frac{(x_{2}-\mu_{2})}{\sigma_{2}}+\sqrt{1-\rho_{12}^{2}}\,\frac{(x_{3}-\mu_{3})}{\sigma_{3}}\Big)/\sqrt{1-\rho_{123}^{2}}. $
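These componentwise expressions amount to the change of variables $\textbf{z}=L^{-1}(\textbf{x}-\boldsymbol{\mu})$, and can be verified numerically via a triangular solve (illustrative parameter values; the explicit formulas below are obtained by inverting the lower-triangular L from (29)):

```python
import numpy as np

s = np.array([2.0, 3.0, 4.0])            # illustrative sigma_1..3
r12, r13, r23 = 0.3, 0.2, 0.5            # illustrative correlations
mu = np.array([5.0, 3.0, 7.0])
x = np.array([6.0, 2.0, 9.5])

R = np.array([[1.0, r12, r13], [r12, 1.0, r23], [r13, r23, 1.0]])
Sigma = np.outer(s, s) * R
L = np.linalg.cholesky(Sigma)

z = np.linalg.solve(L, x - mu)           # z = L^{-1}(x - mu)

# componentwise expressions, from back-substitution through L
rho123_sq = r12**2 + r13**2 + r23**2 - 2*r12*r13*r23
z1 = (x[0]-mu[0])/s[0]
z2 = (-r12*(x[0]-mu[0])/s[0] + (x[1]-mu[1])/s[1]) / np.sqrt(1 - r12**2)
z3 = (-(r13-r12*r23)/np.sqrt(1-r12**2)*(x[0]-mu[0])/s[0]
      - (r23-r12*r13)/np.sqrt(1-r12**2)*(x[1]-mu[1])/s[1]
      + np.sqrt(1-r12**2)*(x[2]-mu[2])/s[2]) / np.sqrt(1 - rho123_sq)

assert np.allclose(z, [z1, z2, z3])
```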
Now we assume that
$m_{1}=2,m_{2}=3,v=2.5,\;\rho_{12}^{2}=0.879,\;{\boldsymbol{\mu}}=(5,3,7)^{T}$
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU63.png?pub-status=live)
Then,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_eqnU64.png?pub-status=live)
and the portfolio decomposition is given in Table 1.
Table 1. TVaR-based capital allocation of aggregate risk
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220329105624827-0132:S1748499521000038:S1748499521000038_tab1.png?pub-status=live)
One can see that even for an aggregate risk whose components have different shapes of distributions, the TVaR-based capital allocation can be computed numerically. When investigating portfolio risks, the ME family allows one to decompose the total risk naturally into the sum of the individual risks of the portfolio constituents. This is particularly meaningful in practice when one is interested in computing capital requirements for each institutional business line, where the business lines are assumed to be correlated, and in allocating the contributions of the individual risks to the aggregate risk.
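The decomposition can also be approximated by simulation; the following sketch uses a multivariate normal in place of the ME density (30) purely to illustrate the estimator $E[X_{k}\mid S>s_{q}]$ and the full-allocation property:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative proxy model (not the ME density of section 6)
mu = np.array([5.0, 3.0, 7.0])
Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 9.0, 2.1],
                  [0.8, 2.1, 16.0]])
q = 0.95
n = 400_000

X = rng.multivariate_normal(mu, Sigma, size=n)
S = X.sum(axis=1)
s_q = np.quantile(S, q)                  # empirical VaR_q(S)

tail = S > s_q
tvar_S = S[tail].mean()                  # empirical TVaR_q(S)
alloc = X[tail].mean(axis=0)             # E[X_k | S > s_q], per component

# the allocation is "full": components sum to the aggregate TVaR
assert np.isclose(alloc.sum(), tvar_S)
```

The full-allocation property holds here by construction, since summing the conditional means over k reproduces the conditional mean of S on the same tail event.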
Alternatively, one might be interested in the problem of optimal portfolio selection (OPS) using TVaR. Landsman & Makov (Reference Landsman and Makov2011) have shown that the elliptical family is very convenient for the solution of this problem with TVaR, and indeed with any translation-invariant and positive-homogeneous risk measure, of which TVaR is a main representative, because the OPS problem reduces to a linear plus square-root-quadratic functional. In the case of the ME family of distributions, which is a more complicated case, the same phenomenon can still be exploited conditionally on the vector
$\textbf{V}$
, with subsequent averaging of the result with respect to the probability measure with density
$f_{\textbf{V}}.$
The details will be developed in future research.
7 Conclusion
We proposed the ME family of distributions, which generalises the elliptical family of distributions. In particular, the proposed model allows each of the elliptical random variables to have a different characteristic (or density) generator and is based on a different distribution of the radius of each elliptical component. We investigated and calculated the covariance structure of this family and provided the expectation of the linear regression vector. As special cases, we offered a multi-Student subfamily of distributions and several notable examples. We investigated the problem of capital allocation of the aggregate sum of the components of this family and obtained the analytic form of this allocation based on the TVaR risk measure. We also presented a numerical illustration of the obtained results.
Acknowledgements
This research was supported by the Israel Science Foundation (Grant N 1686/17). The authors are grateful to the anonymous reviewer for their useful comments.