
Conditional tail independence in Archimedean copula models

Published online by Cambridge University Press:  01 October 2019

Michael Falk*
Affiliation:
University of Würzburg
Simone A. Padoan*
Affiliation:
Bocconi University of Milan
Florian Wisheckel*
Affiliation:
University of Würzburg
* Postal address: Chair of Mathematics VIII, University of Würzburg, Emil-Fischer-Str. 30, 97074 Würzburg, Germany.
** Postal address: Department of Decision Sciences, Bocconi University of Milan, via Roentgen 1, 20136 Milan, Italy.

Abstract

Consider a random vector $\textbf{U}$ whose distribution function coincides in its upper tail with that of an Archimedean copula. We show that, under a mild condition on the generator function, the conditional distribution of $\textbf{U}$, given one of its components, has asymptotically independent upper tails, no matter what the unconditional tail behavior is. This finding is extended to Archimax copulas.

Type
Research Papers
Copyright
© Applied Probability Trust 2019 

1. Introduction

Let $\textbf{U}=(U_1,\dots,U_d)$ be a random vector (rv) whose distribution function (df) F is in the domain of attraction of a multivariate extreme value df G, denoted by $F\in\mathcal D(G)$ , i.e. there are constants $\textbf{a}_n=(a_{n1},\dots,a_{nd})>{\textbf{0}}\in{\mathbb{R}}^d$ , $\textbf{b}_n=(b_{n1},\dots,b_{nd})\in{\mathbb{R}}^d$ , $n\in{\mathbb{N}}$ , such that for each $\textbf{x}=(x_1,\dots,x_d)\in{\mathbb{R}}^d$ ,

$$F^n(\textbf{a}_n\textbf{x}+\textbf{b}_n) \to_{n\to\infty} G(\textbf{x}).$$

Note that all operations on vectors such as $\textbf{x}+\textbf{y}$ , $\textbf{x}\textbf{y}$ , etc. are always meant componentwise.

The rv $\textbf{U}$ , or, equivalently, the df F, is said to have asymptotically independent (upper) tails if

$$G(\textbf{x})=\prod_{i=1}^d G_i(x_i),$$

where $G_i$ , $1\le i\le d$ , denote the univariate margins of G.

We require in this paper that the df F of $\textbf{U}$ coincides in its upper tail with a copula, say C, i.e. there exists $\textbf{U}_0=(u_{01},\dots,u_{0d})\in(0,1)^d$ such that

$$F(\textbf{U})=C(\textbf{U}),\qquad \textbf{U}\in[\textbf{U}_0,{\textbf{1}}]\subset{\mathbb{R}}^d.$$

Each univariate margin of a copula is the uniform distribution $H(u)=u$ for $0\le u\le 1$ and, thus, each univariate margin of F equals H(u) for $u\in[v_0,1]$ , where $v_0\,:\!=\max_{1\le i\le d}u_{0i}$ .

The significance of copulas is due to Sklar’s theorem [11, 12], by which an arbitrary multivariate df can be represented as a copula together with its univariate margins. The dependence structure among the margins of an arbitrary rv is, therefore, determined by the copula. For an introduction to copulas we refer to the book by Nelsen [10].

We require in this paper that the upper tail of C is that of an Archimedean copula $C_\varphi$ , i.e. there exists a convex and strictly decreasing function $\varphi\,:\,(0,1]\to[0,\infty)$ , with $\varphi(1)=0$ , such that

$$C_\varphi(\textbf{U}) =\varphi^{-1}(\varphi(u_1)+\dots+\varphi(u_d))$$

for $\textbf{U}\in[\textbf{U}_0,{\textbf{1}}]\subset{\mathbb{R}}^d$ , where $\textbf{U}_0=(u_{01},\dots,u_{0d})\in(0,1)^d$ .

A prominent example is $\varphi_p(s)\,:\!=(1-s)^p$ , $s\in[0,1]$ , where $p\ge 1$ . In this case we obtain

(1.1) $$\begin{equation} C_{\varphi_p}(\textbf{U})= 1-\Bigg(\sum_{i=1}^d(1-u_i)^p \Bigg)^{1/p}, \qquad \textbf{U}\in[\textbf{U}_0,{\textbf{1}}]. \label{eqn1} \end{equation}$$

Note that

$$C_{\varphi_p}(\textbf{U})\,:\!=\max\Bigg(0,1-\Bigg(\sum_{i=1}^d(1-u_i)^p \Bigg)^{1/p}\Bigg),\qquad \textbf{U}\in[0,1]^d,$$

defines a multivariate df only in dimension $d=2$; see, e.g., [9, Examples 2.1, 2.2]. But one can find, for arbitrary dimension $d\ge 2$, an rv whose df satisfies equation (1.1); see, e.g., [4, (2.15)]. This is the reason why we require the Archimedean structure of $C_\varphi$ only on some upper interval $[\textbf{U}_0,{\textbf{1}}]$ and do not speak of $C_\varphi$ as a copula, but rather as a distribution function.
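
To make this concrete, here is a minimal Python sketch (added for illustration; the box and the value $p=2$ are arbitrary choices) that computes the $C_{\varphi_p}$-volume of a box in dimension $d=3$ by inclusion–exclusion over its vertices. A genuine df assigns nonnegative mass to every box, but the volume printed below is negative.

```python
import itertools
import numpy as np

def C_phi_p(u, p):
    """Candidate df max(0, 1 - (sum_i (1-u_i)^p)^(1/p)) from equation (1.1)."""
    u = np.asarray(u, dtype=float)
    return max(0.0, 1.0 - np.sum((1.0 - u) ** p) ** (1.0 / p))

def box_volume(lower, upper, p):
    """C-volume of the box [lower, upper] by inclusion-exclusion over its 2^d vertices."""
    vol = 0.0
    for vertex in itertools.product(*zip(lower, upper)):
        n_lower = sum(v == lo for v, lo in zip(vertex, lower))
        vol += (-1) ** n_lower * C_phi_p(vertex, p)
    return vol

# For d = 3 and p = 2 the box [0.3, 0.9]^3 receives negative mass (approx -0.016),
# so C_{phi_p} cannot be a df on all of [0,1]^3.
print(box_volume([0.3, 0.3, 0.3], [0.9, 0.9, 0.9], p=2))
```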

The behavior of $C_\varphi(\textbf{U})$ for $\textbf{U}$ close to ${\textbf{1}}\in{\mathbb{R}}^d$ determines the upper tail behavior of the components of $\textbf{U}$ . Precisely, suppose that $C_\varphi\in\mathcal D(G)$ , i.e.

$${\Big[C_\varphi\Big({\textbf{1}}+\frac{\textbf{x}}n\Big)\Big]^n}\to_{n\to\infty} G(\textbf{x}),\qquad \textbf{x}\le{\textbf{0}}\in{\mathbb{R}}^d,$$

where the norming constants are prescribed by the univariate margins of $C_\varphi$ , which is the df $H(u)=u$ , $u\in[v_0,1]$ . We obviously have, for arbitrary $x\le 0$ and n large enough,

$${\Big[H\Big(1+\frac xn \Big)\Big]^n} = \Big(1+\frac xn \Big)^n \to\exp(x).$$

The multivariate max-stable df G, consequently, has standard negative exponential margins $G_i(x)=\exp(x)$ , $x\le 0$ .

Moreover, there exists a norm ${\Vert\cdot\Vert}_{D}$ on ${\mathbb{R}}^d$ such that $G(\textbf{x})=\exp(\!-{\Vert\textbf{x}\Vert}_{D})$ for $\textbf{x}\le {\textbf{0}}\in{\mathbb{R}}^d$; see, e.g., [4]. This norm ${\Vert\cdot\Vert}_{D}$ describes the asymptotic tail dependence of the margins of $C_\varphi$; the index D, therefore, means dependence. In particular, ${\Vert\cdot\Vert}_{D}={\Vert\cdot\Vert}_1$ is the case of (asymptotic) independence of the margins, whereas ${\Vert\cdot\Vert}_{D}={\Vert\cdot\Vert}_\infty$ yields their total dependence. For the df $C_{\varphi_p}$ in (1.1) we obtain, for example, for n large,

$$\begin{align*} {\Big[C_{\varphi_p}\Big({\textbf{1}}+\frac{\textbf{x}}n \Big)\Big]^n} &= \Bigg(1-\frac 1n\Bigg(\sum_{i=1}^d{\vert x_i\vert}^p\Bigg)^{1/p} \Bigg)^n\\ &\to_{n\to\infty} \exp(\!-{\Vert\textbf{x}\Vert}_p),\qquad \textbf{x}=(x_1,\dots,x_d)\le{\textbf{0}}\in{\mathbb{R}}^d, \end{align*}$$

where ${\Vert\textbf{x}\Vert}_p=(\sum_{i=1}^d{\vert x_i\vert}^p)^{1/p}$ , $p\ge 1$ , is the logistic norm on ${\mathbb{R}}^d$ . In this case we have tail independence only for $p=1$ .
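
As a quick numerical illustration of this limit (a sketch added here, with an arbitrary choice of p, n and $\textbf{x}$), one can compare $[C_{\varphi_p}({\textbf{1}}+\textbf{x}/n)]^n$ with $\exp(\!-{\Vert\textbf{x}\Vert}_p)$ for a moderately large n:

```python
import numpy as np

def logistic_norm(x, p):
    """||x||_p = (sum_i |x_i|^p)^(1/p)."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

p, n = 2.0, 10_000
x = np.array([-1.0, -0.5, -2.0])                 # x <= 0 componentwise

# On the upper interval, C_{phi_p}(1 + x/n) = 1 - ||x||_p / n by equation (1.1).
lhs = (1.0 - logistic_norm(x, p) / n) ** n
rhs = np.exp(-logistic_norm(x, p))
print(lhs, rhs)                                   # both are approximately 0.101
```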

In this paper we investigate the problem of whether conditioning on a margin $U_j=u$ has an influence on the tail dependence of the remaining margins $U_1,\dots,U_{j-1},U_{j+1},\dots,U_d$. Actually, we will show that the rv $(U_1,\dots,U_{j-1},U_{j+1},\dots,U_d)$, conditional on $U_j=u$, has independent tails for each choice of j, no matter what the unconditional tail behavior is; see Section 3. This is achieved under a mild condition on the generator function $\varphi$, which is introduced in Section 2.

2. Condition on the generator function

Our results are achieved under the following condition on the generator function $\varphi$ . There exists a number $p\ge 1$ such that

(C0) $$\lim_{s\downarrow 0}\frac{\varphi(1-sx)}{\varphi(1-s)}=x^p,\qquad x>0\tag{C0}\label{C0}.$$

Remark 2.1. The exponent p in condition (C0) cannot be less than one, by the convexity of $\varphi$, which can easily be seen as follows. We have, for arbitrary $\lambda,x,y\in(0,1]$,

$$\varphi(\lambda x+(1-\lambda)y) \le\lambda \varphi(x)+(1-\lambda)\varphi(y).$$

Setting $x=1-s$ and $y=1$ , we obtain

$$\varphi(\lambda(1-s)+1-\lambda)=\varphi(1-\lambda s) \le\lambda \varphi(1-s) ,$$

and thus

$$\lim_{s\downarrow 0}\frac{\varphi(1-\lambda s)}{\varphi(1-s)} =\lambda^p\le \lambda.$$

But this requires $p\ge 1$ .

A df $C_\varphi$ whose generator satisfies condition (C0) is in the domain of attraction of a multivariate extreme value distribution. Precisely, we have the following result.

Proposition 2.1. Suppose that the generator $\varphi$ satisfies condition (C0). Then we have $C_\varphi\in\mathcal D(G)$ , where $G(\textbf{x})=\exp(\!-{\Vert\textbf{x}\Vert}_p)$ , $\textbf{x}\le{\textbf{0}}\in{\mathbb{R}}^d$ .

Proof. First we show that condition (C0) implies, for $x>0$ ,

(2.1) $$\begin{equation}\lim_{s\downarrow 0}\frac{1-\varphi^{-1}(sx)}{1-\varphi^{-1}(s)} = x^{1/p}.\label{eqn2} \end{equation}$$

Choose $\delta_{sx},\delta_s\in(0,1)$ such that

$$\varphi(1-\delta_{sx})=sx,\quad \varphi(1-\delta_{s})=s,$$

i.e.

$$\varphi^{-1}(sx)= 1-\delta_{sx},\quad \varphi^{-1}(s)= 1-\delta_{s}.$$

Condition (C0) implies, for $s\downarrow 0$ ,

$$x=\frac{\varphi(1-\delta_{sx})}{\varphi(1-\delta_s)}=\frac{\varphi\Big(1-\delta_s\frac{\delta_{sx}} {\delta_s}\Big)}{\varphi(1-\delta_s)}\sim\Big(\frac{\delta_{sx}}{\delta_s}\Big)^p,$$

where $\sim$ means that the ratio of the left-hand side and the right-hand side converges to one as s converges to zero. Taking $p$th roots, this yields

$$\lim_{s\downarrow 0}\frac{1-\varphi^{-1}(sx)}{1-\varphi^{-1}(s)} = x^{1/p}.$$

Next, we show that, for $\textbf{x}=(x_1,\dots,x_d)\le{\textbf{0}}\in{\mathbb{R}}^d$ ,

$$\lim_{n\to\infty}{C_\varphi^n}\Big({\textbf{1}}+\frac{\textbf{x}}n\Big) =\lim_{n\to\infty}\bigg[\varphi^{-1}\Bigg(\sum_{i=1}^d\varphi\Big(1+\frac{x_i}n\Big) \Bigg)\bigg]^n =\exp(\!-{\Vert\textbf{x}\Vert}_p).$$

Taking logarithms on both sides, this is equivalent to

$$\lim_{n\to\infty} n\Bigg[1-\varphi^{-1}\Bigg(\sum_{i=1}^d\varphi\Big(1+\frac{x_i}n\Big)\Bigg)\Bigg] = {\Vert\textbf{x}\Vert}_p.$$

Write

$$\frac 1n = 1-\varphi^{-1}\Big(\varphi\Big(1-\frac 1n\Big)\Big).$$

Then

$$\begin{align*}n\Bigg[1-\varphi^{-1}\Bigg(\sum_{i=1}^d\varphi\Big(1+\frac{x_i}n\Big) \Bigg)\Bigg] &=\frac{1-\varphi^{-1}\Big(\sum_{i=1}^d\varphi\big(1+\frac{x_i}n\big) \Big)}{1-\varphi^{-1}\big(\varphi\big(1-\frac 1n\big)\big)}\\&= \frac{1-\varphi^{-1}\Big(\varphi\big(1-\frac 1n\big)\sum_{i=1}^d\frac{\varphi\big(1+\frac{x_i}n\big)}{\varphi\big(1-\frac 1n \big)} \Big)}{1-\varphi^{-1}\big(\varphi\big(1-\frac 1n\big)\big)}\\&\to_{n\to\infty} \Bigg(\sum_{i=1}^d(\!-x_i)^p \Bigg)^{1/p}\end{align*}$$

by condition (C0) and equation (2.1), which is the assertion. □

Condition (C0) on $\varphi$ is, for example, implied by the condition

(C1) $$\lim_{s\downarrow 0} \frac{\varphi(1-s)}{s^p}=A\tag{C1}\label{C1}$$

for some constant $A>0$ and $p\ge 1$, which is obviously satisfied by the generator $\varphi_p(s)=(1-s)^p$.

Condition (C1) is, by l’Hôpital’s rule, implied by

(C2) $$-\lim_{s\downarrow 0} \frac{\varphi'(1-s)}{s^{p-1}}=pA.\tag{C2}\label{C2}$$

As a consequence, (C2) implies the condition

(C3) $$-\lim_{s\downarrow 0} \frac{s\varphi'(1-s)}{\varphi(1-s)}=p.\tag{C3}\label{C3}$$

Charpentier and Segers [2, Theorem 4.1] showed, among other things, that a copula $C_\varphi$ whose generator satisfies (C3) is in the domain of attraction of $G(\textbf{x})=\exp(\!-{\Vert\textbf{x}\Vert}_p)$, $\textbf{x}\le{\textbf{0}}\in{\mathbb{R}}^d$; see also [4, Corollary 3.1.15]. In this case we have tail independence only if $p=1$.

The Clayton family with generator $\varphi_\vartheta(t)\,:\!= (t^{-\vartheta}-1)/\vartheta$ and $\vartheta > 0$ satisfies condition (C2) with $p=1$ and $A=1$ . As a consequence, we have independent tails for each $\vartheta>0$ .

The Frank family has the generator

$$\varphi_\vartheta(t)\,:\!= -\log\bigg(\frac{{\rm e}^{-\vartheta t}-1}{{\rm e}^{-\vartheta}-1}\bigg),\qquad \vartheta>0.$$

It satisfies condition (C0) with $p=1$ , i.e. we again have independent tails for each $\vartheta>0$ .

Consider, on the other hand, the generator $\varphi_\vartheta(t)\,:\!=(\!-\log(t))^\vartheta$ , $\vartheta\ge 1$ , of the Gumbel–Hougaard family of Archimedean copulas. This generator satisfies condition (C0) with $p=\vartheta$ , and thus we have tail independence only for $\vartheta=1$ .
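
The following Python sketch (added here for illustration; the parameter value $\vartheta=3$ and the points x are arbitrary choices) checks condition (C0) numerically for these three families by evaluating $\varphi(1-sx)/\varphi(1-s)$ for a small s and comparing it with $x^p$:

```python
import numpy as np

theta = 3.0
# generator phi(t) and the exponent p expected in condition (C0)
generators = {
    "Clayton": (lambda t: (t ** -theta - 1.0) / theta, 1.0),
    "Frank": (lambda t: -np.log((np.exp(-theta * t) - 1.0) / (np.exp(-theta) - 1.0)), 1.0),
    "Gumbel-Hougaard": (lambda t: (-np.log(t)) ** theta, theta),
}

s = 1e-5
for name, (phi, p) in generators.items():
    for x in (0.5, 2.0):
        ratio = phi(1.0 - s * x) / phi(1.0 - s)
        print(f"{name:16s} x={x}: ratio={ratio:.4f}  x^p={x ** p:.4f}")
```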

3. Main theorem

In this section we establish conditional tail independence of the margins of $C_\varphi$ if the generator $\varphi$ satisfies condition (C0). First, we compute the conditional df of $(U_1,\dots,U_{j-1},U_{j+1},\dots,U_d)$ given that $U_j=u$ .

Lemma 3.1. We have, for $j\in\{{1,\dots,d}\}$ and $\textbf{U}=(u_1,\dots,u_{j-1},u,u_{j+1},\dots,u_d)\in[\textbf{U}_0,{\textbf{1}})$ ,

$$\begin{align*}H_{j,u}(u_1,\dots,u_{j-1},u_{j+1},\dots,u_d)&\,:\!=\mathrm{P}(U_i\le u_i,\,1\le i\le d,\, i\not=j\mid U_j=u)\\&= \frac{\varphi'(u)}{\varphi'({C_\varphi(\textbf{U})})}\\&=\frac{\varphi'(u)}{\varphi'(\varphi^{-1}(\varphi(u)+\sum_{1\le i\le d, \,i\not=j}\varphi(u_i) ) )},\end{align*}$$

provided the derivative $\varphi'(v)$ exists in a neighborhood of u, that $\varphi'$ is continuous at u with $\varphi'(u)\not=0$ , and that ${C_{\varphi}(\textbf{U})}\not=0$ as well.

Proof. For notational simplicity we establish the result for the choice $j=d$ . We have, for $\textbf{U}=(u_1,\dots,u_d)\in[\textbf{U}_0,{\textbf{1}})$ ,

$$\begin{align*}&\mathrm{P}(U_i\le u_i,\,1\le i\le d-1\mid U_d=u_d)\\&=\lim_{\varepsilon\downarrow 0}\frac{\mathrm{P}(U_i\le u_i,\,1\le i\le d-1,\, U_d\in[u_d,u_d+\varepsilon])}{\mathrm{P}(U_d\in[u_d,u_d+\varepsilon])}\\&=\lim_{\varepsilon\downarrow 0} \frac{\mathrm{P}(U_i\le u_i,\,1\le i\le d-1,\, U_d\le u_d+\varepsilon) - \mathrm{P}(U_i\le u_i,\,1\le i\le d-1,\, U_d\le u_d)}{\varepsilon}\\&=\lim_{\varepsilon\downarrow 0}\frac{\varphi^{-1}(\sum_{i=1}^{d-1}\varphi(u_i)+ \varphi(u_d+\varepsilon) )-\varphi^{-1}(\sum_{i=1}^d\varphi(u_i) )}{\varepsilon}\\ &= (\varphi^{-1})'\Bigg(\sum_{i=1}^d\varphi(u_i) \Bigg)\,\varphi'(u_d)\\ &= \frac{\varphi'(u_d)}{\varphi'(\varphi^{-1}(\sum_{i=1}^d\varphi(u_i) ) )}\\&= \frac{\varphi'(u_d)}{\varphi'(C_\varphi(\textbf{U}))},\end{align*}$$

which is the assertion. □
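
As a sanity check of Lemma 3.1 (a numerical sketch added here; the Clayton generator and the values of $\vartheta$, u and $u_i$ below are arbitrary choices), one can compare the closed form with a finite-difference approximation of the conditional probability, mirroring the first step of the proof:

```python
import numpy as np

theta = 2.0
phi = lambda t: (t ** -theta - 1.0) / theta              # Clayton generator
phi_inv = lambda s: (1.0 + theta * s) ** (-1.0 / theta)
dphi = lambda t: -t ** (-theta - 1.0)                     # phi'(t)

def C(u):
    """Archimedean df C_phi(u) = phi^{-1}(phi(u_1) + ... + phi(u_d))."""
    return phi_inv(np.sum(phi(np.asarray(u, dtype=float))))

def H_cond(us, u):
    """Lemma 3.1: P(U_i <= u_i, 1 <= i <= d-1 | U_d = u)."""
    return dphi(u) / dphi(C(list(us) + [u]))

us, u, eps = [0.8, 0.9], 0.7, 1e-7
closed_form = H_cond(us, u)
finite_diff = (C(us + [u + eps]) - C(us + [u])) / eps     # uses P(U_d in [u, u+eps]) = eps
print(closed_form, finite_diff)                           # both are approximately 0.61
```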

Note that the univariate margins of the df $H_{j,u}$ , $1\le j\le d$ , coincide in their upper tails, where they are equal to

$$H_u(v)\,:\!=\frac{\varphi'(u)}{\varphi'(\varphi^{-1}(\varphi(u)+\varphi(v) ))},\qquad v_0\le v\le 1,$$

with $v_0=\max_{1\le i\le d}u_{0i}$ .

The upper endpoint of $H_u$ is one, and therefore, if the df $H_u$ is in the domain of attraction of a univariate extreme value df G, the natural candidates are the negative Weibull distributions $G_\alpha(x)\,:\!=\exp(\!-{\vert x\vert}^\alpha)$, $x\le 0$, with $\alpha>0$. Note that $\alpha=1$ yields the standard negative exponential distribution.

The univariate df $H_u$ is in the domain of attraction of $G_\alpha$ for some $\alpha>0$ if and only if

$$\lim_{s\downarrow 0}\frac{1-H_u(1-sx)}{1-H_u(1-s)}=x^\alpha,\qquad x>0;$$

see, e.g., [6, Theorem 2.1.2].

Lemma 3.2. Suppose that the second derivative of $\varphi$ exists in a neighborhood of $u>v_0$, and that it is continuous at u with $\varphi''(u)\not=0\not=\varphi'(u)$. The univariate df $H_u$ satisfies $H_u\in\mathcal D(G_p)$ for some $p\ge 1$ if and only if $\varphi$ satisfies condition (C0).

Proof. Applying Taylor’s formula twice shows that

$$\begin{align*}1-H_u(1-s) &= \frac{\varphi'(\varphi^{-1}(\varphi(u)+\varphi(1-s) ) )-\varphi'(u)}{\varphi'(\varphi^{-1}(\varphi(u)+\varphi(1-s)))}\\&\sim \frac{\varphi''(u)}{\varphi'(u)^2}\varphi(1-s)\end{align*}$$

as $s\downarrow 0$. The ratio $(1-H_u(1-sx))/(1-H_u(1-s))$ therefore has the same limit as $\varphi(1-sx)/\varphi(1-s)$, and the assertion follows from the preceding characterization of $\mathcal D(G_\alpha)$. □

The next result is our main theorem.

Theorem 3.1. Suppose the generator $\varphi$ of $C_\varphi$ satisfies condition (C0). Then, if $u> u_{0j}$ and $\varphi$ satisfies the differentiability conditions in Lemma 3.2, we obtain, for $\textbf{x}=({x_1,\dots,x_{d-1}})\le{\textbf{0}}\in{\mathbb{R}}^{d-1}$,

$${[H_{j,u}({\textbf{1}}+ca_n\textbf{x})]^n}\to_{n\to\infty}\exp\Bigg(\!-\sum_{i=1}^{d-1}(\!-x_i)^p \Bigg),$$

with $c\,:\!= (\varphi'(u)^2/\varphi''(u))^{1/p}$ and $a_n\,:\!=1-\varphi^{-1}(1/n)$ for $n \ge n_0$, where $n_0$ is chosen so large that $1/n_0$ lies in the range of $\varphi$.

Proof. For notational simplicity we establish this result for $j=d$ . It is sufficient to establish that, for $\textbf{x}=({x_1,\dots,x_{ d-1}})\le{\textbf{0}}\in{\mathbb{R}}^{d-1}$ ,

(3.1) $$\begin{equation}n(1-H_{d,u}({\textbf{1}}+ca_n\textbf{x}))\to_{n\to\infty} \sum_{i=1}^{d-1}(\!-x_i)^p.\label{eqn3} \end{equation}$$

We know from Lemma 3.1 that, for $(u_1,\dots,u_{d-1},u)\in[\textbf{U}_0,{\textbf{1}}]$ ,

(3.2) $$\begin{equation}H_{d,u}(u_1,\dots,u_{d-1})=\frac{\varphi'(u)}{\varphi'(\varphi^{-1}(\varphi(u)+\sum_{i=1}^{d-1}\varphi(u_i) ))}.\label{eqn4} \end{equation}$$

As a consequence we obtain, with $(u_1,\dots,u_{d-1})={\textbf{1}}+c a_n\textbf{x}$ ,

$$\begin{align*}&n(1-H_{d,u}({\textbf{1}}+ca_n\textbf{x}))\\&= n\left(1-\frac{\varphi'(u)}{\varphi'(\varphi^{-1}(\varphi(u)+\sum_{i=1}^{d-1}\varphi(1+ca_n x_i)))} \right)\\&= n \frac{\varphi'(\varphi^{-1}(\varphi(u)+\sum_{i=1}^{d-1}\varphi(1+ca_n x_i) ) )-\varphi'(u)}{\varphi'(\varphi^{-1}(\varphi(u)+\sum_{i=1}^{d-1}\varphi(1+ca_n x_i)))},\end{align*}$$

where the denominator converges to $\varphi'(u)$ as n increases because $a_n\downarrow 0$ .

Taylor’s formula yields that the numerator equals

$$\varphi''(\vartheta_n)\Bigg(\varphi^{-1}\Bigg(\varphi(u)+\sum_{i=1}^{d-1}\varphi(1+ca_nx_i) \Bigg)-u \Bigg),$$

where $\varphi''(\vartheta_n)$ converges to $\varphi''(u)$ as n increases. Applying Taylor’s formula again yields

$$\begin{align*}\varphi^{-1}\bigg(\varphi(u)+\sum_{i=1}^{d-1}\varphi(1+ca_n x_i)\bigg)-u&=\frac 1{\varphi'(\varphi^{-1}(\xi_n))}\sum_{i=1}^{d-1}\varphi(1+ca_n x_i),\end{align*}$$

where $\xi_n$ converges to $\varphi(u)$ as n increases. But

$${n \sum_{i=1}^{d-1}\varphi(1+ca_n x_i) = \sum_{i=1}^{d-1}\frac{\varphi(1+ca_n x_i)}{\varphi(1-a_n)}} \to_{n\to\infty}\sum_{i=1}^{d-1}(\!-cx_i)^p$$

by condition (C0), since $\varphi(1-a_n)=1/n$. Collecting terms, the limit in (3.1) therefore equals $\frac{\varphi''(u)}{\varphi'(u)^2}\,c^p\sum_{i=1}^{d-1}(\!-x_i)^p = \sum_{i=1}^{d-1}(\!-x_i)^p$ by the choice of c, which is the assertion. □

Remark 3.1. The preceding result shows tail independence of $H_{j,u}$ , as the limiting df is the product of its margins.

Lemma 3.2 implies, moreover, that the reverse implication in the previous result also holds, i.e. if $H_{j,u}$ is in the domain of attraction of a multivariate max-stable df G with negative Weibull margins having parameter at least one, then condition (C0) is satisfied by Lemma 3.2, and G has, by the preceding result, identical independent margins.

Finally, by the preceding arguments we have $H_{j,u}\in\mathcal D(G)$ , where G has negative Weibull margins, if and only if just one univariate margin of $H_{j,u}$ is in the domain of attraction of a univariate extreme value distribution, and in this case G has identical and independent margins.
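
As an illustration of the normalizing constants in Theorem 3.1 (a worked computation added here), consider the Gumbel–Hougaard generator $\varphi(t)=(\!-\log(t))^\vartheta$, $\vartheta\ge 1$, which satisfies condition (C0) with $p=\vartheta$; these are the constants used in the simulation study in Section 5. We have

$$\varphi'(t) = -\frac{\vartheta(\!-\log(t))^{\vartheta-1}}{t}, \qquad \varphi''(t) = \frac{\vartheta\big[(\vartheta-1)(\!-\log(t))^{\vartheta-2}+(\!-\log(t))^{\vartheta-1}\big]}{t^2},$$

and therefore

$$c = \bigg(\frac{\varphi'(u)^2}{\varphi''(u)}\bigg)^{1/\vartheta} = \bigg(\frac{\vartheta(\!-\log(u))^{\vartheta}}{\vartheta-1-\log(u)}\bigg)^{1/\vartheta}, \qquad a_n = 1-\varphi^{-1}(1/n) = 1-\exp\!\big(\!-n^{-1/\vartheta}\big).$$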

Remark 3.2. As suggested by one of the reviewers, Theorem 3.1 enables the simulation of an Archimedean copula from an extreme area. In particular, from equation (3.1) we have the approximation

\begin{equation*} (1-H_{d,u}({\textbf{1}}+ca_n\textbf{x}))\approx \sum_{i=1}^{d-1}(\!-x_i/n^{1/p})^p,\quad n\to\infty. \end{equation*}

A random vector of dimension $d-1$ that has the survival probability on the right-hand side above, together with an independent rv uniformly distributed on (0,1), then provides a simulation of an Archimedean copula in its extreme region. Whether this is an efficient simulation method, however, requires further work.
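
For a numerical illustration of the limit (3.1) underlying this approximation (a sketch added here; the Gumbel–Hougaard generator and the values of $\vartheta$, u, n and $\textbf{x}$ are arbitrary choices), one can evaluate $n(1-H_{d,u}({\textbf{1}}+ca_n\textbf{x}))$ directly from the formula in Lemma 3.1 and compare it with $\sum_{i=1}^{d-1}(\!-x_i)^p$:

```python
import numpy as np

theta = 2.0                                       # Gumbel-Hougaard generator, p = theta
phi = lambda t: (-np.log(t)) ** theta
phi_inv = lambda s: np.exp(-s ** (1.0 / theta))
dphi = lambda t: -theta * (-np.log(t)) ** (theta - 1.0) / t
d2phi = lambda t: theta * ((theta - 1.0) * (-np.log(t)) ** (theta - 2.0)
                           + (-np.log(t)) ** (theta - 1.0)) / t ** 2

def H_cond(us, u):
    """Conditional df of Lemma 3.1 for j = d."""
    return dphi(u) / dphi(phi_inv(phi(u) + np.sum(phi(us))))

u, n = 0.9, 10 ** 6
x = np.array([-1.0, -0.5])                        # the d-1 = 2 remaining components, x <= 0
c = (dphi(u) ** 2 / d2phi(u)) ** (1.0 / theta)    # constants of Theorem 3.1
a_n = 1.0 - phi_inv(1.0 / n)

lhs = n * (1.0 - H_cond(1.0 + c * a_n * x, u))
rhs = np.sum((-x) ** theta)
print(lhs, rhs)                                   # lhs is approximately rhs = 1.25
```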

4. Archimax copulas

Let $\varphi\,:\,(0,1]\to[0,\infty)$ be the generator of an Archimedean copula $C_\varphi(\textbf{U})=\varphi^{-1}(\sum_{i=1}^d\varphi(u_i))$ , $\textbf{U}=(u_1,\dots,u_d)\in(0,1]^d$ , and let ${\Vert\cdot\Vert}_D$ be an arbitrary D-norm. Put

(4.1) $$\begin{equation} C(\textbf{U})\,:\!=\varphi^{-1}({\Vert(\varphi(u_1),\dots,\varphi(u_d))\Vert}_D),\qquad \textbf{U}\in(0,1]^d. \label{eqn5} \end{equation}$$

It was established by Charpentier et al. [1] that C actually defines a copula on ${\mathbb{R}}^d$, called an Archimax copula. Choosing ${\Vert\cdot\Vert}_D={\Vert\cdot\Vert}_1$ yields $C(\textbf{U})=C_\varphi(\textbf{U})$, and thus the concept of Archimax copulas generalizes that of Archimedean copulas considerably.

To also include the generator family $\varphi_p(s)=(1-s)^p$ , $s\in[0,1]$ , $p\ge 1$ , we require the representation of C in equation (4.1) only for $\textbf{U}\in[\textbf{U}_0,{\textbf{1}}]\subset(0,1]^d$ . There actually exists an rv whose copula satisfies

$$\begin{equation*} C(\textbf{U})=\varphi^{-1}({\Vert(\varphi(u_1),\dots,\varphi(u_d))\Vert}_p),\qquad \textbf{U}\in[\textbf{U}_0,{\textbf{1}}] , \end{equation*}$$

with some $\textbf{U}_0\in(0,1)^d$. This follows from the fact that ${\Vert({\vert x_1\vert}^p,\dots,{\vert x_d\vert}^p)\Vert}_D^{1/p}$ is again a D-norm for an arbitrary D-norm ${\Vert\cdot\Vert}_D$ and $p\ge 1$; see Proposition 2.6.1 and equations (2.14) and (2.15) in [4].

An Archimax copula is in the domain of attraction of a multivariate extreme value distribution if the generator satisfies condition (C0). Precisely, we have the following result.

Proposition 4.1. Suppose the generator $\varphi$ satisfies condition (C0). Then the corresponding Archimax copula C with arbitrary D-norm ${\Vert\cdot\Vert}_D$ satisfies $C\in\mathcal D(G)$ , where $G(\textbf{x})=\exp(\!-{\Vert({\vert x_1\vert}^p,\dots,{\vert x_d\vert}^p)\Vert}_D^{1/p})$ , $\textbf{x}\le{\textbf{0}}\in{\mathbb{R}}^d$ .

Proof. We have, for $\textbf{x}=(x_1,\dots,x_d)\le{\textbf{0}}\in{\mathbb{R}}^d$ ,

$$\begin{align*}&n\Big[1-\varphi^{-1}\Big({\Big\Vert\Big(\varphi\Big(1+\frac{x_1}n\Big),\dots,\varphi\Big(1+\frac{x_d}n \Big) \Big)\Big\Vert}_D \Big) \Big]\\&= {\frac{1-\varphi^{-1}\Bigg(\varphi\Big(1-\frac 1n \Big){\Bigg\Vert\Bigg(\frac{\varphi\big(1+\frac{x_1}n \big)}{\varphi\big(1-\frac 1n \big)},\dots, \frac{\varphi\big(1+\frac{x_d}n \big)}{\varphi\big(1-\frac 1n\big)} \Bigg)\Bigg\Vert}_D \Bigg)} {1-\varphi^{-1}\Big(\varphi\Big(1-\frac 1n\Big) \Big)}}\\&\to_{n\to\infty}{\Vert({\vert x_1\vert}^p,\dots,{\vert x_d\vert}^p )\Vert}_D^{1/p}\end{align*}$$

by condition (C0) and equation (2.1). Repeating the arguments in the proof of Proposition 2.1 yields the assertion. □

Let the rv $\textbf{U}=(U_1,\dots,U_d)$ follow an Archimax copula with generator function $\varphi$ and D-norm ${\Vert\cdot\Vert}_D$ . Does it also have independent tails, conditional on one of its components? We give a partial answer to this question.

Suppose the underlying ${\Vert\cdot\Vert}_D$ is a logistic one, ${\Vert\cdot\Vert}_q$ , with $q\ge 1$ . Then

$$\begin{align*} \varphi^{-1}({\Vert(\varphi(u_1),\dots,\varphi(u_d))\Vert}_q) &= \varphi^{-1}\Bigg(\Bigg(\sum_{i=1}^d\varphi(u_i)^q\Bigg)^{1/q}\Bigg)\\ &= \psi^{-1}\Bigg(\sum_{i=1}^d\psi(u_i) \Bigg), \end{align*}$$

where

$$\psi(s)\,:\!=\varphi(s)^q,\qquad s\in[0,1].$$

If the generator $\varphi$ satisfies condition (C0), then the generator $\psi$ clearly satisfies condition (C0) as well:

$$\lim_{s\downarrow 0}\frac{\psi(1-sx)}{\psi(1-s)} = x^{pq},\qquad x >0.$$

If $\varphi$ satisfies the differentiability conditions in Lemma 3.2 then the conclusion of Theorem 3.1 applies, i.e. with the choice ${\Vert\cdot\Vert}_D={\Vert\cdot\Vert}_q$ , $q\ge 1$ , the rv $\textbf{U}$ again has independent tails, conditional on one of its components.

On the other hand, set $\textbf{U}=(U,\dots,U)$ , where U is an rv that follows the uniform distribution on (0,1). Choose ${\Vert\cdot\Vert}_D={\Vert\cdot\Vert}_\infty$ with ${\Vert\textbf{x}\Vert}_\infty=\max_{1\le i\le d}({\vert x_i\vert})$ . Then we have, for every function $\varphi\,:\,(0,1]\to[0,\infty)$ that is continuous and strictly decreasing,

$$\begin{align*} C(\textbf{U})&=\mathrm{P}(U\le u_1,\dots,U\le u_d)\\ &=\min_{1\le i\le d}u_i\\ &=\varphi^{-1}({\Vert(\varphi(u_1),\dots,\varphi(u_d))\Vert}_\infty),\qquad \textbf{U}\in (0,1]^d. \end{align*}$$

The copula C is, therefore, an Archimax copula, but it has completely dependent conditional margins.

5. Simulation study

We illustrate through a simulation study the conditional tail independence of the Gumbel–Hougaard family of Archimedean copulas in dimension $d>2$ with dependence parameter $\vartheta>1$. The latter condition on $\vartheta$ implies that the copula's tails are asymptotically dependent.

There are several statistical tests to verify whether the tails of a multivariate distribution are asymptotically independent, provided that the latter is in the domain of attraction of a multivariate extreme value df. In the bivariate case such tests have been suggested by Draisma et al. [3], Hüsler and Li [8], and Falk et al. [5, Chapter 6.5]. However, extending them to dimensions higher than two is not straightforward. We therefore rely on the hypothesis test proposed by Guillou et al. [7], which is based on the componentwise maximum approach and applies in arbitrary dimension $d\geq 2$.

The test is based on a system of hypotheses in which the null hypothesis is that $A(\textbf{t})=1$ for all $\textbf{t}\in{\mathcal S}_d$, i.e. the tails are asymptotically independent, while the alternative hypothesis is that $A(\textbf{t})<1$ for at least one $\textbf{t}\in{\mathcal S}_d$, i.e. some tails are asymptotically dependent. Here, A is the Pickands dependence function and ${\mathcal S}_d$ is the d-dimensional unit simplex (see, e.g., [5, Chapter 4]). Guillou et al. [7] proposed using the test statistic $\widehat{S}_n=\sup_{\textbf{t}\in{\mathcal S}_d}\sqrt{n}|\widehat{A}_n(\textbf{t})-1|$ to decide whether or not to reject the null hypothesis, where $\widehat{A}_n$ is an appropriate estimator of the Pickands dependence function and n is the sample size of the componentwise maxima. Under the null hypothesis the test statistic converges in distribution to a suitable random variable S. Large values of the observed test statistic provide evidence against the null hypothesis, and the quantiles of the distribution of S used for rejecting (or not) the null hypothesis are reported in Table 1 of their article.

Table 1. Rejection rate (in percentage) of the null hypothesis (asymptotically independent tails) based on $M=1000$ simulations.

We performed the following simulation experiment. In the first step we simulate a sample of size $n=110\,000$ of independent observations from a Gumbel–Hougaard copula with $d=3$ and $\vartheta=3$. Then we compute the vector of normalized componentwise maxima $m_{n,j}=\max_{i=1,\ldots,n}(u_{i,j}-b_{n,j})/a_{n,j}$ with $a_{n,j}=1/n$, $b_{n,j}=1$, and $j=1,\ldots,d$. In the second step, for $u=0.99$ and $\varepsilon=0.0005$ we select the observations $(u_{i,1},\ldots,u_{i,j-1},u_{i,j+1}, \ldots, u_{i,d})$ such that $u_{i,j}\in[u-\varepsilon,u+\varepsilon]$, $i=1,\ldots,n$. To work with a sample of fixed size we consider only $k=1000$ such observations. Then we compute the vector of normalized componentwise maxima $m^{*}_{k,s}=\max_{i=1,\ldots,k} (u_{i,s}-1)/(c\, a_{k,s})$, where $c= (\varphi'(u)^2/\varphi''(u))^{1/\vartheta}$ and $a_{k,s}\,:\!=1-\varphi^{-1}(1/k)$ with $\varphi(t)\,:\!=(\!-\log(t))^\vartheta$ and $s=1,\ldots,j-1,j+1,\ldots,d$. We repeat the first and second steps $N=100$ times, obtaining two samples of componentwise maxima, one from the d-dimensional copula and one from the corresponding $(d-1)$-dimensional conditional distribution.
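
The following Python sketch is a minimal reimplementation of these two sampling steps (not the authors' code; it assumes the Marshall–Olkin frailty representation of the Gumbel–Hougaard copula with Kanter's formula for the positive stable frailty, and the test of Guillou et al. [7] is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

def rgumbel_copula(n, d, theta, rng):
    """Sample a d-variate Gumbel-Hougaard copula via the Marshall-Olkin frailty
    construction: V is positive (1/theta)-stable (Kanter's representation),
    E_1,...,E_d are iid standard exponential, and U_i = exp(-(E_i/V)^(1/theta))."""
    alpha = 1.0 / theta
    w = rng.uniform(0.0, np.pi, size=n)
    e = rng.exponential(size=n)
    v = (np.sin(alpha * w) / np.sin(w) ** (1.0 / alpha)
         * (np.sin((1.0 - alpha) * w) / e) ** ((1.0 - alpha) / alpha))
    exps = rng.exponential(size=(n, d))
    return np.exp(-(exps / v[:, None]) ** alpha)

theta, d, j = 3.0, 3, 2           # dependence parameter, dimension, conditioning margin (0-based)
n, u, eps = 110_000, 0.99, 0.005  # sample size, conditioning level, window half-width (choices for this sketch)

U = rgumbel_copula(n, d, theta, rng)

# First step: componentwise maxima, normalized for uniform margins (b_n = 1, a_n = 1/n).
m_uncond = n * (U.max(axis=0) - 1.0)

# Second step: keep the other components of the observations with U_j in [u - eps, u + eps]
# and normalize with the constants of Theorem 3.1 for phi(t) = (-log t)^theta (p = theta).
rest = np.delete(U[np.abs(U[:, j] - u) <= eps], j, axis=1)
k = len(rest)

phi_inv = lambda s: np.exp(-s ** (1.0 / theta))
dphi = lambda t: -theta * (-np.log(t)) ** (theta - 1.0) / t
d2phi = lambda t: theta * ((theta - 1.0) * (-np.log(t)) ** (theta - 2.0)
                           + (-np.log(t)) ** (theta - 1.0)) / t ** 2
c = (dphi(u) ** 2 / d2phi(u)) ** (1.0 / theta)
a_k = 1.0 - phi_inv(1.0 / k)

m_cond = (rest.max(axis=0) - 1.0) / (c * a_k)
print(k, m_uncond, m_cond)
```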

The top-left and top-right panels of Figure 1 display an example of maxima obtained from the Gumbel–Hougaard copula and the associated estimate of the Pickands dependence function, respectively. A strong dependence among the variables is evident. To see this more clearly, the middle panels report the maxima of a pair of the variables and the corresponding estimate of the Pickands dependence function. Indeed, the latter is close to the lower bound $\max(1-t, t)$, i.e. the case of complete dependence.

Figure 1: The top-left panel displays the maxima obtained from the data simulated from a trivariate Gumbel–Hougaard copula with $\vartheta=4$. The middle-left panel shows the maxima corresponding to two of the components. Finally, the bottom-left panel shows the maxima obtained from the simulated data when one component is set to a high value. The right column reports the corresponding estimates of the Pickands dependence function.

The bottom panels of Figure 1 display the maxima obtained in the second step of the simulation experiment and the associated estimate of the Pickands dependence function. These maxima, in contrast to the previous ones, appear to be independent, and indeed the estimated Pickands dependence function is close to the upper bound (i.e. the case of independence).

Then we applied the above hypothesis test to the samples of maxima obtained in the first and second steps of the simulation experiment, leading to observed values of the test statistic of $3.843$ and $0.348$, respectively. Since the $0.95$-quantiles of the distribution of S are $1.300$ and $0.960$ for $d=3$ and $d=2$, respectively (see [7]), we reject the hypothesis of tail independence with the first sample of maxima, whereas we do not reject it with the second sample. These results are consistent with our theoretical findings.

We repeated this simulation experiment $M=1000$ times and, with the maxima obtained in the second step of the experiment, computed the rejection rate of the null hypothesis. Since we simulated data under the null hypothesis, we expect the rejection rate to be close to the nominal type I error, i.e. 5%. We did this for different dimensions d and values of the parameter $\vartheta$. The results are collected in Table 1. Again, the simulation results support our theoretical findings.

Notice that the rejection rates in Table 1 are sensitive to the level of unconditional copula dependence and dimension, as expected.

Acknowledgements

The authors are indebted to the Associate Editor and two anonymous reviewers for their careful reading of the manuscript and their constructive remarks.

References

[1] Charpentier, A., Fougères, A., Genest, C. and Nešlehová, J. (2014). Multivariate Archimax copulas. J. Multivariate Anal. 126, 118–136.
[2] Charpentier, A. and Segers, J. (2009). Tails of multivariate Archimedean copulas. J. Multivariate Anal. 100, 1521–1537.
[3] Draisma, G. et al. (2004). Bivariate tail estimation: dependence in asymptotic independence. Bernoulli 10, 251–280.
[4] Falk, M. (2019). Multivariate Extreme Value Theory and D-Norms. Springer, New York.
[5] Falk, M., Hüsler, J. and Reiss, R.-D. (2011). Laws of Small Numbers: Extremes and Rare Events, 3rd edn. Birkhäuser, Basel.
[6] Galambos, J. (1987). The Asymptotic Theory of Extreme Order Statistics, 2nd edn. Krieger, Malabar.
[7] Guillou, A., Padoan, S. A. and Rizzelli, S. (2018). Inference for asymptotically independent samples of extremes. J. Multivariate Anal. 167, 114–135.
[8] Hüsler, J. and Li, D. (2009). Testing asymptotic independence in bivariate extremes. J. Statist. Planning Infer. 139, 990–998.
[9] McNeil, A. J. and Nešlehová, J. (2009). Multivariate Archimedean copulas, d-monotone functions and $\ell_1$-norm symmetric distributions. Ann. Statist. 37, 3059–3097.
[10] Nelsen, R. B. (2006). An Introduction to Copulas, 2nd edn. Springer Series in Statistics. Springer, New York.
[11] Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Pub. Inst. Stat. Univ. Paris 8, 229–231.
[12] Sklar, A. (1996). Random variables, distribution functions, and copulas – a personal look backward and forward. In Distributions with Fixed Marginals and Related Topics, eds L. Rüschendorf, B. Schweizer and M. D. Taylor. Lecture Notes – Monograph Series, Vol. 28. Institute of Mathematical Statistics, Hayward, CA, pp. 1–14.