1. Introduction
The aim of this paper is to prove a generalization of the Rényi–Srivastava characterization of the Poisson law [10]; see Theorem 2 below. The generalization we deal with was stated by D. Pierre-Loti-Viaud and P. Boulongne in Theorem III.1 of [9] (in French), which is restated below as Theorem 1. In these two characterizations of the Poisson law, only the necessary condition is non-trivial. We give two distinct, original proofs of Theorem 1, which are the first to be published in English; to date the only available proof of Theorem 1 has been the French one in [9]. A by-product of our new approach is an explicit set of closed formulas for the moments and mixed moments of the so-called compound variables defined by (2)–(3). Such variables are a fundamental tool in probability theory and mathematical statistics, as well as in applications such as reliability theory and non-life insurance. Accordingly, they appear under various names in the literature: stopped-sum distributions in [6, Chapter 9], compound random variables in [5, Chapter 9], and random sums in [3, Example (d) and Exercise 24, Chapter 5].
Our paper is organized as follows. In Section 2 we introduce the random variables (RVs) $\mathbf{N}$, $\mathbf{S}$, and $(\mathbf{S}_r)_{1\leq r\leq R}$ defining the extended collective model in non-life insurance. We show that Theorem 1 can be seen as a generalization of a well-known characterization of the Poisson law, restated for reference as Theorem 2.
In Section 3 we introduce analytic tools, such as the characteristic function and certain generating functions, relevant to a general proof. The proof of Theorem 1 is given at the end of Section 3.
When the random variables under discussion are completely determined by their moments, a second, essentially algebraic, proof of Theorem 1 is available; it is the object of Section 4. The generating functions introduced in Section 3 admit the expansions (20)–(21) involving moments and cumulants. Bell polynomials provide a convenient formalism for expressing the moments of our RVs. For the use of such formal tools in applied probability theory, the reader is referred to [13].
The analytical and algebraic methods we use are familiar to theoreticians but not to all practitioners. We have therefore aimed to keep the paper self-contained, so as to provide applied mathematicians with a survey of the analytical and algebraic methods underlying the objects of their studies. Note that our first proof differs from that in [9] by the use of a wider variety of generating functions, whereas the second, based on moments, is a fundamentally new approach. All the RVs considered in this paper take non-negative values and are assumed to be distinct from the degenerate RV identically equal to zero.
2. The collective model in non-life insurance
For a general introduction we refer to [9, Chapters I–III], [5, Chapter 9], and the numerous references therein. The collective risk model, or aggregate loss model, is an infinite set of RVs consisting of an RV $\mathbf{N}$ and a sequence $(\mathbf{Y}_k)_{k\geq1}$. The RV $\mathbf{N}$ is the claim count random variable. It takes non-negative integer values and we set $\mathbb E\mathbf{N}=m\in(0,\infty]$, $ \mathbb P(\mathbf{N}=k)=\nu_k$, $ k\in\mathbb N.$ Note the important particular case where the count distribution follows a Poisson law with expectation $m\in(0,\infty)$, in other words
For a positive integer k, $\mathbf{Y}_k$ is the kth individual-loss random variable. We assume that (i) the RVs $\mathbf{N}, \mathbf{Y}_1,\mathbf{Y}_2,\ldots$ are independent (the hypothesis of independence between claim frequency and claim costs), and (ii) the RVs $\mathbf{Y}_1,\mathbf{Y}_2,\ldots$ are identically distributed (the hypothesis of stationarity of losses). Their common distribution is that of an RV denoted $\mathbf{Y}$.
The aggregate losses RV is the sum $\mathbf{S}=\sum_{k=1}^{\mathbf{N}}\mathbf{Y}_k.$ A refinement of the collective model is provided by the extended collective risk model; see [9, Definition III.1]. In this model each loss $\mathbf{Y}_k$ is allotted to a specific class of risks denoted by an integer $r\in\{1,\ldots,R\}$. Henceforth r and r′ will always denote two distinct elements of the set $\{1,\ldots,R\}$ with $R\geq2$. Two classes denoted by distinct integers do not overlap. Mathematically this split of the sequence $(\mathbf{Y}_k)_{k\geq 1}$ can be modeled by a Bernoulli scheme, i.e. a finite set of random variables
and the probabilities $\ \mathbb P(\mathbf{B}_r=1)=p_r>0$ with $\sum_{r=1}^Rp_r=1$. We introduce a sequence
of independent copies of $(\mathbf{B}_r)_{1\leq r\leq R}$, our last assumption in the model being that, for every triple $(r,k,\ell)$, the RVs $\mathbf{N},\mathbf{Y}_k,\mathbf{B}_{r,\ell}$ are independent.
Let us now define, for $k=1,2,\ldots,$ the RVs
Then the number of losses allotted to the rth class of risks, say $\mathbf{N}_r$, and the corresponding aggregate losses, say $\mathbf{S}_r$, are given by
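To fix ideas, here is a minimal simulation sketch of the extended collective risk model just described. The Poisson count with mean m, the exponential individual losses, and the class probabilities are illustrative assumptions only, not part of the model; the code relies solely on numpy.

import numpy as np

rng = np.random.default_rng(0)

def simulate_extended_model(m=5.0, p=(0.3, 0.7), n_scenarios=100_000):
    """Simulate (S, S_1, ..., S_R) for the extended collective risk model.

    Illustrative choices: N ~ Poisson(m), Y ~ Exponential(1); the class of
    each loss is drawn independently with probabilities p_1, ..., p_R.
    """
    R = len(p)
    S = np.zeros(n_scenarios)
    S_r = np.zeros((n_scenarios, R))
    N = rng.poisson(m, size=n_scenarios)          # claim counts
    for i, n in enumerate(N):
        Y = rng.exponential(1.0, size=n)          # individual losses Y_1, ..., Y_N
        B = rng.choice(R, size=n, p=p)            # Bernoulli scheme: class of each loss
        S[i] = Y.sum()                            # aggregate losses S
        for r in range(R):
            S_r[i, r] = Y[B == r].sum()           # losses allotted to class r
    return S, S_r

S, S_r = simulate_extended_model()
# Sanity check of the decomposition S = S_1 + ... + S_R stated in Theorem 1 below.
assert np.allclose(S, S_r.sum(axis=1))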
We are now in a position to state the following theorem.
Theorem 1. ([9].) The random variables of the extended collective risk model satisfy the equality
and enjoy the following properties.
(i) The RVs $\mathbf{S}_r,r=1,\ldots,R$, are mutually independent if and only if $\mathbf{N}$ follows a Poisson law.
(ii) The RVs $\mathbf{S}_r$ and $\mathbf{S}_{r^{\prime}}$ are identically distributed if and only if $p_{r}=p_{r^{\prime}}$.
Note that if $\mathbf{Y}$ is a deterministic variable equal to the constant 1, then $\mathbf{S}_r=\mathbf{N}_r$ corresponds to a binomial split of the counting variable $\mathbf{N}$, and assertion (i) of the above theorem can be rewritten as follows in the case $R=2$.
Theorem 2. ([10, 12].) With the above notation we have $\mathbf{N}=\mathbf{N}_1+\mathbf{N}_2$ and the variables $\mathbf{N}_1$ and $\mathbf{N}_2$ are mutually independent if and only if $\mathbf{N}$ follows a Poisson distribution.
This remarkable characterization of the Poisson distribution was stated in the case $R=2$ by Srivastava [12, Theorem 2], who presented this result as a corollary of a result proved by Rényi [10, Theorems 1 and 2]. For various characterizations of the Poisson law, see [6, §4.8], particularly the reference to [12] on page 182.
Thus Theorem 1 can be seen as a generalization of the Rényi–Srivastava characterization of the Poisson law.
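The following numerical sketch illustrates the characterization; the parameter values and the two count distributions are arbitrary illustrative choices. When $\mathbf{N}$ is Poisson the split counts $\mathbf{N}_1$ and $\mathbf{N}_2$ are empirically uncorrelated (indeed independent), whereas an over-dispersed count such as a shifted geometric law produces visibly correlated split counts, so they cannot be independent.

import numpy as np

rng = np.random.default_rng(1)

def split_counts(N, p, rng):
    """Binomial split of the counts N: each of the N events is allotted
    to class 1 with probability p, to class 2 otherwise."""
    N1 = rng.binomial(N, p)
    return N1, N - N1

n, m, p = 500_000, 4.0, 0.35

# Poisson counts: the two split counts are (empirically) uncorrelated.
N = rng.poisson(m, size=n)
N1, N2 = split_counts(N, p, rng)
print("Poisson counts:   cov(N1, N2) ~", np.cov(N1, N2)[0, 1])

# Shifted geometric counts (over-dispersed): the split counts are correlated,
# hence not independent.
N = rng.geometric(1.0 / (1.0 + m), size=n) - 1   # support {0, 1, 2, ...}, mean m
N1, N2 = split_counts(N, p, rng)
print("Geometric counts: cov(N1, N2) ~", np.cov(N1, N2)[0, 1])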
3. General proof
Let us introduce some of the functions characterizing a random vector $(\mathbf{X}_1,\ldots,\mathbf{X}_n)$ (with $n=1,2,\ldots,$ and without parentheses if $n=1$). As well as the characteristic function (see [6, (1.264–265)] and [7, (34.8)]), defined for $t_1,\ldots,t_n\in\mathbb R$ by
we will make use of the (uncorrected) moment generating functions (see [6, (1.227)] and [7, (34.9)]), and the (descending) factorial moment and factorial cumulant generating functions (see [6, (1.249), (1.256), and (1.274)]), denoted and defined by
for $t_1,\ldots,t_n\leq 0$, and
for $-1<t\leq 0$. For the sake of notational simplicity we will also use the notations $\phi_{\mathbf{X}}=\Phi_{\mathbf{X}}-1$ and $m_{\mathbf{X}}=M_{\mathbf{X}}-1$.
These functions are easily seen to be well defined and to take finite values on the given domains of definition. Furthermore, $M_{\mathbf{X}}$ and $m_{\mathbf{X}}$ are increasing with respect to each variable, and for an integer-valued random variable $\mathbf{N}$ the following equalities hold: $M_{\mathbf{N}}((-\infty,0])=(\nu_0,1]$, $M_{[\mathbf{N}]}({\text{e}}^t-1)=M_{\mathbf{N}}(t)$ and
A first result is as follows.
Proposition 1. The characteristic and moment generating functions of $(\mathbf{S}_1,\ldots,\mathbf{S}_R)$, $\mathbf{S}$, and $\mathbf{S}_r$ are given by
Proof. On the one hand, the formula for conditional expectation (see [3, (10.6)]) enables us to write
keeping in mind that $\mathbf{N}$ and $\mathbf{Y}_{r,k}$ are independent. On the other hand, in view of the defining properties (1), it is easily checked that
and by taking the expectation of both sides we obtain
The proof for $M_{(\mathbf{S}_1,\ldots,\mathbf{S}_R)}$ is similar, replacing ${\text{i}} t_r$ with $t_r$. The last four equalities are one-variable versions of the first two, with $p_r=1$ for $\mathbf{S}$.
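The compounding identity $M_{\mathbf{S}}=M_{[\mathbf{N}]}\circ m_{\mathbf{Y}}$ in (9) is easy to check numerically. The following Monte Carlo sketch does so under illustrative assumptions only: $\mathbf{N}$ Poisson with mean m, so that $M_{[\mathbf{N}]}(u)=\exp(mu)$, and standard exponential losses, for which $m_{\mathbf{Y}}(t)=t/(1-t)$ for $t<0$.

import numpy as np

rng = np.random.default_rng(2)

m, t, n = 3.0, -0.5, 200_000            # t < 0 keeps every expectation finite

# Simulate S = Y_1 + ... + Y_N with N ~ Poisson(m) and Y ~ Exponential(1).
N = rng.poisson(m, size=n)
S = np.array([rng.exponential(1.0, size=k).sum() for k in N])

mgf_S_empirical = np.exp(t * S).mean()

# Right-hand side of (9): M_{[N]}(m_Y(t)) with M_{[N]}(u) = exp(m u) for the
# Poisson law and m_Y(t) = M_Y(t) - 1 = t / (1 - t) for the exponential law.
mgf_S_formula = np.exp(m * (t / (1.0 - t)))

print(mgf_S_empirical, mgf_S_formula)   # the two values should be close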
Let us see in which way $M_{\mathbf{X}}$ characterizes a non-negative random variable $\mathbf{X}$.
Lemma 1. If $M_{\mathbf{X}}$ and $M_{\mathbf{X}^{\prime}}$ coincide over a non-empty set $S\subseteq(-\infty,0)$ having an accumulation point $s_0\in S$, then the two non-negative random variables $\mathbf{X}$ and $\mathbf{X}^{\prime}$ are identically distributed.
Proof. Introduce the distribution function $F_{\mathbf{X}}(t)= \mathbb P(\mathbf{X}\leq t)$, $t\geq 0$. The Laplace transform of $\mathbf{X}$ is given by the Stieltjes integral
where $z=x+{\text{i}} y$ is a complex variable with real and imaginary parts $x={\text{Re}}(z)$ and $y={\text{Im}}(z)$ respectively ([14, (1)]). It is clear that $\Lambda_{\mathbf{X}}(z)$ exists over the half-plane $H^+=\{z\colon {\text{Re}}(z)>0\}$. In fact it defines an analytic function over $H^+$ ([14, Theorem 5a]). Now, using analogous notation for $\mathbf{X}^{\prime}$, our assumption implies, for each $t\in S$,
We have shown that the two functions $\Lambda_{\mathbf{X}^\prime}$ and $\Lambda_{\mathbf{X}}$ are analytic over $H^+$ and coincide over $-S$. This means that $\Lambda_{\mathbf{X}}-\Lambda_{\mathbf{X}^\prime}$ is an analytic function for which $-s_0$ is a non-isolated zero. Thus $\Lambda_{\mathbf{X}}-\Lambda_{\mathbf{X}^\prime}$ is the zero function over $H^+$ (see [2, Theorem 5.1]). By uniqueness of the Laplace transform ([14, Theorem 6.3]), we have $F_{\mathbf{X}}=F_{\mathbf{X}^\prime}$.
Let us now recall two characterizations of the Poisson law, noting that (13) expresses the linearity of the factorial cumulant generating function.
Lemma 2. Assume $m>0$. The following propositions are equivalent.
(i) The RV $\mathbf{N}$ follows a Poisson distribution with expectation m.
(ii) For each $t\in\mathbb R$ we have
\begin{align} M_{[\mathbf{N}]}(t) & =\exp{(mt)}, \tag{12}\\ K_{[\mathbf{N}]}(t) & =mt. \tag{13}\end{align}
(iii) There exists a non-empty open interval $I\subset(-\infty,0)$ such that
\begin{align} M_{[\mathbf{N}]}(t) & =\exp{(mt)}\quad \text{for all } t\in I, \tag{14}\\ K_{[\mathbf{N}]}(t) & =mt\quad \text{for all } t\in I. \tag{15}\end{align}
Proof. From [6, §4.3] we know that (i) implies (ii), and perforce (iii). Next (iii) implies (i) by Lemma 1 with $S=I$.
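For completeness, (12) can also be verified in one line from the Poisson probabilities, using the standard identity $M_{[\mathbf{N}]}(t)=\mathbb E(1+t)^{\mathbf{N}}$ (which is consistent with the relation $M_{[\mathbf{N}]}({\text{e}}^t-1)=M_{\mathbf{N}}(t)$ recalled above):
\begin{equation*} M_{[\mathbf{N}]}(t)=\sum_{k\geq0}{\text{e}}^{-m}\frac{m^k}{k!}(1+t)^{k}={\text{e}}^{-m}{\text{e}}^{m(1+t)}={\text{e}}^{mt}, \qquad K_{[\mathbf{N}]}(t)=\log M_{[\mathbf{N}]}(t)=mt. \end{equation*}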
The following lemma is elementary but deserves to be stated explicitly as a key step in our general proof.
Lemma 3. If the $(\mathbf{S}_r)_{1\leq r\leq R}$ are mutually independent, then for any $r\neq r^{\prime}$ we have
Therefore, if we set
the function $f(x)=-K_{[\mathbf{N}]}(-x)$ is non-negative, continuously differentiable, and satisfies relations
Proof. Independence implies
Then choose $(t_1,\ldots,t_R)$ with all coordinates equal to zero except $t_r=t_{r^{\prime}}=t$, write (7), and take the logarithm to obtain (16). Next, (17) is simply a reformulation of the appropriate identities in (4)–(5), and (18) is (16) with $x=-(p_r+p_{r^{\prime}})m_{\mathbf{Y}}(t)$. The non-negativity and continuous differentiability of f follow from those of $K_{[\mathbf{N}]}$.
A converse of the above lemma is provided by the following one, which, though elementary, is likewise a key step in our proof of Theorem 1 (i).
Lemma 4. Let $x_0>0$, $m\in\mathbb R$ and let $f\colon [0,x_0)\to [0,\infty)$ be a continuously differentiable function satisfying (17)–(18). Then f is the linear function $f(x)=mx.$
Proof. Without loss of generality we can assume $p\leq 1/2$ so that $q=1-p\geq p$. From (18) and by a straightforward induction we see that for every positive integer n
By continuity of f′, for every positive integer n, the numbers $a_n=\min_{x\in[p^n,q^n]} f^{\prime}(x)$ and $b_n=\max_{x\in[p^n,q^n]} f^{\prime}(x)$ are well-defined, finite, and satisfy $\lim a_n=\lim b_n=f^{\prime}(0)=m$. The mean value theorem implies, for every n, $a_np^kq^{n-k}s\leq f(p^kq^{n-k}s)\leq b_np^kq^{n-k}s.$ By summing the latter inequalities over k and in view of (19), we obtain
These inequalities lead, as n tends to infinity, to $f(s)=ms$, which proves the desired result.
Proof of Theorem 1(i). If the random variables $\mathbf{S}_r$ and $\mathbf{S}_{r^{\prime}}$ are independent, then we know from Lemmas 3 and 4 that (15) holds with $I=(-x_0,0)$, so, by Lemma 2, $\mathbf{N}$ is a Poisson random variable. Conversely, if $\mathbf{N}$ follows a Poisson law then (12) holds for $t\in\mathbb R$, so relations (6) and (10) in Proposition 1 imply $\Phi_{(\mathbf{S}_1,\ldots,\mathbf{S}_R)}(t_1,\ldots,t_R)=\prod_{r=1}^R\Phi_{\mathbf{S}_r}(t_r)$; in other words $\mathbf{S}_1,\ldots,\mathbf{S}_R$ are mutually independent.
Proof of Theorem 1(ii). We infer from Lemma 1 that the random variables $\mathbf{S}_r$ and $\mathbf{S}_{r^{\prime}}$ are identically distributed if and only if $M_{ \mathbf{S}_r}=M_{\mathbf{S}_{r^{\prime}}}$. In view of (9) and keeping in mind that $M_{[\mathbf{N}]}$ is injective, this is equivalent to $p_rm_{\mathbf{Y}}=p_{r^{\prime}}m_{\mathbf{Y}}$; the latter is equivalent to $p_r=p_{r^{\prime}}$ since $m_{\mathbf{Y}}$ is not the zero function.
4. A proof based on moments and cumulants
In this section we shall prove Theorem 1 by means of moments and cumulants. For $d=1,2,\ldots $ we will let $\mu_d(\mathbf{X})=\mathbb E \mathbf{X}^d $ denote the moment of order d of the RV $\mathbf{X}$. Recall that assuming $\mu_d(\mathbf{X})<\infty$ is equivalent to assuming the finiteness of the factorial moments $\mu_{[d]}(\mathbf{X})=\mathbb E \mathbf{X}(\mathbf{X}-1)\cdots(\mathbf{X}-d+1)$, or of the cumulants $\kappa_d(\mathbf{X})$, or of the factorial cumulants $\kappa_{[d]}(\mathbf{X})$, these coefficients being given, provided relation
holds (see [3, XV.4, (4.15)]), by the expansions
(see [6, §1.2.7], specifically formulas (1.227) for $M_{\mathbf{X}}$, (1.249) for $K_{\mathbf{X}}$, (1.256) for $K_{\mathbf{N}}$, and (1.274) for $M_{[\mathbf{N}]}$). The latter expansions converge in a neighbourhood of the origin.
Let us now introduce basic facts and notation from combinatorics. The first tool we will need from this field consists of the partitions of integers and the associated Young diagrams. Consider two integers $d\geq k\geq 1$. The k-tuple of integers $L=(\ell_1,\ldots,\ell_k)$ is called a k-partition of d provided (see [1, 1a]) $\sum_{i=1}^k\ell_i=d$ and $\ell_1\geq\cdots\geq \ell_k\geq 1.$ The set of k-partitions of d is denoted $\mathcal L_{d,k}$. Any element of the set $\mathcal L_d=\cup_{k=1}^d \mathcal L_{d,k}$ is called a partition of d.
A convenient and visual representation of partitions is provided by Young diagrams, equivalent to Ferrers diagrams discussed in [1, §2.4].
The Young diagram associated with $L=(\ell_1,\ldots,\ell_k)\in\mathcal L_{d,k}$ is a set of d cells arranged in k left justified rows, the ith row consisting of $\ell_i$ cells. For a given partition let $s=\text{Card} \{\ell_1,\ldots,\ell_k \}$ be the number of distinct summands. We will let $\lambda_1=\ell_1>\cdots>\lambda_s=\ell_k\geq 1$ and $\alpha_1,\ldots,\alpha_s\geq 1$ denote the two sequences such that
These coefficients satisfy the equalities
Therefore an alternative notation for the partition $L=(\ell_1,\ldots,\ell_k)$ is $L=[\lambda_1^{\alpha_1},\ldots,\lambda_s^{\alpha_s}]$, where an exponent can be omitted if it equals 1.
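For instance, for $d=4$ we have
\begin{equation*} \mathcal L_4=\{(4),\,(3,1),\,(2,2),\,(2,1,1),\,(1,1,1,1)\}=\{[4],\,[3,1],\,[2^2],\,[2,1^2],\,[1^4]\}; \end{equation*}
the partition $(2,1,1)\in\mathcal L_{4,3}$ corresponds to $s=2$, $\lambda_1=2>\lambda_2=1$, $\alpha_1=1$, and $\alpha_2=2$.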
For $L=(\ell_1,\ldots,\ell_k)=[\lambda_1^{\alpha_1},\ldots,\lambda_s^{\alpha_s}]\in \mathcal L_{d,k}$, let us introduce the coefficients
A convenient tool we will borrow from combinatorics is the family of Bell polynomials. The complete Bell polynomials $\{B_n(X_1,\ldots,X_n)\}_{n\geq1}$, the partial Bell polynomials
(see [1, §III.3, formulas [3a]–[3c]]), and the polynomials $\{b_n(X;\cdot)\}_{n\geq1}$ which we will call associated Bell polynomials (see [11, §4.1.8]), are defined via the following generating functions:
All these polynomials have positive integer-valued coefficients and $B_{n,k}$ is k-homogeneous. By convention we set $b_0=B_0=B_{0,0}={\textbf{1}}$ and $B_{n,0}={\boldsymbol{0}}$ for $n\geq1$. Bell polynomials are related to the Stirling numbers of the first kind by the formula $s(n,k)=(-1)^{n+k}B_{n,k}(0!,1!,\ldots,(n-k)!)$, and to those of the second kind by the formula $S(n,k)=B_{n,k}(1,\ldots,1)$ (see [1, eqs [3g], [3i], and [5d]]). From (20)–(21) and definitions (24)–(25) the moments and cumulants of an RV are related by the equality
this formula amounts to (3.33) of [4], a fundamental reference in which, however, Bell polynomials are not referred to. Henceforth, when the RV is omitted $\mu_d$ will always mean $\mu_d(\mathbf{Y})$, and $\mu_{L}=\mu_{L}(\mathbf{Y})$. A standard expression for partial Bell polynomials (see [1, Theorem A]) is given by
where the summation takes place over the set C(d, k) of sequences $c=(c_1,c_2,\ldots)$ of non-negative integers such that
The following lemma gives, for partial Bell polynomials (27), a more explicit expression (28) in terms of Young diagrams.
Lemma 5. We have
Proof. Each sequence in C(d, k) has a finite number of non-zero terms whose list can be written in a unique way as $c_{\phi(1)},\ldots,c_{\phi(s)}$ with $\phi(1)>\cdots >\phi(s)$. We build a bijection between C(d, k) and $\mathcal L_{d,k}$ by associating c with the Young diagram
for which
and this completes the proof of the claimed result.
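As a sanity check on this formalism, the Stirling-number identities recalled above can be tested with a computer algebra system. The following sketch uses the sympy library (the functions bell and stirling are sympy's, not notation from this paper):

from sympy import bell, factorial, symbols
from sympy.functions.combinatorial.numbers import stirling

n, k = 6, 3
x = symbols(f"x1:{n - k + 2}")            # the symbols x1, ..., x_{n-k+1}

# Partial Bell polynomial B_{n,k}(x_1, ..., x_{n-k+1}).
B_nk = bell(n, k, x)

# Stirling numbers of the second kind: S(n, k) = B_{n,k}(1, ..., 1).
assert B_nk.subs({xi: 1 for xi in x}) == stirling(n, k, kind=2)

# Unsigned Stirling numbers of the first kind:
# |s(n, k)| = B_{n,k}(0!, 1!, ..., (n-k)!).
assert B_nk.subs({xi: factorial(i) for i, xi in enumerate(x)}) == \
       stirling(n, k, kind=1, signed=False)

print(B_nk)                               # displays the polynomial itself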
We are now in a position to state a first important result.
Theorem 3. Consider an integer $d\geq 1$.
(i) The moment of order d of the random variable $\mathbf{S}$ admits the expression
\begin{align} \mu_d(\mathbf{S})&= \sum_{k=1}^d\mu_{[k]} (\mathbf{N}) B_{d,k}(\mu_{1},\ldots,\mu_{{d-k+1}}) \notag \\ &=\sum_{L\in \mathcal L_{d}}\mu_{L}(\mathbf{N})\mu_L(\mathbf{Y}) \notag \\ &=\sum_{k=1}^d\mu_{[k]} (\mathbf{N})\sum_{L\in \mathcal L_{d,k}}c_L\mu_L(\mathbf{Y}), \tag{29}\end{align}
and therefore satisfies the inequality
\begin{equation} \mu_d(\mathbf{S})\leq\mu_d(\mathbf{N})\, \mu_d(\mathbf{Y}), \tag{30}\end{equation}
with equality if and only if $\mu_k(\mathbf{Y})=\mu_1^k(\mathbf{Y})$, $k=1,\ldots,d$.
(ii) The cumulant of order d of the random variable $\mathbf{S}$ admits the expression
\begin{align} \kappa_d(\mathbf{S})&= \sum_{k=1}^d\kappa_{[k]} (\mathbf{N}) B_{d,k}(\mu_{1},\ldots,\mu_{{d-k+1}}) \notag \\ &= \sum_{k=1}^d\kappa_{[k]} (\mathbf{N})\sum_{L\in \mathcal L_{d,k}}c_L\mu_L(\mathbf{Y}). \tag{31}\end{align}
Proof. Use Theorem A (the Faà di Bruno formula) of [1] with the functions f, g, and $h=f\circ g$ given by $f=M_{[\mathbf{N}]}$ and $g=m_{\mathbf{Y}}$, so that from (9) we get $h=M_{\mathbf{S}}$. Then formula [4c] of [1] is exactly (29). The proof is similar for the cumulants.
For inequality (30), let us first use the well-known Lyapunov inequality (see [3, §V.8(c)]): $\mu_\lambda\leq \mu_{d}^{\lambda/d}$ provided $\lambda\leq d$, to get
Note in passing that equality holds if and only if $\mu_\lambda= \mu_{d}^{\lambda/d}$ for $\lambda=1,\ldots,d$, which is equivalent to $\mu_{\lambda}=\mu_1^{\lambda}$ for $\lambda=1,\ldots,d$.
Now, using this inequality we obtain
Lemma 5 with $\mathbf{Y}$ taken to be the constant 1, so that $\mu_L(\mathbf{Y})=1$ for every L, leads to
(see [6, (1.246)] and [1, [2.c]]). When combined with (32), this completes the proof of (30).
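The first expression in (29) lends itself to a direct numerical check. The following Monte Carlo sketch compares both sides under illustrative assumptions only: $\mathbf{N}$ Poisson with mean m, for which $\mu_{[k]}(\mathbf{N})=m^k$, and standard exponential losses, for which $\mu_j(\mathbf{Y})=j!$; the partial Bell polynomials are evaluated with sympy.

import numpy as np
from math import factorial
from sympy import bell, symbols

rng = np.random.default_rng(3)
m, d, n = 3.0, 4, 400_000

# Left-hand side of (29): Monte Carlo estimate of E[S^d] with N ~ Poisson(m)
# and Y ~ Exponential(1) (illustrative choices, not imposed by the model).
N = rng.poisson(m, size=n)
S = np.array([rng.exponential(1.0, size=k).sum() for k in N])
lhs = np.mean(S ** d)

# Right-hand side of (29): sum_k mu_[k](N) B_{d,k}(mu_1, ..., mu_{d-k+1}),
# with mu_[k](N) = m^k for the Poisson law and mu_j(Y) = j! for Exp(1).
rhs = 0.0
for k in range(1, d + 1):
    x = symbols(f"x1:{d - k + 2}")                       # x1, ..., x_{d-k+1}
    B_dk = bell(d, k, x).subs({xi: factorial(j + 1) for j, xi in enumerate(x)})
    rhs += m ** k * float(B_dk)

print(lhs, rhs)   # the two values should be close

With these illustrative values one finds $\mu_4(\mathbf{S})=3\cdot24+9\cdot36+27\cdot12+81\cdot1=801$, which the simulation reproduces up to Monte Carlo error.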
A straightforward consequence of (30), justifying our approach based on moments, is as follows.
Corollary 1. Let $\alpha=1-\beta\in[0,1]$. If $\mathbf{N}$ and $\mathbf{Y}$ are completely determined by their moments due to
then $\mathbf{S}$ and $\mathbf{S}_r$, $r=1,\ldots,R$, are completely determined by their moments.
In the case when $\mathbf{N}$ is a Poisson random variable with expectation $\mu_{[1]}=m$, the other factorial moments are given by
so that in view of (26), another corollary of (29) is the following. Recall that the associated Bell polynomials $b_n$, $n\geq 1,$ were defined above by formulas (24)–(26).
Corollary 2. If $\mathbf{N}$ follows a Poisson law with expectation m, then the moments and cumulants of $\mathbf{S}$ are given by
For the extended collective model the following result holds.
Theorem 4. The moment and the cumulant of order d of the random variable $\mathbf{S}_r$ are given by
Proof. Apply Theorem 3 to the RV $\hat{\mathbf{Y}}_{r}$, i.e. use (29)–(31) with $\mu_d=\mu_d(\hat{\mathbf{Y}}_r)=p_r\mu_d(\mathbf{Y})$, together with the k-homogeneity of $B_{n,k}$.
For a reason similar to that given before Corollary 2, the following result is a straightforward consequence of Theorem 4.
Corollary 3. If $\mathbf{N}$ follows a Poisson law with expectation m, then the moments and cumulants of $\mathbf{S}_r$, $r=1,\ldots,R$, are given by
We are now in a position to prove the necessary condition in Theorem 1 (i).
Proposition 2. If the random variables $\mathbf{S}_{r}$ and $\mathbf{S}_{r^{\prime}}$ are independent then $\mathbf{N}$ follows a Poisson distribution.
Proof. Since $\mathbf{S}_r$ and $\mathbf{S}_{r^{\prime}}$ are independent, their cumulants satisfy $\kappa_d(\mathbf{S}_r+\mathbf{S}_{r^{\prime}})=\kappa_d(\mathbf{S}_r)+\kappa_d(\mathbf{S}_{r^{\prime}}).$ In view of (34), this implies, for every d,
These equalities, due to
can hold if and only if $\kappa_{[k]} (\mathbf{N})=0$ for $k\geq 2$, in other words $K_{[\mathbf{N}]}(t)=\kappa_{[1]}t$, which is one of the characterizations of a Poisson law in Lemma 2.
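To make the mechanism explicit, note that $\mathbf{S}_r+\mathbf{S}_{r^{\prime}}$ is again a compound variable of the same type, obtained by merging the two classes, so that Theorem 4 applies to it with class probability $p_r+p_{r^{\prime}}$. Reading the cumulant formula of Theorem 4 as $\kappa_d(\mathbf{S}_r)=\sum_{k=1}^{d}\kappa_{[k]}(\mathbf{N})\,p_r^{k}\,B_{d,k}(\mu_1,\ldots,\mu_{d-k+1})$ (which follows from (31) and the k-homogeneity of $B_{d,k}$, as in the proof of Theorem 4), the case $d=2$ of the above equalities reads
\begin{equation*} \kappa_{[2]}(\mathbf{N})\big[(p_r+p_{r^{\prime}})^{2}-p_r^{2}-p_{r^{\prime}}^{2}\big]\mu_1^{2}=2\,p_r\,p_{r^{\prime}}\,\mu_1^{2}\,\kappa_{[2]}(\mathbf{N})=0, \end{equation*}
the terms with $k=1$ cancelling; hence $\kappa_{[2]}(\mathbf{N})=0$, and the higher-order factorial cumulants vanish in the same way by induction on d.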
Let us now complete the proof of Theorem 1 (i) by proving the result stated below as Corollary 4.
To this end it suffices to show that, for any $J\in\{2,\ldots,R\}$,
for $r_1<\cdots<r_J\in\{1,\ldots,R\}$ and $d_1,\ldots,d_J\geq 1$. The left-hand side of (36) can be obtained from (34). Let us now set up the notation needed to compute the right-hand side of (36): introduce, for $1\leq k\leq n$, the index sets
In this setting, for $n\geq d$, the multinomial formula can be written as
Consider the elementary and power symmetric polynomials
for $1\leq k\leq n,$ and the random variables
We will first need the following basic identities.
Lemma 6. Assume $1\leq k\leq n$. Then we have
Proof. Equality (38) is a straightforward consequence of $\mathbf{Z}_i^k=\mathbf{Z}_i$ if k and i are positive integers. For (39) we will use some of the well-known properties of Bell polynomials; see e.g. [1], specifically formula [3i], the last equation of §9, and [5d–e]. Let us fix n and use the abbreviation $\pi_k=\pi_k(\mathbf{Z}_1,\ldots,\mathbf{Z}_n)$, so that $\pi_k=\mathbf{N}^{(n)}$ for every $k\geq 1$. Calculations give
and the result is proved.
We are now equipped to compute the mixed moments.
Theorem 5. With the notation above we have
and more generally, for $J\in\{2,\ldots,R\}$,
Proof. We shall only prove (40), the mechanism being easily extended to prove (41). Let $\mathbf{X}_{r,i}=\mathbf{Y}_{r,i}\mathbf{Z}_i$, $i=1,2,\ldots.$ First note that for $(i_1,\ldots,i_k)=I\in{\mathcal{I}}_{d,k}$,
Thus, by using (37), we obtain
On the one hand, if $I\cap I^{\prime}\neq\emptyset$, then
because for each k, $\mathbf{Y}_{r,k}\mathbf{Y}_{r^{\prime},k}=0$. Thus
The claimed result is obtained as n tends to infinity by the dominated convergence theorem and the increasing convergence $0\leq \mathbf{S}_{r,n}^d\mathbf{S}_{r^{\prime},n}^{d^{\prime}}\nearrow \mathbf{S}_{r}^d\mathbf{S}_{r^{\prime}}^{d^{\prime}}$.
The sufficient condition for Theorem 1 (i) is now within our reach.
Corollary 4. If $\mathbf{N}$ follows a Poisson distribution, then the random variables $\mathbf{S}_{r}$, $r=1,\ldots,R$, are mutually independent.
Proof. For the Poisson law, (33) holds. Write the product whose factors are given by formula (34) for the values $r_1,\ldots,r_J$, with $\mu_{[k_j]}(\mathbf{N})=m^{k_j}$; then write (41) with
It is clear that (36) is satisfied, which proves the claimed result.
The proof of Theorem 1 (ii) is the object of the following result.
Corollary 5. The random variables $\mathbf{S}_{r}$ and $\mathbf{S}_{r^{\prime}}$ are identically distributed if and only if $p_r=p_{r^{\prime}}$.
Proof. The two random variables are determined by their moments, so they are identically distributed if and only if all their moments are equal. Let us use Theorem 4. If $p_r=p_{r^{\prime}}$, then $\mu_d(\mathbf{S}_{r})=\mu_d(\mathbf{S}_{r^{\prime}})$ for every d. Conversely, if $\mu_d(\mathbf{S}_{r})=\mu_d(\mathbf{S}_{r^{\prime}})$ for every d, then $d=1$ yields
so $p_r=p_{r^{\prime}}$ because $\mu_{[1]}(\mathbf{N})B_{1,1}(\mu_1)=\mu_{[1]}(\mathbf{N})\mu_{1}(\mathbf{Y})>0$. This proves our result.
Acknowledgements
The author is grateful to Professor D. Pierre-Loti-Viaud for introducing him to the subject of non-life insurance and some connected problems in extreme value theory. He would also like to thank the anonymous referees for their careful reading of the paper and their valuable suggestions and comments.