1. Main results
1.1. Basic notation
First we introduce some basic notions of integral geometry following [Reference Schneider and Weil17]. The Euclidean space ℝd is equipped with the Euclidean scalar product ‹·, ·›, and the volume is denoted by | · |. Some of the sets we consider have dimension less than d. In fact, we consider three classes: convex hulls of k + 1 points, orthogonal projections to k-dimensional linear subspaces, and intersections with k-dimensional affine subspaces, where k ∈ {0, …, d}. In these cases, | · | stands for the k-dimensional volume.
The unit ball in ℝk is denoted by $\mathbb B^k$. For p > 0, we write
$$\kappa_p:=\frac{\pi^{p/2}}{\Gamma(1+p/2)}, \tag{1.1}$$
where, for an integer k, we have $\kappa_k=|\mathbb B^k|$.
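As a quick numerical companion to (1.1), the following minimal Python sketch computes κ_p; it assumes only the standard formula above, and the function name kappa is ours.

```python
# Numerical check of (1.1): kappa_p = pi^(p/2) / Gamma(1 + p/2);
# for integer k this is the volume |B^k| of the unit ball.
from math import pi, gamma

def kappa(p: float) -> float:
    """Volume constant kappa_p = pi^(p/2) / Gamma(1 + p/2)."""
    return pi ** (p / 2) / gamma(1 + p / 2)

# Sanity checks for integer dimensions: |B^1| = 2, |B^2| = pi, |B^3| = 4*pi/3.
assert abs(kappa(1) - 2) < 1e-12
assert abs(kappa(2) - pi) < 1e-12
assert abs(kappa(3) - 4 * pi / 3) < 1e-12
```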
For k ∈ {0, …, d}, the linear (respectively affine) Grassmannian of k-dimensional linear (respectively affine) subspaces of ℝd is denoted by $G_{d,k}$ (respectively $A_{d,k}$) and is equipped with the unique rotation invariant (respectively rigid motion invariant) Haar measure $\nu_{d,k}$ (respectively $\mu_{d,k}$), normalized by
$$\nu_{d,k}(G_{d,k})=1$$
and
$$\mu_{d,k}(\{E\in A_{d,k}\colon E\cap\mathbb B^d\ne\emptyset\})=\kappa_{d-k},$$
respectively.
A compact convex subset K of ℝd with nonempty interior is called a convex body. We define the intrinsic volumes of K by Kubota’s formula,
$$V_k(K)=\binom{d}{k}\frac{\kappa_d}{\kappa_k\kappa_{d-k}}\int_{G_{d,k}}|P_LK|\,\nu_{d,k}(\mathrm{d}L), \tag{1.2}$$
where $P_LK$ denotes the image of K under the orthogonal projection to L.
For $L\in G_{d,k}$ (respectively $E\in A_{d,k}$), we denote by $\lambda_L$ (respectively $\lambda_E$) the k-dimensional Lebesgue measure on L (respectively E).
1.2. Affine transformations of spherically symmetric distributions
For a fixed k ∈ {1, …, d}, consider random vectors $X_0,\dots,X_k\in\mathbb R^d$ (not necessarily independent and identically distributed (i.i.d.)) with an arbitrary spherically symmetric joint distribution. By this we mean that the (k + 1)-tuple $(UX_0,\dots,UX_k)$ has the same distribution for any orthogonal d × d matrix U. The convex hull
$$\operatorname{conv}(X_0,\dots,X_k)$$
is a k-dimensional simplex (possibly degenerate) with well-defined k-dimensional volume
$$|\operatorname{conv}(X_0,\dots,X_k)|. \tag{1.3}$$
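For readers who want to experiment numerically, here is a minimal Python sketch of (1.3). It assumes the Gram-determinant formula for simplex volumes (which reappears as (2.5) below); the function name simplex_volume is ours.

```python
# A minimal sketch of (1.3): the k-dimensional volume of conv(X_0, ..., X_k)
# computed via the Gram determinant (cf. (2.5) below).
import numpy as np
from math import factorial

def simplex_volume(points: np.ndarray) -> float:
    """k-volume of the simplex with vertices points[0], ..., points[k] in R^d.

    With M the d x k matrix of edge vectors X_i - X_0, the volume is
    sqrt(det(M^T M)) / k!.
    """
    m = (points[1:] - points[0]).T           # d x k matrix of edge vectors
    k = m.shape[1]
    gram = m.T @ m                           # k x k Gram matrix
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(k)

# Example: a triangle (k = 2) embedded in R^3 with area 1/2.
tri = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
print(simplex_volume(tri))  # ~0.5
```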
How does the volume in (1.3) change under affine transformations? For k = d, the answer is obvious: it is multiplied by the determinant of the transformation. The k < d case presents a more delicate problem.
Theorem 1.1. Let A be any nonsingular d × d matrix, and let ɛ be the ellipsoid defined by
$$\varepsilon:=\{x\in\mathbb R^d\colon \langle (A^{\top}A)^{-1}x,\,x\rangle\le 1\}. \tag{1.4}$$
Then we have
$$|\operatorname{conv}(AX_0,\dots,AX_k)|\overset{\mathrm{d}}{=}\frac{|P_\xi\varepsilon|}{\kappa_k}\,|\operatorname{conv}(X_0,\dots,X_k)|, \tag{1.5}$$
where $P_\xi$ denotes the orthogonal projection to a uniformly chosen random k-dimensional linear subspace ξ independent of $X_0,\dots,X_k$.
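The following Monte Carlo sketch illustrates Theorem 1.1 in the simplest spherically symmetric case (i.i.d. Gaussian $X_i$). It assumes our reading of (1.4) above and the projection-volume formula $|P_L\varepsilon|=\kappa_k\sqrt{\det(O^{\top}A^{\top}AO)}$ from Section 2.2, and it compares only means, not full distributions.

```python
# Monte Carlo sketch of Theorem 1.1 for i.i.d. Gaussian X_i (a spherically
# symmetric joint law).  Assumptions: eps = {x : <(A^T A)^{-1} x, x> <= 1}
# and |P_L eps| = kappa_k * sqrt(det(O^T A^T A O)) with O an orthonormal
# basis of L (cf. Section 2.2).
import numpy as np
from math import factorial, pi, gamma

rng = np.random.default_rng(0)
d, k, n = 5, 3, 50_000
A = rng.standard_normal((d, d))              # a fixed nonsingular matrix
kappa_k = pi ** (k / 2) / gamma(1 + k / 2)

def vol(points):                             # k-volume of conv(rows of points)
    m = (points[1:] - points[0]).T
    return np.sqrt(np.linalg.det(m.T @ m)) / factorial(points.shape[0] - 1)

lhs, rhs = [], []
for _ in range(n):
    x = rng.standard_normal((k + 1, d))      # X_0, ..., X_k
    lhs.append(vol(x @ A.T))                 # |conv(A X_0, ..., A X_k)|
    o, _ = np.linalg.qr(rng.standard_normal((d, k)))  # uniform xi in G_{d,k}
    proj = kappa_k * np.sqrt(np.linalg.det(o.T @ (A.T @ A) @ o))
    rhs.append(proj / kappa_k * vol(x))      # |P_xi eps|/kappa_k * |conv(X)|

print(np.mean(lhs), np.mean(rhs))            # means should agree up to MC error
```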
Due to Kubota’s formula (see (1.2)), $\mathbb E[|P_\xi\varepsilon|]$ is proportional to $V_k(\varepsilon)$. Thus, taking the expectation in (1.5) and using the formula
$$\mathbb E[|P_\xi\varepsilon|]=\binom{d}{k}^{-1}\frac{\kappa_k\kappa_{d-k}}{\kappa_d}\,V_k(\varepsilon)$$
readily implies the following corollary.
Corollary 1.1. Under the assumptions of Theorem 1.1, we have
$$\mathbb E\,|\operatorname{conv}(AX_0,\dots,AX_k)|=\binom{d}{k}^{-1}\frac{\kappa_{d-k}}{\kappa_d}\,V_k(\varepsilon)\,\mathbb E\,|\operatorname{conv}(X_0,\dots,X_k)|. \tag{1.6}$$
For a formula for $V_k(\varepsilon)$, see [Reference Kabluchko and Zaporozhets11]. Relation (1.6) can be generalized to higher moments using the notion of generalized intrinsic volumes introduced in [Reference Dafnis and Paouris4], but we omit the details here.
The main ingredient of the proof of Theorem 1.1 is the following deterministic version of (1.5).
Proposition 1.1. Let A and ɛ be as in Theorem 1.1. Consider $x_1,\dots,x_k\in\mathbb R^d$ and denote by L their span (linear hull). Then
$$|\operatorname{conv}(0,Ax_1,\dots,Ax_k)|=\frac{|P_L\varepsilon|}{\kappa_k}\,|\operatorname{conv}(0,x_1,\dots,x_k)|. \tag{1.7}$$
Let us stress that here the origin is added to the convex hull.
Applying (1.7) to standard Gaussian vectors (details are in Section 2.3) leads to the following representation.
Corollary 1.2. Under the assumptions of Theorem 1.1, we have
$$|P_\xi\varepsilon|\overset{\mathrm{d}}{=}\kappa_k\sqrt{\frac{\det(G_\lambda^{\top}G_\lambda)}{\det(G^{\top}G)}}, \tag{1.8}$$
where G is a random d × k matrix with i.i.d. standard Gaussian entries $N_{ij}$ and $G_\lambda$ is a random d × k matrix with the entries $\lambda_iN_{ij}$, where $\lambda_1,\dots,\lambda_d$ denote the singular values of A.
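Under our reading of (1.8), this representation can be probed by simulation: sample $|P_\xi\varepsilon|$ directly via random orthonormal frames (assuming the projection-volume formula from Section 2.2) and compare with the Gaussian determinant ratio.

```python
# Monte Carlo sketch of the representation in Corollary 1.2 (our reading of
# (1.8)): |P_xi eps| should have the same law as
# kappa_k * sqrt(det(G_lam^T G_lam) / det(G^T G)),
# with G a d x k standard Gaussian matrix and (G_lam)_{ij} = lam_i * G_{ij}.
import numpy as np
from math import pi, gamma

rng = np.random.default_rng(1)
d, k, n = 5, 2, 50_000
lam = np.array([0.5, 1.0, 1.5, 2.0, 3.0])    # singular values of A
kappa_k = pi ** (k / 2) / gamma(1 + k / 2)

direct, gauss = [], []
for _ in range(n):
    # direct sampling: project the ellipsoid onto a uniform k-plane
    o, _ = np.linalg.qr(rng.standard_normal((d, k)))
    direct.append(kappa_k * np.sqrt(np.linalg.det(o.T @ (lam[:, None]**2 * o))))
    # Gaussian-determinant representation
    g = rng.standard_normal((d, k))
    g_lam = lam[:, None] * g
    gauss.append(kappa_k * np.sqrt(np.linalg.det(g_lam.T @ g_lam)
                                   / np.linalg.det(g.T @ g)))

print(np.mean(direct), np.mean(gauss))       # should agree up to MC error
```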
Thus, we obtain the following version of (1.5).
Corollary 1.3. Under the assumptions of Theorem 1.1 and Corollary 1.2, we have
$$|\operatorname{conv}(AX_0,\dots,AX_k)|\overset{\mathrm{d}}{=}\sqrt{\frac{\det(G_\lambda^{\top}G_\lambda)}{\det(G^{\top}G)}}\,|\operatorname{conv}(X_0,\dots,X_k)|,$$
where G is independent of $X_0,\dots,X_k$.
The important special case k = 1 corresponds to the distance between two random points.
Corollary 1.4. Under the assumptions of Theorem 1.1, we have
$$|AX_0-AX_1|\overset{\mathrm{d}}{=}\sqrt{\frac{\lambda_1^2N_1^2+\dots+\lambda_d^2N_d^2}{N_1^2+\dots+N_d^2}}\,|X_0-X_1|,$$
where $N_1,\dots,N_d$ are i.i.d. standard Gaussian variables and $\lambda_1,\dots,\lambda_d$ denote the singular values of A.
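For k = 1 the identity is easy to probe numerically. The sketch below takes i.i.d. Gaussian $X_0$, $X_1$ (a spherically symmetric joint law) and a diagonal A (sufficient for the sketch, since only the singular values enter), and compares a few quantiles of both sides under our reading of the corollary.

```python
# Monte Carlo sketch of Corollary 1.4 (our reading): for spherically symmetric
# (X_0, X_1), |A X_0 - A X_1| equals in law
# sqrt(sum lam_i^2 N_i^2 / sum N_i^2) * |X_0 - X_1|.
import numpy as np

rng = np.random.default_rng(2)
d, n = 4, 200_000
lam = np.array([0.5, 1.0, 2.0, 4.0])         # singular values of A
A = np.diag(lam)                             # diagonal A suffices for the sketch

x0 = rng.standard_normal((n, d))             # i.i.d. Gaussian pairs:
x1 = rng.standard_normal((n, d))             # a spherically symmetric law
lhs = np.linalg.norm((x0 - x1) @ A.T, axis=1)

nrm = rng.standard_normal((n, d))            # N_1, ..., N_d
factor = np.sqrt((lam**2 * nrm**2).sum(1) / (nrm**2).sum(1))
rhs = factor * np.linalg.norm(x0 - x1, axis=1)

for q in (0.25, 0.5, 0.75):                  # compare a few quantiles
    print(q, np.quantile(lhs, q), np.quantile(rhs, q))
```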
1.3. Random points in ellipsoids
Now suppose that $X_0,\dots,X_k$ are independent and uniformly distributed in some convex body K ⊂ ℝd. A classical problem of stochastic geometry is to find the distribution of (1.3), starting with its moments
$$\mathbb E\,|\operatorname{conv}(X_0,\dots,X_k)|^{p}. \tag{1.9}$$
The most studied case is d = 2, k = p = 1, when the problem reduces to calculating the mean distance between two uniformly chosen random points in a planar convex set (see [Reference Bäsel1], [Reference Borel2], [Reference Ghosh6], [Reference Mathai13, Chapter 2], and [Reference Santaló16, Chapter 4]).
For arbitrary d and k = 1, there is an electromagnetic interpretation of (1.9) (see [Reference Hansen and Reitzner8]): a transmitter $X_0$ and a receiver $X_1$ are placed uniformly at random in K. It is empirically known that the received power decreases according to an inverse distance law of the form $1/|X_0-X_1|^{\alpha}$, where α is the so-called path-loss exponent, which depends on the environment in which both are located (see [Reference Rappaport, Annamalai, Buehrer and Tranter15]). Thus, with k = 1 and p = −nα, (1.9) expresses the nth moment of the received power (for n < d/α).
The case of arbitrary k and d has been studied only when K is a ball. In [Reference Miles14] it was shown (see also [Reference Schneider and Weil17, Theorem 8.2.3]) that, for $X_0,\dots,X_k$ uniformly distributed in the unit ball $\mathbb B^d\subset\mathbb R^d$ and for an integer p ≥ 0,
where $\kappa_k$ is defined in (1.1) and, for any real number q > k − 1, we write (see [Reference Schneider and Weil17, Equation (7.8)])
$$b_{q,k}:=\frac{\omega_{q-k+1}\cdots\omega_{q}}{\omega_{1}\cdots\omega_{k}}, \tag{1.11}$$
with $\omega_p:=p\kappa_p$ being equal to the surface area of the unit (p − 1)-dimensional sphere when p is an integer.
In [Reference Kabluchko, Temesvari and Thäle10, Proposition 2.8] this relation was extended to all real p > −1. It should be noted that Proposition 2.8 of [Reference Kabluchko, Temesvari and Thäle10] is formulated for real p ≥ 0 only, but in the proof (see p. 23) it is argued that, by analytic continuation, the formula holds for all real p > −1 as well. Theorem 1.1 implies (for details, see Section 2.4) the following generalization of (1.10) to ellipsoids. Recall that $P_\xi$ denotes the orthogonal projection to a uniformly chosen random k-dimensional linear subspace ξ independent of $X_0,\dots,X_k$.
Theorem 1.2. For $X_0,\dots,X_k$ uniformly distributed in some nondegenerate ellipsoid ɛ ⊂ ℝd and any real number p > −1, we have
Note that (1.12) is indeed a generalization of (1.10) since $P_\xi\mathbb B^d=\mathbb B^k$ a.s. and $|\mathbb B^k|^{p}=\kappa_{k}^{p}$. For k = 1, (1.12) was recently obtained in [Reference Heinrich9].
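Theorem 1.2 rests on the factorization supplied by Theorem 1.1: the pth moment over the ellipsoid equals $\mathbb E[|P_\xi\varepsilon|^{p}]/\kappa_k^{p}$ times the same moment over the unit ball. This factorization (not the explicit right-hand side of (1.10), which we do not reproduce here) can be checked by simulation; the helper functions below are ours.

```python
# Monte Carlo sketch of the structure behind Theorem 1.2: by Theorem 1.1,
# E|conv(X_0..X_k)|^p over an ellipsoid should factor as
# E[|P_xi eps|^p] / kappa_k^p times the same moment over the unit ball.
import numpy as np
from math import factorial

rng = np.random.default_rng(3)
d, k, p, n = 4, 2, 1.5, 50_000
lam = np.array([0.5, 1.0, 2.0, 3.0])         # semi-axes of eps

def ball_points(m):                          # m uniform points in B^d
    g = rng.standard_normal((m, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g * rng.uniform(0, 1, (m, 1)) ** (1 / d)

def vol(points):                             # k-volume of conv(rows)
    m = (points[1:] - points[0]).T
    return np.sqrt(np.linalg.det(m.T @ m)) / factorial(points.shape[0] - 1)

mom_ball, mom_ell, proj = [], [], []
for _ in range(n):
    u = ball_points(k + 1)
    mom_ball.append(vol(u) ** p)
    mom_ell.append(vol(u * lam) ** p)        # points diag(lam) u are uniform in eps
    o, _ = np.linalg.qr(rng.standard_normal((d, k)))
    proj.append(np.linalg.det(o.T @ (lam[:, None]**2 * o)) ** (p / 2))

lhs = np.mean(mom_ell)                       # moment over the ellipsoid
rhs = np.mean(proj) * np.mean(mom_ball)      # (|P eps|/kappa_k)^p factor * ball moment
print(lhs, rhs)                              # should agree up to MC error
```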
For p = 1, the right-hand side of (1.12) is proportional to the kth intrinsic volume of ɛ (see (1.2)), which implies the following result (for details, see Section 2.5).
Corollary 1.5. For $X_0,\dots,X_k$ uniformly distributed in some nondegenerate ellipsoid ɛ ⊂ ℝd, we have
Very recently, for $X_0,\dots,X_k$ uniformly distributed in the unit ball $\mathbb B^d$, the formula for the distribution of $|\operatorname{conv}(X_0,\dots,X_k)|$ was derived in [Reference Grote, Kabluchko and Thäle7]. For a random variable η and $\alpha_1,\alpha_2>0$, we write $\eta\sim B(\alpha_1,\alpha_2)$ to denote that η has a beta distribution with parameters $\alpha_1,\alpha_2$ and the density
$$f(x)=\frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\,x^{\alpha_1-1}(1-x)^{\alpha_2-1},\qquad x\in[0,1].$$
It was shown in [Reference Grote, Kabluchko and Thäle7] that, for $X_0,\dots,X_k$ uniformly distributed in $\mathbb B^d$,
where $\eta,\eta',\eta_1,\dots,\eta_k$ are independent random variables independent of $X_0,\dots,X_k$ such that
Multiplying both sides of (1.13) by $|P_\xi\varepsilon|^2/\kappa_k^2$ and applying Theorem 1.1 and Corollary 1.2 (for details, see Section 2.4) leads to the following generalization of (1.13).
Theorem 1.3. For $X_0,\dots,X_k$ uniformly distributed in some nondegenerate ellipsoid ɛ ⊂ ℝd, we have
where the matrices G and $G_\lambda$ are defined in Corollary 1.2 and $\lambda_1,\dots,\lambda_d$ denote the lengths of the semi-axes of ɛ.
Taking k = 1 yields the distribution of the distance between two random points in ɛ.
Corollary 1.6. Under the assumptions of Theorem 1.3, we have
where $N_1,\dots,N_d$ are i.i.d. standard Gaussian variables.
1.4. Integral geometry formulae
Recall that $G_{d,k}$ and $A_{d,k}$ denote the linear and affine Grassmannians defined in Section 1.1.
For an arbitrary convex body K, p > −d, and k = 1, it is possible to express (1.9) in terms of the lengths of the one-dimensional sections of K (see [Reference Chakerian3] and [Reference Kingman12]):
For general convex bodies this formula does not extend to k > 1; the next theorem shows that for ellipsoids such an extension is possible.
Theorem 1.4. For any nondegenerate ellipsoid ɛ ⊂ ℝd, k ∈ {0, 1, …, d}, and any real number p > −d + k − 1, we have
The proof is given in Section 3.2.
Comparing this theorem with Theorem 1.2 readily gives the following connection between the volumes of k-dimensional cross-sections and projections of ellipsoids.
Theorem 1.5. Under the assumptions of Theorem 1.4, we have
For p = 0, we obtain the following integral formula.
Corollary 1.7. Under the assumptions of Theorem 1.4, we have
This result may be regarded as an affine version of the following integral formula of Furstenberg and Tzkoni (see [Reference Furstenberg and Tzkoni5]):
$$\int_{G_{d,k}}|\varepsilon\cap L|^{d}\,\nu_{d,k}(\mathrm{d}L)=\frac{\kappa_k^{d}}{\kappa_d^{k}}\,|\varepsilon|^{k}. \tag{1.15}$$
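Since the section volume of a centered ellipsoid admits the exact formula $|\varepsilon\cap L|=\kappa_k/\sqrt{\det(O^{\top}M^{-1}O)}$ for $\varepsilon=\{x\colon\langle M^{-1}x,x\rangle\le1\}$ and O an orthonormal basis of L, the Furstenberg–Tzkoni identity is easy to verify by simulation; the sketch below assumes our reading of (1.15).

```python
# Monte Carlo sketch of the Furstenberg-Tzkoni identity as recalled in (1.15):
# the average of |eps ∩ L|^d over uniform L in G_{d,k} should equal
# (kappa_k^d / kappa_d^k) * |eps|^k for a centered ellipsoid eps.
import numpy as np
from math import pi, gamma

rng = np.random.default_rng(4)
d, k, n = 4, 2, 100_000
lam = np.array([0.5, 1.0, 2.0, 3.0])          # semi-axes of eps
kappa = lambda q: pi ** (q / 2) / gamma(1 + q / 2)

# Exact section formula: |eps ∩ L| = kappa_k / sqrt(det(O^T M^{-1} O)) for
# eps = {x : x^T M^{-1} x <= 1} with M = diag(lam^2), O spanning L.
minv = 1.0 / lam**2
vals = []
for _ in range(n):
    o, _ = np.linalg.qr(rng.standard_normal((d, k)))
    sec = kappa(k) / np.sqrt(np.linalg.det(o.T @ (minv[:, None] * o)))
    vals.append(sec ** d)

vol_eps = kappa(d) * np.prod(lam)             # |eps| = kappa_d * lam_1 ... lam_d
print(np.mean(vals), kappa(k) ** d / kappa(d) ** k * vol_eps ** k)
```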
Our next theorem generalizes this formula in the same way as (1.14) generalizes (1.15).
Theorem 1.6. For any nondegenerate ellipsoid ɛ ⊂ ℝd, k ∈ {0, 1, …, d}, and any real number p > −d + k, we have
In probabilistic language it may be formulated as
where $X_1,\dots,X_k$ are independent uniformly distributed random vectors in ɛ and ξ is a uniformly chosen random k-dimensional linear subspace in ℝd.
2. Proofs: part I
2.1. Proof of Theorem 1.1 assuming Proposition 1.1
First note that, with probability 1, the equation
$$|\operatorname{conv}(AX_0,\dots,AX_k)|=0$$
holds if and only if
$$|\operatorname{conv}(X_0,\dots,X_k)|=0,$$
which in turn is equivalent to
$$\dim\operatorname{conv}(X_0,\dots,X_k)<k.$$
Therefore, to prove (1.5), it is enough to show that the conditional distributions of the two sides of (1.5)
given $\dim\operatorname{conv}(X_0,\dots,X_k)=k$ are equal. Thus, without loss of generality, we can assume that the simplex $\operatorname{conv}(X_0,\dots,X_k)$ is nondegenerate with probability 1:
$$\dim\operatorname{conv}(X_0,\dots,X_k)=k\quad\text{a.s.} \tag{2.1}$$
Our original proof was based on the Blaschke–Petkantschin formula and the characteristic function uniqueness theorem. (The original proof can be found in the first version of this paper, available at https://arxiv.org/abs/1711.06578v1.) Later, Youri Davydov found a much simpler and nicer proof, which also allows us to get rid of the assumption that $X_0,\dots,X_k$ have a joint density. Let us present this proof.
Since the joint distribution of $X_0,\dots,X_k$ is spherically symmetric, we have, for any orthogonal matrix U,
$$|\operatorname{conv}(AUX_0,\dots,AUX_k)|\overset{\mathrm{d}}{=}|\operatorname{conv}(AX_0,\dots,AX_k)|. \tag{2.2}$$
Now let $\Upsilon$ be a random orthogonal matrix chosen uniformly from SO(d) with respect to the Haar probability measure and independently of $X_0,\dots,X_k$. By (2.1), with probability one the span of $X_1-X_0,\dots,X_k-X_0$ is a k-dimensional linear subspace of ℝd. Thus, the span
$$\xi:=\operatorname{span}(\Upsilon X_1-\Upsilon X_0,\dots,\Upsilon X_k-\Upsilon X_0)$$
is a random uniformly chosen k-dimensional linear subspace in ℝd independent of $X_0,\dots,X_k$. Applying Proposition 1.1 to the vectors $\Upsilon X_1-\Upsilon X_0,\dots,\Upsilon X_{k}-\Upsilon X_0$, we obtain
$$|\operatorname{conv}(A\Upsilon X_0,\dots,A\Upsilon X_k)|=\frac{|P_\xi\varepsilon|}{\kappa_k}\,|\operatorname{conv}(\Upsilon X_0,\dots,\Upsilon X_k)|=\frac{|P_\xi\varepsilon|}{\kappa_k}\,|\operatorname{conv}(X_0,\dots,X_k)|,$$
where the last equality uses the orthogonal invariance of the volume.
Combining this with (2.2) for $U=\Upsilon$ completes the proof.
2.2. Proof of Proposition 1.1
To avoid trivialities, we assume that dim L = k, i.e. $x_1,\dots,x_k$ are linearly independent. Let $e_1,\dots,e_k\in\mathbb R^d$ be some orthonormal basis of L. Let $O_L$ and X denote the d × k matrices whose columns are $e_1,\dots,e_k$ and $x_1,\dots,x_k$, respectively. It is easy to check that $O_LO_L^{\top}$ is the d × d matrix corresponding to the orthogonal projection operator $P_L$.
Recall that ɛ is defined by (1.4). It is known (see, e.g. [Reference Schweppe18, Appendix H]) that the orthogonal projection $P_L\varepsilon$ is an ellipsoid in L, and its volume is given by
$$|P_L\varepsilon|=\kappa_k\sqrt{\det(O_L^{\top}A^{\top}A\,O_L)}. \tag{2.3}$$
A well-known formula for the volume of a k-dimensional parallelepiped implies that, for any $x_1,\dots,x_k\in\mathbb R^d$,
$$|\operatorname{conv}(0,x_1,\dots,x_k)|=\frac{1}{k!}\sqrt{\det(X^{\top}X)}. \tag{2.5}$$
Therefore, writing $X=O_LC$ with a k × k matrix C,
$$\frac{|\operatorname{conv}(0,Ax_1,\dots,Ax_k)|}{|\operatorname{conv}(0,x_1,\dots,x_k)|}=\sqrt{\frac{\det(X^{\top}A^{\top}AX)}{\det(X^{\top}X)}}=\sqrt{\frac{\det(C)^2\det(O_L^{\top}A^{\top}A\,O_L)}{\det(C)^2}}=\sqrt{\det(O_L^{\top}A^{\top}A\,O_L)}.$$
Applying (2.3) produces (1.7).
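The chain of equalities above can be checked numerically: for random $x_1,\dots,x_k$, the ratio of simplex volumes must coincide, up to rounding, with $\sqrt{\det(O_L^{\top}A^{\top}AO_L)}$, where $O_L$ is obtained here from a QR decomposition.

```python
# Numeric consistency check of the algebra in Section 2.2: the ratio
# |conv(0, Ax_1..Ax_k)| / |conv(0, x_1..x_k)| should equal
# sqrt(det(O_L^T A^T A O_L)) with O_L an orthonormal basis of
# L = span(x_1, ..., x_k).
import numpy as np
from math import factorial

rng = np.random.default_rng(5)
d, k = 5, 3
A = rng.standard_normal((d, d))
X = rng.standard_normal((d, k))               # columns x_1, ..., x_k

def vol0(m):                                  # |conv(0, columns of m)|
    return np.sqrt(np.linalg.det(m.T @ m)) / factorial(m.shape[1])

o_l, _ = np.linalg.qr(X)                      # orthonormal basis of L
lhs = vol0(A @ X) / vol0(X)
rhs = np.sqrt(np.linalg.det(o_l.T @ A.T @ A @ o_l))
print(lhs, rhs)                               # agree up to rounding
```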
2.3. Proof of Corollary 1.2
Denote by $G_1,\dots,G_k\in\mathbb R^d$ the columns of the matrix G. Hence, $AG_1,\dots,AG_k\in\mathbb R^d$ are the columns of the matrix AG. Using Proposition 1.1 with $x_i=G_i$ and applying (2.5) to G and AG gives
$$\frac{1}{k!}\sqrt{\det(G^{\top}A^{\top}AG)}=\frac{|P_\eta\varepsilon|}{\kappa_k}\,\frac{1}{k!}\sqrt{\det(G^{\top}G)},$$
or
$$|P_\eta\varepsilon|=\kappa_k\sqrt{\frac{\det(G^{\top}A^{\top}AG)}{\det(G^{\top}G)}},$$
where η is the span of $G_1,\dots,G_k$. Since $G_1,\dots,G_k$ are i.i.d. standard Gaussian vectors, η is uniformly distributed in $G_{d,k}$ with respect to $\nu_{d,k}$ (given dim η = k, which holds a.s.); therefore, $\eta\overset{\mathrm{d}}{=}\xi$. Moreover, writing the singular value decomposition $A=UDV^{\top}$ and noting that $V^{\top}G\overset{\mathrm{d}}{=}G$, we may jointly replace $G^{\top}A^{\top}AG=(V^{\top}G)^{\top}D^{2}(V^{\top}G)$ by $G_\lambda^{\top}G_\lambda$ and $G^{\top}G=(V^{\top}G)^{\top}(V^{\top}G)$ by $G^{\top}G$, and the corollary follows.
2.4. Proofs of Theorem 1.2 and Theorem 1.3
For any nondegenerate ellipsoid ɛ, there exists a unique symmetric positive-definite d × d matrix A such that $\varepsilon=A\mathbb B^d$. Since $X_0,\dots,X_k$ are i.i.d. random vectors uniformly distributed in ɛ, the vectors $A^{-1}X_0,\dots,A^{-1}X_k$ are i.i.d. random vectors uniformly distributed in $\mathbb B^d$. It follows from Theorem 1.1 that
$$|\operatorname{conv}(X_0,\dots,X_k)|\overset{\mathrm{d}}{=}\frac{|P_\xi\varepsilon|}{\kappa_k}\,|\operatorname{conv}(A^{-1}X_0,\dots,A^{-1}X_k)|. \tag{2.6}$$
Taking the pth moment and applying (1.10) implies Theorem 1.2.
Now apply (1.13) to $A^{-1}X_0,\dots,A^{-1}X_k$:
Multiplying by $|P_\xi\varepsilon|^2/\kappa_k^2$ and applying (2.6) implies the first equation in Theorem 1.3. The second equation follows from (1.8).
2.5. Proof of Corollary 1.5
From Kubota’s formula (see (1.2)) and Theorem 1.2, we have
where
From the definitions of $b_{d,k}$ (see (1.11)) and $\kappa_p$ (see (1.1)), we obtain
Using Legendre’s duplication formula for the gamma function,
$$\Gamma(z)\,\Gamma\Big(z+\frac12\Big)=2^{1-2z}\sqrt{\pi}\,\Gamma(2z),$$
the recursion Γ(1 + z) = zΓ(z), and the fact that k, d ∈ ℤ, we obtain
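As a quick sanity check of the duplication formula used in this step (the check is elementary and independent of the surrounding computation):

```python
# Numerical check of Legendre's duplication formula:
# Gamma(z) * Gamma(z + 1/2) = 2^(1 - 2z) * sqrt(pi) * Gamma(2z).
from math import gamma, sqrt, pi

for z in (0.7, 1.0, 2.5, 4.0):
    lhs = gamma(z) * gamma(z + 0.5)
    rhs = 2 ** (1 - 2 * z) * sqrt(pi) * gamma(2 * z)
    assert abs(lhs - rhs) < 1e-9 * rhs
print("duplication formula verified")
```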
3. Proofs: part II
3.1. Blaschke–Petkantschin formula
In our further calculations we will need to integrate nonnegative measurable functions h of k-tuples of points in ℝd. To this end, we first integrate over the k-tuples of points in a fixed k-dimensional linear subspace L with respect to the product measure $\lambda_L^k$ and then integrate over $G_{d,k}$ with respect to $\nu_{d,k}$. The corresponding transformation formula is known as the linear Blaschke–Petkantschin formula (see [Reference Schneider and Weil17, Theorem 7.2.1]):
$$\int_{(\mathbb R^d)^k}h\,\mathrm{d}\lambda^k=b_{d,k}(k!)^{d-k}\int_{G_{d,k}}\int_{L^k}h\,|\operatorname{conv}(0,x_1,\dots,x_k)|^{d-k}\,\lambda_L^k(\mathrm{d}(x_1,\dots,x_k))\,\nu_{d,k}(\mathrm{d}L), \tag{3.1}$$
where $b_{d,k}$ is defined in (1.11).
A similar affine version (see [Reference Schneider and Weil17, Theorem 7.2.7]) may be stated as follows:
$$\int_{(\mathbb R^d)^{k+1}}h\,\mathrm{d}\lambda^{k+1}=b_{d,k}(k!)^{d-k}\int_{A_{d,k}}\int_{E^{k+1}}h\,|\operatorname{conv}(x_0,\dots,x_k)|^{d-k}\,\lambda_E^{k+1}(\mathrm{d}(x_0,\dots,x_k))\,\mu_{d,k}(\mathrm{d}E). \tag{3.2}$$
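The normalization in (3.1) can be checked by simulation in the simplest case d = 2, k = 1, where $b_{2,1}=\omega_2/\omega_1=\pi$ and the Jacobian factor reduces to |x|. The test function below is our choice, and both integrals are estimated by plain Monte Carlo.

```python
# Monte Carlo sketch of the linear Blaschke-Petkantschin formula (3.1) in the
# case d = 2, k = 1: b_{2,1} = pi, Jacobian factor |x|^{d-k} = |x|.
# Test function: h(x) = exp(-|x|^2), whose integral over R^2 equals pi.
import numpy as np

rng = np.random.default_rng(6)
n = 500_000
box = 6.0                                     # truncation box (h decays fast)

# Left-hand side: integral of h over R^2 via MC on [-box, box]^2.
pts = rng.uniform(-box, box, (n, 2))
lhs = (2 * box) ** 2 * np.mean(np.exp(-(pts**2).sum(1)))

# Right-hand side: pi times the average over uniform lines L through 0 of
# int_L h(x) |x| dx; points on L are x = r * (cos theta, sin theta).
theta = rng.uniform(0, np.pi, n)              # uniform line in G_{2,1}
r = rng.uniform(-box, box, n)                 # MC for the 1-dim integral
x_on_line = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
inner = 2 * box * np.exp(-(x_on_line**2).sum(1)) * np.abs(r)
rhs = np.pi * np.mean(inner)

print(lhs, rhs)                               # both should be close to pi
```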
3.2. Proof of Theorem 1.4
Let
Using the affine Blaschke–Petkantschin formula (see (3.2)) with
yields
Now fix $E\in A_{d,k}$. Applying Theorem 1.2 to the ellipsoid ɛ ∩ E gives
which leads to
3.3. Proof of Theorem 1.6
The proof is similar to the previous one. Let
Using the linear Blaschke–Petkantschin formula (see (3.1)) with
gives
Fix $L\in G_{d,k}$. Since ɛ ∩ L is an ellipsoid, there exists a linear transform $A_L\colon L\to\mathbb R^k$ such that $A_L(\varepsilon\cap L)=\mathbb B^k$. Applying the coordinate transformation $x_i=A_L^{-1}y_i$, i = 1, 2, …, k, we get
It is known (see, e.g. [Reference Schneider and Weil17, Theorem 8.2.2]) that
Substituting (3.5) and (3.4) into (3.3) completes the proof.
Acknowledgements
The authors are grateful to Daniel Hug and Günter Last for helpful discussions and suggestions which improved this paper. The authors also gratefully acknowledge Youri Davydov who considerably simplified the proof of Theorem 1.1.
The work was done with the financial support of Bielefeld University (Germany). The work of F. G. and D. Z. was supported by grant SFB 1283. The work of A. G. was supported by grant IRTG 2235. The work of D. Z. was supported by grant RFBR 16-01-00367 and by the Program of Fundamental Research of the Russian Academy of Sciences, ‘Modern Problems of Fundamental Mathematics’.