
Functional limit theorems for the Euler characteristic process in the critical regime

Published online by Cambridge University Press:  17 March 2021

Andrew M. Thomas*
Affiliation:
Purdue University
Takashi Owada*
Affiliation:
Purdue University
*Postal address: Department of Statistics, Purdue University, West Lafayette, IN47907, USA. Email address: thoma186@purdue.edu

Abstract

This study presents functional limit theorems for the Euler characteristic of Vietoris–Rips complexes. The points are drawn from a nonhomogeneous Poisson process on $\mathbb{R}^d$ , and the connectivity radius governing the formation of simplices is taken as a function of the time parameter t, which allows us to treat the Euler characteristic as a stochastic process. The setting in which this takes place is that of the critical regime, in which the simplicial complexes are highly connected and have nontrivial topology. We establish two ‘functional-level’ limit theorems, a strong law of large numbers and a central limit theorem, for the appropriately normalized Euler characteristic process.

Type
Original Article
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The Euler characteristic is one of the oldest and simplest topological summaries. It is at once local and global, combinatorial and topological, owing to its representation as either the alternating sum of the Betti numbers of a topological space or the alternating sum of the simplex counts in its triangulation. Beyond its theoretical beauty, the Euler characteristic has recently made its way into applied mathematics, notably topological data analysis (TDA). For instance, the Euler characteristic of sublevel (or superlevel) sets of random fields has found broad applications [Reference Adler1, Reference Crawford, Monod, Chen, Mukherjee and Rabadán8]. In TDA, the dynamic evolution of topology is typically captured via persistent homology; see [Reference Carlsson7] for a good introduction. Persistent homology originated in computational topology [Reference Edelsbrunner and Harer10] and has received much attention as useful machinery for exploring the manner in which topological holes appear and disappear in a filtered topological space. The primary objective of the current study is to associate the Euler characteristic with a filtered topological space by treating it as a stochastic process in the time parameter t.

Due to recent rapid development of TDA in conjunction with probability theory, there has been a growing interest in the study of random geometric complexes. We focus on the Vietoris–Rips complex [Reference Kahle15, Reference Kahle and Meckes16, Reference Owada19], because of its ease of application, especially in computational topology; though much research has also been done on the Čech complex [Reference Bobrowski and Adler4, Reference Bobrowski and Mukherjee6, Reference Decreusefond, Ferraz, Randriambololona and Vergne9, Reference Goel, Trinh and Tsunoda11, Reference Kahle15, Reference Kahle and Meckes16, Reference Owada and Thomas20, Reference Yogeshwaran, Subag and Adler25], and on the notion of generalizing both types of complex [Reference Hiraoka, Shirai and Trinh13]. An elegant survey of progress in these areas can be found in [Reference Bobrowski and Kahle5]. These studies are mostly concerned with the asymptotic behavior of topological invariants such as the Euler characteristic and Betti numbers. Among them, [Reference Decreusefond, Ferraz, Randriambololona and Vergne9] derived a concentration inequality for the Euler characteristic built over a Čech complex on a d-dimensional torus, as well as its asymptotic mean and variance; and [Reference Hug, Last and Schulte14] established a multivariate central limit theorem for the intrinsic volumes, including the Euler characteristic. Furthermore, [Reference Schneider and Weil23] proved ergodic theorems for the Euler characteristic over a stationary and ergodic point process.

Most of the studies cited in the last paragraph start with either an independent and identically distributed (i.i.d.) random sample $\mathcal X_n=\{ X_1,\dots,X_n \}$ or a Poisson point process $\mathcal{P}_n=\{ X_1,\dots,X_{N_n} \}$, where $N_n$ is a Poisson random variable with mean n, independent of $(X_i)_i$. One then considers a simple Boolean model: the union of balls centered at the points of $\mathcal X_n$ or $\mathcal{P}_n$ with a sequence of non-random radii $s_n\to 0$, $n\to\infty$. The behavior of topological invariants based on the Boolean model splits into several distinct regimes. When $ns_n^d \to 0$ as $n\to\infty$, we have what is called the sparse (or subcritical) regime, in which many small connected components occur. If $n s_n^d \to \infty$ as $n\to\infty$, we have the dense (or supercritical) regime, which is characterized by a large connected component with few topological holes as a result of the slower decay rate of $s_n$. The intermediate case, in which $n s_n^d$ converges to a positive and finite constant, is called the critical regime; here the stochastic features of a geometric complex are less predictable, and arguably more interesting, owing to the emergence of highly connected components with nontrivial topologies. The present study focuses exclusively on the critical regime, because the behavior of the Euler characteristic in the other regimes is essentially trivial. For example, in the dense regime the Euler characteristic is asymptotic to 1 (see [Reference Bobrowski and Adler4]).

Within the context of geometric complexes, such as the Čech and Vietoris–Rips complexes, few attempts have been made thus far at deriving limit theorems on the functional level for topological invariants (for some exceptions, see [Reference Biscio, Chenavier, Hirsch and Svane3, Reference Owada19, Reference Owada and Thomas20]). From the viewpoint of persistent homology, such functional information is crucial for understanding topological invariants in a filtered topological space. With this in mind, the current study establishes functional limit theorems for the Euler characteristic defined as a stochastic process. More specifically, we shall prove a functional strong law of large numbers and a functional central limit theorem in the space $D[0,\infty)$ of right-continuous functions with left limits. Our results are the first functional limit theorems in the literature for a topological invariant under the critical regime that have neither time/radius restrictions nor restrictions on the number/size of components in the underlying simplicial complex. The primary benefit of our results lies in the information they provide about topological changes as the time parameter t varies. For example, if we let $\chi_n(t)$ denote the Euler characteristic considered as a stochastic process, then, as consequences of our main theorems, one can capture the limiting behavior of various useful functions of the Euler characteristic process via the continuous mapping theorem. We elaborate on these at the end of Section 3. Other potential applications can be found in Chapter 14 of [Reference Billingsley2] and in [Reference Whitt24].

In Section 2 we discuss all the topological background necessary for the paper. In Section 3 we discuss our main results: the functional strong law of large numbers and functional central limit theorems for the Euler characteristic process in the critical regime. All of the proofs in the paper are collected in Section 4.

2. Preliminaries

2.1. Topology

The main concept in the present paper is the Euler characteristic. Before introducing it, we begin with the notions of a simplex and an (abstract) simplicial complex. Let $\mathbb{N}$ and $\mathbb{N}_0$ denote the positive and nonnegative integers, respectively, and let B(x, r) be the closed ball centered at x with radius $r \ge 0$.

Definition 2.1. Let $\mathcal{X}$ be a finite set. An abstract simplicial complex $\mathcal{K}$ is a collection of non-empty subsets of $\mathcal{X}$ satisfying the following conditions:

  1. All singleton subsets of $\mathcal{X}$ are in $\mathcal{K}$.

  2. If $\sigma \in \mathcal{K}$ and $\tau \subset \sigma$, then $\tau \in \mathcal{K}$.

If $\sigma \in \mathcal{K}$ and $|\sigma| = k + 1$, with $k \in \mathbb{N}_0$, then $\sigma$ is said to have dimension k and is called a k-simplex in $\mathcal{K}$. The dimension of $\mathcal{K}$ is the largest dimension of a simplex in $\mathcal{K}$.

It can be shown (cf. [Reference Edelsbrunner and Harer10]) that every abstract simplicial complex $\mathcal{K}$ of dimension d can be embedded into $\mathbb{R}^{2d+1}$ . The image of such an embedding, denoted $\textrm{geom}(\mathcal{K})$ , is called the geometric realization of $\mathcal{K}$ . A topological space Y is said to be triangulable if there exists a simplicial complex $\mathcal{K}$ together with a homeomorphism between Y and $\textrm{geom}(\mathcal{K})$ . We now define the Euler characteristic.

Definition 2.2. Take $\mathcal{K}$ to be a simplicial complex and let $S_k(\mathcal{K})$ be the number of k-simplices in $\mathcal{K}$ . Then the Euler characteristic of $\mathcal{K}$ is defined as

\begin{equation*}\chi(\mathcal{K}) := \sum_{k=0}^{\infty} (\!-\!1)^k S_k(\mathcal{K}).\end{equation*}

If Y is a triangulable topological space with an associated simplicial complex $\mathcal K$ , then we have $\chi(Y) = \chi(\mathcal{K})$ , and $\chi(Y)$ is independent of the triangulation (see Theorem 2.44 in [Reference Hatcher12]). Therefore, the Euler characteristic is a topological invariant (and in fact a homotopy invariant).
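Definition 2.2 is directly computable. The following Python sketch (our illustration, not from the paper, with a hypothetical helper name) encodes each simplex as a tuple of vertex labels, so a simplex on $k+1$ vertices contributes $({-}1)^k$ to the sum.

```python
def euler_characteristic(simplices):
    """Alternating sum over simplices: chi = sum_k (-1)^k S_k, where a
    simplex on k+1 vertices has dimension k."""
    return sum((-1) ** (len(sigma) - 1) for sigma in simplices)

# The boundary of a triangle (3 vertices, 3 edges, no 2-simplex) triangulates
# a circle: chi = 3 - 3 = 0.
circle = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
print(euler_characteristic(circle))  # -> 0

# Adding the 2-simplex gives a contractible disk: chi = 3 - 3 + 1 = 1.
disk = circle + [(0, 1, 2)]
print(euler_characteristic(disk))  # -> 1
```

The two outputs agree with the Betti-number description: the circle has $\beta_0 = \beta_1 = 1$, so $\chi = 0$, while the disk has $\beta_0 = 1$ and no higher homology, so $\chi = 1$.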

Our setting for this study is always in $\mathbb{R}^d$ , so we may take $\mathcal{X}$ , $\mathcal{Y}$ to be arbitrary finite subsets of $\mathbb{R}^d$ . To conclude this section, we will now define the Vietoris–Rips complex: the aforementioned simplicial complex that allows us to get a topological, as well as combinatorial, structure from our data $\mathcal{X}$ . A family of Vietoris–Rips complexes $( \mathcal{R}(\mathcal{X}, t), \, t \ge 0 )$ for points in $\mathbb{R}^2$ can be seen in Figure 1; yellow represents a 2-simplex and green represents a 3-simplex, which cannot be embedded in $\mathbb{R}^2$ .

Figure 1. A family of Vietoris–Rips complexes.

Definition 2.3. Let $\mathcal{X} = \{x_1, \dots, x_n\}$ be a finite subset of $\mathbb{R}^d$ and $t\ge0$ . The Vietoris–Rips complex $\mathcal{R}(\mathcal{X},t)$ is the (abstract) simplicial complex with the following properties:

  1. All singleton subsets of $\mathcal{X}$ are in $\mathcal{R}(\mathcal{X},t)$.

  2. A k-simplex $\sigma = \{x_{i_0}, \dots, x_{i_k}\}$ is in $\mathcal{R}(\mathcal{X},t)$ if

    \begin{equation*}B(x_{i_j}, t) \cap B(x_{i_\ell}, t) \neq \emptyset\end{equation*}
    for all $0 \leq j < \ell \leq k$ .
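Since $B(x, t) \cap B(y, t) \neq \emptyset$ exactly when $\Vert x - y \Vert \le 2t$, Definition 2.3 reduces to a pairwise distance check. The sketch below (our illustration; the truncation at a maximal dimension is for tractability only, as the full complex may contain simplices of any dimension) lists the simplices of $\mathcal{R}(\mathcal{X},t)$ for a finite point set.

```python
from itertools import combinations
import math

def rips_complex(points, t, max_dim=3):
    """Vietoris-Rips complex R(X, t): a k-simplex enters iff every pair of
    its vertices has overlapping closed balls of radius t, i.e. pairwise
    distance <= 2t. Singletons are always included."""
    def close(i, j):
        return math.dist(points[i], points[j]) <= 2 * t
    n = len(points)
    simplices = [(i,) for i in range(n)]
    for k in range(1, max_dim + 1):
        for sigma in combinations(range(n), k + 1):
            if all(close(i, j) for i, j in combinations(sigma, 2)):
                simplices.append(sigma)
    return simplices

pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8)]
# Pairwise distances are 1.0, ~0.94, ~0.94; at t = 0.3 all exceed 2t = 0.6,
# so only the three vertices appear.
print(len(rips_complex(pts, 0.3)))  # -> 3
# At t = 0.6 every pair is within 2t = 1.2: 3 vertices + 3 edges + 1 triangle.
print(len(rips_complex(pts, 0.6)))  # -> 7
```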

2.2. Tools

Throughout, we let $\mathcal{P}_n$ denote a Poisson point process on $\mathbb{R}^d$ whose intensity measure assigns mass $n\int_Af(x)\,\textrm{d}x$ to each Borel subset A of $\mathbb{R}^d$, where f is a probability density function. Writing m for Lebesgue measure on $\mathbb{R}^d$, we assume that f is essentially bounded, i.e., $\Vert f \Vert_{\infty} := \inf \big\{a \in \mathbb{R}: m\big(f^{-1}(a, \infty)\big) = 0\big\}<\infty$.

For two finite subsets $\mathcal{Y}\subset \mathcal{X}$ of $\mathbb{R}^d$ with $|\mathcal{Y}|=k+1$ , and $t\ge0$ , we define

(2.1) \begin{align}h_t^{k} (\mathcal{Y}) &:= \textbf{1} \big\{ \mathcal{Y} \text{ forms a } k\text{-simplex in } \mathcal{R} (\mathcal{X},t) \big\} = \prod_{x, y \in \mathcal{Y}, \, x \neq y} \textbf{1} \Big\{ B(x, t) \cap B(y, t) \neq \emptyset \Big\}. \end{align}

We now record some obvious but highly useful properties of this indicator function. First, it is translation- and scale-invariant: for any $c>0$, $x\in\mathbb{R}^d$, and $y_0, \dots, y_k \in \mathbb{R}^d$,

\begin{equation*}h^{k}_t(cy_0 + x, \dots, cy_k + x) = h^{k}_{t/c}(y_0, \dots, y_k).\end{equation*}

Furthermore, for any fixed $y_i \in \mathbb{R}^d$ , $i=0,\dots,k$ , it is nondecreasing in t, i.e.,

(2.2) \begin{equation} h_s^{k}(y_0,\dots,y_k) \le h_t^{k}(y_0, \dots, y_k), \ \ \ 0 \le s \le t.\end{equation}
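Both properties are easy to verify numerically. The sketch below (our illustration; the function `h` plays the role of $h^k_t$, using the fact that $B(x,t) \cap B(y,t) \neq \emptyset$ iff $\Vert x - y\Vert \le 2t$) checks the scale/translation identity and the monotonicity (2.2) on a small planar configuration.

```python
import math
from itertools import combinations

def h(t, ys):
    """Indicator that ys forms a simplex in the Rips complex at radius t:
    all pairwise distances are <= 2t."""
    return int(all(math.dist(a, b) <= 2 * t for a, b in combinations(ys, 2)))

ys = [(0.0, 0.0), (0.3, 0.4), (0.6, 0.1)]
c, x = 2.0, (5.0, -1.0)
shifted = [(c * a + x[0], c * b + x[1]) for a, b in ys]
for t in (0.1, 0.25, 0.5, 1.0):
    # translation/scale invariance: h_t(c*y + x) = h_{t/c}(y)
    assert h(t, shifted) == h(t / c, ys)
# monotonicity (2.2) in t: h_s <= h_t whenever s <= t
assert h(0.2, ys) <= h(0.5, ys)
```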

Using (2.1), we can define the k-simplex counts by $S_k(\mathcal{X}, t) := \sum_{\mathcal{Y} \subset \mathcal{X}} h_t^k (\mathcal{Y})$, where the sum ranges over subsets $\mathcal{Y}$ with $|\mathcal{Y}|=k+1$. As stated in the introduction, we focus exclusively on the critical regime, so that $ns_n^d\to1$ as $n\to \infty$. Finally, in order to formulate the Euler characteristic as a stochastic process, let $r_n(t):= s_nt$ and define

(2.3) \begin{equation} \chi_n(t) := \sum_{k=0}^\infty (\!-\!1)^k S_k \big( \mathcal{P}_n, r_n(t) \big) = \sum_{k=0}^\infty (\!-\!1)^k \sum_{\mathcal{Y} \subset \mathcal{P}_n} h_{r_n(t)}^k (\mathcal{Y}), \ \ t \ge 0.\end{equation}

Notice that (2.3) is almost surely (a.s.) a finite sum, because the cardinality of $\mathcal{P}_n$ , denoted by $|\mathcal{P}_n|$ , is finite a.s., and $S_k\big( \mathcal{P}_n, r_n(t) \big)\equiv 0$ for all $k \ge |\mathcal{P}_n|$ . Furthermore, for a Borel subset A of $\mathbb{R}^d$ , define a restriction of the Euler characteristic to A by

(2.4) \begin{equation} \chi_{n,A}(t) := \sum_{k=0}^\infty (\!-\!1)^k \sum_{\mathcal{Y} \subset \mathcal{P}_n} h_{r_n(t)}^k (\mathcal{Y}) \textbf{1} \big\{ \text{LMP}(\mathcal{Y})\in A \big\},\end{equation}

where $\text{LMP}(\mathcal{Y})$ denotes the leftmost point of $\mathcal{Y}$, i.e., the least point with respect to the lexicographic order on $\mathbb{R}^d$. This restriction is useful for proving finite-dimensional convergence when A is bounded, since the dependency graph then involves only finitely many random variables, which allows us to use Stein's method for normal approximation. See Section 4.3 for more details. Clearly, $\chi_{n,\mathbb{R}^d}(t) = \chi_n(t)$.
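To make the process (2.3) concrete, the following sketch (ours, not from the paper) simulates a sample in the critical-regime normalization $ns_n^d = 1$ with f uniform on $[0,1]^2$ and evaluates $\chi_n(t)$ on a grid of t values. Two simplifications are assumed for illustration: a fixed sample size n rather than a Poisson number of points, and truncation of the simplex dimension, whereas (2.3) sums over all dimensions.

```python
import math
import random
from itertools import combinations

def euler_process(points, radii, max_dim=3):
    """Evaluate chi = sum_k (-1)^k S_k at each radius, where a k-simplex is
    present iff all pairwise distances are <= 2 * radius. Dimensions above
    max_dim are ignored (an approximation to the full alternating sum)."""
    n = len(points)
    dist = {(i, j): math.dist(points[i], points[j])
            for i, j in combinations(range(n), 2)}
    out = []
    for r in radii:
        chi = n  # each 0-simplex contributes +1
        for k in range(1, max_dim + 1):
            for sigma in combinations(range(n), k + 1):
                if all(dist[i, j] <= 2 * r for i, j in combinations(sigma, 2)):
                    chi += (-1) ** k
        out.append(chi)
    return out

random.seed(0)
d, n = 2, 40
s_n = n ** (-1 / d)  # critical regime normalization: n * s_n**d = 1
pts = [(random.random(), random.random()) for _ in range(n)]
chi = euler_process(pts, [s_n * t for t in (0.0, 0.25, 0.5, 1.0)])
print(chi[0])  # -> 40: at t = 0 only the vertices are present
```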

3. Main results

The first contribution of the present paper is the functional strong law of large numbers (FSLLN) for $\chi_n$ in the space $D[0,\infty)$ of right-continuous functions with left limits. More precisely, almost sure convergence of $\chi_n/n$ to the limiting mean will be established in terms of the uniform metric. Our proof relies on the Borel–Cantelli lemma to obtain a strong law of large numbers for each fixed t, which we then extend to the functional case. As for the methods of proof in other studies, [Reference Penrose22] and [Reference Yogeshwaran, Subag and Adler25] have established concentration inequalities that can lead to the desired (static) strong law of large numbers. Although these concentration inequalities can yield sharper bounds, a downside is that extra conditions must be imposed on the underlying density f, for example that f have bounded support. For this reason, we have adopted a different approach via the Borel–Cantelli lemma, by which one can prove $n^{-1}\big(\chi_n(t) - \mathbb{E}[\chi_n(t)]\big) \to 0$ a.s. by showing that the sum of the fourth moments is convergent. A relevant article taking an approach similar to ours is [Reference Goel, Trinh and Tsunoda11].

The second contribution of this paper is to show the weak convergence of the process

\begin{equation*}\bar{\chi}_n(t) := n^{-1/2}\big( \chi_n(t)-\mathbb{E}[\chi_n(t)] \big), \ \ \ t \ge 0,\end{equation*}

with respect to the Skorokhod $J_1$ -topology. Proving finite-dimensional weak convergence of $\bar{\chi}_n$ in conjunction with its tightness will allow us to obtain the desired convergence in $D[0, \infty)$ . Finite-dimensional convergence will be established via the Cramér–Wold device and Stein’s method, as in Theorem 2.4 in [Reference Penrose22], by adhering closely to the proof of Theorem 3.9 in the same source. In addition, the tightness will be proven via Theorem 13.5 in [Reference Billingsley2]. These functional limit theorems enable us to capture dynamic features of topological changes in $D[0,\infty)$ . The proofs for all results in this section are postponed to Section 4.

In order to obtain a clear picture of our limit theorems, it will be beneficial to start with some results on asymptotic moments of $\chi_n$ . Define for $k_1, k_2 \in \mathbb{N}_0$ , $t, s \ge 0$ , and a Borel subset A of $\mathbb{R}^d$ ,

\begin{equation*} \Psi_{k_1,k_2,A}(t,s) := \sum_{j=1}^{(k_1 \wedge k_2)+1} \psi_{j,k_1,k_2,A}(t,s),\end{equation*}

where $k_1 \wedge k_2 = \min\{ k_1, k_2 \}$ , and

\begin{align*}\psi_{j,k_1,k_2,A}(t,s) &:= \frac{\int_A f(x)^{k_1+k_2+2-j}\textrm{d}x}{j! (k_1+1-j)! (k_2+1-j)!} \\[3pt] &\quad \times\int_{(\mathbb{R}^d)^{k_1 + k_2+1 -j}} h_t^{k_1}(0,y_1,\dots,y_{k_1})\\[3pt] &\quad \times h_s^{k_2}(0,y_1,\dots, y_{j-1}, y_{k_1+1}, \dots, y_{k_1+k_2+1-j})\textrm{d}\textbf{y}.\end{align*}

Here we set $h_t^k(0,y_1,\dots,y_k)=1$ if $k=0$ , so that $\Psi_{0,0,A}(t,s)=\psi_{1,0,0,A}(t,s)=\int_Af(x)dx$ . In the sequel, we write $\Psi_{k_1,k_2}(t,s) := \Psi_{k_1,k_2,\mathbb{R}^d}(t,s)$ with $\psi_{j,k_1,k_2}(t,s) := \psi_{j,k_1,k_2,\mathbb{R}^d}(t,s)$ .

Proposition 3.1. For $t,s \ge 0$ and $A \subset \mathbb{R}^d$ open with $m(\partial A) = 0$ , we have

(3.1) \begin{align}n^{-1} \mathbb{E}[\chi_{n,A}(t)] &\to \sum_{k=0}^\infty (\!-\!1)^k \psi_{k+1, k, k, A}(t,t), \qquad \ n\to\infty, \end{align}

(3.2) \begin{align} \nonumber\\[-30pt]n^{-1} \textrm{Cov} \big( \chi_{n,A}(t), \chi_{n,A}(s) \big) &\to \sum_{k_1=0}^\infty \sum_{k_2=0}^\infty (\!-\!1)^{k_1+k_2} \Psi_{k_1,k_2,A}(t,s), \qquad \ n\to\infty, \end{align}

so that both of the right-hand sides are convergent for every such $A\subset \mathbb{R}^d$ .

We can now introduce the FSLLN for the process $\chi_n$ .

Theorem 3.1. (FSLLN for $\chi_n$ .) As $n\to\infty$ ,

\begin{equation*}\frac{\chi_n(t)}{n} \to \sum_{k=0}^\infty (\!-\!1)^k \psi_{k+1, k, k}(t,t) \ \ \text{a.s. in } D[0,\infty),\end{equation*}

where $D[0,\infty)$ is equipped with the uniform topology.

Before stating our functional central limit theorem (FCLT) for $\chi_n$ , let us define its limiting process. First define $(\mathcal H_k, \, k \in \mathbb{N}_0)$ as a family of zero-mean Gaussian processes on a generic probability space $(\Omega, \mathcal{F}, \mathbb{P})$ , with intra-process covariance

(3.3) \begin{equation}\mathbb{E}[\mathcal{H}_k(t)\mathcal{H}_k(s)] = \Psi_{k, k}(t, s), \end{equation}

and inter-process covariance

(3.4) \begin{equation}\mathbb{E}[\mathcal{H}_{k_1}(t)\mathcal{H}_{k_2}(s)] = \Psi_{k_1, k_2}(t, s), \end{equation}

for all $k, k_1, k_2 \in \mathbb{N}_0$ with $k_1 \neq k_2$ and $t, s \ge 0$ . In the proof of Proposition 3.1, the functions $\Psi_{k_1,k_2}(t,s)$ naturally appear in the covariance calculation of $\chi_n$ , which in turn implies that the covariance functions in (3.3) and (3.4) are well-defined. With this notation in mind, we now define the limiting Gaussian process for $\bar{\chi}_n$ as

(3.5) \begin{equation} \mathcal H(t) := \sum_{k=0}^\infty (\!-\!1)^k \mathcal H_k(t), \qquad \ t \ge 0,\end{equation}

so that

(3.6) \begin{equation} \mathbb{E}[\mathcal H(t)\mathcal H(s)] = \sum_{k_1=0}^\infty \sum_{k_2=0}^\infty (\!-\!1)^{k_1+k_2} \Psi_{k_1,k_2}(t,s), \ \ t, s \ge0.\end{equation}

Once again, Proposition 3.1 implies that the right-hand side of (3.6) can define the covariance functions of a limiting Gaussian process, since it is obtained as a (scaled) limit of the covariance functions of $\chi_n$ . In particular, since (3.6) is convergent, for every $t\ge 0$ , $\mathcal H(t)$ is definable in the $L^2(\Omega)$ sense. Note that the Euler characteristic in (2.3) and the process (3.5) exhibit similar structure, in the sense that $S_k\big( \mathcal{P}_n, r_n(t) \big)$ in (2.3) and $\mathcal H_k(t)$ both correspond to the spatial distribution of k-simplices.

Now, we proceed to stating the FCLT for $\chi_n$ .

Theorem 3.2. (FCLT for $\chi_n$ .) As $n \to \infty$ ,

\begin{equation*}\bar{\chi}_n \Rightarrow \mathcal H \ \ \textit{in } D[0,\infty),\end{equation*}

where $D[0,\infty)$ is equipped with the Skorokhod $J_1$ -topology. Furthermore, for every $0 < T < \infty$ , we have that $\big(\mathcal{H}(t), \, 0 \leq t \leq T\big)$ has a continuous version with Hölder continuous sample paths of any exponent $\gamma \in [0, 1/2)$ .

Remark 3.1. The results of Theorem 3.1 and Theorem 3.2 also hold for the Čech complex, in the case of the latter theorem only up to finite-dimensional weak convergence of $\bar{\chi}_n$ . The definition of a k-simplex of the Čech complex requires a non-empty intersection of ‘multiple’ closed balls. This makes it more difficult to establish the required tightness for the Čech complex. Specifically, obtaining bounds as in Lemma 4.2 seems much harder. If one were able to establish such a nice bound, the rest of the argument for tightness would essentially be the same as the Vietoris–Rips case.

Example 3.1. Consider a map $x \mapsto \sup_{0\le t \le 1}|x(t)|$ from D[0, 1] to $\mathbb{R}_+$ . This map is continuous on C[0, 1], the space of continuous functions on [0, 1]. Since the limits in Theorems 3.1 and 3.2 are both continuous, we get that as $n\to\infty$ ,

\begin{align*}&n^{-1} \sup_{0 \le t \le 1} |\chi_n(t)| \to \sup_{0 \le t \le 1}|\sum_{k=0}^\infty (\!-\!1)^k \psi_{k+1, k, k}(t,t)| \quad \ a.s., \\[3pt] &n^{-1/2} \sup_{0 \le t \le 1} \big| \chi_n (t) -\mathbb{E}[\chi_n(t)] \big| \Rightarrow \sup_{0 \le t \le 1} |\mathcal H(t)|.\end{align*}

In particular, the latter implies that, for sufficiently large n, the supremum of the mean-centered Euler characteristic process is approximated in distribution by $n^{1/2}\sup_{0 \le t \le 1}|\mathcal H(t)|$.

4. Proofs

We first deal with moment asymptotics of $\chi_n$, in Section 4.1. In Section 4.2 we prove the FSLLN in Theorem 3.1. Subsequently, we establish Theorem 3.2, the proof of which is divided into two parts, with the first part devoted to finite-dimensional weak convergence and the second to tightness. The proofs frequently appeal to Palm theory for Poisson processes for computing the moments of various Poisson functionals; a brief statement is given as Lemma 5.1 in the appendix. Finally, we verify Hölder continuity of the limiting Gaussian process $\mathcal H$, adhering closely to what is established for subgraph counting processes in Proposition 4.2 of [Reference Owada18].

For simplicity of description, we assume throughout this section that $ns_n^d=1$ . However, generalizing it to $ns_n^d\to1$ , $n\to\infty$ , is straightforward. In the following, we write $a\vee b:= \max\{ a,b\}$ and $a \wedge b := \min\{ a,b \}$ for $a, b \in \mathbb{R}$ .

4.1. Proof of moment asymptotics

Without loss of generality, we prove Proposition 3.1 only in the case $A=\mathbb{R}^d$. Throughout this section, let $\mathcal{Y}$, $\mathcal{Y}_1$, and $\mathcal{Y}_2$ denote collections of i.i.d. random points with density f. We begin with the following lemma.

Lemma 4.1.

  (i) For $t\ge 0$ we have, as $n\to\infty$,

    \begin{equation*}\frac{n^k}{(k+1)!}\, \mathbb{E} \big[ h_{r_n(t)}^k (\mathcal{Y}) \big] \to \psi_{k+1,k,k}(t,t).\end{equation*}
  (ii) For all $n \in \mathbb{N}$,

    \begin{equation*}n^k \mathbb{E} \big[ h_{r_n(t)}^k (\mathcal{Y}) \big] \le (a_t)^k,\end{equation*}
    where
    (4.1) \begin{equation} a_t := (2t)^d \theta_d \|\,f \|_\infty\end{equation}
    with $\theta_d=m\big( B(0,1) \big)$ , i.e., the volume of the unit ball in $\mathbb{R}^d$ .
  (iii) For $1\le j \le (k_1\wedge k_2)+1$, $k_1, k_2 \in \mathbb{N}_0$, and $t,s\ge 0$,

    \begin{equation*}\frac{n^{k_1+k_2+1-j}}{j! (k_1+1-j)! (k_2+1-j)!}\, \mathbb{E} \big[ h_{r_n(t)}^{k_1}(\mathcal{Y}_1)h_{r_n(s)}^{k_2}(\mathcal{Y}_2)\, \textbf{1} \big\{ |\mathcal{Y}_1 \cap \mathcal{Y}_2|=j \big\} \big] \to \psi_{j,k_1,k_2}(t,s)\end{equation*}
    as $n\to\infty$ .
  (iv) For all $n\in \mathbb{N}$,

    \begin{equation*}n^{k_1+k_2+1-j} \mathbb{E} \big[ h_{r_n(t)}^{k_1}(\mathcal{Y}_1)h_{r_n(s)}^{k_2}(\mathcal{Y}_2)\, \textbf{1} \big\{ |\mathcal{Y}_1 \cap \mathcal{Y}_2|=j \big\} \big] \le (a_{t\vee s})^{k_1+k_2+1-j}.\end{equation*}

Proof. We shall prove (iii) and (iv) only, since (i) and (ii) can be established by a similar and simpler argument. With the change of variables $x_1=x$ and $x_i = x+s_n y_{i-1}$, $i=2,\dots,k_1+k_2+2-j$, the left-hand side of (iii) equals

(4.2) \begin{align}&\frac{n^{k_1+k_2+1-j}}{j! (k_1+1-j)! (k_2+1-j)!}\, \int_{(\mathbb{R}^d)^{k_1+k_2+2-j}} h_{r_n(t)}^{k_1}(x_1,\dots,x_{k_1+1}) \notag \\&\qquad \qquad \qquad\qquad \qquad\qquad \times h_{r_n(s)}^{k_2}(x_1,\dots,x_j,x_{k_1+2}, \dots, x_{k_1+k_2+2-j}) \prod_{i=1}^{k_1+k_2+2-j} f(x_i)\textrm{d}\textbf{x} \notag \\&=\frac{(ns_n^d)^{k_1+k_2+1-j}}{j! (k_1+1-j)! (k_2+1-j)!}\, \int_{\mathbb{R}^d} \int_{(\mathbb{R}^d)^{k_1+k_2+1-j}} h_t^{k_1}(0,y_1,\dots,y_{k_1})\\ &\quad \times h_s^{k_2}(0,y_1,\dots,y_{j-1},y_{k_1+1}, \dots, y_{k_1+k_2+1-j}) f(x)\prod_{i=1}^{k_1+k_2+1-j} f(x+s_ny_i) \textrm{d}\textbf{y}\,\textrm{d}x. \notag \end{align}

Recall that $ns_n^d=1$ and note that $\prod_{i=1}^{k_1+k_2+1-j}f(x+s_ny_i) \to f(x)^{k_1+k_2 +1-j}$ , $n\to\infty$ , holds under the integral sign because of the Lebesgue differentiation theorem. Thus, (4.2) converges to $\psi_{j,k_1,k_2}(t,s)$ as $n\to\infty$ .

Now let us turn to proving (iv). Without loss of generality, we may assume $s\le t$ . After the same change of variables as in (iii), the left-hand side of (iv) is bounded by

(4.3) \begin{align} \big( \|\,f\|_\infty \big)^{k_1+k_2+1-j}&\int_{(\mathbb{R}^d)^{k_1+k_2+1-j}} h_t^{k_1}(0,y_1,\dots,y_{k_1})\nonumber\\[3pt] &\quad \times h_s^{k_2}(0,y_1,\dots,y_{j-1},y_{k_1+1}, \dots, y_{k_1+k_2+1-j}) \textrm{d}\textbf{y}.\end{align}

By the definition of the indicators $h_t^{k_1}$ , $h_s^{k_2}$ , each of the $y_i$ in (4.3) must have distance at most 2t from the origin. Therefore, (4.3) can be bounded by

\begin{align*}\big( \|\,f\|_\infty \big)^{k_1+k_2+1-j} m\big( B(0,2t) \big)^{k_1+k_2+1-j} = (a_t)^{k_1+k_2+1-j}.\end{align*}

Proof of Proposition 3.1. We prove only (3.2), as the proof techniques for (3.1) are very similar to those for (3.2). Specifically, we shall make use of (ii), (iii), and (iv) of Lemma 4.1. We start by writing

(4.4) \begin{align}n^{-1} \textrm{Cov} \big( \chi_n(t), \chi_n(s) \big) &= n^{-1} \mathbb{E} \bigg[ \sum_{k_1=0}^\infty\sum_{k_2=0}^\infty (\!-\!1)^{k_1+k_2} S_{k_1}\big(\mathcal{P}_n, r_n(t)\big) S_{k_2}\big(\mathcal{P}_n, r_n(s)\big) \bigg] \\ &\qquad- n^{-1} \mathbb{E}\bigg[ \sum_{k=0}^\infty(\!-\!1)^k S_k\big(\mathcal{P}_n, r_n(t)\big) \bigg]\mathbb{E}\bigg[ \sum_{k=0}^\infty(\!-\!1)^k S_k\big(\mathcal{P}_n, r_n(s)\big) \bigg].\notag\end{align}

Next, Palm theory for Poisson processes (Lemma 5.1(ii)), along with the bounds given in Parts (ii) and (iv) of Lemma 4.1, yields that

\begin{align*}&\mathbb{E} \Big[ S_{k_1}\big(\mathcal{P}_n, r_n(t)\big) S_{k_2}\big(\mathcal{P}_n, r_n(s)\big) \Big] \\[3pt] &=\sum_{j=0}^{(k_1 \wedge k_2)+1} \mathbb{E}\bigg[ \sum_{\mathcal{Y}_1\subset \mathcal{P}_n}\sum_{\mathcal{Y}_2\subset \mathcal{P}_n} h_{r_n(t)}^{k_1}(\mathcal{Y}_1)h_{r_n(s)}^{k_2}(\mathcal{Y}_2)\, \textbf{1}\big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=j \big\} \bigg] \\[3pt] &= \frac{n^{k_1+k_2 +2}}{(k_1+1)!(k_2+1)!}\, \mathbb{E} \big[ h_{r_n(t)}^{k_1}(\mathcal{Y}_1) \big]\mathbb{E} \big[ h_{r_n(s)}^{k_2}(\mathcal{Y}_2) \big] \\[3pt] &\qquad + \sum_{j=1}^{(k_1 \wedge k_2)+1}\frac{n^{k_1+k_2+2-j}}{j!(k_1+1-j)!(k_2+1-j)!}\, \mathbb{E}\Big[ h_{r_n(t)}^{k_1}(\mathcal{Y}_1) h_{r_n(s)}^{k_2}(\mathcal{Y}_2) \textbf{1}\big\{{|\mathcal{Y}_1 \cap \mathcal{Y}_2| = j}\big\} \Big] \\[3pt] &\le \frac{n^2(a_t)^{k_1}(a_s)^{k_2}}{(k_1+1)!(k_2+1)!} + \sum_{j=1}^{(k_1 \wedge k_2)+1}\frac{n(a_{t\vee s})^{k_1+k_2+1-j}}{j!(k_1+1-j)!(k_2+1-j)!}.\end{align*}

Here it is straightforward to see that

\begin{align*}&\sum_{k=0}^\infty \frac{(a_t)^k}{(k+1)!} < e^{a_t} <\infty, \ \ \sum_{k_1=0}^\infty \sum_{k_2=0}^\infty \sum_{j=1}^{(k_1 \wedge k_2)+1}\frac{(a_{t\vee s})^{k_1+k_2+1-j}}{j!(k_1+1-j)!(k_2+1-j)!} < 2 e^{3a_{t\vee s}} < \infty.\end{align*}
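The first bound has the closed form $\sum_{k\ge 0} a^k/(k+1)! = (\textrm{e}^a-1)/a$, which is strictly smaller than $\textrm{e}^a$ for all $a>0$; a quick numerical sanity check (ours, with an arbitrary stand-in value for $a_t$):

```python
import math

# sum_{k>=0} a^k/(k+1)! telescopes to (e^a - 1)/a, and (e^a - 1)/a < e^a
# for a > 0 since 1 - e^{-a} < a.
a = 1.7  # arbitrary stand-in for a_t
partial = sum(a ** k / math.factorial(k + 1) for k in range(60))
assert abs(partial - math.expm1(a) / a) < 1e-12
assert partial < math.exp(a)
```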

So Fubini’s theorem is applicable to the first term in (4.4). Repeating the same argument for the second term of (4.4), one can get

\begin{align*}n^{-1} \textrm{Cov} \big( \chi_n(t), \chi_n(s) \big) &= \sum_{k_1=0}^\infty \sum_{k_2=0}^\infty(\!-\!1)^{k_1+k_2} \sum_{j=1}^{(k_1 \wedge k_2)+1}\frac{n^{k_1+k_2+1-j}}{j!(k_1+1-j)!(k_2+1-j)!}\, \\[3pt]& \qquad \qquad \qquad \times \mathbb{E}\Big[ h_{r_n(t)}^{k_1}(\mathcal{Y}_1) h_{r_n(s)}^{k_2}(\mathcal{Y}_2) \textbf{1}\big\{{|\mathcal{Y}_1 \cap \mathcal{Y}_2| = j}\big\} \Big].\end{align*}

By virtue of Lemma 4.1(iii)–(iv), the dominated convergence theorem implies that the last expression converges to $\sum_{k_1=0}^\infty \sum_{k_2=0}^\infty(\!-\!1)^{k_1+k_2} \Psi_{k_1,k_2}(t,s)$ as required.

4.2. Proof of FSLLN

To prove the FSLLN, we first establish a result that allows us to extend a ‘pointwise’ strong law for fixed t to a functional one, provided the processes are nondecreasing and the limit is deterministic and continuous. We emphasize again that our approach in this section improves on existing results from the viewpoint of the assumptions on the density f. Unlike existing results, such as those of [Reference Yogeshwaran, Subag and Adler25], ours do not require f to have compact and convex support.

Proposition 4.1. Let $(X_n, \, n \in \mathbb{N})$ be a sequence of random elements in $D[0, \infty)$ with nondecreasing sample paths. Suppose $\lambda \colon [0,\infty) \to \mathbb{R}$ is a continuous and nondecreasing function. If we have

(4.5) \begin{equation} X_n(t) \to \lambda(t), \qquad \ n\to\infty, \ \ \textit{a.s.}\end{equation}

for every $t\ge0$ , then it follows that

\begin{equation*} \sup_{t \in [0,T]} |X_n(t) - \lambda(t)| \to 0, \qquad \ n\to\infty, \ \ \textit{a.s.}\end{equation*}

for every $0\le T<\infty$ . Hence, it holds that $X_n \to \lambda$ a.s. in $D[0, \infty)$ endowed with the uniform topology.

Proof. Fix $0\le T<\infty$ . Note that $\lambda$ is uniformly continuous on [0, T]. Given $\epsilon>0$ , choose $k=k(\epsilon)\in \mathbb{N}$ such that for all $s,t\in [0,T]$ ,

(4.6) \begin{equation} |s-t|\le 1/k \quad \text{implies} \quad \big|\lambda(s)-\lambda(t) \big| < \epsilon.\end{equation}

Since $X_n(t)$ and $\lambda(t)$ are both nondecreasing in t, we see that

\begin{align*}&\sup_{t\in [0,T]} \big| X_n(t)-\lambda(t) \big| =\max_{1\le i \le k} \sup_{t\in [ (i-1)T/k, \, iT/k ]} \big| X_n(t)-\lambda(t) \big| \\[3pt]&\qquad \le \max_{1\le i \le k} \bigg\{ \Big( X_n(iT/k)-\lambda((i-1)T/k) \Big) \vee \Big( \lambda(iT/k)-X_n((i-1)T/k) \Big) \bigg\} \\[3pt]&\qquad \le \max_{1\le i \le k} \bigg\{ \Big( X_n(iT/k)-\lambda(iT/k) \Big) \vee \Big( \lambda((i-1)T/k)-X_n((i-1)T/k) \Big) \bigg\} + \epsilon \\[3pt]&\qquad \le \max_{0\le i \le k} \Big| X_n(iT/k) -\lambda(iT/k) \Big| + \epsilon,\end{align*}

where the second inequality follows from (4.6). By the SLLN in (4.5), the last expression tends to $\epsilon$ a.s. as $n\to\infty$ . Since $\epsilon$ is arbitrary, this completes the proof.
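Proposition 4.1 is a Dini-type upgrade from pointwise to uniform convergence. A toy deterministic illustration (ours, not from the paper) takes $X_n(t) = \lfloor nt \rfloor/n$, which is nondecreasing in t and converges pointwise to the continuous nondecreasing limit $\lambda(t) = t$:

```python
# X_n(t) = floor(n t)/n is nondecreasing in t and converges pointwise to
# lambda(t) = t; Proposition 4.1 upgrades this to uniform convergence on
# every [0, T], and here the uniform error is at most 1/n.
def X_n(n, t):
    return int(n * t) / n  # floor(n t)/n for t >= 0

grid = [i / 500 for i in range(501)]  # a fine grid of [0, 1]
for n in (10, 100, 1000):
    sup_err = max(abs(X_n(n, t) - t) for t in grid)
    assert sup_err <= 1 / n + 1e-12
```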

Proof of Theorem 3.1. Since (2.3) is a.s. represented as a sum of finitely many terms, it can be split into two parts:

\begin{equation*}\chi_n(t) = \sum_{k=0}^\infty S_{2k}\big( \mathcal{P}_n, r_n(t) \big) - \sum_{k=0}^\infty S_{2k+1}\big( \mathcal{P}_n, r_n(t) \big) =: \chi_n^{(1)}(t) - \chi_n^{(2)}(t) \quad \text{a.s.}\end{equation*}

Denoting by K(t) the limit of (3.1) with $A=\mathbb{R}^d$ , we decompose it in a way similar to the above:

\begin{equation*}K(t)=\sum_{k=0}^\infty \psi_{2k+1, 2k, 2k}(t,t) - \sum_{k=0}^\infty \psi_{2k+2, 2k+1, 2k+1}(t,t) =: K^{(1)}(t)-K^{(2)}(t).\end{equation*}

Our final goal is to prove that for every $0<T<\infty$ ,

\begin{equation*}\sup_{0\le t \le T}\Big| \frac{\chi_n(t)}{n} - K(t) \Big|\to 0, \qquad n\to\infty, \ \ \text{a.s.},\end{equation*}

which is clearly implied by

\begin{equation*} \sup_{0\le t \le T}\Big| \frac{\chi_n^{(i)}(t)}{n} - K^{(i)}(t) \Big|\to 0, \qquad n\to\infty, \ \ \text{a.s.}\end{equation*}

for each $i=1,2$ . As $\chi_n^{(i)}(t)/n$ and $K^{(i)}(t)$ satisfy the conditions of Proposition 4.1, it suffices to show that

\begin{equation*}\frac{\chi_n^{(i)}(t)}{n} \to K^{(i)}(t), \ \ n\to\infty, \ \ \text{a.s.}\end{equation*}

for every $t \geq 0$ . We will prove only the case $i=1$ , and henceforth omit the superscript (1) from $\chi_n^{(1)}(t)$ and $K^{(1)}(t)$ . It then suffices to show that

(4.7) \begin{equation}n^{-1} | \chi_n(t) - \mathbb{E} [\chi_n(t)] | \to 0, \qquad \ n\to\infty, \ \ \text{a.s.}, \end{equation}

and

(4.8) \begin{equation}\big| n^{-1}\mathbb{E}[\chi_n(t)] - K(t) \big| \to 0, \qquad \ n\to\infty. \end{equation}

First we will deal with (4.8). It follows from the customary change of variables as in the proof of Lemma 4.1 that

\begin{align*}& \big| n^{-1}\mathbb{E}[\chi_n(t)] - K(t) \big| \\&\qquad= \bigg| \sum_{k=1}^\infty \frac{1}{(2k+1)!}\, \int_{\mathbb{R}^d} \int_{(\mathbb{R}^d)^{2k}} h_{t}^{2k}(0,y_1,\dots,y_{2k}) \\& \qquad \qquad \qquad \qquad \qquad \times f(x)\Big( \prod_{i=1}^{2k}f(x+s_ny_i)-f(x)^{2k} \Big) \textrm{d}\textbf{y}\, \textrm{d}x \bigg| \\&\qquad\le \sum_{k=1}^\infty \frac{1}{(2k+1)!}\, \int_{\mathbb{R}^d} \int_{(\mathbb{R}^d)^{2k}} h_{t}^{2k}(0,y_1,\dots,y_{2k}) f(x) \Big| \prod_{i=1}^{2k}f(x+s_ny_i)-f(x)^{2k} \Big| \textrm{d}\textbf{y}\, \textrm{d}x.\end{align*}

As in the proof of Lemma 4.1(ii) or (iv), one can show that the last term above is bounded by $2\sum_{k=1}^\infty (a_t)^{2k}/(2k+1)! < \infty$ (where $a_t$ is defined in (4.1)). Thus, the dominated convergence theorem completes the proof of (4.8).

Now, let us turn our attention to (4.7). From the Borel–Cantelli lemma it suffices to show that, for every $\epsilon>0$ ,

\begin{equation*}\sum_{n=1}^\infty \mathbb{P} \Big( \big| \chi_n(t)-\mathbb{E}[\chi_n(t)] \big| >\epsilon n\Big) <\infty.\end{equation*}

By Markov’s inequality, the left-hand side above is bounded by

\begin{equation*}\frac{1}{\epsilon^4}\sum_{n=1}^\infty \frac{1}{n^4}\mathbb{E} \Big[ \big( \chi_n(t)-\mathbb{E}[\chi_n(t)] \big)^4 \Big].\end{equation*}

Since $\sum_n n^{-2}<\infty$ , we only need to show that

(4.9) \begin{equation} \limsup_{n\to\infty}\frac{1}{n^2} \mathbb{E} \Big[ \big( \chi_n(t)-\mathbb{E}[\chi_n(t)] \big)^4 \Big] <\infty.\end{equation}

Applying Fubini’s theorem as in the proof of Proposition 3.1, along with Hölder’s inequality, we get that

\begin{align*}&\frac{1}{n^2}\, \mathbb{E} \Big[ \big( \chi_n(t)-\mathbb{E}[\chi_n(t)] \big)^4 \Big] \\&=\frac{1}{n^2} \sum_{(k_1,\dots,k_4)\in \mathbb{N}^4} \mathbb{E} \bigg[ \prod_{i=1}^4 \Big( S_{2k_i}\big( \mathcal{P}_n, r_n(t) \big) -\mathbb{E}\big[S_{2k_i} \big( \mathcal{P}_n, r_n(t) \big) \big]\Big) \bigg] \\&\le \bigg[ \sum_{k=1}^\infty \bigg\{\frac{1}{n^2} \mathbb{E} \Big[ \Big( S_{2k}\big( \mathcal{P}_n, r_n(t) \big) - \mathbb{E} \big[ S_{2k}\big( \mathcal{P}_n, r_n(t) \big) \big] \Big)^4 \Big] \bigg\}^{1/4} \bigg]^4.\end{align*}

Now, (4.9) can be obtained if we show that

(4.10) \begin{equation} \sum_{k=1}^\infty \bigg\{ \limsup_{n\to\infty}\frac{1}{n^2}\, \mathbb{E} \Big[ \Big( S_{2k}\big( \mathcal{P}_n, r_n(t) \big) - \mathbb{E} \big[ S_{2k}\big( \mathcal{P}_n, r_n(t) \big) \big] \Big)^4 \Big] \bigg\}^{1/4} <\infty.\end{equation}

From this point on, let us introduce the shorthand $S_{2k}:= S_{2k}\big( \mathcal{P}_n, r_n(t) \big)$ . In order to find an appropriate upper bound for (4.10), using the binomial expansion we write

(4.11) \begin{equation} \mathbb{E} \big[ \big(S_{2k}-\mathbb{E}[S_{2k}]\big)^4 \big] = \sum_{\ell=0}^4 \binom{4}{\ell}(\!-\!1)^\ell \mathbb{E}[S_{2k}^\ell] \big( \mathbb{E}[S_{2k}] \big)^{4-\ell}.\end{equation}

For every $\ell\in \{ 0,\dots,4 \}$ , one can write $\mathbb{E}[S_{2k}^\ell] \big( \mathbb{E}[S_{2k}] \big)^{4-\ell}$ as

(4.12) \begin{equation} \mathbb{E} \bigg[ \sum_{\mathcal{Y}_1 \subset \mathcal{P}_n^{(1)}}\sum_{\mathcal{Y}_2 \subset \mathcal{P}_n^{(2)}}\sum_{\mathcal{Y}_3 \subset \mathcal{P}_n^{(3)}}\sum_{\mathcal{Y}_4 \subset \mathcal{P}_n^{(4)}} \prod_{i=1}^4 h_{r_n(t)}^{2k}(\mathcal{Y}_i) \bigg],\end{equation}

where for every $i, j \in \{ 1,\dots,4 \}$ , we have either that $\mathcal{P}_n^{(i)}=\mathcal{P}_n^{(j)}$ or that $\mathcal{P}_n^{(i)}$ is an independent copy of $\mathcal{P}_n^{(j)}$ . If $|\mathcal{Y}_1 \cup \cdots \cup \mathcal{Y}_4|=8k+4$ , i.e., $\mathcal{Y}_1, \dots, \mathcal{Y}_4$ do not have any common elements, then Palm theory (Lemma 5.1) shows that (4.12) is equal to $\big( \mathbb{E}[S_{2k}] \big)^4$ , which grows at the rate of $O(n^4)$ (see Lemma 4.1(i)). In this case, the total contribution to (4.11) disappears, because

\begin{equation*}\sum_{\ell=0}^4 \binom{4}{\ell}(\!-\!1)^\ell \big( \mathbb{E}[S_{2k}] \big)^4=0.\end{equation*}

Suppose next that $|\mathcal{Y}_1 \cup \cdots \cup \mathcal{Y}_4|=8k+3$ ; that is, there is exactly one common element between $\mathcal{Y}_i$ and $\mathcal{Y}_j$ for some $i\neq j$ , with no other overlaps. Then (4.12) is equal to

\begin{equation*}\mathbb{E} \bigg[ \sum_{\mathcal{Y}_1\subset \mathcal{P}_n}\sum_{\mathcal{Y}_2\subset \mathcal{P}_n} h_{r_n(t)}^{2k}(\mathcal{Y}_1)h_{r_n(t)}^{2k}(\mathcal{Y}_2)\, \textbf{1} \{ |\mathcal{Y}_1 \cap \mathcal{Y}_2|=1 \} \bigg] \big( \mathbb{E}[S_{2k}] \big)^2.\end{equation*}

Although the above term grows at the rate $O(n^3)$ (see Lemma 4.1, Parts (i) and (iii)), the overall contribution to (4.11) is again canceled. This is because

\begin{align*}&\bigg\{ \binom{4}{2}(\!-\!1)^2+\binom{4}{3}(\!-\!1)^3\binom{3}{2}+\binom{4}{4}(\!-\!1)^4\binom{4}{2} \bigg\} \\&\quad \times \mathbb{E} \bigg[ \sum_{\mathcal{Y}_1\subset \mathcal{P}_n}\sum_{\mathcal{Y}_2\subset \mathcal{P}_n} h_{r_n(t)}^{2k}(\mathcal{Y}_1)h_{r_n(t)}^{2k}(\mathcal{Y}_2)\, \textbf{1} \{ |\mathcal{Y}_1 \cap \mathcal{Y}_2|=1 \} \bigg] \big( \mathbb{E}[S_{2k}] \big)^2=0.\end{align*}
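Both cancellations rest on elementary binomial identities, which can be brute-forced (an illustration, not part of the argument):

```python
from math import comb

# disjoint case: the coefficient of (E[S_2k])^4 in (4.11) vanishes
assert sum(comb(4, l) * (-1) ** l for l in range(5)) == 0

# one shared point: comb(4,2) configurations arise at l = 2,
# comb(4,3)*comb(3,2) at l = 3, comb(4,4)*comb(4,2) at l = 4;
# the signed sum is 6 - 12 + 6 = 0
assert comb(4, 2) * (-1) ** 2 \
    + comb(4, 3) * (-1) ** 3 * comb(3, 2) \
    + comb(4, 4) * (-1) ** 4 * comb(4, 2) == 0
```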

By the above discussion, we only need to consider the case where there are at least two common elements within $\mathcal{Y}_1, \dots, \mathcal{Y}_4$ . Among many such cases, let us deal with a specific term,

(4.13) \begin{align}&n^{-2} \mathbb{E} \bigg[ \sum_{\mathcal{Y}_1\subset \mathcal{P}_n}\sum_{\mathcal{Y}_2\subset \mathcal{P}_n}\sum_{\mathcal{Y}_3\subset \mathcal{P}_n}\sum_{\mathcal{Y}_4\subset \mathcal{P}_n} \prod_{i=1}^4 h_{r_n(t)}^{2k}(\mathcal{Y}_i)\,\\ &\qquad \qquad \qquad \times \textbf{1} \big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=\ell_1, \,|\mathcal{Y}_3\cap \mathcal{Y}_4|=\ell_2, \, \big| (\mathcal{Y}_1\cup \mathcal{Y}_2)\cap (\mathcal{Y}_3\cup \mathcal{Y}_4) \big|=0 \big\} \bigg], \notag \end{align}

where $\ell_1, \ell_2 \in \{ 1,\dots, 2k+1 \}$ . Palm theory allows us to write (4.13) as

(4.14) \begin{equation} \prod_{i=1}^2 \frac{n^{4k+1-\ell_i}}{\ell_i! \big( (2k+1-\ell_i)! \big)^2}\, \mathbb{E}\Big[ h_{r_n(t)}^{2k}(\mathcal{Y}_1)h_{r_n(t)}^{2k}(\mathcal{Y}_2)\, \textbf{1} \{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=\ell_i \} \Big] .\end{equation}

By Lemma 4.1(iv) and the fact that $\ell! (2k+1-\ell)! \geq k!$ for any $\ell \in \{ 1,\dots,2k+1 \}$ , one can bound (4.14) by

\begin{equation*}\prod_{i=1}^2 \frac{(a_t)^{4k+1-\ell_i}}{\ell_i ! \big( (2k+1-\ell_i)! \big)^2} \le \frac{(a_t)^{8k+2-\ell_1-\ell_2}}{k!}.\end{equation*}

Now, the ratio test shows that

\begin{equation*}\sum_{k=1}^\infty \bigg\{ \frac{(a_t)^{8k+2-\ell_1-\ell_2}}{k!} \bigg\}^{1/4} < \infty\end{equation*}

as desired. Notice that all the cases except (4.13) can be handled in a very similar way, and so (4.10) follows.
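The ratio test is quick to verify numerically: the ratio of consecutive terms equals $\big(a_t^{8}/(k+1)\big)^{1/4} = a_t^{2}/(k+1)^{1/4} \to 0$. A sketch in log space, with $a_t = 3$ as a sample value (an assumption for illustration, not a value from the paper):

```python
import math

def log_term(k, a, l1=1, l2=1):
    # log of {a^(8k + 2 - l1 - l2) / k!}^{1/4}
    return 0.25 * ((8 * k + 2 - l1 - l2) * math.log(a) - math.lgamma(k + 1))

a = 3.0
# successive log-differences equal 2*log(a) - 0.25*log(k + 1), which
# decreases to -infinity, so the series converges by the ratio test
diffs = [log_term(k + 1, a) - log_term(k, a) for k in (10, 100, 10_000, 10 ** 8)]
assert all(d2 < d1 for d1, d2 in zip(diffs, diffs[1:]))
assert diffs[-1] < 0
# the log-difference matches the closed form above
assert math.isclose(log_term(11, a) - log_term(10, a),
                    2 * math.log(a) - 0.25 * math.log(11))
```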

4.3. Proof of finite-dimensional convergence in Theorem 3.2

Proof of finite-dimensional convergence in Theorem 3.2. Throughout the proof, $C^*$ denotes a generic positive constant that may change from line to line, and even within a line. Recall (2.4), and define $\bar{\chi}_{n,A}(t)$ analogously to $\bar{\chi}_n(t)$ by mean-centering and scaling by $n^{-1/2}$ . We first consider the case where A is an open and bounded subset of $\mathbb{R}^d$ with $m(\partial A) = 0$ .

By the Cramér–Wold device, it suffices to establish weak convergence of $\sum_{i=1}^m a_i \bar{\chi}_n(t_i)$ for every $0 <t_1 < \cdots <t_m$ , $m \in \mathbb{N}$ , and $a_i\in \mathbb{R}$ , $i=1,\dots,m$ . Our proof exploits Stein’s normal approximation method in Theorem 2.4 of [Reference Penrose22]. Let $(Q_{j,n}, \, j \ge 1)$ be an enumeration of disjoint subsets of $\mathbb{R}^d$ congruent to $(0,r_n(t_m)]^d$ , such that $\mathbb{R}^d = \bigcup_{j=1}^\infty Q_{j,n}$ . Let $H_n = \{j \in \mathbb{N}\,{:}\, Q_{j,n} \cap A \neq \emptyset\}$ . Define

\begin{equation*}\xi_{j,n} := \sum_{k=0}^{\infty} (\!-\!1)^k \sum_{\mathcal{Y} \subset \mathcal{P}_n} \sum_{i=1}^m a_i h_{r_n(t_i)}^k(\mathcal{Y})\textbf{1} \big\{\text{LMP}(\mathcal{Y}) \in A \cap Q_{j,n}\big\},\end{equation*}

and also

\begin{equation*}\bar{\xi}_{j,n} := \frac{\xi_{j,n} - \mathbb{E}[\xi_{j,n}]}{\sqrt{\textrm{Var}\big(\sum_{i=1}^m a_i \chi_{n,A}(t_i)\big)}}.\end{equation*}

Then, we have $\sum_{i=1}^m a_i \chi_{n,A}(t_i) = \sum_{j \in H_n} \xi_{j,n}$ .

Now, we define $H_n$ to be the vertex set of a dependency graph (see Section 2.1 of [Reference Penrose22] for the formal definition) for the random variables $(\bar{\xi}_{j,n}, \, j \in H_n)$ by setting $j \sim {j}^{\prime}$ if and only if the condition

\begin{equation*}\inf\big\{ \|{x-y}\|\,{:}\, x \in Q_{j,n}, \, y \in Q_{j^{\prime}, n}\big\} \leq 4r_n(t_m)\end{equation*}

is satisfied. This is because $\xi_{j,n}$ and $\xi_{j', n}$ become independent whenever $j\sim j'$ fails to hold. Now we must ensure that the other conditions of Theorem 2.4 in [Reference Penrose22] are satisfied with respect to the dependency graph $(H_n, \sim)$ . First, $\sum_{j\in H_n}\bar{\xi}_{j,n}$ is a zero-mean random variable with unit variance. We know that $|H_n| = O(s_n^{-d})$ as A is bounded. Furthermore, the maximum degree of any vertex of $H_n$ is uniformly bounded by a positive and finite constant. Let Z denote a standard normal random variable. Then the aforementioned theorem implies that

(4.15) \begin{align}&\Bigl|\, \mathbb{P}\bigl( \sum_{j \in H_n} \bar{\xi}_{j,n} \leq x \bigr) - \mathbb{P}(Z \leq x) \, \Bigr| \leq C^*\left( \sqrt{s_n^{-d} \max_j \, \mathbb{E} \bigl[ |\bar{\xi}_{j,n}|^3 \bigr]} + \sqrt{s_n^{-d} \max_j\, \mathbb{E} \bigl[ |\bar{\xi}_{j,n}|^4 \bigr]} \right) \notag\\&\quad \leq C^*\left( \sqrt{s_n^{-d}n^{-3/2} \max_j \, \mathbb{E} \bigl[ |\xi_{j,n}-\mathbb{E}[\xi_{j,n}]|^3 \bigr]} + \sqrt{s_n^{-d} n^{-2} \max_j\, \mathbb{E} \bigl[ |\xi_{j,n} -\mathbb{E}[ \xi_{j,n}]|^4 \bigr]} \right)\!, \end{align}

where the second inequality follows from Proposition 3.1, which implies that $\textrm{Var} \big( \sum_{i=1}^m a_i \chi_{n,A}(t_i) \big)$ grows like n up to multiplicative constants. Minkowski’s inequality implies that

\begin{align*}\Big (\mathbb{E} \bigl[ |\xi_{j,n}-\mathbb{E}[\xi_{j,n}]|^p \bigr]\Big)^{1/p} \leq \big(\mathbb{E}\big[|\xi_{j,n}|^p\big]\big)^{1/p} + \mathbb{E}\big[|\xi_{j,n}|\big].\end{align*}

Recall that for fixed $\mathcal{Y} \subset \mathbb{R}^d$ , $h^{k}_t(\mathcal{Y})$ is nondecreasing in t. Then, we have that

\begin{align*}|\xi_{j,n}| &\leq \sum_{k=0}^{\infty} \sum_{\mathcal{Y} \subset \mathcal{P}_n} \sum_{i=1}^m |a_i | h^{k}_{r_n(t_i)}(\mathcal{Y})\textbf{1}\big\{{\text{LMP}(\mathcal{Y}) \in A \cap Q_{j,n}}\big\} \\[3pt]&\leq C^* \sum_{k=0}^{\infty} \sum_{\mathcal{Y} \subset \mathcal{P}_n} h^k_{r_n(t_m)}(\mathcal{Y}) \textbf{1}\big\{{\text{LMP}(\mathcal{Y}) \in A \cap Q_{j,n}}\big\} \\[3pt]&\leq C^* \sum_{k=0}^{\infty} \dbinom{\mathcal{P}_n\big(\text{Tube}(Q_{j,n}, 2r_n(t_m))\big)}{k+1} \\[3pt]&\leq C^* \cdot 2^{\mathcal{P}_n(\text{Tube}(Q_{j,n}, \, 2r_n(t_m)))},\end{align*}

where

\begin{equation*}\text{Tube}\big(Q_{j,n}, 2r_n(t_m)\big) = \Big\{ x \in \mathbb{R}^d\,{:}\, \inf_{y \in Q_{j,n}} \|{x-y}\| \leq 2r_n(t_m) \Big\}.\end{equation*}

By the assumption $ns_n^d = 1$ , one can easily show that $\mathcal{P}_n\big(\text{Tube}(Q_{j,n}, \, 2r_n(t_m))\big)$ is stochastically dominated by a Poisson random variable with positive and finite parameter, which does not depend on j and n. Denote such a Poisson random variable by Y. Then, for $p=3,4$ ,

\begin{equation*}\max_j \mathbb{E} \Big[ \big| \xi_{j,n}-\mathbb{E}[\xi_{j,n}] \big|^p \Big] \le C^*\Big[ \big( \mathbb{E}[2^{pY}] \big)^{1/p} + \mathbb{E}[2^Y]\Big]^p <\infty.\end{equation*}
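Finiteness of $\mathbb{E}[2^{pY}]$ follows from the Poisson probability generating function, $\mathbb{E}[z^Y]=e^{\lambda(z-1)}$ for any $z>0$. A quick numerical confirmation, taking $\lambda=0.5$ as a sample parameter (the actual dominating mean is not specified in this excerpt):

```python
import math

def poisson_pgf(z, lam, terms=400):
    # truncated series for E[z^Y] = sum_k z^k e^{-lam} lam^k / k!,
    # accumulating the terms iteratively to avoid huge factorials
    total, term = 0.0, math.exp(-lam)
    for k in range(terms):
        total += term
        term *= z * lam / (k + 1)
    return total

lam = 0.5
for p in (1, 3, 4):
    closed = math.exp(lam * (2 ** p - 1))        # E[2^{pY}] via the pgf
    assert math.isclose(poisson_pgf(2 ** p, lam), closed, rel_tol=1e-9)
```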

Referring back to (4.15) and noting $ns_n^d = 1$ , we can see that

\begin{equation*}\Bigl|\, \mathbb{P}\bigl( \sum_{j \in H_n} \bar{\xi}_{j,n} \leq x \bigr) - \mathbb{P}(Z \leq x) \, \Bigr| \le C^* \Big( \sqrt{s_n^{-d}n^{-3/2}} + \sqrt{s_n^{-d}n^{-2}}\Big) = O(n^{-1/4}) \to 0, \qquad n\to\infty,\end{equation*}

which implies that $\sum_{j\in H_n}\bar{\xi}_{j,n}\Rightarrow \mathcal N(0,1)$ as $n\to\infty$ ; equivalently,

\begin{equation*}\sum_{i=1}^m a_i \bar{\chi}_{n,A} (t_i) \Rightarrow \mathcal N (0,\Sigma_A), \qquad n\to\infty,\end{equation*}

where

\begin{equation*}\Sigma_A := \sum_{i=1}^m\sum_{j=1}^m a_i a_j \sum_{k_i=0}^{\infty} \sum_{k_j=0}^{\infty} (\!-\!1)^{k_i + k_j} \Psi_{k_i, k_j, A}(t_i, t_j).\end{equation*}

Subsequently we claim that

\begin{equation*}\sum_{i=1}^m a_i \bar{\chi}_n(t_i) \Rightarrow \mathcal N (0,\Sigma_{\mathbb{R}^d}), \qquad n\to\infty,\end{equation*}

which will complete the proof. To show this, take $A_K=(\!-K,K)^d$ for $K>0$ . It then suffices to verify that

\begin{align*}&\mathcal N(0,\Sigma_{A_K})\Rightarrow \mathcal N (0,\Sigma_{\mathbb{R}^d}), \qquad K \to\infty,\end{align*}

and for each $t\ge0$ and $\epsilon>0$ ,

\begin{align*}&\lim_{K\to\infty}\limsup _{n\to\infty} \mathbb{P} \Big( \big| \bar{\chi}_{n}(t)- \bar{\chi}_{n,A_K}(t) \big| > \epsilon \Big)=0.\end{align*}

The former condition is obvious from the fact that $\Sigma_{A_K}\to \Sigma_{\mathbb{R}^d}$ as $K\to\infty$ . The latter is also a direct consequence of Proposition 3.1, together with Chebyshev’s inequality and the fact that $\chi_n(t)-\chi_{n,A_K}(t)=\chi_{n,\mathbb{R}^d \setminus A_K}(t)$ .

4.4. Proof of tightness in Theorem 3.2

Before we begin the proof, we record a few more useful properties of $h_t^{k}$ . For $0 \le s <t <\infty$ , we define

\begin{equation*}h_{t,s}^{k}(\mathcal{Y}) = h_t^{k}(\mathcal{Y}) - h_s^{k}(\mathcal{Y}), \qquad \mathcal{Y}=(y_0,\dots, y_k) \in (\mathbb{R}^d)^{k+1}.\end{equation*}

Lemma 4.2.

  (i) For any $0\le s \le t \le T < \infty$ ,

    \begin{equation*}\int_{(\mathbb{R}^d)^k}h_{t,s}^{k}(0,y_1,\dots,y_k) \textrm{d}\textbf{y}\le C_{d,k,T} (t^d-s^d),\end{equation*}
    where $C_{d,k,T}=k^2 (2^d\theta_d)^kT^{d(k-1)}$ .
  (ii) Let $j \in \{1, \dots, (k_1 \wedge k_2)+1\}$ and suppose that $\textbf{y}_0 \in (\mathbb{R}^d)^{j-1}$ , $\textbf{y}_1 \in (\mathbb{R}^d)^{k_1 + 1-j}$ , and $\textbf{y}_2 \in (\mathbb{R}^d)^{k_2+1-j}$ . Then, for $0 \le t_1 \le s \le t_2 \le T < \infty$ ,

    \begin{align*}&\int_{(\mathbb{R}^d)^{k_1 + k_2 + 1 - j}} h_{s,t_1}^{k_1} (0, \textbf{y}_0, \textbf{y}_1) h_{t_2, s}^{k_2}(0, \textbf{y}_0, \textbf{y}_2) \textrm{d}{\textbf{y}_0} \textrm{d}{\textbf{y}_1} \textrm{d}{\textbf{y}_2} \\[3pt]& \leq 36(k_1 k_2)^6 ((2T)^d \theta_d)^{2(k_1 + k_2)} \big(t_2^d - t_1^d\big)^2.\end{align*}

Proof. We note that for any $0 \le s < t$ with $y_0\equiv 0$ ,

\begin{align*}&h_{t,s}^{k}(0, y_1, \dots, y_k) = \textbf{1}\big\{{ 2s < \max_{0 \leq i < j \leq k} \big\|{y_i - y_j}\big\| \leq 2t}\big\} \\&\qquad \leq \prod_{i=1}^k \textbf{1} \big\{ y_i \in B(0,2T) \big\} \bigg( \sum_{i=1}^k \textbf{1}\big\{{ 2s < \big\|{y_i}\big\| \leq 2t }\big\} + \sum_{1 \leq i < j \leq k} \textbf{1}\big\{{ 2s < \big\|{y_i - y_j}\big\| \leq 2t}\big\} \bigg).\end{align*}

For each $i = 1, \dots, k$ , let $\textbf{y}^{(i)}$ be the tuple $(y_1, \dots, y_{i-1}, y_{i+1}, \dots, y_k) \in (\mathbb{R}^d)^{k-1}$ with the ith coordinate omitted. Then

\begin{align*}\int_{(\mathbb{R}^d)^k} h^{k}_{t,s}(0, y_1, \dots, y_k) \textrm{d}\textbf{y} &\leq \sum_{i=1}^k \int_{B(0,2T)^{k-1}} \int_{\mathbb{R}^d} \textbf{1}\big\{{ 2s < \big\|{y_i}\big\| \leq 2t }\big\} \textrm{d}{y_i}\,\textrm{d}{\textbf{y}^{(i)}} \\[3pt]&\quad+ \sum_{1 \leq i < j \leq k} \int_{B(0,2T)^{k-1}} \int_{\mathbb{R}^d} \textbf{1}\big\{{ 2s < \big\|{y_{i} - y_j}\big\| \leq 2t }\big\} \textrm{d}{y_i}\, \textrm{d}{\textbf{y}^{(i)}} \\[3pt]&= \Big( k+\binom{k}{2} \Big) m\big( B(0,2T) \big)^{k-1} \big[ m\big(B(0,2t)\big)-m\big(B(0,2s)\big) \big] \\[3pt]&\le C_{d,k,T}(t^d-s^d)\end{align*}

as required.
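The key volume computation above is $m\big(B(0,2t)\big)-m\big(B(0,2s)\big) = 2^d\theta_d\big(t^d-s^d\big)$. A Monte Carlo sanity check in $d=2$ (where $\theta_2=\pi$), with sample values $s=0.3$, $t=0.5$, $T=1$ chosen purely for illustration:

```python
import math, random

random.seed(1)
d, s, t, T = 2, 0.3, 0.5, 1.0
theta_d = math.pi                        # volume of the unit ball in d = 2
exact = 2 ** d * theta_d * (t ** d - s ** d)

# estimate the volume of the annulus {y : 2s < |y| <= 2t}
# by uniform sampling of the enclosing box [-2T, 2T]^2
N = 200_000
hits = sum(1 for _ in range(N)
           if 2 * s < math.hypot(random.uniform(-2 * T, 2 * T),
                                 random.uniform(-2 * T, 2 * T)) <= 2 * t)
mc = hits / N * (4 * T) ** d
assert abs(mc - exact) < 0.1             # exact value is about 2.01
```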

Part (ii) is essentially the same as Lemma 7.1 in [Reference Owada18], so we omit the proof.

Proof of tightness in Theorem 3.2. To show tightness, it suffices to use Theorem 13.5 from [Reference Billingsley2], which requires that for every $0<T<\infty$ , there exists a $C > 0$ such that

(4.16) \begin{align} \mathbb{E}\big[ |\bar{\chi}_n(t_2) - \bar{\chi}_n(s)|^2 |\bar{\chi}_n(s) - \bar{\chi}_n(t_1)|^2 \big] \leq C(t_2^d - t_1^d)^2\end{align}

for all $0 \leq t_1 \leq s \leq t_2 \leq T$ and $n\in \mathbb{N}$ . To demonstrate (4.16), we give an abridged proof; tightness is established similarly for the analogous processes in [Reference Owada18, Reference Penrose21]. Let us begin with some helpful notation, namely,

\begin{align*}&h_{n, t, s}^{k}(\mathcal{Y}) := h_{r_n(t), r_n(s)}^{k}(\mathcal{Y}) = h_{r_n(t)}^k (\mathcal{Y}) - h_{r_n(s)}^k (\mathcal{Y}), \\[3pt]&\zeta^{k}_{n, t, s} := S_k\big(\mathcal{P}_n, r_n(t)\big) - S_k\big(\mathcal{P}_n, r_n(s)\big) = \sum_{\mathcal{Y} \subset \mathcal{P}_n} h_{n, t, s}^{k}(\mathcal{Y}).\end{align*}

By the same argument as in the proof of Proposition 3.1, one can apply Fubini’s theorem to obtain

(4.17) \begin{align}&\mathbb{E}\big[ |\bar{\chi}_n(t_2) - \bar{\chi}_n(s)|^2 |\bar{\chi}_n(s) - \bar{\chi}_n(t_1)|^2 \big] \\[3pt] &\qquad= \frac{1}{n^2} \sum_{(k_1, k_2, k_3, k_4) \in \mathbb{N}_0^4} (\!-\!1)^{k_1 + k_2 + k_3 + k_4} \mathbb{E}\Big[ \big(\zeta^{k_1}_{n, t_2, s} - \mathbb{E}[\zeta^{k_1}_{n, t_2, s}]\big ) \big(\zeta^{k_2}_{n, t_2, s} - \mathbb{E}[\zeta^{k_2}_{n, t_2, s}]\big ) \notag \\[3pt]&\phantom{\qquad= \frac{1}{n^2} \sum_{\textbf{k} \in \mathbb{N}_0^4} (\!-\!1)^{k_1 + k_2 + k_3 + k_4} \big(\zeta^{(k_1)}_{n, s, t_1} - \mathbb{E}[\zeta^{(k_1)}} \times \big(\zeta^{k_3}_{n, s, t_1} - \mathbb{E}[\zeta^{k_3}_{n, s, t_1}]\big ) \big(\zeta^{k_4}_{n, s, t_1} - \mathbb{E}[\zeta^{k_4}_{n, s, t_1}]\big ) \Big]. \notag \end{align}

Our objective now is to find a suitable bound for

(4.18) \begin{align} \mathbb{E}\Big[ \big(\zeta^{k_1}_{n, t_2, s} - \mathbb{E}[\zeta^{k_1}_{n, t_2, s}]\big ) \big(\zeta^{k_2}_{n, t_2, s} - \mathbb{E}[\zeta^{k_2}_{n, t_2, s}]\big ) \big(\zeta^{k_3}_{n, s, t_1} - \mathbb{E}[\zeta^{k_3}_{n, s, t_1}]\big ) \big(\zeta^{k_4}_{n, s, t_1} - \mathbb{E}[\zeta^{k_4}_{n, s, t_1}]\big ) \Big].\end{align}

To this end, let us refine the notation once more by setting $\xi_1 := \zeta^{k_1}_{n, t_2, s}$ , $\xi_2 := \zeta^{k_2}_{n, t_2, s}$ , $\xi_3 := \zeta^{k_3}_{n, s, t_1}$ , and $\xi_4 := \zeta^{k_4}_{n, s, t_1}$ . Furthermore, let $h_1 := h_{n, t_2, s}^{k_1}$ , $h_2 := h_{n, t_2, s}^{k_2}$ , $h_3 := h_{n, s, t_1}^{k_3}$ , and $h_4 := h_{n, s, t_1}^{k_4}$ . Define $[n] := \{1,2, \dots, n\}$ , and for any $\sigma \subset [4]$ let $\xi_\sigma = \prod_{i \in \sigma} \xi_i$ , where we set $\xi_\emptyset = 1$ by convention. Then we can express (4.18) quite simply as

(4.19) \begin{align} \sum_{\sigma \subset [4]} (\!-\!1)^{|\sigma|}\, \mathbb{E}[ \xi_\sigma] \prod_{i \in [4]\setminus \sigma} \mathbb{E}[\xi_i].\end{align}

For $\sigma \subset [4]$ with $\sigma \neq \emptyset$ , and finite subsets $\mathcal{Y}_j \subset \mathbb{R}^d$ , $j\in \sigma$ , we define $\mathcal{Y}_\sigma := \bigcup_{j \in \sigma}\mathcal{Y}_j$ . Given a subset $\tau \subset \sigma \subset [4]$ , we also define

\begin{align*}\mathcal I_{\tau, \sigma} (\mathcal{Y}_\sigma) &:= \prod_{j\in \tau} \textbf{1} \big\{ \text{there exists } p \in \tau\setminus \{j\} \text{ such that } \mathcal{Y}_j \cap \mathcal{Y}_p \neq \emptyset \big\} \\[3pt]&\quad \times \prod_{j \in \sigma \setminus \tau} \textbf{1} \big\{ \mathcal{Y}_j \cap \mathcal{Y}_q =\emptyset \text{ for all } q \in \sigma \setminus \{j\} \big\}.\end{align*}

Note that $\mathcal I_{\tau, \sigma}(\mathcal{Y}_\sigma)= 0$ whenever $|\tau|=1$ , and

(4.20) \begin{equation} \sum_{\tau \subset \sigma} \mathcal I_{\tau, \sigma}(\mathcal{Y}_\sigma)=1.\end{equation}

Furthermore, if $\tau=\sigma$ , we write $\mathcal I_\sigma(\!\cdot\!):= \mathcal I_{\sigma, \sigma}(\!\cdot\!)$ . It follows from (4.20) and the Palm theory in the appendix that, for each non-empty $\sigma \subset [4]$ ,

\begin{align*}\mathbb{E}[\xi_\sigma] &= \mathbb{E} \Big[ \sum_{\mathcal{Y}_j \subset \mathcal{P}_n, \, j\in \sigma} \prod_{i \in \sigma} h_i (\mathcal{Y}_i) \Big] \\[3pt]&= \sum_{\tau \subset \sigma} \mathbb{E} \Big[ \sum_{\mathcal{Y}_j \subset \mathcal{P}_n, \, j\in \sigma} \mathcal I_{\tau, \sigma} (\mathcal{Y}_\sigma) \prod_{i\in \sigma} h_i(\mathcal{Y}_i) \Big] \\[3pt]&=\sum_{\tau \subset \sigma} \mathbb{E} \Big[ \sum_{\mathcal{Y}_j \subset \mathcal{P}_n, \, j\in \tau} \mathcal I_{\tau} (\mathcal{Y}_\tau) \prod_{i\in \tau} h_i(\mathcal{Y}_i) \Big] \prod_{i \in \sigma \setminus \tau} \mathbb{E}[\xi_i].\end{align*}

Hence, (4.19) is equal to

\begin{align*}&\sum_{\sigma \subset [4]} \sum_{\tau \subset \sigma} (\!-\!1)^{|\sigma|}\, \mathbb{E} \Big[ \sum_{\mathcal{Y}_j \subset \mathcal{P}_n, \, j\in \tau} \mathcal I_{\tau} (\mathcal{Y}_\tau)\prod_{i\in \tau} h_i(\mathcal{Y}_i) \Big] \prod_{i \in \sigma \setminus \tau} \mathbb{E}[\xi_i] \prod_{i \in [4]\setminus \sigma} \mathbb{E}[\xi_i] \\[3pt]&= \sum_{\tau \subset [4]} \mathbb{E} \Big[ \sum_{\mathcal{Y}_j \subset \mathcal{P}_n, \, j\in \tau} \mathcal I_{\tau} (\mathcal{Y}_\tau)\prod_{i\in \tau} h_i(\mathcal{Y}_i) \Big] \prod_{i \in [4]\setminus \tau} \mathbb{E}[\xi_i] \sum_{\tau \subset \sigma \subset [4]} (\!-\!1)^{|\sigma|} \\[3pt]&= \mathbb{E} \Big[ \sum_{\mathcal{Y}_1\subset \mathcal{P}_n} \sum_{\mathcal{Y}_2\subset \mathcal{P}_n} \sum_{\mathcal{Y}_3\subset \mathcal{P}_n} \sum_{\mathcal{Y}_4\subset \mathcal{P}_n} \mathcal I_{[4]}(\mathcal{Y}_{[4]}) \prod_{i=1}^4 h_i (\mathcal{Y}_i) \Big],\end{align*}

where the last line follows from the fact that

\begin{equation*}\sum_{\tau \subset \sigma \subset [4]} (\!-\!1)^{|\sigma|} = \binom{4-|\tau|}{0}(\!-\!1)^{|\tau|} + \dots + \binom{4-|\tau|}{4-|\tau|}(\!-\!1)^4 = 0\end{equation*}

unless $\tau = [4]$ . Substituting this back into (4.17) and bounding $(\!-\!1)^{k_1+k_2+k_3+k_4}$ by its absolute value, we get

\begin{align*}&\mathbb{E}\big[ |\bar{\chi}_n(t_2) - \bar{\chi}_n(s)|^2 |\bar{\chi}_n(s) - \bar{\chi}_n(t_1)|^2 \big] \\[3pt]&\qquad \leq \sum_{(k_1, k_2, k_3, k_4) \in \mathbb{N}_0^4} \frac{1}{n^2} \mathbb{E} \Big[ \sum_{\mathcal{Y}_1 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_2 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_3 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_4 \subset \mathcal{P}_n} \mathcal I_{[4]}(\mathcal{Y}_{[4]}) \prod_{i = 1}^4 h_i(\mathcal{Y}_{i}) \Big].\end{align*}
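The vanishing of $\sum_{\tau\subset\sigma\subset[4]}(\!-\!1)^{|\sigma|}$ for $\tau\neq[4]$, used in the last step of the preceding display, can be brute-forced over all subset pairs (an illustration, not part of the argument):

```python
from itertools import chain, combinations

def subsets(s):
    # all subsets of s, as tuples
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

universe = range(4)                       # the index set [4]
for tau in subsets(universe):
    # sum of (-1)^{|sigma|} over sigma with tau <= sigma <= [4]
    total = sum((-1) ** len(sigma) for sigma in subsets(universe)
                if set(tau) <= set(sigma))
    assert total == (1 if len(tau) == 4 else 0)
```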

Now, it suffices to show that the right-hand side above is less than $C\big(t_2^d-t_1^d\big)^2$ for some $C > 0$ . We can break the above summand into four distinct cases:

  (I) $b_{12}=|\mathcal{Y}_1 \cap \mathcal{Y}_2| >0$ , $b_{34}=|\mathcal{Y}_3 \cap \mathcal{Y}_4| >0$ , with all other pairwise intersections empty.

  (II) $b_{13}=|\mathcal{Y}_1 \cap \mathcal{Y}_3| >0$ , $b_{24}=|\mathcal{Y}_2 \cap \mathcal{Y}_4| >0$ , with all other pairwise intersections empty.

  (III) $b_{14}=|\mathcal{Y}_1 \cap \mathcal{Y}_4| >0$ , $b_{23}=|\mathcal{Y}_2 \cap \mathcal{Y}_3| >0$ , with all other pairwise intersections empty.

  (IV) For each i, there exists a $j\neq i$ such that $ \mathcal{Y}_i \cap \mathcal{Y}_j \neq \emptyset$ , but none of (I)–(III) holds.

We prove appropriate upper bounds for Cases (I) and (IV), and the other two cases follow from the proof for (I). The Palm theory in the appendix implies that

(4.21) \begin{align} &\frac{1}{n^2} \mathbb{E} \Big[ \sum_{\mathcal{Y}_1 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_2 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_3 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_4 \subset \mathcal{P}_n} \prod_{i=1}^4 h_i(\mathcal{Y}_i)\\ &\,\,\qquad \times \textbf{1}\big\{{ |\mathcal{Y}_1 \cap \mathcal{Y}_2| = b_{12}, |\mathcal{Y}_3 \cap \mathcal{Y}_4| = b_{34}, |\mathcal{Y}_i \cap \mathcal{Y}_j| = 0 \text{ for other } (i,j)}\big\}\Big] \notag \\[3pt]&= \frac{1}{n^2} \mathbb{E}\Big[ \sum_{\mathcal{Y}_1 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_2 \subset \mathcal{P}_n} h_1(\mathcal{Y}_1) h_2(\mathcal{Y}_2) \textbf{1}\big\{{ |\mathcal{Y}_1 \cap \mathcal{Y}_2| = b_{12}}\big\}\Big] \notag \\[3pt]& \qquad\,\,\times \mathbb{E}\Big[ \sum_{\mathcal{Y}_3 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_4 \subset \mathcal{P}_n} h_3(\mathcal{Y}_3) h_4(\mathcal{Y}_4) \textbf{1}\big\{{ |\mathcal{Y}_3 \cap \mathcal{Y}_4| = b_{34}}\big\}\Big] \notag \\[3pt]&=\frac{n^{k_1 + k_2 + 1 - b_{12}}}{b_{12}! (k_1 + 1 - b_{12})! (k_2 + 1 - b_{12})!} \notag \\[3pt]&\phantom{n^{k_1 + k_2 + 1 - b_{12}}} \times \mathbb{E}\big[ h_1(X_1, \dots, X_{k_1 + 1}) h_2(X_1, \dots, X_{b_{12}}, X_{k_1 + 2}, \dots, X_{k_1 + k_2 + 2 - b_{12}})\big] \notag \\[3pt]&\times\frac{n^{k_3 + k_4 + 1 - b_{34}}}{b_{34}! (k_3 + 1 - b_{34})! (k_4 + 1 - b_{34})!} \notag \\[3pt]&\phantom{n^{k_1 + k_2 + 1 - j_{12}}} \times \mathbb{E}\big[ h_3(X_1, \dots, X_{k_3 + 1}) h_4(X_1, \dots, X_{b_{34}}, X_{k_3 + 2}, \dots, X_{k_3 + k_4 + 2 - b_{34}})\big]. \notag\end{align}

In the remainder of the proof, assume for ease of description that $(2T)^d \theta_d > 1$ , $\Vert f \Vert_{\infty} > 1$ , and $T > 1$ . Moreover, assume without loss of generality that $k_1 \ge k_2$ and $k_3 \ge k_4$ . Using trivial bounds and the customary changes of variable (i.e., $x_1 = x$ and $x_i = x + s_ny_{i-1}$ for $i=2,\dots,k_1+k_2+2-b_{12}$ ), applying Lemma 4.2(i), and recalling that $a_T=(2T)^d \theta_d \|\,f\|_\infty$ , we see that

\begin{align*}&n^{k_1 + k_2 + 1 - b_{12}} \mathbb{E}[ h_1(X_1, \dots, X_{k_1 + 1}) h_2(X_1, \dots, X_{b_{12}}, X_{k_1 + 2}, \dots, X_{k_1 + k_2 + 2 - b_{12}})] \\[3pt]&\quad\leq ( \Vert f \Vert_{\infty})^{k_1 + k_2 + 1 - b_{12}}\int_{(\mathbb{R}^d)^{k_2+1-b_{12}}}\int_{(\mathbb{R}^d)^{k_1+1-b_{12}}}\int_{(\mathbb{R}^d)^{b_{12}-1}}h_{t_2, s}^{k_1}(0, \textbf{y}_0, \textbf{y}_1) \\[3pt]&\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \times h_{t_2, s}^{k_2}(0, \textbf{y}_0, \textbf{y}_2) \textrm{d}{\textbf{y}_0} \textrm{d}{\textbf{y}_1} \textrm{d}{\textbf{y}_2} \\[3pt]&\quad\leq ( \Vert f \Vert_{\infty})^{k_1 + k_2} \big((2T)^d\theta_d\big)^{k_2+1-b_{12}} \int_{(\mathbb{R}^d)^{k_1+1-b_{12}}}\int_{(\mathbb{R}^d)^{b_{12}-1}} h_{t_2, s}^{k_1}(0, \textbf{y}_0, \textbf{y}_1) \textrm{d}{\textbf{y}_0} \textrm{d}{\textbf{y}_1} \\[3pt]&\quad\leq ( \Vert f \Vert_{\infty})^{k_1 + k_2} \big((2T)^d\theta_d\big)^{k_2+1-b_{12}} C_{d, k_1, T} (t_2^d - s^d) \\[3pt]&\quad\le k_1^2 (a_T )^{k_1+k_2}(t_2^d - s^d).\end{align*}

Hence (4.21) is bounded by

\begin{align*}&\frac{(a_T)^{k_1 + k_2}k_1^2 }{b_{12}!(k_1 + 1 - b_{12})! (k_2 + 1 - b_{12})!} (t_2^d - s^d) \frac{( a_T)^{k_3 + k_4}k_3^2 }{b_{34}!(k_3 + 1 - b_{34})! (k_4 + 1 - b_{34})!} (s^d - t_1^d) \\[3pt]&\leq \frac{(a_T)^{k_1 + k_2 + k_3 + k_4}k_1^2 k_3^2 }{b_{12}!(k_1 + 1 - b_{12})! (k_2 + 1 - b_{12})! b_{34}!(k_3 + 1 - b_{34})! (k_4 + 1 - b_{34})!} (t_2^d - t_1^d)^2.\end{align*}

Finally we see that

\begin{align*}&\sum_{\substack{k_1 \geq k_2, \, k_3 \geq k_4, \\ 1 \leq b_{12} \leq k_2+1, \\ 1 \leq b_{34} \leq k_4+1}} \frac{(a_T)^{k_1 + k_2 + k_3 + k_4}k_1^2 k_3^2}{b_{12}!(k_1 + 1 - b_{12})! (k_2 + 1 - b_{12})! b_{34}!(k_3 + 1 - b_{34})! (k_4 + 1 - b_{34})!} <\infty,\end{align*}

since

(4.22) \begin{align}\sum_{k_1=0}^\infty \sum_{k_2=0}^{k_1} \sum_{\ell =1}^{k_2+1} \frac{(a_T)^{k_1+k_2}k_1^2}{\ell! (k_1+1-\ell)! (k_2+1-\ell)!} &= \sum_{\ell=1}^\infty \sum_{k_1=\ell-1}^\infty \frac{(a_T)^{k_1}k_1^2}{\ell! (k_1+1-\ell)!}\, \sum_{k_2=\ell-1}^{k_1} \frac{(a_T)^{k_2}}{(k_2+1-\ell)!} \notag \\&\le e^{a_T}\sum_{\ell=1}^\infty \frac{(a_T)^{\ell-1}}{\ell!} \sum_{k_1=\ell-1}^\infty \frac{(a_T)^{k_1}k_1^2}{(k_1+1-\ell)!}\, <\infty. \end{align}
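Convergence of the triple series in (4.22) is also easy to probe numerically: the factorials dominate once $k_1$ is moderately large, so partial sums stabilize quickly. A sketch with the sample value $a_T = 2$ (an assumption for illustration only):

```python
import math

def partial_sum(a, K):
    # truncation of the triple series on the left-hand side of (4.22)
    total = 0.0
    for k1 in range(K + 1):
        for k2 in range(k1 + 1):
            for l in range(1, k2 + 2):
                total += (a ** (k1 + k2) * k1 ** 2
                          / (math.factorial(l) * math.factorial(k1 + 1 - l)
                             * math.factorial(k2 + 1 - l)))
    return total

a = 2.0
s30, s40 = partial_sum(a, 30), partial_sum(a, 40)
assert s30 <= s40                         # nonnegative terms
assert abs(s40 - s30) < 1e-6 * s40        # the tail is already negligible
```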

Now, for Cases (I)–(III), we have an upper bound of the form $C(t_2^d - t_1^d)^2$ , as desired.

Thus we need only demonstrate the same for Case (IV). In addition to the notation $b_{ij}$ , $1\le i < j \le 4$ , introduced above, define for $\mathcal{Y}_i \in (\mathbb{R}^d)^{k_i+1}$ , $k_i \in \mathbb{N}_0$ , $i=1,\dots,4$ ,

\begin{equation*}b_{ijk} := |\mathcal{Y}_i \cap \mathcal{Y}_j \cap \mathcal{Y}_k|, \qquad 1\le i < j < k \le 4,\end{equation*}
\begin{equation*}b_{1234} := |\mathcal{Y}_1 \cap \mathcal{Y}_2 \cap \mathcal{Y}_3 \cap \mathcal{Y}_4|,\end{equation*}

and

(4.23) \begin{equation} b := b_{12} + b_{13} + b_{14} + b_{23} + b_{24} + b_{34} - b_{123} - b_{124} - b_{134} -b_{234} + b_{1234}, \end{equation}

so that $|\mathcal{Y}_1 \cup \mathcal{Y}_2 \cup \mathcal{Y}_3 \cup \mathcal{Y}_4| = k_1 +k_2 +k_3 +k_4+4-b$ with $b\ge 3$ . Let $\mathcal B$ be the collection of $\textbf{b}=(b_{12}, \dots, b_{1234}) \in \mathbb{N}_0^{11}$ satisfying the conditions in Case (IV). For a non-empty $\sigma \subset [4]$ and $\mathcal{Y}_i \in (\mathbb{R}^d)^{k_i+1}$ , $i=1,\dots,4$ , let

(4.24) \begin{equation} j_\sigma := \bigg| \, \bigcap_{i\in \sigma} \Big( \mathcal{Y}_i \setminus \bigcup_{j\in [4]\setminus \sigma} \mathcal{Y}_j \Big) \, \bigg|.\end{equation}

In particular, the $j_\sigma$ are functions of $\textbf{b}$ such that $\sum_{\sigma \subset [4], \, \sigma \neq \emptyset} j_\sigma = |\mathcal{Y}_1 \cup \mathcal{Y}_2 \cup \mathcal{Y}_3 \cup \mathcal{Y}_4|$ . The Palm theory in the appendix yields

\begin{align*}&\frac{1}{n^2}\, \mathbb{E} \Big[ \sum_{\mathcal{Y}_1 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_2 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_3 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_4 \subset \mathcal{P}_n} \prod_{i=1}^4h_i(\mathcal{Y}_i)\, \textbf{1}\big\{{ \text{case \textbf{(IV)} holds}}\big\} \Big] \\&\quad =\sum_{\textbf{b} \in \mathcal B} \frac{1}{n^2}\, \mathbb{E} \Big[ \sum_{\mathcal{Y}_1 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_2 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_3 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_4 \subset \mathcal{P}_n} \prod_{i=1}^4h_i(\mathcal{Y}_i)\, \textbf{1}\big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=b_{12}, |\mathcal{Y}_1 \cap \mathcal{Y}_3|=b_{13}, \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \dots, |\mathcal{Y}_1 \cap \mathcal{Y}_2 \cap \mathcal{Y}_3 \cap \mathcal{Y}_4|=b_{1234} \big\} \Big] \\[3pt]&=\sum_{\textbf{b} \in \mathcal B} \frac{n^{k_1+k_2+k_3+k_4+2-b}}{\prod_{\sigma \subset [4], \, \sigma \neq \emptyset} j_\sigma!}\,\mathbb{E} \Big[ \prod_{i=1}^4h_i(\mathcal{Y}_i)\, \textbf{1}\big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=b_{12}, |\mathcal{Y}_1 \cap \mathcal{Y}_3|=b_{13}, \\[3pt]&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \dots, |\mathcal{Y}_1 \cap \mathcal{Y}_2 \cap \mathcal{Y}_3 \cap \mathcal{Y}_4|=b_{1234} \big\} \Big].\end{align*}

Under the conditions in Case (IV), at least one of the $b_{ij}$ is nonzero, so we may assume without loss of generality that $b_{13}>0$ . Then we have

\begin{align*}&n^{k_1+k_2+k_3+k_4+2-b} \mathbb{E} \Big[ \prod_{i=1}^4h_i(\mathcal{Y}_i)\, \textbf{1}\big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=b_{12}, \dots, |\mathcal{Y}_1 \cap \mathcal{Y}_2 \cap \mathcal{Y}_3 \cap \mathcal{Y}_4|=b_{1234} \big\} \Big] \\[3pt]&= n^{k_1+k_2+k_3+k_4+2-b} \int_{(\mathbb{R}^d)^{k_1+k_2+k_3+k_4+4-b}} h_1(\textbf{x}_0, \textbf{x}_1) h_3(\textbf{x}_0, \textbf{x}_3) h_2(\textbf{x}_2) h_4 (\textbf{x}_4) \\[3pt]&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times \prod_{x \in \bigcup_{i=0}^4 \textbf{x}_i} f(x) \textrm{d}\, (\textbf{x}_0 \cup \textbf{x}_1 \cup \cdots \cup \textbf{x}_4),\end{align*}

where $\textbf{x}_0$ is a collection of elements in $\mathbb{R}^d$ with $|\textbf{x}_0|=b_{13}>0$. In other words, $\textbf{x}_0 \in (\mathbb{R}^d)^{b_{13}}$, so that $\textbf{x}_1\in (\mathbb{R}^d)^{k_1+1-b_{13}}$ and $\textbf{x}_3\in (\mathbb{R}^d)^{k_3+1-b_{13}}$ with $\textbf{x}_1 \cap \textbf{x}_3 = \emptyset$. Moreover, $\textbf{x}_2 \in (\mathbb{R}^d)^{k_2+1}$ and $\textbf{x}_4\in (\mathbb{R}^d)^{k_4+1}$, such that if $\textbf{x}_2 \cap \textbf{x}_4 =\emptyset$, then $\textbf{x}_i \cap (\textbf{x}_0 \cup \textbf{x}_1 \cup \textbf{x}_3) \neq \emptyset$ for $i=2,4$, and if $\textbf{x}_2 \cap \textbf{x}_4 \neq \emptyset$, then $(\textbf{x}_2 \cup \textbf{x}_4) \cap (\textbf{x}_0 \cup \textbf{x}_1 \cup \textbf{x}_3)\neq \emptyset$.

Now, let us perform the change of variables $\textbf{x}_i = x\textbf{1} +s_n \textbf{y}_i$ for $i=0,\dots,4$ , where $\textbf{1}$ is a vector with all entries 1, and the first element of $\textbf{y}_0$ is taken to be 0. In addition to this, we apply the translation and scale invariance of the $h_i$ to get

\begin{align*}&n^{k_1+k_2+k_3+k_4+2-b} \mathbb{E} \bigg[ \prod_{i=1}^4h_i(\mathcal{Y}_i)\, \textbf{1}\big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=b_{12}, \dots, |\mathcal{Y}_1 \cap \mathcal{Y}_2 \cap \mathcal{Y}_3 \cap \mathcal{Y}_4|=b_{1234} \big\} \bigg] \\[3pt]&=n^{k_1+k_2+k_3+k_4+2-b}s_n^{d(k_1+k_2+k_3+k_4+3-b)} \int_{\mathbb{R}^d} \int_{(\mathbb{R}^d)^{k_1+k_2+k_3+k_4+3-b}} h_{t_2,s}^{k_1}(\textbf{y}_0, \textbf{y}_1) h_{s, t_1}^{k_3}\big(\textbf{y}_0, \textbf{y}_3\big) \\[3pt]&\qquad \qquad \qquad \qquad \times h_{t_2,s}^{k_2}(\textbf{y}_2) h_{s, t_1}^{k_4}(\textbf{y}_4) \prod_{y \in \bigcup_{i=0}^4 \textbf{y}_i} f(x+s_n y) \textrm{d}\big( (\textbf{y}_0 \cup \dots \cup \textbf{y}_4)\setminus \{ 0 \} \big) \textrm{d}x.\end{align*}

Using $ns_n^d=1$, together with the trivial bounds $h_{t_2,s}^{k_2}(\textbf{y}_2) \le h_T^{k_2}(\textbf{y}_2)$, $h_{s, t_1}^{k_4}(\textbf{y}_4)\le h_T^{k_4}(\textbf{y}_4)$, and $f(x+s_n y) \le \| f \|_\infty$, one can bound the last expression by

(4.25) \begin{align}&\| f \|_\infty^{k_1+k_2+k_3+k_4+3-b} \int_{(\mathbb{R}^d)^{k_1+k_2+k_3+k_4+3-b}} h_{t_2,s}^{k_1}(\textbf{y}_0, \textbf{y}_1) h_{s, t_1}^{k_3}(\textbf{y}_0, \textbf{y}_3) \notag \\[3pt]&\quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times h_T^{k_2}(\textbf{y}_2) h_T^{k_4}(\textbf{y}_4) \textrm{d}\big( (\textbf{y}_0 \cup \dots \cup \textbf{y}_4)\setminus \{ 0 \} \big) \notag \\[3pt]&= \| f \|_\infty^{k_1+k_2+k_3+k_4+3-b} \int_{(\mathbb{R}^d)^{k_1+k_3+1-b_{13}}} h_{t_2,s}^{k_1}(\textbf{y}_0, \textbf{y}_1) h_{s, t_1}^{k_3}(\textbf{y}_0, \textbf{y}_3) \notag \\[3pt]&\qquad \quad \times \bigg\{ \int_{(\mathbb{R}^d)^{k_2+k_4+2-b+b_{13}}} h_T^{k_2}(\textbf{y}_2) h_T^{k_4}(\textbf{y}_4) \textrm{d}\big( (\textbf{y}_2 \cup \textbf{y}_4) \setminus (\textbf{y}_0 \cup \textbf{y}_1 \cup \textbf{y}_3) \big) \bigg\} \textrm{d}\big( \textbf{y}_0\setminus \{ 0 \} \big)\textrm{d}\textbf{y}_1 \textrm{d}\textbf{y}_3. \end{align}

Suppose $h_T^{k_2}(\textbf{y}_2) h_T^{k_4}(\textbf{y}_4) =1$ , so that

(4.26) \begin{equation} \textbf{y}_2 \cap \textbf{y}_4 \neq \emptyset, \qquad \textbf{y}_2 \cap (\textbf{y}_0 \cup \textbf{y}_1 \cup \textbf{y}_3) = \emptyset, \qquad \textbf{y}_4 \cap (\textbf{y}_0 \cup \textbf{y}_1 \cup \textbf{y}_3) \neq \emptyset.\end{equation}

Then there exists $y'\in \textbf{y}_4 \cap (\textbf{y}_0 \cup \textbf{y}_1 \cup \textbf{y}_3)$ such that all points in $\textbf{y}_2$ are at distance at most $4T$ from $y'$. Since $y'$ itself lies within distance $2T$ of the origin (recall that the first element of $\textbf{y}_0$ is 0), we conclude that all points in $\textbf{y}_2 \cup \textbf{y}_4$ are at distance at most $6T$ from the origin. As $b_{13} \le k_1 + k_3 +1$ and $b \ge 3$, we have

(4.27) \begin{align}&\int_{(\mathbb{R}^d)^{k_2+k_4+2-b+b_{13}}} h_T^{k_2}(\textbf{y}_2) h_T^{k_4}(\textbf{y}_4) \textrm{d}\big( (\textbf{y}_2 \cup \textbf{y}_4) \setminus (\textbf{y}_0 \cup \textbf{y}_1 \cup \textbf{y}_3) \big)\\[3pt] &\quad\le m\big( B(0,6T) \big)^{k_2+k_4+2-b+b_{13}} = \big( (6T)^d \theta_d \big)^{k_2+k_4+2-b+b_{13}} \le \big( (6T)^d \theta_d \big)^{k_1+k_2+k_3 + k_4}. \notag \end{align}

If $\textbf{y}_2$ and $\textbf{y}_4$ do not satisfy (4.26), then (4.27) can be verified in the same manner.

Applying (4.27), along with Lemma 4.2(ii), one can bound (4.25) by

\begin{align*}&\| f \|_\infty^{k_1+k_2+k_3+k_4+3-b} \big( (6T)^d \theta_d \big)^{k_1+k_2+k_3 + k_4} \times 36 (k_1k_3)^6 \big( (2T)^d \theta_d \big)^{2(k_1+k_3)} (t_2^d -t_1^d)^2 \\[3pt]&\quad\le 36 (k_1 k_2 k_3 k_4)^6 \big( (6T)^d \theta_d \| f \|_\infty \big)^{3(k_1 + k_2 + k_3+k_4)} (t_2^d -t_1^d)^2.\end{align*}

Thus, we conclude that

\begin{align*}&\frac{1}{n^2}\, \mathbb{E} \Big[ \sum_{\mathcal{Y}_1 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_2 \subset \mathcal{P}_n} \sum_{\mathcal{Y}_3 \subset \mathcal{P}_n}\sum_{\mathcal{Y}_4 \subset \mathcal{P}_n} \prod_{i=1}^4h_i(\mathcal{Y}_i)\, \textbf{1}\big\{{ \text{case \textbf{(IV)} holds}}\big\} \Big] \\[3pt]&\quad \le 36 (k_1 k_2 k_3 k_4)^6 \big( (6T)^d \theta_d \| f \|_\infty \big)^{3(k_1 + k_2 + k_3+k_4)} \sum_{\textbf{b}\in \mathcal B} \frac{1}{\prod_{\sigma \subset [4], \, \sigma \neq \emptyset}j_\sigma !}\, (t_2^d -t_1^d)^2.\end{align*}

To complete the proof, we need to show that

\begin{equation*}\sum_{k_1 \le k_2 \le k_3 \le k_4} (k_1 k_2 k_3 k_4)^6 \big( (6T)^d \theta_d \| f \|_\infty \big)^{3(k_1 + k_2 + k_3+k_4)} \sum_{\textbf{b}\in \mathcal B} \frac{1}{\prod_{\sigma \subset [4], \, \sigma \neq \emptyset}j_\sigma !} <\infty.\end{equation*}

As in the calculation at (4.22), the exponential growth of the term $(k_1 k_2 k_3 k_4)^6 \big( (6T)^d \theta_d \| f \|_\infty \big)^{3(k_1 + k_2 + k_3+k_4)}$ is dominated by the factorial decay coming from the $j_\sigma!$, while proving

\begin{equation*}\sum_{k_1 \le k_2 \le k_3 \le k_4} \sum_{\textbf{b}\in \mathcal B} \frac{1}{\prod_{\sigma \subset [4], \, \sigma \neq \emptyset}j_\sigma !} <\infty\end{equation*}

is straightforward.
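For completeness, we sketch one way to see this last convergence; the bound below is ours and cruder than necessary. Every $\textbf{b} \in \mathcal B$ satisfies $\sum_{\sigma \ni 4} j_\sigma = k_4+1$, where $\sigma$ ranges over the eight nonempty subsets of [4] containing 4, since these $j_\sigma$ partition the points of $\mathcal{Y}_4$. Discarding the factorials with $4 \notin \sigma$ and applying the multinomial theorem,

\begin{equation*}\sum_{\textbf{b}\in \mathcal B} \frac{1}{\prod_{\sigma \subset [4], \, \sigma \neq \emptyset}j_\sigma !} \le \sum_{\substack{j_\sigma \in \mathbb{N}_0, \ \sigma \ni 4, \\ \sum_{\sigma \ni 4} j_\sigma = k_4+1}} \frac{1}{\prod_{\sigma \ni 4} j_\sigma!} = \frac{8^{k_4+1}}{(k_4+1)!}.\end{equation*}

Since there are at most $(k_4+1)^3$ choices of $(k_1,k_2,k_3)$ with $k_1 \le k_2 \le k_3 \le k_4$, the double sum is bounded by $\sum_{k_4 = 0}^\infty (k_4+1)^3\, 8^{k_4+1}/(k_4+1)! < \infty$.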

4.5. Proof of Hölder continuity of $\mathcal H$

Proof of Hölder continuity in Theorem 3.2. Since $\mathcal H(t)-\mathcal H(s)$ has a normal distribution for $0 \le s < t<\infty$, we have for every $m\in \mathbb{N}$ that

\begin{equation*}\mathbb{E}\Big[ \big( \mathcal{H}(t) - \mathcal{H}(s) \big)^{2m} \Big] = \prod_{i=1}^m (2i-1) \Big( \mathbb{E}\big[ \big( \mathcal{H}(t) - \mathcal{H}(s) \big)^{2} \big] \Big)^m.\end{equation*}
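For example, taking $m=2$ recovers the familiar fourth-moment identity for a centered Gaussian random variable:

\begin{equation*}\mathbb{E}\Big[ \big( \mathcal{H}(t) - \mathcal{H}(s) \big)^{4} \Big] = 1 \cdot 3\, \Big( \mathbb{E}\big[ \big( \mathcal{H}(t) - \mathcal{H}(s) \big)^{2} \big] \Big)^2;\end{equation*}

in general the product $\prod_{i=1}^m (2i-1)$ is the double factorial $(2m-1)!!$, which counts the pairings in Wick's formula.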

Proposition 3.1 ensures that $\big(\! \sum_{k=0}^M(\!-\!1)^k \mathcal H_k(t), \, M\in \mathbb{N}_0 \big)$ constitutes a Cauchy sequence in $L^2(\Omega)$ . Therefore we have

\begin{align*}\mathbb{E}\big[ \big( \mathcal{H}(t) - \mathcal{H}(s) \big)^{2}\big] &=\lim_{M\to\infty} \mathbb{E} \bigg[ \Big( \sum_{k=0}^M (\!-\!1)^k \big(\mathcal H_k(t)-\mathcal H_k(s) \big) \Big)^2 \bigg] \\[3pt]&\le \bigg[ \sum_{k=0}^\infty \bigg\{ \mathbb{E} \Big[ \big( \mathcal H_k(t) - \mathcal H_k(s) \big)^2 \Big] \bigg\}^{1/2} \bigg]^2,\end{align*}

where the second line is due to the Cauchy–Schwarz inequality. We see at once that

(4.28) \begin{align}\mathbb{E}\big[ \big( \mathcal{H}_k(t) - \mathcal{H}_k(s) \big)^{2} \big] &= \Psi_{k,k}(t, t) - 2\Psi_{k,k}(t, s) + \Psi_{k,k}(s,s) \notag\\[3pt]&\le \Psi_{k,k}(t, t) - \Psi_{k,k}(t, s) =\sum_{j=1}^{k+1} \big( \psi_{j,k,k}(t,t) - \psi_{j,k,k}(t,s) \big) \end{align}

by monotonicity due to (2.2) and symmetry of $\Psi_{k,k}(\cdot, \cdot)$ in its arguments. Now, we note that

\begin{align*}\psi_{j,k,k}(t, t) - \psi_{j,k,k}(t, s) &= \frac{\int_{\mathbb{R}^d} f(x)^{2k+2-j} \textrm{d}{x}}{j! ((k+1-j)!)^2} \\[3pt]&\quad \times \int_{(\mathbb{R}^d)^{k+1-j}} \int_{(\mathbb{R}^d)^{k+1-j}} \int_{(\mathbb{R}^d)^{j-1}} h_t^k (0,\textbf{y}_0, \textbf{y}_1)h_{t,s}^k (0,\textbf{y}_0,\textbf{y}_2)\textrm{d}\textbf{y}_0 \textrm{d}\textbf{y}_1 \textrm{d}\textbf{y}_2.\end{align*}

Applying a bound $h_t^k(0,\textbf{y}_0,\textbf{y}_1) \le \prod_{y\in \textbf{y}_1} \textbf{1} \{ \|y\| \le 2T \}$ and integrating out $\textbf{y}_1$ , as well as using Lemma 4.2(i), we get

\begin{align*}\psi_{j,k,k}(t, t) - \psi_{j,k,k}(t, s) &\le \frac{k^2}{T^d j! \big( (k+1-j)! \big)^2}\, (a_T)^{2k+1-j} (t^d-s^d) \\[3pt]&\le \frac{dk^2}{T j! \big( (k+1-j)! \big)^2}\,(a_T)^{2k+1-j} (t-s),\end{align*}

where $a_T$ is given in (4.1). Substituting this back into (4.28), we obtain

\begin{align*}\mathbb{E}\big[ \big( \mathcal{H}_k(t) - \mathcal{H}_k(s) \big)^{2} \big] &\le \frac{dk^2}{T} \sum_{j=1}^{k+1} \frac{(a_T)^{2k+1-j}}{j! \big( (k+1-j)! \big)^2}\, (t-s) \\[3pt]&\le \frac{dk^2}{T(k+1)! a_T}\, \big( a_T (1+a_T) \big)^{k+1} (t-s).\end{align*}
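The second inequality above is a consequence of the binomial theorem: writing $a = a_T$ and using $\big( (k+1-j)! \big)^2 \ge (k+1-j)!$,

\begin{equation*}\sum_{j=1}^{k+1} \frac{a^{2k+1-j}}{j! \big( (k+1-j)! \big)^2} \le \frac{1}{(k+1)!} \sum_{j=0}^{k+1} \binom{k+1}{j}\, a^{2k+1-j} = \frac{a^k (1+a)^{k+1}}{(k+1)!} = \frac{\big( a(1+a) \big)^{k+1}}{(k+1)!\, a}.\end{equation*}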

Therefore, we conclude that

\begin{align*}\mathbb{E}\Big[ \big( \mathcal{H}(t) - \mathcal{H}(s) \big)^{2m} \Big] \le \prod_{i=1}^m (2i-1) \Big( \frac{d}{Ta_T} \Big)^m \bigg( \sum_{k=0}^\infty \frac{k\big( a_T(1+a_T)\big)^{(k+1)/2} }{\sqrt{(k+1)!}} \bigg)^{2m} (t-s)^m.\end{align*}

One can easily check, via the ratio test, that the infinite sum on the right-hand side converges. As a result, we can apply the Kolmogorov continuity theorem [Reference Karatzas and Shreve17], which implies that there exists a continuous version of $(\mathcal{H}(t), \, 0 \leq t \leq T)$ whose sample paths are Hölder continuous on [0, T] with any exponent $\gamma \in \big[0, (m-1)/(2m)\big)$. As $m$ is arbitrary, letting $m\to\infty$ yields Hölder continuity for every exponent $\gamma \in [0, 1/2)$.
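For ease of reference, the form of the Kolmogorov continuity criterion invoked here (see [Reference Karatzas and Shreve17]) is the following: if a process $X = (X_t, \, 0 \le t \le T)$ satisfies

\begin{equation*}\mathbb{E}\big[ |X_t - X_s|^{\alpha} \big] \le C\, |t-s|^{1+\beta}, \qquad 0 \le s \le t \le T,\end{equation*}

for some constants $\alpha, \beta, C > 0$, then $X$ admits a continuous modification whose sample paths are Hölder continuous of every order $\gamma \in (0, \beta/\alpha)$. The moment bound above is of this form with $\alpha = 2m$ and $\beta = m-1$, so that $\beta/\alpha = (m-1)/(2m)$.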

5. Appendix

We briefly record the Palm theory for Poisson processes needed in the proofs of Section 4, for ease of reference.

Lemma 5.1. (Lemma 8.1 in [Reference Owada18] and Theorems 1.6–1.7 in [Reference Penrose22].) Suppose $\mathcal{P}_n$ is a Poisson point process on $\mathbb{R}^d$ with intensity nf. Further, for every $k_i \in \mathbb{N}_0$ , $i=1,\dots,4$ , let $h_i(\mathcal{Y})$ be a real-valued measurable bounded function defined for $\mathcal{Y}\in (\mathbb{R}^d)^{k_i+1}$ . By a slight abuse of notation we let $\mathcal{Y}_i$ be collections of $k_i+1$ i.i.d. points with density f on the right-hand side of each equation below. We have the following results:

(i)

    \begin{equation*}\mathbb{E}\bigg[ \sum_{\mathcal{Y}_1 \subset \mathcal{P}_n} h_1(\mathcal{Y}_1)\bigg] = \frac{n^{k_1+1}}{(k_1+1)!}\, \mathbb{E}[h_1(\mathcal{Y}_1)].\end{equation*}
(ii) For every $\ell\in \{ 0,\dots,(k_1\wedge k_2)+1 \}$,

    \begin{align*}&\mathbb{E} \Big[ \sum_{\mathcal{Y}_1\subset \mathcal{P}_n} \sum_{\mathcal{Y}_2\subset \mathcal{P}_n} h_1(\mathcal{Y}_1)h_2(\mathcal{Y}_2)\, \textbf{1} \big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=\ell \big\} \Big] \\[3pt]&\quad = \frac{n^{k_1+k_2+2-\ell}}{\ell!(k_1+1-\ell)!(k_2+1-\ell)!}\, \mathbb{E} \big[ h_1(\mathcal{Y}_1)h_2(\mathcal{Y}_2)\, \textbf{1} \big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=\ell \big\} \big].\end{align*}
(iii) For every $\textbf{b} = (b_{12}, b_{13}, b_{23}, b_{123})\in \mathbb{N}_0^{4}$, we have

    \begin{align*}&\mathbb{E} \Big[ \sum_{\mathcal{Y}_1\subset \mathcal{P}_n} \sum_{\mathcal{Y}_2\subset \mathcal{P}_n}\sum_{\mathcal{Y}_3\subset \mathcal{P}_n} h_1(\mathcal{Y}_1)h_2(\mathcal{Y}_2)h_3(\mathcal{Y}_3)\, \\[3pt]&\qquad \times \textbf{1} \big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=b_{12}, \, |\mathcal{Y}_1\cap \mathcal{Y}_3|=b_{13}, \, |\mathcal{Y}_2\cap \mathcal{Y}_3|=b_{23}, \, |\mathcal{Y}_1\cap \mathcal{Y}_2 \cap \mathcal{Y}_3|=b_{123} \big\} \Big] \\[3pt]&= \frac{n^{k_1+k_2+k_3+3-b_{12}-b_{13}-b_{23}+b_{123}}}{\prod_{\sigma\subset [3], \, \sigma \neq \emptyset}j_\sigma !}\, \mathbb{E} \big[ h_1(\mathcal{Y}_1)h_2(\mathcal{Y}_2)h_3(\mathcal{Y}_3) \\[3pt]&\qquad \times \textbf{1} \big\{ |\mathcal{Y}_1\cap \mathcal{Y}_2|=b_{12}, \, |\mathcal{Y}_1\cap \mathcal{Y}_3|=b_{13}, \, |\mathcal{Y}_2\cap \mathcal{Y}_3|=b_{23}, \, |\mathcal{Y}_1\cap \mathcal{Y}_2 \cap \mathcal{Y}_3|=b_{123} \big\} \Big],\end{align*}
    where
    \begin{equation*}j_\sigma = \bigg| \bigcap_{i\in \sigma} \Big( \mathcal{Y}_i \setminus \bigcup_{j \in [3]\setminus \sigma}\mathcal{Y}_j \Big) \bigg|.\end{equation*}
(iv) Furthermore, we have

    \begin{align*}&\mathbb{E} \Big[ \sum_{\mathcal{Y}_1\subset \mathcal{P}_n} \sum_{\mathcal{Y}_2\subset \mathcal{P}_n}\sum_{\mathcal{Y}_3\subset \mathcal{P}_n} \sum_{\mathcal{Y}_4\subset \mathcal{P}_n} h_1(\mathcal{Y}_1)h_2(\mathcal{Y}_2)h_3(\mathcal{Y}_3)h_4(\mathcal{Y}_4)\, \\[3pt]&\qquad \times \textbf{1} \big\{ |\mathcal{Y}_i\cap \mathcal{Y}_j|=b_{ij}, \, 1\le i <j \le 4, \ |\mathcal{Y}_i\cap \mathcal{Y}_j \cap \mathcal{Y}_k|=b_{ijk}, \, 1 \le i < j < k\le 4, \\[3pt]&\qquad \qquad \qquad \qquad \qquad \qquad \qquad |\mathcal{Y}_1\cap \mathcal{Y}_2\cap \mathcal{Y}_3 \cap \mathcal{Y}_4|=b_{1234} \big\} \Big] \\[3pt]&=\frac{n^{k_1+k_2+k_3+k_4+4-b}}{\prod_{\sigma\subset [4], \, \sigma \neq \emptyset}j_\sigma !}\, \mathbb{E} \Big[ h_1(\mathcal{Y}_1)h_2(\mathcal{Y}_2)h_3(\mathcal{Y}_3)h_4(\mathcal{Y}_4)\, \textbf{1} \big\{ |\mathcal{Y}_i\cap \mathcal{Y}_j|=b_{ij}, \, 1\le i <j \le 4, \\[3pt]&\qquad \qquad \qquad \quad |\mathcal{Y}_i\cap \mathcal{Y}_j \cap \mathcal{Y}_k|=b_{ijk}, \, 1 \le i < j < k\le 4, \ |\mathcal{Y}_1\cap \mathcal{Y}_2\cap \mathcal{Y}_3 \cap \mathcal{Y}_4|=b_{1234} \big\} \Big],\end{align*}
    where b and $j_\sigma$ are defined in (4.23) and (4.24), respectively.
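As a numerical illustration of part (i) — this check is ours, not part of the lemma — take $h_1 \equiv 1$ and $\int_{\mathbb{R}^d} f(x)\, \textrm{d}x = 1$. The left-hand side then counts the $(k_1+1)$-element subsets of $\mathcal{P}_n$, so it equals the factorial moment $\mathbb{E}\big[\binom{N}{k_1+1}\big]$ of $N = |\mathcal{P}_n| \sim \text{Poisson}(n)$, while the right-hand side reduces to $n^{k_1+1}/(k_1+1)!$. The identity is easy to confirm:

```python
import math

def poisson_binomial_moment(lam, k, jmax=400):
    """Compute E[C(N, k+1)] for N ~ Poisson(lam) by summing the
    Poisson pmf; jmax truncates the (negligible) tail."""
    total = 0.0
    pmf = math.exp(-lam)  # P(N = 0)
    for j in range(jmax + 1):
        if j >= k + 1:
            total += math.comb(j, k + 1) * pmf
        pmf *= lam / (j + 1)  # advance to P(N = j + 1)
    return total

# Lemma 5.1(i) with h_1 == 1: E[C(N, k1+1)] = n^(k1+1) / (k1+1)!
n, k1 = 5.0, 3
lhs = poisson_binomial_moment(n, k1)
rhs = n ** (k1 + 1) / math.factorial(k1 + 1)
print(abs(lhs - rhs) < 1e-8)  # True
```

Parts (ii)–(iv) admit analogous checks with $h_i \equiv 1$, where both sides again reduce to factorial moments of $N$.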

Acknowledgements

The authors cordially thank the two anonymous referees and the anonymous editor for their insightful and in-depth comments. Their suggestions have allowed us to improve the clarity and flow of the paper. T. O.’s research is partially supported by the National Science Foundation’s Division of Mathematical Sciences Grant #1811428.

References

Adler, R. J. (2008). Some new random field tools for spatial analysis. Stoch. Environm. Res. Risk Assessment 22, 809.
Billingsley, P. (1999). Convergence of Probability Measures, 2nd edn. John Wiley, New York.
Biscio, C. A. N., Chenavier, N., Hirsch, C. and Svane, A. M. (2020). Testing goodness of fit for point processes via topological data analysis. Electron. J. Statist. 14, 1024–1074.
Bobrowski, O. and Adler, R. J. (2014). Distance functions, critical points, and the topology of random Čech complexes. Homol. Homotopy Appl. 16, 311–344.
Bobrowski, O. and Kahle, M. (2018). Topology of random geometric complexes: a survey. J. Appl. Comput. Topol. 1, 331–364.
Bobrowski, O. and Mukherjee, S. (2015). The topology of probability distributions on manifolds. Prob. Theory Relat. Fields 161, 651–686.
Carlsson, G. (2009). Topology and data. Bull. Amer. Math. Soc. 46, 255–308.
Crawford, L., Monod, A., Chen, A. X., Mukherjee, S. and Rabadán, R. (2019). Predicting clinical outcomes in glioblastoma: an application of topological and functional data analysis. J. Amer. Statist. Assoc. 115, 1139–1150.
Decreusefond, L., Ferraz, E., Randriambololona, H. and Vergne, A. (2014). Simplicial homology of random configurations. Adv. Appl. Prob. 46, 325–347.
Edelsbrunner, H. and Harer, J. (2010). Computational Topology: an Introduction. American Mathematical Society, Providence, RI.
Goel, A., Trinh, K. D. and Tsunoda, K. (2019). Strong law of large numbers for Betti numbers in the thermodynamic regime. J. Statist. Phys. 174, 865–892.
Hatcher, A. (2002). Algebraic Topology. Cambridge University Press.
Hiraoka, Y., Shirai, T. and Trinh, K. D. (2018). Limit theorems for persistence diagrams. Ann. Appl. Prob. 28, 2740–2780.
Hug, D., Last, G. and Schulte, M. (2016). Second-order properties and central limit theorems for geometric functionals of Boolean models. Ann. Appl. Prob. 26, 73–135.
Kahle, M. (2011). Random geometric complexes. Discrete Computat. Geom. 45, 553–573.
Kahle, M. and Meckes, E. (2013). Limit theorems for Betti numbers of random simplicial complexes. Homol. Homotopy Appl. 15, 343–374.
Karatzas, I. and Shreve, S. E. (1991). Brownian Motion and Stochastic Calculus. Springer, New York.
Owada, T. (2017). Functional central limit theorem for subgraph counting processes. Electron. J. Prob. 22, 38 pp.
Owada, T. (2019). Topological crackle of heavy-tailed moving average processes. Stoch. Process. Appl. 129, 4965–4997.
Owada, T. and Thomas, A. M. (2020). Limit theorems for process-level Betti numbers for sparse and critical regimes. Adv. Appl. Prob. 52, 1–31.
Penrose, M. D. (2000). Central limit theorems for k-nearest neighbour distances. Stoch. Process. Appl. 85, 295–320.
Penrose, M. D. (2003). Random Geometric Graphs. Oxford University Press.
Schneider, R. and Weil, W. (2008). Stochastic and Integral Geometry. Springer, New York.
Whitt, W. (2002). Stochastic-Process Limits: an Introduction to Stochastic-Process Limits and Their Application to Queues. Springer, New York.
Yogeshwaran, D., Subag, E. and Adler, R. J. (2017). Random geometric complexes in the thermodynamic regime. Prob. Theory Relat. Fields 167, 107–142.

Figure 1. A family of Vietoris–Rips complexes.