
Limit theory for unbiased and consistent estimators of statistics of random tessellations

Published online by Cambridge University Press:  16 July 2020

Daniela Flimmel*
Affiliation:
Charles University
Zbyněk Pawlas*
Affiliation:
Charles University
J. E. Yukich*
Affiliation:
Lehigh University
*Postal address: Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics, Charles University, Sokolovská 83, 186 75 Praha 8, Czech Republic. E-mail: daniela.flimmel@karlin.mff.cuni.cz
**Postal address: Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics, Charles University, Sokolovská 83, 186 75 Praha 8, Czech Republic. E-mail: pawlas@karlin.mff.cuni.cz
***Postal address: Department of Mathematics, Lehigh University, 14 E. Packer Ave, Bethlehem, PA 18015. E-mail: jey0@lehigh.edu

Abstract

We observe a realization of a stationary weighted Voronoi tessellation of the d-dimensional Euclidean space within a bounded observation window. Given a geometric characteristic of the typical cell, we use the minus-sampling technique to construct an unbiased estimator of the average value of this geometric characteristic. Under mild conditions on the weights of the cells, we establish variance asymptotics and the asymptotic normality of the unbiased estimator as the observation window tends to the whole space. Moreover, weak consistency is shown for this estimator.

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction

Tessellations, also known as mosaics, are an important model in stochastic geometry [Reference Chiu, Stoyan, Kendall and Mecke5, Reference Schneider and Weil16] and have numerous applications in engineering and the natural sciences [Reference Okabe, Boots, Sugihara and Chiu11]. This paper focuses on random Voronoi tessellations of $\mathbb{R}^d$ , as well as the so-called weighted Voronoi tessellations. We shall be interested in developing the limit theory for unbiased and consistent estimators of statistics of a typical cell in a weighted Voronoi tessellation.

The estimators are constructed by observing the tessellation within a bounded window. Unbiased estimators are constructed by considering only those cells which lie within the bounded window. This technique, known as minus-sampling, has a long history going back to Miles [Reference Miles9] as well as Horvitz and Thompson; see [Reference Baddeley1] for details. In this paper we use stabilization methods to develop expectation and variance asymptotics, as well as central limit theorems, for unbiased and asymptotically consistent estimators of geometric statistics of a typical cell.

Letting $(\Omega, \mathcal{F}, \mathbb{P})$ be the common probability space, weighted Voronoi tessellations are defined as follows. Let $\mathcal{P}$ be a unit intensity stationary marked point process in $\mathbb{R}^d$ with mark space $\mathbb{M} \coloneqq [0,\mu]$ for some constant $\mu < \infty$ . Let $\mathcal{B}(\mathbb{M})$ be the Borel $\sigma$ -algebra on $\mathbb{M}$ , and let $\mathbb{Q}_\mathbb{M}$ be the mark distribution; for its definition see [Reference Schneider and Weil16, Section 3.5]. The elements of $\mathbb{R}^d \times \mathbb{M}$ will be denoted by $\hat{x} \coloneqq (x,m_x)$ . We introduce a weight function $\rho: \mathbb{R}^d \times (\mathbb{R}^d \times \mathbb{M}) \to \mathbb{R}$ which, for each $\hat{x} \in \mathcal{P}$ , generates the weighted cell

\begin{equation*}{C^{\rho}}(\hat{x},\mathcal{P}) \coloneqq \left\{ y \in \mathbb{R}^d\,{:}\,\rho(y,\hat{x}) \leq \rho(y,\hat{z})\;\text{for all}\; \hat{z} \in \mathcal{P}\right\}.\end{equation*}

Letting $\|x\|$ denote the Euclidean norm of x, we focus on the following well-known weights:

  1. (i) Voronoi cell: $\rho_1(y,\hat{x}) \coloneqq \|x-y\|$ ,

  2. (ii) Laguerre cell: $\rho_2(y,\hat{x}) \coloneqq \|x-y\|^2 - m_x^2$ ,

  3. (iii) Johnson–Mehl cell: $\rho_3(y,\hat{x}) \coloneqq \|x-y\| - m_x$ .

Notice that the Lebesgue measure of ${C^{\rho}}(\hat{x},\mathcal{P})$ increases with increasing $m_x$ . Voronoi and Laguerre cells are convex, whereas Johnson–Mehl cells need not be convex. The weight functions $\rho_i( \cdot, \hat{x})$ , $i = 1,2,3$ , generate the Voronoi, Laguerre [Reference Lautensack and Zuyev7], and Johnson–Mehl tessellations [Reference Møller10], respectively, and are often called the power of the point x; see [Reference Schneider and Weil16, Section 10.2]. When $\mathcal{P}$ is a marked Poisson point process we shall refer to these tessellations as weighted Poisson–Voronoi tessellations.
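
The membership rule defining these cells is straightforward to evaluate. The following minimal Python sketch (an illustration only; the point pattern, the marks, and all function names are our own assumptions, not part of the construction above) assigns a query point y to the generator minimizing the chosen weight, which is precisely the rule $y \in {C^{\rho}}(\hat{x},\mathcal{P})$ if and only if $\rho(y,\hat{x}) \leq \rho(y,\hat{z})$ for all $\hat{z} \in \mathcal{P}$.

```python
import numpy as np

# Weight functions rho_1, rho_2, rho_3; x is a generator location, m_x its mark.
def rho_voronoi(y, x, m_x):
    return np.linalg.norm(x - y)

def rho_laguerre(y, x, m_x):
    return np.linalg.norm(x - y) ** 2 - m_x ** 2

def rho_johnson_mehl(y, x, m_x):
    return np.linalg.norm(x - y) - m_x

def generator_of(y, points, marks, rho):
    """Index of the generator whose weighted cell contains y
    (ties are broken by taking the first minimizer)."""
    weights = [rho(y, x, m) for x, m in zip(points, marks)]
    return int(np.argmin(weights))

# Toy marked point pattern in the plane; mark space M = [0, mu] with mu = 0.2.
rng = np.random.default_rng(1)
points = rng.uniform(0.0, 1.0, size=(10, 2))
marks = rng.uniform(0.0, 0.2, size=10)
y = np.array([0.5, 0.5])
print(generator_of(y, points, marks, rho_laguerre))
```

Increasing the mark $m_x$ decreases $\rho_2(y,\hat{x})$ and $\rho_3(y,\hat{x})$ at every y, so the corresponding cell can only grow, in line with the observation above.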

We next consider the notion of a typical cell of a stationary random tessellation defined by the weight $\rho$ . By the typical cell $K^{\rho}_{{\bf 0}} \coloneqq K^{\rho}_{{\bf 0}}(\mathcal{P})$ we understand the cell generated by a typical point of $\mathcal{P}$ . This can be formally introduced by considering the Palm probability $\mathbb{P}^0$ which corresponds to $\mathbb{P}$ conditional on the event that $\mathcal{P}$ has a point at the origin. Expectation with respect to $\mathbb{P}^0$ is denoted by $\mathbb{E}^0$ . In the case of Laguerre or Johnson–Mehl tessellations the typical cell $K^\rho_{\bf 0}$ could satisfy $K^\rho_{\bf 0} = \emptyset$ . This is different from the definition of the typical cell described in, e.g., [Reference Schneider and Weil16, Section 10.4], where the typical cell is meant to be the typical non-empty cell. For a Voronoi tessellation both approaches coincide. For weighted Voronoi tessellations, $K_{\bf 0}^\rho$ is distributed as a mixture of the typical non-empty cell and the empty cell with mixture weights $1-p_\emptyset$ and $p_\emptyset$ , where $p_\emptyset$ is the probability that the cell generated by the typical point is empty. Let $Q^{\rho}$ be the distribution of the typical cell.

Denote by ${\bf F}^{d}$ the space of all closed subsets of $\mathbb{R}^d$ equipped with the Borel $\sigma$ -algebra $\mathcal{B}({\bf F}^d)$ generated by the open sets from the Fell topology; see [Reference Schneider and Weil16, Definition 2.1.1]. Let $h: ({\bf F}^d, \mathcal{B}({\bf F}^d)) \rightarrow (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ describe a geometric characteristic of elements of ${\bf F}^{d}$ (e.g. diameter, volume) such that $h(\emptyset) = 0$ .

We have two goals: (i) use minus-sampling to construct unbiased estimators of $\mathbb{E}^0 h(K^{\rho}_{\bf 0}) = \int h(K) \, Q^{\rho} (\textup{d} K)$ , and (ii) establish variance asymptotics and asymptotic normality for such estimators. As a by-product, we also establish the limit theory for geometric statistics of Laguerre and Johnson–Mehl tessellations, adding to the results of [Reference Penrose12, Reference Penrose and Yukich14] which are confined to Voronoi tessellations.

2. Main results

Denote by $\hat{\mathbb{R}^d}$ the Cartesian product of $\mathbb{R}^d$ and $\mathbb{M}$ . Let ${\bf N}$ be the set of all locally finite marked counting measures on $\hat{\mathbb{R}^d}$ . An element of ${\bf N}$ can be interpreted as a marked point configuration, and we therefore treat it as a set in the notation. The set ${\bf N}$ is equipped with the standard $\sigma$ -algebra $\mathcal{N}$ , namely the smallest $\sigma$ -algebra such that all mappings $\pi_A\,{:}\,{\bf N} \to \mathbb{N} \cup \{0,\infty\}$ , $\mathcal{P} \mapsto \mathcal{P}(A)$ , $A \in \mathcal{B}(\hat{\mathbb{R}^d})$ , are measurable.

Define, for $z \in \mathbb{R}^d$ and $\hat{x} \in \mathcal{P}$ ,

\begin{equation*}C^{\rho}_z(\hat{x}, \mathcal{P}) \coloneqq {C^{\rho}}(\hat{x}, \mathcal{P}) + (z - x).\end{equation*}

Thus, ${C^{\rho}}(\hat{x}, \mathcal{P}) = x + C^{\rho}_{{\bf 0}}(\hat{x}, \mathcal{P})$ , where ${\bf 0}$ denotes the origin of $\mathbb{R}^d$ . Note that $K_{\bf 0}^\rho = C^\rho_{\bf 0}(({\bf 0},M_{\bf 0}),\mathcal{P})$ $\mathbb{P}^0$ -almost surely, where $M_{\bf 0}$ is a typical mark distributed according to $\mathbb{Q}_\mathbb{M}$ .

Recall that $h\,{:}\,{\bf F}^d \to \mathbb{R}$ measures a geometric characteristic of elements of ${\bf F}^{d}$ . We assume that h is invariant with respect to shifts, namely, for all $x \in \mathbb{R}^d$ and $m_x \in \mathbb{M}$ ,

\begin{equation*}h({C^{\rho}}((x,m_x), \mathcal{P})) = h(x + C^{\rho}_{{\bf 0}}((x,m_x), \mathcal{P})) = h(C^{\rho}_{{\bf 0}}((x,m_x), \mathcal{P})).\end{equation*}

Put $W_\lambda \coloneqq \big[ -\frac{\lambda^{1/d}} {2}, \frac{\lambda^{1/d}} {2}\big]^d$ and $\hat{W}_\lambda \coloneqq W_\lambda \times \mathbb{M}$ , $\lambda>0$ . Given h and a tessellation defined by the weight $\rho$ , we define, for all $\lambda > 0$ ,

\begin{equation*}H_{\lambda}^{\rho}(\mathcal{P} \cap \hat{W}_\lambda ) \coloneqq \sum_{\hat{x} \in \mathcal{P} \cap \hat{W}_\lambda} \frac{ h({C^{\rho}}(\hat{x}, \mathcal{P})) } { \textrm{Vol}( W_\lambda \ominus {C^{\rho}}(\hat{x}, \mathcal{P}) ) } \,{\bf 1} \{ {C^{\rho}}(\hat{x}, \mathcal{P}) \subseteq W_\lambda \}.\end{equation*}

Here, for sets A and B, $A \ominus B \coloneqq \{x \in \mathbb{R}^d\,{:}\, B+x \subseteq A\}$ denotes the erosion of A by B. The statistic $H_{\lambda}^{\rho}(\mathcal{P} \cap \hat{W}_\lambda )$ disregards cells contained in the window $W_\lambda$ that are generated by the points outside $W_\lambda$ . Such cells do not exist in the Voronoi case but they could appear for weighted cells. Therefore, we also consider

\begin{equation*}H_{\lambda}^{\rho}(\mathcal{P}) \coloneqq \sum_{\hat{x} \in \mathcal{P}} \frac{ h({C^{\rho}}(\hat{x}, \mathcal{P})) } { \textrm{Vol}( W_\lambda \ominus {C^{\rho}}(\hat{x}, \mathcal{P}) ) }\, {\bf 1}\{ {C^{\rho}}(\hat{x}, \mathcal{P}) \subseteq W_\lambda \}.\end{equation*}
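
For concreteness, here is a minimal simulation sketch of $H_{\lambda}^{\rho}(\mathcal{P})$ in the planar Poisson–Voronoi case ( $\rho = \rho_1$ , $d=2$ ) with the cell characteristic $h = \textrm{Vol}$. It is an illustration under several assumptions of our own: the cells are built with scipy.spatial.Voronoi, the process is simulated on a slightly enlarged box so that the cells contained in $W_\lambda$ are reconstructed correctly up to boundary effects, and the erosion volume uses the fact that, for the cubical window, $\textrm{Vol}(W_\lambda \ominus C) = \prod_i (\lambda^{1/d} - \textrm{extent}_i(C))_+$, where $\textrm{extent}_i(C)$ is the i-th side length of the bounding box of C. For a unit-intensity Poisson–Voronoi tessellation, $\mathbb{E}^0 \textrm{Vol}(K_{\bf 0}) = 1$, which gives a quick sanity check.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
lam = 400.0                       # Vol(W_lambda), here d = 2
side = lam ** 0.5                 # lambda^{1/d}
pad = 5.0                         # enlarge the simulation box to reduce edge effects

# Unit-intensity Poisson process on the enlarged box.
big = side + 2.0 * pad
n = rng.poisson(big ** 2)
pts = rng.uniform(-big / 2.0, big / 2.0, size=(n, 2))
vor = Voronoi(pts)

def cell_vertices(i):
    """Vertices of the Voronoi cell of input point i, or None if the cell is unbounded."""
    region = vor.regions[vor.point_region[i]]
    if len(region) == 0 or -1 in region:
        return None
    return vor.vertices[region]

def polygon_area(verts):
    """Area of a convex polygon via the shoelace formula (vertices sorted by angle)."""
    c = verts.mean(axis=0)
    order = np.argsort(np.arctan2(verts[:, 1] - c[1], verts[:, 0] - c[0]))
    x, y = verts[order, 0], verts[order, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(np.roll(x, -1), y))

estimate = 0.0
for i in range(len(pts)):
    verts = cell_vertices(i)
    if verts is None:
        continue
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    if np.any(lo < -side / 2.0) or np.any(hi > side / 2.0):
        continue                                   # keep only cells with C subseteq W_lambda
    erosion_vol = np.prod(np.maximum(side - (hi - lo), 0.0))
    if erosion_vol == 0.0:
        continue
    estimate += polygon_area(verts) / erosion_vol  # h(C) / Vol(W_lambda ominus C)

print("minus-sampling estimate of E^0 Vol(K_0):", estimate)  # should be close to 1
```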

Our first main result has a proof which is short and illustrative. The result holds if $W_\lambda$ is replaced by any observation window. It is a special case of a more general result given in [Reference Baddeley1, Theorem 2.1] and formulated for stationary germ-grain models and general sampling rules.

Theorem 2.1. Let $\mathcal{P}$ be a stationary marked point process with unit intensity and mark distribution $\mathbb{Q}_\mathbb{M}$ . Let $h\,{:}\, {\bf F}^d \rightarrow \mathbb{R}$ be translation invariant as above. Then, for all $\lambda > 0$ , the statistic $ H_{\lambda}^{\rho}(\mathcal{P})$ is an unbiased estimator of $\mathbb{E}^0 h(K^{\rho}_{\bf 0})$ .

Proof of Theorem 2.1. We have

\begin{align*}\mathbb{E} H_{\lambda}^\rho(\mathcal{P}) &= \mathbb{E} \sum_{\hat{x} \in \mathcal{P}} \frac{ h(C^\rho(\hat{x}, \mathcal{P})) } { \textrm{Vol}( W_\lambda \ominus C^\rho(\hat{x}, \mathcal{P}) ) }\, {\bf 1}\{ C^\rho(\hat{x},\mathcal{P}) \subseteq W_\lambda \} \\&= \mathbb{E} \sum_{\hat{x} \in \mathcal{P}} \frac{ h(C_{{\bf 0}}^\rho(\hat{x}, \mathcal{P}) ) } { \textrm{Vol}( W_\lambda \ominus C_{{\bf 0}}^\rho(\hat{x}, \mathcal{P}) ) } \, {\bf 1}\{ x + C_{{\bf 0}}^\rho(\hat{x},\mathcal{P}) \subseteq W_\lambda \} \\&= \int_{\mathbb{R}^d} \mathbb{E}^0 \left( \frac{ h( K_{{\bf 0}}^\rho ) } { \textrm{Vol}( W_\lambda \ominus K_{{\bf 0}}^{\rho} ) }\, {\bf 1}\{ x + K_{{\bf 0}}^{\rho} \subseteq W_\lambda \} \right) \,\textup{d} x \\&= \mathbb{E}^0 \int_{\mathbb{R}^d} \left( \frac{ h( K_{{\bf 0}}^\rho ) } { \textrm{Vol}( W_\lambda \ominus K_{{\bf 0}}^\rho ) }\, {\bf 1}\{ x \in W_\lambda \ominus K_{{\bf 0}}^\rho \} \right) \,\textup{d} x \\& = \mathbb{E}^0 h( K_{{\bf 0}}^\rho ),\end{align*}

where the second equality uses the translation invariance of h and the translation invariance of erosions, the third uses the refined Campbell theorem for stationary marked point processes [Reference Schneider and Weil16, Theorem 3.5.3], and the fourth uses Fubini’s theorem. Hence, we have shown the unbiasedness of $H_{\lambda}^\rho(\mathcal{P})$ .

Controlling the moments of $H_{\lambda}^{\rho}(\mathcal{P} \cap \hat{W}_\lambda )$ is problematic since $\textrm{Vol}( W_\lambda \ominus {C^{\rho}}(\hat{x}, \mathcal{P}) )$ may become arbitrarily small. It will therefore be convenient to consider versions of $H_{\lambda}^{\rho}(\mathcal{P} \cap \hat{W}_\lambda ) $ and $H_{\lambda}^{\rho}(\mathcal{P})$ given by

\begin{equation*}\hat{H}_\lambda ^{\rho}(\mathcal{P} \cap \hat{W}_\lambda ) \coloneqq \sum_{\hat{x} \in \mathcal{P} \cap \hat{W}_\lambda} \frac{ h({C^{\rho}}(\hat{x}, \mathcal{P}))\,{\bf 1} \{ {C^{\rho}}(\hat{x},\mathcal{P}) \subseteq W_\lambda \}}{ \textrm{Vol}( W_\lambda \ominus {C^{\rho}}(\hat{x}, \mathcal{P}) ) }\, {\bf 1}\bigg\{ \textrm{Vol}(W_\lambda \ominus {C^{\rho}}(\hat{x}, \mathcal{P}) ) \geq \frac{\lambda} {2} \bigg\}\end{equation*}

and

\begin{equation*}\hat{H}_\lambda ^{\rho}(\mathcal{P}) \coloneqq \sum_{\hat{x} \in \mathcal{P}} \frac{ h({C^{\rho}}(\hat{x}, \mathcal{P})) \, {\bf 1}\{ {C^{\rho}}(\hat{x},\mathcal{P}) \subseteq W_\lambda \}} { \textrm{Vol}( W_\lambda \ominus {C^{\rho}}(\hat{x}, \mathcal{P}) ) }\, {\bf 1}\bigg\{ \textrm{Vol}(W_\lambda \ominus {C^{\rho}}(\hat{x}, \mathcal{P}) ) \geq \frac{\lambda} {2} \bigg\}.\end{equation*}

Note that $ H_{\lambda}^{\rho}(\mathcal{P} \cap \hat{W}_\lambda)$ , $ \hat{H}_{\lambda}^{\rho}(\mathcal{P})$ , and $ \hat{H}_{\lambda}^{\rho}(\mathcal{P} \cap \hat{W}_\lambda)$ are not unbiased. Under the assumptions of Theorem 2.1, one instead has

\begin{align*} \mathbb{E} H_{\lambda}^{\rho}(\mathcal{P} \cap \hat{W}_\lambda ) & = \mathbb{E}^0 \left(h(K^{\rho}_{\bf 0})\frac{\textrm{Vol}(W_\lambda \cap (W_\lambda \ominus K^{\rho}_{\bf 0}))}{\textrm{Vol}(W_\lambda \ominus K^{\rho}_{\bf 0})}\right),\\[5pt] \mathbb{E} \hat{H}_{\lambda}^{\rho}(\mathcal{P} \cap \hat{W}_\lambda ) & = \mathbb{E}^0 \left(h(K^{\rho}_{\bf 0})\frac{\textrm{Vol}(W_\lambda \cap (W_\lambda \ominus K^{\rho}_{\bf 0}))}{\textrm{Vol}(W_\lambda \ominus K^{\rho}_{\bf 0})}\,{\bf 1}\bigg\{\textrm{Vol}(W_\lambda \ominus K^{\rho} _{\bf 0}) \geq \frac{\lambda} {2} \bigg\}\right), \end{align*}

and

\begin{equation*} \mathbb{E} \hat{H}_{\lambda}^{\rho}(\mathcal{P}) = \mathbb{E}^0 \left(h(K^{\rho}_{\bf 0}) {\bf 1} \bigg\{\textrm{Vol}(W_\lambda \ominus K^{\rho}_{\bf 0}) \geq \frac{\lambda} {2} \bigg\}\right).\\\end{equation*}

The general form of the bias is given by Theorem 2.1 of [Reference Baddeley1].

We need some additional terminology. For every weight $\rho$ and geometric statistic h we define the score $\xi^{\rho}\,{:}\, \hat{\mathbb{R}^d} \times {\bf N} \to \mathbb{R}$ by

(2.1) \begin{equation}\xi^{\rho}(\hat{x}, \mathcal{A}) \coloneqq h({C^{\rho}}(\hat{x}, \mathcal{A})){\bf 1}\{{C^{\rho}}(\hat{x},\mathcal{A}) \text{ is bounded}\}, \qquad \hat{x} \in \hat{\mathbb{R}^d},\ \mathcal{A} \in {\bf N}.\end{equation}

We use this representation to explicitly link our statistics with the stabilizing statistics in the literature [Reference Baryshnikov and Yukich2, Reference Błaszczyszyn, Yogeshwaran and Yukich4, Reference Lachièze-Rey, Schulte and Yukich6, Reference Penrose12, Reference Penrose13, Reference Penrose and Yukich14, Reference Penrose and Yukich15]. Translation invariance for h implies

\begin{equation*}\xi^{\rho}(\hat{x}, \mathcal{A}) = \xi^{\rho}((x, m_x), \mathcal{A}) = \xi^{\rho}(({\bf 0}, m_x),\mathcal{A}-x) \end{equation*}

for every $\hat{x} \in \hat{\mathbb{R}^d}$ , $\hat{x} \coloneqq (x, m_x)$ , and $\mathcal{A} \in {\bf N} $ , where $\mathcal{A} - x \coloneqq \{(a - x, m_a)\,{:}\, (a, m_a) \in \mathcal{A}\}$ . If ${C^{\rho}}(\hat{x},\mathcal{P})$ is empty we have $\xi^{\rho}(\hat{x},\mathcal{P}) = h(\emptyset) = 0$ . Write ${C^{\rho}}(\hat{x},\mathcal{P}) \coloneqq {C^{\rho}}(\hat{x},\mathcal{P} \cup \{\hat{x}\})$ for $\hat{x} \not\in \mathcal{P}$ , so $\xi^{\rho}(\hat{x},\mathcal{P}) \coloneqq \xi^{\rho}(\hat{x},\mathcal{P} \cup \{\hat{x}\})$ for $\hat{x} \not\in \mathcal{P}$ .

Let $\mathbb{P}^{\hat{x}}$ , respectively $\mathbb{P}^{\hat{x},\hat{y}}$ , denote the Palm probability measures conditioned on $\mathcal{P}$ having an additional marked point $\hat{x}$ , respectively two additional marked points $\hat{x}$ and $\hat{y}$ . In particular, $\mathbb{P}^0(\!\cdot\!) = \int_\mathbb{M} \mathbb{P}^{({\bf 0},m)}(\!\cdot\!)\,\mathbb{Q}_\mathbb{M}(\textup{d} m)$ . By $\mathbb{E}^{\hat{x}}$ , respectively $\mathbb{E}^{\hat{x},\hat{y}}$ , we denote expectation with respect to $\mathbb{P}^{\hat{x}}$ , respectively $\mathbb{P}^{\hat{x},\hat{y}}$ .

Definition 2.1. The score $\xi^{\rho}$ is said to satisfy a p-moment condition, $p \in [1, \infty)$ , if

(2.2) \begin{equation} \sup_{\hat{x}, \hat{y} \in \hat{\mathbb{R}^d} } \mathbb{E}^{\hat{x},\hat{y}} | \xi^{\rho}(\hat{x}, \mathcal{P})|^p < \infty.\end{equation}

For $r \in (0, \infty)$ and $y \in \mathbb{R}^d$ , we denote by $B_r(y)$ the closed Euclidean ball of radius r centered at y.

Definition 2.2. We say that the cells of the tessellation defined by $\rho$ and generated by $\mathcal{P}$ have diameters with exponentially decaying tails if there is a constant $c_\textrm{diam} \in (0, \infty)$ such that, for all $\hat{x} \coloneqq (x,m_x) \in \mathcal{P}$ , there exists an almost surely finite random variable $D_{\hat{x}}$ such that ${C^{\rho}}(\hat{x},\mathcal{P}) \subseteq B_{D_{\hat{x}}}(x)$ and

(2.3) \begin{equation}\mathbb{P}^{\hat{x}}(D_{\hat{x}} \ge t) \le c_\textrm{diam}\exp\left(-\frac{1}{c_\textrm{diam}}\, t^d \right), \qquad t \geq 0.\end{equation}

Definition 2.3. We say that $\xi^{\rho}$ is stabilizing with respect to $\mathcal{P}$ if, for all $\hat{x} \coloneqq (x,m_x) \in \mathcal{P}$ , there exists an almost surely finite random variable $R_{\hat{x}} \coloneqq R_{\hat{x}}(\mathcal{P})$ , henceforth called a radius of stabilization, such that

(2.4) \begin{equation} \xi^{\rho}(\hat{x},(\mathcal{P} \cup \mathcal{A}) \cap \hat{B}_{R_{\hat{x}}}(x)) = \xi^{\rho}(\hat{x},\mathcal{P} \cup \mathcal{A})\end{equation}

for all $\mathcal{A} \subseteq \hat{\mathbb{R}^d}$ with $\text{card}( \mathcal{A} ) \leq 7$ , where $\hat{B}_r(y) \coloneqq B_r(y) \times \mathbb{M}$ . We say that $\xi^{\rho}$ is exponentially stabilizing with respect to $\mathcal{P}$ if there are constants $c_\textrm{stab}, \alpha \in (0, \infty)$ such that

\begin{equation*}\mathbb{P}^{\hat{x}}(R_{\hat{x}} \geq t) \leq c_\textrm{stab}\exp\left(-\frac{1}{c_\textrm{stab}}\,t^\alpha \right), \qquad t \ge 0.\end{equation*}

In other words, $\xi^{\rho}$ is stabilizing with respect to $\mathcal{P}$ if there is $R_{\hat{x}}$ such that the cell ${C^{\rho}}(\hat{x},\mathcal{P})$ is not affected by changes in point configurations outside $\hat{B}_{R_{\hat{x}}}(x)$ . The condition $\text{card}( \mathcal{A} ) \leq 7$ is chosen in accordance with the approach in [Reference Lachièze-Rey, Schulte and Yukich6], which requires controlling moments of difference operators with possibly seven extra points inserted into a Poisson point process.

By $\eta_s$ , $s \in (0, \infty)$ , we denote a homogeneous marked Poisson point process with values in $\mathbb{R}^d \times \mathbb{M}$ and such that the unmarked process on $\mathbb{R}^d$ has rate s. We write $\eta$ for $\eta_1$ . Our next results establish the limit theory for the above estimators.

Theorem 2.2. Let $M_{\bf 0}$ be a random mark distributed according to $\mathbb{Q}_\mathbb{M}$ .

  1. (i) If $\xi^{\rho}$ satisfies the p-moment condition (2.2) for some $p \in (1, \infty)$ and if the cell $C^{\rho}(({\bf 0},M_{\bf 0}),\eta)$ has a diameter with an exponentially decaying tail, then $H_{\lambda}^{\rho}(\eta \cap \hat{W}_\lambda)$ , $\hat{H}_{\lambda}^{\rho}(\eta)$ , and $\hat{H}_{\lambda}^{\rho}(\eta \cap \hat{W}_\lambda)$ are asymptotically unbiased estimators of $\mathbb{E}^0 h(K^{\rho}_{\bf 0}(\eta))$ .

  2. (ii) Under the conditions of (i), and assuming that $\xi^\rho$ stabilizes with respect to $\eta$ as at (2.4), the statistics $ H_{\lambda}^{\rho}(\eta)$ , $ H_{\lambda}^{\rho}(\eta \cap \hat{W}_\lambda)$ , $ \hat{H}_{\lambda}^{\rho}(\eta)$ , and $\hat{H}_{\lambda}^{\rho}(\eta \cap \hat{W}_\lambda)$ are consistent estimators of $\mathbb{E}^0 h(K^{\rho}_{\bf 0}(\eta))$ .

Given the score $\xi^{\rho}$ at (2.1), put

(2.5) \begin{align}\sigma^2(\xi^{\rho}) \coloneqq & \mathbb{E} (\xi^{\rho} ({\bf 0}_M, \eta))^2\\ & + \int_{\mathbb{R}^d} \left[\mathbb{E} \xi^{\rho}({\bf 0}_M, \eta \cup \{x_M\}) \xi^{\rho}(x_M, \eta \cup \{{\bf 0}_M\}) - \mathbb{E} \xi^{\rho}({\bf 0}_M, \eta)\,\mathbb{E}\xi^{\rho}(x_M, \eta)\right] \,\textup{d} x, \nonumber\end{align}

where ${\bf 0}_M \coloneqq ({\bf 0},M_{\bf 0})$ , $x_M \coloneqq (x,M_x)$ , and $M_{\bf 0}$ and $M_x$ are independent random marks distributed according to $\mathbb{Q}_\mathbb{M}$ . Note that $\mathbb{E}^0 h(K^{\rho}_{{\bf 0}}(\eta)) = \mathbb{E} h(C^\rho({\bf 0}_M, \eta)) = \mathbb{E} \xi^\rho({\bf 0}_M,\eta)$ by the Slivnyak theorem [Reference Schneider and Weil16, Theorem 3.5.9]. Here, we use that, given the Poisson process $\eta$ , the Palm distribution corresponds to the usual distribution with a point inserted at the origin.

Theorem 2.3. Let h be translation invariant and assume that $\xi^{\rho}$ is exponentially stabilizing with respect to $\eta$ .

  1. (i) If $\xi^{\rho}$ satisfies the p-moment condition (2.2) for some $p \in (2, \infty)$ , then

    (2.6) \begin{equation} \lim_{\lambda \to \infty} \lambda \textrm{Var} \hat{H}_{\lambda}^{\rho}(\eta \cap \hat{W}_\lambda ) = \lim_{\lambda \to \infty} \lambda \textrm{Var} \hat{H}_{\lambda}^{\rho}(\eta)= \sigma^2(\xi^{\rho}) \in [0, \infty).\end{equation}
  2. (ii) If $\sigma^2(\xi^{\rho}) \in (0, \infty)$ and if the p-moment condition (2.2) holds for some $p \in (4, \infty)$ , then

    \begin{equation*} \sqrt{ \lambda} \left( H_{\lambda}^{\rho}(\eta \cap \hat{W}_\lambda ) - \mathbb{E} H_{\lambda}^{\rho}(\eta \cap \hat{W}_\lambda ) \right) \mathop{\stackrel{\textrm{D}}{\longrightarrow}}\limits_{\lambda \to \infty} N(0, \sigma^2(\xi^{\rho}))\end{equation*}
    and
    \begin{equation*} \sqrt{ \lambda} \left( H_{\lambda}^{\rho}(\eta ) - \mathbb{E}^0 h(K^{\rho}_{\bf 0} (\eta) ) \right) \mathop{\stackrel{\textrm{D}}{\longrightarrow}}\limits_{\lambda \to \infty} N(0, \sigma^2(\xi^{\rho})),\end{equation*}
    where $N(0, \sigma^2(\xi^{\rho}))$ denotes a mean-zero Gaussian random variable with variance $ \sigma^2(\xi^{\rho})$ .

Remark 2.1. The assumption $\sigma^2(\xi^{\rho}) \in (0, \infty)$ is often satisfied by scores of interest, as seen in the upcoming applications. According to Theorem 2.1 in [Reference Penrose and Yukich14], whenever we have

\begin{equation*}\frac{\sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} (\xi^\rho (\hat{x}, \eta) - \mathbb{E} \xi^\rho(\hat{x}, \eta))}{\sqrt{\textrm{Var} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \xi^\rho (\hat{x}, \eta)}} \stackrel{\textrm{D}}{\longrightarrow} N(0, \sigma^2(\xi^\rho))\end{equation*}

then necessarily $\sigma^2(\xi^{\rho}) \in (0, \infty)$ provided (a) there is a random variable $S < \infty$ and a random variable $\Delta^\rho(\infty)$ such that, for all finite ${\mathcal{A}}\subseteq \hat{B}_S({\bf 0})^c$ , we have

\begin{align*}\Delta^\rho(\infty) = & \sum_{ \hat{x} \in (\eta \cap \hat{B}_S({\bf 0})) \cup {\mathcal{A}} \cup \{{\bf 0}_M \} } \xi^\rho(\hat{x}, (\eta \cap \hat{B}_S({\bf 0})) \cup {\mathcal{A}} \cup \{{\bf 0}_M\})\\ & - \sum_{ \hat{x} \in (\eta \cap \hat{B}_S({\bf 0})) \cup {\mathcal{A}} } \xi^\rho(\hat{x}, (\eta \cap \hat{B}_S({\bf 0})) \cup {\mathcal{A}} ), \end{align*}

and (b) $\Delta^\rho(\infty)$ is non-degenerate, that is to say it is not almost surely constant. We will use this fact in showing positivity of $\sigma^2(\xi^{\rho})$ in the applications which follow.

Remark 2.2. Theorems 2.2 and 2.3 hold for translation-invariant statistics h of Poisson–Voronoi cells regardless of the mark distribution, because $\xi^{\rho_1}$ stabilizes exponentially fast and the diameters of Voronoi cells have exponentially decaying tails, as shown in [Reference Penrose13, Reference Penrose and Yukich14]. In Section 3 we establish that the cells of the Laguerre and Johnson–Mehl tessellations also have diameters with exponentially decaying tails, and that $\xi^{\rho_i}$ , $i=2, 3$ , are exponentially stabilizing with respect to $\eta$ .

2.1. Applications

We now present some applications of our main results; the proofs are given in Section 5. Our first result gives the limit theory for an unbiased estimator of the distribution function of the volume of a typical cell in a weighted Poisson–Voronoi tessellation.

Theorem 2.4.

  1. (i) For all $i= 1, 2, 3$ and $t \in (0, \infty)$ the statistic

    \begin{equation*}\sum_{\hat{x} \in \eta} \frac{ {\bf 1}\{\textrm{Vol} (C^{\rho_i}(\hat{x}, \eta)) \leq t \} } { \textrm{Vol}( W_\lambda \ominus C^{\rho_i}(\hat{x}, \eta) ) }\,{\bf 1}\{ C^{\rho_i}(\hat{x}, \eta) \subseteq W_\lambda \}\end{equation*}
    is an unbiased estimator of $\mathbb{P}^0 (\textrm{Vol}(K^{\rho_i}_{\bf 0} (\eta) ) \leq t)$ .
  2. (ii) For all $i= 1, 2, 3$ and $t \in (0, \infty)$ we have that

    (2.7) \begin{equation} \sqrt{ \lambda} \left( \sum_{\hat{x} \in \eta} \frac{ {\bf 1}\{\textrm{Vol} (C^{\rho_i}(\hat{x}, \eta)) \leq t \} } { \textrm{Vol}( W_\lambda \ominus C^{\rho_i}(\hat{x}, \eta) ) }\, {\bf 1}\{ C^{\rho_i}(\hat{x}, \eta) \subseteq W_\lambda \} - \mathbb{P}^0 \big(\textrm{Vol}(K^{\rho_i}_{\bf 0} (\eta) ) \leq t \big) \right)\end{equation}
    tends to $N(0, \sigma^2(\varphi^{\rho_i}))$ in distribution as $\lambda \to \infty$ , where $\varphi^{\rho_i}(\hat{x}, \eta) \coloneqq {\bf 1}\{\textrm{Vol} (C^{\rho_i}(\hat{x}, \eta)) \leq t \}$ and where $\sigma^2(\varphi^{\rho_i}) \in (0, \infty)$ is given by (2.5).
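
As an illustration (under the same assumptions as the simulation sketch in Section 2, whose variables pts, vor, side, cell_vertices, and polygon_area are reused here and are of course not part of the theorem), the statistic in part (i) for $i=1$ is obtained by replacing the cell volume in the summand by the indicator ${\bf 1}\{\textrm{Vol}(C^{\rho_1}(\hat{x},\eta)) \leq t\}$:

```python
t = 1.5
cdf_estimate = 0.0
for i in range(len(pts)):
    verts = cell_vertices(i)
    if verts is None:
        continue
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    if np.any(lo < -side / 2.0) or np.any(hi > side / 2.0):
        continue
    erosion_vol = np.prod(np.maximum(side - (hi - lo), 0.0))
    if erosion_vol == 0.0:
        continue
    cdf_estimate += float(polygon_area(verts) <= t) / erosion_vol  # score 1{Vol(C) <= t}

print("estimate of P^0(Vol(K_0) <= t):", cdf_estimate)
```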

Our next result gives the limit theory for an unbiased estimator of the $(d-1)$ -dimensional Hausdorff measure $\mathcal{H}^{d-1}$ of the boundary of a typical cell in a weighted Poisson–Voronoi tessellation.

Theorem 2.5.

  1. (i) For all $i = 1,2,3$ we have that

    \begin{equation*} \sum_{\hat{x} \in \eta } \frac{ {\mathcal{H}}^{d-1} ( \partial C^{\rho_i}(\hat{x}, \eta)) } { \textrm{Vol}( W_\lambda \ominus C^{\rho_i}(\hat{x}, \eta) ) }\,{\bf 1}\{ C^{\rho_i}(\hat{x}, \eta) \subseteq W_\lambda \}\end{equation*}
    is an unbiased estimator of $\mathbb{E}^0 \mathcal{H}^{d-1}(\partial K^{\rho_i}_{\bf 0} (\eta) ).$
  2. (ii) For all $i= 1, 2, 3$ we have that

    \begin{equation*} \sqrt{ \lambda} \left( \sum_{\hat{x} \in \eta } \frac{ \mathcal{H}^{d-1} ( \partial C^{\rho_i}(\hat{x}, \eta)) } { \textrm{Vol}( W_\lambda \ominus C^{\rho_i}(\hat{x}, \eta) ) }\,{\bf 1}\{ C^{\rho_i}(\hat{x}, \eta) \subseteq W_\lambda \} - \mathbb{E}^0 \mathcal{H}^{d-1}( \partial K^{\rho_i}_{\bf 0} (\eta) ) \right)\end{equation*}
    tends to $N(0, \sigma^2(\xi^{\rho_i}))$ in distribution as $\lambda \to \infty$ , where
    \begin{equation*}\xi^{\rho_i}(\hat{x}, \eta) \coloneqq \mathcal{H}^{d-1} ( \partial C^{\rho_i}(\hat{x}, \eta))\,{\bf 1}\{C^{\rho_i}(\hat{x},\eta) \text{ is bounded}\}\end{equation*}
    and where $\sigma^2(\xi^{\rho_i}) \in (0, \infty) $ is given by (2.5).

There are naturally other applications of the general theorems. By choosing h appropriately, one could for example use the general results to deduce the limit theory for an unbiased estimator of the distribution function of either the surface area, inradius, or circumradius of a typical cell in a weighted Poisson–Voronoi tessellation.

3. Stabilization of statistics of weighted Poisson–Voronoi tessellations

In this section we establish that (i) the cells in the Voronoi, Laguerre, and Johnson–Mehl tessellations generated by Poisson input have diameters with exponentially decaying tails (see Definition 2.2), and (ii) the scores $\xi^{\rho_i}$ , $i=1, 2, 3$ , as defined at (2.1) are exponentially stabilizing (see Definition 2.3). These two conditions arise in the statements of Theorems 2.2 and 2.3. Conditions (i) and (ii) have already been established for the Poisson–Voronoi tessellation ( $\rho_1$ ) in [Reference Penrose13] and [Reference Penrose and Yukich14]. The Voronoi cell is a special case of both the Laguerre and the Johnson–Mehl cell, obtained by putting $\mathbb{M} = \{0\}$ (or any other constant). Thus it suffices to show that these two conditions hold for the Laguerre ( $\rho_2$ ) and Johnson–Mehl ( $\rho_3$ ) tessellations.

By definition we have

\begin{equation*}{C^{\rho}}(\hat{x},\mathcal{P}) = \bigcap_{\hat{z} \in \mathcal{P} \setminus \{\hat{x}\}} \mathbb{H}_{\hat{z}}^{\rho}(\hat{x}),\end{equation*}

where $\mathbb{H}_{\hat{z}}^{\rho}(\hat{x}) \coloneqq \{y \in \mathbb{R}^d\,{:}\, \rho(y,\hat{x}) \leq \rho(y,\hat{z})\}$ . Note that $\mathbb{H}_{\cdot}^{\rho}(\!\cdot\!)$ is a closed half-space in the context of the Voronoi and Laguerre tessellations, whereas it has a hyperbolic boundary for the Johnson–Mehl tessellation. Tessellations generated by $\mathcal{P}$ are stationary and are examples of stationary particle processes; see [Reference Beneš and Rataj3, Section 2.8] or [Reference Schneider and Weil16, Section 10.1].

Proposition 3.1. The cells of the tessellation defined by $\rho_i$ , $i = 1,2,3$ , and generated by Poisson input $\eta$ have diameters with exponentially decaying tails as at (2.3).

Proof. We need to prove (2.3) for all $\hat{x} \in \eta$ . Without loss of generality, we may assume that $\hat{x}$ is the origin $\hat{{\bf 0}} \coloneqq ({\bf 0}, m_{\bf 0})$ and we denote $D \coloneqq D_{\hat{\bf 0}}$ .

Let ${\mathcal{K}}_j$ , $j=1,\dots,J$ , be a collection of convex cones in $\mathbb{R}^d$ such that $\cup_{j=1}^J {\mathcal{K}}_j = \mathbb{R}^d$ and $\langle x,y\rangle \geq 3\|x\| \|y\| /4$ for any x and y from the same cone $\mathcal{K}_j$ . Each cone has an apex at the origin ${\bf 0}$ . Denote $\hat{\mathcal{K}}_j \coloneqq \mathcal{K}_j \times \mathbb{M}$ . We take $(x_j,m_j) \in \eta \cap {\hat{\mathcal{K}}}_j \cap {\hat{B}}_{2\mu}({\bf{0}})^c$ so that $x_j$ is closer to ${\bf{0}}$ than any other point from $\eta \cap {\hat{\mathcal{K}}}_j \cap {\hat{B}}_{2\mu}({\bf{0}})^c$ . This condition means that the balls $B_{m_{\bf{0}}}({\bf{0}})$ and $B_{m_j}(x_j)$ do not overlap. Then

\begin{equation*}C^{\rho_i}(\hat{\bf 0},\eta) \subseteq \bigcap_{j=1}^J \mathbb{H}_{(x_j,m_j)}^{\rho_i}(\hat{\bf 0}), \qquad i = 1,2,3.\end{equation*}

Therefore, it is sufficient to find D such that for all $i = 1,2, 3$ we have $\mathbb{H}_{(x_j,m_j)}^{\rho_i}(\hat{\bf 0}) \cap \mathcal{K}_j \subseteq B_D({\bf 0})$ for $j=1,\dots,J$ to obtain $C^{\rho_i}(\hat{\bf 0},\eta) \subseteq B_D({\bf 0})$ . Consider $y \in \mathbb{H}_{(x_j,m_j)}^{\rho_i}(\hat{\bf 0}) \cap \mathcal{K}_j$ . Then $\rho_i(y,\hat{\bf 0}) \leq \rho_i(y,(x_j,m_j))$ and $\langle y,x_j\rangle \geq 3\|x_j\| \|y\| /4$ . For the Laguerre cell ( $i=2$ ) the first condition means that $\|y\|^2 - m_{{\bf 0}}^2 \leq \|y - x_j\|^2 - m_j^2 = \|y\|^2 + \|x_j\|^2 - 2\langle y,x_j \rangle - m_j^2$ . Thus,

\begin{equation*}2\langle y,x_j \rangle \le \|x_j\|^2+m_{\bf 0}^2-m_j^2 \le \|x_j\|^2+\mu^2 < \frac32 \|x_j\|^2\end{equation*}

and so $\|y\| < \|x_j\|$ . For the Johnson–Mehl cell ( $i=3$ ) we have

\begin{equation*}\|y-x_j\| \ge \|y\| - m_{\bf 0} + m_j \ge \|y\| - \mu,\end{equation*}

which for $\|y\| > \mu$ gives

\begin{equation*}2\langle y,x_j\rangle \le 2\mu\|y\|-\mu^2+\|x_j\|^2.\end{equation*}

Hence, using the assumptions $\langle x_j,y\rangle \geq 3\|x_j\| \|y\| /4$ and $\|x_j\|>2\mu$ ,

\begin{equation*}\|y\| \le \frac{2(\|x_j\|^2-\mu^2)}{3\|x_j\|-4\mu} < \frac{2\|x_j\|^2}{\|x_j\|} = 2\|x_j\|.\end{equation*}

Consequently, for either the Laguerre or Johnson–Mehl cells, we can take

(3.1) \begin{equation} D=2\max_{j=1,\dots,J} \|x_j\|.\end{equation}

Then, for $t \in (4\mu, \infty)$ we have

\begin{align*}\mathbb{P}^{\hat{\bf 0}}(D \ge t) &\le \sum_{j=1}^J \mathbb{P}(2\|x_j\| \ge t)= \sum_{j=1}^J \mathbb{P}(\eta \cap (\hat{B}_{t/2}({\bf 0}) \setminus \hat{B}_{2\mu}({\bf 0})) \cap \hat{\mathcal{K}}_j = \emptyset) \\&= \sum_{j=1}^J \exp(-\textrm{Vol}((B_{t/2}({\bf 0}) \setminus B_{2\mu}({\bf 0})) \cap \mathcal{K}_j )) \leq c_\textrm{diam} \exp\left(-\frac{1}{c_\textrm{diam}}\, t^d\right)\end{align*}

for some $c_\textrm{diam} \coloneqq c_\textrm{diam}(d, \mu) \in (0, \infty)$ depending on d and $\mu$ . This shows Proposition 3.1 for $i = 2, 3$ , and hence for $i = 1$ as well.

Proposition 3.2. For all $i = 1, 2, 3$ the score $\xi^{\rho_i}$ defined at (2.1) is exponentially stabilizing with respect to $\eta$ .

Proof. We will prove (2.4) when $\hat{x}$ is the origin, and we denote $R \coloneqq R_{\hat{\bf 0}}$ . For simplicity of exposition, we prove (2.4) when $\mathcal{A}$ is the empty set, as the arguments do not change otherwise. In other words, if $\mathcal{A}$ is not empty then the resulting radius of stabilization will not be larger, as seen by the following arguments. By (2.1), it is enough to show that there is an almost surely finite random variable R such that

\begin{equation*}C^{\rho_i}(\hat{\bf 0}, \eta \cap \hat{B}_R({\bf 0})) = C^{\rho_i}(\hat{\bf 0}, (\eta \cap \hat{B}_R({\bf 0})) \cup \{(z,m_z)\}) \end{equation*}

almost surely whenever $\|z\| \in (R, \infty)$ . To see this we put $R \coloneqq 2D + \mu$ , where D is as given in (3.1). Given $\hat{z} \coloneqq (z,m_z)$ , with $\|z\| \in (R, \infty)$ , we assert that

\begin{equation*}B_D({\bf 0}) \subseteq \mathbb{H}_{\hat{z}}^{\rho_i}(\hat{\bf 0}).\end{equation*}

To prove this, we take any point $y \in B_D({\bf 0})$ and show that

(3.2) \begin{equation}\rho_i(y, \hat{{\bf 0}}) \leq \rho_i(y, \hat{z}), \qquad i = 1,2,3.\end{equation}

Note that $y \in B_D({\bf 0})$ implies $\|y-z\| \in (D+\mu, \infty)$ . We prove (3.2) separately for the Laguerre and Johnson–Mehl cases. First, assume that $C^{\rho_2}(\hat{\bf 0}, \eta)$ is the cell in the Laguerre tessellation. Then

\begin{equation*}\rho_2(y, \hat{{\bf 0}}) = \|y\|^2-m_{\bf 0}^2 \le D^2 < (D+\mu)^2-\mu^2 < \|y-z\|^2 - \mu^2 \le \|y-z\|^2 - m_z^2 = \rho_2(y, \hat{z}),\end{equation*}

showing that $y \in \mathbb{H}_{\hat{z}}^{\rho_2}(\hat{\bf 0})$ . For the Johnson–Mehl case,

\begin{equation*}\rho_3(y, \hat{{\bf 0}})=\|y\|-m_{\bf 0} \le D = (D+\mu)-\mu < \|y-z\| - \mu \le \|y-z\| - m_z =\rho_3(y, \hat{z}),\end{equation*}

thus again $y \in \mathbb{H}_{\hat{z}}^{\rho_3}(\hat{\bf 0})$ , which shows our assertion.

The radius D at (3.1) has a tail decaying exponentially fast, showing that R also has the same property. Consequently, for all $i = 1,2,3$ , the score $\xi^{\rho_i}$ is exponentially stabilizing with respect to $\eta$ .

Remark 3.1. The assertion $C^{\rho_i}(\hat{\bf 0}, \mathcal{P}) \subseteq B_D({\bf 0})$ holds for a larger class of marked point processes. We only need that the unmarked point process has at least one point in each cone $\mathcal{K}_j \cap B_{2\mu}({\bf 0})^c$ , $j=1,\dots,J$ , with Palm probability $\mathbb{P}^0$ equal to 1. Consequently, scores $\xi^{\rho_i}$ , $i = 1,2,3$ , are stabilizing with respect to such marked point processes.

Remark 3.2. Proposition 3.2 implies that the limit theory developed in [Reference McGivney and Yukich8, Reference Penrose and Yukich14, Reference Penrose and Yukich15] for the total edge length and other stabilizing functionals of the Poisson–Voronoi tessellation extends to Poisson tessellation models with weighted Voronoi cells. Thus, Proposition 3.2 provides expectation and variance asymptotics, as well as normal convergence, for such functionals of the Poisson tessellation.

Remark 3.3. Aside from weighted Poisson–Voronoi tessellations, Propositions 3.1 and 3.2 also hold for the Poisson–Delaunay triangulation. On the other hand, Proposition 3.1 holds for the Poisson line tessellation, but Proposition 3.2 does not.

4. Proofs of the main results

4.1. Preliminary lemmas

In this section we omit from the notation the dependence on the weight $\rho$ that defines the tessellation. For simplicity, we write

\begin{equation*}H_{\lambda}(\eta \cap \hat{W}_\lambda ) \coloneqq H_{\lambda}^{\rho}(\eta \cap \hat{W}_\lambda ), \qquad H_{\lambda}(\eta ) \coloneqq H_{\lambda}^{\rho}(\eta),\end{equation*}

as well as

\begin{equation*}\hat{H}_{\lambda}(\eta \cap \hat{W}_\lambda ) \coloneqq \hat{H}_{\lambda}^{\rho}(\eta \cap \hat{W}_\lambda ), \qquad \hat{H}_{\lambda}(\eta ) \coloneqq \hat{H}_{\lambda}^{\rho}(\eta ).\end{equation*}

Let us start with some useful first-order results.

Lemma 4.1. Under the assumptions of Theorem 2.2(ii), we have

\begin{equation*}\lim_{\lambda \to \infty} \lambda\,\mathbb{E} \left| H_{\lambda}(\eta \cap \hat{W}_\lambda ) - \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) \right| = 0.\end{equation*}

Proof. We denote by $\hat{\mathbb{Q}}$ the product of the Lebesgue measure on $\mathbb{R}^d$ and $\mathbb{Q}_\mathbb{M}$ . By the refined Campbell theorem and stationarity,

\begin{align*}&\mathbb{E} \left| H_{\lambda}(\eta \cap \hat{W}_\lambda ) - \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) \right| \\&\leq \mathbb{E} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \frac{ |h(C(\hat{x},\eta))|}{\textrm{Vol}(W_\lambda \ominus C(\hat{x},\eta))}\,{\bf 1}\{C(\hat{x},\eta) \subseteq W_\lambda\}\,{\bf 1}\bigg\{\textrm{Vol}(W_\lambda \ominus C(\hat{x}, \eta) ) < \frac{\lambda} {2} \bigg\} \\&= \int_{\hat{W}_\lambda} \mathbb{E}^{\hat{x}} \left( \frac{ |h(C(\hat{x},\eta))|}{\textrm{Vol}(W_\lambda \ominus C(\hat{x},\eta))}\,{\bf 1}\{C(\hat{x},\eta) \subseteq W_\lambda\}\,{\bf 1}\bigg\{\textrm{Vol}(W_\lambda \ominus C(\hat{x}, \eta) ) < \frac{\lambda} {2} \bigg\}\right) \,\hat{\mathbb{Q}}(\textup{d} \hat{x}) \\&= \int_{W_\lambda}\int_\mathbb{M} \mathbb{E}^{({\bf 0},m)} \Biggl( \frac{ |h(C(({\bf 0},m),\eta))|}{\textrm{Vol}(W_\lambda \ominus C(({\bf 0},m),\eta))}\,{\bf 1}\{x \in W_\lambda \ominus C(({\bf 0},m),\eta)\} \\&\quad \times {\bf 1}\bigg\{\textrm{Vol}(W_\lambda \ominus C(({\bf 0},m), \eta) ) < \frac{\lambda} {2} \bigg\}\Biggr)\,\mathbb{Q}_\mathbb{M}(\textup{d} m)\,\textup{d} x.\end{align*}

Changing the order of integration we get

(4.1) \begin{align}\mathbb{E} \left| H_{\lambda}(\eta \cap \hat{W}_\lambda ) - \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) \right| &\leq \int_{\mathbb{M}} \!\! \mathbb{E}^{{\bf 0}_m} \bigg( |h(C({\bf 0}_m,\eta))|{\bf 1}\bigg\{ \textrm{Vol}( W_\lambda \ominus C({\bf 0}_m,\eta)) < \frac{\lambda} {2} \bigg\} \nonumber\\&\quad\times \int_{W_\lambda}\frac{{\bf 1}\{ x \in W_\lambda \ominus C({\bf 0}_m,\eta) \}}{\textrm{Vol}(W_\lambda \ominus C({\bf 0}_m,\eta))} \,\textup{d} x\bigg)\,\mathbb{Q}_\mathbb{M}(\textup{d} m),\end{align}

where ${\bf 0}_m \coloneqq ({\bf 0},m)$ . The inner integral over $W_\lambda$ is bounded by one, showing that for all $p \in (1, \infty)$ we have

\begin{align*}& \mathbb{E} \left| H_{\lambda}(\eta \cap \hat{W}_\lambda ) - \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) \right|\\&\quad\leq \int_{\mathbb{M}} \mathbb{E}^{{\bf 0}_m} \left( |h(C({\bf 0}_m,\eta))|\,{\bf 1}\bigg\{ \textrm{Vol}( W_\lambda \ominus C({\bf 0}_m,\eta)) < \frac{\lambda} {2} \bigg\}\right)\,\mathbb{Q}_\mathbb{M}(\textup{d} m) \\ &\quad\leq \int_{\mathbb{M}} \left(\mathbb{E}^{{\bf 0}_m} |h(C({\bf 0}_m,\eta))|^{p}\right)^{\frac{1}{p}} \, \mathbb{P}^{{\bf 0}_m} \left( \textrm{Vol}( W_\lambda \ominus C({\bf 0}_m,\eta)) < \frac{\lambda} {2} \right)^{\frac{p-1}{p}} \,\mathbb{Q}_\mathbb{M}(\textup{d} m).\end{align*}

The random variable D at (3.1) satisfies $C(\hat{\bf 0},\eta) \subseteq B_D({\bf 0})$ almost surely. Thus,

\begin{equation*}\mathbb{P}^{\hat{\bf 0}} \left( \textrm{Vol}( W_\lambda \ominus C(\hat{\bf 0},\eta) ) < \frac{\lambda} {2} \right) \leq \mathbb{P}^{\hat{\bf 0}} \left( \textrm{Vol}( W_\lambda \ominus B_D({\bf 0}) ) < \frac{\lambda} {2} \right).\end{equation*}

The volume of the erosion on the right-hand side equals $(\lambda^{1/d}-2D)_+ ^d$ . By conditioning on $Y \coloneqq {\bf 1}\{\lambda^{1/d}\geq 2D\}$ , we obtain

\begin{align*}\mathbb{P}^{\hat{\bf 0}} \left((\lambda^{1/d}-2D)_+ ^d < \frac{\lambda} {2} \right)&= \mathbb{P}^{\hat{\bf 0}} \left((\lambda^{1/d}-2D)_+ ^d < \frac{\lambda} {2} \, \Big| \, Y=1\right)\, \mathbb{P}^{\hat{\bf 0}}(Y=1) \\[4pt] &\quad + \mathbb{P}^{\hat{\bf 0}} \left((\lambda^{1/d}-2D)_+ ^d < \frac{\lambda} {2} \, \Big| \, Y=0\right)\, \mathbb{P}^{\hat{\bf 0}}(Y=0)\\[4pt] & \leq \mathbb{P}^{\hat{\bf 0}} \left((\lambda^{1/d}-2D) ^d < \frac{\lambda} {2} \right) + \mathbb{P}^{\hat{\bf 0}} (\lambda^{1/d} < 2D)\\[4pt] & \leq 2 \mathbb{P}^{\hat{\bf 0}} (D > e(\lambda)),\end{align*}

where $e(\lambda) \coloneqq (\lambda^{1/d} - (\lambda/2)^{1/d})/2$ . Finally, recalling that D has exponentially decaying tails as at (2.3), we obtain

\begin{equation*}\mathbb{P}^{\hat{\bf 0}} \left( \textrm{Vol}( W_\lambda \ominus C(\hat{\bf 0},\eta) ) < \frac{\lambda} {2} \right) \le 2 \, c_\textrm{diam} \exp\left(-\frac{1}{c_\textrm{diam}}\, e(\lambda)^d \right).\end{equation*}

Using this bound, we have

\begin{align*}& \lambda\,\mathbb{E} \left| H_{\lambda}(\eta \cap \hat{W}_\lambda ) - \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) \right|\\&\quad \leq \lambda\int_{\mathbb{M}} ( \mathbb{E}^{{\bf 0}_m} |h(C({\bf 0}_m,\eta))|^{p})^{\frac{1}{p}} \left( 2\, c_\textrm{diam} \exp\left(- \frac{1}{c_\textrm{diam}}\, e(\lambda)^d\right)\right)^{\frac{p-1}{p}}\,\mathbb{Q}_\mathbb{M}(\textup{d} m).\end{align*}

Now $\xi$ satisfies the p-moment condition (2.2) for $p \in (1, \infty)$ and thus the first factor is bounded by a constant uniformly over all $m \in \mathbb{M}.$ Lemma 4.1 follows.

Lemma 4.2. Under the assumptions of Theorem 2.2(i), we have

\begin{equation*}\lim_{\lambda \to \infty} \lambda\,\mathbb{E} \left| H_{\lambda}(\eta) - \hat{H}_{\lambda}(\eta) \right| = 0.\end{equation*}

Proof. We follow the proof of Lemma 4.1. In (4.1), we integrate over $\mathbb{R}^d$ instead of over $W_\lambda$ , so the inner integral equals one. The rest of the proof of Lemma 4.1 then applies verbatim.

Lemma 4.3. Under the assumptions of Theorem 2.2(i), we have

\begin{equation*}\lim_{\lambda \to \infty} \mathbb{E} \left| \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) - \hat{H}_{\lambda}(\eta) \right| = 0.\end{equation*}

Proof. Write

(4.2) \begin{align*} \hat{\nu}_\lambda(\hat{x}, \eta) \coloneqq & \frac{ h(C( \hat{x}, \eta)) \, {\bf 1}\{ C(\hat{x}, \eta) \subseteq W_\lambda \}} { \textrm{Vol}( W_\lambda \ominus C(\hat{x}, \eta) ) }\\[5pt] & \times {\bf 1}\bigg\{ \textrm{Vol}( W_\lambda \ominus C( \hat{x}, \eta) ) \geq \frac{\lambda} {2} \bigg\}\,{\bf 1}\{D_{\hat{x}} \geq d(x, W_\lambda)\},\end{align*}

where $D_{\hat{x}}$ is the radius of the ball centered at x and containing $C(\hat{x}, \eta)$ , and where $D_{\hat{x}}$ is equal in distribution to D, with D as in (3.1). Here, $d(x, W_\lambda)$ denotes the Euclidean distance between x and $W_\lambda$ . We observe that

\begin{equation*}\mathbb{E} \left| \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) - \hat{H}_{\lambda}(\eta) \right| \leq \mathbb{E} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda ^c} \left| \hat{\nu}_\lambda(\hat{x}, \eta) \right|.\end{equation*}

From now on, we use the notation c to denote a universal positive constant whose value may change from line to line. By the Hölder inequality, the p-moment condition on $\xi$ , and the assumption that the diameter of $C(\hat{x}, \eta)$ has an exponentially decaying tail, we have $\mathbb{E} |\hat{\nu}_\lambda(\hat{x}, \eta)| \leq (c/\lambda) \exp\left( - \frac{1}{c} d(x, W_\lambda)^d \right)$ . Thus,

\begin{equation*}\mathbb{E} \left| \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) - \hat{H}_{\lambda}(\eta) \right| \leq \frac{c}{\lambda} \int_{W_\lambda^c} \exp\left( - \frac{1}{c} \, d(x, W_\lambda)^d \right)\,\textup{d} x.\end{equation*}

Let $W_{\lambda,\varepsilon}$ be the set of points in $W_\lambda^c$ at distance $\varepsilon$ from $W_\lambda$ . The co-area formula implies

\begin{equation*}\mathbb{E} \left| \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) - \hat{H}_{\lambda}(\eta) \right| \leq \frac{c}{\lambda} \int_0^{\infty} \int_{W_{\lambda,\varepsilon}} \exp\left( - \frac{1}{c} \, \varepsilon^d \right)\, \mathcal{H}^{d-1}(\textup{d} y)\,\textup{d} \varepsilon.\end{equation*}

Since $\mathcal{H}^{d-1} ( W_{\lambda,\varepsilon}) \leq c\,(\lambda^{1/d}(1 + \varepsilon) )^{d-1}$ , we get

\begin{equation*}\hspace*{110pt}\mathbb{E} \left| \hat{H}_{\lambda}(\eta\cap \hat{W}_\lambda) - \hat{H}_{\lambda}(\eta) \right| = O(\lambda^{-1/d}). \hspace*{110pt}\square\end{equation*}

4.2. Proof of Theorem 2.2

  1. (i) The asymptotic unbiasedness of $H_{\lambda}(\eta \cap \hat{W}_\lambda)$ , $\hat{H}_{\lambda}(\eta \cap \hat{W}_\lambda )$ , and $\hat{H}_{\lambda}(\eta)$ is a consequence of Lemmas 4.1, 4.2, and 4.3. For example, concerning $H_{\lambda}(\eta \cap \hat{W}_\lambda)$ , one may write

    \begin{align*}& |\mathbb{E} H_{\lambda}(\eta \cap \hat{W}_\lambda )-\mathbb{E}^0 h(K_{\bf 0}(\eta) )| \leq \mathbb{E} |H_{\lambda}(\eta \cap \hat{W}_\lambda ) - H_{\lambda}(\eta)| \\&\quad \leq \left( \mathbb{E} |H_{\lambda}(\eta \cap \hat{W}_\lambda ) - \hat{H}_{\lambda}(\eta \cap \hat{W}_\lambda )| + \mathbb{E} |\hat{H}_{\lambda}(\eta \cap \hat{W}_\lambda ) - \hat{H}_{\lambda}(\eta)| + \mathbb{E} |\hat{H}_{\lambda}(\eta) - H_{\lambda}(\eta)| \right),\end{align*}
    which in view of Lemmas 4.1, 4.2, and 4.3 goes to zero as $\lambda \to \infty$ . This gives the asymptotic unbiasedness of $H_{\lambda}(\eta \cap \hat{W}_\lambda )$ . One may similarly show the asymptotic unbiasedness for $\hat{H}_{\lambda}(\eta \cap \hat{W}_\lambda )$ and $\hat{H}_{\lambda}(\eta)$ .
  2. (ii) To show consistency, we introduce $T_\lambda(\eta \cap \hat{W}_\lambda)= \lambda^{-1} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \xi(\hat{x}, \eta)$ . By assumption, $\xi$ stabilizes and satisfies the p-moment condition for $p \in (1, \infty)$ . Thus, using Theorem 2.1 of [Reference Penrose and Yukich15], we get that $T_\lambda(\eta \cap \hat{W}_\lambda)$ is a consistent estimator of $\mathbb{E}^0 h(K_{\bf 0}(\eta))$ . To prove the consistency of the estimators in Theorem 2.2(ii), it is enough to show for one of them that it has the same $L^1$ limit as $T_\lambda(\eta \cap \hat{W}_\lambda)$ . We choose $ \hat{H}_{\lambda}(\eta \cap \hat{W}_\lambda )$ and write

    \begin{align*}&\mathbb{E} \left| \hat{H}_{\lambda}(\eta \cap \hat{W}_\lambda )-T_\lambda(\eta \cap \hat{W}_\lambda)\right| \\[4pt] &\quad = \mathbb{E} \left|\lambda^{-1} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \xi(\hat{x}, \eta) \left( \frac{ \lambda\,{\bf 1}\{ C(\hat{x}, \eta) \subseteq W_\lambda \}\,{\bf 1}\{ \textrm{Vol}( W_\lambda \ominus C(\hat{x}, \eta) ) \geq \frac{\lambda}{2} \} } { \textrm{Vol}( W_\lambda \ominus C(\hat{x}, \eta) ) } -1 \right) \right| \\[4pt] &\quad \leq \lambda^{-1} \mathbb{E} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} |\xi(\hat{x}, \eta)| \left| \frac{ \lambda\,{\bf 1}\{ C(\hat{x}, \eta) \subseteq W_\lambda \}\,{\bf 1}\{ \textrm{Vol}( W_\lambda \ominus C(\hat{x}, \eta) ) \geq \frac{\lambda}{2}\} } { \textrm{Vol}( W_\lambda \ominus C(\hat{x}, \eta) ) } -1 \right| \\[4pt] &\quad \leq \int_{W_\lambda} \lambda^{-1} \mathbb{E}^0 \left( |h(K_{\bf 0}(\eta) )| \left| \frac{ \lambda\,{\bf 1}\{ x + K_{\bf 0}(\eta) \subseteq W_\lambda \}\,{\bf 1}\{ \textrm{Vol}( W_\lambda \ominus K_{\bf 0}(\eta) ) \geq \frac{\lambda}{2} \}} {\textrm{Vol}( W_\lambda \ominus K_{\bf 0}(\eta) ) } -1 \right| \right)\,\textup{d} x\\[4pt] &\quad = \int_{[\!- \frac{1} {2}, \frac{1} {2 } ]^d} \mathbb{E}^0 \big( |h(K_{\bf 0}(\eta))| Y_\lambda(u)\big)\,\textup{d} u,\end{align*}
    where we substituted $\lambda^{1/d}u$ for x in the last equality and defined the random variables
    \begin{equation*}Y_{\lambda} (u) \coloneqq \left| \frac{ \lambda\,{\bf 1}\{ \lambda^{1/d} u + K_{\bf 0}(\eta) \subseteq W_\lambda \}\,{\bf 1}\{ \textrm{Vol}( W_\lambda \ominus K_{\bf 0}(\eta) ) \geq \frac{\lambda}{2} \}} { \textrm{Vol}( W_\lambda\ominus K_{\bf 0}(\eta) )} -1 \right|.\end{equation*}

We show that $Y_{\lambda} (u)$ converges to zero in $\mathbb{P}^0$ probability for any $u \in (-1/2, 1/2)^d$ . Recall the inclusion $K_{\bf 0}(\eta) \subseteq B_D({\bf 0})$ , where D has an exponentially decaying tail by assumption. We conclude that both $\lambda/\textrm{Vol}(W_\lambda \ominus K_{\bf 0}(\eta))$ and ${\bf 1}\{ \textrm{Vol}( W_\lambda\ominus K_{\bf 0}(\eta) ) \geq \lambda/2 \}$ tend to one in $\mathbb{P}^0$ probability. To prove the convergence of $Y_{\lambda}(u)$ to zero in $\mathbb{P}^0$ probability, it remains to show that ${\bf 1}\{ \lambda^{1/d} u + K_{\bf 0}(\eta) \subseteq W_\lambda \}$ converges to one in $\mathbb{P}^0$ probability. Equivalently, we show that the $\mathbb{P}^0$ probability of the event $\{\lambda^{1/d} u + K_{\bf 0}(\eta) \subseteq W_\lambda \}$ goes to 1. Let $u \in (-1/2, 1/2)^d$ be fixed. Then

\begin{align*}&\mathbb{P}^0(\lambda^{1/d}u \in W_\lambda \ominus K_{\bf 0}(\eta) ) \geq \mathbb{P}^0(\lambda^{1/d}u \in W_\lambda \ominus B_D({\bf 0})) \\[2pt] &\quad = \mathbb{P}^0 \left(u \in \left[\!-\frac{1}{2} + \frac{D}{\lambda^{1/d}}, \frac{1}{2} - \frac{D}{\lambda^{1/d}} \right]^d \right)\\[2pt] &\quad = \mathbb{P}^0 \left(u \in \left[\!-\frac{1}{2} + \frac{D}{\lambda^{1/d}}, \frac{1}{2} - \frac{D}{\lambda^{1/d}} \right]^d \, \Big| \, D\leq \log{\lambda}\right) \mathbb{P}^0(D\leq \log{\lambda})\\[2pt] &\qquad + \mathbb{P}^0 \left(u \in \left[\!-\frac{1}{2} + \frac{D}{\lambda^{1/d}}, \frac{1}{2} - \frac{D}{\lambda^{1/d}} \right]^d \, \Big| \, D > \log{\lambda}\right) \mathbb{P}^0(D > \log{\lambda})\displaybreak \\ &\quad \geq {\bf 1} \left\{u \in \left[\!-\frac{1}{2} + \frac{\log{\lambda}}{\lambda^{1/d}}, \frac{1}{2} - \frac{\log{\lambda}}{\lambda^{1/d}} \right]^d \right\} \mathbb{P}^0(D\leq \log{\lambda}) \\ &\qquad + \mathbb{P}^0 \left(u \in \left[\!-\frac{1}{2} + \frac{D}{\lambda^{1/d}}, \frac{1}{2} - \frac{D}{\lambda^{1/d}} \right]^d \, \Big| \, D > \log{\lambda}\right) \mathbb{P}^0(D > \log{\lambda}).\end{align*}

Again, D has an exponentially decaying tail, so the lower bound converges to $\mathbb{P}^0 (u \in (-1/2, 1/2)^d) = 1$ , showing that $Y_\lambda(u)$ goes to zero in $\mathbb{P}^0$ probability as $\lambda \to \infty$ . Since the $Y_{\lambda} (u)$ are also uniformly bounded by one, it follows from the moment condition on $\xi$ that $h(K_{\bf 0}(\eta))Y_\lambda(u)$ goes to zero in $L^1$ . Finally, by the dominated convergence theorem, we get

\begin{equation*}\lim_{\lambda \to \infty}\mathbb{E} \left| \hat{H}_{\lambda}(\eta \cap \hat{W}_\lambda )-T_\lambda(\eta \cap \hat{W}_\lambda)\right| = 0.\end{equation*}

Thus, $ \hat{H}_{\lambda}(\eta \cap \hat{W}_\lambda )$ converges to $\mathbb{E}^0 h(K_{\bf 0}(\eta))$ in $L^1$ and also in probability. The consistency of the remaining estimators in Theorem 2.2 follows from Lemmas 4.1, 4.2, and 4.3. This completes the proof of Theorem 2.2.

4.3. Proof of Theorem 2.3(i)

We prove the variance asymptotics (2.6). The proof is split into two lemmas (Lemmas 4.5 and 4.6). We first show an auxiliary result used in the proofs of both lemmas. Then we prove the variance asymptotics for $\hat{H}_\lambda(\eta \cap \hat{W}_\lambda)$ . This is easier, since, after scaling by $\lambda$ , the scores are bounded by $2 |\xi(\hat{x},\eta)|$ and thus, by assumption, satisfy a p-moment condition for some $p \in (2, \infty)$ . Finally, we conclude the proof by showing that the asymptotic variance of $\hat{H}_\lambda(\eta)$ is the same as the asymptotic variance of $\hat{H}_\lambda(\eta \cap \hat{W}_\lambda)$ .

Lemma 4.4. Let $\varphi\,{:}\, \hat{\mathbb{R}^d} \times {\bf N} \to \mathbb{R}$ be an exponentially stabilizing function with respect to $\eta$ that satisfies the p-moment condition for some $p \in (2, \infty)$ . Then there exists a constant $c \in (0, \infty)$ such that, for all $\hat{x}, \hat{y} \in \hat{\mathbb{R}^d}$ ,

(4.3) \begin{align} &|\mathbb{E} \varphi(\hat{x}, \eta \cup \{\hat{y}\})\varphi(\hat{y}, \eta \cup \{\hat{x}\}) - \mathbb{E} \varphi(\hat{x}, \eta)\,\mathbb{E}\varphi(\hat{y}, \eta)| \nonumber \\&\qquad\leq {c}\,\left( \sup_{\hat{x}, \hat{y} \in \hat{\mathbb{R}^d} } \mathbb{E} | \varphi(\hat{x}, \eta \cup \{ \hat{y} \} )|^{p}\right)^{\frac{2}{p}}\exp\left(-\frac{1}{c}\, \|x - y\|^{ \alpha } \right),\end{align}

where $\varphi(\hat{x},\eta) \coloneqq \varphi(\hat{x},\eta \cup \{\hat{x}\})$ if $\hat{x} \not\in \eta$ .

Proof. We follow the proof of Lemma 5.2 in [Reference Baryshnikov and Yukich2] and show that the constant $A_{1,1}$ there involves the moment $\smash{(\mathbb{E} | \varphi(\hat{x}, \eta \cup \{\hat{y}\})|^p)^{\frac{2}{p}}}$ . Put $R \coloneqq \max (R_{\hat{x}}, R_{\hat{y}})$ , where $R_{\hat{x}}, R_{\hat{y}}$ are the radii of stabilization as in Proposition 3.2 for $\hat{x}$ and $\hat{y}$ , respectively. Furthermore, put $r \coloneqq \|x-y\|/3$ and define the event $E \coloneqq \{ R \leq r \}$ . Hölder’s inequality gives

(4.4) \begin{align} & | \mathbb{E} \varphi(\hat{x}, \eta \cup \{\hat{y}\} ) \varphi(\hat{y}, \eta \cup \{\hat{x}\} ) - \mathbb{E} \varphi(\hat{x}, \eta \cup \{\hat{y}\} ) \varphi(\hat{y}, \eta \cup \{\hat{x}\} ){\bf 1}\{E\} | \nonumber \\&\qquad \leq c \left(\sup_{ \hat{x}, \hat{y} \in \hat{\mathbb{R}^d} } \mathbb{E} | \varphi(\hat{x}, \eta \cup \{\hat{y}\} )|^p\right)^{\frac{2}{p}} \, \mathbb{P}(E^c)^{\frac{p-2}{p}}.\end{align}

Notice that

\begin{align*}& \mathbb{E} \varphi(\hat{x}, \eta \cup \{\hat{y}\} ) \varphi(\hat{y}, \eta \cup \{\hat{x}\} ){\bf 1}\{E\} \\&\quad = \mathbb{E} \varphi(\hat{x}, (\eta \cup \{\hat{y}\}) \cap \hat{B}_{r}(\hat{x}) ) \varphi(\hat{y}, (\eta \cup \{\hat{x}\}) \cap \hat{B}_{r}(\hat{x}) ){\bf 1}\{E\} \\&\quad =\mathbb{E} \varphi(\hat{x}, (\eta \cup \{\hat{y}\}) \cap \hat{B}_{r}(\hat{x}) ) \varphi(\hat{y}, (\eta \cup \{\hat{x}\}) \cap \hat{B}_{r}(\hat{x}) ) (1 - {\bf 1}\{E^c\}).\end{align*}

A second application of Hölder’s inequality gives

(4.5) \begin{align} & | \mathbb{E} \varphi(\hat{x}, \eta \cup \{\hat{y}\} ) \varphi(\hat{y}, \eta \cup \{\hat{x}\} ){\bf 1}\{E\}- \mathbb{E} \varphi(\hat{x}, (\eta \cup \{\hat{y}\}) \cap \hat{B}_{r}(\hat{x}) ) \varphi(\hat{y}, (\eta \cup \{\hat{x}\}) \cap \hat{B}_{r}(\hat{y}) ) |\nonumber \\&\quad \leq c \left(\sup_{ \hat{x}, \hat{y} \in \hat{\mathbb{R}^d} } \mathbb{E} | \varphi(\hat{x}, \eta \cup \{\hat{y}\} )|^p\right)^{\frac{2}{p}} \, \mathbb{P}(E^c)^{\frac{p-2}{p}}.\end{align}

Thus, combining (4.4) and (4.5) and using the independence of $\varphi(\hat{x}, (\eta \cup \{\hat{y}\}) \cap \hat{B}_{r}(\hat{x}) )$ and $\varphi(\hat{y}, (\eta \cup \{\hat{x}\}) \cap \hat{B}_{r}(\hat{y}) )$ we have

(4.6) \begin{align} & | \mathbb{E} \varphi(\hat{x}, \eta \cup \{\hat{y}\} ) \varphi(\hat{y}, \eta \cup \{\hat{x}\} )- \mathbb{E} \varphi(\hat{x}, (\eta \cup \{\hat{y}\}) \cap \hat{B}_{r}(\hat{x}) ) \mathbb{E} \varphi(\hat{y}, (\eta \cup \{\hat{x}\}) \cap \hat{B}_{r}(\hat{y}) ) | \nonumber \\&\qquad \leq c \left(\sup_{ \hat{x}, \hat{y} \in \hat{\mathbb{R}^d} } \mathbb{E} | \varphi(\hat{x}, \eta \cup \{\hat{y}\} )|^p\right)^{\frac{2}{p}} \, \mathbb{P}(E^c)^{\frac{p-2}{p}}.\end{align}

Likewise, we may show that

(4.7) \begin{align} & | \mathbb{E} \varphi(\hat{x}, \eta ) \mathbb{E} \varphi(\hat{y}, \eta ) - \mathbb{E} \varphi(\hat{x}, \eta \cap \hat{B}_{r}(\hat{x}) ) \mathbb{E} \varphi(\hat{y}, \eta \cap \hat{B}_{r}(\hat{y}) ) | \nonumber \\&\qquad \leq c \left(\sup_{ \hat{x}, \hat{y} \in \hat{\mathbb{R}^d} } \mathbb{E} | \varphi(\hat{x}, \eta \cup \{\hat{y}\} )|^p\right)^{\frac{2}{p}} \, \mathbb{P}(E^c)^{\frac{p-2}{p}}.\end{align}

Combining (4.6) and (4.7), and using that $\mathbb{P}(E^c)$ decreases exponentially in $\|x - y\|^{\alpha}$ , we thus obtain (4.3).

Lemma 4.5. If $\xi$ is exponentially stabilizing with respect to $\eta$ then

\begin{equation*}\lim_{\lambda \to \infty} \lambda \textrm{Var} \hat{H}_\lambda(\eta \cap \hat{W}_\lambda) = \sigma^2(\xi),\end{equation*}

where $\sigma^2(\xi)$ is as in (2.5).

Proof. Put, for all $\hat{x} \in \hat{\mathbb{R}^d}$ and any marked point process $\mathcal{P}$ ,

\begin{equation*}\zeta_\lambda(\hat{x}, \mathcal{P}) \coloneqq \frac{ \lambda\,\xi( \hat{x}, \mathcal{P}) } { \textrm{Vol}( W_\lambda \ominus C(\hat{x}, \mathcal{P}) ) }\, {\bf 1}\bigg\{ \textrm{Vol}( W_\lambda \ominus C( \hat{x}, \mathcal{P}) ) \geq \frac{\lambda} {2} \bigg\}\end{equation*}

and

\begin{equation*}\nu_\lambda(\hat{x}, \mathcal{P}) \coloneqq \zeta_\lambda(\hat{x},\mathcal{P})\, {\bf 1}\{ C(\hat{x}, \mathcal{P}) \subseteq W_\lambda \}.\end{equation*}

Note that $\zeta_\lambda$ is translation invariant whereas $\nu_\lambda$ is not translation invariant. Then, $\lambda\,\hat{H}_\lambda(\eta \cap \hat{W}_\lambda) = \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \nu_\lambda(\hat{x},\eta)$ .

Recall that $\hat{\mathbb{Q}}$ is the product measure of Lebesgue measure on $\mathbb{R}^d$ and $\mathbb{Q}_{\mathbb{M}}$ . By the Slivnyak–Mecke theorem [Reference Schneider and Weil16, Corollary 3.2.3] we have

\begin{align*} \lambda \textrm{Var} \hat{H}_\lambda(\eta \cap \hat{W}_\lambda) & = \lambda^{-1} \mathbb{E} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \nu_\lambda^2(\hat{x},\eta) \\ & \quad + \lambda^{-1} \mathbb{E} \sum_{\hat{x}, \hat{y} \in \eta \cap \hat{W}_\lambda; \hat{x} \neq \hat{y}} \nu_\lambda (\hat{x},\eta)\nu_\lambda (\hat{y},\eta)- \lambda^{-1} \left(\mathbb{E} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \nu_\lambda(\hat{x},\eta)\right)^2 \\ & = \lambda^{-1} \int_{\hat{W}_\lambda} \mathbb{E} \nu_\lambda^2(\hat{x}, \eta)\,\hat{\mathbb{Q}}(\textup{d} \hat{x}) \\[4pt] & \quad + \lambda^{-1} \int_{\hat{W}_\lambda} \int_{\hat{W}_\lambda} \left[ \mathbb{E} \nu_\lambda(\hat{x}, \eta \cup \{\hat{y}\} ) \nu_\lambda(\hat{y}, \eta \cup \{\hat{x}\} )\right.\\[4pt] & \quad \left. -\ \mathbb{E} \nu_\lambda(\hat{x}, \eta)\,\mathbb{E} \nu_\lambda (\hat{y}, \eta)\right]\,\hat{\mathbb{Q}}(\textup{d} \hat{y})\,\hat{\mathbb{Q}}(\textup{d} \hat{x}) \\ & \eqqcolon\ I_1(\lambda) + I_2(\lambda).\end{align*}

Here we use the convention that $\nu_\lambda(\hat{x},\mathcal{P}) \coloneqq \nu_\lambda(\hat{x},\mathcal{P} \cup \{\hat{x}\})$ if $\hat{x} \not\in \mathcal{P}$ .

Using stationarity and the change of variables $x = \lambda^{1/d}u$, we rewrite $I_1(\lambda)$ as

\begin{equation*}I_1(\lambda) = \lambda^{-1} \int_{W_\lambda}\int_{\mathbb{M}} \mathbb{E} Z_\lambda^2({\bf 0}_m,\eta,x)\,\mathbb{Q}_\mathbb{M}(\textup{d} m)\,\textup{d} x = \int_{W_1} \mathbb{E} Z_\lambda^2({\bf 0}_M,\eta,\lambda^{1/d}u)\,\textup{d} u,\end{equation*}

where $Z_\lambda((z,m_z),\mathcal{P},x) \coloneqq \zeta_\lambda((z,m_z), \mathcal{P})\,{\bf 1}\{C((z,m_z),\mathcal{P}) \subseteq W_\lambda-x\}$ . Similarly, by translation invariance of $\zeta_\lambda$ , we have

\begin{align*}I_2(\lambda) &= \lambda^{-1} \int_{W_\lambda} \int_{W_\lambda-x} \int_{\mathbb{M}} \int_{\mathbb{M}} [ \mathbb{E} Z_\lambda({\bf 0}_{m_1}, \eta \cup \{z_{m_2}\}, x)\,Z_\lambda(z_{m_2}, \eta \cup \{{\bf 0}_{m_1}\}, x) \\[4pt] &\quad - \mathbb{E} Z_\lambda({\bf 0}_{m_1}, \eta, x)\,\mathbb{E} Z_\lambda(z_{m_2}, \eta, x)]\,\mathbb{Q}_\mathbb{M}(\textup{d} m_1)\,\mathbb{Q}_\mathbb{M}(\textup{d} m_2)\,\textup{d} z\,\textup{d} x \\[4pt] &= \int_{W_1} \int_{W_\lambda-\lambda^{1/d}u} [\mathbb{E} Z_\lambda({\bf 0}_M,\eta \cup \{z_M\},\lambda^{1/d}u)\, Z_\lambda(z_M,\eta \cup \{{\bf 0}_M\},\lambda^{1/d}u) \\[4pt] &\quad - \mathbb{E} Z_\lambda({\bf 0}_M,\eta,\lambda^{1/d}u)\,\mathbb{E} Z_\lambda(z_M,\eta,\lambda^{1/d}u)]\,\textup{d} z\,\textup{d} u,\end{align*}

where ${\bf 0}_{m_1} \coloneqq ({\bf 0},m_1)$ , $z_{m_2} \coloneqq (z,m_2)$ , ${\bf 0}_M \coloneqq ({\bf 0},M_{\bf 0})$ , $z_M \coloneqq (z,M_z)$ , and $M_{\bf 0}$ , $M_z$ are random marks distributed according to $\mathbb{Q}_\mathbb{M}$ .

On the event in the indicator defining $\zeta_\lambda$ the denominator is at least $\lambda/2$, so $|\zeta_\lambda(\hat{x},\eta)| \leq 2|\xi(\hat{x},\eta)|$ and hence $\zeta_\lambda$ satisfies the p-moment condition for some $p \in (2, \infty)$. Recall that $\textrm{Vol}(W_\lambda \ominus C(\hat{x},\eta))/\lambda$ tends in probability to 1, and notice that $W_\lambda - \lambda^{1/d}u$ increases to $\mathbb{R}^d$ as $\lambda \to \infty$ for each $u \in (-1/2,1/2)^d$. Thus, as $\lambda \to \infty$, we have, for any $\hat{\bf 0} \coloneqq ({\bf 0},m_{\bf 0})$, $\hat{z} \coloneqq (z,m_z) \in \hat{\mathbb{R}^d}$, and $u \in (-1/2,1/2)^d$,

(4.8) \begin{align}\mathbb{E} Z_\lambda(\hat{\bf 0},\eta,\lambda^{1/d}u) &\to \mathbb{E} \xi(\hat{\bf 0},\eta), \end{align}
(4.9) \begin{align}\mathbb{E} Z_\lambda^2(\hat{\bf 0}, \eta,\lambda^{1/d}u) &\to \mathbb{E} \xi^2(\hat{\bf 0}, \eta), \end{align}
(4.10) \begin{align}\mathbb{E} Z_\lambda(\hat{\bf 0}, \eta \cup \{ \hat{z}\}, \lambda^{1/d}u)Z_\lambda(\hat{z},\eta \cup \{ \hat{\bf 0} \} ,\lambda^{1/d}u) &\to \mathbb{E}\xi(\hat{\bf 0}, \eta \cup \{ \hat{z}\}) \xi(\hat{z}, \eta \cup \{\hat{\bf 0} \}). \end{align}

These ingredients are enough to establish variance asymptotics for $\hat{H}_\lambda(\eta \cap \hat{W}_\lambda)$. Indeed, by (4.9), the p-moment condition, and the dominated convergence theorem, $I_1(\lambda)$ converges to $\mathbb{E} \xi^2({\bf 0}_M, \eta)$. Concerning $I_2(\lambda)$, for each $u \in (-1/2,1/2)^d$ we have

\begin{align*}& \lim_{\lambda \to \infty} \int_{W_\lambda-\lambda^{1/d}u}[\mathbb{E} Z_\lambda({\bf 0}_M,\eta \cup \{z_M\},\lambda^{1/d}u) Z_\lambda(z_M,\eta \cup \{{\bf 0}_M\},\lambda^{1/d}u) \\[4pt] &\quad - \mathbb{E} Z_\lambda({\bf 0}_M,\eta,\lambda^{1/d}u)\,\mathbb{E} Z_\lambda(z_M,\eta,\lambda^{1/d}u)]\,\textup{d} z\\[4pt] &= \int_{\mathbb{R}^d} [ \mathbb{E} \xi({\bf 0}_M,\eta \cup \{z_M\}) \xi(z_M,\eta \cup \{{\bf 0}_M\}) - \mathbb{E} \xi({\bf 0}_M,\eta) \mathbb{E} \xi(z_M,\eta)]\,\textup{d} z.\end{align*}

Here we use that for any $x \in \mathbb{R}^d$ , the function $Z_\lambda(\cdot, \cdot, x)\,{:}\, \hat{\mathbb{R}^d} \times {\bf N} \to \mathbb{R}$ is exponentially stabilizing with respect to $\eta$ and satisfies the p-moment condition for some $p \in (2, \infty)$ . Thus, from Lemma 4.4, the integrand is dominated by an exponentially decaying function of $\|z\|^{ \alpha } $ . Applying the dominated convergence theorem, together with (4.8) and (4.10), we obtain the desired variance asymptotics since $\textrm{Vol}(W_1) = 1$ .
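Collecting the limits and using $\textrm{Vol}(W_1) = 1$, the conclusion of this argument can be recorded as

\begin{equation*}\lim_{\lambda \to \infty} \lambda \textrm{Var} \hat{H}_\lambda(\eta \cap \hat{W}_\lambda) = \mathbb{E} \xi^2({\bf 0}_M, \eta) + \int_{\mathbb{R}^d} \left[ \mathbb{E} \xi({\bf 0}_M,\eta \cup \{z_M\})\, \xi(z_M,\eta \cup \{{\bf 0}_M\}) - \mathbb{E} \xi({\bf 0}_M,\eta)\, \mathbb{E} \xi(z_M,\eta)\right]\,\textup{d} z,\end{equation*}

which is the quantity $\sigma^2(\xi)$ in (2.5).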

The next lemma completes the proof of Theorem 2.3(i).

Lemma 4.6. If $\xi$ is exponentially stabilizing with respect to $\eta$ then

\begin{equation*}\lim_{\lambda \to \infty} \lambda \textrm{Var} \hat{H}_\lambda(\eta) = \lim_{\lambda \to \infty} \lambda \textrm{Var} \hat{H}_\lambda(\eta\cap \hat{W}_\lambda ) = \sigma^2(\xi).\end{equation*}

Proof. Write

\begin{equation*}\lambda\,\hat{H}_\lambda(\eta) = \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \nu_\lambda(\hat{x},\eta) + \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda^c} \nu_\lambda(\hat{x},\eta).\end{equation*}

Now,

\begin{align*}\lambda \textrm{Var} \hat{H}_\lambda(\eta) = & \lambda^{-1} \textrm{Var} \left( \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \nu_\lambda(\hat{x},\eta)\right) + \lambda^{-1} \textrm{Var} \left( \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda^c} \nu_\lambda(\hat{x},\eta) \right) \\ & + 2 \lambda^{-1} \textrm{Cov} \left( \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \nu_\lambda(\hat{x},\eta), \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda^c} \nu_\lambda(\hat{x},\eta) \right).\end{align*}

It suffices to show that $\textrm{Var} \big( \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda^c} \nu_\lambda(\hat{x},\eta) \big) = O(\lambda^{(d-1)/d})$ , for then the Cauchy–Schwarz inequality shows that the covariance term in the above expression is negligible compared to $\lambda$ .
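To spell out this step (taking the bound of the next paragraph for granted), write $A_\lambda \coloneqq \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda} \nu_\lambda(\hat{x},\eta)$ and $B_\lambda \coloneqq \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda^c} \nu_\lambda(\hat{x},\eta)$; this notation is introduced here only for the present computation. Lemma 4.5 gives $\textrm{Var} A_\lambda = O(\lambda)$, so that

\begin{equation*}\lambda^{-1} | \textrm{Cov}(A_\lambda, B_\lambda) | \leq \lambda^{-1} (\textrm{Var} A_\lambda)^{1/2} (\textrm{Var} B_\lambda)^{1/2} = O\big(\lambda^{-1} \cdot \lambda^{1/2} \cdot \lambda^{(d-1)/(2d)}\big) = O\big(\lambda^{-1/(2d)}\big),\end{equation*}

and likewise $\lambda^{-1} \textrm{Var} B_\lambda = O(\lambda^{-1/d})$. Both terms therefore vanish as $\lambda \to \infty$, leaving only the first summand in the display above.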

We show that $\textrm{Var} \big( \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda^c} \nu_\lambda(\hat{x},\eta) \big) = O(\lambda^{(d-1)/d})$ as follows. Note that $\hat{H}_\lambda(\eta) = \sum_{\hat{x} \in \eta} \hat{\nu}_\lambda(\hat{x}, \eta),$ where $\hat{\nu}_\lambda(\hat{x}, \eta)$ is as in (4.2). By the Slivnyak–Mecke theorem we have

\begin{align*}& \lambda^{-1} \textrm{Var} \left( \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda^c} \nu_\lambda(\hat{x},\eta) \right) = \lambda^{-1} \mathbb{E} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda^c} \hat{\nu}_\lambda^2(\hat{x},\eta) \\ &\qquad + \lambda^{-1} \mathbb{E} \sum_{\hat{x}, \hat{y} \in \eta \cap \hat{W}_\lambda^c; \hat{x} \neq \hat{y}} \hat{\nu}_\lambda (\hat{x},\eta)\hat{\nu}_\lambda (\hat{y},\eta) - \lambda^{-1} \left(\mathbb{E} \sum_{\hat{x} \in \eta \cap \hat{W}_\lambda^c} \hat{\nu}_\lambda(\hat{x},\eta)\right)^2 \\ &\quad = \lambda^{-1} \int_{\hat{W}_\lambda^c} \mathbb{E} \hat{\nu}_\lambda^2(\hat{x}, \eta)\,\hat{\mathbb{Q}}(\textup{d} \hat{x}) \\ &\qquad + \lambda^{-1} \int_{\hat{W}_\lambda^c} \int_{\hat{W}_\lambda^c} [ \mathbb{E} \hat{\nu}_\lambda(\hat{x}, \eta \cup \{\hat{y}\} ) \hat{\nu}_\lambda(\hat{y}, \eta \cup \{\hat{x}\} ) - \mathbb{E} \hat{\nu}_\lambda(\hat{x}, \eta)\, \mathbb{E} \hat{\nu}_\lambda (\hat{y}, \eta)]\,\hat{\mathbb{Q}}(\textup{d} \hat{x})\,\hat{\mathbb{Q}}(\textup{d} \hat{y})\\ &\quad \eqqcolon I_1^*(\lambda) + I_2^*(\lambda).\end{align*}

By the Hölder inequality, the moment condition on $\xi$, and the assumed exponential decay of the tail of the diameter of $C(\hat{x}, \eta)$, we have $\mathbb{E} |\hat{\nu}_\lambda(\hat{x}, \eta)|^p \leq c \exp\left( - \frac{1}{c} \, d(x, W_\lambda)^d \right)$ for some positive constant c. Then, similarly to Lemma 4.3, we may use the co-area formula to obtain $I_1^*(\lambda) = O(\lambda^{-1/d}).$
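A sketch of this co-area computation (assuming, as the rescalings in this section suggest, that $W_\lambda = \lambda^{1/d} W_1$ with $W_1$ a convex body of unit volume, so that the level sets $\{x\,{:}\, d(x,W_\lambda) = t\}$ have surface area of order $(\lambda^{1/d} + t)^{d-1}$, and using $\mathbb{E} \hat{\nu}_\lambda^2(\hat{x},\eta) \leq (\mathbb{E} |\hat{\nu}_\lambda(\hat{x},\eta)|^p)^{2/p}$ with constants absorbed into c) reads

\begin{equation*}I_1^*(\lambda) \leq c\, \lambda^{-1} \int_{W_\lambda^c} \exp\left( - \frac{1}{c}\, d(x, W_\lambda)^d \right) \textup{d} x = c\, \lambda^{-1} \int_0^\infty \mathcal{H}^{d-1}\big(\{x\,{:}\, d(x,W_\lambda) = t\}\big)\, \mathrm{e}^{-t^d/c}\, \textup{d} t = O\big(\lambda^{-1} \lambda^{(d-1)/d}\big) = O\big(\lambda^{-1/d}\big).\end{equation*}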

To bound $I_2^*(\lambda)$ we appeal to Lemma 4.3. Notice that $|\hat{\nu}_\lambda(\hat{x},\eta)| \leq 2|\xi(\hat{x},\eta)|$. Since the functionals $\hat{\nu}_\lambda$, $\lambda \geq 1$, are exponentially stabilizing with respect to $\eta$ and satisfy the p-moment condition for some $p \in (2, \infty)$, Lemma 4.4 yields

\begin{align*}& | \mathbb{E} \hat{\nu}_\lambda(\hat{x}, \eta \cup \{\hat{y}\} ) \hat{\nu}_\lambda(\hat{y}, \eta \cup \{\hat{x}\} ) - \mathbb{E} \hat{\nu}_\lambda(\hat{x}, \eta)\, \mathbb{E} \hat{\nu}_\lambda (\hat{y}, \eta)|\\&\qquad \leq c\,\left(\sup_{ \hat{x}, \hat{y} \in \hat{\mathbb{R}^d} } \mathbb{E} | \hat{\nu}_\lambda(\hat{x}, \eta \cup \{\hat{y}\} )|^p\right)^{\frac{2}{p}} \exp\left(-\frac{1}{c} \, \|x - y\|^\alpha \right).\end{align*}

Using this estimate we compute

\begin{align*}I_2^*(\lambda) &\leq \lambda^{-1} \int_{\hat{W}_\lambda^c} \int_{W_\lambda^c} c \,(\mathbb{E} | \hat{\nu}_\lambda(\hat{x}, \eta)|^{p})^{\frac{2}{p}} \exp\left(-\frac{1}{c}\, \|x - y\|^\alpha \right)\,\textup{d} y\,\hat{\mathbb{Q}}(\textup{d} \hat{x}) \\[3pt] &\leq c\, \lambda^{-1} \int_{\hat{W}_\lambda^c} ( \mathbb{E} | \hat{\nu}_\lambda(\hat{x}, \eta)|^{p})^{\frac{2}{p}} \int_{\mathbb{R}^d} \exp\left(-\frac{1}{c} \, \|x - y\|^\alpha\right)\,\textup{d} y\,\hat{\mathbb{Q}}(\textup{d} \hat{x}) \\[3pt] &\leq c\, \lambda^{-1} \int_{W_\lambda^c} \exp\left( -\frac{1}{c} \, d(x, W_\lambda)^d \right) \,\textup{d} x \int_{\mathbb{R}^d} \exp\left(-\frac{1}{c}\, \|y\|^\alpha\right)\,\textup{d} y.\end{align*}

Since $\int_{\mathbb{R}^d} \exp(- \| y\|^\alpha/c )\,\textup{d} y < \infty$ , we obtain

\begin{equation*}I_2^*(\lambda) \leq c\, \lambda^{-1} \int_{W_\lambda^c} \exp\left( -\frac{1}{c} \, d(x, W_\lambda)^d \right)\,\textup{d} x.\end{equation*}

Arguing as we did for $I_1^*(\lambda)$ we obtain $I_2^*(\lambda) = O(\lambda^{-1/d}).$

4.4. Proof of Theorem 2.3(ii)

Now we prove the central limit theorems for $H_\lambda(\eta \cap \hat{W}_\lambda)$ and $H_\lambda(\eta)$ . Let us first introduce some notation. Define, for any stationary marked point process $\mathcal{P}$ on $\hat{\mathbb{R}^d}$ ,

\begin{align*}\xi_\lambda (\hat{x}, \mathcal{P}) &\coloneqq \frac{\lambda\,\xi( \lambda^{1/d} \hat{x}, \lambda^{1/d} \mathcal{P}) } { \textrm{Vol}( W_\lambda \ominus C( \lambda^{1/d} \hat{x}, \lambda^{1/d} \mathcal{P}) ) } \, {\bf 1}\{ C(\lambda^{1/d} \hat{x}, \lambda^{1/d}\mathcal{P}) \subseteq W_\lambda \},\\\hat{\xi}_\lambda(\hat{x}, \mathcal{P}) & \coloneqq \xi_\lambda(\hat{x}, \mathcal{P})\,{\bf 1}\bigg\{ \textrm{Vol}( W_\lambda \ominus C( \lambda^{1/d} \hat{x}, \lambda^{1/d} \mathcal{P}) ) \geq \frac{\lambda} {2} \bigg\},\end{align*}

where $\lambda^{1/d}\hat{x} \coloneqq (\lambda^{1/d} x,m_x)$ and $\lambda^{1/d}\mathcal{P} \coloneqq \{\lambda^{1/d}\hat{x}\,{:}\, \hat{x} \in \mathcal{P}\}$ .

Put

\begin{align*}S_\lambda(\eta_\lambda \cap \hat{W}_1) \coloneqq \sum_{\hat{x} \in \eta_\lambda \cap \hat{W}_1} \xi_\lambda(\hat{x}, \eta_\lambda), & \qquad \hat{S}_\lambda(\eta_\lambda \cap \hat{W}_1) \coloneqq \sum_{\hat{x} \in \eta_\lambda \cap \hat{W}_1} \hat{\xi}_\lambda(\hat{x}, \eta_\lambda),\end{align*}

as well as

\begin{align*} S_\lambda(\eta_\lambda) \coloneqq \sum_{\hat{x} \in \eta_\lambda} \xi_\lambda(\hat{x}, \eta_\lambda), & \qquad \hat{S}_\lambda (\eta_\lambda) \coloneqq \sum_{\hat{x} \in \eta_\lambda} \hat{\xi}_\lambda(\hat{x}, \eta_\lambda).\end{align*}

Notice that

\begin{equation*}S_\lambda(\eta_\lambda \cap \hat{W}_1) \stackrel{\textrm{D}}{=} \lambda\,H_\lambda(\eta \cap \hat{W}_\lambda), \qquad S_\lambda(\eta_\lambda) \stackrel{\textrm{D}}{=} \lambda\,H_\lambda(\eta) \end{equation*}

and

\begin{equation*}\hat{S}_\lambda(\eta_\lambda \cap \hat{W}_1) \stackrel{\textrm{D}}{=} \lambda\,\hat{H}_\lambda(\eta \cap \hat{W}_\lambda), \qquad \hat{S}_\lambda(\eta_\lambda) \stackrel{\textrm{D}}{=} \lambda\,\hat{H}_\lambda(\eta)\end{equation*}

due to the distributional identity $\lambda^{1/d} \eta_\lambda \stackrel{\textrm{D}}{=} \eta_1$ . The reason for expressing the statistic $\lambda\,H_\lambda(\eta \cap \hat{W}_\lambda)$ in terms of the scores $\xi_\lambda(\hat{x}, \eta_\lambda)$ is that it puts us in a better position to apply the normal approximation results of [Reference Lachièze-Rey, Schulte and Yukich6] to the sums $S_\lambda(\eta_\lambda \cap \hat{W}_1)$ .

In particular, we appeal to Theorem 2.3 of [Reference Lachièze-Rey, Schulte and Yukich6], with s replaced by $\lambda$ there, to establish a central limit theorem for $\hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1)$ . Indeed, in that paper we may put $\mathbb{X}$ to be $\mathbb{R}^d$ , we let $\mathbb{Q}$ be Lebesgue measure on $\mathbb{R}^d$ so that $\eta_\lambda $ has intensity measure $\lambda \mathbb{Q}$ , and we put $K = W_1$ . We may write $\hat{S}_\lambda(\eta_\lambda \cap \hat{W}_1) = \sum_{\hat{x} \in \eta_\lambda \cap \hat{W}_1 } \hat{\xi}_\lambda(\hat{x}, \eta_\lambda)\,{\bf 1}\{x \in W_1\}.$ Note that $\hat{\xi}_\lambda(\hat{x}, \eta_\lambda){\bf 1}\{x \in W_1\}$ , $\hat{x} \in \hat{\mathbb{X}}$ , are exponentially stabilizing with respect to the input $\eta_\lambda$ , they satisfy the p-moment condition for some $p \in (4, \infty)$ , they vanish for $x \in W_1^c$ , and they (trivially) decay exponentially fast with respect to the distance to K. (Here, the notion of decaying exponentially fast with respect to the distance to K is defined at (2.8) of [Reference Lachièze-Rey, Schulte and Yukich6]; since the distance to K is zero for $x \in K$ this condition is trivially satisfied.) This makes $I_{K, \lambda} = \Theta(\lambda)$ , where $I_{K, \lambda}$ is defined in (2.10) of [Reference Lachièze-Rey, Schulte and Yukich6]. Thus, all conditions of Theorem 2.3 of [Reference Lachièze-Rey, Schulte and Yukich6] are fulfilled and we deduce a central limit theorem for $\hat{S}_\lambda(\eta_\lambda \cap \hat{W}_1)$ , and hence for $\hat{H}_\lambda(\eta \cap \hat{W}_\lambda)$ .

We may also apply Theorem 2.3 of [Reference Lachièze-Rey, Schulte and Yukich6] to show a central limit theorem for $\hat{S}_\lambda(\eta_\lambda)$ . For $x \in W_1^c$ we find the radius $D_x$ such that $C(\lambda^{1/d}\hat{x},\lambda^{1/d}\eta_\lambda) \subseteq B_{D_x}(\lambda^{1/d}x)$ . Then the score $\hat{\xi}_\lambda(\hat{x}, \eta_\lambda)$ vanishes if $D_x > d(\lambda^{1/d}x,W_\lambda)$ . As in Section 3, $D_x$ has exponentially decaying tails and thus $\hat{\xi}_\lambda$ decays exponentially fast with respect to the distance to K.

Let $d_\textrm{K}(X, Y)$ denote the Kolmogorov distance between random variables X and Y. Applying Theorem 2.3 of [Reference Lachièze-Rey, Schulte and Yukich6] we obtain

\begin{equation*} d_\textrm{K} \left( \frac{ \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1) - \mathbb{E} \hat{S}_\lambda(\eta_\lambda \cap \hat{W}_1) } { \sqrt{ \textrm{Var} \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1) }}, N(0,1) \right) \leq \frac{ c } { \sqrt{ \textrm{Var} \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1) }\,}\end{equation*}

and

\begin{equation*} d_\textrm{K} \left( \frac{ \hat{S}_\lambda (\eta_\lambda) - \mathbb{E} \hat{S}_\lambda(\eta_\lambda) } { \sqrt{ \textrm{Var} \hat{S}_\lambda(\eta_\lambda) }}, N(0,1) \right) \leq \frac{ c } { \sqrt{ \textrm{Var} \hat{S}_\lambda (\eta_\lambda) }\,}.\end{equation*}

Combining this with (2.6) and using $\textrm{Var} \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1) \geq c\, \lambda$ , we obtain, as $\lambda \to \infty$ ,

\begin{equation*} \frac{ \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1) - \mathbb{E} \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1)} { \sqrt{ \lambda}} \stackrel{\textrm{D}}{\longrightarrow} N(0, \sigma^2(\xi))\end{equation*}

and

\begin{equation*}\frac{ \hat{S}_\lambda (\eta_\lambda) - \mathbb{E} \hat{S}_\lambda (\eta_\lambda)} { \sqrt{ \lambda}} \stackrel{\textrm{D}}{\longrightarrow} N(0, \sigma^2(\xi)).\end{equation*}
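To make this passage from the Kolmogorov bounds to the stated limits explicit: since $\hat{S}_\lambda(\eta_\lambda \cap \hat{W}_1) \stackrel{\textrm{D}}{=} \lambda\,\hat{H}_\lambda(\eta \cap \hat{W}_\lambda)$, Lemma 4.5 gives $\textrm{Var} \hat{S}_\lambda(\eta_\lambda \cap \hat{W}_1)/\lambda \to \sigma^2(\xi)$, and we may factor

\begin{equation*}\frac{ \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1) - \mathbb{E} \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1)} { \sqrt{ \lambda}} = \sqrt{ \frac{\textrm{Var} \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1)}{\lambda} } \, \cdot \, \frac{ \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1) - \mathbb{E} \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1)} { \sqrt{ \textrm{Var} \hat{S}_\lambda (\eta_\lambda \cap \hat{W}_1)}},\end{equation*}

where the first factor converges to $\sigma(\xi)$ and the second converges in distribution to N(0,1) by the Kolmogorov bound. The same argument, now with Lemma 4.6, applies to $\hat{S}_\lambda(\eta_\lambda)$.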

To show that

(4.11) \begin{equation} \frac{ {S}_\lambda (\eta_\lambda \cap \hat{W}_1) - \mathbb{E} {S}_\lambda(\eta_\lambda \cap \hat{W}_1) } { \sqrt{ \lambda}} \stackrel{\textrm{D}}{\longrightarrow} N(0, \sigma^2(\xi)) \end{equation}

as $\lambda \to \infty$ , it suffices to show that $\lim_{\lambda \to \infty} \mathbb{E}| S_\lambda (\eta_\lambda \cap \hat{W}_1)- \hat{S}_\lambda(\eta_\lambda \cap \hat{W}_1)| = 0$ . Since $\mathbb{E}| S_\lambda (\eta_\lambda \cap \hat{W}_1) - \hat{S}_\lambda (\eta_\lambda\cap \hat{W}_1)| = \lambda\,\mathbb{E} |H _\lambda (\eta_\lambda \cap \hat{W}_\lambda) - \hat{H} _\lambda (\eta_\lambda \cap \hat{W}_\lambda)|$ , we may use Lemma 4.1 to prove (4.11). Likewise, to obtain the central limit theorem for $ {S}_\lambda (\eta_\lambda)$ , it suffices to show that $\lim_{\lambda \to \infty} \mathbb{E}| S_\lambda (\eta_\lambda)- \hat{S}_\lambda(\eta_\lambda)| = 0$ , which is a consequence of Lemma 4.2. Hence, we deduce from the central limit theorem for $\hat{S}_\lambda(\eta_\lambda)$ that, as $\lambda \to \infty$ ,

\begin{equation*} \frac{ {S}_\lambda(\eta_\lambda) - \mathbb{E} {S}_\lambda(\eta_\lambda) } { \sqrt{ \lambda}} \stackrel{\textrm{D}}{=} \sqrt{\lambda} \left( H_\lambda(\eta) - \mathbb{E}^0 h(K_{\bf 0}(\eta)) \right) \stackrel{\textrm{D}}{\longrightarrow} N(0, \sigma^2(\xi)).\end{equation*}

This completes the proof of Theorem 2.3(ii).

5. Proofs of Theorems 2.4 and 2.5

Before giving the proof of Theorem 2.4, we recall from Section 3 that translation invariant cell characteristics $\xi^{\rho_i}$ are exponentially stabilizing with respect to Poisson input $\eta$ . This allows us to apply Theorem 2.3 to cell characteristics of tessellations defined by $\rho_i$ , $i=1, 2, 3$ . For example, we can take $h(\!\cdot\!)$ to be either the volume or surface area of a cell or the radius of the circumscribed or inscribed ball.
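As a complement to these examples, the following is a minimal simulation sketch (ours, not part of the paper; the window size, padding, variable names, and the use of scipy are illustrative assumptions) of the minus-sampling estimator for the mean area of the typical cell of a planar unit-intensity Poisson–Voronoi tessellation: each cell contained in the window is weighted by the reciprocal of the eroded window volume, which for a box window is a product of side lengths.

\begin{verbatim}
# Simulation sketch (not from the paper): minus-sampling estimate of the
# mean area of the typical cell of a planar unit-intensity Poisson-Voronoi
# tessellation observed in the window W = [0, a]^2.  For a box window the
# erosion volume Vol(W eroded by C) is the product of (a - width_i(C))
# over the two coordinates.  All names and parameter values are illustrative.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(1)
a, pad = 25.0, 6.0                         # window side; padding so that cells
                                           # inside W are unaffected by the box edge
n = rng.poisson((a + 2 * pad) ** 2)        # unit-intensity Poisson number of points
pts = rng.uniform(-pad, a + pad, size=(n, 2))
vor = Voronoi(pts)

estimate = 0.0
for region_idx in vor.point_region:
    if region_idx == -1:                   # point without an associated region
        continue
    region = vor.regions[region_idx]
    if len(region) == 0 or -1 in region:   # skip unbounded cells
        continue
    verts = vor.vertices[region]
    if verts.min() < 0.0 or verts.max() > a:
        continue                           # minus-sampling: keep cells with C in W
    widths = verts.max(axis=0) - verts.min(axis=0)
    erosion_vol = np.prod(a - widths)      # Vol(W eroded by C) for the box window
    if erosion_vol <= 0.0:
        continue
    cell_area = ConvexHull(verts).volume   # in 2-d, .volume is the area of the cell
    estimate += cell_area / erosion_vol    # Horvitz-Thompson weighted score

# For unit intensity the mean area of the typical cell equals 1, so the
# estimate should be close to 1 for a large window.
print(estimate)
\end{verbatim}

The truncation with the indicator $\textrm{Vol}(W_\lambda \ominus C) \geq \lambda/2$ used for $\hat{H}_\lambda$ is omitted here for simplicity; for a unit-intensity process the target value is 1, which the estimate should approach as the window grows.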

5.1. Proof of Theorem 2.4

(i) The assertion of unbiasedness follows from Theorem 2.1.

(ii) To prove asymptotic normality, we write

    \begin{equation*}h(C^{\rho_i}(\hat{x}, \eta)) \coloneqq {\bf 1}\{\textrm{Vol} (C^{\rho_i}(\hat{x}, \eta)) \leq t \} \eqqcolon \varphi^{\rho_i}(\hat{x}, \eta).\end{equation*}
    To deduce (2.7) from Theorem 2.3(ii) we need only verify the p-moment condition for some $p \in (4, \infty)$ and the positivity of $\sigma^2(\varphi^{\rho_i})$. The moment condition holds for all $p \in [1, \infty)$ since $\varphi^{\rho_i}$ is bounded by 1. To verify the positivity of $\sigma^2(\varphi^{\rho_i})$, we recall Remark 2.1. More precisely, we may use Theorem 2.1 of [Reference Penrose and Yukich14] and show that there is an almost surely finite random variable S and a non-degenerate random variable $\Delta^{\rho_i}(\infty)$ such that, for all finite $\mathcal{A} \subseteq \hat{B}_S({\bf 0})^c$, we have
    \begin{align*}\Delta^{\rho_i}(\infty) = & \sum_{ \hat{x} \in (\eta \cap \hat{B}_S({\bf 0})) \cup \mathcal{A} \cup \{{\bf 0}_M \} }{\bf 1}\{\textrm{Vol} (C^{\rho_i}(\hat{x}, (\eta \cap \hat{B}_S({\bf 0})) \cup \mathcal{A} \cup \{{\bf 0}_M\} ))\leq t \} \\ & - \sum_{ \hat{x} \in (\eta \cap \hat{B}_S({\bf 0})) \cup \mathcal{A} } {\bf 1}\{\textrm{Vol} (C^{\rho_i}(\hat{x}, (\eta \cap \hat{B}_S({\bf 0})) \cup \mathcal{A} )) \leq t \}.\end{align*}
    We first explain the argument for the Voronoi case and then indicate how to extend it to treat Laguerre and Johnson–Mehl tessellations.

Let $t \in (0,\infty)$ be arbitrary but fixed. Let N be the smallest even integer larger than $4\sqrt{d}$; the reason for this choice will become clear later in the proof. For $L>0$ we consider a collection of $N^d$ cubes $Q_{L,1},\ldots, Q_{L,N^d}$ centered at points $x_i$, $i = 1,\ldots, N^d$, such that

(i) $Q_{L,i}$ has side length $\frac{L}{N}$, and

(ii) $\cup \{ Q_{L,i}, i=1, \ldots, N^d\} = \big[\!-\frac{L}{2}, \frac{L}{2}\big]^d$.

Put $\varepsilon_L \coloneqq L/(100N)$ and $\hat{Q}_{L,i} \coloneqq Q_{L,i} \times \mathbb{M}$. Define the event

\begin{equation*}E_{L, N} \coloneqq \left\{ |\eta \cap \hat{Q}_{L,i} \cap \hat{B}_{\varepsilon_L}(x_i)|=1,\, |\eta \cap \hat{Q}_{L,i} \cap \hat{B}_{\varepsilon_L}^c(x_i)|=0 \text{ for all }i=1,\ldots,N^d\right\}.\end{equation*}

Elementary properties of the Poisson point process show that $\mathbb{P}(E_{L,N}) > 0$ for all L and N.
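In fact, since the event constrains only the ground process, which is a unit-intensity Poisson process, and the cubes $Q_{L,i}$ are disjoint, one may write the probability explicitly (with $\kappa_d$ the volume of the unit ball, and noting that $B_{\varepsilon_L}(x_i) \subseteq Q_{L,i}$):

\begin{equation*}\mathbb{P}(E_{L,N}) = \prod_{i=1}^{N^d} \textrm{Vol}(B_{\varepsilon_L}(x_i))\, \mathrm{e}^{-\textrm{Vol}(Q_{L,i})} = \Big( \kappa_d\, \varepsilon_L^d\, \mathrm{e}^{-(L/N)^d} \Big)^{N^d} > 0.\end{equation*}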

On $E_{L,N}$ the faces of the tessellation restricted to $\big[\frac{-L} {2}, \frac{L} {2}\big]^d$ nearly coincide with the union of the boundaries of $Q_{L,i}, i=1, \ldots, N^d$ , and the cell generated by $\hat{x} \in \eta \cap \big[\frac{-L} {2} + \frac{L} {N}, \frac{L} {2} - \frac{L} {N}\big]^d$ is determined only by $\eta \cap(\cup \{Q_{L,j}, j \in I(\hat{x})\})$ , where $j \in I(\hat{x})$ if and only if $\hat{x} \in \hat{Q}_{L,j}$ or $\hat{Q}_{L,j} \cap \hat{Q}_{L,i}\neq \emptyset$ for i such that $\hat{x} \in \hat{Q}_{L,i}$ . Thus, inserting a point at the origin will not affect the cells far from the origin. More precisely, the cells around the points outside ${\hat R}_{L,N} \coloneqq \big[\!-\frac{2L} {N}, \frac{2L} {N} \big]^d \times \mathbb{M}$ are not affected by inserting a point at the origin. For $S_L \coloneqq L/2$ we have ${\hat R}_{L,N} \subseteq {\hat B}_{S_L}({\bf 0})$ due to our choice of the value N. Therefore,
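The inclusion ${\hat R}_{L,N} \subseteq {\hat B}_{S_L}({\bf 0})$ is where the choice of N enters: every point x of $R_{L,N} = \big[\!-\frac{2L}{N}, \frac{2L}{N}\big]^d$ satisfies

\begin{equation*}\|x\| \leq \sqrt{d}\, \frac{2L}{N} \leq \frac{2\sqrt{d}\,L}{4\sqrt{d}} = \frac{L}{2} = S_L,\end{equation*}

since $N \geq 4\sqrt{d}$.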

\begin{equation*}C^{\rho_1}(\hat{x},(\eta \cap \hat{B}_{S_L}({\bf 0})) \cup \mathcal{A} \cup \{{\bf 0}_M\}) =C^{\rho_1}(\hat{x},(\eta \cap \hat{B}_{S_L}({\bf 0})) \cup \mathcal{A})\end{equation*}

for any finite $\mathcal{A} \subseteq {\hat B}_{S_L}({\bf 0})^c$ and $\hat{x} \in (\eta \cap ({\hat B}_{S_L}({\bf 0}) \setminus {\hat R}_{L,N})) \cup \mathcal{A}$ . Consequently, on $E_{L,N}$ ,

\begin{align*}\Delta^{\rho_1}(\infty) = & \sum_{ \hat{x} \in (\eta \cap {\hat R}_{L,N}) \cup \{{\bf 0}_M \} }{\bf 1}\{\textrm{Vol} (C^{\rho_1}(\hat{x}, (\eta \cap \hat{B}_{S_L}({\bf 0})) \cup \mathcal{A} \cup \{{\bf 0}_M\} ))\leq t \} \\ & - \sum_{ \hat{x} \in \eta \cap {\hat R}_{L,N} } {\bf 1}\{\textrm{Vol} (C^{\rho_1}(\hat{x}, (\eta \cap \hat{B}_{S_L}({\bf 0})) \cup \mathcal{A} )) \leq t \}.\end{align*}

Figure 1 illustrates the difference appearing in $\Delta^{\rho_1}(\infty)$ on $E_{L,N}$ for $d=2$ . The cells generated by the points outside the square $\big[\!-\frac{2L} {N}, \frac{2L} {N}\big]^2$ are identical for both point configurations, whereas the cells generated by the points inside the square may differ.

On the event $E_{L,N}$ , the cell generated by $\hat{x} \in (\eta \cap {\hat R}_{L,N}) \cup \{{\bf 0}_M \}$ is contained in $\cup \{Q_{L,j}, j \in I(\hat{x})\}$ , and thus

\begin{equation*}\sup_{\hat{x} \in (\eta \cap {\hat R}_{L,N}) \cup \{{\bf 0}_M\}} \textrm{Vol}(C^{\rho_1}(\hat{x},(\eta \cap \hat{B}_{S_L}({\bf 0})) \cup \mathcal{A})) \leq \left(\frac{3L} {N}\right)^d.\end{equation*}

If $L \in (0, N t^{1/d}/3)$ , then all cell volumes in ${\hat R}_{L,N}$ are at most t; thus, $\Delta^{\rho_1}(\infty) = 1$ on the event $E_{L_1,N}$ with $L_1 \coloneqq \frac16 N t^{1/d}$ . Similarly,
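The arithmetic behind the choice of $L_1$ is that, for $L = L_1 = \frac16 N t^{1/d}$, the preceding display gives

\begin{equation*}\sup_{\hat{x} \in (\eta \cap {\hat R}_{L_1,N}) \cup \{{\bf 0}_M\}} \textrm{Vol}(C^{\rho_1}(\hat{x},(\eta \cap \hat{B}_{S_{L_1}}({\bf 0})) \cup \mathcal{A})) \leq \left(\frac{3L_1}{N}\right)^d = \frac{t}{2^d} \leq t,\end{equation*}

so every indicator in both sums equals 1 and only the extra term corresponding to ${\bf 0}_M$ remains.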

\begin{equation*}\inf_{\hat{x} \in (\eta \cap {\hat R}_{L,N}) \cup \{{\bf 0}_M\}} \textrm{Vol}(C^{\rho_1}(\hat{x}, (\eta \cap \hat{B}_{S_L}({\bf 0})) \cup \mathcal{A} \cup \{{\bf 0}_M\})) \geq\left(\frac{L}{3N}\right)^d.\end{equation*}

If $L \in (3N t^{1/d}, \infty)$ , then all the cell volumes in ${\hat R}_{L,N}$ exceed t and thus $\Delta^{\rho_1}(\infty) = 0$ on the event $E_{L_2,N}$ with $L_2 \coloneqq 6 N t^{1/d}$ . Taking $S \coloneqq S_{L_1}{\bf 1}\{E_{L_1,N}\} + S_{L_2}{\bf 1}\{E_{L_2,N}\}$ , we have found two disjoint events $E_{L_1,N}$ and $E_{L_2,N}$ , each having positive probability, such that $\Delta^{\rho_1} (\infty)$ takes different values on these events, and thus it is non-degenerate. Hence, $\sigma^2(\varphi^{\rho_1})>0$ and we can apply Theorem 2.3(ii).

Figure 1: Voronoi tessellations in $[\!-\frac{L} {2}, \frac{L} {2}]^2$ generated by $(\eta \cap \hat{B}_{S_L}({\bf 0})) \cup \mathcal{A}$ (left) and $(\eta \cap \hat{B}_{S_L}({\bf 0})) \cup \mathcal{A} \cup \{{\bf 0}_M\}$ (right). The ball $B_{S_L}({\bf 0})$ encloses the square $[\!-\frac{2L} {N}, \frac{2L} {N}]^2$, where $N = 10$.

To prove the positivity of $\sigma^2(\varphi^{\rho_2})$ and $\sigma^2(\varphi^{\rho_3})$ we shall consider a subset of $E_{L,N}$ . Assume there exists a parameter $\mu^* \in [0, \mu]$ and a small interval $I_{\alpha}(\mu^*) \subseteq [0, \mu]$ for some $\alpha \geq 0$ such that $\mathbb{Q}_{\mathbb{M}}(I_{\alpha}(\mu^*))>0$ . Define $\hat{E}_{L,N}$ to be the intersection of $E_{L,N}$ and the event $F_{L,N, \alpha}$ that the Poisson points in $[\!-L/2,L/2]^d$ have marks in $I_{\alpha}(\mu^*)$ . If $\alpha$ is small enough, then the Laguerre and Johnson–Mehl cells nearly coincide with the Voronoi cells on the event $\hat{E}_{L,N}$ . Consideration of the events $\hat{E}_{L_1,N}$ and $\hat{E}_{L_2,N}$ shows that $\Delta^{\rho_2} (\infty)$ and $\Delta^{\rho_3} (\infty)$ are non-degenerate, implying that $\sigma^2(\varphi^{\rho_2})>0$ and $\sigma^2(\varphi^{\rho_3}) > 0$ . Thus, Theorem 2.4 holds for the Laguerre and Johnson–Mehl tessellations.

Remark 5.1. In the same way, one can establish that Theorem 2.4 holds for any h taking the form

\begin{equation*}h(K) = \textbf{1} \{g(K) \leq t\} \quad \text{or} \quad h(K) = \textbf{1} \{g(K) > t\}\end{equation*}

for $t \in (0, \infty)$ fixed and $g\,{:}\,({\bf F}^d, \mathcal{B}({\bf F}^d)) \rightarrow (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ a homogeneous function of order q, i.e. $g(\alpha K) = \alpha^q g(K)$ for all $K \in \textbf{F}^d$ and $\alpha \in (0, \infty).$ Examples of the function g include (a) $g(K) \coloneqq \mathcal{H}^{d-1}(\partial K)$ , (b) $g(K) \coloneqq \text{diam}(K)$ , (c) $g(K) \coloneqq \text{radius}$ of the circumscribed ball of K, and (d) $g(K) \coloneqq \text{radius}$ of the inscribed ball of K.
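For instance, the homogeneity orders in these examples are immediate from the scaling of Hausdorff measure and of distances:

\begin{equation*}\mathcal{H}^{d-1}(\partial(\alpha K)) = \alpha^{d-1}\, \mathcal{H}^{d-1}(\partial K), \qquad \text{diam}(\alpha K) = \alpha\, \text{diam}(K),\end{equation*}

so that $q = d-1$ in example (a) and $q = 1$ in examples (b), (c), and (d).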

5.2. Proof of Theorem 2.5

The unbiasedness is again a consequence of Theorem 2.1. To prove the asymptotic normality, we need to check the p-moment condition for

\begin{equation*}\xi^{\rho_i}(\hat{x}, \eta) \coloneqq \mathcal{H}^{d-1}(\partial C^{\rho_i} (\hat{x}, \eta)){\bf 1}\{C^{\rho_i}(\hat{x},\eta) \text{ is bounded}\}\end{equation*}

and the positivity of $\sigma^2(\xi^{\rho_i})$ , $i=1, 2, 3$ .

First, we verify the moment condition with $p=5$ . Given any $\hat{x},\hat{y} \in \hat{\mathbb{R}^d}$ , we assert that $\mathbb{E}^{\hat{x},\hat{y}} \mathcal{H}^{d-1}(\partial C^{\rho_i}(\hat{x},\eta))^5= \mathbb{E} \mathcal{H}^{d-1}(\partial C^{\rho_i}(\hat{x},\eta \cup \{\hat{y}\}))^5 \leq c < \infty$ for some constant c that does not depend on $\hat{x}$ and $\hat{y}$ . From Proposition 3.2 there is a random variable $R_{\hat{x}}$ such that

\begin{equation*}C^{\rho_i}(\hat{x},\eta \cup \{\hat{y}\}) = \bigcap_{\hat{z} \in (\eta \cup \{\hat{y}\} \setminus \{\hat{x}\}) \cap \hat{B}_{R_{\hat{x}}}(x)} \mathbb{H}_{\hat{z}}(\hat{x}).\end{equation*}

As in Proposition 3.1 we find $D_{\hat{x}}$ such that $C^{\rho_i}(\hat{x},\eta \cup \{\hat{y}\}) \subseteq B_{D_{\hat{x}}}(\hat{x})$ . Then

\begin{align*}\mathcal{H}^{d-1}(\partial C^{\rho_i}(\hat{x},\eta \cup \{\hat{y}\})) &\leq \sum_{\hat{z} \in (\eta \cup \{\hat{y}\} \setminus \{\hat{x}\}) \cap \hat{B}_{R_{\hat{x}}}(x)}\mathcal{H}^{d-1}(\partial \mathbb{H}_{\hat{z}}(\hat{x}) \cap B_{D_{\hat{x}}}(\hat{x})) \\&\leq c_{i,d} D_{\hat{x}}^{d-1} \eta(\hat{B}_{R_{\hat{x}}}(x))\end{align*}

for some constant $c_{i,d}$ that depends only on i and d. Using the Cauchy–Schwarz inequality we get

\begin{equation*}\mathbb{E} \mathcal{H}^{d-1}(\partial C^{\rho_i}(\hat{x},\eta \cup \{\hat{y}\}))^5 \leq c_{i,d}^5 (\mathbb{E} D_{\hat{x}}^{10(d-1)})^{1/2} (\mathbb{E} \eta(\hat{B}_{R_{\hat{x}}}(x))^{10})^{1/2}.\end{equation*}

By the moment properties of the Poisson distribution we have

\begin{equation*}\mathbb{E} \eta(\hat{B}_{R_{\hat{x}}}(x))^{10} = \mathbb{E} (\mathbb{E} (\eta(\hat{B}_{R_{\hat{x}}}(x))^{10} \mid R_{\hat{x}})) = \mathbb{E} P(\textrm{Vol}(B_{R_{\hat{x}}}(x))),\end{equation*}

where $P(\!\cdot\!)$ is a polynomial of degree 10. Both $D_{\hat{x}}$ and $R_{\hat{x}}$ have exponentially decaying tails, and the decay does not depend on $\hat{x}$ . Therefore, $(\mathbb{E} D_{\hat{x}}^{10(d-1)})^{1/2} (\mathbb{E} \eta(\hat{B}_{R_{\hat{x}}}(x))^{10})^{1/2}$ is bounded and the moment condition is satisfied with $p=5$ .
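For completeness, the finiteness of these moments follows from the standard tail integration formula: if T is a non-negative random variable (here either $D_{\hat{x}}$ or $R_{\hat{x}}$) with $\mathbb{P}(T > t) \leq c\, \mathrm{e}^{-t^{\beta}/c}$ for some $\beta > 0$ (our notation for the exponential tail decay), then for every $q \geq 1$

\begin{equation*}\mathbb{E} T^{q} = q \int_0^\infty t^{q-1}\, \mathbb{P}(T > t)\, \textup{d} t \leq q\, c \int_0^\infty t^{q-1}\, \mathrm{e}^{-t^{\beta}/c}\, \textup{d} t < \infty,\end{equation*}

uniformly in $\hat{x}$ because the constants do not depend on $\hat{x}$.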

The positivity of the asymptotic variance can be shown similarly to the proof of Theorem 2.4. We will show it only for the Voronoi case, as the Laguerre and Johnson–Mehl tessellations can be treated similarly. We will again find an almost surely finite random variable S and a random variable $\Delta^{\rho_1}(\infty)$ such that, for all finite $\mathcal{A} \subseteq \hat{B}_S({\bf 0})^c$, we have

\begin{align*}\Delta^{\rho_1}(\infty) = & \sum_{ \hat{x} \in (\eta \cap \hat{B}_S({\bf 0})) \cup \mathcal{A} \cup \{{\bf 0}_M \} } \xi^{\rho_1} (\hat{x}, (\eta \cap \hat{B}_S({\bf 0})) \cup \mathcal{A} \cup \{{\bf 0}_M\} ) \\[3pt] & - \sum_{ \hat{x} \in (\eta \cap \hat{B}_S({\bf 0})) \cup \mathcal{A} } \xi^{\rho_1} (\hat{x}, (\eta \cap \hat{B}_S({\bf 0})) \cup \mathcal{A} )\end{align*}

and, moreover, $\Delta^{\rho_1}(\infty)$ assumes different values on two events having positive probability and is thus non-degenerate. By Theorem 2.1 of [Reference Penrose and Yukich14], this is enough to show the positivity of $\sigma^2(\xi^{\rho_1})$ .

Let $L>0$ and let $N \in \mathbb{N}$ have odd parity. Abusing notation, we construct a collection of $N^d$ cubes $Q_{L,1}, \ldots, Q_{L,N^d}$ centered around $x_i \in \mathbb{R}^d$ , $i= 1, \ldots, N^d$ , such that

(i) $Q_{L,i}$ has side length $\frac{L}{N}$, and

(ii) $\cup \{ Q_{L,i}, i=1, \ldots, N^d\} = \big[\!-\frac{L}{2}, \frac{L}{2}\big]^d$.

There is a unique index $i_0 \in \{1, \ldots, N^d\}$ such that $x_{i_0} = {\bf 0}$ . We define $\varepsilon_L, \hat{Q}_{L,i}$ and the event $E_{L,N}$ as in the proof of Theorem 2.4. Note that, under $E_{L,N}$ ,

\begin{equation*}\inf_{(x, m_x) \in \eta \cap \hat{Q}_{L,i_0} } \|x\| \leq \varepsilon_L.\end{equation*}

Hence, on the event $E_{L,N}$ , the insertion of the origin into the point configuration creates a new face of the tessellation whose surface area is bounded below by $c_{\min} (L/N)^{d-1}$ and bounded above by $c_{\max} (L/N)^{d-1}$ . Thus,

\begin{equation*}c_{\min} \left(\frac{L}{N}\right)^{d-1} + O\left(\varepsilon_L\left(\frac{L}{N}\right)^{d-2}\right)\leq \Delta^{\rho_1}(\infty) \leq c_{\max} \left(\frac{L}{N}\right)^{d-1} - O\left(\varepsilon_L\left(\frac{L}{N}\right)^{d-2}\right),\end{equation*}

where $O\big(\varepsilon_L\left(\frac{L}{N}\right)^{d-2}\big)$ is the change in the combined surface areas of the already existing faces after inserting the origin. Events $E_{L_1,N}, E_{L_2, N}$ , $L_1 < L_2$ , both occur with positive probability for any $L_1, L_2$ . Similarly to the proof of Theorem 2.4, we can find N, S, $L_1$ , and $L_2$ ( $L_2-L_1$ large enough) such that the value of $\Delta^{\rho_1}(\infty)$ differs on each event. Thus, $\sigma^2(\xi^{\rho_1})$ is strictly positive.

To show that $\sigma^2(\xi^{\rho_2})$ and $\sigma^2(\xi^{\rho_3})$ are strictly positive we argue as follows. The Laguerre and Johnson–Mehl tessellations are close to the Voronoi tessellation on the event $F_{L,N, \alpha}$ , for $\alpha$ small. Arguing as we did in the proof of Theorem 2.4, and considering the event $\hat{E}_{L,N}$ given in the proof of that theorem, we may conclude that $\sigma^2(\xi^{\rho_2})>0$ and $\sigma^2(\xi^{\rho_3}) > 0$ .

Acknowledgements

The research of Flimmel and Pawlas is supported by the Czech Science Foundation, project 17-00393J, and by Charles University, project SVV 2017 No. 260454. The research of Yukich is supported by a Simons Collaboration Grant. He thanks Charles University for its kind hospitality and support.

References

Baddeley, A. J. (1999). Spatial sampling and censoring. In Stochastic Geometry: Likelihood and Computation, eds O. E. Barndorff-Nielsen, W. S. Kendall and M. N. M. Van Lieshout. Chapman and Hall, London, pp. 37–78.
Baryshnikov, Yu. and Yukich, J. E. (2005). Gaussian limits for random measures in geometric probability. Ann. Appl. Prob. 15, 213–253.
Beneš, V. and Rataj, J. (2004). Stochastic Geometry: Selected Topics. Kluwer Academic Publishers, Boston.
Błaszczyszyn, B., Yogeshwaran, D. and Yukich, J. E. (2019). Limit theory for geometric statistics of point processes having fast decay of correlations. To appear in Ann. Prob.
Chiu, S. N., Stoyan, D., Kendall, W. S. and Mecke, J. (2013). Stochastic Geometry and its Applications, 3rd edn. Wiley, Chichester.
Lachièze-Rey, R., Schulte, M. and Yukich, J. E. (2019). Normal approximation for stabilizing functionals. Ann. Appl. Prob. 29, 931–993.
Lautensack, C. and Zuyev, S. (2008). Random Laguerre tessellations. Adv. Appl. Prob. 40, 630–650.
McGivney, K. and Yukich, J. E. (1999). Asymptotics for Voronoi tessellations on random samples. Stoch. Process. Appl. 83, 273–288.
Miles, R. E. (1974). On the elimination of edge-effects in planar sampling. In Stochastic Geometry: A Tribute to the Memory of Rollo Davidson, eds E. F. Harding and D. G. Kendall. John Wiley and Sons, London, pp. 228–247.
Møller, J. (1992). Random Johnson–Mehl tessellations. Adv. Appl. Prob. 24, 814–844.
Okabe, A., Boots, B., Sugihara, K. and Chiu, S. N. (2000). Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, 2nd edn. Wiley, Chichester.
Penrose, M. D. (2007). Gaussian limits for random geometric measures. Electron. J. Prob. 12, 989–1035.
Penrose, M. D. (2007). Laws of large numbers in stochastic geometry with statistical applications. Bernoulli 13, 1124–1150.
Penrose, M. D. and Yukich, J. E. (2001). Central limit theorems for some graphs in computational geometry. Ann. Appl. Prob. 11, 1005–1041.
Penrose, M. D. and Yukich, J. E. (2003). Weak laws of large numbers in geometric probability. Ann. Appl. Prob. 13, 277–303.
Schneider, R. and Weil, W. (2008). Stochastic and Integral Geometry. Springer, Berlin.