
Expansions for the linear-elastic contribution to the self-interaction force of dislocation curves

Published online by Cambridge University Press:  20 October 2021

PATRICK VAN MEURS*
Affiliation:
Faculty of Mathematics and Physics, Kanazawa University, Kanazawa, Japan email: pjpvmeurs@gmail.com

Abstract

The self-interaction force of dislocation curves in metals depends on the local arrangement of the atoms and on the non-local interaction between dislocation curve segments. While these non-local segment–segment interactions can be accurately described by linear elasticity when the segments are further apart than the atomic scale of size $\varepsilon$ , this model breaks down and blows up when the segments are $O(\varepsilon)$ apart. To separate the non-local interactions from the local contribution, various models depending on $\varepsilon$ have been constructed to account for the non-local term. However, there are no quantitative comparisons available between these models. This paper makes such comparisons possible by expanding the self-interaction force in these models in $\varepsilon$ beyond the O(1)-term. Our derivation of these expansions relies on asymptotic analysis. The practical use of these expansions is demonstrated by developing numerical schemes for them, and by – for the first time – bounding the corresponding discretisation error.

Type
Papers
Copyright
© The Author(s), 2021. Published by Cambridge University Press

1 Introduction

Dislocations in metals are curve-like defects in the atomic lattice. The emergent group behaviour of many dislocations is the driving mechanism of plastic deformation of metals. One of the reasons that describing plastic deformation is an active field of research is that there is no consensus on a description for the self-interaction force of a dislocation curve. Such a description has been sought for more than half a century; we cite several papers in the remainder of the introduction. The aim of this paper is to contribute towards reaching consensus by developing a framework for computing expansions for several descriptions of the self-interaction force.

To explain the difficulty of describing the self-interaction force of a dislocation curve, we first introduce the setting. To avoid long formulas and to keep the focus on the methodology, we consider the simple setting where $\mathbb R^2$ represents an isotropic elastic medium with shear modulus $\mu > 0$ and Poisson ratio $\nu \in (-1, \frac12)$. We consider a dislocation loop represented by a closed regular curve $\Gamma \subset \mathbb R^2$ and set $b = e_1$ as its Burgers vector, where $\{e_1, e_2\}$ is the standard basis in $\mathbb R^2$. For readers unfamiliar with dislocations and their Burgers vector, we refer to the textbooks [HB11, HL82]. For $x \in \Gamma$, we set

(1.1) \begin{equation} \kappa(x), \quad \tau(x) = \begin{bmatrix} -\sin \phi(x) \\[4pt] \cos \phi(x) \end{bmatrix}, \quad n(x) = \begin{bmatrix} \cos \phi(x) \\[4pt] \sin \phi(x) \end{bmatrix},\end{equation}

as respectively the curvature, tangent vector (counter-clockwise direction) and outward pointing normal vector of $\Gamma$ at x. Figure 1 illustrates the setting. Our sign convention is that $\kappa(x) < 0$ when $\Gamma$ is a circle.

Figure 1. Left: sketch of $\Gamma$ . Right: sketch of $\Gamma_\varepsilon(x)$ .

Next, we describe the complexity of the self-interaction force on the dislocation $\Gamma$ at x. For practical applications such as dislocation dynamics, it is sufficient to focus on the (scalar-valued) normal component of this force. For brevity, in the remainder, we simply refer to this normal component as the self-interaction force. The reader familiar with the self-interaction force will recognise formulas (1.2)–(1.6) below and can skip the following explanatory text up to (1.6).

Instead of the self-interaction force on $\Gamma$ at x, let us first consider the easier expression for the force that the second dislocation loop $\tilde \Gamma$ with the same Burgers vector b exerts on $\Gamma$ at x. We assume that $\tilde \Gamma$ is far enough away from $\Gamma$ such that the interaction force is accurately described by linear elasticity. Then, the force $\tilde F(x)$ exerted by $\tilde \Gamma$ on $\Gamma$ at x is found by first computing the stress in the medium at x caused by $\tilde \Gamma$ (see, e.g. [dW60, (7.4)]), and then by applying the Peach–Koehler formula [PK50] to convert this stress into a force. This yields

(1.2) \begin{equation} \tilde F(x) := \oint_{\tilde \Gamma} G(y-x) \cdot \tau (y) \, dy,\end{equation}

where we have normalised the multiplicative constant to 1, and

\begin{equation*} G(z) := \left( \frac{z^T}{ |z|^3 } \begin{bmatrix} 0 & \quad -1 \\[4pt] 1-\nu & \quad 0 \end{bmatrix} \right)^T = \frac1{ |z|^3 } \begin{bmatrix} (1-\nu) z_2 \\[4pt] - z_1 \end{bmatrix}.\end{equation*}

Due to the singularity of G(z) at $z = 0$, the function $\tilde F : \mathbb R^2 \to \mathbb R$ is singular on $\tilde \Gamma$. This stems from the fact that linear elasticity is a poor model for describing the stress field in the dislocation core. The dislocation core is the tubular neighbourhood with a radius of about 5 atoms ($\sim 1$ nm) around a dislocation. In particular, this means that (1.2) cannot be used to describe the force that $\Gamma$ exerts on itself at any point $x \in \Gamma$.

However, in practice, the width of the dislocation core is much smaller than the diameter of $\Gamma$ , which we take for the sake of explanation to be the typical value of $10 \mu$ m. Then, scaling the diameter of the dislocation loop to 1 dimensionless unit, the radius of the dislocation core is

(1.3) \begin{equation} \varepsilon \sim 10^{-4}.\end{equation}

In the remainder, we regard $\varepsilon$ as a generic small parameter which indicates the length scale where linear elasticity breaks down. We leave the choice of $\varepsilon$ as a modelling issue.

The easiest modelling choice for avoiding the breakdown of linear elasticity inside the dislocation core is to neglect the contribution of the dislocation core to the self-interaction force. To describe the resulting force, we set

\begin{equation*} \Gamma_\varepsilon(x) := \Gamma \setminus B(x, \varepsilon);\end{equation*}

see Figure 1 for a sketch. Then, (1.2) implies that

(1.4) \begin{equation} F_\varepsilon(x) := \int_{\Gamma_\varepsilon(x)} G(y-x) \cdot \tau (y) \, dy\end{equation}

gives an acceptable approximation of the force exerted on $\Gamma$ at x by $\Gamma_\varepsilon(x)$. This model can be made more accurate by coupling it to an atomistic model which accounts for the local contribution of $\Gamma \cap B(x, \varepsilon)$; see for example, [GHHK72, HH05, Clo11].

In this paper, we focus on the contribution from linear elasticity to the self-interaction force on $\Gamma$. Since there is no clear splitting of the self-interaction force in terms of a linear-elastic part and microscopic contributions, there are several choices on how to define the linear elastic part. (1.4) is one such choice; it dates back at least to [HL82]. Other choices are derived by first smearing out the singularity of the stress field within the dislocation core, and then by computing the force from this regularised stress field. This is a popular approach, because it includes a phenomenological model for the dislocation core, which then needs no additional treatment. The smearing out of the dislocation core has been done either with a (singular) convolution kernel (see [CAWB06] for a short review), or with a phase field approach (see [KEZ11] for a review). In particular, we focus on the specific convolution kernel constructed in [CAWB06]. The feature of this convolution kernel is that the convolution integral in the related expression for G can be computed explicitly; see [CAWB06, (33)]. The resulting self-interaction force is given by:

(1.5) \begin{align} \mathcal F_\varepsilon(x) = \oint_\Gamma G_\varepsilon(y-x) \cdot \tau (y) \, dy,\end{align}

where

(1.6) \begin{align} G_\varepsilon(z)^T := \frac{z^T}{ \sqrt{ |z|^2 + \varepsilon^2 }^3 } \begin{bmatrix} 0 & \quad -1 \\[4pt] 1-\nu & \quad 0 \end{bmatrix} + \frac{3 \varepsilon^2 z^T}{ 2 \sqrt{ |z|^2 + \varepsilon^2 }^5 } \begin{bmatrix} 0 & \quad 0 \\[4pt] 1 - \nu & \quad 0 \end{bmatrix}.\end{align}

The difference with (1.4) is that the dependence on $\varepsilon$ in the integral in (1.5) has shifted from the integration domain to the integrand.
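
For concreteness, the two kernels can be compared numerically. The following minimal Python sketch (the Poisson ratio $\nu = 0.3$ and the sample points are illustrative choices, not taken from the paper) evaluates $G$ and $G_\varepsilon$ and shows that $G_\varepsilon$ remains bounded at $z = 0$, while the two kernels differ only by a small amount away from the core.

```python
import numpy as np

nu, eps = 0.3, 1e-3   # illustrative Poisson ratio and core parameter

def G(z):
    # kernel from the display below (1.2), multiplicative constant normalised to 1
    return np.array([(1 - nu) * z[1], -z[0]]) / np.linalg.norm(z) ** 3

def G_eps(z):
    # regularised kernel (1.6)
    R = np.sqrt(z @ z + eps ** 2)
    return (np.array([(1 - nu) * z[1], -z[0]]) / R ** 3
            + 1.5 * eps ** 2 * np.array([(1 - nu) * z[1], 0.0]) / R ** 5)

print(G_eps(np.array([0.0, 0.0])))                            # finite at the core
print(G(np.array([0.0, 1.0])), G_eps(np.array([0.0, 1.0])))   # differ by O(eps^2)
```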

To summarise the above, $F_\varepsilon(x)$ and $\mathcal F_\varepsilon(x)$ are two of the many choices for the (linear-elastic contribution of the) self-interaction force on $\Gamma$ at x. While attempts have been made to compare different choices (see, e.g. [LeS04, CAWB06]), the comparisons mainly rely on unquantifiable notions and practical experiences when using them in numerical computations. To quantify the differences, one needs expansions of the self-interaction force in terms of $\varepsilon$. Constructing such expansions is the first of the two aims of this paper. The second aim is to use these expansions to construct accurate schemes by which they can be computed numerically. We pursue these two aims, respectively, in Sections 1.1 and 1.2.

1.1 The expansions of $\textbf{F}_{\boldsymbol\varepsilon}$ and $\boldsymbol{\mathcal F}_{\boldsymbol\varepsilon}$

Expansions for forces such as $F_\varepsilon$ and $\mathcal F_\varepsilon$ in terms of $\varepsilon$ have been constructed in [GB76, Lot92, ZWX12] (each paper considers a different core regularisation, none of which equals $F_\varepsilon$ or $\mathcal F_\varepsilon$), and in a more general setting than in this paper (either in three dimensions or for an anisotropic medium). In all three works, the self-interaction force (reduced to our setting) expands as:

(1.7) \begin{equation} \kappa(x) \big( 1 + \nu - 3 \nu \cos^2 \phi(x) \big) \log \frac{1}{\varepsilon} + O(1),\end{equation}

as $\varepsilon \to 0$. The leading order term of $O(|\log \varepsilon|)$ is consistent across all three works and also coincides with the term obtained from phase-field models.

However, it is desirable to specify the O(1) term too. Indeed, for the typical value of $\varepsilon$ in (1.3), the O(1) term may not be significantly smaller than the $O(|\log \varepsilon|)$ term. In particular, for parts of $\Gamma$ which are approximately straight (i.e. where $|\kappa|$ is small), the prefactor of the $O(|\log \varepsilon|)$ term is small, whereas the non-local contribution to the interaction force (which contributes to the O(1) term) does not, in general, decay as $|\kappa| \to 0$.

Yet, the treatment and expression of the O(1) term are quite different in the three works cited above. In [Lot92, (156)], this term is left unspecified. In [GB76, (7.1)], only a part of this term is specified. In [ZWX12, (44)], the self-interaction force of the curve segment $\Gamma \cap B(x, \varepsilon')$ at x is expanded up to an error of $O(\varepsilon' + \varepsilon/(\varepsilon')^2)$, where $\varepsilon'$ is a phenomenological mesoscopic length scale which satisfies $\sqrt \varepsilon \ll \varepsilon' \ll 1$. The related O(1) term is explicit, but local (i.e. it depends on $\Gamma$ only through n(x) and $\kappa(x)$ at x). The contribution of $\Gamma_{\varepsilon'}(x)$ to the self-interaction force is not expanded.

Hence, the results in the literature that go beyond an unspecified O(1) term in (1.7) are very limited. Our first main contribution (Theorem 1.1) extends these results by characterising completely the O(1) term in the expansions of $F_\varepsilon(x)$ and $\mathcal F_\varepsilon(x)$ , and by showing that the next order term is $O(\varepsilon)$ . To state this result, we set

\begin{equation*} \mathcal C(x) := \left\{ \begin{aligned} &\{ t \tau(x) : t \in \mathbb R \} &&\text{if } \kappa(x) = 0 \\[4pt] &\partial B \left( x + \frac{ n(x) }{ \kappa(x) }, \frac1{|\kappa(x)|} \right) &&\text{otherwise}, \end{aligned} \right.\end{equation*}

as the tangent circle of $\Gamma$ at x. Similar to $\Gamma_\varepsilon(x)$ , we set

\begin{equation*} \mathcal C_\varepsilon(x) := \mathcal C(x) \setminus B(x, \varepsilon).\end{equation*}

Theorem 1.1 (Expansions of $F_\varepsilon(x)$ and $\mathcal F_\varepsilon(x)$ ). Let $\Gamma$ be a non-intersecting closed curve of class $C^3$ with finite length. Then, for all $x \in \Gamma$ , the following limit exists

(1.8) \begin{equation} \Psi(x) := \lim_{\varepsilon \to 0} \int_{\Gamma_\varepsilon(x) \cup \mathcal C_\varepsilon(x)} G(y-x) \cdot \tau (y) \, dy,\end{equation}

and

(1.9) \begin{align} F_\varepsilon(x) = \kappa(x) A_{\phi(x)} \log \frac1 {\varepsilon |\kappa(x)|} + \kappa(x) B_{\phi(x)} + \Psi(x) + O (\varepsilon), \end{align}
(1.10) \begin{align} \mathcal F_\varepsilon(x) = F_\varepsilon(x) + \kappa(x) C_{\phi(x)} + O (\varepsilon),\end{align}

as $\varepsilon \to 0$ , where

(1.11a) \begin{align} A_{\phi(x)} := 1 + \nu - 3 \nu \cos^2 \phi(x),\end{align}
(1.11b) \begin{align} B_{\phi(x)} := 2 \big( \log 2 - (1 - \log 2) \nu - (3 \log 2 - 2) \nu \cos^2 \phi(x) \big),\end{align}
(1.11c) \begin{align} C_{\phi(x)} := \frac12 \big( -3 - \nu + 3 (1 + \nu) \cos^2 \phi(x) \big),\end{align}

and the $O(\varepsilon)$ terms are uniformly bounded in x.

Idea of the proof. The main idea for expanding $F_\varepsilon (x)$ is to add and subtract integration over $\mathcal C_\varepsilon(x)$. This idea dates back to [GB76]. The integral of $G \cdot \tau$ over $\mathcal C_\varepsilon(x)$ can be expanded explicitly in terms of $\varepsilon$, which gives rise to the first two terms (i.e. the local contribution) in the right-hand side of (1.9). The remaining two terms are the integrals over $\Gamma_\varepsilon(x)$ and $\mathcal C_\varepsilon(x)$, which are given in the right-hand side of (1.8). The leading order terms in the expansion of these two terms cancel out, and the next order terms turn out to be (relying on Stokes’ Theorem) a Cauchy sequence in $\varepsilon$, which implies the existence of the limit in (1.8). In Remark 2.1, we specify the extension of $\tau$ from $\Gamma$ to $\mathcal C(x)$ and the convergence rate of the limit $\varepsilon \to 0$ in (1.8).

For the expansion of $\mathcal F_\varepsilon (x)$ , we add and subtract integration over $\mathcal C(x)$ . Since $G_\varepsilon$ is regular, there is no need to remove $B(x,\varepsilon)$ from the curves. The integral over $\mathcal C(x)$ can be expanded explicitly, which gives rise not only to the first two terms in the right-hand side of (1.9), but also to the new term $\kappa(x) C_{\phi(x)}$ appearing in (1.10). To expand the joint integral over $\Gamma$ and $\mathcal C(x)$ , we show that it is close in value to the integrals of the $\varepsilon$ -independent integrand $G(y-x) \cdot \tau (y)$ over $\Gamma_\varepsilon(x)$ and $\mathcal C_\varepsilon(x)$ . Then, the same value $\Psi(x)$ appears naturally in the limit $\varepsilon \to 0$ , and we obtain the following alternative characterisation of $\Psi$ :

(1.12) \begin{equation} \Psi(x) = \lim_{\varepsilon \to 0} \oint_{\Gamma \cup \mathcal C(x)} G_\varepsilon(y-x) \cdot \tau (y) \, dy.\end{equation}

Remarks. We remark on two aspects of Theorem 1.1. First, instead of using $\mathcal C_\varepsilon(x)$ in (1.8), there are many other choices to cancel out the singularity of the integrand at $y = x$ . The benefit of using $\mathcal C(x)$ is that it depends on $\Gamma$ only through the local information x, $\tau(x)$ and $\kappa(x)$ , and that the integral of $G(y-x) \cdot \tau (y)$ over $\mathcal C_\varepsilon(x)$ can be explicitly expanded in terms of $\varepsilon$ .

Second, part of the error $O(\varepsilon)$ comes from a Taylor approximation of $\Gamma$ at x, which we can expand up to third order thanks to the assumption that $\Gamma$ is of class $C^3$. If $\Gamma$ were only of class $C^{2,\alpha}$ for some $\alpha \in (0,1)$ specifying the Hölder exponent of the second derivative, then the error term would become $O(\varepsilon^\alpha)$.

Contribution. The two main contributions of Theorem 1.1 to the literature are as follows. First, the relative error of the expansions is $O(\varepsilon)$ , while the relative error in (1.7) is $O(1 / |\log \varepsilon|)$ . Especially in view of the typical value $\varepsilon \sim 10^{-4}$ , this is a huge improvement.

Second, the difference between the two formulas for the forces is

(1.13) \begin{equation} \mathcal F_\varepsilon(x) - F_\varepsilon(x) = \kappa(x) C_{\phi(x)} + O(\varepsilon),\end{equation}

as $\varepsilon \to 0$. The local nature of this difference stems from the result that the non-local term $\Psi(x)$ can be expressed in terms of the modelling assumptions that underlie either $\mathcal F_\varepsilon(x)$ or $F_\varepsilon(x)$; see (1.8) and (1.12).

The value in (1.13) quantifies the difference between two inherently different models for the self-interaction force. Since this difference is a local term which is relatively small with respect to the leading order term in (1.9), there is no reason to believe that one of the two models is superior to the other. This gives a new perspective to the comparisons made in [CAWB06, Sect. 3]. In fact, in the next section, we will see that the self-interaction force in both models can be discretised in a similar manner.
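
As a quick sanity check of (1.9), not contained in the original text, consider the unit circle with $\nu = 0.3$ (an illustrative value). There $\kappa \equiv -1$ and, since the tangent circle coincides with $\Gamma$ and is traversed in the opposite direction in (1.8), $\Psi \equiv 0$, so the expansion reduces to $-A_{\phi}\log\frac1\varepsilon - B_{\phi}$. The following Python sketch compares this with a direct quadrature of (1.4) at the point $x = (1,0)$, where $\phi = 0$:

```python
import numpy as np
from scipy.integrate import quad

nu, eps = 0.3, 1e-3   # illustrative Poisson ratio and core cut-off

def G(z):
    # kernel from the display below (1.2), multiplicative constant normalised to 1
    return np.array([(1 - nu) * z[1], -z[0]]) / np.linalg.norm(z) ** 3

def F_eps_circle(eps):
    # direct quadrature of (1.4) for the unit circle at x = (1, 0);
    # arc-length parametrisation y(s) = (cos s, sin s), tau(s) = (-sin s, cos s)
    x = np.array([1.0, 0.0])
    def integrand(s):
        y = np.array([np.cos(s), np.sin(s)])
        tau = np.array([-np.sin(s), np.cos(s)])
        return G(y - x) @ tau
    delta = 2 * np.arcsin(eps / 2)    # |y(s) - x| = eps at s = delta
    return quad(integrand, delta, 2 * np.pi - delta, limit=200)[0]

# expansion (1.9) with kappa = -1, phi = 0 and Psi = 0
A = 1 + nu - 3 * nu                                                     # (1.11a) at cos^2 phi = 1
B = 2 * (np.log(2) - (1 - np.log(2)) * nu - (3 * np.log(2) - 2) * nu)   # (1.11b)
print(F_eps_circle(eps), -A * np.log(1 / eps) - B)                      # agree up to O(eps)
```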

1.2 Discretisations of $\textbf{F}_{\boldsymbol\varepsilon}$ and $\boldsymbol{\mathcal F}_{\boldsymbol\varepsilon}$

The expansions in Theorem 1.1 of $F_\varepsilon$ and $\mathcal F_\varepsilon$ open up new possibilities to compute them numerically from a discretised version of $\Gamma$. Such computations are necessary in discrete dislocation dynamics, which is a substantial component in the current research on plasticity. We refer to [LC20] for a review and cite several specific papers below.

The need for expansions in the numerical computation of $F_\varepsilon(x)$ and $\mathcal F_\varepsilon(x)$ as opposed to using their definitions in (1.4) and (1.5) can be understood as follows. The curve segments of $\Gamma$ close to x yield the main contribution to the integrals in (1.4) and (1.5). Hence, to avoid large errors, one requires a fine discretisation of $\Gamma$ around x, typically of size $O(\varepsilon)$. Since dislocation dynamics require $F_\varepsilon(x)$ and $\mathcal F_\varepsilon(x)$ to be evaluated at many points x, this requires a discretisation of size $O(\varepsilon)$ everywhere along $\Gamma$, which is computationally expensive. For a similar reason, phase-field models are impractical too.

Such fine discretisations of $\Gamma$ are not needed when expansions such as those in Theorem 1.1 are used. Indeed, the terms in the expansions which need to be computed are independent of $\varepsilon$. Therefore, we introduce a discretisation parameter $h > 0$ for the discretisation of $\Gamma$ and treat it independently of $\varepsilon$. The second of our two aims in this paper is to develop accurate numerical schemes for $F_\varepsilon(x)$ and $\mathcal F_\varepsilon(x)$. Our main contribution (Theorem 1.2) is to quantify the accuracy by estimating the error made when replacing $F_\varepsilon(x)$ and $\mathcal F_\varepsilon(x)$ by their corresponding schemes.

To the best of our knowledge, there are no discretisation errors beyond O(1) in $\varepsilon$ available in the literature. This could explain the rather crude numerical schemes that are used in the literature. For instance, the approaches in [ZRH98, Sch99, ACT+07] first discretise $\Gamma$ to a polygon and then compute the self-interaction force from the known formula for straight dislocation segments (see (1.24) below). A special treatment for connecting segments is made to avoid the singularity in G. It is difficult to track the discretisation error made in this manner. In fact, our approach below reveals that an approximation by straight lines may cause a large error if not dealt with carefully. Another set of examples consists of the schemes used in [GTS00, ZCA13] and in the paper series by Beneš, Kratochvíl, Pauš et al. (see [KBKP18] and references therein), which are based on the expansion in (1.7), but neglect the non-local contribution $\Psi(x)$. This creates a relative discretisation error of size $O(1/|\log \varepsilon|)$.

Our second main result, Theorem 1.2, guarantees a much smaller discretisation error. To define the corresponding numerical schemes, we first discretise $\Gamma$. Let $h > 0$ be a spatial discretisation parameter, which we assume to be small enough with respect to $\Gamma$. Let $x_i \in \Gamma$ be discretisation points for $i = 1,\dots,N$. Figure 2 illustrates this setting. We consider $\mathbf x := (x_1, \ldots, x_N) \in (\mathbb R^2)^N$ as the complete list of variables which describe the discretisation of $\Gamma$. Here and henceforth, we extend $x_i$ periodically over its index by setting $x_{i + Nj} := x_i$ for any $j \in \mathbb Z$. In addition, we choose the ordering of $x_1, \ldots, x_N$ such that, when traversing $\Gamma$ in counter-clockwise direction, the points $x_i$ appear with increasing index. We set

\begin{equation*} \gamma_i := \{ t x_i + (1-t) x_{i-1} : 0 \leq t \leq 1 \} \qquad \text{for } i = 1,\dots,N,\end{equation*}

as the closed line segments connecting $x_{i-1}$ to $x_i$ , and set

\begin{equation*} \Gamma^h := \bigcup_{i=1}^N \gamma_i,\end{equation*}

as the piecewise-affine closed curve which discretises $\Gamma$ . To relate the choice of $\mathbf x$ to h, we assume that $|\gamma_i| \sim h$ , i.e. there exists a universal constant $C > 1$ such that

(1.14) \begin{equation} \frac1C \max_{1 \leq i \leq N}|\gamma_i| \leq h \leq C \min_{1 \leq i \leq N}|\gamma_i|.\end{equation}

Figure 2. Sketch of a possible discretisation $\Gamma^h$ of $\Gamma$ .

Our discretisation of $F_\varepsilon(x_i)$ is

(1.15) \begin{equation} F_{\varepsilon,i}^h (\mathbf x) := \frac{ \kappa^h(x_i) A_{\phi^h(x_i)}}2 \log \frac{|x_{i+m^h} - x_i| |x_{i-m^h} - x_i|}{\varepsilon^2} + \sum_{j = m^h + 1}^{N - m^h} \int_{\gamma_{i + j}} G(y-x_i) \cdot \tau (y) \, dy,\end{equation}

where

(1.16) \begin{equation} m^h := \big\lceil h^{-1/3} \big\rceil \in \mathbb N,\end{equation}

$\kappa^h(x_i)$ is the approximation of $\kappa(x_i)$ given by:

(1.17) \begin{equation} \kappa^h(x_i) := 2 \left( \frac1{|y_+|} + \frac1{|y_-|} \right) \frac{ y_- \cdot Q y_+ }{ (|y_+| + |y_-|)^2 },\end{equation}

with

(1.18) \begin{equation} y_\pm := x_{i \pm 1} - x_i \quad \text{and} \quad Q := \begin{bmatrix} 0 & \quad 1 \\[4pt] -1 & \quad 0 \end{bmatrix},\end{equation}

and $A_{\phi^h(x_i)}$ is explicit as a function of $\cos \phi^h(x_i) = n^h(x_i) \cdot e_1$ (see (1.11a)), where

(1.19) \begin{equation} n^h(x_i) := \frac{\tilde n^h(x_i)}{|\tilde n^h(x_i)|} \quad \text{with} \quad \tilde n^h(x_i) := Q \left( |y_-| \frac{y_+}{|y_+|} - |y_+| \frac{y_-}{|y_-|} \right),\end{equation}

is an approximation of $n(x_i)$ . The expression (1.15) resembles the definition of $F_\varepsilon$ in (1.4) rather than its expansion in (1.9). Indeed, if we replace in (1.4) the curve $\Gamma_\varepsilon$ by the discretised curve $\Gamma^h$ and remove from it several line segments $\gamma_j$ close to the point $x_i$ , we obtain (1.15) with a local correction term. Yet, the construction of (1.15) relies on Theorem 1.1.
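
To illustrate (1.17)–(1.19), here is a short Python sketch (the helper names and the test curve are illustrative choices, not part of the paper); it checks that on a counter-clockwise circle of radius R the discrete curvature is close to $-1/R$ (consistent with the sign convention of Section 1) and that the discrete normal points outward:

```python
import numpy as np

Q = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the matrix Q from (1.18)

def kappa_h(xm, x0, xp):
    # discrete curvature (1.17) from consecutive points x_{i-1}, x_i, x_{i+1}
    ym, yp = xm - x0, xp - x0
    dm, dp = np.linalg.norm(ym), np.linalg.norm(yp)
    return 2 * (1 / dp + 1 / dm) * (ym @ Q @ yp) / (dp + dm) ** 2

def n_h(xm, x0, xp):
    # discrete outward normal (1.19)
    ym, yp = xm - x0, xp - x0
    dm, dp = np.linalg.norm(ym), np.linalg.norm(yp)
    v = Q @ (dm * yp / dp - dp * ym / dm)
    return v / np.linalg.norm(v)

R, h = 2.0, 0.05
s = np.array([-h, 0.0, h])
pts = R * np.stack([np.cos(s), np.sin(s)], axis=1)   # three points on the circle
print(kappa_h(pts[0], pts[1], pts[2]), -1 / R)       # approximately equal
print(n_h(pts[0], pts[1], pts[2]), pts[1] / R)       # approximately equal
```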

Finally, we introduce our discretisation of $\mathcal F_\varepsilon(x_i)$ . It is simply given by:

(1.20) \begin{equation} \mathcal F_{\varepsilon,i}^h (\mathbf x) := F_{\varepsilon,i}^h (\mathbf x) + \kappa^h(x_i) C_{\phi^h(x_i)},\end{equation}

where $C_{\phi^h(x_i)}$ is – similar to $A_{\phi^h(x_i)}$ – a function of the two components of the vector $n^h(x_i)$ .

Theorem 1.2 (Discretisation of $F_\varepsilon(x)$ and $\mathcal F_\varepsilon(x)$ ). Let $\Gamma$ be a non-intersecting closed curve of class $C^3$ with finite length. Let $\varepsilon, h \in (0,1)$ be small enough with respect to $\Gamma$ , and let $\mathbf x$ be as above such that (1.14) holds. Then, for all $i = 1,\ldots,N$

(1.21) \begin{align} \big| F_{\varepsilon,i}^h (\mathbf x) - F_\varepsilon(x_i) \big| + \big| \mathcal F_{\varepsilon,i}^h (\mathbf x) - \mathcal F_\varepsilon(x_i) \big| \leq C (\varepsilon + h |\log \varepsilon| + h^{2/3}),\end{align}

where $C > 0$ is independent of $\varepsilon, h, \mathbf x$ .

Idea of the proof. The proof is constructive and motivates the scheme in (1.15) by starting from the expansion of $F_\varepsilon(x_i)$ in (1.9). This explains the appearance of $\varepsilon$ in the error term. Since we construct $\kappa^h(x_i)$ to be O(h) close to $\kappa(x_i)$ , the logarithmic term in (1.9) leads to the error term $h |\log \varepsilon|$ in (1.21).

The interesting part is the discretisation of $\Psi(x_i)$ . The characterisation of $\Psi(x_i)$ in (1.8) is constructed carefully for the singularity in the integrand $G(y-x_i) \cdot \tau(y)$ to cancel out. By adding and subtracting integration over $\mathcal C(x_i)$ in this characterisation, we construct effectively an approximation of $\Gamma$ around $x_i$ up to third order. However, the approximation of $\Gamma$ by the polygon $\Gamma^h$ is only of first order, which is not enough to cancel out the singularity. Hence, the replacement of $\Gamma_\varepsilon(x_i) = \Gamma \setminus B(x_i,\varepsilon)$ in the definition of $\Psi(x_i)$ in (1.8) by $\Gamma^h \setminus B(x_i,\varepsilon)$ is expected to yield an error which is large as $\varepsilon \to 0$ .

To work around this problem, we rely on a byproduct from the proof of Theorem 1.1 (see Remark 2.1). This byproduct states that the integral in (1.8) is $O(\varepsilon)$ close to $\Psi(x_i)$ . This creates a trade-off with the previously mentioned error from replacing $\Gamma_\varepsilon (x_i)$ by $\Gamma^h \setminus B(x_i,\varepsilon)$ , which becomes larger as $\varepsilon$ tends to 0. Instead of applying these error estimates for the given value of $\varepsilon$ in Theorem 1.2, we apply them for a possibly different, h-dependent $\varepsilon^h$ for which both error terms are of the same order. It will turn out that

\begin{equation*} \varepsilon^h \sim h^{2/3}.\end{equation*}

This explains the appearance of $h^{2/3}$ in the error in Theorem 1.2. It also explains the use of the number $m^h$ in (1.16), which is chosen such that

\begin{equation*} |x_{i \pm m^h} - x_i| \sim \varepsilon^h.\end{equation*}

Remarks. We remark on five aspects of Theorem 1.2. First, note that Theorem 1.2 requires no relation between $\varepsilon$ and h, which means that (1.21) is valid in the two-dimensional parameter space $\varepsilon, h > 0$ in a (complete) neighbourhood around 0. For practical purposes, $\varepsilon > 0$ is a modelling parameter, and the choice of h is at the discretion of the user. Then, the following corollary of Theorem 1.2 is of more practical use: given the assumptions of Theorem 1.2, it holds that

(1.22) \begin{equation} \varepsilon^{3/2} \leq h \leq |\log \varepsilon|^{-3} \implies \big| F_{\varepsilon,i}^h (\mathbf x) - F_\varepsilon(x_i) \big| + \big| \mathcal F_{\varepsilon,i}^h (\mathbf x) - \mathcal F_\varepsilon(x_i) \big| \leq C h^{2/3}\end{equation}

for some constant $C > 0$ independent of $\varepsilon, h, \mathbf x$ .

Second, the statement ‘ $\varepsilon,h$ are small enough with respect to $\Gamma$ ’ can be made more precise. In fact, the proof requires $\varepsilon$ and h to be small enough with respect to the four constants:

(1.23) \begin{equation} |\Gamma|, \quad \max_{s \in \mathbb R} |\varphi''(s)|, \quad \max_{s \in \mathbb R} |\varphi'''(s)|, \quad \min_{s<t} \frac{t-s}{|\varphi(t) - \varphi(s)|},\end{equation}

where $|\Gamma|$ is the length of $\Gamma$ and $\varphi$ is an arc length parametrisation of $\Gamma$. The fourth constant above is large if, for instance, in the situation on the right in Figure 1, the second part of $\Gamma$ were inside $B(x,\varepsilon)$.

Third, note the practical property that $F_{\varepsilon,i}^h$ and $\mathcal F_{\varepsilon,i}^h$ can be directly computed from $\mathbf x$. Indeed, the first term in (1.15) and the second term in (1.20) are explicitly expressed in terms of the five points $x_{i-m^h}$, $x_{i-1}$, $x_i$, $x_{i+1}$, and $x_{i+m^h}$. To express the remaining second term in (1.15) explicitly in terms of $\mathbf x$, we recall (see, e.g. [dW60, HL82]) the following formula for the force that a line segment $\gamma_{x \to y}$ which connects $x \in \mathbb R^2$ to $y \in \mathbb R^2$ exerts on a curve segment at 0 in normal direction:

(1.24) \begin{equation} I(x,y) := \int_{\gamma_{x \to y}} G(z) \cdot \tau(z) \, dz = \frac1{ |x| |y| + x \cdot y } \Big( \frac x{|x|} + \frac y{|y|} \Big)^T \begin{bmatrix} 0 & \quad -1 \\[4pt] 1 - \nu & \quad 0 \end{bmatrix} (y-x).\end{equation}

Using this explicit formula, the second term in (1.15) becomes

\begin{equation*} \sum_{j = m^h + 1}^{N - m^h} \int_{\gamma_{i + j}} G(y-x_i) \cdot \tau (y) \, dy = \sum_{j = m^h + 1}^{N - m^h} I(x_{i+j-1} - x_i, x_{i+j} - x_i),\end{equation*}

which is indeed an explicit function of $x_i$ and $x_{m^h + i}, x_{m^h + i + 1}, \ldots, x_{N-m^h + i}$ .
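
For illustration, the complete schemes (1.15) and (1.20) can be assembled in a few lines. The following Python sketch is a hypothetical, unoptimised implementation (the value $\nu = 0.3$, the helper names and the choice of h as the mean segment length are assumptions made for the example, not prescriptions from the paper); it repeats the discrete curvature and normal from the sketch after (1.19):

```python
import numpy as np

nu = 0.3                                    # illustrative Poisson ratio
Q = np.array([[0.0, 1.0], [-1.0, 0.0]])     # Q from (1.18)
M = np.array([[0.0, -1.0], [1 - nu, 0.0]])  # matrix appearing in (1.24)

def kappa_h(xm, x0, xp):                    # discrete curvature (1.17)
    ym, yp = xm - x0, xp - x0
    dm, dp = np.linalg.norm(ym), np.linalg.norm(yp)
    return 2 * (1 / dp + 1 / dm) * (ym @ Q @ yp) / (dp + dm) ** 2

def n_h(xm, x0, xp):                        # discrete outward normal (1.19)
    ym, yp = xm - x0, xp - x0
    dm, dp = np.linalg.norm(ym), np.linalg.norm(yp)
    v = Q @ (dm * yp / dp - dp * ym / dm)
    return v / np.linalg.norm(v)

def I(x, y):                                # straight-segment force (1.24)
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    return (x / nx + y / ny) @ M @ (y - x) / (nx * ny + x @ y)

def forces_h(X, i, eps):
    """F^h_{eps,i}(x) of (1.15) and its regularised variant (1.20)."""
    N = len(X)
    h = np.mean([np.linalg.norm(X[j] - X[j - 1]) for j in range(N)])
    m = int(np.ceil(h ** (-1.0 / 3.0)))     # m^h from (1.16)
    kap = kappa_h(X[(i - 1) % N], X[i], X[(i + 1) % N])
    c2 = n_h(X[(i - 1) % N], X[i], X[(i + 1) % N])[0] ** 2   # cos^2 phi^h(x_i)
    A = 1 + nu - 3 * nu * c2                                 # (1.11a)
    C = 0.5 * (-3 - nu + 3 * (1 + nu) * c2)                  # (1.11c)
    local = 0.5 * kap * A * np.log(
        np.linalg.norm(X[(i + m) % N] - X[i])
        * np.linalg.norm(X[(i - m) % N] - X[i]) / eps ** 2)
    nonlocal_sum = sum(I(X[(i + j - 1) % N] - X[i], X[(i + j) % N] - X[i])
                       for j in range(m + 1, N - m + 1))
    F = local + nonlocal_sum
    return F, F + kap * C                   # F^h_{eps,i} and mathcal F^h_{eps,i}

# usage on a discretised unit circle (400 points, counter-clockwise)
theta = 2 * np.pi * np.arange(400) / 400
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(forces_h(X, 0, eps=1e-3))
```

On this circle, the printed values should agree with $F_\varepsilon(x_1)$ and $\mathcal F_\varepsilon(x_1)$ up to the error bound in (1.21).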

Fourth, the discretisations $F_{\varepsilon,i}^h(\mathbf x)$ and $\mathcal F_{\varepsilon,i}^h(\mathbf x)$ are numerically suboptimal, because they do not depend on $x_{i \pm j}$ for $j = 2,3,\ldots,m^h-1$ . One could decide to require higher regularity on $\Gamma$ and use these points $x_{i \pm j}$ to develop a higher-order approximation for n(x) and $\kappa(x)$ than those in (1.19) and (1.17). This alone, however, will not decrease the size of the discretisation error in (1.21).

Fifth, the description of the proof reveals that the relatively large error term $h^{2/3}$ appears because of the low regularity (Lipschitz, but not $C^1$) of the discretised curve $\Gamma^h$. By using a higher-order discretisation based on splines (see e.g. [GTS00]), it might be possible to get this part of the error down to O(h), simply by taking $m^h = 1$ and $\varepsilon^h \sim h$. We leave this direction as an open problem.

Contribution. The main contribution of Theorem 1.2 to the literature is that, for the first time, it introduces and estimates the discretisation error made when computing the self-interaction force from a numerical scheme. We consider this an important step towards building more accurate methods for computing discrete dislocation dynamics.

1.3 Conclusion

Rather than the statements of Theorems 1.1 and 1.2, we consider their proofs to be the main contribution of this paper. Indeed, these two theorems are merely stated for the simplest setting of a two-dimensional, isotropic medium, and they only entail two out of several models for the self-interaction force. Yet, we expect the methodology (i.e. the skeleton of the proof) to apply to different models for this force (especially those constructed by smearing out the dislocation by a convolution kernel) and to a three-dimensional setting.

Finally, we recall that expansions such as those in Theorem 1.1 account for the non-local linear-elastic contribution of the self-interaction force of a dislocation, but that they do not provide a proper accounting for the contribution from the dislocation core. Yet, expansions as those in Theorem 1.1 reveal the connection between different models for the non-local contribution of the self-interaction force, which gives more freedom in constructing an appropriate coupling with atomistic models.

The remainder of the paper is concerned with proving Theorems 1.1 and 1.2. Section 2 contains the proof of Theorem 1.1 and Section 3 contains the proof of Theorem 1.2.

2 Proof of Theorem 1.1

The notation convention in Sections 2 and 3 is as follows. Whenever convenient, we denote balls such as $B(x,\varepsilon)$ by $B_\varepsilon(x)$ . We reserve c, C for generic positive constants which do not depend on any of the important variables. We use C in upper bounds (and think of it as possibly large) and c in lower bounds (and think of it as possibly small). While c, C may vary from line to line, in the same display they refer to the same value. If different constants appear in the same display, we denote them by $C, C', C'', \ldots$ . Finally, in local computations, we sometimes reuse notation. For instance, in the computation of line integrals, the parametrisation of the corresponding curve is an auxiliary step which is of no further use outside of the computation of the line integral; we use the symbol $\varphi$ to denote various parametrisations.

We prove Theorem 1.1 by establishing (1.9) and (1.10) in, respectively, Sections 2.1 and 2.2.

2.1 Expansion of $\textbf{F}_{\boldsymbol\varepsilon}$

In this section, we construct the expansion of $F_\varepsilon(x)$ as given in (1.9), which is the first of the two parts of Theorem 1.1. We fix an $x \in \Gamma$ , translate the coordinate system such that x is at the origin, and remove x from the notation. Then, the definition of $F_\varepsilon(x)$ in (1.4) reads as:

\begin{equation*} F_\varepsilon = \int_{\Gamma_\varepsilon} G(y) \cdot \tau (y) \, dy = \int_{\Gamma_\varepsilon} G \cdot \tau.\end{equation*}

Regarding (1.1), we express the curvature, tangent vector (counter-clockwise direction) and outward pointing normal vector of $\Gamma$ at 0, respectively, by $\kappa_0$, $\tau_0$ and $n_0$ (to avoid clutter, we denote the related angle $\phi_0$ simply by $\phi$). The statement in Theorem 1.1 that the error term is uniform in x translates to the requirement that the error term $O(\varepsilon)$ has to be uniform in $\phi \in [0, 2\pi)$, $\kappa_0$ and other local information of $\Gamma$ at 0. Depending on the sign of $\kappa_0$, we distinguish three cases.

The case $\kappa_0 > 0$ . Let $r_0 = 1/\kappa_0$ be the radius of the tangent circle $\mathcal C = \partial B( r_0 n_0, r_0)$ of $\Gamma$ at 0. For technical reasons, we first assume that $\Gamma \cap \mathcal C = \{0\}$ , that is, $\Gamma$ and $\mathcal C$ intersect only at 0. We treat the general case afterwards.

The idea for expanding $F_\varepsilon$ is to replace $\Gamma_\varepsilon$ by

\begin{equation*} \mathcal C_\varepsilon := \mathcal C \setminus B_\varepsilon(0)\end{equation*}

and to show that the error made by this replacement is of the form $\Psi + O(\varepsilon)$. With this aim, let $\varepsilon$ be small enough such that $\partial B_\varepsilon(0)$ intersects each of $\Gamma$ and $\mathcal C$ at precisely two points. Note that this upper bound on $\varepsilon$ can be constructed from the constants in (1.23), such that it does not depend on the local information of $\Gamma$ at 0. Let $\gamma_\varepsilon \subset \partial B_\varepsilon(0)$ be the union of the two disjoint arcs which connect the endpoints of $\Gamma_\varepsilon$ and $\mathcal C_\varepsilon$. Figure 3 illustrates these curves. We extend $\tau$ (i.e., the direction of the tangent vector on $\Gamma$) to $\mathcal C_\varepsilon$ and $\gamma_\varepsilon$ such that there is a consistent direction in which the closed curve $\Gamma_\varepsilon \cup \mathcal C_\varepsilon \cup \gamma_\varepsilon$ is traversed. In particular, this means that $\tau$ is such that $\mathcal C_\varepsilon$ is traversed in counter-clockwise direction. Then, we decompose

(2.1) \begin{equation} F_\varepsilon = \oint_{\Gamma_\varepsilon \cup \mathcal C_\varepsilon \cup \gamma_\varepsilon} G \cdot \tau - \int_{\mathcal C_\varepsilon} G \cdot \tau - \int_{\gamma_\varepsilon} G \cdot \tau =: F_\varepsilon^1 - F_\varepsilon^2 - F_\varepsilon^3,\end{equation}

and expand $F_\varepsilon^1$ , $F_\varepsilon^2$ and $F_\varepsilon^3$ independently.

Figure 3. Sketch of the closed curve formed by $\Gamma_\varepsilon$ , $\mathcal C_\varepsilon$ and $\gamma_\varepsilon$ .

We start with $F_\varepsilon^3$ . Let

\begin{equation*} \varphi (\theta) = \varepsilon \begin{bmatrix} \cos \theta \\[4pt] \sin \theta \end{bmatrix} \qquad \text{with } \alpha < \theta < \alpha + \delta_\varepsilon,\end{equation*}

be a parametrisation of one of the two arcs of $\gamma_\varepsilon$ , where $\alpha = \phi - \tfrac\pi2 + O(\varepsilon)$ (recall $\phi$ from (1.1)). For $\varepsilon$ small enough, it follows from the $C^3$ -regularity of $\Gamma$ that the endpoints of $\mathcal C_\varepsilon$ and $\Gamma_\varepsilon$ are a distance of $O(\varepsilon^3)$ apart. Hence, $\delta_\varepsilon = O(\varepsilon^2)$ . Then, the contribution of this arc to the value of $F_\varepsilon^3$ is

\begin{equation*} \int_{\alpha}^{\alpha + \delta_\varepsilon} G(\varphi(\theta)) \varphi'(\theta) \, d\theta = -\frac1\varepsilon \int_{\alpha}^{\alpha + \delta_\varepsilon} \big( (1-\nu) \sin^2 \theta + \cos^2 \theta \big) \, d\theta = O(\delta_\varepsilon/\varepsilon) = O(\varepsilon).\end{equation*}

The contribution of the second arc of $\gamma_\varepsilon$ can be treated analogously. This proves that $F_\varepsilon^3 = O(\varepsilon)$ .

Next, we expand the term $F_\varepsilon^1$ in (2.1). With this aim, we first show that $(F_\varepsilon^1)_\varepsilon$ is a Cauchy sequence. Let $\varepsilon > 0$ be small enough and take $\delta \in (0, \varepsilon)$ . Let $\Omega$ be the region enclosed by $\Gamma$ . Then,

\begin{equation*} F_\varepsilon^1 - F_\delta^1 = \oint_{\partial \omega_{\delta, \varepsilon}} G \cdot \tau,\end{equation*}

where the open set

(2.2) \begin{equation} \omega_{\delta, \varepsilon} := B_\varepsilon(0) \setminus \overline{ B_\delta(0) \cup B_{r_0}( r_0 n_0) \cup \Omega }\end{equation}

is a subset of the narrow wedges between $\Gamma$ and $\mathcal C$ (see Figure 3). Since $\Gamma$ and $\mathcal C$ intersect only at 0, $\partial \omega_{\delta, \varepsilon}$ consists of two disjoint closed loops. Then, by Stokes’ Theorem,

\begin{equation*} \oint_{\partial \omega_{\delta, \varepsilon}} G \cdot \tau = \iint_{\omega_{\delta, \varepsilon}} g,\end{equation*}

where

(2.3) \begin{equation} g(y) := \textrm{curl} G(y) = \frac{ (1+\nu) y_1^2 + (1 - 2\nu) y_2^2 }{|y|^5}.\end{equation}
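
The identity (2.3) is elementary to verify; as an aside, here is a minimal symbolic check in sympy (not part of the original argument):

```python
import sympy as sp

y1, y2, nu = sp.symbols('y1 y2 nu', real=True)
r = sp.sqrt(y1 ** 2 + y2 ** 2)
G1 = (1 - nu) * y2 / r ** 3        # components of G from the display below (1.2)
G2 = -y1 / r ** 3
curl = sp.diff(G2, y1) - sp.diff(G1, y2)
claimed = ((1 + nu) * y1 ** 2 + (1 - 2 * nu) * y2 ** 2) / r ** 5
print(sp.simplify(curl - claimed))  # expected output: 0
```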

Hence,

(2.4) \begin{equation} \bigg| \oint_{\partial \omega_{\delta, \varepsilon}} G \cdot \tau \bigg| \leq C \iint_{\omega_{\delta, \varepsilon}} \frac1{ |y|^3 } \, dy\end{equation}

for some constant C which only depends on $\nu$ . To estimate this integral, we use polar coordinates to write

(2.5) \begin{equation} \omega_{\delta, \varepsilon} = \{ (r, \theta) : \delta < r < \varepsilon, \, \theta \in \Theta(r) \},\end{equation}

where $\Theta(r) \subset \mathbb R / (2\pi \mathbb Z)$ is the union of two intervals. Analogously to the argument for $|\gamma_\varepsilon| = O(\varepsilon^3)$ , we obtain that $|\Theta(r)| = O(r^2)$ . Then,

(2.6) \begin{equation} \iint_{\omega_{\delta, \varepsilon}} \frac1{ |y|^3 } \, dy = \int_\delta^\varepsilon \int_{\Theta(r)} \frac1{r^3} r \, d\theta dr = \int_\delta^\varepsilon \frac1{r^2} |\Theta(r)| \, dr = O(\varepsilon)\end{equation}

uniformly in $\delta$ . Hence, $(F_\varepsilon^1)_\varepsilon$ is a Cauchy sequence, and

\begin{equation*} F_\varepsilon^1 = \Psi + O(\varepsilon)\end{equation*}

for some $\Psi \in \mathbb R$ .

Finally, we expand the term $F_\varepsilon^2$ in (2.1). With this aim, we parametrise the arc $\mathcal C_\varepsilon$ in counter-clockwise direction by:

(2.7) \begin{align} \varphi (\theta) := r_0 \begin{bmatrix} \cos \phi + \cos (\theta + \phi + \pi) \\[4pt] \sin \phi + \sin (\theta + \phi + \pi) \end{bmatrix} \qquad \text{with } \alpha < \theta < 2\pi - \alpha,\end{align}

where $\alpha \in (0, \pi)$ is such that

\begin{equation*} \varepsilon^2 = |\varphi(\alpha)|^2.\end{equation*}

For simplicity, we set

\begin{align*} c_\phi &:= \cos \phi & c &:= \cos \left( \frac{\theta}{2} \right) \\[4pt] s_\phi &:= \sin \phi & s &:= \sin \left( \frac{\theta}{2} \right)\end{align*}

and

\begin{equation*} Q_\phi := \begin{bmatrix} c_\phi & -s_\phi \\[4pt] s_\phi & c_\phi \end{bmatrix}\end{equation*}

as the counter-clockwise rotation matrix by angle $\phi$ . Using trigonometric identities, we get

(2.8) \begin{align} \varphi (\theta) = 2 r_0 s Q_\phi \begin{bmatrix} s \\[4pt] -c \end{bmatrix}, \quad | \varphi (\theta) | = 2 r_0 |s| = 2 r_0 s \quad \text{and} \quad \varphi' (\theta) = r_0 Q_\phi \begin{bmatrix} 2sc \\[4pt] s^2 - c^2 \end{bmatrix}.\end{align}

In particular,

\begin{equation*} \varepsilon^2 = |\varphi(\alpha)|^2 = 4 r_0^2 \sin^2 (\alpha/2).\end{equation*}

Hence,

\begin{equation*} \alpha = \frac{\varepsilon}{r_0} + O\left(\varepsilon^3\right),\end{equation*}

where $O(\varepsilon^3)$ can be bounded uniformly in $\kappa_0$ since $1/r_0 = \kappa_0 \leq \max_\Gamma |\kappa|$ .

Noting that $s > 0$ , we obtain from the preparations above that

(2.9) \begin{align} \notag F_\varepsilon^2 &= \int_{\alpha}^{2\pi - \alpha} G(\varphi(\theta))\varphi'(\theta) \, d\theta \\[4pt]\notag &= \int_{\alpha}^{2\pi - \alpha} \frac{1}{4 r_0 s^2} \begin{bmatrix} s & -c \end{bmatrix} Q_\phi^T \begin{bmatrix} 0 & -1 \\[4pt] 1-\nu & 0 \end{bmatrix} Q_\phi \begin{bmatrix} 2sc \\[4pt] s^2 - c^2 \end{bmatrix} \, d\theta \\[4pt] &= \frac{\kappa_0}4 \sum_{k=0}^3 C_k \int_{\alpha}^{2\pi - \alpha} c^k s^{1-k} \, d\theta\end{align}

for some explicit constants $C_k$ which only depend on $\nu$ and $\phi$ . By the symmetries of the sine and cosine, the integrands corresponding to odd powers of the cosine (i.e. $k = 1, 3$ ) yield zero integral. Then, substituting the explicit values of $C_0$ and $C_2$ and using $c^2 = 1 - s^2$ , we obtain

(2.10) \begin{equation} F_\varepsilon^2 = \frac{\kappa_0}4 \left( 2 \nu \left(1 - 2 c_\phi^2\right) \int_{\alpha}^{2\pi - \alpha} \sin \left(\frac{\theta}{2}\right) \, d\theta + \left(3 \nu c_\phi^2 - \nu - 1\right) \int_{\alpha}^{2\pi - \alpha} \frac{1}{\sin \left( \frac{\theta}{2} \right)} \, d\theta \right).\end{equation}

Computing the integrals, we get

\begin{align*} \int_{\alpha}^{2\pi - \alpha} \sin \left( \frac{\theta}{2} \right) \, d\theta = 4 + O\left(\alpha^2\right) = 4 + O\left(\varepsilon^2\right)\end{align*}

and

\begin{align*} \int_{\alpha}^{2\pi - \alpha} \frac{1}{\sin \left( \frac{\theta}{2}\right)} \, d\theta & = 2 \log \left( \tan \frac{\theta}{4}\right) \bigg|_{\theta = \alpha}^{2\pi - \alpha} = 4 \log \frac1{\alpha} + 8 \log 2 + O\left(\alpha^2\right) \\[4pt] & = 4 \log \frac{r_0}\varepsilon + 8 \log 2 + O\left(\varepsilon^2\right).\end{align*}
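
A quick numerical check of the two expansions above at a small but finite $\alpha$ (a sketch, not part of the proof):

```python
import numpy as np
from scipy.integrate import quad

alpha = 1e-2
I_sin = quad(lambda t: np.sin(t / 2), alpha, 2 * np.pi - alpha)[0]
I_csc = quad(lambda t: 1.0 / np.sin(t / 2), alpha, 2 * np.pi - alpha)[0]
print(I_sin, 4.0)                                     # differ by O(alpha^2)
print(I_csc, 4 * np.log(1 / alpha) + 8 * np.log(2))   # differ by O(alpha^2)
```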

Collecting the computations above, we obtain

(2.11) \begin{equation} F_\varepsilon^2 = -\kappa_0 \left( A_\phi \log \frac1{\varepsilon \kappa_0} + B_\phi + O (\varepsilon^2) \right),\end{equation}

where $A_\phi$ and $B_\phi$ are defined in (1.11).

Finally, collecting all estimates on $F_\varepsilon^i$ , (1.9) follows for the case in which $\Gamma \cap \mathcal C = \{0\}$ .

Next, we generalise the proof of (1.9) to general curves $\Gamma$ without any restrictions on the number of intersections with $\mathcal C$. In this case, the derivations of the expansions of $F_\varepsilon^2$ and $F_\varepsilon^3$ above remain valid. The derivation of the expansion of $F_\varepsilon^1$ requires minor modifications. Indeed, since $\Omega$ and $B_{r_0} (r_0 n_0)$ may overlap, (2.2) may not be the correct region to consider.

To find an alternative definition for $\omega_{\delta, \varepsilon}$ , we take $\varepsilon$ small enough so that both $\Gamma \cap B_\varepsilon(0)$ and $\mathcal C \cap B_\varepsilon(0)$ can be parametrised by height functions $h_1$ and $h_2$ on the part $\{t \tau_0 : |t| < \varepsilon \}$ of the tangent line of $\Gamma$ at 0 (one can use Figure 3 for a visualisation). Note that

\begin{align*} \Gamma \cap B_\varepsilon(0) &\subset \left\{ t \tau_0 + h_1(t) n_0 : -\varepsilon < t < \varepsilon \right\}, \\[4pt] \mathcal C \cap B_\varepsilon(0) &\subset \left\{ t \tau_0 + h_2(t) n_0 : -\varepsilon < t < \varepsilon \right\}.\end{align*}

Then, we define $\omega_{\delta,\varepsilon}$ as the open set inside the annulus $B_\varepsilon(0) \setminus B_\delta(0)$ which is in between $\Gamma$ and $\mathcal C$ . Note that this definition is consistent with that in (2.2). The connected components of the closed set

\begin{equation*} \{ t \in [-\varepsilon, \varepsilon] : h_1(t) = h_2(t) \}\end{equation*}

separate $\omega_{\delta,\varepsilon}$ into connected components. If the number of components is finite, then we can apply Stokes’ Theorem as in (2.4) on each of the components and apply the estimate in (2.6) to conclude that $F_\varepsilon^1 = \Psi + O(\varepsilon)$ . If the number of components of $\omega_{\delta,\varepsilon}$ is infinite, then we order them in size, observe that this ordering shows that the number of components is countable, write

\begin{equation*} \omega_{\delta,\varepsilon} = \bigcup_{k=1}^\infty \omega_k\end{equation*}

with $|\omega_1| \geq |\omega_2| \geq \ldots$ , and use that $\{\omega_k\}_{k=1}^\infty$ are disjoint to estimate

\begin{equation*} \bigg| \oint_{\partial \omega_{\delta, \varepsilon}} G \cdot \tau \bigg| \leq \sum_{k=1}^\infty \bigg| \oint_{\partial \omega_k} G \cdot \tau \bigg| = \sum_{k=1}^\infty \bigg| \iint_{\omega_k} g \bigg| \leq \iint_{\omega_{\delta, \varepsilon}} | g |.\end{equation*}

The remainder of the argument for the expansion of $F_\varepsilon^1$ is analogous to the previous case in which $\Gamma \cap \mathcal C = \{0\}$ . This completes the proof of (1.9) for the case $\kappa_0 > 0$ .

The case $\kappa_0 = 0$ . This case can be treated with two minor modifications to the proof in the case $\kappa_0 > 0$ . We list these modifications below.

The first modification is that we take $\mathcal C$ to be the tangent line of $\Gamma$ at 0. Rather than a modification, this choice of $\mathcal C$ is the natural extension of the tangent circle as $\kappa_0 \downarrow 0$. While $\mathcal C$ is unbounded, it follows from the quadratic decay of G(z) as $|z| \to \infty$ that the splitting in (2.1) remains valid.

The second modification is the expansion of $F_\varepsilon^2$ . Since G is odd, we get immediately

\begin{equation*} F_\varepsilon^2 = \int_{\mathcal C_\varepsilon} G \cdot \tau = 0,\end{equation*}

which holds without any error terms depending on $\varepsilon$ .

The case $\kappa_0 < 0$ . Let $r_0 = 1/|\kappa_0|$ be the radius of the tangent circle $\mathcal C = \partial B_{r_0}( -r_0 n_0)$ . As in the case $\kappa_0 > 0$ , we extend $\tau$ from $\Gamma$ to $ \mathcal C_\varepsilon$ and $ \gamma_\varepsilon$ such that $\Gamma_\varepsilon \cup \mathcal C_\varepsilon \cup \gamma_\varepsilon$ has a consistent direction in which it is traversed. In particular, $\mathcal C_\varepsilon$ is traversed in clockwise direction, which is opposite to the case $\kappa_0 > 0$ . See Figure 4 for a sketch in the simple situation where $ \mathcal C \subset \Omega \cup \{0\}$ .

Figure 4. Sketch of a situation as in Figure 3 when $ \kappa_0 < 0$ .

The splitting of $F_\varepsilon$ in (2.1) in terms of $F_\varepsilon^1, F_\varepsilon^2, F_\varepsilon^3$ still holds, and similar to the case $\kappa_0 > 0$ we obtain $ F_\varepsilon^1 = \Psi + O(\varepsilon)$ and $ F_\varepsilon^3 = O(\varepsilon)$ . To compute $F_\varepsilon^2$ , we parametrise $\mathcal C_\varepsilon$ by $\tilde \varphi (\theta) := - \varphi (\theta)$ , where $\varphi$ is the parametrisation used in the case $\kappa_0 > 0$ ; see (2.7). Since this parametrisation traverses $\mathcal C_\varepsilon$ in the opposite direction (i.e. counter-clockwise), we obtain as in (2.9)

\begin{align*} F_\varepsilon^2 &= \int_{\mathcal C_\varepsilon} G \cdot \tau \\[4pt] &= -\int_{\alpha}^{2\pi - \alpha} G(\tilde \varphi(\theta)) \tilde \varphi'(\theta) \, d\theta \\[4pt] &= -\int_{\alpha}^{2\pi - \alpha} G(\varphi(\theta))\varphi'(\theta) \, d\theta \\[4pt] &= - \frac{|\kappa_0|}4 \sum_{k=0}^3 C_k \int_{\alpha}^{2\pi - \alpha} c^k s^{1-k} \, d\theta\end{align*}

with the same constants $C_k$ . Then, the result (2.11) of the computation below (2.9) yields

\begin{equation*} F_\varepsilon^2 = - \kappa_0 \left( A_{\phi} \log \frac1{\varepsilon |\kappa_0|} + B_{\phi} + O (\varepsilon^2) \right).\end{equation*}

This completes the proof of (1.9).

Remark 2.1 (A more precise characterisation of $\Psi(x)$ ). The proof above shows that

(2.12) \begin{equation} \Psi(x) = \oint_{\Gamma_\varepsilon(x) \cup \mathcal C_\varepsilon(x) \cup \gamma_\varepsilon(x)} G(y-x) \cdot \tau(y) \, dy + O(\varepsilon),\end{equation}

where the curves $\mathcal C_\varepsilon(x)$ and $\gamma_\varepsilon(x)$ , the direction in which they are traversed, and the existence of the limit as $\varepsilon \to 0$ are detailed in the proof. The proof also shows that the contribution of $\gamma_\varepsilon(x)$ to this line integral is $O(\varepsilon)$ (uniformly in x), and thus its contribution may be left out in (2.12).

2.2 Expansion of $\boldsymbol{\mathcal F}_{\boldsymbol\varepsilon}$

In this section, we complete the proof of Theorem 1.1 by proving (1.10). The preparation is analogous to that in Section 2.1; we fix an $x \in \Gamma$ , translate the coordinate system such that x is at the origin and remove x from the notation. Again, we set $\kappa_0$ , $\tau_0$ and $n_0$ as respectively the curvature, tangent vector (counter-clockwise direction) and outward pointing normal vector of $\Gamma$ at 0. In addition, in view of the definition of $G_\varepsilon$ in (1.6), we set

\begin{equation*} R_\varepsilon(y) := \sqrt{ |y|^2 + \varepsilon^2 }\end{equation*}

for $y \in \mathbb R^2$ and extend this definition to scalars $t \in \mathbb R$ by $R_\varepsilon(t) = \sqrt{ t^2 + \varepsilon^2 }$. Depending on the sign of $\kappa_0$, we distinguish three cases.

The case $\kappa_0 > 0$ . Similar to (2.1), we split

(2.13) \begin{equation}\mathcal F_\varepsilon= \oint_{\Gamma \cup \mathcal C} G_\varepsilon \cdot \tau - \oint_\mathcal C G_\varepsilon \cdot \tau=: \mathcal F_\varepsilon^1 - \mathcal F_\varepsilon^2,\end{equation}

where $\mathcal C$ is traversed in counter-clockwise direction. Since $G_\varepsilon$ is regular at 0, there is no need to avoid the origin, and thus the splitting above is simpler than that in (2.1). Again, we first assume that $\Gamma \cap \mathcal C = \{0\}$ and treat the general case afterwards.

We start by showing that $\mathcal F_\varepsilon^1 = \Psi + O(\varepsilon)$. We recall $\mathcal C_\varepsilon$ and $\gamma_\varepsilon$ from Section 2.1 (see Figure 3) and extend the definition in (2.2) to

\begin{equation*} \omega_{\varepsilon} := \omega_{0, \varepsilon} := B_\varepsilon(0) \setminus \overline{ B_{r_0}( r_0 n_0) \cup \Omega },\end{equation*}

that is, $\omega_\varepsilon$ is the union of the two narrow wedges inside $B_\varepsilon(0)$ between $\Gamma$ and $\mathcal C$ . Then,

(2.14) \begin{equation} \mathcal F_\varepsilon^1 = \oint_{\partial \omega_\varepsilon} G_\varepsilon \cdot \tau + \oint_{\Gamma_\varepsilon \cup \mathcal C_\varepsilon \cup \gamma_\varepsilon} (G_\varepsilon - G) \cdot \tau + \oint_{\Gamma_\varepsilon \cup \mathcal C_\varepsilon \cup \gamma_\varepsilon} G \cdot \tau.\end{equation}

By (2.12), the third term equals $\Psi + O(\varepsilon)$ . Hence, it remains to show that the first two terms are error terms of size $O(\varepsilon)$ .

We start with the first error term in (2.14). In preparation for applying Stokes’ Theorem, we compute

(2.15) \begin{align} \textrm{curl} G_\varepsilon(y) = \frac{ (1+\nu) y_1^2 + (1 - 2\nu) y_2^2 - (2-\nu) \varepsilon^2 }{R_\varepsilon^5(y)} - \frac32 (1 - \nu) \frac{ \varepsilon^2 ( y_1^2 - 4y_2^2 + \varepsilon^2) }{R_\varepsilon^7(y)}.\end{align}
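
This expression can be double-checked symbolically; here is a minimal sympy sketch (not part of the original argument):

```python
import sympy as sp

y1, y2, nu, e = sp.symbols('y1 y2 nu epsilon', real=True)
R = sp.sqrt(y1 ** 2 + y2 ** 2 + e ** 2)
# components of G_epsilon from (1.6)
G1 = (1 - nu) * y2 / R ** 3 + sp.Rational(3, 2) * e ** 2 * (1 - nu) * y2 / R ** 5
G2 = -y1 / R ** 3
curl = sp.diff(G2, y1) - sp.diff(G1, y2)
claimed = ((1 + nu) * y1 ** 2 + (1 - 2 * nu) * y2 ** 2 - (2 - nu) * e ** 2) / R ** 5 \
          - sp.Rational(3, 2) * (1 - nu) * e ** 2 * (y1 ** 2 - 4 * y2 ** 2 + e ** 2) / R ** 7
print(sp.simplify(curl - claimed))  # expected output: 0
```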

Similar to (2.3), it is easy to see that there exists a constant $C > 0$ which only depends on $\nu$ such that

\begin{equation*} \big| \textrm{curl} G_\varepsilon(y) \big| \leq C / |y|^3 \qquad \text{for all } \varepsilon > 0, \: y \in \mathbb R^2.\end{equation*}

Then, applying Stokes’ Theorem to the first error term in (2.14), and then estimating the result analogously to (2.6), we obtain that this error term is $O(\varepsilon)$ .

To bound the second error term in (2.14), we start with some preparations. We write

\begin{equation*} \left(G_\varepsilon - G\right)(y)^T = \left( \frac1{ R_\varepsilon(y)^3 } - \frac1{ |y|^3 } \right) y^T \begin{bmatrix} 0 & \quad -1 \\[4pt] 1-\nu & \quad 0 \end{bmatrix} + \frac{3 \varepsilon^2 y^T}{ 2 R_\varepsilon(y)^5 } \begin{bmatrix} 0 & \quad 0 \\[4pt] 1 - \nu & \quad 0 \end{bmatrix}\end{equation*}

and claim that $(G_\varepsilon - G)(y)$ is small when y remains a fixed distance $\rho > 0$ away from 0. Indeed, for such y,

\begin{equation*} \frac{1}{R_\varepsilon(y)} = \frac1{|y|} \frac1{\sqrt{ 1 + \varepsilon^2/|y|^2 }} = \frac1{|y|} \big( 1 + O(\varepsilon^2) \big),\end{equation*}

where the term $O(\varepsilon^2)$ is uniform in y on $B_\rho(0)^c$ but may depend on $\rho$ . Hence, there exists $C > 0$ such that

\begin{equation*} \big| (G_\varepsilon - G)(y) \big| \leq C \varepsilon^2 / |y|^2 \qquad \text{for all } |y| \geq \rho.\end{equation*}

To use this result, we expand

(2.16) \begin{align} \oint_{\Gamma_\varepsilon \cup \mathcal C_\varepsilon \cup \gamma_\varepsilon} (G_\varepsilon - G) \cdot \tau = \left( \int_{\Gamma_\rho \cup \gamma_\rho} + \int_{\mathcal C_\rho} + \oint_{\partial \omega_{\varepsilon, \rho}} \right) (G_\varepsilon - G) \cdot \tau,\end{align}

where $\omega_{\varepsilon, \rho}$ is defined in (2.2). We claim that the first two integrals are $O(\varepsilon^2)$ . Indeed, since $\Gamma_\rho \cup \gamma_\rho$ is of finite length, this follows immediately for the first integral. For the second integral, we need to be more precise, because $|\mathcal C_\rho| = O(r_0)$ is not bounded uniformly in $\kappa_0$ (it blows up as $\kappa_0 \downarrow 0$ ). We use the parametrisation of $\mathcal C_\rho$ defined in (2.7) (see (2.8) for its properties) to find

\begin{multline*} \bigg| \int_{\mathcal C_\rho} (G_\varepsilon - G) \cdot \tau \bigg| \leq \int_{\rho/r_0}^{2\pi - \rho/r_0} \big| (G_\varepsilon - G)(\varphi(\theta)) \big| | \varphi'(\theta) | \, d\theta \\[4pt] \leq C \varepsilon^2 \int_{\rho/r_0}^{2\pi - \rho/r_0} \frac{ | \varphi'(\theta) | }{| \varphi(\theta) |^2} \, d\theta \leq C \frac{ \varepsilon^2 }{4 r_0} \int_{\rho/r_0}^{\pi} \frac{d\theta}{ \sin^2(\theta/2) } \leq C' \frac{ \varepsilon^2 }{r_0} \int_{\rho/r_0}^{\pi} \frac{d\theta}{ \theta^2 } \leq C^{\prime\prime} \varepsilon^2,\end{multline*}

which is uniform in $\kappa_0$ .

Finally, we bound the third integral in (2.16). To apply a similar estimate as for the expansion of $F_\varepsilon$ , we choose $\rho > 0$ small enough with respect to $\max_{\Gamma} |\kappa|$ such that $\omega_{\varepsilon, \rho}$ can be described as in (2.5). Then, in preparation for applying Stokes’ Theorem, we note that the curl of the integrand is of the form (recalling (2.3) and (2.15))

\begin{equation*} \textrm{curl} \left(G_\varepsilon - G\right)(y) = \left( C_1 y_1^2 + C_2 y_2^2 \right) \left( \frac1{R_\varepsilon^5(y)} - \frac1{|y|^5} \right) + \frac{\varepsilon^2 \left(C_3 y_1^2 + C_4 y_2^2 + C_5 \varepsilon^2\right) }{R_\varepsilon^7(y)},\end{equation*}

where $C_i \in \mathbb R$ are constants which depend only on $\nu$ . Writing $r := |y|$ and using $R_\varepsilon(y) \geq r$ , we estimate

\begin{align*} r^3 \big| \textrm{curl} (G_\varepsilon - G)(y) \big| & \leq C \left( 1 - \frac{r^6}{R_\varepsilon^6(r)} \right) + C' \frac{ \varepsilon^2 }{R_\varepsilon^2(r)} \\[4pt] & = C \frac{ 3 (r/\varepsilon)^4 + 3 (r/\varepsilon)^2 + 1 }{(1 + (r/\varepsilon)^2)^3} + \frac{ C' }{1 + (r/\varepsilon)^2} =: \psi(r/\varepsilon).\end{align*}

Note that $\psi$ is integrable on $(0,\infty)$ . Then, applying Stokes’ Theorem on the third integral in (2.16) and recalling (2.6), we obtain

\begin{align*} \bigg| \oint_{\partial \omega_{\varepsilon, \rho}} (G_\varepsilon - G) \cdot \tau \bigg| & = \bigg| \iint_{\omega_{\varepsilon, \rho}} \textrm{curl} (G_\varepsilon - G) \bigg| \\[4pt] & \leq C \int_\varepsilon^\rho r^3 \big| \textrm{curl} (G_\varepsilon - G)(r) \big| \, dr \leq C \varepsilon \int_1^{\rho/\varepsilon} \psi(t) \, dt = O(\varepsilon).\end{align*}

This completes the expansion of the right-hand side of (2.14). Collecting all computations above, we obtain

\begin{equation*} \mathcal F_\varepsilon^1 = \Psi + O(\varepsilon).\end{equation*}
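
Although not needed for the argument, the integrability of $\psi$ and the resulting $O(\varepsilon)$ decay of the Stokes term can be checked numerically. The sketch below is illustrative only: it uses the arbitrary choices $C = C' = 1$ and $\rho = 1$, whereas the actual constants depend on $\nu$ and $\Gamma$.

```python
# Numerical sanity check (illustrative constants C = C' = 1, rho = 1):
# psi is integrable on (0, inf), so eps * int_1^{rho/eps} psi(t) dt = O(eps).
import numpy as np
from scipy.integrate import quad

def psi(t, C=1.0, Cp=1.0):
    return C * (3*t**4 + 3*t**2 + 1) / (1 + t**2)**3 + Cp / (1 + t**2)

total, _ = quad(psi, 0, np.inf)
print("integral of psi over (0, inf):", total)        # finite
for eps in [1e-1, 1e-2, 1e-3]:
    val, _ = quad(psi, 1, 1.0 / eps)                   # rho = 1 for illustration
    print(eps, eps * val)                              # decays like O(eps)
```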

Next, we expand the second term $\mathcal F_\varepsilon^2$ of $\mathcal F_\varepsilon$ in (2.13). Inserting (1.6), $\mathcal F_\varepsilon^2$ reads as

\begin{equation*} \mathcal F_\varepsilon^2 = \oint_\mathcal C \frac{y}{ R_\varepsilon^3(|y|) } \cdot \begin{bmatrix} 0 & \quad -1 \\[4pt] 1-\nu & \quad 0 \end{bmatrix} \tau (y) + \frac{3 \varepsilon^2 y}{ 2 R_\varepsilon^5(|y|) } \begin{bmatrix} 0 & \quad 0 \\[4pt] 1 - \nu & \quad 0 \end{bmatrix} \tau (y) \, dy.\end{equation*}

Using the parametrisation of $\mathcal C$ defined in (2.7) and applying the same steps as in the derivation leading to (2.10), we obtain

\begin{align*} \mathcal F_\varepsilon^2 =\, & \kappa_0 \bigg( 2 \nu \left(1 - 2 c_\phi^2\right) I_3^4 + \left(3 \nu c_\phi^2 - \nu - 1\right) I_3^2 + 3 \left(1 - \nu\right) \left(2 c_\phi^2 - 1\right) I_5^4\\[4pt] & + \frac32 \left(1 - \nu\right) \left(1 - 3 c_\phi^2\right) I_5^2 \bigg),\end{align*}

where

\begin{align*} I_j^i := \int_{0}^{2\pi} \frac{ 2 r_0^3 \varepsilon^{j-3} \sin^i \tfrac\theta2 }{ \sqrt{ 4 r_0^2 \sin^2 \tfrac\theta2 + \varepsilon^2 }^j } \, d\theta \qquad \text{for } i = 2,4 \text{ and } j = 3,5.\end{align*}

To expand the integrals $I_j^i$ , we change variables $\theta/2 \to \theta$ , use the symmetry of $\sin$ to cut the integration domain in half and introduce

\begin{equation*} \ell := \frac\varepsilon{2 r_0}\end{equation*}

to simplify it to

\begin{align*} I_j^i = \ell^{j-3} \int_{0}^{\frac\pi2} \frac{ \sin^i \theta }{ \sqrt{ \sin^2 \theta + \ell^2 }^j } \, d\theta.\end{align*}

By expanding the numerator of each integrand as a polynomial in $\sin^2 \theta + \ell^2$, we obtain

\begin{align*} I_3^4 &= \int_{0}^{\frac\pi2} \sqrt{ \sin^2 \theta + \ell^2 } \, d\theta - 2 \ell^2 \int_{0}^{\frac\pi2} \frac1{ \sqrt{ \sin^2 \theta + \ell^2 } } \, d\theta + \ell^4 \int_{0}^{\frac\pi2} \frac1{ \sqrt{ \sin^2 \theta + \ell^2 }^3 } \, d\theta \\[4pt] I_5^4 &= \ell^2 \int_{0}^{\frac\pi2} \frac1{ \sqrt{ \sin^2 \theta + \ell^2 } } \, d\theta - 2 \ell^4 \int_{0}^{\frac\pi2} \frac1{ \sqrt{ \sin^2 \theta + \ell^2 }^3 } \, d\theta + \ell^6 \int_{0}^{\frac\pi2} \frac1{ \sqrt{ \sin^2 \theta + \ell^2 }^5 } \, d\theta \\[4pt] I_3^2 &= \int_{0}^{\frac\pi2} \frac1{ \sqrt{ \sin^2 \theta + \ell^2 } } \, d\theta - \ell^2 \int_{0}^{\frac\pi2} \frac1{ \sqrt{ \sin^2 \theta + \ell^2 }^3 } \, d\theta \\[4pt] I_5^2 &= \ell^2 \int_{0}^{\frac\pi2} \frac1{ \sqrt{ \sin^2 \theta + \ell^2 }^3 } \, d\theta - \ell^4 \int_{0}^{\frac\pi2} \frac1{ \sqrt{ \sin^2 \theta + \ell^2 }^5 } \, d\theta.\end{align*}

The integrals appearing are complete elliptic integrals of a certain order. To see this, set

\begin{equation*} k^2 := \frac1{1 + \ell^2} = \frac{4 r_0^2}{ 4 r_0^2 + \varepsilon^2 } \xrightarrow{\varepsilon \to 0} 1\end{equation*}

and note that

\begin{equation*} \sin^2 \theta + \ell^2 = 1 - \cos^2 \theta + \ell^2 = \frac1{k^2} \left(1 - k^2 \cos^2 \theta\right).\end{equation*}

Hence,

\begin{equation*} \int_{0}^{\frac\pi2} \frac1{\sqrt{ \sin^2 \theta + \ell^2 }^m} \, d\theta = k^{m} \int_{0}^{\frac\pi2} \frac1{\sqrt{ 1 - k^2 \cos^2 \theta }^m} \, d\theta\end{equation*}

for $m \in \mathbb Z$. When m is a positive odd integer, these are complete elliptic integrals, and m is called the order of the integral.

Next, we expand the appearing complete elliptic integrals around $k=1$ . For $m=1$ , we obtain from [Reference Gradshteyn and RyzhikGR07, (8.113.3)] that

\begin{equation*} \int_0^{\frac\pi2} \frac{d \theta}{ \sqrt{ 1 - k^2 \cos^2 \theta } } = \log \frac4{\sqrt{1 - k^2}} + O \left( (1 - k^2) \log \frac1{1 - k^2} \right) = \log \frac1\ell + \log 2 + O\left(\ell^2 |\log \ell|\right),\end{equation*}

and for $m=-1$ , [Reference Gradshteyn and RyzhikGR07, (8.114.3)] states

\begin{equation*} \int_0^{\frac\pi2} \sqrt{ 1 - k^2 \cos^2 \theta } \, d \theta = 1 + O \left( (1 - k^2) \log \frac1{1 - k^2} \right) = 1 + O \left(\ell^2 |\log \ell|\right).\end{equation*}

For $m=3$ and $m=5$ , we did not find an explicit expansion in the literature. To obtain such expansions, one can either rewrite the integrals in terms of more standard integrals (see e.g. [Reference Abramowitz and StegunAS64, Chap.17]), or replace the cosine by its Taylor polynomial, show that the error made is O(1), and use [Reference Gradshteyn and RyzhikGR07, (2.271.4--6)] to expand the obtained integrals. This yields

\begin{align*} \int_{0}^{\frac\pi2} \frac1{\sqrt{ 1 - k^2 \cos^2 \theta }^3} \, d\theta &= \frac 1{1 - k^2} + O \left( \log\frac1{1-k^2} \right) = \frac1{\ell^2} + O (|\log \ell|) \\[4pt] \int_{0}^{\frac\pi2} \frac1{\sqrt{ 1 - k^2 \cos^2 \theta }^5} \, d\theta &= \frac23 \frac{1}{(1 - k^2)^2} + O \left( \frac{1}{1 - k^2} \right) = \frac23 \frac1{\ell^4} + O (\ell^{-2}).\end{align*}
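
Since these last two expansions are not taken from the literature, it is worth checking them numerically. The following sketch (illustrative only) evaluates the integrals by adaptive quadrature and compares them with their predicted leading-order terms as $k \to 1$.

```python
# Numerical check of the leading-order behaviour of the m = 3 and m = 5
# complete elliptic integrals as k -> 1 (illustrative, not part of the proof).
import numpy as np
from scipy.integrate import quad

def elliptic_m(k2, m):
    # int_0^{pi/2} (1 - k^2 cos^2 t)^{-m/2} dt, by adaptive quadrature
    f = lambda t: (1.0 - k2 * np.cos(t)**2) ** (-m / 2.0)
    val, _ = quad(f, 0.0, np.pi / 2.0, limit=200)
    return val

for ell in [0.2, 0.1, 0.05, 0.02]:
    k2 = 1.0 / (1.0 + ell**2)              # so that 1 - k^2 = ell^2 / (1 + ell^2)
    lead3 = 1.0 / (1.0 - k2)               # predicted leading term for m = 3
    lead5 = (2.0 / 3.0) / (1.0 - k2)**2    # predicted leading term for m = 5
    print(ell,
          elliptic_m(k2, 3) / lead3,       # -> 1 as ell -> 0
          elliptic_m(k2, 5) / lead5)       # -> 1 as ell -> 0
```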

Finally, substituting these expansions back into the formulas for $I_j^i$ above, we observe that

\begin{align*} I_3^4 &= 1 + O(\ell^2 |\log \ell|) \\[4pt] I_5^4 &= O(\ell^2 |\log \ell|) \\[4pt] I_3^2 &= \log \frac1\ell + \log 2 - 1 + O(\ell^2 |\log \ell|) \\[4pt] I_5^2 &= \frac13 + O(\ell^2 |\log \ell|)\end{align*}

and obtain

\begin{equation*} \mathcal F_\varepsilon^2 = - \kappa_0 \Big( A_\phi \log \frac1 {\varepsilon \kappa_0} + B_{\phi} + C_{\phi} \Big) + O (\varepsilon),\end{equation*}

where $A_{\phi}$ , $B_{\phi}$ and $C_{\phi}$ are the constants defined in (1.11). This completes the proof of Theorem 1.1 for the case $\kappa_0 > 0$ and $\Gamma \cap \mathcal C = \{0\}$ .
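
Before turning to general curves, we note that the expansions of $I_j^i$ above can also be checked numerically; the sketch below (illustrative only) evaluates the integrals by quadrature and prints the deviations from the leading-order values.

```python
# Illustrative check of the expansions of I_j^i as ell -> 0.
import numpy as np
from scipy.integrate import quad

def I(i, j, ell):
    # I_j^i = ell^(j-3) * int_0^{pi/2} sin(t)^i / (sin(t)^2 + ell^2)^(j/2) dt
    f = lambda t: np.sin(t)**i / (np.sin(t)**2 + ell**2) ** (j / 2.0)
    val, _ = quad(f, 0.0, np.pi / 2.0, limit=200)
    return ell**(j - 3) * val

for ell in [0.1, 0.03, 0.01]:
    print(ell,
          I(4, 3, ell) - 1.0,                                 # O(ell^2 |log ell|)
          I(4, 5, ell),                                       # O(ell^2 |log ell|)
          I(2, 3, ell) - (np.log(1.0/ell) + np.log(2.0) - 1.0),
          I(2, 5, ell) - 1.0/3.0)
```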

Next, we demonstrate how this result can be extended to general curves $\Gamma$ . The only required modifications are in the definitions of $\omega_\varepsilon$ and $\omega_{\varepsilon,\rho}$ . By taking first $\rho$ and then $\varepsilon$ smaller if necessary, we can describe these sets analogously to the description at the end of the case $\kappa_0 > 0$ of the proof for the expansion of $F_\varepsilon$ . As in that proof, we can then apply Stokes’ Theorem on $\omega_\varepsilon$ and $\omega_{\varepsilon,\rho}$ to justify the steps above. This completes the proof of Theorem 1.1 for the case $\kappa_0 > 0$ .

The case $\kappa_0 = 0$ . As for the expansion of $F_\varepsilon$ , we can treat the case $\kappa_0 = 0$ similarly to the case $\kappa_0 > 0$ with minor modifications. In fact, two out of three modifications are the same:

  1. the replacement of $\mathcal C$ by the tangent line to $\Gamma$ at 0, and

  2. the observation that $\mathcal F_\varepsilon^2 = 0$ (because $G_\varepsilon$ is odd).

The third modification is the treatment of the second integral (the one over $\mathcal C_\rho$ ) in (2.16). By the symmetry of $\mathcal C_\rho$ and the oddness of both $G_\varepsilon$ and G, we directly obtain

\begin{equation*} \int_{\mathcal C_\rho} (G_\varepsilon - G) \cdot \tau = 0.\end{equation*}

The case $\kappa_0 < 0$ . This case can be treated along the same lines as for the expansion of $F_\varepsilon$ ; we omit the details. This completes the proof of Theorem 1.1.

3 Proof of Theorem 1.2

We fix some $i \in \{1,\dots,N\}$ and translate the points $x_j$ of $\Gamma^h$ by $-x_i$ . Then, we relabel the points by setting

\begin{equation*} y_j := x_{i+j} - x_i \qquad \text{for all } j \in \mathbb Z.\end{equation*}

Note that $y_0 = 0$ . As in the proof of Theorem 1.1, we also translate $\Gamma$ by $-x_i$ and set $\tau_0$ , $n_0$ and $\kappa_0$ as respectively the tangent vector, the normal vector and curvature of the translated copy of $\Gamma$ at 0. Furthermore, we remove x from the notation and note that (1.9) reads as:

(3.1) \begin{equation} F_\varepsilon = \kappa_0 \left( A_{\phi} \log \frac1 {\varepsilon |\kappa_0|} + B_{\phi} \right) + \Psi + O (\varepsilon).\end{equation}

We give a constructive proof of (1.21), starting with the first of the two terms in its left-hand side. This means that we approximate $F_\varepsilon$ as a function of $\mathbf y$ without using any further information on $\Gamma$ . We refer to this process as a discretisation of $F_\varepsilon$ . As a result, the discretisation in (1.15) will appear as the leading-order term in this approximation.

We start by discretising the local part (i.e. the first term in (3.1)) of $F_\varepsilon$ . With this aim, it is enough to discretise $\kappa_0$ . For later use, we also discretise $\tau_0$ , $n_0$ and the constants $A_{\phi}$ , $B_{\phi}$ and $C_{\phi}$ . We use a simple and standard approximation. Since there are many different discretisations available in the literature on parametric curves, we derive our approximation in detail.

For this discretisation, we only use the two points $y_1$ and $y_{-1}$ and, consistently with (1.18), denote them by $y_+$ and $y_-$ respectively. We start with some preliminaries. Let $\varphi$ be the arc length parametrisation of $\Gamma$ around 0 with $\varphi(0) = 0$ and $\varphi'(0) = \tau_0$ . Let $t_- < 0 < t_+$ be such that $\varphi(t_\pm) = y_\pm$ and note that

(3.2) \begin{equation} |y_\pm| \leq |t_\pm|.\end{equation}

We take h small enough such that the part of $\Gamma$ from 0 to $y_+$ can be described as the graph of a height function H with respect to the line segment $\gamma_1$ , that is, as the graph of

\begin{equation*} \eta \mapsto \eta \frac{y_+}{|y_+|} + H(\eta) Q \frac{ y_+}{|y_+|}, \qquad \text{where } Q := \begin{bmatrix} 0 & \quad 1 \\[4pt] -1 & \quad 0 \end{bmatrix}\end{equation*}

is the rotation matrix by 90 degrees in clockwise direction. Since $H(0) = H(|y_+|) = 0$ , it follows from the Mean Value Theorem that $H'(\eta_*) = 0$ for some $\eta_* \in (0, |y_+|)$ . Since $H \in C^3([0, |y_+|])$ , we then have that $|H'(\eta)| = |H'(\eta) - H'(\eta_*)| \leq C |y_+| \leq C' h$ for all $\eta \in [0, |y_+|]$ . Hence,

(3.3) \begin{equation} t_+ = \int_0^{|y_+|} \sqrt{1 + H'(\eta)^2 } d\eta \leq |y_+| \left(1 + C h^2\right).\end{equation}

Moreover, since

\begin{equation*} H(\eta) = H(0) + \eta H'(0) + O(\eta^2) = \eta \big( H'(\eta_*) + O(\eta_*) \big) + O(\eta^2) = O(h^2),\end{equation*}

we obtain for the region $\omega_1^h$ enclosed by $\Gamma$ and $\gamma_1$ that

(3.4) \begin{equation} |\omega_1^h| = \int_0^{|y_+|} |H(\eta)| \, d\eta \leq C h^3.\end{equation}

Turning back to (3.3), one can derive a similar estimate for $t_-$ . Together with the lower bound in (3.2), this yields

\begin{equation*} |y_\pm| \leq |t_\pm| \leq |y_\pm| (1 + C h^2).\end{equation*}

We use this to expand $\varphi$ around 0. Since $\varphi$ is an arc length parametrisation, we obtain (see e.g. [Reference do CarmodC16]) $\varphi'(0) = \tau_0$ and $\varphi''(0) = \kappa_0 n_0$ . Then,

(3.5) \begin{align} \notag y_\pm = \varphi(t_\pm) &= \varphi(0) + t_\pm \varphi'(0) + \frac12 t_\pm^2 \varphi''(0) + O\left(t_\pm^3\right) \notag\\[4pt] &= t_\pm \tau_0 + \frac{1}2 t_\pm^2 \kappa_0 n_0 + O\left(h^3\right) \notag\\[4pt] &= \pm |y_\pm| \tau_0 + \frac{1}2 |y_\pm|^2 \kappa_0 n_0 + O\left(h^3\right).\end{align}

This completes the preliminaries.
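
As a concrete illustration of the chord/arc-length comparison above, consider the (purely illustrative) case where $\Gamma$ is a unit circle, for which the arc length $t$ and the chord length $|y|$ are related by $|y| = 2 \sin(t/2)$:

```python
# Chord vs. arc length on a unit circle (illustration only): the ratio
# t / |y| behaves like 1 + t^2/24, consistent with |y| <= |t| <= |y|(1 + C h^2).
import numpy as np

for t in [0.5, 0.1, 0.02]:           # arc length, playing the role of |t_+| = O(h)
    chord = 2.0 * np.sin(t / 2.0)    # corresponding chord length |y_+|
    print(t, t / chord - 1.0)        # approximately t**2 / 24
```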

We use (3.5) to construct approximations for $\tau_0$ , $n_0$ and $\kappa_0$ . For any linear combination, we obtain

(3.6) \begin{equation} a y_+ + b y_- = \left(a |y_+| - b |y_-|\right)\! \tau_0 + \frac{1}2 \left(a |y_+|^2 + b |y_-|^2\right) \kappa_0 n_0 + O \left((|a| + |b|)h^3 \right).\end{equation}

Solving for a and b such that the prefactors of $\tau_0$ and $\kappa_0 n_0$ are respectively 1 and 0, we obtain

\begin{align*} a &= \frac{|y_-| / |y_+|}{|y_+| + |y_-|} = O\left(h^{-1}\right), \\[4pt] b &= -\frac{|y_+| / |y_-|}{|y_+| + |y_-|} = O\left(h^{-1}\right), \\[4pt] \tilde \tau_0^h &:= a y_+ + b y_- = \tau_0 + O\left(h^2\right).\end{align*}

In particular, $|\tilde \tau_0^h| = 1 + O(h^2)$ . We use this to normalise our approximation of $\tau_0$ :

\begin{equation*} \tau_0^h := \frac{\tilde \tau_0^h}{|\tilde \tau_0^h|} = \tau_0 + O(h^2).\end{equation*}

Then, we simply rotate to obtain

\begin{equation*} n_0^h := Q \tau_0^h = n_0 + O(h^2).\end{equation*}

Note that this definition of $n_0^h$ is consistent with that in (1.19). Analogously to $\phi$ , we take $\phi^h \in [0, 2\pi)$ such that

\begin{equation*} n_0^h = \begin{bmatrix} \cos \phi^h \\[4pt] \sin \phi^h \end{bmatrix}.\end{equation*}

Note that with the two components of $n_0^h$ , we can approximate the constants $A_{\phi}$ , $B_{\phi}$ and $C_{\phi}$ in (1.11) by respectively $A_{\phi^h}$ , $B_{\phi^h}$ and $C_{\phi^h}$ with an error of size $O(h^2)$ .

Similarly, solving for a,b such that the prefactors of $\tau_0$ and $\kappa_0 n_0$ in (3.6) are respectively 0 and 1, we obtain

\begin{equation*} a = \frac{2 / |y_+|}{|y_+| + |y_-|} = O\left(h^{-2}\right), \qquad b = \frac{2 / |y_-|}{|y_+| + |y_-|} = O\left(h^{-2}\right), \qquad a y_+ + b y_- = \kappa_0 n_0 + O(h).\end{equation*}

Taking the scalar product of both sides of the third equation with $\tilde n_0^h := Q \tilde \tau_0^h$ , we obtain

(3.7) \begin{equation} \kappa_0^h := (a y_+ + b y_-) \cdot \tilde n_0^h = \kappa_0 + O(h).\end{equation}

Rewriting this in terms of $y_\pm$ , we get

\begin{equation*} \kappa_0^h = 2 \left( \frac1{|y_+|} + \frac1{|y_-|} \right) \frac{ y_- \cdot Q y_+ }{ (|y_+| + |y_-|)^2 },\end{equation*}

which motivates the expression in (1.17).
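
The construction above is straightforward to implement. The following sketch computes $\tau_0^h$, $n_0^h$ and $\kappa_0^h$ from the two neighbours $y_\pm$ and illustrates the error behaviour on a hypothetical test curve; the curve, the angle $\phi = 0.3$ and the curvature $\kappa_0 = 0.7$ are chosen only for illustration.

```python
# Sketch of the discrete tangent, normal and curvature built from y_+ and y_-.
import numpy as np

Q = np.array([[0.0, 1.0], [-1.0, 0.0]])    # clockwise rotation by 90 degrees

def discrete_frame(y_plus, y_minus):
    rp, rm = np.linalg.norm(y_plus), np.linalg.norm(y_minus)
    tau_tilde = ((rm / rp) * y_plus - (rp / rm) * y_minus) / (rp + rm)
    tau_h = tau_tilde / np.linalg.norm(tau_tilde)     # approximates tau_0
    n_h = Q @ tau_h                                   # approximates n_0
    kappa_h = 2.0 * (1.0/rp + 1.0/rm) * (y_minus @ (Q @ y_plus)) / (rp + rm)**2
    return tau_h, n_h, kappa_h

# Hypothetical test curve matching the local expansion (3.5), with kappa_0 = 0.7.
phi, kappa0 = 0.3, 0.7
tau0 = np.array([-np.sin(phi), np.cos(phi)])
n0 = np.array([np.cos(phi), np.sin(phi)])
curve = lambda t: t*tau0 + 0.5*kappa0*t**2*n0 + 0.1*t**3*(tau0 + n0)

for h in [0.1, 0.05, 0.025]:
    tau_h, n_h, kappa_h = discrete_frame(curve(h), curve(-0.8*h))
    print(h, np.linalg.norm(tau_h - tau0), kappa_h - kappa0)   # O(h^2) and O(h)
```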

Finally, we substitute the expansions above in the local term of (3.1). While this needs no further motivation for most of the terms, we wish to treat the expansion of $\psi(\kappa_0) := \kappa_0 \log |\kappa_0|$ carefully. From the derivation of (3.7), we obtain that

\begin{equation*} |\underbrace{ \kappa_0 - \kappa_0^h }_{R^h}| \leq M h\end{equation*}

for a constant $M > 0$ which is independent of h and $\kappa_0$ . If $|\kappa_0| \leq 2 M h$ , then we observe that $\psi(\kappa_0) = O(h |\log h|)$ and $\psi(\kappa_0^h) = O(h |\log h|)$ . If $|\kappa_0| \geq 2 M h$ , then $M h \leq |\kappa_0^h| \leq C$ , and thus we may apply Taylor’s Theorem to $\psi$ at $\kappa_0^h$ . This yields a $\kappa^* \in \overline{B(\kappa_0, Mh)}$ such that

\begin{equation*} \psi(\kappa_0) = \psi(\kappa_0^h) + R^h \psi'(\kappa^*) = \psi(\kappa_0^h) + R^h (\log |\kappa^*| + 1) = \psi(\kappa_0^h) + O(h |\log h|).\end{equation*}

Using this, we obtain from (3.1) that

(3.8) \begin{equation} F_\varepsilon = \kappa_0^h \left( A_{\phi^h} \log \frac1 {\varepsilon |\kappa_0^h|} + B_{\phi^h} \right) + \Psi + O \left( \varepsilon + h |\log \varepsilon| + h |\log h| \right).\end{equation}

It remains to approximate the non-local term $\Psi$ . We do this in detail for the case $\kappa_0 > 0$ ; the case $\kappa_0 \leq 0$ can then be treated along the same lines as in the proof of Theorem 1.1.

We start with some preliminaries. Let $\varphi$ be again the arc length parametrisation of $\Gamma$ , and let $t_j$ be defined by

\begin{equation*} \varphi(t_j) = y_j \qquad \text{for } - \lfloor N/2 \rfloor \leq j \leq \lfloor N/2 \rfloor - 1. \end{equation*}

Let $\Gamma_j$ be the part of $\Gamma$ in between $y_{j-1}$ and $y_j$ and set $\omega_j^h$ as the region enclosed by $\Gamma_j$ and $\gamma_j$ . Analogously to (3.4), one can derive (for h small enough) that

(3.9) \begin{equation} |\omega_j^h| \leq C h^3 \qquad \text{for all } j \in \mathbb Z.\end{equation}

Interpreting $d(s) := |\varphi(s)|$ as the distance from the origin, we note that

\begin{equation*} d'(s) = \frac{\varphi(s)}{|\varphi(s)|} \cdot \varphi'(s) \qquad \text{for all } s \in \mathbb R \setminus |\Gamma| \mathbb Z.\end{equation*}

Since $\varphi(s) / |\varphi(s)| \to \tau_0$ as $s \downarrow 0$ , $d'$ is uniformly continuous on $(0, \rho]$ for any $\rho < |\Gamma|$ . Observing that $d'(0\pm) = \pm1$ , we may take $\rho$ independent of h such that

(3.10) \begin{equation} \inf_{(0, \rho)} d' \geq \frac12 \quad \text{and} \quad \sup_{(-\rho,0)} d' \leq - \frac12,\end{equation}

and such that $\Gamma$ intersects $\partial B_\rho(0)$ in precisely two points. Let $n^\rho$ be the largest integer for which

\begin{equation*} |y_{n^\rho}| \leq \rho \quad \text{and} \quad |y_{-n^\rho}| \leq \rho.\end{equation*}

We observe from

\begin{equation*} |y_{n^\rho}| = |y_{n^\rho} - y_0| \leq \sum_{j=1}^{n^\rho} |y_j - y_{j-1}| \leq C n^\rho h\end{equation*}

that $n^\rho \geq c / h$ for some $c > 0$ independent of h; indeed, by the maximality of $n^\rho$ , either $|y_{n^\rho + 1}| > \rho$ or $|y_{-n^\rho - 1}| > \rho$ , and the estimate above (applied with $n^\rho + 1$ in place of $n^\rho$ ) then yields $\rho \leq C (n^\rho + 1) h$ . Recalling $m^h$ from (1.16), we take h small enough such that $m^h \leq n^\rho$ .

From this construction, we obtain

(3.11) \begin{align} |y_{j}| - |y_{j-1}| & = d(t_j) - d(t_{j-1}) = \int_{t_{j-1}}^{t_j} d'(s) \, ds\nonumber\\ & \geq \frac12 (t_j - t_{j-1}) \geq \frac12 |\gamma_j| \geq c h \qquad \text{for } j = 1,\ldots,n^\rho. \end{align}

Since a similar estimate holds for the points $y_{-j}$ , we obtain

(3.12) \begin{align} c h^{2/3} \leq |y_{\pm m^h}| \leq C h^{2/3}\end{align}

for all h small enough. For convenience, we first assume that

(3.13) \begin{align} \varepsilon^h := |y_{-m^h}| = |y_{m^h}|\end{align}

and comment on the general case afterwards. Note from (3.12) that

\begin{equation*} \varepsilon^h = O(h^{2/3}).\end{equation*}

This concludes the preliminaries for discretising $\Psi$ in (3.8).

From the characterisation of $\Psi$ in Remark 2.1, we obtain

(3.14) \begin{align} \Psi = \int_{\Gamma_{\varepsilon^h}} G \cdot \tau + \int_{\mathcal C_{\varepsilon^h}} G \cdot \tau + O(\varepsilon^h)\end{align}

for any $h > 0$ small enough with respect to $\Gamma$ . We discretise the first term by the integral over

(3.15) \begin{equation} \Gamma_{\varepsilon^h}^h := \Gamma^h \setminus B(0, \varepsilon^h) = \bigcup_{j = m^h + 1}^{N - m^h} \gamma_j \subset \Gamma^h.\end{equation}

Figure 5 illustrates the setting.

Figure 5. $\Gamma_{\varepsilon^h}$ and $\Gamma_{\varepsilon^h}^h$ related to $\Gamma$ and $\Gamma^h$ as illustrated in Figure 2. The sketch is a special case in which (3.13) is satisfied.

Then, by Stokes’ Theorem (see footnote 2),

(3.16) \begin{equation} \bigg| \int_{\Gamma_{\varepsilon^h}} G \cdot \tau - \int_{\Gamma_{\varepsilon^h}^h} G \cdot \tau \bigg|\leq \sum_{j = m^h + 1}^{N-m^h} \bigg| \oint_{\gamma_j \cup \Gamma_j} G \cdot \tau \bigg|\leq \sum_{j = m^h + 1}^{N-m^h} \iint_{\omega_j^h} |g|,\end{equation}

where $g = \textrm{curl} G$ (see (2.3)). To bound the sum in the right-hand side, we split it into three sub-summations. For $j = n^\rho + 1, \ldots, N - n^\rho$ , by the construction of $\rho$ , it holds that $\textrm{dist}(0, \omega_j^h) \geq c$ . Hence,

\begin{equation*} \max_{\omega_j^h} |g| \leq C. \end{equation*}

Using this together with (3.9), we obtain

\begin{equation*} \iint_{\omega_j^h} |g| \leq |\omega_j^h| \max_{\omega_j^h} |g| \leq C h^3.\end{equation*}

Noting from (1.14) that $N \leq C/h$ , we then have that

\begin{equation*} \sum_{j = n^\rho + 1}^{N-n^\rho} \iint_{\omega_j^h} | g | \leq C N h^3 \leq C' h^2.\end{equation*}

The remaining two sub-summations in the right-hand side of (3.16) can be treated similarly to one another; we focus on the one over $j = m^h + 1,\ldots ,n^\rho$ . From the preliminaries (see (3.10) and (3.11)), we obtain

\begin{equation*} \max_{\omega_j^h} |g| \leq \max_{y \in \omega_j^h} \frac C{|y|^3} = \frac C{|y_{j-1}|^3} \leq \frac{C'}{(hj)^3}.\end{equation*}

Then,

\begin{equation*} \sum_{j = m^h + 1}^{n^\rho} \bigg| \iint_{\omega_j^h} g \bigg| \leq \sum_{j = m^h + 1}^{n^\rho} |\omega_j^h| \max_{\omega_j^h} |g| \leq \sum_{j = m^h + 1}^{n^\rho} \frac{C}{j^3} \leq \int_{m^h}^\infty \frac{C}{\alpha^3} \, d\alpha = \frac{C}{2 (m^h)^2} \leq C' h^{2/3}.\end{equation*}

In conclusion, recalling (3.16) and (3.15),

(3.17) \begin{equation} \int_{\Gamma_{\varepsilon^h}} G \cdot \tau= \sum_{j = m^h + 1}^{N-m^h} \int_{\gamma_j} G \cdot \tau + O\left(h^{2/3}\right).\end{equation}
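
Each term $\int_{\gamma_j} G \cdot \tau$ in (3.17) is an integral over a straight segment and can be computed by standard quadrature. The sketch below shows this segment-wise assembly; the kernel G from (1.6) is not reproduced here, so a simple stand-in field V (whose circulation around the origin is known to be $2\pi$) is used only to test the machinery.

```python
# Segment-wise quadrature of a line integral over a polygon (sketch).
import numpy as np

def segment_integral(field, a, b, n_gauss=5):
    # int over the segment from a to b of field . tau, via Gauss-Legendre on [0, 1]
    nodes, weights = np.polynomial.legendre.leggauss(n_gauss)
    s, w = 0.5 * (nodes + 1.0), 0.5 * weights      # map from [-1, 1] to [0, 1]
    d = b - a                                      # tau ds absorbs the segment length
    return sum(wi * field(a + si * d) @ d for si, wi in zip(s, w))

def polygon_integral(field, vertices, closed=True):
    pts = list(vertices) + ([vertices[0]] if closed else [])
    return sum(segment_integral(field, np.asarray(p), np.asarray(q))
               for p, q in zip(pts[:-1], pts[1:]))

V = lambda y: np.array([-y[1], y[0]]) / (y @ y)    # test field, circulation 2*pi

for N in [16, 64, 256]:
    theta = 2.0 * np.pi * np.arange(N) / N
    poly = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # polygonal circle
    print(N, polygon_integral(V, poly) - 2.0 * np.pi)        # quadrature error
```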

Next, we approximate the second term in (3.14) by the integral over a circle $\mathcal C^h$ which we construct from $n_0^h$ and $\kappa_0^h$ . For $\mathcal C^h$ to be close to $\mathcal C$ , we need the error term O(h) in (3.7) to be small compared with $\kappa_0$ . This motivates us to first consider the case $\kappa_0 \geq \varepsilon^h$ and to treat the case of small $\kappa_0$ afterwards.

Assuming that $\kappa_0 \geq \varepsilon^h$ , we set

\begin{equation*} \mathcal C^h := \partial B_{r_0^h} (r_0^h n_0^h) \quad \text{and} \quad \mathcal C_{\varepsilon^h}^h := \mathcal C^h \setminus B_{\varepsilon^h}(0),\end{equation*}

where

\begin{equation*} r_0^h = \frac1{\kappa_0^h} = \frac1{\kappa_0 + O(h)} = \frac{1 + O(h/\kappa_0)}{\kappa_0} = r_0 (1 + O(r_0 h) )\end{equation*}

can be computed from the two points $y_+$ and $y_-$ . Let $\gamma_{\varepsilon^h}$ be the two small arcs on $\partial B_{\varepsilon^h}(0)$ which connect the endpoints of $\mathcal C_{\varepsilon^h}^h$ and $\mathcal C_{\varepsilon^h}$ , and let $\omega^h$ be the region enclosed by the closed loop $\mathcal C_{\varepsilon^h}^h \cup \mathcal C_{\varepsilon^h} \cup \gamma_{\varepsilon^h}$ ; see Figure 6 for a sketch. Then, by Stokes’ Theorem:

(3.18) \begin{equation} \bigg| \int_{\mathcal C_{\varepsilon^h}} G \cdot \tau - \int_{\mathcal C_{\varepsilon^h}^h} G \cdot \tau \bigg| \leq \bigg| \int_{\gamma_{\varepsilon^h}} G \cdot \tau \bigg| + \iint_{\omega^h} |g|.\end{equation}

Figure 6. Sketch of the closed loop $\mathcal C_{\varepsilon^h}^h \cup \mathcal C_{\varepsilon^h} \cup \gamma_{\varepsilon^h}$ .

Next, we show that both integrals in the right-hand side of (3.18) are small. We start with the first one. We parametrise $\gamma_{\varepsilon^h}$ by

(3.19) \begin{equation} \varphi(\theta) := \varepsilon^h \begin{bmatrix} \cos \theta \\[4pt] \sin \theta \end{bmatrix} \qquad \text{for } \theta \in \Theta,\end{equation}

where $\Theta$ is (similar to (2.5)) the union of two intervals. Let $\theta_1$ and $\theta_2$ be the endpoints of one of these intervals. In particular, $\varphi(\theta_1) \in \mathcal C$ and $\varphi(\theta_2) \in \mathcal C^h$ , that is,

\begin{equation*} |\varphi (\theta_1) - r_0 n_0| = r_0 \quad \text{and} \quad |\varphi (\theta_2) - r_0^h n_0^h| = r_0^h.\end{equation*}

To solve for $\theta_1$ , we first compute

\begin{align*} r_0^2 & = |\varphi (\theta_1) - r_0 n_0|^2 = \left(\varepsilon^h \cos \theta_1 - r_0 \cos \phi\right)^2 + \left(\varepsilon^h \sin \theta_1 - r_0 \sin \phi\right)^2 \nonumber\\[4pt] & = (\varepsilon^h)^2 + r_0^2 - 2 \varepsilon^h r_0 \cos (\theta_1 - \phi).\end{align*}

Then, we obtain two solutions given by:

\begin{equation*} \theta_1 = \phi \pm \arccos \frac{\varepsilon^h}{2r_0}.\end{equation*}

Analogously, we obtain

\begin{equation*} \theta_2 = \phi^h \pm \arccos \frac{\varepsilon^h}{2r_0^h}.\end{equation*}

Taking the plus sign in both equations above, we obtain the endpoints of one of the two intervals of $\Theta$ . Then, substituting the expansions for $\phi^h$ and $\kappa_0^h = 1/r_0^h$ , we obtain

(3.20) \begin{equation} \big| \theta_2 - \theta_1 \big| \leq C h^2 + \Big| \arccos \frac{\varepsilon^h}{2r_0} - \arccos \left( \frac{\varepsilon^h}{2r_0} + O({\varepsilon^h} h) \right) \Big| \leq C' {\varepsilon^h} h.\end{equation}

Hence, $|\Theta| \leq C {\varepsilon^h} h$ . We use this to estimate the first integral in (3.18) by:

\begin{equation*} \bigg| \int_{\gamma_{\varepsilon^h}} G \cdot \tau \bigg| = \bigg| \int_{\Theta} G( \varphi(\theta) ) \cdot \varphi'(\theta) \, d\theta \bigg| \leq |\Theta| \frac C{\varepsilon^h} \leq C' h.\end{equation*}

To estimate the second integral in (3.18), we split $\omega^h$ into two pieces; the part inside $B_{r_0}(0)$ and the part outside $B_{r_0}(0)$ . For the inside part, we write similar to (2.2)

(3.21) \begin{align} \omega^h \cap B_{r_0}(0) = \{ (s, \theta) : \varepsilon^h < s < r_0, \ \theta \in \Theta(s) \},\end{align}

where $\Theta(s)$ is the extension of $\Theta$ in (3.19) for $s = \varepsilon^h$ to $s \in (\varepsilon^h, r_0)$ . In particular, the same argument yields $|\Theta(s)| \leq C sh$ . Then,

(3.22) \begin{align} \iint_{\omega^h \cap B_{r_0}(0)} |g| & = \int_{\varepsilon^h}^{r_0} \int_{\Theta(s)} \big| g(s, \theta) \big| s \, d\theta ds\nonumber\\[4pt] & \leq C \int_{\varepsilon^h}^{r_0} |\Theta(s)| \frac1{s^2} \, ds \leq C' h \int_{\varepsilon^h}^{r_0} \frac1{s} \, ds = O(h |\log h|). \end{align}

For the part of $\omega^h$ outside $B_{r_0}(0)$ , note that $\omega^h$ remains inside the tubular neighbourhood of $\mathcal C = \partial B(r_0 n_0, r_0)$ of size $O(r_0^2 h)$ . Indeed, for any point $y \in \mathcal C^h = \partial B(r_0^h n_0^h, r_0^h)$ , we obtain from the triangle inequality that

\begin{align*} |y - r_0 n_0| &\leq |y - r_0^h n_0^h| + |r_0^h n_0^h - r_0 n_0| = r_0 + O(r_0^2 h), \\[4pt] |y - r_0 n_0| &\geq |y - r_0^h n_0^h| - |r_0^h n_0^h - r_0 n_0| = r_0 + O(r_0^2 h).\end{align*}

Hence, $|\omega^h| \leq C r_0^3 h$ , and thus

(3.23) \begin{equation} \iint_{\omega^h \setminus B_{r_0}(0)} |g| \leq |\omega^h| \max_{B_{r_0}(0)^c} |g| \leq C h.\end{equation}

Inserting our findings above in (3.18), we obtain

(3.24) \begin{equation} \int_{\mathcal C_{\varepsilon^h}} G \cdot \tau = \int_{\mathcal C_{\varepsilon^h}^h} G \cdot \tau + O(h |\log h|).\end{equation}

To expand the integral in the right-hand side, we parametrise $\mathcal C_{\varepsilon^h}^h$ as in (2.7) by:

\begin{equation*} \varphi^h (\theta) := r_0^h \begin{bmatrix} \cos \phi^h - \cos (\theta + \phi^h) \\[4pt] \sin \phi^h - \sin (\theta + \phi^h) \end{bmatrix} \qquad \text{with } \alpha^h < \theta < 2\pi - \alpha^h,\end{equation*}

where $\alpha^h \in (0, \pi)$ is such that $|\varphi^h(\alpha^h)| = \varepsilon^h$ . A derivation analogous to the one for $F_\varepsilon^2$ leading to (2.11) yields

(3.25) \begin{equation} \int_{\mathcal C_{\varepsilon^h}^h} G \cdot \tau = -\kappa_0^h \left( A_{\phi^h} \log \frac{1}{\varepsilon^h \kappa_0^h} + B_{\phi^h} \right) + O \big( (\varepsilon^h)^2 \big).\end{equation}

Then, (3.24) yields

(3.26) \begin{align} \int_{\mathcal C_{\varepsilon^h}} G \cdot \tau = -\kappa_0^h \left( A_{\phi^h} \log \frac{1}{\varepsilon^h \kappa_0^h} + B_{\phi^h} \right) + O(h |\log h|).\end{align}

We recall that (3.26) relies on the assumption $\kappa_0 \geq \varepsilon^h$ . The remaining case $0 < \kappa_0 < \varepsilon^h$ can be treated with minor modifications to the derivation of (3.26). We list these modifications first under the additional assumption $\kappa_0^h > 0$ , for which no change to the definition of $\mathcal C^h$ is required. The main modification is that we use $B(0, 1/\varepsilon^h)$ instead of $B_{r_0}(0)$ when splitting $\omega^h$ into two pieces. It is easy to see that $\partial B(0, 1/\varepsilon^h)$ also intersects $\mathcal C^h$ in two points, and that the estimate $|\Theta (s)| \leq Csh$ remains valid. Using this estimate, as in (3.22), we obtain

\begin{equation*} \iint_{\omega^h \cap B(0, 1/\varepsilon^h)} |g| = O(h |\log h|).\end{equation*}

As an alternative to (3.23), we use the rougher estimate:

\begin{equation*} \iint_{\omega^h \setminus B(0, 1/\varepsilon^h)} |g| \leq \iint_{B(0, 1/\varepsilon^h)^c} |g| \leq C \int_{1/\varepsilon^h}^\infty \frac1{s^2} \, ds = C \varepsilon^h.\end{equation*}

Then, the same steps leading to (3.26) yield

(3.27) \begin{align} \int_{\mathcal C_{\varepsilon^h}} G \cdot \tau = -\kappa_0^h \left( A_{\phi^h} \log \frac{1}{\varepsilon^h \kappa_0^h} + B_{\phi^h} \right) + O(\varepsilon^h).\end{align}

Next, we treat the case $0 < \kappa_0 < \varepsilon^h$ with $\kappa_0^h < 0$ . Note from (3.7) that this setting implies $\kappa_0 < Ch$ and $\kappa_0^h \geq -C' h$ . We set $r_0^h := -1/\kappa_0^h > 0$ , $\mathcal C^h := \partial B(- r_0^h n_0^h, r_0^h)$ and

\begin{equation*} \omega^h = \left( B(r_0 n_0, r_0) \cup B(- r_0^h n_0^h, r_0^h) \right)^c \cup \left( B(r_0 n_0, r_0) \cap B(- r_0^h n_0^h, r_0^h) \right).\end{equation*}

In addition to the modification in the case $\kappa_0^h > 0$ , the derivation of $|\Theta| \leq C \varepsilon^h h$ requires a modification too. Indeed, while the endpoint $\theta_1$ can be found analogously, the condition for $\theta_2$ becomes

\begin{equation*} |\varphi (\theta_2) + r_0^h n_0^h| = r_0^h.\end{equation*}

Solving for $\theta_2$ yields

\begin{equation*} \theta_2 = \phi^h \pm \arccos \frac{-\varepsilon^h}{2r_0^h}.\end{equation*}

Then, using the expansion

\begin{equation*} \frac{-1}{r_0^h} = \kappa_0^h = \kappa_0 + O(h),\end{equation*}

we get

\begin{equation*} \theta_2 = \phi^h \pm \arccos \left( \frac{\varepsilon^h}{2r_0} + O(\varepsilon^h h) \right).\end{equation*}

Then, $|\Theta| \leq C \varepsilon^h h$ follows from (3.20) as before.

Finally, we treat the case $0 < \kappa_0 < \varepsilon^h$ with $\kappa_0^h = 0$ . Setting $\mathcal C^h = \{ t \tau^h : t \in \mathbb R \}$ , this case can be treated similarly as either the case $\kappa_0^h > 0$ or the case $\kappa_0^h < 0$ ; we omit the details. This completes the proof of (3.27) for all $\kappa_0 > 0$ without any additional assumptions on the sign or size of $\kappa_0^h$ .

In conclusion, starting from (3.8) and substituting consecutively (3.14), (3.17) and (3.27), we obtain

(3.28) \begin{align} F_\varepsilon = \kappa_0^h A_{\phi^h} \log \frac{\varepsilon^h}{\varepsilon} + \sum_{j = m^h + 1}^{N-m^h} \int_{\gamma_j} G \cdot \tau + O \big( \varepsilon + h |\log \varepsilon| + h^{2/3} \big).\end{align}

This proves (1.21) for the first term in its left-hand side under the additional assumption $|y_{-m^h}| = |y_{m^h}|$ .

The proof above easily extends to the generic case:

\begin{equation*} \varepsilon^h := |y_{m^h}| \neq |y_{-m^h}| =: \varepsilon^{-h}.\end{equation*}

Indeed, the main modification is that $B_{\varepsilon^h}(0)$ is replaced by the union of two half-balls cut along $n_0$ :

\begin{equation*} D := \big\{ x \in B_{\varepsilon^h}(0) : x \cdot \tau_0 \geq 0 \big\} \cup \big\{ x \in B_{\varepsilon^{-h}}(0) : x \cdot \tau_0 \leq 0 \big\}.\end{equation*}

This works because, in the proof above, the narrow wedges (see, e.g. (3.21)) between any of the four curves $\Gamma$ , $\Gamma^h$ , $\mathcal C$ and $\mathcal C^h$ can be treated independently and are always included in one of the two half-balls. A consequence of this modification is that (3.25) changes to

\begin{align*} \int_{\mathcal C^h \setminus D} G \cdot \tau &= -\kappa_0^h \left( \frac12 A_{\phi^h} \log \frac{1}{\varepsilon^h \varepsilon^{-h} (\kappa_0^h)^2} + B_{\phi^h} \right) + O \big( h^{4/3} \big) \\[4pt] &= -\kappa_0^h \left( \frac12 A_{\phi^h} \log \frac{1}{|y_{m^h}| |y_{-m^h}| (\kappa_0^h)^2} + B_{\phi^h} \right) + O \big( h^{4/3} \big);\end{align*}

this can be seen from obvious modifications to the argument leading to (2.11).

To complete the proof of Theorem 1.2, we note that (1.21) follows almost directly from (3.28). Indeed, by the triangle inequality and the definitions of $\mathcal F_\varepsilon$ and $\mathcal F_{\varepsilon,i}^h$ , we get

\begin{equation*} \big| \mathcal F_{\varepsilon,i}^h (\mathbf x) - \mathcal F_\varepsilon(x_i) \big| \leq \big| F_{\varepsilon,i}^h (\mathbf x) - F_\varepsilon(x_i) \big| + \big| \kappa_0^h C_{\phi^h} - \kappa_0 C_\phi \big|.\end{equation*}

Recalling from the proof that $\kappa_0^h = \kappa_0 + O(h)$ and $C_{\phi^h} = C_{\phi} + O(h^2)$ , we obtain (1.21).
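
For completeness, the structure of the resulting discretisation (cf. (3.28)) can be assembled from the sketches above. In the sketch below, discrete_frame and segment_integral are the helper functions from the earlier sketches, while A_phi and G are user-supplied callables standing in for the constant $A_\phi$ from (1.11) and the kernel G from (1.6), which are not reproduced here; the symmetric choice (3.13) of $\varepsilon^h$ is assumed.

```python
# Sketch (not the definitive scheme) of the discrete force suggested by (3.28)
# at node x_i of a closed polygon x_0, ..., x_{N-1}; A_phi and G are assumed
# to be supplied by the user and implement (1.11) and (1.6), respectively.
import numpy as np

def discrete_force(x, i, eps, m_h, A_phi, G, n_gauss=5):
    x = np.asarray(x, dtype=float)
    N = len(x)
    y = x - x[i]                                    # recentre so that y_0 = 0
    _, n_h, kappa_h = discrete_frame(y[(i + 1) % N], y[(i - 1) % N])
    phi_h = np.arctan2(n_h[1], n_h[0]) % (2.0 * np.pi)
    eps_h = np.linalg.norm(y[(i + m_h) % N])        # assumes (3.13): |y_{-m^h}| = |y_{m^h}|
    local = kappa_h * A_phi(phi_h) * np.log(eps_h / eps)       # logarithmic part
    nonlocal_part = sum(                                       # sum over gamma_j
        segment_integral(G, y[(i + j - 1) % N], y[(i + j) % N], n_gauss)
        for j in range(m_h + 1, N - m_h + 1))
    return local + nonlocal_part
```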

Acknowledgements

The author gratefully acknowledges support from JSPS KAKENHI Grant Number JP20K14358. The author also expresses his sincere gratitude to Riccardo Scala for the conception of the proof of Theorem 1.1.

Conflicts of interest

None.

Footnotes

1 Precisely, we take the regularised stress field induced by a dislocation and use it in the formula for the Peach–Koehler force applied to a (singular) dislocation.

2 In the case where $\gamma_j$ and $\Gamma_j$ intersect, we apply a similar argument as that at the end of the case $\kappa_0 > 0$ in the proof of (1.9).

References

Arsenlis, A., Cai, W., Tang, M., Rhee, M., Oppelstrup, T., Hommes, G., Pierce, T. G. & Bulatov, V. V. (2007). Enabling strain hardening simulations with dislocation dynamics. Model. Simul. Mater. Sci. Eng. 15(6), 553.
Abramowitz, M. & Stegun, I. A. (1964). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Vol. 55. US Government Printing Office.
Cai, W., Arsenlis, A., Weinberger, C. R. & Bulatov, V. V. (2006). A non-singular continuum theory of dislocations. J. Mech. Phys. Solids 54(3), 561–587.
Clouet, E. (2011). Dislocation core field. I. Modeling in anisotropic linear elasticity theory. Phys. Rev. B 84(22), 224111.
do Carmo, M. P. (2016). Differential Geometry of Curves and Surfaces: Revised and Updated, 2nd ed. Courier Dover Publications.
de Wit, R. (1960). The continuum theory of stationary dislocations. In: Solid State Physics, Vol. 10. Elsevier, pp. 249–292.
Gavazza, S. D. & Barnett, D. M. (1976). The self-force on a planar dislocation loop in an anisotropic linear-elastic medium. J. Mech. Phys. Solids 24(4), 171–185.
Gehlen, P. C., Hirth, J. P., Hoagland, R. G. & Kanninen, M. F. (1972). A new representation of the strain field associated with the cube-edge dislocation in a model of $\alpha$-iron. J. Appl. Phys. 43(10), 3921–3933.
Gradshteyn, I. S. & Ryzhik, I. M. (2007). Table of Integrals, Series, and Products, 7th ed. Burlington, Massachusetts: Academic Press.
Ghoniem, N. M., Tong, S.-H. & Sun, L. Z. (2000). Parametric dislocation dynamics: a thermodynamics-based approach to investigations of mesoscopic plastic deformation. Phys. Rev. B 61(2), 913.
Hull, D. & Bacon, D. J. (2011). Introduction to Dislocations, Vol. 37. Oxford: Elsevier.
Henager, C. & Hoagland, R. (2005). Dislocation and stacking fault core fields in FCC metals. Philos. Mag. 85(36), 4477–4508.
Hirth, J. P. & Lothe, J. (1982). Theory of Dislocations. New York: John Wiley & Sons.
Kolář, M., Beneš, M., Kratochvíl, J. & Pauš, P. (2018). Modeling of double cross-slip by means of geodesic curvature driven flow. Acta Phys. Pol. A 134(3), 667–670.
Kundin, J., Emmerich, H. & Zimmer, J. (2011). Mathematical concepts for the micromechanical modelling of dislocation dynamics with a phase-field approach. Philos. Mag. 91(1), 97–121.
Lesar, R. & Capolungo, L. (2020). Advances in discrete dislocation dynamics simulations. In: Handbook of Materials Modeling: Methods: Theory and Modeling, pp. 1079–1110.
Lesar, R. (2004). Ambiguities in the calculation of dislocation self energies. Phys. Stat. Sol. (B) 241(13), 2875–2880.
Lothe, J. (1992). Dislocations in continuous elastic media. In: Modern Problems in Condensed Matter Sciences, Vol. 31. Elsevier, pp. 175–235.
Peach, M. & Koehler, J. S. (1950). The forces exerted on dislocations and the stress fields produced by them. Phys. Rev. 80(3), 436–439.
Schwarz, K. W. (1999). Simulation of dislocations on the mesoscopic scale. I. Methods and examples. J. Appl. Phys. 85(1), 108–119.
Zhu, Y., Chapman, S. J. & Acharya, A. (2013). Dislocation motion and instability. J. Mech. Phys. Sol. 61(8), 1835–1853.
Zbib, H. M., Rhee, M. & Hirth, J. P. (1998). On plastic deformation and the dynamics of 3D dislocations. Int. J. Mech. Sci. 40(2–3), 113–127.
Zhao, D., Wang, H. & Xiang, Y. (2012). Asymptotic behaviors of the stress fields in the vicinity of dislocations and dislocation segments. Philos. Mag. 92(18), 2351–2374.