We consider time-inhomogeneous ordinary differential equations (ODEs) whose parameters are governed by an underlying ergodic Markov process. When this underlying process is accelerated by a factor $\varepsilon^{-1}$, an averaging phenomenon occurs and the solution of the ODE converges to a deterministic ODE as $\varepsilon$ vanishes. We are interested in cases where this averaged flow is globally attracted to a point. In that case, the equilibrium distribution of the solution of the ODE converges to a Dirac mass at this point. We prove an asymptotic expansion in terms of $\varepsilon$ for this convergence, with a somewhat explicit formula for the first-order term. The results are applied in three contexts: linear Markov-modulated ODEs, randomized splitting schemes, and Lotka–Volterra models in a random environment. In particular, as a corollary, we prove the existence of two matrices whose convex combinations are all stable but are such that, for a suitable jump rate, the top Lyapunov exponent of a Markov-modulated linear ODE switching between these two matrices is positive.
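As a toy illustration of the corollary above, one can estimate the top Lyapunov exponent of a Markov-modulated linear ODE by Euler simulation with renormalisation. The pair of matrices below is hypothetical (a standard oscillatory example, not the pair whose existence the abstract establishes), and the jump rate is a free parameter; this is only a sketch of the estimation procedure, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pair of individually stable matrices (eigenvalues -1 ± i);
# all of their convex combinations are also stable.  Illustration only.
a = 10.0
A = [np.array([[-1.0, -a], [1.0 / a, -1.0]]),
     np.array([[-1.0, -1.0 / a], [a, -1.0]])]

def lyapunov_estimate(rate, T=200.0, dt=1e-3):
    """Crude estimate of the top Lyapunov exponent of x' = A[i(t)] x,
    where i(t) switches between 0 and 1 at the given jump rate.
    The state is renormalised each step to avoid overflow; the
    accumulated log-growth divided by T estimates the exponent."""
    x = np.array([1.0, 0.0])
    i, log_growth, t = 0, 0.0, 0.0
    while t < T:
        if rng.random() < rate * dt:      # Markov switching event
            i = 1 - i
        x = x + dt * A[i] @ x             # Euler step for the current regime
        n = np.linalg.norm(x)
        log_growth += np.log(n)
        x /= n                            # renormalise
        t += dt
    return log_growth / T
```

Scanning `lyapunov_estimate` over a range of jump rates is a quick numerical way to look for the destabilising rates whose existence the abstract asserts.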
We prove polynomial ergodicity for the one-dimensional Zig-Zag process on heavy-tailed targets and identify the exact order of polynomial convergence of the process when targeting Student distributions.
We model a retrial queue with the classical retrial policy, in which each blocked customer in the orbit retries for service, and with general retrial times, as a piecewise deterministic Markov process (PDMP). From the extended generator of this PDMP we derive the associated martingales. These results are used to obtain the conditional expected number of customers in the orbit in the transient regime.
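As a minimal sketch of the model (not the paper's analysis), the classical retrial policy can be simulated event by event. Exponential retrial times with per-customer rate `theta` are a hypothetical special case of the general retrial times treated in the abstract; under the classical policy, an orbit of N customers generates retrials at total rate N·theta.

```python
import numpy as np

rng = np.random.default_rng(3)

def retrial_orbit(t_end=100.0, lam=0.5, mu=1.0, theta=0.7):
    """Event-driven simulation of an M/M/1 retrial queue with the
    classical retrial policy: Poisson arrivals at rate lam, service
    rate mu, and each orbiting customer retrying at rate theta.
    Returns the orbit size at time t_end."""
    t, busy, orbit = 0.0, False, 0
    while True:
        rates = [lam,                       # new arrival
                 mu if busy else 0.0,       # service completion
                 0.0 if busy else orbit * theta]  # retrial attempt
        t_next = t + rng.exponential(1.0 / sum(rates))
        if t_next > t_end:
            return orbit
        t = t_next
        u = rng.random() * sum(rates)
        if u < rates[0]:
            if busy:
                orbit += 1                  # blocked customer joins the orbit
            else:
                busy = True
        elif u < rates[0] + rates[1]:
            busy = False
        else:
            orbit -= 1                      # successful retrial
            busy = True
```

Averaging `retrial_orbit` over many runs gives a Monte Carlo check on transient orbit-size formulas of the kind the abstract derives.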
Markov chain Monte Carlo (MCMC) methods provide an essential tool in statistics for sampling from complex probability distributions. While the standard approach to MCMC involves constructing discrete-time reversible Markov chains whose transition kernel is obtained via the Metropolis–Hastings algorithm, there has been recent interest in alternative schemes based on piecewise deterministic Markov processes (PDMPs). One such approach is based on the zig-zag process, introduced in Bierkens and Roberts (2016), which has been shown to provide a highly scalable scheme for sampling in the big data regime; see Bierkens et al. (2016). In this paper we study the performance of the zig-zag sampler, focusing on the one-dimensional case. In particular, we identify conditions under which a central limit theorem holds and characterise the asymptotic variance. Moreover, we study the influence of the switching rate on the diffusivity of the zig-zag process by identifying a diffusion limit as the switching rate tends to ∞. Based on our results we compare the performance of the zig-zag sampler to existing Monte Carlo methods, both analytically and through simulations.
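For concreteness, here is a minimal sketch of the one-dimensional zig-zag process for a standard Gaussian target (an assumed example, not the paper's general setting). The state is (x, θ) with velocity θ ∈ {−1, +1}, and the canonical switching rate is λ(x, θ) = max(0, θ x), for which the switching times can be sampled by exact inversion of the integrated rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def zigzag_gaussian(n_events=5000):
    """Simulate the 1-d zig-zag process targeting N(0,1) with switching
    rate lambda(x, theta) = max(0, theta * x).  Solving
    int_0^tau max(0, a + s) ds = E with a = theta * x and E ~ Exp(1)
    gives the next switching time in closed form.  Returns the
    piecewise-linear skeleton (event times, positions)."""
    x, theta, t = 0.0, 1.0, 0.0
    times, xs = [t], [x]
    for _ in range(n_events):
        a = theta * x
        e = rng.exponential()
        tau = np.sqrt(max(a, 0.0) ** 2 + 2.0 * e) - a   # exact inversion
        t += tau
        x += theta * tau            # deterministic linear motion
        theta = -theta              # velocity flip at the event
        times.append(t)
        xs.append(x)
    return np.array(times), np.array(xs)
```

Ergodic averages are taken along the continuous trajectory: on a segment from x0 to x1 of duration tau, the integral of x² is tau·(x0² + x0·x1 + x1²)/3, so the time average should approach the target's second moment, 1.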
In [A. Genadot and M. Thieullen, Averaging for a fully coupled piecewise-deterministic Markov process in infinite dimensions. Adv. Appl. Probab. 44 (2012) 749–773], the authors addressed the question of averaging for a slow-fast piecewise deterministic Markov process (PDMP) in infinite dimensions. In the present paper, we carry on and complete this work by the mathematical analysis of the fluctuations of the slow-fast system around the averaged limit. A central limit theorem is derived and the associated Langevin approximation is considered. The motivation for this work is the study of stochastic conductance-based neuron models which describe the propagation of an action potential along a nerve fiber.
We consider the level hitting times τ_y = inf{t ≥ 0 : X_t = y} and the running maximum process M_t = sup{X_s : 0 ≤ s ≤ t} of a growth-collapse process (X_t)_{t≥0}, defined as a [0, ∞)-valued Markov process that grows linearly between random ‘collapse’ times at which downward jumps with state-dependent distributions occur. We show how the moments and the Laplace transform of τ_y can be determined in terms of the extended generator of X_t and give a power series expansion of the reciprocal of E e^{−sτ_y}. We prove asymptotic results for τ_y and M_t: for example, if m(y) = E τ_y is of rapid variation then M_t / m^{−1}(t) → 1 weakly as t → ∞, where m^{−1} is the inverse function of m, while if m(y) is of regular variation with index a ∈ (0, ∞) and X_t is ergodic, then M_t / m^{−1}(t) converges weakly to a Fréchet distribution with exponent a. In several special cases we provide explicit formulae.
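A toy instance of such a process (a hypothetical special case, not the paper's general state-dependent collapse mechanism) grows at unit speed and, at Poisson collapse times, jumps down to U·X with U uniform on (0, 1). The hitting time τ_y can then be simulated directly:

```python
import numpy as np

rng = np.random.default_rng(2)

def hitting_time(y, collapse_rate=1.0):
    """Simulate tau_y for a toy growth-collapse process: X grows at unit
    speed and, at rate collapse_rate, collapses to U * X with
    U ~ Uniform(0, 1).  Returns the first time X reaches level y."""
    x, t = 0.0, 0.0
    while True:
        dt = rng.exponential(1.0 / collapse_rate)   # time to next collapse
        if x + dt >= y:           # level y reached before the collapse
            return t + (y - x)
        t += dt
        x = rng.random() * (x + dt)                 # collapse: X -> U * X
```

Since the growth speed is 1, τ_y ≥ y holds pathwise, which gives a simple sanity check; Monte Carlo averages of `hitting_time` can be compared against moment formulae of the type the abstract derives from the extended generator.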