
Comparison of hit-and-run, slice sampler and random walk Metropolis

Published online by Cambridge University Press:  16 January 2019

Daniel Rudolf* (Universität Göttingen)
Mario Ullrich** (Universität Linz)

* Postal address: Felix-Bernstein-Institute for Mathematical Statistics in the Biosciences, Universität Göttingen, Goldschmidstraße 7, 37077 Göttingen, Germany. Email address: daniel.rudolf@uni-goettingen.de
** Postal address: Universität Linz, Altenberger Straße 69, 4040 Linz, Austria. Email address: mario.ullrich@jku.at

Abstract


Different Markov chains can be used for approximate sampling from a distribution given by an unnormalized density with respect to the Lebesgue measure. The hit-and-run, (hybrid) slice sampler, and random walk Metropolis algorithms are popular tools for simulating such Markov chains. We develop a general approach to comparing the efficiency of these sampling procedures via a partial ordering of their Markov operators, the covariance ordering. In particular, we show that hit-and-run and the simple slice sampler are more efficient than a hybrid slice sampler based on hit-and-run, which in turn is more efficient than a (lazy) random walk Metropolis algorithm.
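Two of the samplers compared in the abstract can be sketched in a few lines. The Python toy below (illustrative only, not the paper's construction) implements a lazy random walk Metropolis chain for an unnormalized Gaussian density, and hit-and-run for the uniform distribution on the unit Euclidean ball, a case where the conditional distribution along the chosen line can be sampled exactly. All function names and parameters are assumptions for illustration.

```python
import math
import random


def lazy_rwm(n_steps, step=0.5, seed=0):
    """Lazy random walk Metropolis for the unnormalized density f(x) = exp(-x^2 / 2)."""
    rng = random.Random(seed)
    log_f = lambda x: -0.5 * x * x          # unnormalized log-density
    x, chain = 0.0, []
    for _ in range(n_steps):
        if rng.random() < 0.5:              # lazy step: hold with probability 1/2
            chain.append(x)
            continue
        y = x + rng.uniform(-step, step)    # symmetric (uniform) proposal
        if rng.random() < math.exp(min(0.0, log_f(y) - log_f(x))):
            x = y                           # Metropolis accept; otherwise stay put
        chain.append(x)
    return chain


def hit_and_run_ball(n_steps, dim=3, seed=0):
    """Hit-and-run for the uniform distribution on the unit Euclidean ball."""
    rng = random.Random(seed)
    x, chain = [0.0] * dim, []
    for _ in range(n_steps):
        # Uniform direction on the sphere via normalized Gaussians.
        d = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        nrm = math.sqrt(sum(v * v for v in d))
        d = [v / nrm for v in d]
        # The chord {x + t*d} lies in the ball for t with t^2 + 2<x,d>t + |x|^2 - 1 <= 0.
        xd = sum(a * b for a, b in zip(x, d))
        xx = sum(a * a for a in x)
        disc = math.sqrt(xd * xd - xx + 1.0)
        t = rng.uniform(-xd - disc, -xd + disc)   # uniform point on the chord
        x = [a + t * b for a, b in zip(x, d)]
        chain.append(x)
    return chain
```

For a uniform target the line conditional is uniform on the chord, so hit-and-run needs no accept/reject step; for general densities the exact line sampling is replaced by the hybrid (e.g. slice-sampler-based) updates whose efficiency the paper orders.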

Type: Research Papers
Copyright © Applied Probability Trust 2018
