Title: The Multiple-Try Method and Local Optimization in Metropolis Sampling
Abstract: This article describes a new Metropolis-like transition rule, the multiple-try Metropolis, for Markov chain Monte Carlo (MCMC) simulations. By using this transition rule together with adaptive direction sampling, we propose a novel method for incorporating local optimization steps into an MCMC sampler in continuous state space. Numerical studies show that the new method performs significantly better than the traditional Metropolis-Hastings (M-H) sampler. With minor tailoring in using the rule, the multiple-try method can also be exploited to achieve the effect of a griddy Gibbs sampler without having to bear with griddy approximations, and the effect of a hit-and-run algorithm without having to figure out the required conditional distribution in a random direction.
Key Words: Adaptive direction sampling; Conjugate gradient; Damped sinusoidal; Gibbs sampling; Griddy Gibbs sampler; Hit-and-run algorithm; Markov chain Monte Carlo; Metropolis algorithm; Mixture model; Orientational bias Monte Carlo.
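To make the abstract's description of the multiple-try Metropolis (MTM) transition rule concrete, below is a minimal sketch of one MTM update. It assumes a symmetric Gaussian random-walk proposal T(x, .) and the weight choice w(y, x) = pi(y) * T(y, x) (i.e. lambda == 1); the function names (`mtm_step`, `log_pi`), the step size `sigma`, and the number of tries `k` are illustrative choices, not taken from the article.

```python
import numpy as np

def logsumexp(a):
    # numerically stable log(sum(exp(a)))
    m = a.max()
    return m + np.log(np.sum(np.exp(a - m)))

def mtm_step(x, log_pi, sigma=1.0, k=5, rng=None):
    """One multiple-try Metropolis update of the current state x (1-D numpy array)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]

    def log_T(a, b):
        # log density (up to a constant) of the Gaussian proposal N(b; a, sigma^2 I)
        return -0.5 * np.sum((b - a) ** 2) / sigma**2

    def log_w(y, z):
        # log of the weight w(y, z) = pi(y) * T(y, z), taking lambda(y, z) = 1
        return log_pi(y) + log_T(y, z)

    # 1. Draw k candidates y_1, ..., y_k from T(x, .)
    ys = x + sigma * rng.standard_normal((k, d))
    log_wy = np.array([log_w(y, x) for y in ys])

    # 2. Select one candidate y with probability proportional to w(y_j, x)
    probs = np.exp(log_wy - log_wy.max())
    probs /= probs.sum()
    y = ys[rng.choice(k, p=probs)]

    # 3. Draw k-1 reference points from T(y, .) and set the k-th to the current x
    xs = y + sigma * rng.standard_normal((k - 1, d))
    log_wx = np.array([log_w(z, y) for z in xs] + [log_w(x, y)])

    # 4. Accept y with probability min{1, sum_j w(y_j, x) / sum_j w(x*_j, y)}
    if np.log(rng.uniform()) < logsumexp(log_wy) - logsumexp(log_wx):
        return y
    return x

# Example usage (hypothetical target): sample a bimodal 1-D density.
if __name__ == "__main__":
    log_pi = lambda x: np.log(0.5 * np.exp(-0.5 * (x[0] + 3) ** 2)
                              + 0.5 * np.exp(-0.5 * (x[0] - 3) ** 2))
    x = np.array([0.0])
    samples = []
    for _ in range(5000):
        x = mtm_step(x, log_pi, sigma=2.0, k=5)
        samples.append(x[0])
```

Other symmetric, nonnegative choices of lambda (for example lambda(y, x) = 1 / T(y, x), which reduces the weights to pi(y)) are equally valid and change only how the candidates are scored.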
Publication Year: 2000
Publication Date: 2000-03-01
Language: en
Type: article
Indexed In: Crossref
Access and Citation
Cited By Count: 63