
Lab 3: Markov chain Monte Carlo
Nial Friel
1 November 2019
Aim of this lab
In this lab we will focus on implementing MCMC for an example which we have explored in class. The
objective will be to understand the role that the proposal variance plays in determining the efficiency of the
Markov chain.
A brief re-cap of MCMC
Recall that MCMC is a framework for simulating from a target distribution π(θ) by constructing a Markov
chain whose stationary distribution is π(θ). Then, if we run the chain for long enough, simulated values
from the chain can be treated as a (dependent) sample from the target distribution and used as a basis for
summarising important features of π.
Under certain regularity conditions, the Markov chain sample path mimics a random sample from π. Given
realisations {θt : t = 0, 1, . . . } from such a chain, typical asymptotic results include the distributional
convergence of the realisations, ie

θt → π(θ) in distribution as t → ∞.

Also the consistency of the ergodic average: for any scalar functional φ,

(1/N) Σt φ(θt) → Eπ[φ(θ)] almost surely as N → ∞.
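As a quick numerical illustration of the ergodic-average result (a sketch of my own, not part of the lab), consider a stationary AR(1) chain, whose marginal distribution is N(0, 1/(1 − a²)); the ergodic averages of θt and θt² settle near the corresponding stationary moments:

```r
# Illustrative sketch (not from the lab): ergodic averages for an AR(1) chain.
# The chain theta_t = a*theta_{t-1} + e_t, with e_t ~ N(0,1), has stationary
# distribution N(0, 1/(1-a^2)), so the ergodic average of theta_t tends to 0
# and that of theta_t^2 tends to 1/(1-a^2).
set.seed(1)
a <- 0.8
n <- 1e5
theta <- numeric(n)
theta[1] <- rnorm(1, 0, sqrt(1/(1 - a^2)))   # start the chain in stationarity
for(t in 2:n) theta[t] <- a*theta[t-1] + rnorm(1)
mean(theta)    # close to 0
mean(theta^2)  # close to 1/(1 - 0.8^2), i.e. about 2.78
```

Note that the realisations are dependent, so these averages converge more slowly than they would for an iid sample of the same size.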
So this leads to the question of how one can generate such a Markov chain, in general. As we’ve seen in
lectures, the Metropolis-Hastings algorithm provides a general means to do this.
Metropolis-Hastings algorithm
Recall that at iteration t of the Metropolis-Hastings algorithm one simulates a proposed state, φ from some
proposal distribution which depends on the current state, θt. We denote this proposal density by q(θt, φ).
This proposed state is then accepted with probability α(θt, φ) (as described below), in which case the next
state of the Markov chain becomes θt+1 = φ; otherwise, θt+1 = θt.
The algorithm can be described as follows:
Step 1. Given the current state, θt, generate a proposed new state, φ, from the distribution q(θt, φ).
Step 2. Calculate the acceptance probability

α(θt, φ) = min{ 1, [π(φ) q(φ, θt)] / [π(θt) q(θt, φ)] }.
Step 3. With probability α(θt, φ), set θt+1 = φ, else set θt+1 = θt.
Step 4. Return to step 1.
Here we write a generic function to implement the Metropolis-Hastings algorithm.
metropolis.hastings <- function(f,          # the target distribution
                                g,          # the proposal distribution (density)
                                random.g,   # generates a sample from the proposal distribution
                                x0,         # initial value for chain, in R it is x[1]
                                sigma,      # proposal st. dev.
                                chain.size=1e5){ # chain size
  x <- c(x0, rep(NA, chain.size-1))         # initialise chain
  for(i in 2:chain.size) {
    y <- random.g(x[i-1], sigma)            # generate y from g(.|x[i-1]) using random.g
    alpha <- min(1, f(y)*g(y, x[i-1], sigma)/(f(x[i-1])*g(x[i-1], y, sigma)))
    if(runif(1) < alpha) x[i] <- y          # accept y with probability alpha,
    else x[i] <- x[i-1]                     # otherwise stay at the current state
  }
  x                                         # return the simulated chain
}
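Assuming the `metropolis.hastings` function above (completed so that it returns the chain `x`), a minimal usage sketch: sampling from a standard normal target with a Gaussian random-walk proposal. The names `target`, `proposal` and `rproposal` are my own illustrative choices, not part of the lab.

```r
# Illustrative usage sketch: N(0,1) target, Gaussian random-walk proposal.
# g(from, to, sigma) is the density of proposing `to` from the state `from`.
target    <- function(x) dnorm(x, 0, 1)
proposal  <- function(from, to, sigma) dnorm(to, from, sigma)
rproposal <- function(from, sigma) rnorm(1, from, sigma)

set.seed(42)
chain <- metropolis.hastings(f = target, g = proposal, random.g = rproposal,
                             x0 = 0, sigma = 2, chain.size = 1e4)
mean(chain); sd(chain)   # should be close to 0 and 1 respectively
mean(diff(chain) != 0)   # empirical acceptance rate
```

Re-running this with a much smaller or much larger `sigma` and watching the acceptance rate is a simple way to explore the efficiency question posed in the aim of this lab.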
