Chapter 9 - Markov Chain Monte Carlo

This chapter has been an informal introduction to Markov chain Monte Carlo (MCMC) estimation. The goal has been to introduce the purpose and approach of MCMC algorithms. The major algorithms introduced were the Metropolis, Gibbs sampling, and Hamiltonian Monte Carlo algorithms. Each has its advantages and disadvantages. The ulam function in the rethinking package was introduced. It uses the Stan (mc-stan.org) Hamiltonian Monte Carlo engine to fit models as they are defined in this book. General advice about diagnosing poor MCMC fits was illustrated with a couple of pathological examples.

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file YourName_ANLY505-Year-Semester.html, publish the assignment to your R Pubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

9E1. Which of the following is a requirement of the simple Metropolis algorithm?

  1. The parameters must be discrete.
  2. The likelihood function must be Gaussian.
  3. The proposal distribution must be symmetric.
# 3. The proposal distribution must be symmetric.
# The simple Metropolis algorithm requires that the chance of proposing a jump from A to B equals the chance of proposing a jump from B back to A. The parameters do not have to be discrete, and the likelihood does not have to be Gaussian.
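# For illustration (my own toy example, in the spirit of the chapter's island-hopping
# King Markov story): a minimal Metropolis simulation with a symmetric proposal.
num_weeks <- 1e5
positions <- rep( 0 , num_weeks )
current <- 10
for ( i in 1:num_weeks ) {
    positions[i] <- current
    # symmetric proposal: move one island left or right with equal probability
    proposal <- current + sample( c(-1,1) , size=1 )
    # wrap around the ring of 10 islands
    if ( proposal < 1 ) proposal <- 10
    if ( proposal > 10 ) proposal <- 1
    # accept the move with probability proportional to relative population size
    prob_move <- proposal / current
    current <- ifelse( runif(1) < prob_move , proposal , current )
}
# in the long run, the proportion of weeks spent on each island matches its population size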

9E2. Gibbs sampling is more efficient than the Metropolis algorithm. How does it achieve this extra efficiency? Are there any limitations to the Gibbs sampling strategy?

# Gibbs sampling gains its efficiency through adaptive proposals: using conjugate pairs of priors and likelihoods, it draws each parameter directly from its full conditional distribution given the current values of the other parameters, so every proposed move is accepted.

# Its limitations are that it requires conjugate priors, which we may not want to use, and that in models with many parameters or strongly correlated parameters it gets stuck, taking only small steps and exploring the posterior very slowly.
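# A toy Gibbs sampler (illustrative only; the data, priors, and names below are my own
# choices) for a normal model with conjugate priors. Each parameter is drawn directly
# from its full conditional given the current value of the other, so nothing is rejected.
set.seed(11)
y_toy <- rnorm( 50 , mean=2 , sd=1.5 )
n <- length(y_toy)
mu0 <- 0 ; tau20 <- 100      # Normal prior on mu
a0 <- 1 ; b0 <- 1            # inverse-gamma prior on sigma^2
n_iter <- 5000
mu_draws <- numeric(n_iter) ; s2_draws <- numeric(n_iter)
mu_cur <- 0 ; s2_cur <- 1
for ( i in 1:n_iter ) {
    # full conditional for mu is Normal
    prec <- 1/tau20 + n/s2_cur
    m <- ( mu0/tau20 + sum(y_toy)/s2_cur ) / prec
    mu_cur <- rnorm( 1 , m , sqrt(1/prec) )
    # full conditional for sigma^2 is inverse-gamma
    s2_cur <- 1 / rgamma( 1 , a0 + n/2 , b0 + sum((y_toy - mu_cur)^2)/2 )
    mu_draws[i] <- mu_cur ; s2_draws[i] <- s2_cur
}
mean(mu_draws) ; mean(sqrt(s2_draws))   # posterior means, close to the simulated 2 and 1.5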

9E3. Which sort of parameters can Hamiltonian Monte Carlo not handle? Can you explain why?

# HMC cannot handle discrete parameters. It works by simulating a frictionless particle gliding over the log-posterior surface, which requires computing gradients; a discrete parameter has no continuous surface, so there is no slope (gradient) for the simulation to follow.

9E4. Explain the difference between the effective number of samples, n_eff as calculated by Stan, and the actual number of samples.

# n_eff is Stan's estimate of the number of effectively independent samples: how many truly independent draws would carry the same amount of information about a quantity such as the posterior mean.

# The actual number of samples is simply the count of draws stored in the chains. Because consecutive MCMC samples are usually autocorrelated, n_eff is typically smaller than the actual number (though with anti-correlated samples it can even exceed it).
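# Quick illustration (assumes the coda package is installed): autocorrelated draws carry
# less information than independent ones, so their effective size is much smaller.
library(coda)
set.seed(11)
iid_draws <- rnorm( 2000 )                                      # independent draws
ar_draws <- as.numeric( arima.sim( list(ar=0.9) , n=2000 ) )    # strongly autocorrelated draws
effectiveSize( iid_draws )   # close to the actual 2000
effectiveSize( ar_draws )    # far fewer than 2000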

9E5. Which value should Rhat approach, when a chain is sampling the posterior distribution correctly?

# Rhat should approach 1.00 (from above). Values noticeably larger than 1 indicate that the chains have not yet converged to the same target distribution.

9E6. Sketch a good trace plot for a Markov chain, one that is effectively sampling from the posterior distribution. What is good about its shape? Then sketch a trace plot for a malfunctioning Markov chain. What about its shape indicates malfunction?

library(rethinking)
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse( dd$cont_africa==1 , 1 , 2 )
m8.3 <- quap(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dd )
precis( m8.3 , depth=2 )
##             mean          sd        5.5%       94.5%
## a[1]   0.8865428 0.015676261  0.86148910  0.91159648
## a[2]   1.0506017 0.009936975  1.03472053  1.06648294
## b[1]   0.1325088 0.074207194  0.01391141  0.25110626
## b[2]  -0.1427913 0.054751280 -0.23029445 -0.05528821
## sigma  0.1094984 0.005935875  0.10001177  0.11898512
dat_slim <- list(
    log_gdp_std = dd$log_gdp_std,
    rugged_std = dd$rugged_std,
    cid = as.integer( dd$cid )
)
str(dat_slim)
## List of 3
##  $ log_gdp_std: num [1:170] 0.88 0.965 1.166 1.104 0.915 ...
##  $ rugged_std : num [1:170] 0.138 0.553 0.124 0.125 0.433 ...
##  $ cid        : int [1:170] 1 2 2 2 2 2 2 2 2 1 ...
m9.1 <- ulam(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dat_slim , chains=4 , cores=4 )
show( m9.1 )
## Hamiltonian Monte Carlo approximation
## 2000 samples from 4 chains
## 
## Sampling durations (seconds):
##         warmup sample total
## chain:1   0.15    0.1  0.25
## chain:2   0.15    0.1  0.25
## chain:3   0.16    0.1  0.25
## chain:4   0.14    0.1  0.24
## 
## Formula:
## log_gdp_std ~ dnorm(mu, sigma)
## mu <- a[cid] + b[cid] * (rugged_std - 0.215)
## a[cid] ~ dnorm(1, 0.1)
## b[cid] ~ dnorm(0, 0.3)
## sigma ~ dexp(1)
traceplot( m9.1 )

# In the trace plot for m9.1 above, each chain's samples are plotted in sequential order and joined by a line.

# A good trace shows stationarity (each chain stays within the same high-probability region around a stable central tendency), good mixing (it zig-zags rapidly rather than wandering slowly), and convergence (all four chains overlap in the same region).

# A malfunctioning chain violates one or more of these: the trace drifts between regions or makes sudden excursions to extreme values, moves sluggishly with long flat stretches, or the separate chains settle in different places and never overlap. The model m9.2 below is an example of such a malfunction.

y <- c(-1,1)
set.seed(11)
m9.2 <- ulam(
    alist(
        y ~ dnorm( mu , sigma ) ,
        mu <- alpha ,
        alpha ~ dnorm( 0 , 1000 ) ,
        sigma ~ dexp( 0.0001 )
    ) , data=list(y=y) , chains=3 )
## 
## SAMPLING FOR MODEL '726d002e27cec1633082261fcfedb813' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 5.5e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.55 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.15233 seconds (Warm-up)
## Chain 1:                0.020413 seconds (Sampling)
## Chain 1:                0.172743 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL '726d002e27cec1633082261fcfedb813' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 5e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 2: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 2: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 2: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 2: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 2: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 2: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 2: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 2: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 2: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 2: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 2: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.135835 seconds (Warm-up)
## Chain 2:                0.019843 seconds (Sampling)
## Chain 2:                0.155678 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL '726d002e27cec1633082261fcfedb813' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 6e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 3: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 3: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 3: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 3: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 3: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 3: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 3: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 3: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 3: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 3: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 3: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.151217 seconds (Warm-up)
## Chain 3:                0.016731 seconds (Sampling)
## Chain 3:                0.167948 seconds (Total)
## Chain 3:
## Warning: There were 82 divergent transitions after warmup. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## to find out why this is a problem and how to eliminate them.
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: The largest R-hat is 1.07, indicating chains have not mixed.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#r-hat
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
show( m9.2 )
## Hamiltonian Monte Carlo approximation
## 1500 samples from 3 chains
## 
## Sampling durations (seconds):
##         warmup sample total
## chain:1   0.15   0.02  0.17
## chain:2   0.14   0.02  0.16
## chain:3   0.15   0.02  0.17
## 
## Formula:
## y ~ dnorm(mu, sigma)
## mu <- alpha
## alpha ~ dnorm(0, 1000)
## sigma ~ dexp(1e-04)
traceplot( m9.2 )
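# The trace for m9.2 is the malfunctioning example: with only two observations and essentially flat priors, the chains drift erratically, make sudden excursions to extreme values, and do not sit on top of one another. This is exactly what the divergent-transition, Rhat, and effective-sample-size warnings above are flagging.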

9E7. Repeat the problem above, but now for a trace rank plot.

trankplot( m9.1 )
trankplot( m9.2 )
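# A healthy trank (trace rank) plot shows each chain's histogram of sample ranks staying close to uniform and overlapping the histograms of the other chains, which is what m9.1 produces. For m9.2 the rank histograms separate, with individual chains spending long stretches holding mostly the highest or lowest ranks, the trank-plot signature of poor mixing and non-convergence.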

9M1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Use ulam to estimate the posterior. Does the different prior have any detectible influence on the posterior distribution of sigma? Why or why not?

data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]

dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)

dd$cid <- ifelse( dd$cont_africa==1 , 1 , 2 )

m8.3 <- quap(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]* (rugged_std-0.215) ,
    a[cid] ~ dnorm(1,0.1),
    b[cid] ~ dnorm(0,0.3),
    sigma ~ dexp(1)
  ) , 
  data=dd)

precis(m8.3 , depth=2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865660 0.015675078  0.86151419  0.91161779
## a[2]   1.0505679 0.009936208  1.03468791  1.06644787
## b[1]   0.1325350 0.074201585  0.01394649  0.25112342
## b[2]  -0.1425568 0.054747270 -0.23005354 -0.05506012
## sigma  0.1094897 0.005934696  0.10000487  0.11897445
pairs(m8.3)

m8.3_unif <- quap(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]* (rugged_std-0.215) ,
    a[cid] ~ dnorm(1,0.1),
    b[cid] ~ dnorm(0,0.3),
    sigma ~ dunif(0,1)
  ) , 
  data=dd)

precis(m8.3_unif , depth=2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865646 0.015680645  0.86150390  0.91162530
## a[2]   1.0505685 0.009939796  1.03468276  1.06645419
## b[1]   0.1325028 0.074227013  0.01387368  0.25113189
## b[2]  -0.1425733 0.054766564 -0.23010089 -0.05504579
## sigma  0.1095296 0.005940112  0.10003617  0.11902306
pairs(m8.3_unif)

# The different prior has no detectable influence on the posterior for sigma. With 170 countries the likelihood dominates, and both dexp(1) and dunif(0,1) are weak priors that place plenty of probability mass around the region where sigma's posterior concentrates (about 0.11), so the two fits are essentially identical.
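# The question asks for ulam; a sketch of the same model fit with HMC, reusing dat_slim
# from 9E6 above (the model name is my own choice; not evaluated here):
m9.1_unif <- ulam(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dunif( 0 , 1 )
    ) , data=dat_slim , chains=4 , cores=4 )
precis( m9.1_unif , depth=2 )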

9M2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?

m8.3_exp <- quap(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]* (rugged_std-0.215) ,
    a[cid] ~ dnorm(1,0.1),
    b[cid] ~ dnorm(0,0.3),
    sigma ~ dexp(0.3)
  ) , 
  data=dd)

precis(m8.3_exp , depth=2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865649 0.015679232  0.86150647  0.91162335
## a[2]   1.0505696 0.009938884  1.03468537  1.06645388
## b[1]   0.1325026 0.074220566  0.01388385  0.25112145
## b[2]  -0.1425740 0.054761661 -0.23009371 -0.05505429
## sigma  0.1095195 0.005938737  0.10002822  0.11901072
pairs(m8.3_exp)

# Changing sigma's prior from dexp(1) to dexp(0.3), as in m8.3_exp above, makes no visible difference in the posterior, because the data overwhelm either weak prior. Putting dexp(0.3) on b[cid], however, does change the posterior: an exponential prior has support only on positive values, so b[2], which the data pull toward a negative slope, can no longer go below zero and its posterior gets squeezed up against zero, while b[1] (already positive) is largely unchanged.
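# A sketch of the fit the question describes, with the exponential prior on the slope and
# fit with ulam, reusing dat_slim (the model name is my own choice; not evaluated here):
m9.2_expb <- ulam(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dexp( 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dat_slim , chains=4 , cores=4 )
precis( m9.2_expb , depth=2 )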

9M3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?

m9.1 <- ulam(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dat_slim , chains=4 , cores=4 )

precis( m9.1 , depth=2 )
##             mean          sd        5.5%       94.5%    n_eff     Rhat4
## a[1]   0.8867544 0.015997741  0.86215748  0.91227352 3450.006 0.9989706
## a[2]   1.0505851 0.010190587  1.03446373  1.06672717 2534.798 1.0000621
## b[1]   0.1330407 0.075296395  0.01564941  0.25222800 2595.367 0.9988701
## b[2]  -0.1413581 0.056017108 -0.22998829 -0.04893445 2559.966 0.9997827
## sigma  0.1115539 0.005802838  0.10263806  0.12113434 2527.737 0.9998120
pairs( m9.1 )
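# Sketch of re-fitting at several warmup lengths while keeping 500 post-warmup samples per
# chain in every case (the warmup values below are arbitrary choices; not evaluated here):
warmups <- c( 10 , 100 , 200 , 500 , 1000 )
n_eff_by_warmup <- sapply( warmups , function(w) {
    fit <- ulam(
        alist(
            log_gdp_std ~ dnorm( mu , sigma ) ,
            mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
            a[cid] ~ dnorm( 1 , 0.1 ) ,
            b[cid] ~ dnorm( 0 , 0.3 ) ,
            sigma ~ dexp( 1 )
        ) , data=dat_slim , chains=4 , cores=4 ,
        warmup=w , iter=w+500 )   # always 500 sampling iterations per chain
    precis( fit , depth=2 )$n_eff
} )
colnames(n_eff_by_warmup) <- warmups
n_eff_by_warmup   # n_eff for a[1], a[2], b[1], b[2], sigma at each warmup length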

# For this simple, well-behaved model very little warmup is needed: a few hundred warmup iterations (and often far fewer) already give n_eff values comparable to the defaults, so the default of 500 warmup iterations is more than enough.

9H1. Run the model below and then inspect the posterior distribution and explain what it is accomplishing.

mp <- ulam(
    alist(
        a ~ dnorm( 0 , 1 ) ,
        b ~ dcauchy( 0 , 1 )
    ) , data=list(y=1) , chains=1 )
## 
## SAMPLING FOR MODEL '3bd3f4d287e9cccab124308e5415245c' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.1e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.034526 seconds (Warm-up)
## Chain 1:                0.028367 seconds (Sampling)
## Chain 1:                0.062893 seconds (Total)
## Chain 1:
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
traceplot(mp)

Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?

# There is no likelihood in this model (y never appears in it), so sampling the "posterior" just means sampling the two priors. The trace for a looks like draws from Normal(0,1): it stays within roughly -3 to 3 and mixes well.

# The trace for b comes from Cauchy(0,1), a distribution with such heavy tails that it has no expected value. Its trace therefore shows occasional extreme spikes far from zero, and there is no stable running mean for the chain to settle around, which is why the b trace looks so much wilder than the a trace.
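# To compare the samples for a and b directly, a sketch (not evaluated here):
precis( mp )
post <- extract.samples( mp )
range( post$a )   # stays within a few units of zero
range( post$b )   # occasionally reaches extreme values, reflecting the Cauchy's heavy tails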