Chapter 9 - Markov Chain Monte Carlo

This chapter has been an informal introduction to Markov chain Monte Carlo (MCMC) estimation. The goal has been to introduce the purpose and approach of MCMC algorithms. The major algorithms introduced were the Metropolis, Gibbs sampling, and Hamiltonian Monte Carlo algorithms. Each has its advantages and disadvantages. The ulam function in the rethinking package was introduced. It uses the Stan (mc-stan.org) Hamiltonian Monte Carlo engine to fit models as they are defined in this book. General advice about diagnosing poor MCMC fits was illustrated through a couple of pathological examples.

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes/answers the question or activity requested. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

9E1. Which of the following is a requirement of the simple Metropolis algorithm?

  1. The parameters must be discrete.
  2. The likelihood function must be Gaussian.
  3. The proposal distribution must be symmetric.
# Only option 3 is a requirement: the simple Metropolis algorithm requires a
# symmetric proposal distribution, i.e. the probability of proposing B from A
# must equal the probability of proposing A from B. Parameters may be
# continuous, and the likelihood need not be Gaussian.
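
# For reference, the chapter's own minimal Metropolis implementation (the King
# Markov island-hopping example); note that the proposal, one island left or
# right with equal probability, is symmetric.
num_weeks <- 1e4
positions <- rep(0, num_weeks)
current <- 10
for (i in 1:num_weeks) {
  positions[i] <- current                           # record current island
  proposal <- current + sample(c(-1, 1), size = 1)  # symmetric proposal
  if (proposal < 1) proposal <- 10                  # loop around the archipelago
  if (proposal > 10) proposal <- 1
  prob_move <- proposal / current                   # ratio of island populations
  current <- ifelse(runif(1) < prob_move, proposal, current)
}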

9E2. Gibbs sampling is more efficient than the Metropolis algorithm. How does it achieve this extra efficiency? Are there any limitations to the Gibbs sampling strategy?

# Gibbs sampling is more efficient than the Metropolis algorithm because its
# proposals are adapted to the current parameter values: each parameter is
# drawn from its full conditional distribution, computed analytically from
# conjugate pairings of prior and likelihood, so no proposal is ever rejected.

# Its limitations: it requires conjugate priors, which we may not want to use,
# and it becomes inefficient in large models with many or highly correlated
# parameters, where the chain gets stuck in small regions of the posterior.
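
# A minimal Gibbs sketch (my own illustration, not from the chapter): sampling
# a standard bivariate normal with correlation rho by alternating draws from
# the exact full conditionals. Nothing is ever rejected.
rho <- 0.9
n_samp <- 1e4
x <- y <- numeric(n_samp)
for (i in 2:n_samp) {
  x[i] <- rnorm(1, mean = rho * y[i - 1], sd = sqrt(1 - rho^2))  # x | y
  y[i] <- rnorm(1, mean = rho * x[i],     sd = sqrt(1 - rho^2))  # y | x
}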

9E3. Which sort of parameters can Hamiltonian Monte Carlo not handle? Can you explain why?

# Hamiltonian Monte Carlo cannot handle discrete parameters. HMC works by
# simulating a particle that glides across the posterior surface, with its
# motion driven by the gradient of the log-posterior with respect to the
# parameters. A discrete parameter has no gradient, so there is no slope for
# the particle to follow; HMC therefore requires continuous parameters.
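
# A bare-bones sketch of HMC (my own illustration) for a single continuous
# parameter with a standard-normal posterior. Every leapfrog step calls
# grad_U, the gradient of the negative log-posterior; for a discrete parameter
# that gradient does not exist, which is exactly why HMC cannot handle it.
U <- function(q) 0.5 * q^2          # negative log-density of Normal(0, 1)
grad_U <- function(q) q             # its gradient
hmc_step <- function(q, eps = 0.1, L = 20) {
  p <- rnorm(1)                     # resample momentum
  q_new <- q; p_new <- p
  for (s in 1:L) {                  # leapfrog integration
    p_new <- p_new - eps / 2 * grad_U(q_new)
    q_new <- q_new + eps * p_new
    p_new <- p_new - eps / 2 * grad_U(q_new)
  }
  # accept or reject based on the change in total energy
  if (runif(1) < exp(U(q) - U(q_new) + 0.5 * p^2 - 0.5 * p_new^2)) q_new else q
}
q <- 0
draws <- replicate(5000, q <<- hmc_step(q))  # approximately Normal(0, 1)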

9E4. Explain the difference between the effective number of samples, n_eff as calculated by Stan, and the actual number of samples.

# The actual number of samples is the raw count of draws the Markov chain
# produced. Sequential MCMC samples are autocorrelated, so they carry less
# information about the posterior than the same number of independent samples
# would. n_eff, the effective number of samples, is Stan's estimate of the
# number of independent samples that would carry equivalent information: it
# discounts the raw count for autocorrelation, so n_eff is usually smaller
# than the actual number of samples (though anti-correlated chains can push
# it higher).
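
# A rough sketch of the idea behind n_eff (not Stan's exact estimator, which
# also splits chains): discount the raw sample size by autocorrelation,
# n_eff = N / (1 + 2 * sum of positive-lag autocorrelations).
ess_sketch <- function(chain, max_lag = 100) {
  N <- length(chain)
  rho <- acf(chain, lag.max = max_lag, plot = FALSE)$acf[-1]  # lags 1..max_lag
  cut <- which(rho < 0)[1]                  # truncate at first negative lag
  if (!is.na(cut)) rho <- rho[seq_len(cut - 1)]
  N / (1 + 2 * sum(rho))
}
chain <- as.numeric(arima.sim(list(ar = 0.9), n = 1000))  # autocorrelated chain
ess_sketch(chain)  # far fewer effective samples than the 1000 actual draws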

9E5. Which value should Rhat approach, when a chain is sampling the posterior distribution correctly?

# Rhat should approach 1.00 from above when a chain is sampling the posterior
# distribution correctly. Values noticeably above 1 (e.g. above 1.01) indicate
# that the chains have not converged to the same distribution.
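
# A simplified sketch of the Gelman-Rubin idea behind Rhat (Stan's version
# adds chain-splitting and rank-normalization): compare between-chain and
# within-chain variance; when all chains sample the same distribution, the
# ratio approaches 1.
rhat_sketch <- function(chains) {       # chains: matrix, one column per chain
  n <- nrow(chains)
  W <- mean(apply(chains, 2, var))      # within-chain variance
  B <- n * var(colMeans(chains))        # between-chain variance
  sqrt(((n - 1) / n * W + B / n) / W)
}
rhat_sketch(cbind(rnorm(1000), rnorm(1000)))     # ~ 1.00: chains agree
rhat_sketch(cbind(rnorm(1000), rnorm(1000, 5)))  # >> 1: chains disagree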

9E6. Sketch a good trace plot for a Markov chain, one that is effectively sampling from the posterior distribution. What is good about its shape? Then sketch a trace plot for a malfunctioning Markov chain. What about its shape indicates malfunction?

library(rethinking)

data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse( dd$cont_africa == 1,1,2)
dat_slim <- list(log_gdp_std=dd$log_gdp_std, rugged_std=dd$rugged_std, cid=as.integer(dd$cid))

m <- ulam (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.2) ,
    b[cid] ~ dnorm( 0,0.1) ,
    sigma ~ dexp( 1)
  ), 
  data = dat_slim, chains = 4, cores = 4
)

precis(m, depth = 2)
##              mean         sd        5.5%       94.5%    n_eff     Rhat4
## a[1]   0.88266604 0.01586046  0.85690719  0.90791144 2437.774 0.9992217
## a[2]   1.04999128 0.01008492  1.03411154  1.06553417 2724.763 0.9995914
## b[1]   0.08555427 0.06113689 -0.01086151  0.18318834 2035.501 1.0016811
## b[2]  -0.11362059 0.04934065 -0.19254990 -0.03368896 2978.361 0.9992264
## sigma  0.11192580 0.00644343  0.10258344  0.12257044 2335.885 0.9989960
traceplot(m)

# A good trace plot, like the one above, shows: (1) stationarity - each chain
# stays within the same high-probability region around a stable central value;
# (2) good mixing - the trace zig-zags rapidly rather than wandering slowly;
# and (3) convergence - all chains settle into and overlap in the same region.
# A malfunctioning chain instead drifts, makes sudden excursions to extreme
# values, or gets stuck for long stretches, and its chains end up in
# different regions.
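
# Rather than hand-sketching the malfunctioning case, the chapter's flat-prior
# pathological example produces a genuinely broken chain: its traces wander
# widely, spike to extreme values, and the chains do not overlap.
y <- c(-1, 1)
m_bad <- ulam(
  alist(
    y ~ dnorm( mu , sigma ) ,
    mu <- alpha ,
    alpha ~ dnorm( 0 , 1000 ) ,   # essentially flat prior
    sigma ~ dexp( 0.0001 )        # essentially flat prior
  ), data = list(y = y), chains = 3 )
traceplot(m_bad)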

9E7. Repeat the problem above, but now for a trace rank plot.

trankplot(m)

# A healthy trace rank (trank) plot, like this one, shows the rank histograms
# of all chains overlapping and staying close to uniform. Malfunction shows up
# as one chain's histogram sitting above or below the others for long spans,
# meaning that chain spends too much time at relatively high or low values.

9M1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Visualize the priors. Use ulam to estimate the posterior. Visualize the posteriors for both models. Does the different prior have any detectible influence on the posterior distribution of sigma? Why or why not?

data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse( dd$cont_africa == 1,1,2)
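
# The question asks to visualize the priors; comparing the two candidate
# priors for sigma, dexp(1) and dunif(0,1), with base R curves:
curve(dexp(x, rate = 1), from = 0, to = 2, ylim = c(0, 1.2),
      xlab = "sigma", ylab = "prior density")
curve(dunif(x, 0, 1), from = 0, to = 2, add = TRUE, lty = 2)
legend("topright", legend = c("dexp(1)", "dunif(0,1)"), lty = c(1, 2))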


m2 <- quap (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.1) ,
    b[cid] ~ dnorm( 0,0.3) ,
    sigma ~ dexp( 1)
  ), 
  data = dd
)

precis(m2, depth = 2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865633 0.015675227  0.86151130  0.91161538
## a[2]   1.0505696 0.009936306  1.03468943  1.06644970
## b[1]   0.1324936 0.074202325  0.01390394  0.25108323
## b[2]  -0.1425664 0.054747795 -0.23006400 -0.05506889
## sigma  0.1094908 0.005934846  0.10000574  0.11897580
pairs(m2)

m3 <- quap (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.1) ,
    b[cid] ~ dnorm( 0,0.3) ,
    sigma ~ dunif( 0, 1) #Uniform prior
  ), 
  data = dd
)
precis(m3, depth = 2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865613 0.015680442  0.86150093  0.91162168
## a[2]   1.0505699 0.009939668  1.03468438  1.06645540
## b[1]   0.1325251 0.074226096  0.01389742  0.25115269
## b[2]  -0.1425289 0.054765927 -0.23005540 -0.05500234
## sigma  0.1095282 0.005939921  0.10003506  0.11902134
pairs(m3)

# Comparing models m2 and m3, there is no apparent difference: the posterior
# for sigma (and for every other parameter) is essentially identical under the
# dexp(1) and dunif(0,1) priors.

# The reason is that the data carry far more information than either prior:
# with 170 countries, the likelihood concentrates sharply around sigma ~ 0.11,
# a region where both priors are relatively flat, so the prior has no
# detectable influence. (quap is used here for speed; as the chapter shows for
# this model, the quadratic approximation matches the ulam posterior almost
# exactly, so the prior comparison is unaffected.)
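
# To compare the sigma posteriors directly (rather than via the full pairs
# plots), overlay their densities; extract.samples also works on quap fits:
post_exp <- extract.samples(m2)
post_unif <- extract.samples(m3)
dens(post_exp$sigma, xlab = "sigma")
dens(post_unif$sigma, add = TRUE, lty = 2)  # the two curves nearly coincide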

9M2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?

m4 <- ulam (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.1) ,
    b[cid] ~ dexp( 0.3) , # the changed prior: exponential, so b must be positive
    sigma ~ dexp( 1)
  ), 
  data = dat_slim, chains = 4, cores = 4
)
precis(m4, depth = 2)
pairs(m4)

# The exponential prior puts all of its mass on positive values, so both
# slopes are now constrained to be non-negative. The posterior for b[1]
# (Africa), which was already positive, changes little. The posterior for b[2]
# (non-Africa), which the data pull toward roughly -0.14, is forced against
# the zero boundary and piles up just above zero: the prior forbids the sign
# the likelihood prefers, so that posterior is dominated by the prior's
# constraint.
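
# For reference, the shape of the dexp(0.3) prior: all of its mass sits above
# zero, which is what pins b[2] to the boundary.
curve(dexp(x, rate = 0.3), from = 0, to = 10,
      xlab = "b", ylab = "prior density")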

9M3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?

m5 <- quap (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.1) ,
    b[cid] ~ dnorm( 0,0.3) ,
    sigma ~ dexp(1) 
  ), 
  data = dd
)
precis(m5, depth = 2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865630 0.015675492  0.86151053  0.91161546
## a[2]   1.0505697 0.009936477  1.03468929  1.06645011
## b[1]   0.1325050 0.074203525  0.01391341  0.25109654
## b[2]  -0.1425725 0.054748709 -0.23007149 -0.05507346
## sigma  0.1094927 0.005935103  0.10000723  0.11897811
m5.1 <- map2stan(m5, data = dd, iter = 1000, warmup = 100)
## 
## SAMPLING FOR MODEL '647997796da0d01420d339b929805f9f' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 2.9e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.29 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: WARNING: There aren't enough warmup iterations to fit the
## Chain 1:          three stages of adaptation as currently configured.
## Chain 1:          Reducing each adaptation stage to 15%/75%/10% of
## Chain 1:          the given number of warmup iterations:
## Chain 1:            init_buffer = 15
## Chain 1:            adapt_window = 75
## Chain 1:            term_buffer = 10
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 101 / 1000 [ 10%]  (Sampling)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Sampling)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Sampling)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Sampling)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.021628 seconds (Warm-up)
## Chain 1:                0.185416 seconds (Sampling)
## Chain 1:                0.207044 seconds (Total)
## Chain 1:
precis(m5.1)
##           mean         sd     5.5%     94.5%    n_eff     Rhat4
## sigma 0.111648 0.00652421 0.101471 0.1228654 540.8287 0.9988931
m5.1 <- map2stan(m5, data = dd, iter = 1000, warmup = 600)
## 
## SAMPLING FOR MODEL '9e8d2f201e2ae05cabcdcf2b4d1cde8f' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 4.3e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.43 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Warmup)
## Chain 1: Iteration: 601 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.076589 seconds (Warm-up)
## Chain 1:                0.038463 seconds (Sampling)
## Chain 1:                0.115052 seconds (Total)
## Chain 1:
precis(m5.1)
##            mean         sd      5.5%    94.5%    n_eff    Rhat4
## sigma 0.1116976 0.00682089 0.1011381 0.123071 618.6459 0.998033
m5.1 <- map2stan(m5, data = dd, iter = 1000, warmup = 1100)
## Warning in map2stan(m5, data = dd, iter = 1000, warmup = 1100): 'iter' less
## than or equal to 'warmup'. Setting 'iter' to sum of 'iter' and 'warmup' instead
## (2100).
## 
## SAMPLING FOR MODEL '2715fc237221b62c10a6be9b9d80c57e' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 5.5e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.55 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 2100 [  0%]  (Warmup)
## Chain 1: Iteration:  210 / 2100 [ 10%]  (Warmup)
## Chain 1: Iteration:  420 / 2100 [ 20%]  (Warmup)
## Chain 1: Iteration:  630 / 2100 [ 30%]  (Warmup)
## Chain 1: Iteration:  840 / 2100 [ 40%]  (Warmup)
## Chain 1: Iteration: 1050 / 2100 [ 50%]  (Warmup)
## Chain 1: Iteration: 1101 / 2100 [ 52%]  (Sampling)
## Chain 1: Iteration: 1310 / 2100 [ 62%]  (Sampling)
## Chain 1: Iteration: 1520 / 2100 [ 72%]  (Sampling)
## Chain 1: Iteration: 1730 / 2100 [ 82%]  (Sampling)
## Chain 1: Iteration: 1940 / 2100 [ 92%]  (Sampling)
## Chain 1: Iteration: 2100 / 2100 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.115963 seconds (Warm-up)
## Chain 1:                0.083054 seconds (Sampling)
## Chain 1:                0.199017 seconds (Total)
## Chain 1:
precis(m5.1)
##            mean          sd      5.5%     94.5%    n_eff     Rhat4
## sigma 0.1119765 0.006144094 0.1028695 0.1224885 1268.072 0.9997252
# Comparing n_eff per sampling iteration: warmup = 100 left 900 sampling
# iterations and n_eff ~ 541 (about 0.6 effective samples per draw);
# warmup = 600 left 400 sampling iterations and n_eff ~ 619 (about 1.5 per
# draw); warmup = 1100 with 1000 sampling iterations gave n_eff ~ 1268 (about
# 1.3 per draw). For this simple model a few hundred warmup iterations are
# enough: efficiency rises sharply up to roughly 500-600 warmup iterations and
# shows diminishing returns after that, so additional iterations are better
# spent on sampling than on warmup.
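
# A sketch that automates the comparison (assuming the precis result can be
# indexed like a data frame): fix the number of sampling iterations at 1000
# (map2stan's iter includes warmup) and record n_eff for sigma at each warmup.
warmups <- c(5, 10, 100, 500, 1000)
n_eff_sigma <- sapply(warmups, function(w) {
  fit <- map2stan(m5, data = dd, iter = 1000 + w, warmup = w)
  precis(fit)["sigma", "n_eff"]
})
cbind(warmup = warmups, n_eff = n_eff_sigma)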

9H1. Run the model below and then inspect the posterior distribution and explain what it is accomplishing.

mp <- ulam(
 alist(
   a ~ dnorm(0,1),
   b ~ dcauchy(0,1)
 ), data=list(y=1) , chains=1 )
## 
## SAMPLING FOR MODEL '3bd3f4d287e9cccab124308e5415245c' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.1e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.010294 seconds (Warm-up)
## Chain 1:                0.013969 seconds (Sampling)
## Chain 1:                0.024263 seconds (Total)
## Chain 1:
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
precis(mp)
##          mean         sd      5.5%    94.5%    n_eff     Rhat4
## a  0.09158845  0.9677736 -1.328284 1.772684 85.78504 0.9984589
## b -0.85252518 12.4736426 -6.234665 4.163071 80.87841 1.0119838
traceplot(mp)

Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?

# The model has no likelihood, so the posterior it samples is just the joint
# prior: a ~ Normal(0, 1) and b ~ Cauchy(0, 1) (the y = 1 in the data is never
# used). The trace for a looks healthy - stationary, well mixed, staying
# within a few units of zero - because the Gaussian has thin tails and finite
# variance. The trace for b instead shows occasional extreme spikes far from
# zero: the Cauchy's tails are so heavy that it has no expected value (and no
# finite variance), so the chain keeps making rare, enormous excursions and
# its sample mean never converges, no matter how long the chain runs. This is
# also why n_eff is so low and the precis interval for b is so wide.
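
# A quick illustration of "no expected value": the running mean of Gaussian
# draws settles down, while the running mean of Cauchy draws keeps jumping
# whenever an extreme value arrives, just like the spikes in b's trace.
set.seed(9)
n_draws <- 1e4
plot(cumsum(rnorm(n_draws)) / (1:n_draws), type = "l", ylim = c(-3, 3),
     xlab = "number of samples", ylab = "running mean")
lines(cumsum(rcauchy(n_draws)) / (1:n_draws), col = "red")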