Chapter 9 - Markov Chain Monte Carlo

This chapter has been an informal introduction to Markov chain Monte Carlo (MCMC) estimation. The goal has been to introduce the purpose and approach of MCMC algorithms. The major algorithms introduced were the Metropolis, Gibbs sampling, and Hamiltonian Monte Carlo algorithms. Each has its advantages and disadvantages. The ulam function in the rethinking package was introduced. It uses the Stan (mc-stan.org) Hamiltonian Monte Carlo engine to fit models as they are defined in this book. General advice about diagnosing poor MCMC fits was illustrated with a couple of pathological examples.

Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes or answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your R Pubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

9E1. Which of the following is a requirement of the simple Metropolis algorithm?

  1. The parameters must be discrete.
  2. The likelihood function must be Gaussian.
  3. The proposal distribution must be symmetric.
#The simple Metropolis algorithm requires (3): the proposal distribution must be symmetric. Options (1) and (2) are not required; the parameters may be continuous and the likelihood can take any form.
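
#To make the symmetry requirement concrete, here is a minimal Metropolis sketch (an illustrative toy, not from the chapter; the function name is made up). Because the Gaussian proposal satisfies q(new|old) = q(old|new), the acceptance ratio reduces to the ratio of target densities.
metropolis <- function(n_samples = 5000, step = 1) {
  samples <- numeric(n_samples)
  current <- 0
  for (i in 1:n_samples) {
    proposal <- rnorm(1, mean = current, sd = step)  # symmetric proposal
    accept_prob <- min(1, dnorm(proposal) / dnorm(current))  # target ratio only
    if (runif(1) < accept_prob) current <- proposal
    samples[i] <- current
  }
  samples
}
hist(metropolis(), breaks = 50)  # should resemble the standard-normal target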

9E2. Gibbs sampling is more efficient than the Metropolis algorithm. How does it achieve this extra efficiency? Are there any limitations to the Gibbs sampling strategy?

#The Metropolis algorithm makes fixed, symmetric proposals, so many proposals land in low-probability regions and are rejected. Gibbs sampling gains efficiency through adaptive proposals: it draws each parameter from its full conditional distribution, given the current values of all the other parameters (typically exploiting conjugate prior/likelihood pairs), so every proposal is accepted.

#Its limitations are that it requires conjugate priors, which we may not want to use, and that in models with many parameters or strongly correlated parameters it takes tiny steps and degenerates toward an inefficient random walk, getting stuck in small regions of the posterior.
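
#A hedged sketch (not from the chapter; names made up) of Gibbs sampling for a standard bivariate normal with correlation rho. Each parameter is drawn from its full conditional given the other, and every draw is accepted.
gibbs <- function(n = 5000, rho = 0.9) {
  out <- matrix(0, nrow = n, ncol = 2)
  x <- 0; y <- 0
  for (i in 1:n) {
    # full conditionals of a standard bivariate normal: x | y ~ N(rho*y, 1 - rho^2)
    x <- rnorm(1, rho * y, sqrt(1 - rho^2))
    y <- rnorm(1, rho * x, sqrt(1 - rho^2))
    out[i, ] <- c(x, y)
  }
  out
}
plot(gibbs(), pch = ".")  # with rho near 1 the steps shrink: the random-walk limitation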

9E3. Which sort of parameters can Hamiltonian Monte Carlo not handle? Can you explain why?

#Hamiltonian Monte Carlo cannot handle discrete parameters. It works by simulating a frictionless particle gliding over the posterior surface, and each step of that simulation requires the gradient of the log-posterior at the current position. Discrete parameters have no gradient, so there is no continuous surface to glide over.
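
#A minimal leapfrog-step sketch (an illustrative assumption, not Stan's actual implementation; names made up) showing where the gradient enters. The position update q + eps*p only makes sense for continuous q.
leapfrog <- function(q, p, grad_U, eps) {
  p <- p - eps / 2 * grad_U(q)   # half step for momentum, using the gradient
  q <- q + eps * p               # full step for position: requires continuous q
  p <- p - eps / 2 * grad_U(q)   # second half step for momentum
  list(q = q, p = p)
}
# Example with a standard-normal target, where U(q) = q^2/2 so grad_U(q) = q:
leapfrog(q = 1, p = 0.5, grad_U = function(q) q, eps = 0.1)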

9E4. Explain the difference between the effective number of samples, n_eff as calculated by Stan, and the actual number of samples.

#The actual number of samples counts every iteration of the Markov chain, but sequential iterations are autocorrelated rather than independent. n_eff as calculated by Stan estimates the equivalent number of independent samples for the purpose of estimating the posterior mean. It is usually smaller than the actual number of samples, although for chains with anti-correlated samples n_eff can exceed the actual number.
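
#A rough sketch of the idea behind n_eff (a simplified textbook estimator, not Stan's exact one; the function name is made up): autocorrelation inflates the variance of the chain's mean, so n_eff = N / (1 + 2 * sum of positive autocorrelations).
n_eff_approx <- function(chain, max_lag = 50) {
  rho <- acf(chain, lag.max = max_lag, plot = FALSE)$acf[-1]  # drop lag 0
  first_nonpos <- which(rho <= 0)[1]                          # crude truncation rule
  if (!is.na(first_nonpos)) rho <- rho[seq_len(first_nonpos - 1)]
  length(chain) / (1 + 2 * sum(rho))
}
n_eff_approx(arima.sim(list(ar = 0.9), n = 1000))  # well below 1000 for this autocorrelated toy chain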

9E5. Which value should Rhat approach, when a chain is sampling the posterior distribution correctly?

#Rhat should approach a value of 1 when a chain is sampling the posterior distribution correctly.
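
#A sketch of the idea behind Rhat (the classic Gelman-Rubin ratio, not Stan's exact rank-normalized version; the function name is made up): compare between-chain and within-chain variance. When all chains explore the same distribution, the ratio approaches 1.
rhat_approx <- function(chains) {          # chains: matrix with one column per chain
  n <- nrow(chains)
  B <- n * var(colMeans(chains))           # between-chain variance
  W <- mean(apply(chains, 2, var))         # within-chain variance
  sqrt(((n - 1) / n * W + B / n) / W)
}
rhat_approx(matrix(rnorm(4000), ncol = 4))  # well-mixed chains: close to 1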

9E6. Sketch a good trace plot for a Markov chain, one that is effectively sampling from the posterior distribution. What is good about its shape? Then sketch a trace plot for a malfunctioning Markov chain. What about its shape indicates malfunction?

#Terrain Ruggedness Data
library(rethinking)  # provides data(rugged), ulam, precis, traceplot, trankplot
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse( dd$cont_africa == 1,1,2)
dat_slim <- list(log_gdp_std=dd$log_gdp_std, rugged_std=dd$rugged_std, cid=as.integer(dd$cid))

m <- ulam (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.2) ,
    b[cid] ~ dnorm( 0,0.1) ,
    sigma ~ dexp( 1)
  ), 
  data = dat_slim, chains = 4, cores = 4
)

precis(m, depth = 2)
##              mean          sd        5.5%       94.5%    n_eff     Rhat4
## a[1]   0.88268266 0.015979767  0.85734056  0.90695014 2747.441 1.0004184
## a[2]   1.05052497 0.010187116  1.03447878  1.06711459 3275.379 0.9982897
## b[1]   0.08588875 0.064430822 -0.01474839  0.18543934 2924.167 0.9998947
## b[2]  -0.11199811 0.050526695 -0.19310979 -0.03125327 2409.410 0.9984955
## sigma  0.11191197 0.006393086  0.10240488  0.12236913 2304.917 0.9992175
traceplot(m)
#A good trace plot looks like a "hairy caterpillar": after warmup the chain is stationary (it stays centered on a stable mean), it mixes well (it zig-zags rapidly across the whole high-probability region), and all chains overlap in the same region, as they do here. A malfunctioning chain instead drifts, wanders between distinct regions, or gets stuck at extreme values, and separate chains fail to overlap.
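
#Since the rendered plots are not visible in this text, here is a toy sketch (simulated series, not MCMC output) of the two shapes: a stationary, rapidly mixing series versus a drifting random walk.
set.seed(9)
good <- rnorm(1000)           # stationary "hairy caterpillar"
bad  <- cumsum(rnorm(1000))   # drifting random walk with no stable mean
par(mfrow = c(2, 1))
plot(good, type = "l", main = "Good: stationary and well mixed")
plot(bad,  type = "l", main = "Bad: wandering, never converges")
par(mfrow = c(1, 1))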

9E7. Repeat the problem above, but now for a trace rank plot.

trankplot(m)
#In a healthy trace rank plot the rank histograms of all chains overlap and stay close to uniform, as they do here. In a malfunctioning chain, one chain's histogram spends long stretches above or below the others, meaning that chain is systematically sampling higher or lower values than the rest.

9M1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Use ulam to estimate the posterior. Does the different prior have any detectible influence on the posterior distribution of sigma? Why or why not?

data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse( dd$cont_africa == 1,1,2)


m2 <- quap (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.1) ,
    b[cid] ~ dnorm( 0,0.3) ,
    sigma ~ dexp( 1)
  ), 
  data = dd
)

precis(m2, depth = 2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865667 0.015674875  0.86151522  0.91161817
## a[2]   1.0505769 0.009936070  1.03469715  1.06645667
## b[1]   0.1324793 0.074200705  0.01389226  0.25106638
## b[2]  -0.1425685 0.054746552 -0.23006404 -0.05507291
## sigma  0.1094882 0.005934498  0.10000373  0.11897267
pairs(m2)

m3 <- quap (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.1) ,
    b[cid] ~ dnorm( 0,0.3) ,
    sigma ~ dunif( 0, 1) #Uniform prior
  ), 
  data = dd
)
precis(m3, depth = 2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865644 0.015680503  0.86150392  0.91162486
## a[2]   1.0505690 0.009939704  1.03468341  1.06645455
## b[1]   0.1324796 0.074226384  0.01385146  0.25110766
## b[2]  -0.1425747 0.054766068 -0.23010149 -0.05504798
## sigma  0.1095286 0.005939974  0.10003537  0.11902182
pairs(m3)

#Comparing the precis output and pair plots for m2 (dexp(1) prior) and m3 (dunif(0,1) prior), the posterior of sigma is essentially identical. With 170 countries the likelihood dominates the prior, and both priors are nearly flat over the narrow region where the posterior for sigma concentrates (around 0.11), so the change of prior has no detectable influence.
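
#The question asks for ulam rather than quap. A sketch of the ulam version with the uniform prior, reusing dat_slim from above (m3_ulam is a made-up name; output not shown here). The quap fits above suggest the posterior would be essentially the same.
m3_ulam <- ulam(
  alist(
    log_gdp_std ~ dnorm( mu, sigma ),
    mu <- a[cid] + b[cid]*( rugged_std - 0.215 ),
    a[cid] ~ dnorm( 1, 0.1 ),
    b[cid] ~ dnorm( 0, 0.3 ),
    sigma ~ dunif( 0, 1 )   # uniform prior
  ),
  data = dat_slim, chains = 4, cores = 4
)
# precis(m3_ulam, depth = 2)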

9M2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?

m4 <- quap (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.1) ,
    b[cid] ~ dnorm( 0,0.3) ,
    sigma ~ dexp( 0.3)   # exponential prior with rate 0.3 (applied to sigma here)
  ), 
  data = dd
)
precis(m4, depth = 2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865623 0.015679119  0.86150406  0.91162058
## a[2]   1.0505693 0.009938815  1.03468520  1.06645349
## b[1]   0.1325047 0.074220069  0.01388674  0.25112275
## b[2]  -0.1425777 0.054761281 -0.23009682 -0.05505862
## sigma  0.1095187 0.005938631  0.10002761  0.11900977
pairs(m4)

#As in the previous question, the posterior is essentially unchanged. The exponential(0.3) prior is not uniform, but it is nearly flat over the narrow region where the likelihood concentrates (sigma around 0.11), so the data overwhelm the prior. Note that this model places dexp(0.3) on sigma; the variant the question literally asks about, dexp(0.3) on b[cid], is sketched below.
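
#A sketch of the b[cid] ~ dexp(0.3) variant (m4_alt is a made-up name; not run here, and quap may need explicit start values since the exponential prior has positive-only support). Because an exponential prior puts zero probability on negative values, it would force both slopes to be positive and pile the posterior of b[2], which the data pull negative, up against zero.
m4_alt <- quap(
  alist(
    log_gdp_std ~ dnorm( mu, sigma ),
    mu <- a[cid] + b[cid]*( rugged_std - 0.215 ),
    a[cid] ~ dnorm( 1, 0.1 ),
    b[cid] ~ dexp( 0.3 ),   # positive-only prior on the slopes
    sigma ~ dexp( 1 )
  ),
  data = dd
)
# precis(m4_alt, depth = 2)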

9M3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?

m5 <- quap (
  alist(
    log_gdp_std ~ dnorm( mu,sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std-0.215),
    a[cid] ~ dnorm( 1,0.1) ,
    b[cid] ~ dnorm( 0,0.3) ,
    sigma ~ dexp(1) 
  ), 
  data = dd
)
precis(m5, depth = 2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865627 0.015674358  0.86151204  0.91161335
## a[2]   1.0505682 0.009935748  1.03468891  1.06644740
## b[1]   0.1324740 0.074198384  0.01389064  0.25105734
## b[2]  -0.1425604 0.054744792 -0.23005312 -0.05506762
## sigma  0.1094845 0.005934003  0.10000086  0.11896823
m5.1 <- map2stan(m5, data = dd, iter = 1000, warmup = 100)
## 
## SAMPLING FOR MODEL 'a1edb229874b4cc2b17ece51a55e91fe' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: WARNING: There aren't enough warmup iterations to fit the
## Chain 1:          three stages of adaptation as currently configured.
## Chain 1:          Reducing each adaptation stage to 15%/75%/10% of
## Chain 1:          the given number of warmup iterations:
## Chain 1:            init_buffer = 15
## Chain 1:            adapt_window = 75
## Chain 1:            term_buffer = 10
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 101 / 1000 [ 10%]  (Sampling)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Sampling)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Sampling)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Sampling)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.027 seconds (Warm-up)
## Chain 1:                0.254 seconds (Sampling)
## Chain 1:                0.281 seconds (Total)
## Chain 1:
## Computing WAIC
precis(m5.1)
## 4 vector or matrix parameters hidden. Use depth=2 to show them.
##            mean          sd      5.5%     94.5%    n_eff     Rhat4
## sigma 0.1117235 0.006250781 0.1024554 0.1228969 646.7697 0.9992129
m5.1 <- map2stan(m5, data = dd, iter = 1000, warmup = 600)
## 
## SAMPLING FOR MODEL 'c3feba1b48e303d0a1abccdb966e4762' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Warmup)
## Chain 1: Iteration: 601 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.101 seconds (Warm-up)
## Chain 1:                0.041 seconds (Sampling)
## Chain 1:                0.142 seconds (Total)
## Chain 1:
## Computing WAIC
precis(m5.1)
## 4 vector or matrix parameters hidden. Use depth=2 to show them.
##            mean          sd      5.5%     94.5%    n_eff    Rhat4
## sigma 0.1112842 0.005860598 0.1031833 0.1207757 543.7591 1.003432
m5.1 <- map2stan(m5, data = dd, iter = 1000, warmup = 1100)
## Warning in map2stan(m5, data = dd, iter = 1000, warmup = 1100): 'iter' less
## than or equal to 'warmup'. Setting 'iter' to sum of 'iter' and 'warmup' instead
## (2100).
## 
## SAMPLING FOR MODEL 'c87fa856e626bc5e2d4c6aa431cdc3c2' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 2100 [  0%]  (Warmup)
## Chain 1: Iteration:  210 / 2100 [ 10%]  (Warmup)
## Chain 1: Iteration:  420 / 2100 [ 20%]  (Warmup)
## Chain 1: Iteration:  630 / 2100 [ 30%]  (Warmup)
## Chain 1: Iteration:  840 / 2100 [ 40%]  (Warmup)
## Chain 1: Iteration: 1050 / 2100 [ 50%]  (Warmup)
## Chain 1: Iteration: 1101 / 2100 [ 52%]  (Sampling)
## Chain 1: Iteration: 1310 / 2100 [ 62%]  (Sampling)
## Chain 1: Iteration: 1520 / 2100 [ 72%]  (Sampling)
## Chain 1: Iteration: 1730 / 2100 [ 82%]  (Sampling)
## Chain 1: Iteration: 1940 / 2100 [ 92%]  (Sampling)
## Chain 1: Iteration: 2100 / 2100 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.165 seconds (Warm-up)
## Chain 1:                0.11 seconds (Sampling)
## Chain 1:                0.275 seconds (Total)
## Chain 1:
## Computing WAIC
precis(m5.1)
## 4 vector or matrix parameters hidden. Use depth=2 to show them.
##            mean          sd      5.5%     94.5%    n_eff    Rhat4
## sigma 0.1112325 0.006102667 0.1020499 0.1210259 1477.014 0.999276
#Note that with iter held at 1000, warmup trades off against sampling iterations: warmup = 100 (despite Stan's adaptation warning) left 900 sampling iterations and gave n_eff ~ 647; warmup = 600 left only 400 sampling iterations and gave n_eff ~ 544; and warmup = 1100 forced iter up to 2100, so 1000 sampling iterations gave n_eff ~ 1477. Efficiency per sampling iteration improves with longer warmup (0.72, 1.36, and 1.48 effective samples per iteration, respectively), but with diminishing returns: for this simple model a few hundred warmup iterations already appear to be enough. A sketch that holds the number of sampling iterations fixed, as the question asks, follows below.
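
#A sketch (not run here) for scanning warmup values while keeping exactly 1000 sampling iterations in every run, by setting iter = warmup + 1000:
warmups <- c(100, 300, 600, 1000)
for (w in warmups) {
  fit <- map2stan(m5, data = dd, iter = w + 1000, warmup = w)  # 1000 sampling draws each
  print(precis(fit))  # compare n_eff across warmup settings
}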

9H1. Run the model below and then inspect the posterior distribution and explain what it is accomplishing.

mp <- ulam(
 alist(
   a ~ dnorm(0,1),
   b ~ dcauchy(0,1)
 ), data=list(y=1) , chains=1 )
## 
## SAMPLING FOR MODEL 'bcf56ee89f6cf2a4224a4139ff01c7d4' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0.001 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 10 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.021 seconds (Warm-up)
## Chain 1:                0.04 seconds (Sampling)
## Chain 1:                0.061 seconds (Total)
## Chain 1:
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
precis(mp)
##        mean       sd      5.5%     94.5%     n_eff     Rhat4
## a 0.1313996 1.100448 -1.495779  1.915021  78.83027 1.0026565
## b 1.0538651 8.728177 -5.577899 10.888268 315.24901 0.9990766
traceplot(mp)
Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?

#This model has no likelihood: y appears in the data list but is never used, so the chain samples directly from the two priors. The trace for a looks like an ordinary normal chain, while the trace for b shows occasional extreme spikes. Those spikes are the Cauchy's heavy tails at work: because the Cauchy has no expected value, the chain keeps making enormous excursions and its sample mean never stabilizes. This poor exploration is what the low n_eff values and the bulk-ESS warning are flagging; note that even a's n_eff (about 79) suffers here, likely because HMC's single adapted step size is distorted by b's extreme scale.
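
#To see that the Cauchy has no expected value (a toy simulation, not from the chapter), compare running means: the normal's settles down, while the Cauchy's keeps jumping whenever an extreme draw lands, no matter how many samples accumulate.
set.seed(9)
n <- 10000
run_mean_norm   <- cumsum(rnorm(n))   / seq_len(n)
run_mean_cauchy <- cumsum(rcauchy(n)) / seq_len(n)
par(mfrow = c(2, 1))
plot(run_mean_norm,   type = "l", main = "Normal: running mean converges")
plot(run_mean_cauchy, type = "l", main = "Cauchy: running mean never settles")
par(mfrow = c(1, 1))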