Chapter 9 - Markov Chain Monte Carlo

This chapter has been an informal introduction to Markov chain Monte Carlo (MCMC) estimation. The goal has been to introduce the purpose and approach of MCMC algorithms. The major algorithms introduced were the Metropolis, Gibbs sampling, and Hamiltonian Monte Carlo algorithms. Each has its advantages and disadvantages. The ulam function in the rethinking package was introduced. It uses the Stan (mc-stan.org) Hamiltonian Monte Carlo engine to fit models as they are defined in this book. General advice about diagnosing poor MCMC fits was illustrated with a couple of pathological examples.

Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes/answers the question or activity requested. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

9E1. Which of the following is a requirement of the simple Metropolis algorithm?

  1. The parameters must be discrete.
  2. The likelihood function must be Gaussian.
  3. The proposal distribution must be symmetric.
# 3. The proposal distribution must be symmetric.
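For intuition, here is a minimal Metropolis sketch (my addition, targeting a standard Normal): with a symmetric proposal, the acceptance rule reduces to a simple ratio of target densities, which is exactly the simplification that requirement 3 buys.

metropolis <- function( n_samples=1e4 , step=0.5 ) {
  x <- numeric( n_samples )
  for ( i in 2:n_samples ) {
    proposal <- rnorm( 1 , mean=x[i-1] , sd=step )   # symmetric proposal
    # with a symmetric proposal, accept with probability p(x')/p(x)
    log_r <- dnorm( proposal , log=TRUE ) - dnorm( x[i-1] , log=TRUE )
    x[i] <- if ( log( runif( 1 ) ) < log_r ) proposal else x[i-1]
  }
  x
}
hist( metropolis() )   # approximates the standard Normal target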

9E2. Gibbs sampling is more efficient than the Metropolis algorithm. How does it achieve this extra efficiency? Are there any limitations to the Gibbs sampling strategy?

# Gibbs sampling gains its efficiency from conjugate pairs of priors and likelihoods: conjugacy lets it compute the conditional distribution of each parameter in closed form, so it can make adaptive proposals that are always accepted. The downsides are that conjugate priors may not be good or scientifically valid choices, and that Gibbs sampling becomes quite inefficient for complex models with hundreds or more (often highly correlated) parameters.

9E3. Which sort of parameters can Hamiltonian Monte Carlo not handle? Can you explain why?

# Hamiltonian Monte Carlo works by adding the concept of momentum: it simulates a particle gliding over the posterior surface, which requires computing the gradient of the log-posterior at every step. That gradient only exists on a continuous, smooth parameter space, so HMC cannot handle discrete parameters by construction.

9E4. Explain the difference between the effective number of samples, n_eff as calculated by Stan, and the actual number of samples.

# The effective number of samples, n_eff, estimates how many independent samples the chain is equivalent to. Ideal samples are entirely uncorrelated, but Markov chains are autocorrelated, so sequential samples are not independent of each other; n_eff is therefore usually smaller than the actual number of samples (though it can exceed it when samples are anti-correlated).
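A quick illustration (my addition; assumes the coda package is installed): a strongly autocorrelated series of 1000 draws carries far fewer than 1000 effective samples.

library(coda)
x_indep <- rnorm( 1000 )                                       # independent draws
x_auto  <- as.numeric( arima.sim( list( ar=0.9 ) , n=1000 ) )  # autocorrelated draws
effectiveSize( x_indep )   # close to the actual 1000
effectiveSize( x_auto )    # far smaller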

9E5. Which value should Rhat approach, when a chain is sampling the posterior distribution correctly?

# Rhat should approach 1 when a chain is sampling the posterior distribution correctly.

# Classical guidance (gelmanBayesianDataAnalysis2014) treats chains with Rhat below about 1.1 as acceptable, while the rank-normalized formulation of Rhat used by newer versions of Stan comes with a stricter recommended threshold of about 1.01 (vehtariRanknormalizationFoldingLocalization2020).

9E6. Sketch a good trace plot for a Markov chain, one that is effectively sampling from the posterior distribution. What is good about its shape? Then sketch a trace plot for a malfunctioning Markov chain. What about its shape indicates malfunction?

library(rethinking)
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse( dd$cont_africa == 1 , 1 , 2)

datalist <- list(
                log_gdp_std = dd$log_gdp_std,
                rugged_std = dd$rugged_std,
                cid = as.integer( dd$cid ))


# NB: the outcome name below (logGDP_std) does not match log_gdp_std in
# datalist, so ulam treats it as an unknown parameter; the pathological
# warnings, precis table, and trace plot that follow come from this
# misspecification, which serves as the malfunctioning-chain example.
m9e6a <- ulam(
  alist(
    logGDP_std ~ dnorm( mu , sigma ),
    mu <- a[cid] + b[cid] * ( rugged_std - 0.215 ),
    a[cid] ~ dnorm( 1 , 0.1 ),
    b[cid] ~ dnorm( 0 , 0.3 ),
    sigma ~ dunif( 0 , 1 )
  ), data=datalist , chains=4 , cores=4
)
## Warning: There were 5 divergent transitions after warmup. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## to find out why this is a problem and how to eliminate them.
## Warning: There were 940 transitions after warmup that exceeded the maximum treedepth. Increase max_treedepth above 10. See
## http://mc-stan.org/misc/warnings.html#maximum-treedepth-exceeded
## Warning: There were 4 chains where the estimated Bayesian Fraction of Missing Information was low. See
## http://mc-stan.org/misc/warnings.html#bfmi-low
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: The largest R-hat is 4.08, indicating chains have not mixed.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#r-hat
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
precis(m9e6a , depth = 2)
##                    mean           sd          5.5%        94.5%       n_eff
## logGDP_std 1.045572e+00 4.987268e-02  9.929330e-01 1.112024e+00    2.002065
## a[1]       1.045572e+00 4.987268e-02  9.929333e-01 1.112024e+00    2.002065
## a[2]       1.045572e+00 4.987268e-02  9.929331e-01 1.112024e+00    2.002065
## b[1]       4.792681e-10 9.961636e-07 -1.587976e-06 1.571823e-06 1321.722321
## b[2]       6.251722e-09 6.984263e-07 -1.126781e-06 1.090220e-06 1967.428474
## sigma      1.389518e-06 2.984851e-07  9.888418e-07 1.998760e-06    8.400661
##                  Rhat4
## logGDP_std 309.7188298
## a[1]       309.7081250
## a[2]       309.7231268
## b[1]         0.9997705
## b[2]         0.9999278
## sigma        2.1702670
traceplot(m9e6a)
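The Rhat and n_eff values above already show the chains never converge to a common distribution; the trace plot correspondingly shows chains wandering at different levels rather than overlapping. For contrast, here is a minimal sketch of a healthy fit (the name m9e6a_good is my addition), with the outcome corrected to match the data list. Its trace plot should show stationarity (each chain staying in the same high-probability region), good mixing (rapid zig-zagging rather than slow drift), and convergence (all four chains overlapping around the same values).

m9e6a_good <- ulam(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ),   # outcome now matches datalist
    mu <- a[cid] + b[cid] * ( rugged_std - 0.215 ),
    a[cid] ~ dnorm( 1 , 0.1 ),
    b[cid] ~ dnorm( 0 , 0.3 ),
    sigma ~ dunif( 0 , 1 )
  ), data=datalist , chains=4 , cores=4
)
traceplot( m9e6a_good )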

9E7. Repeat the problem above, but now for a trace rank plot.

# deliberately malfunctioning chain: nearly flat priors and only two observations
m9e6b <- ulam(
  alist(
    y ~ dnorm( mu , sigma ),
    mu <- alpha,
    alpha ~ dnorm( 0 , 1000 ),   # extremely flat prior on the mean
    sigma ~ dexp( 0.0001 )       # extremely flat prior on the spread
  ), data=list( y=c(-1,1) ) , chains=4 , cores=4
)
## Warning: There were 647 divergent transitions after warmup. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## to find out why this is a problem and how to eliminate them.
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: The largest R-hat is 1.62, indicating chains have not mixed.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#r-hat
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
# trace rank plot
trankplot(m9e6b)
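# In a healthy trace rank plot the rank histograms of all chains overlap and
# look roughly uniform, so no chain systematically sits above or below the
# others. For this malfunctioning model the histograms separate, with chains
# holding the highest or lowest ranks for long stretches -- the signature of
# poor mixing, consistent with the high Rhat warning above.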

9M1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Visualize the priors. Use ulam to estimate the posterior. Visualize the posteriors for both models. Does the different prior have any detectible influence on the posterior distribution of sigma? Why or why not?

data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]

dd$log_gdp_std <- dd$log_gdp/ mean(dd$log_gdp)
dd$rugged_std<- dd$rugged/max(dd$rugged)

dd$cid<-ifelse(dd$cont_africa==1,1,2)

m9.1 <- quap(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]* (rugged_std-0.215) ,
    a[cid] ~ dnorm(1,0.1),
    b[cid] ~ dnorm(0,0.3),
    sigma ~ dexp(1)
  ) , 
  data=dd)

precis(m9.1 , depth=2)
##             mean          sd        5.5%       94.5%
## a[1]   0.8865630 0.015675200  0.86151104  0.91161504
## a[2]   1.0505698 0.009936289  1.03468972  1.06644994
## b[1]   0.1325035 0.074202195  0.01391407  0.25109295
## b[2]  -0.1425825 0.054747687 -0.23007988 -0.05508513
## sigma  0.1094906 0.005934819  0.10000559  0.11897557
pairs(m9.1)

m9.1_unif <- quap(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]* (rugged_std-0.215) ,
    a[cid] ~ dnorm(1,0.1),
    b[cid] ~ dnorm(0,0.3),
    sigma ~ dunif(0,1)
  ) , 
  data=dd)

precis(m9.1_unif , depth=2)
pairs(m9.1_unif)
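The question also asks to visualize the priors. A minimal sketch (my addition) overlaying the two priors for sigma:

# overlay the two priors for sigma
curve( dexp( x , 1 ) , from=0 , to=2 , xlab="sigma" , ylab="prior density" )   # dexp(1)
curve( dunif( x , 0 , 1 ) , from=0 , to=2 , add=TRUE , lty=2 )                 # dunif(0,1)

# The two posteriors are essentially indistinguishable. With 170 countries the
# likelihood dominates the prior, and both priors place ample mass around the
# posterior mode of sigma (about 0.11), so the change of prior has no
# detectable influence.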

9M2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?

m9.3_exp <- quap(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid] * ( rugged_std - 0.215 ) ,
    a[cid] ~ dnorm( 1 , 0.1 ),
    b[cid] ~ dexp( 0.3 ),   # the changed prior: restricts slopes to be positive
    sigma ~ dexp( 1 )
  ) ,
  data=dd)

precis(m9.3_exp , depth=2)
pairs(m9.3_exp)
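# dexp(0.3) puts all of its prior mass on positive slopes. b[1] (Africa),
# which the data already pull positive, changes little. But b[2] (non-Africa),
# which the data pull negative, can no longer go below zero: its posterior
# piles up just above zero, and we should expect slight compensating shifts in
# a[2] and sigma as the model absorbs the misfit.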

9M3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?

dat_slim <- list(
                log_gdp_std = dd$log_gdp_std,
                rugged_std = dd$rugged_std,
                cid = as.integer( dd$cid ))
str(dat_slim)
## List of 3
##  $ log_gdp_std: num [1:170] 0.88 0.965 1.166 1.104 0.915 ...
##  $ rugged_std : num [1:170] 0.138 0.553 0.124 0.125 0.433 ...
##  $ cid        : int [1:170] 1 2 2 2 2 2 2 2 2 1 ...
m9.4 <- ulam(
    alist(
        log_gdp_std ~ dnorm(mu , sigma) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215) ,
        a[cid] ~ dnorm(1 , 0.1) ,
        b[cid] ~ dnorm(0 , 0.3) ,
        sigma ~ dexp(1)
    ) , data=dat_slim , chains=4 , cores=4)

precis(m9.4, depth=2)
##             mean         sd         5.5%       94.5%    n_eff     Rhat4
## a[1]   0.8868070 0.01623864  0.861389999  0.91319566 2482.802 0.9993058
## a[2]   1.0505578 0.01044042  1.034126794  1.06754756 3925.536 0.9982869
## b[1]   0.1329622 0.07520785  0.009864328  0.25021518 2999.451 1.0006886
## b[2]  -0.1433880 0.05658701 -0.233898731 -0.05302751 2432.661 0.9985709
## sigma  0.1117447 0.00618156  0.102308378  0.12205040 2412.011 1.0003292
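The fit above uses only the default warmup. Here is a sketch of the comparison the question asks for (the warmup grid and the 500 post-warmup samples per chain are my choices; the n_eff values must be read off each run):

# sketch: hold post-warmup samples fixed at 500 per chain, vary warmup length
for ( w in c( 10 , 100 , 500 , 1000 ) ) {
  m <- ulam(
    alist(
      log_gdp_std ~ dnorm( mu , sigma ) ,
      mu <- a[cid] + b[cid] * ( rugged_std - 0.215 ) ,
      a[cid] ~ dnorm( 1 , 0.1 ) ,
      b[cid] ~ dnorm( 0 , 0.3 ) ,
      sigma ~ dexp( 1 )
    ) , data=dat_slim , chains=4 , cores=4 ,
    warmup=w , iter=w+500 )
  print( precis( m , depth=2 ) )   # compare the n_eff column across runs
}
# Typically n_eff is poor for very short warmup and stops improving once
# warmup reaches a few hundred iterations -- for a simple model like this,
# a few hundred warmup iterations are enough.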

9H1. Run the model below and then inspect the posterior distribution and explain what it is accomplishing.

# note there is no likelihood here: y is passed in but never used,
# so the chain simply samples from the joint prior of a and b
mp <- ulam(
 alist(
   a ~ dnorm( 0 , 1 ),
   b ~ dcauchy( 0 , 1 )
 ), data=list( y=1 ) , chains=1 )
## 
## SAMPLING FOR MODEL 'bcf56ee89f6cf2a4224a4139ff01c7d4' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.02 seconds (Warm-up)
## Chain 1:                0.041 seconds (Sampling)
## Chain 1:                0.061 seconds (Total)
## Chain 1:
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
traceplot(mp)

Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?
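# The trace for a is stationary and well mixed: a Normal(0,1) prior has finite
# mean and variance, so the chain stays in a tight band around zero. The trace
# for b shows occasional extreme spikes far from zero. This is not a
# malfunction: the Cauchy's tails are so heavy that it has no expected value,
# so a correctly sampling chain must occasionally make enormous excursions,
# and the running mean of the b samples never settles down.

A quick sketch (my addition) comparing running means makes the point:

post <- extract.samples( mp )
plot( cumsum( post$a ) / seq_along( post$a ) , type="l" ,
      xlab="sample" , ylab="running mean" )              # a: stabilizes quickly
lines( cumsum( post$b ) / seq_along( post$b ) , lty=2 )  # b: keeps jumping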