This chapter has been an informal introduction to Markov chain Monte Carlo (MCMC) estimation. The goal has been to introduce the purpose and approach of MCMC algorithms. The major algorithms introduced were the Metropolis, Gibbs sampling, and Hamiltonian Monte Carlo algorithms. Each has its advantages and disadvantages. The ulam function in the rethinking package was introduced. It uses the Stan (mc-stan.org) Hamiltonian Monte Carlo engine to fit models as they are defined in this book. General advice about diagnosing poor MCMC fits was illustrated with a couple of pathological examples.
Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
9E1. Which of the following is a requirement of the simple Metropolis algorithm?
# 3. The proposal distribution must be symmetric.
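# To make the symmetry requirement concrete, here is a minimal, hypothetical
# Metropolis sketch (not from the chapter) targeting a standard Normal.
# Because the Normal proposal is symmetric, the acceptance ratio reduces to
# the ratio of target densities.
set.seed(9)
n_samples <- 1e4
samples <- numeric(n_samples)
current <- 0
for (i in 1:n_samples) {
  proposal <- current + rnorm(1, 0, 0.5)  # symmetric: q(a -> b) = q(b -> a)
  r <- dnorm(proposal) / dnorm(current)   # valid only because q is symmetric
  if (runif(1) < r) current <- proposal
  samples[i] <- current
}
mean(samples)  # should be near 0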
9E2. Gibbs sampling is more efficient than the Metropolis algorithm. How does it achieve this extra efficiency? Are there any limitations to the Gibbs sampling strategy?
# Gibbs sampling gains its efficiency through adaptive proposals: it pairs conjugate priors with the likelihood, so the conditional posterior of each parameter has a known analytical form and can be sampled from directly, with no rejected proposals.
# There are two main limitations. First, the conjugate priors it requires may not be good or scientifically defensible priors. Second, Gibbs sampling becomes very inefficient for complex models with hundreds or thousands of parameters, especially when the parameters are highly correlated.
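# A sketch of how conjugacy drives Gibbs sampling (assuming flat reference
# priors, so this is illustrative rather than the chapter's example): each
# parameter is drawn from its exact conditional posterior in turn.
set.seed(9)
y <- rnorm(50, mean = 3, sd = 2)
n <- length(y); ybar <- mean(y)
mu <- 0; tau <- 1  # initial values; tau is the precision 1/sigma^2
draws <- matrix(NA, 1000, 2, dimnames = list(NULL, c("mu", "tau")))
for (i in 1:1000) {
  mu  <- rnorm(1, ybar, sqrt(1 / (n * tau)))                   # conjugate Normal conditional
  tau <- rgamma(1, shape = n / 2, rate = sum((y - mu)^2) / 2)  # conjugate Gamma conditional
  draws[i, ] <- c(mu, tau)
}
colMeans(draws)  # mu near 3, tau near 1/4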
9E3. Which sort of parameters can Hamiltonian Monte Carlo not handle? Can you explain why?
# Hamiltonian Monte Carlo cannot handle discrete parameters. It works by simulating a frictionless particle gliding across the posterior surface, which requires computing gradients of the log-posterior; gradients are undefined for discrete parameters, so there is no continuous surface for the simulation to move along.
9E4. Explain the difference between the effective number of samples, n_eff as calculated by Stan, and the actual number of samples.
# The actual number of samples is simply the number of draws the Markov chain produced. Sequential draws are typically autocorrelated, so they carry less information than the same number of independent draws.
# n_eff is Stan's estimate of the effective number of samples: the number of independent draws from the posterior that would estimate the posterior mean with the same precision as the chain actually achieved. It is usually smaller than the actual number of samples, although it can exceed it when the samples are anti-correlated.
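# A quick demonstration (assuming the coda package is available): an
# autocorrelated chain of 2000 draws has far fewer effective samples.
library(coda)
set.seed(9)
ar_chain <- as.numeric(arima.sim(list(ar = 0.9), n = 2000))
length(ar_chain)         # actual number of samples: 2000
effectiveSize(ar_chain)  # effective sample size: roughly 100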
9E5. Which value should Rhat approach, when a chain is sampling the posterior distribution correctly?
# Rhat should approach 1.00 when a chain is sampling the posterior distribution correctly; values noticeably above 1 indicate that the chains have not converged.
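# A sketch of the idea behind Rhat, using the classic Gelman-Rubin form
# (Stan's split-Rhat differs in details): it compares between-chain and
# within-chain variance, and approaches 1 when the chains agree.
set.seed(9)
chains <- matrix(rnorm(2000), nrow = 1000, ncol = 2)  # two well-mixed chains
n <- nrow(chains)
W <- mean(apply(chains, 2, var))   # within-chain variance
B <- n * var(colMeans(chains))     # between-chain variance
var_hat <- (n - 1) / n * W + B / n
sqrt(var_hat / W)                  # close to 1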
9E6. Sketch a good trace plot for a Markov chain, one that is effectively sampling from the posterior distribution. What is good about its shape? Then sketch a trace plot for a malfunctioning Markov chain. What about its shape indicates malfunction?
library(rethinking)
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[complete.cases(d$rgdppc_2000), ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse(dd$cont_africa == 1, 1, 2)
m9.6 <- quap(alist(
log_gdp_std ~ dnorm(mu, sigma),
mu <- a[cid] + b[cid]*(rugged_std - 0.215),
a[cid] ~ dnorm(1, 0.1),
b[cid] ~ dnorm(0, 0.3),
sigma ~ dexp(1)), data = dd)
precis(m9.6, depth = 2)
## mean sd 5.5% 94.5%
## a[1] 0.8865629 0.015675594 0.86151025 0.91161550
## a[2] 1.0505666 0.009936545 1.03468612 1.06644715
## b[1] 0.1325227 0.074203978 0.01393042 0.25111500
## b[2] -0.1425738 0.054749064 -0.23007338 -0.05507422
## sigma 0.1094934 0.005935203 0.10000781 0.11897901
dat_slim <- list(
log_gdp_std = dd$log_gdp_std,
rugged_std = dd$rugged_std,
cid = as.integer(dd$cid))
str(dat_slim)
## List of 3
## $ log_gdp_std: num [1:170] 0.88 0.965 1.166 1.104 0.915 ...
## $ rugged_std : num [1:170] 0.138 0.553 0.124 0.125 0.433 ...
## $ cid : int [1:170] 1 2 2 2 2 2 2 2 2 1 ...
m9.1 <- ulam(alist(
log_gdp_std ~ dnorm(mu, sigma) ,
mu <- a[cid] + b[cid]*(rugged_std - 0.215) ,
a[cid] ~ dnorm(1, 0.1),
b[cid] ~ dnorm(0, 0.3),
sigma ~ dexp(1)),
data = dat_slim, chains = 4, cores = 4)
show(m9.1)
## Hamiltonian Monte Carlo approximation
## 2000 samples from 4 chains
##
## Sampling durations (seconds):
## warmup sample total
## chain:1 0.09 0.05 0.13
## chain:2 0.05 0.04 0.09
## chain:3 0.07 0.05 0.12
## chain:4 0.05 0.03 0.08
##
## Formula:
## log_gdp_std ~ dnorm(mu, sigma)
## mu <- a[cid] + b[cid] * (rugged_std - 0.215)
## a[cid] ~ dnorm(1, 0.1)
## b[cid] ~ dnorm(0, 0.3)
## sigma ~ dexp(1)
traceplot(m9.1)
# The shape is good for three reasons: the chains are stationary (each trace stays within the same high-probability region for the whole run), they mix well (the traces zig-zag rapidly rather than wandering slowly), and they converge (all four chains overlap and explore the same region).
y <- c(-1,1)
set.seed(11)
m9.2 <- ulam(alist(
y ~ dnorm(mu, sigma),
mu <- alpha,
alpha ~ dnorm(0, 1000),
sigma ~ dexp(0.0001)),
data = list(y = y), chains = 3)
##
## SAMPLING FOR MODEL 'd26c527083e7eda89b17a8c2eccd6019' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.065 seconds (Warm-up)
## Chain 1: 0.041 seconds (Sampling)
## Chain 1: 0.106 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL 'd26c527083e7eda89b17a8c2eccd6019' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 0 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 2: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 2: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 2: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 2: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 2: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 2: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 2: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 2: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 2: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 2: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 2: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 0.074 seconds (Warm-up)
## Chain 2: 0.016 seconds (Sampling)
## Chain 2: 0.09 seconds (Total)
## Chain 2:
##
## SAMPLING FOR MODEL 'd26c527083e7eda89b17a8c2eccd6019' NOW (CHAIN 3).
## Chain 3:
## Chain 3: Gradient evaluation took 0 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3:
## Chain 3:
## Chain 3: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 3: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 3: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 3: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 3: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 3: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 3: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 3: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 3: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 3: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 3: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 3: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 3:
## Chain 3: Elapsed Time: 0.09 seconds (Warm-up)
## Chain 3: 0.016 seconds (Sampling)
## Chain 3: 0.106 seconds (Total)
## Chain 3:
## Warning: There were 42 divergent transitions after warmup. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## to find out why this is a problem and how to eliminate them.
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: The largest R-hat is 1.11, indicating chains have not mixed.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#r-hat
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
show(m9.2)
## Hamiltonian Monte Carlo approximation
## 1500 samples from 3 chains
##
## Sampling durations (seconds):
## warmup sample total
## chain:1 0.06 0.04 0.11
## chain:2 0.07 0.02 0.09
## chain:3 0.09 0.02 0.11
##
## Formula:
## y ~ dnorm(mu, sigma)
## mu <- alpha
## alpha ~ dnorm(0, 1000)
## sigma ~ dexp(1e-04)
traceplot(m9.2)
# The shape indicates malfunction: the chains are not stationary, drifting and spiking to extreme values far from any stable center, and the separate chains do not overlap one another. This is the result of extremely flat priors combined with only two observations.
9E7. Repeat the problem above, but now for a trace rank plot.
trankplot(m9.1)
trankplot(m9.2)
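# For m9.1, the trace rank histograms of all four chains are roughly uniform and overlapping, the healthy pattern: no chain spends long stretches at consistently high or low ranks. For m9.2, the histograms are far from uniform, with individual chains spending long runs at the extreme ranks, which signals the same poor mixing seen in the trace plots.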
9M1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Use ulam to estimate the posterior. Does the different prior have any detectible influence on the posterior distribution of sigma? Why or why not?
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[complete.cases(d$rgdppc_2000), ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse(dd$cont_africa == 1, 1, 2)
dat_slim <- list(
log_gdp_std = dd$log_gdp_std,
rugged_std = dd$rugged_std,
cid = as.integer(dd$cid))
m9.m1.1 <- ulam(alist(
log_gdp_std ~ dnorm(mu, sigma),
mu <- a[cid] + b[cid]*(rugged_std - 0.215),
a[cid] ~ dnorm(1, 0.1),
b[cid] ~ dnorm(0, 0.3),
sigma ~ dexp(1)),
data = dat_slim, chains = 4, cores = 4)
## recompiling to avoid crashing R session
precis(m9.m1.1, depth = 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 0.8868446 0.015526099 0.86101253 0.91127888 3025.424 0.9994464
## a[2] 1.0501936 0.010329508 1.03322759 1.06667698 3175.405 0.9992210
## b[1] 0.1282541 0.073511547 0.01103383 0.24556007 2286.949 1.0006962
## b[2] -0.1429011 0.057570141 -0.23591207 -0.05303669 3075.485 0.9982806
## sigma 0.1116664 0.005934558 0.10284902 0.12153920 3038.510 0.9999374
pairs(m9.m1.1)
traceplot(m9.m1.1)
trankplot(m9.m1.1)
m9.m1.2 <- ulam(alist(
log_gdp_std ~ dnorm(mu, sigma),
mu <- a[cid] + b[cid]*(rugged_std - 0.215),
a[cid] ~ dnorm(1, 0.1),
b[cid] ~ dnorm(0, 0.3),
sigma ~ dunif(0,1)),
data = dat_slim, chains = 4, cores = 4)
precis(m9.m1.2, depth = 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 0.8864098 0.015485074 0.86208952 0.91199647 2917.845 1.0001992
## a[2] 1.0503998 0.009866952 1.03485844 1.06622259 2925.069 0.9986480
## b[1] 0.1331934 0.072566374 0.02204414 0.24756436 2502.863 0.9991982
## b[2] -0.1432618 0.056315932 -0.23347798 -0.05239063 3101.173 0.9990883
## sigma 0.1115428 0.005910800 0.10262709 0.12116366 2961.890 1.0002611
pairs(m9.m1.2)
traceplot(m9.m1.2)
trankplot(m9.m1.2)
# No detectible influence: the posterior for sigma (and for every other parameter) is essentially identical under the two priors. With 170 observations, the likelihood dominates the prior, and both dexp(1) and dunif(0,1) are nearly flat over the narrow region (roughly 0.10 to 0.12) where the posterior mass for sigma concentrates. The small differences in n_eff between the fits are just simulation noise.
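# To check this directly, overlay the two posteriors for sigma (a quick
# sketch using rethinking's extract.samples and dens):
post_exp  <- extract.samples(m9.m1.1)
post_unif <- extract.samples(m9.m1.2)
dens(post_exp$sigma, xlab = "sigma")
dens(post_unif$sigma, add = TRUE, col = "blue")
# The two densities lie essentially on top of each other.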
9M2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?
m9.m2 <- ulam(alist(
log_gdp_std ~ dnorm(mu, sigma),
mu <- a[cid] + b[cid]*(rugged_std - 0.215),
a[cid] ~ dnorm(1, 0.1),
b[cid] ~ dexp(0.3),
sigma ~ dunif(0,1)),
data = dat_slim, chains = 4, cores = 4)
precis(m9.m2, depth = 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 0.88780482 0.016186031 0.861825012 0.91293343 1591.2148 1.0017450
## a[2] 1.04823974 0.010212421 1.031847313 1.06499012 1910.4549 0.9996511
## b[1] 0.14435879 0.074305069 0.029878781 0.27118492 874.4282 1.0016575
## b[2] 0.01853383 0.018121102 0.001042937 0.05221071 2119.7380 1.0009636
## sigma 0.11415518 0.006333025 0.104505079 0.12491791 1500.0139 1.0003533
pairs(m9.m2)
traceplot(m9.m2)
trankplot(m9.m2)
# The posterior for b[2] changes from clearly negative (about -0.14 under the Normal prior) to a small positive value squeezed up against zero. The exponential prior dexp(0.3) assigns zero probability to negative values, so both slopes are forced to be positive in the posterior, even though the data favor a negative ruggedness slope outside Africa.
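# Plotting the two priors side by side makes the mechanism clear (a quick
# sketch): dexp(0.3) places all of its mass above zero, while dnorm(0, 0.3)
# is symmetric around zero.
curve(dexp(x, 0.3), from = -1, to = 2, ylab = "prior density")
curve(dnorm(x, 0, 0.3), from = -1, to = 2, add = TRUE, lty = 2)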
9M3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?
# 10 warmup
m9.m3.1 <- ulam(alist(
log_gdp_std ~ dnorm(mu, sigma),
mu <- a[cid] + b[cid]*(rugged_std - 0.215),
a[cid] ~ dnorm(1, 0.1),
b[cid] ~ dnorm(0, 0.3),
sigma ~ dexp(1)),
data = dat_slim, chains = 4, cores = 4, warmup = 10, iter = 1000)
## Warning: There were 1050 divergent transitions after warmup. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## to find out why this is a problem and how to eliminate them.
## Warning: There were 1 chains where the estimated Bayesian Fraction of Missing Information was low. See
## http://mc-stan.org/misc/warnings.html#bfmi-low
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: The largest R-hat is 1.21, indicating chains have not mixed.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#r-hat
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
precis(m9.m3.1, depth = 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 0.8872963 0.017130978 0.86065070 0.91360207 122.58841 1.038440
## a[2] 1.0497339 0.009483446 1.03619972 1.06502947 460.92500 1.010946
## b[1] 0.1360033 0.117626715 -0.01003378 0.34221847 19.90381 1.256160
## b[2] -0.1435547 0.056145588 -0.22902706 -0.05894093 164.53852 1.044091
## sigma 0.1125345 0.007101580 0.10223031 0.12181623 66.33343 1.045075
traceplot(m9.m3.1)
# 100 warmup
m9.m3.2 <- ulam(alist(
log_gdp_std ~ dnorm(mu, sigma),
mu <- a[cid] + b[cid]*(rugged_std - 0.215),
a[cid] ~ dnorm(1, 0.1),
b[cid] ~ dnorm(0, 0.3),
sigma ~ dexp(1)),
data = dat_slim, chains = 4, cores = 4, warmup = 100, iter = 1000)
precis(m9.m3.2, depth = 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 0.8869660 0.016213705 0.86098949 0.91248064 4015.050 0.999155
## a[2] 1.0503544 0.010167461 1.03411001 1.06690472 3847.245 1.000746
## b[1] 0.1333127 0.076791173 0.01245515 0.25463699 1594.489 1.001145
## b[2] -0.1409201 0.058934536 -0.23854087 -0.04612551 2051.355 1.003061
## sigma 0.1114960 0.006183883 0.10226060 0.12182553 2289.443 1.000900
traceplot(m9.m3.2)
# 500 warmup
m9.m3.3 <- ulam(alist(
log_gdp_std ~ dnorm(mu, sigma),
mu <- a[cid] + b[cid]*(rugged_std - 0.215),
a[cid] ~ dnorm(1, 0.1),
b[cid] ~ dnorm(0, 0.3),
sigma ~ dexp(1)),
data = dat_slim, chains = 4, cores = 4, warmup = 500, iter = 1000)
precis(m9.m3.3, depth = 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 0.8865794 0.015135360 0.862630633 0.9108440 2518.610 0.9987536
## a[2] 1.0507660 0.010057071 1.034509950 1.0667270 2771.809 0.9993889
## b[1] 0.1294649 0.077296563 0.007403356 0.2536810 2535.631 0.9990068
## b[2] -0.1434897 0.055916816 -0.230819360 -0.0547745 2450.076 0.9996937
## sigma 0.1114407 0.006215865 0.101662010 0.1216584 2489.632 0.9990441
traceplot(m9.m3.3)
# With only 10 warmup iterations, the sampler has not finished adapting: there are over a thousand divergent transitions, Rhat is well above 1, and n_eff is tiny (about 20 for b[1]). With 100 warmup iterations, n_eff is already in the thousands and Rhat is essentially 1; raising warmup to 500 changes little. For this simple model, a few hundred warmup iterations are enough. (Note that because iter was fixed at 1000, the three runs also differ in sampling iterations per chain, 990 vs 900 vs 500, which is why the 100-warmup run can report a larger n_eff than the 500-warmup run.)
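# A compact side-by-side comparison of n_eff across the three warmup settings
# (assuming the precis output can be indexed like a data frame):
cbind(
  warmup_10  = precis(m9.m3.1, depth = 2)$n_eff,
  warmup_100 = precis(m9.m3.2, depth = 2)$n_eff,
  warmup_500 = precis(m9.m3.3, depth = 2)$n_eff)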
9H1. Run the model below and then inspect the posterior distribution and explain what it is accomplishing.
mp <- ulam(
alist(
a ~ dnorm(0,1),
b ~ dcauchy(0,1)
), data=list(y=1) , chains=1 )
##
## SAMPLING FOR MODEL 'bcf56ee89f6cf2a4224a4139ff01c7d4' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.02 seconds (Warm-up)
## Chain 1: 0.013 seconds (Sampling)
## Chain 1: 0.033 seconds (Total)
## Chain 1:
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?
precis(mp, depth = 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a 0.04970448 1.076959 -1.957155 1.608715 52.64105 1.001111
## b 0.06999946 8.830457 -6.534918 6.138137 96.15047 1.019544
pairs(mp)
traceplot(mp)
# The model has no likelihood: y is passed in as data but never appears in the formula. The "posterior" is therefore just the joint prior, so the chain is sampling a directly from Normal(0, 1) and b directly from Cauchy(0, 1).
# The trace for a looks like healthy MCMC output, staying within a stable region around 0. The Cauchy distribution has such heavy tails that it has no expected value, so the trace for b is mostly clustered near 0 but occasionally makes enormous excursions into the tails; no matter how long the chain runs, those spikes keep occurring and the sample mean never stabilizes.
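# To see the missing expected value directly, compare running means of raw
# draws from the two priors (a quick sketch): the Normal running mean settles
# quickly, while the Cauchy running mean keeps jumping no matter how many
# draws accumulate.
set.seed(9)
n <- 1e4
plot(cumsum(rcauchy(n, 0, 1)) / (1:n), type = "l", ylab = "running mean")
lines(cumsum(rnorm(n, 0, 1)) / (1:n), col = "blue")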