This chapter has been an informal introduction to Markov chain Monte Carlo (MCMC) estimation. The goal has been to introduce the purpose and approach of MCMC algorithms. The major algorithms introduced were the Metropolis, Gibbs sampling, and Hamiltonian Monte Carlo algorithms. Each has its advantages and disadvantages. The ulam function in the rethinking package was introduced. It uses the Stan (mc-stan.org) Hamiltonian Monte Carlo engine to fit models as they are defined in this book. General advice about diagnosing poor MCMC fits was illustrated with a couple of pathological examples.
Place each answer inside a code chunk (grey box). Each chunk should contain a text response or code that completes or answers the question or activity. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
9E1. Which of the following is a requirement of the simple Metropolis algorithm?
# Option 3: the proposal distribution must be symmetric. The simple Metropolis algorithm requires that the probability of proposing B from A equal the probability of proposing A from B; the parameters need not be discrete and the likelihood need not be Gaussian.
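# To illustrate, the chapter's King Markov simulation uses exactly such a
# symmetric proposal: a coin flip to move left or right.
num_weeks <- 1e5
positions <- rep( 0 , num_weeks )
current <- 10
for ( i in 1:num_weeks ) {
    positions[i] <- current                           # record current island
    proposal <- current + sample( c(-1,1) , size=1 )  # symmetric proposal
    if ( proposal < 1 ) proposal <- 10                # loop around the archipelago
    if ( proposal > 10 ) proposal <- 1
    prob_move <- proposal/current                     # ratio of island populations
    current <- ifelse( runif(1) < prob_move , proposal , current )
}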
9E2. Gibbs sampling is more efficient than the Metropolis algorithm. How does it achieve this extra efficiency? Are there any limitations to the Gibbs sampling strategy?
# Gibbs sampling gains its efficiency from adaptive proposals. By exploiting conjugate prior-likelihood pairs, it draws each parameter directly from its full conditional distribution, so no proposal is ever rejected. Its limitations: you must be willing to use conjugate priors, and as models grow to hundreds or thousands of (often correlated) parameters, Gibbs sampling gets stuck taking small steps and becomes very inefficient.
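# A minimal sketch (my own illustration, not from the chapter): Gibbs sampling
# a standard bivariate normal with correlation rho. Each full conditional is a
# known normal, so every draw is accepted.
rho <- 0.9
n_iter <- 1000
x <- y <- numeric( n_iter )
for ( i in 2:n_iter ) {
    x[i] <- rnorm( 1 , rho*y[i-1] , sqrt(1-rho^2) )  # draw x | y
    y[i] <- rnorm( 1 , rho*x[i] , sqrt(1-rho^2) )    # draw y | x
}
plot( x , y , pch=16 , cex=0.4 )  # samples trace out the joint distribution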
9E3. Which sort of parameters can Hamiltonian Monte Carlo not handle? Can you explain why?
# HMC cannot handle discrete parameters. It works by simulating a frictionless particle gliding over the log-posterior surface, which requires the gradient of the log-posterior at every step of the trajectory. Discrete parameters have no gradient, so no trajectory can be computed for them; they must be marginalized out of the model before HMC can be used.
9E4. Explain the difference between the effective number of samples, n_eff as calculated by Stan, and the actual number of samples.
# The actual number of samples is simply the total number of draws the chains produce. Markov chain samples are autocorrelated, so they carry less information than independent draws. n_eff is Stan's estimate of the equivalent number of independent samples for the purpose of estimating the posterior mean; it is usually smaller than the actual number, though with anti-correlated samples it can even exceed it.
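# A quick illustration (my own sketch; uses the coda package, which rethinking
# loads as a dependency): an autocorrelated AR(1) series of 4000 draws is
# worth far fewer than 4000 independent samples.
set.seed(9)
chain <- as.numeric( arima.sim( list(ar=0.9) , n=4000 ) )
coda::effectiveSize( chain )  # on the order of a few hundred, not 4000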
9E5. Which value should Rhat approach, when a chain is sampling the posterior distribution correctly?
# Rhat should approach 1.00 from above. Values much above 1 (e.g., above 1.01) indicate that the chains have not yet converged on the same posterior distribution.
9E6. Sketch a good trace plot for a Markov chain, one that is effectively sampling from the posterior distribution. What is good about its shape? Then sketch a trace plot for a malfunctioning Markov chain. What about its shape indicates malfunction?
library(rethinking)
library(rstan)
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]        # keep countries with GDP data
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)   # rescale: 1 = average log GDP
dd$rugged_std <- dd$rugged / max(dd$rugged)       # rescale: 1 = most rugged
dd$cid <- ifelse( dd$cont_africa==1 , 1 , 2 )     # index: 1 = Africa, 2 = other
model <- quap(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) ,
    data=dd )
precis( model , depth=2 )
## mean sd 5.5% 94.5%
## a[1] 0.8865618 0.015675716 0.86150903 0.91161467
## a[2] 1.0505724 0.009936621 1.03469172 1.06645300
## b[1] 0.1324954 0.074204564 0.01390221 0.25108866
## b[2] -0.1425742 0.054749491 -0.23007442 -0.05507389
## sigma 0.1094943 0.005935323 0.10000850 0.11898009
dat_slim <- list(
    log_gdp_std = dd$log_gdp_std,
    rugged_std = dd$rugged_std,
    cid = as.integer( dd$cid )
)
model2 <- ulam(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dat_slim , chains=4 , cores=4 , iter=1000 )
precis(model2)
## 4 vector or matrix parameters hidden. Use depth=2 to show them.
## mean sd 5.5% 94.5% n_eff Rhat4
## sigma 0.1118736 0.00619731 0.1025506 0.1220588 2675.628 0.9989302
traceplot(model2)
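# A good trace plot, like the one above for model2: the chains are stationary
# (they stay within the same high-probability region for the whole run), well
# mixed (each trace zig-zags rapidly, with little autocorrelation), and all
# four chains overlap in the same region. Next, a deliberately broken chain:
# two observations and absurdly flat priors.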
y <- c(-1,1)
set.seed(11)
model3 <- ulam(
    alist(
        y ~ dnorm( mu , sigma ),
        mu <- alpha,
        alpha ~ dnorm( 0 , 1000 ),
        sigma ~ dexp( 0.0001 )
    ) , data=list(y=y) , chains=4 , cores=4
)
## Warning: There were 83 divergent transitions after warmup. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## to find out why this is a problem and how to eliminate them.
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: The largest R-hat is 1.13, indicating chains have not mixed.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#r-hat
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
precis(model3)
## mean sd 5.5% 94.5% n_eff Rhat4
## alpha 10.97385 376.0519 -524.14011 556.997 160.3099 1.021409
## sigma 749.74256 2048.4936 16.06148 2662.224 331.6216 1.020117
traceplot(model3)
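# A malfunctioning trace plot, like the one above for model3: the chains drift
# in long excursions, spike to extreme values, and do not overlap one another.
# They never settle into a stationary distribution, which is exactly what the
# divergent-transition and Rhat warnings indicate.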
9E7. Repeat the problem above, but now for a trace rank plot.
trankplot( model2 )
## Warning in if (class(x) == "numeric") x <- array(x, dim = c(length(x), 1)): the
## condition has length > 1 and only the first element will be used
trankplot( model3 )
## Warning in if (class(x) == "numeric") x <- array(x, dim = c(length(x), 1)): the
## condition has length > 1 and only the first element will be used
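# In the healthy trace rank plot (model2), the rank histograms of the four
# chains overlap and stay close to uniform: no chain spends long stretches
# holding the highest or lowest ranks. In the malfunctioning plot (model3),
# the histograms are lumpy and separate, with single chains hoarding the
# extreme ranks for long stretches; that is the signature of poor mixing.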
9M1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Use ulam to estimate the posterior. Does the different prior have any detectible influence on the posterior distribution of sigma? Why or why not?
data(rugged)
data <- rugged
data$log_gdp <- log(data$rgdppc_2000)
dd2 <- data[ complete.cases(data$rgdppc_2000) , ]
dd2$log_gdp_std <- dd2$log_gdp / mean(dd2$log_gdp)
dd2$rugged_std <- dd2$rugged / max(dd2$rugged)
dd2$cid <- ifelse( dd2$cont_africa==1 , 1 , 2 )
model4 <- quap(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) ,
    data=dd2 )
precis(model4 , depth=2)
## mean sd 5.5% 94.5%
## a[1] 0.8865638 0.015675786 0.8615109 0.9116168
## a[2] 1.0505698 0.009936665 1.0346890 1.0664505
## b[1] 0.1325055 0.074204857 0.0139118 0.2510992
## b[2] -0.1425761 0.054749718 -0.2300767 -0.0550755
## sigma 0.1094948 0.005935387 0.1000089 0.1189807
pairs(model4)
model4_unif <- quap(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dunif( 0 , 1 )
    ) ,
    data=dd2 )
precis(model4_unif , depth=2)
## mean sd 5.5% 94.5%
## a[1] 0.8865598 0.015680166 0.86149984 0.9116197
## a[2] 1.0505692 0.009939492 1.03468400 1.0664545
## b[1] 0.1325319 0.074224840 0.01390623 0.2511575
## b[2] -0.1425299 0.054764976 -0.23005491 -0.0550049
## sigma 0.1095262 0.005939653 0.10003352 0.1190189
pairs(model4_unif)
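# The uniform prior has no detectable influence: sigma's posterior is
# essentially identical under dexp(1) and dunif(0,1) (mean 0.109 and the same
# 89% interval to three decimals), and the pairs plots match. With 170
# countries, the likelihood dominates the prior; both priors are weak and
# spread plenty of mass over the plausible region near 0.1.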
9M2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?
model5 <- ulam(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dexp( 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dat_slim , chains=4 , cores=4 )
show( model5 )
## Hamiltonian Monte Carlo approximation
## 2000 samples from 4 chains
##
## Sampling durations (seconds):
## warmup sample total
## chain:1 0.38 0.13 0.50
## chain:2 0.42 0.16 0.57
## chain:3 0.38 0.13 0.52
## chain:4 0.34 0.14 0.48
##
## Formula:
## log_gdp_std ~ dnorm(mu, sigma)
## mu <- a[cid] + b[cid] * (rugged_std - 0.215)
## a[cid] ~ dnorm(1, 0.1)
## b[cid] ~ dexp(0.3)
## sigma ~ dexp(1)
precis(model5,2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 0.8866609 0.016297333 0.860418442 0.91265836 1790.398 1.0007070
## a[2] 1.0485720 0.010313368 1.032033772 1.06522409 2049.147 0.9989362
## b[1] 0.1467541 0.074433603 0.030710198 0.26724094 1052.197 1.0030099
## b[2] 0.0185734 0.017176851 0.001025054 0.05103307 1754.869 0.9996802
## sigma 0.1141120 0.006255951 0.104683508 0.12495609 1408.000 0.9999856
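# dexp(0.3) places all prior mass on positive values, so the slopes can no
# longer be negative. b[1] (Africa) barely changes (0.147 vs 0.132 before),
# because its posterior was already positive. But b[2] (non-Africa), which was
# reliably negative under the Normal(0, 0.3) prior (-0.14), is now forced to
# be non-negative: its posterior piles up just above zero (mean 0.019). The
# prior truncates the parameter space, so the model can no longer express a
# negative ruggedness effect outside Africa.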
9M3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?
model6 <- ulam(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dat_slim , chains=4 , cores=4 , iter=1000 )
precis( model6 , 2 )
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 0.8869093 0.015755621 0.86175638 0.9121889 2346.761 0.9992880
## a[2] 1.0503897 0.009667121 1.03441087 1.0657235 3297.905 0.9982358
## b[1] 0.1324425 0.077058022 0.01162472 0.2579711 2070.469 0.9995292
## b[2] -0.1433868 0.055633172 -0.23398827 -0.0544610 2104.682 1.0002698
## sigma 0.1116323 0.006057864 0.10261643 0.1215318 2240.258 0.9998442
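# model6 above uses ulam's default warmup (half of iter, so 500 here). A
# sketch for comparing warmup settings directly (my own sketch; n_eff values
# vary from run to run, so none are shown): hold the post-warmup samples fixed
# at 1000 per chain and vary only the warmup.
warmup_vals <- c( 5 , 10 , 100 , 500 , 1000 )
neff <- sapply( warmup_vals , function(w) {
    m <- ulam(
        alist(
            log_gdp_std ~ dnorm( mu , sigma ) ,
            mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
            a[cid] ~ dnorm( 1 , 0.1 ) ,
            b[cid] ~ dnorm( 0 , 0.3 ) ,
            sigma ~ dexp( 1 )
        ) , data=dat_slim , chains=1 , warmup=w , iter=w+1000 )
    precis( m , depth=2 )$n_eff
} )
rownames(neff) <- c( "a[1]" , "a[2]" , "b[1]" , "b[2]" , "sigma" )
colnames(neff) <- paste0( "w" , warmup_vals )
neff
# Expect very low n_eff at the shortest warmups (Stan has not yet tuned its
# step size) and a plateau once warmup reaches a few hundred iterations; for a
# simple model like this, more warmup than that buys little.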
9H1. Run the model below and then inspect the posterior distribution and explain what it is accomplishing. Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?
mp <- ulam(
    alist(
        a ~ dnorm( 0 , 1 ),
        b ~ dcauchy( 0 , 1 )
    ), data=list(y=1) , chains=1 )
##
## SAMPLING FOR MODEL 'bcf56ee89f6cf2a4224a4139ff01c7d4' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.041 seconds (Warm-up)
## Chain 1: 0.034 seconds (Sampling)
## Chain 1: 0.075 seconds (Total)
## Chain 1:
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
traceplot(mp)
# With no likelihood in the model, there is nothing to condition on, so ulam
# simply samples from the two priors: a ~ Normal(0, 1) and b ~ Cauchy(0, 1).
# The trace for a is stationary and well mixed. The trace for b shows
# occasional extreme spikes: the Cauchy has such heavy tails that it has no
# expected value, so the chain keeps visiting extreme values and its running
# mean never settles. That is correct behavior for a Cauchy prior, not a
# convergence failure, though it does depress n_eff.
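# A quick follow-up sketch: compare the marginal samples directly.
post <- extract.samples( mp )
precis( data.frame(post) )
# a has a tight, stable summary around 0; b has an enormous standard deviation
# and extreme tails, because Cauchy draws never settle around a mean.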