This chapter has been an informal introduction to Markov chain Monte Carlo (MCMC) estimation. The goal has been to introduce the purpose and approach of MCMC algorithms. The major algorithms introduced were the Metropolis, Gibbs sampling, and Hamiltonian Monte Carlo algorithms. Each has its advantages and disadvantages. The ulam function in the rethinking package was introduced. It uses the Stan (mc-stan.org) Hamiltonian Monte Carlo engine to fit models as they are defined in this book. General advice about diagnosing poor MCMC fits was illustrated with a couple of pathological examples.
Place each answer inside a code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file as: YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
9E1. Which of the following is a requirement of the simple Metropolis algorithm?
#3. The proposal distribution must be symmetric.
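# To make the symmetry requirement concrete, here is a minimal Metropolis
# sketch of my own (not from the chapter): because the Normal proposal is
# symmetric, q(a|b) = q(b|a) cancels out and the acceptance ratio reduces
# to the ratio of target densities.
metropolis <- function(n, target, init = 0, step = 1) {
  x <- numeric(n)
  x[1] <- init
  for (i in 2:n) {
    prop <- rnorm(1, mean = x[i - 1], sd = step) # symmetric proposal
    if (runif(1) < target(prop) / target(x[i - 1])) {
      x[i] <- prop       # accept the proposal
    } else {
      x[i] <- x[i - 1]   # reject: stay at the current value
    }
  }
  x
}
draws <- metropolis(5000, function(x) dnorm(x, mean = 2, sd = 1))
hist(draws) # should be centered near 2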
9E2. Gibbs sampling is more efficient than the Metropolis algorithm. How does it achieve this extra efficiency? Are there any limitations to the Gibbs sampling strategy?
# Gibbs sampling explores the posterior more efficiently, with fewer rejections than the basic Metropolis algorithm.
# It achieves this by exploiting the analytic form of the likelihood and conjugate priors: it draws adaptive proposals from the full conditional distributions, effectively merging the proposal and accept/reject steps.
# Its limitations: it requires conjugate priors, and in models with many or highly correlated parameters it can become just as inefficient as Metropolis.
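# A minimal Gibbs sketch of my own (not from the chapter): for a standard
# bivariate Normal with correlation rho, each full conditional is a known
# Normal, so every draw is an adaptive proposal that is always "accepted".
gibbs_bvn <- function(n, rho = 0.9) {
  x <- y <- numeric(n)
  for (i in 2:n) {
    x[i] <- rnorm(1, rho * y[i - 1], sqrt(1 - rho^2)) # x | y is Normal
    y[i] <- rnorm(1, rho * x[i], sqrt(1 - rho^2))     # y | x is Normal
  }
  cbind(x, y)
}
plot(gibbs_bvn(2000), pch = 16, col = rgb(0, 0, 0, 0.2))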
9E3. Which sort of parameters can Hamiltonian Monte Carlo not handle? Can you explain why?
# Hamiltonian Monte Carlo cannot handle discrete parameters. The method is built on the physical metaphor of a moving particle: the particle glides through parameter space with its path shaped by the gradient of the log-posterior. That requires the posterior surface to be differentiable, and a surface defined over discrete parameters is not.
9E4. Explain the difference between the effective number of samples, n_eff as calculated by Stan, and the actual number of samples.
# n_eff estimates the number of 'ideal' samples, i.e., samples that are entirely uncorrelated. Because of the way MCMC works, each sample is correlated with the previous one to some extent. n_eff estimates how many independent samples would carry the same information as the current autocorrelated sequence of posterior samples.
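# An illustration of my own (not Stan's exact estimator): for a chain with
# first-order autocorrelation rho, the effective sample size is roughly
# n * (1 - rho) / (1 + rho), far below the nominal n.
n <- 1e4
rho <- 0.9
z <- as.numeric(arima.sim(list(ar = rho), n)) # an autocorrelated "chain"
n * (1 - rho) / (1 + rho) # approx. effective sample size: ~526, not 10000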
9E5. Which value should Rhat approach, when a chain is sampling the posterior distribution correctly?
# Rhat should approach 1. It compares the within-chain variance to the between-chain variance; when the two are roughly the same, the chains agree with one another, so we can expect that inference is not broken.
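# A rough sketch of the idea behind Rhat (simplified relative to Stan's
# actual rank-normalized version): compare within-chain variance W to
# between-chain variance B.
rhat <- function(chains) {                  # chains: one column per chain
  n <- nrow(chains)
  W <- mean(apply(chains, 2, var))          # within-chain variance
  B <- n * var(colMeans(chains))            # between-chain variance
  sqrt(((n - 1) / n * W + B / n) / W)       # ~1 when W and B agree
}
rhat(cbind(rnorm(1000), rnorm(1000)))           # well-mixed: close to 1
rhat(cbind(rnorm(1000), rnorm(1000, mean = 3))) # stuck chains: >> 1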
9E6. Sketch a good trace plot for a Markov chain, one that is effectively sampling from the posterior distribution. What is good about its shape? Then sketch a trace plot for a malfunctioning Markov chain. What about its shape indicates malfunction?
# A good trace looks like a stationary 'hairy caterpillar': it hovers
# around a stable mean with rapid wiggle and no long drifts, so the chain
# explores the whole posterior quickly.
plot(y=rnorm(1000,0,1), x=1:1000, type="l")
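# A malfunctioning chain, sketched here as a drifting random walk (an
# illustrative stand-in, not a fitted model): it wanders across parameter
# space instead of hovering, so different stretches of the chain describe
# different regions rather than the same posterior.
plot(y=cumsum(rnorm(1000,0,1)), x=1:1000, type="l")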
9E7. Repeat the problem above, but now for a trace rank plot.
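# No model is fit here, so this is a hand-made sketch of the idea behind a
# trace rank ("trank") plot: pool the draws from all chains, rank them,
# then histogram the ranks chain by chain. Healthy chains give roughly
# uniform rank histograms; a malfunctioning chain (here: shifted mean)
# hogs the high ranks while the others are pushed low.
chain1 <- rnorm(1000)
chain2 <- rnorm(1000)
chain3 <- rnorm(1000, mean = 2) # the malfunctioning chain
r <- rank(c(chain1, chain2, chain3))
par(mfrow = c(1, 3))
hist(r[1:1000], main = "chain 1 (healthy)", xlab = "rank")
hist(r[1001:2000], main = "chain 2 (healthy)", xlab = "rank")
hist(r[2001:3000], main = "chain 3 (bad)", xlab = "rank")
par(mfrow = c(1, 1))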
9M1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Visualize the priors. Use ulam to estimate the posterior. Visualize the posteriors for both models. Does the different prior have any detectible influence on the posterior distribution of sigma? Why or why not?
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse( dd$cont_africa==1 , 1 , 2 )
m8.5 <- quap(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
    a[cid] ~ dnorm( 1 , 0.1 ) ,
    b[cid] ~ dnorm( 0 , 0.3 ) ,
    sigma ~ dexp( 1 )
  ) ,
  data=dd )
precis( m8.5 , depth=2 )
## mean sd 5.5% 94.5%
## a[1] 0.8865643 0.015673402 0.86151517 0.91161342
## a[2] 1.0505696 0.009935129 1.03469137 1.06644788
## b[1] 0.1325070 0.074193980 0.01393069 0.25108331
## b[2] -0.1425803 0.054741446 -0.23006772 -0.05509291
## sigma 0.1094777 0.005933069 0.09999547 0.11895985
m8.5_unif <- quap(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
    a[cid] ~ dnorm( 1 , 0.1 ) ,
    b[cid] ~ dnorm( 0 , 0.3 ) ,
    sigma ~ dunif( 0 , 1 )
  ) ,
  data=dd )
precis(m8.5_unif , depth=2)
## mean sd 5.5% 94.5%
## a[1] 0.8865582 0.015679942 0.86149866 0.91161781
## a[2] 1.0505698 0.009939350 1.03468476 1.06645476
## b[1] 0.1325232 0.074223840 0.01389921 0.25114727
## b[2] -0.1426714 0.054764065 -0.23019492 -0.05514782
## sigma 0.1095247 0.005939435 0.10003229 0.11901702
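# The question also asks to visualize the priors; a quick sketch of the
# two priors for sigma:
curve(dexp(x, 1), from = 0, to = 1.5, ylab = "prior density", lty = 1)
curve(dunif(x, 0, 1), add = TRUE, lty = 2)
legend("topright", legend = c("dexp(1)", "dunif(0,1)"), lty = 1:2)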
pairs(m8.5)
pairs(m8.5_unif)
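# To inspect sigma directly, overlay the two posteriors (using base R's
# density here rather than any chapter helper):
post_exp <- extract.samples(m8.5)
post_unif <- extract.samples(m8.5_unif)
plot(density(post_exp$sigma), main = "posterior of sigma")
lines(density(post_unif$sigma), lty = 2)
legend("topright", legend = c("dexp(1)", "dunif(0,1)"), lty = 1:2)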
# The two posteriors are essentially indistinguishable. With 170 countries in the data, the likelihood dominates, and both priors are diffuse enough over the plausible range of sigma (around 0.1) that neither has any detectable influence on the posterior.
9M2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?
# The prior that changes is the one on b[cid]. ulam is used here, both
# because this is the chapter's Stan workflow and because the exponential
# prior puts a hard boundary at zero that MAP estimation handles poorly.
dat <- list(
  log_gdp_std = dd$log_gdp_std,
  rugged_std = dd$rugged_std,
  cid = as.integer(dd$cid)
)
m9.2 <- ulam(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
    a[cid] ~ dnorm( 1 , 0.1 ) ,
    b[cid] ~ dexp( 0.3 ) ,
    sigma ~ dexp( 1 )
  ) ,
  data=dat , chains=4 )
precis( m9.2 , depth=2 )
pairs(m9.2)
# The exponential prior has positive support only, so it forces both slopes to be positive. b[1] (Africa) is barely affected, but b[2] (non-Africa), which was reliably negative under the Normal prior, now piles up against zero: the model can no longer express the negative relationship between ruggedness and GDP outside Africa.
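# Why this happens is easiest to see by plotting the two priors for b
# side by side (my own quick sketch):
curve(dnorm(x, 0, 0.3), from = -1, to = 1, ylab = "prior density")
curve(dexp(x, 0.3), add = TRUE, lty = 2)
legend("topleft", legend = c("dnorm(0,0.3)", "dexp(0.3)"), lty = 1:2)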
9M3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?
# The question asks for a Stan model, so re-use the ruggedness model and
# the dat list from 9M2, fit with ulam at several warmup lengths, and keep
# 500 sampling iterations in every run (the warmup values below are
# arbitrary illustrative choices):
for ( w in c(5, 50, 500) ) {
  m9.3 <- ulam(
    alist(
      log_gdp_std ~ dnorm( mu , sigma ) ,
      mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
      a[cid] ~ dnorm( 1 , 0.1 ) ,
      b[cid] ~ dnorm( 0 , 0.3 ) ,
      sigma ~ dexp( 1 )
    ) ,
    data=dat , chains=1 , warmup=w , iter=w+500 )
  print( precis( m9.3 , depth=2 ) )
}
# With only a handful of warmup iterations, n_eff is very poor because the
# sampler never tuned its step size. n_eff improves rapidly with more
# warmup and then levels off: a few hundred warmup iterations are enough
# for a simple model like this one.
9H1. Run the model below and then inspect the posterior distribution and explain what it is accomplishing. Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?
mp <- ulam(
  alist(
    a ~ dnorm(0,1),
    b ~ dcauchy(0,1)
  ), data=list(y=1) , chains=1 )
## SAMPLING FOR MODEL '3bd3f4d287e9cccab124308e5415245c' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 1.4e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.14 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.07185 seconds (Warm-up)
## Chain 1: 0.181484 seconds (Sampling)
## Chain 1: 0.253334 seconds (Total)
## Chain 1:
## Warning: The largest R-hat is 1.07, indicating chains have not mixed.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#r-hat
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
# There is no likelihood for y in the formula, so the data do not influence
# inference at all: the model simply samples from the specified prior
# distributions. We expect a Normal distribution for a and a Cauchy for b.
plot(mp)
precis(mp)
## mean sd 5.5% 94.5% n_eff Rhat4
## a -0.09588696 0.856269 -1.556777 1.230984 35.93122 1.018954
## b -0.18422309 2.942966 -2.777166 2.536628 463.55811 1.002743
samples <- extract.samples(mp)
hist(samples$a) # a clean Gaussian shape
hist(samples$b) # hard to read because of several extreme values, as expected from the Cauchy's thick tails
# The trace for a looks like a healthy stationary chain, while the trace
# for b shows occasional extreme spikes. Because the Cauchy has no expected
# value, the chain keeps visiting arbitrarily large values no matter how
# long it runs; those rare, huge excursions are what make b's trace so
# spiky compared to a's.