9E1. Which of the following is a requirement of the simple Metropolis algorithm?
#3. The proposal distribution must be symmetric.
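# To see what the symmetry requirement buys, here is a minimal Metropolis
# sketch (illustrative, not from the book): the target is a standard Normal
# and the proposal is a symmetric Normal step, so the acceptance probability
# reduces to a plain ratio of target densities, with no proposal correction.
set.seed(9)
n_steps <- 5000
q <- numeric(n_steps)
for ( i in 2:n_steps ) {
    proposal <- q[i-1] + rnorm( 1 , 0 , 0.5 )        # symmetric: N(current, 0.5)
    accept_p <- dnorm( proposal ) / dnorm( q[i-1] )  # density ratio only
    q[i] <- if ( runif(1) < accept_p ) proposal else q[i-1]
}
# With an asymmetric proposal, the ratio would also need the Hastings
# correction q(current|proposal)/q(proposal|current); that generalization
# is the Metropolis-Hastings algorithm.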
9E2. Gibbs sampling is more efficient than the Metropolis algorithm. How does it achieve this extra efficiency? Are there any limitations to the Gibbs sampling strategy?
# Gibbs sampling adapts the distribution of proposed parameter values to the current parameter values, and it exploits conjugate pairs of priors and likelihoods that have analytic solutions for the conditional posterior of each individual parameter, so no proposals are wasted on rejections. The limitations follow from the same trick: you have to be willing to use conjugate priors, and in models with many (often correlated) parameters, Gibbs sampling can still get stuck, creeping through the posterior in small, inefficient steps.
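# To make this concrete, here is a minimal Gibbs sketch (illustrative, not
# from the book): a Normal likelihood with unknown mean and variance, paired
# with the conjugate Normal prior on mu and Inverse-Gamma prior on sigma^2,
# so each full conditional can be drawn from exactly and nothing is rejected.
set.seed(1)
y_sim <- rnorm( 50 , mean=3 , sd=2 )   # simulated data
n <- length(y_sim)
mu0 <- 0 ; tau0 <- 10                  # Normal(mu0, tau0) prior on mu
a0 <- 1 ; b0 <- 1                      # Inv-Gamma(a0, b0) prior on sigma^2
n_iter <- 2000
mu_s <- numeric(n_iter) ; s2_s <- numeric(n_iter)
mu_cur <- 0 ; s2_cur <- 1
for ( i in 1:n_iter ) {
    # full conditional of mu is Normal (conjugate update)
    prec <- 1/tau0^2 + n/s2_cur
    m <- ( mu0/tau0^2 + sum(y_sim)/s2_cur ) / prec
    mu_cur <- rnorm( 1 , m , sqrt(1/prec) )
    # full conditional of sigma^2 is Inverse-Gamma (drawn via 1/rgamma)
    s2_cur <- 1 / rgamma( 1 , shape = a0 + n/2 ,
                          rate = b0 + sum( (y_sim - mu_cur)^2 )/2 )
    mu_s[i] <- mu_cur ; s2_s[i] <- s2_cur
}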
9E3. Which sort of parameters can Hamiltonian Monte Carlo not handle? Can you explain why?
# Hamiltonian Monte Carlo cannot handle discrete parameters. The method is built on the physical metaphor of a moving particle: the particle glides across the parameter space, steered by the gradient of the log-posterior, so the posterior surface must be differentiable everywhere the particle travels. A discrete parameter has no gradient, so the particle has no way to move along that dimension.
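# The gradient dependence is visible in the leapfrog integrator at the heart
# of HMC. A bare sketch (illustrative; it omits momentum resampling and the
# accept/reject step), assuming a standard Normal posterior:
grad_U <- function(q) q               # gradient of the negative log-posterior
                                      # of a standard Normal; for a discrete q
                                      # this function would not exist
leapfrog <- function( q , p , eps=0.1 , L=20 ) {
    p <- p - eps/2 * grad_U(q)        # half step for momentum
    for ( i in 1:L ) {
        q <- q + eps * p              # full step for position
        if ( i < L ) p <- p - eps * grad_U(q)
    }
    p <- p - eps/2 * grad_U(q)        # closing half step for momentum
    c( q=q , p=-p )                   # negate momentum for reversibility
}
leapfrog( q=1 , p=0.5 )               # one deterministic trajectory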
9E4. Explain the difference between the effective number of samples, n_eff as calculated by Stan, and the actual number of samples.
# n_eff estimates the number of 'ideal' samples, where ideal samples are entirely uncorrelated. Because of the way MCMC works, each sample is correlated with the previous one to some extent. n_eff estimates how many independent samples would carry the same information as the current autocorrelated sequence of posterior samples; it is therefore usually smaller than the actual number of samples, although Stan's sampler can occasionally produce anti-correlated samples and an n_eff larger than the sample count.
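# A rough numerical illustration of the idea, assuming an AR(1) chain with
# autocorrelation rho; the coda package (installed alongside rethinking)
# provides an empirical estimator. For an AR(1) chain, n correlated draws
# carry roughly n * (1 - rho) / (1 + rho) independent draws' worth of
# information.
set.seed(2)
rho <- 0.9
x <- as.numeric( arima.sim( list(ar=rho) , n=1e4 ) )
1e4 * (1 - rho) / (1 + rho)       # theoretical ESS, about 526
coda::effectiveSize( x )          # empirical estimate, similar magnitude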
9E5. Which value should Rhat approach, when a chain is sampling the posterior distribution correctly?
# Rhat should approach 1. It compares the variance within each chain to the variance between chains; when the two roughly agree, all chains are exploring the same distribution, and we can expect that the inference is not broken (it does not depend on which chain we look at).
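# The idea behind Rhat can be sketched directly (simplified; recent Stan
# versions also use split chains and rank-normalization): compute the
# within-chain variance W and the between-chain variance B and compare.
set.seed(3)
n <- 1000
chains <- replicate( 4 , rnorm(n) )       # four healthy, well-mixed "chains"
W <- mean( apply( chains , 2 , var ) )    # within-chain variance
B <- n * var( colMeans( chains ) )        # between-chain variance
sqrt( ( (n-1)/n * W + B/n ) / W )         # Rhat: very close to 1 here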
9E6. Sketch a good trace plot for a Markov chain, one that is effectively sampling from the posterior distribution. What is good about its shape? Then sketch a trace plot for a malfunctioning Markov chain. What about its shape indicates malfunction?
library(rethinking)
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse( dd$cont_africa==1 , 1 , 2 )
m8.3 <- quap(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dd )
precis( m8.3 , depth=2 )
## mean sd 5.5% 94.5%
## a[1] 0.8865642 0.015675245 0.86151217 0.91161631
## a[2] 1.0505715 0.009936315 1.03469139 1.06645169
## b[1] 0.1324998 0.074202395 0.01391007 0.25108959
## b[2] -0.1425727 0.054747846 -0.23007035 -0.05507508
## sigma 0.1094909 0.005934862 0.10000583 0.11897594
dat_slim <- list(
    log_gdp_std = dd$log_gdp_std ,
    rugged_std = dd$rugged_std ,
    cid = as.integer( dd$cid )
)
str(dat_slim)
## List of 3
## $ log_gdp_std: num [1:170] 0.88 0.965 1.166 1.104 0.915 ...
## $ rugged_std : num [1:170] 0.138 0.553 0.124 0.125 0.433 ...
## $ cid : int [1:170] 1 2 2 2 2 2 2 2 2 1 ...
m9.1 <- ulam(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dat_slim , chains=4 , cores=4 )
show( m9.1 )
## Hamiltonian Monte Carlo approximation
## 2000 samples from 4 chains
##
## Sampling durations (seconds):
## warmup sample total
## chain:1 0.05 0.04 0.09
## chain:2 0.05 0.04 0.09
## chain:3 0.06 0.04 0.10
## chain:4 0.05 0.04 0.09
##
## Formula:
## log_gdp_std ~ dnorm(mu, sigma)
## mu <- a[cid] + b[cid] * (rugged_std - 0.215)
## a[cid] ~ dnorm(1, 0.1)
## b[cid] ~ dnorm(0, 0.3)
## sigma ~ dexp(1)
traceplot( m9.1 )
# In a trace plot the samples are plotted in sequential order and joined by a line. Three properties mark a healthy chain: (1) stationarity, meaning the path stays within the same high-probability region of the posterior; (2) good mixing, meaning the path zig-zags rapidly rather than wandering slowly, so successive samples are only weakly correlated; and (3) convergence, meaning independent chains settle into and overlap within the same region. A malfunctioning chain violates one or more of these: it drifts, makes sudden extreme excursions, or its chains fail to overlap.
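# Before turning to a real malfunctioning chain (m9.2 below), the two shapes
# can be sketched with simulated sequences (illustrative only):
set.seed(4)
good <- rnorm( 1000 )                     # stationary, rapid zig-zag
bad <- cumsum( rnorm( 1000 , sd=0.2 ) )   # random walk: drifts, never settles
par( mfrow=c(1,2) )
plot( good , type="l" , main="healthy" , xlab="sample" , ylab="value" )
plot( bad , type="l" , main="malfunctioning" , xlab="sample" , ylab="value" )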
y <- c(-1,1)
set.seed(11)
m9.2 <- ulam(
    alist(
        y ~ dnorm( mu , sigma ) ,
        mu <- alpha ,
        alpha ~ dnorm( 0 , 1000 ) ,
        sigma ~ dexp( 0.0001 )
    ) , data=list(y=y) , chains=3 )
##
## SAMPLING FOR MODEL '726d002e27cec1633082261fcfedb813' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 1.2e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.056395 seconds (Warm-up)
## Chain 1: 0.008729 seconds (Sampling)
## Chain 1: 0.065124 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL '726d002e27cec1633082261fcfedb813' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 3e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 2: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 2: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 2: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 2: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 2: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 2: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 2: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 2: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 2: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 2: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 2: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 0.052671 seconds (Warm-up)
## Chain 2: 0.008852 seconds (Sampling)
## Chain 2: 0.061523 seconds (Total)
## Chain 2:
##
## SAMPLING FOR MODEL '726d002e27cec1633082261fcfedb813' NOW (CHAIN 3).
## Chain 3:
## Chain 3: Gradient evaluation took 3e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.03 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3:
## Chain 3:
## Chain 3: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 3: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 3: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 3: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 3: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 3: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 3: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 3: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 3: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 3: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 3: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 3: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 3:
## Chain 3: Elapsed Time: 0.057726 seconds (Warm-up)
## Chain 3: 0.007911 seconds (Sampling)
## Chain 3: 0.065637 seconds (Total)
## Chain 3:
## Warning: There were 82 divergent transitions after warmup. Increasing adapt_delta above 0.95 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: The largest R-hat is 1.07, indicating chains have not mixed.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#r-hat
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
show( m9.2 )
## Hamiltonian Monte Carlo approximation
## 1500 samples from 3 chains
##
## Sampling durations (seconds):
## warmup sample total
## chain:1 0.06 0.01 0.07
## chain:2 0.05 0.01 0.06
## chain:3 0.06 0.01 0.07
##
## Formula:
## y ~ dnorm(mu, sigma)
## mu <- alpha
## alpha ~ dnorm(0, 1000)
## sigma ~ dexp(1e-04)
traceplot( m9.2 )
# This is what malfunction looks like: with only two observations and nearly
# flat priors, the traces for alpha and sigma drift widely and make sudden
# extreme excursions, the chains do not overlap, and the divergent-transition
# and Rhat warnings above confirm the problem.
9E7. Repeat the problem above, but now for a trace rank plot.
trankplot( m9.1 )
## Warning in if (class(x) == "numeric") x <- array(x, dim = c(length(x), 1)): the
## condition has length > 1 and only the first element will be used
## Warning in if (class(x) == "numeric") x <- array(x, dim = c(length(x), 1)): the
## condition has length > 1 and only the first element will be used
## Warning in if (class(x) == "numeric") x <- array(x, dim = c(length(x), 1)): the
## condition has length > 1 and only the first element will be used
## Warning in if (class(x) == "numeric") x <- array(x, dim = c(length(x), 1)): the
## condition has length > 1 and only the first element will be used
## Warning in if (class(x) == "numeric") x <- array(x, dim = c(length(x), 1)): the
## condition has length > 1 and only the first element will be used
trankplot( m9.2 )
## Warning in if (class(x) == "numeric") x <- array(x, dim = c(length(x), 1)): the
## condition has length > 1 and only the first element will be used
## Warning in if (class(x) == "numeric") x <- array(x, dim = c(length(x), 1)): the
## condition has length > 1 and only the first element will be used
# A trank plot ranks all samples across chains and draws one rank histogram
# per chain. For a healthy model like m9.1 the histograms overlap and stay
# close to uniform. For m9.2, individual chains hold mostly low or mostly
# high ranks for long stretches, another view of the same poor mixing the
# trace plot showed.
9M1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Use ulam to estimate the posterior. Does the different prior have any detectible influence on the posterior distribution of sigma? Why or why not?
data(rugged)
d <- rugged
d$log_gdp <- log(d$rgdppc_2000)
dd <- d[ complete.cases(d$rgdppc_2000) , ]
dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)
dd$cid <- ifelse( dd$cont_africa==1 , 1 , 2 )
m8.3 <- quap(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dd )
precis(m8.3 , depth=2)
## mean sd 5.5% 94.5%
## a[1] 0.8865660 0.015675078 0.86151419 0.91161779
## a[2] 1.0505679 0.009936208 1.03468791 1.06644787
## b[1] 0.1325350 0.074201585 0.01394649 0.25112342
## b[2] -0.1425568 0.054747270 -0.23005354 -0.05506012
## sigma 0.1094897 0.005934696 0.10000487 0.11897445
pairs(m8.3)
m8.3_unif <- quap(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dunif( 0 , 1 )
    ) , data=dd )
precis(m8.3_unif , depth=2)
## mean sd 5.5% 94.5%
## a[1] 0.8865646 0.015680645 0.86150390 0.91162530
## a[2] 1.0505685 0.009939796 1.03468276 1.06645419
## b[1] 0.1325028 0.074227013 0.01387368 0.25113189
## b[2] -0.1425733 0.054766564 -0.23010089 -0.05504579
## sigma 0.1095296 0.005940112 0.10003617 0.11902306
pairs(m8.3_unif)
# The uniform prior has no detectable influence: the posterior means and 89%
# intervals agree almost exactly with the dexp(1) fit. With 170 observations
# the likelihood dominates, and both priors are effectively flat over the
# narrow region around 0.11 where the posterior for sigma concentrates.
9M2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?
m8.3_exp <- ulam(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dexp( 0.3 ) ,
        b[cid] ~ dexp( 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dat_slim , chains=4 , cores=4 )
precis( m8.3_exp , depth=2 )
# The exponential prior restricts both slopes to positive values. b[1], whose
# posterior already sits above zero, changes little, but b[2] is pulled toward
# negative values by the data, so its posterior gets squeezed up against zero:
# the prior rules out a negative ruggedness-GDP relationship in advance.
pairs(m8.3_exp)
9M3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?
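# A minimal sketch of the experiment (assuming the m9.1 model and the
# dat_slim list from above): hold the post-warmup samples fixed at 1000 per
# chain, vary only warmup, and compare the n_eff column reported by precis.
warmups <- c( 5 , 10 , 100 , 500 , 1000 )
n_eff_by_warmup <- sapply( warmups , function(w) {
    fit <- ulam(
        alist(
            log_gdp_std ~ dnorm( mu , sigma ) ,
            mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
            a[cid] ~ dnorm( 1 , 0.1 ) ,
            b[cid] ~ dnorm( 0 , 0.3 ) ,
            sigma ~ dexp( 1 )
        ) , data=dat_slim , chains=1 ,
        warmup=w , iter=w+1000 )
    precis( fit , depth=2 )$n_eff
} )
# For a simple, well-behaved posterior like this one, n_eff typically
# plateaus after a few hundred warmup iterations; with very short warmup
# (5 or 10) step-size adaptation is unfinished and n_eff collapses.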
9H1. Run the model below and then inspect the posterior distribution and explain what it is accomplishing.
mp <- ulam(
    alist(
        a ~ dnorm( 0 , 1 ) ,
        b ~ dcauchy( 0 , 1 )
    ) , data=list(y=1) , chains=1 )
##
## SAMPLING FOR MODEL '3bd3f4d287e9cccab124308e5415245c' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 1e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%] (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%] (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%] (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%] (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%] (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.019602 seconds (Warm-up)
## Chain 1: 0.011606 seconds (Sampling)
## Chain 1: 0.031208 seconds (Total)
## Chain 1:
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?
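# Nothing in the model connects y to the parameters, so the chain is simply
# sampling from the joint prior. A short follow-up to inspect it:
precis( mp )
traceplot( mp )
# The trace for a looks like a healthy chain exploring a Normal(0,1):
# stationary and well mixed. The trace for b is mostly quiet but punctuated
# by extreme spikes, because the Cauchy's thick tails keep producing
# occasional enormous values. Since the Cauchy has no expected value, the
# running mean of b never settles down, no matter how long the chain is run.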