Chapter 9 - Markov Chain Monte Carlo

This chapter has been an informal introduction to Markov chain Monte Carlo (MCMC) estimation. The goal has been to introduce the purpose and approach of MCMC algorithms. The major algorithms introduced were the Metropolis, Gibbs sampling, and Hamiltonian Monte Carlo algorithms. Each has its advantages and disadvantages. The ulam function in the rethinking package was introduced; it uses the Stan (mc-stan.org) Hamiltonian Monte Carlo engine to fit models as they are defined in this book. General advice about diagnosing poor MCMC fits was illustrated with a couple of pathological examples.

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes/answers the question or activity requested. Make sure to include plots if the question requests them.

Finally, upon completion, name your final output .html file YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

9-1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Visualize the priors. Use ulam to estimate the posterior. Visualize the posteriors for both models. Does the different prior have any detectable influence on the posterior distribution of sigma? Why or why not?

Solution

The change of prior has no detectable influence on the posterior distribution of sigma: the coefficient plots below are essentially identical for the two models. The data set is large enough that the likelihood dominates either prior, and both priors place plenty of mass over the region (around 0.11) where the posterior for sigma concentrates.

# Visualize the two priors for sigma
par(mfrow=c(1,2))
plot(density(rexp(n=10000, rate=1)), main="Sigma ~ dexp(1)")
plot(density(runif(n=10000, min=0, max=1)), main="Sigma ~ dunif(0, 1)")

library(rethinking)

data(rugged)
d = rugged
d$log_gdp = log(d$rgdppc_2000)
dd = d[complete.cases(d$rgdppc_2000),]          # keep countries with GDP data
dd$log_gdp_std = dd$log_gdp / mean(dd$log_gdp)  # log GDP as a proportion of the mean
dd$rugged_std = dd$rugged / max(dd$rugged)      # ruggedness scaled to [0, 1]
dd$cid = ifelse(dd$cont_africa==1, 1, 2)        # 1 = Africa, 2 = not Africa
dat_slim = list(
  log_gdp_std = dd$log_gdp_std,
  rugged_std = dd$rugged_std,
  cid = as.integer(dd$cid)
)

# Model with the chapter's original prior, sigma ~ dexp(1)
m9.1 = ulam(
  alist(
    log_gdp_std ~ dnorm(mu, sigma),
    mu <- a[cid] + b[cid] * (rugged_std - 0.215),
    a[cid] ~ dnorm(1, 0.1),
    b[cid] ~ dnorm(0, 0.3),
    sigma ~ dexp(1)
  ), data=dat_slim, chains=1, log_lik=TRUE
)
## 
## SAMPLING FOR MODEL 'bdaed75a9993557c6ccccabd1899b8a7' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0.001 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 10 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.112 seconds (Warm-up)
## Chain 1:                0.066 seconds (Sampling)
## Chain 1:                0.178 seconds (Total)
## Chain 1:
precis(m9.1, depth=2)
##             mean          sd        5.5%      94.5%    n_eff     Rhat4
## a[1]   0.8857555 0.015300301  0.86139505  0.9110877 751.7871 0.9980736
## a[2]   1.0508784 0.010410293  1.03457897  1.0672301 949.7562 0.9980664
## b[1]   0.1303580 0.074199788  0.01765176  0.2497502 529.0053 1.0046599
## b[2]  -0.1450097 0.052513968 -0.22496643 -0.0601791 401.0908 0.9987536
## sigma  0.1116964 0.006024636  0.10207057  0.1216895 673.2512 0.9981600
m9.1.1 = ulam(
  alist(
    log_gdp_std ~ dnorm(mu, sigma),
    mu <- a[cid] + b[cid] * (rugged_std - 0.215),
    a[cid] ~ dnorm(1, 0.1),
    b[cid] ~ dnorm(0, 0.3),
    sigma ~ dunif(0, 1)
  ), data=dat_slim, chains=1, log_lik=TRUE
)
## 
## SAMPLING FOR MODEL 'b74c3cc26192beb2326d52dceba36e01' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.151 seconds (Warm-up)
## Chain 1:                0.098 seconds (Sampling)
## Chain 1:                0.249 seconds (Total)
## Chain 1:
precis(m9.1.1, depth=2)
##             mean          sd        5.5%       94.5%     n_eff     Rhat4
## a[1]   0.8868454 0.015210174  0.86379269  0.91169251  573.7770 0.9999633
## a[2]   1.0507580 0.011239953  1.03331799  1.06774254 1123.3144 1.0003195
## b[1]   0.1364557 0.075083108  0.01736333  0.25847721  420.1860 0.9998623
## b[2]  -0.1428537 0.056776958 -0.22807934 -0.05187787  871.2587 0.9990167
## sigma  0.1116736 0.006178162  0.10274827  0.12112048  496.6782 0.9980006
compare(m9.1, m9.1.1)
##             WAIC       SE     dWAIC       dSE    pWAIC    weight
## m9.1   -259.8294 14.60313 0.0000000        NA 4.666286 0.6043951
## m9.1.1 -258.9817 14.61815 0.8476246 0.3458675 5.112799 0.3956049
plot(coeftab(m9.1, m9.1.1), par=c("a[1]", "a[2]", "b[1]", "b[2]", "sigma"))
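
The question also asks to visualize the posteriors themselves. One way to do this (a minimal sketch using extract.samples() and dens() from the rethinking package) is to overlay the posterior densities of sigma from the two fits; the two curves should sit almost on top of each other.

# Overlay the posterior densities of sigma from the two models
post_exp  = extract.samples(m9.1)
post_unif = extract.samples(m9.1.1)
dens(post_exp$sigma, xlab="sigma", main="Posterior of sigma")
dens(post_unif$sigma, add=TRUE, col="blue")
legend("topright", legend=c("dexp(1)", "dunif(0, 1)"), col=c("black", "blue"), lty=1)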

9-2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?

Solution

Changing the prior for b[cid] to dexp(0.3) has a substantial impact on the posterior. Under the original Normal(0, 0.3) prior, the posterior for b[2] (the ruggedness slope for non-African nations) was clearly negative. The exponential prior assigns zero probability to negative values, so the posterior for b[2] is forced to be positive and ends up piled just above zero. Because the data favor a negative slope outside Africa, a prior that rules out negative slopes is a poor choice here.

# Plotting priors
par(mfrow=c(1,2))
plot(density(rnorm(n=10000, mean=0, sd=0.3)), main="b[cid] ~ dnorm(0, 0.3)")
plot(density(rexp(n=10000, 0.3)), main="b[cid] ~ dexp(0.3)")

m9.1.2 = ulam(
  alist(
    log_gdp_std ~ dnorm(mu, sigma),
    mu <- a[cid] + b[cid] * (rugged_std - 0.215),
    a[cid] ~ dnorm(1, 0.1),
    b[cid] ~ dexp(0.3),
    sigma ~ dexp(1)
  ), data=dat_slim, chains=1, log_lik=TRUE
)
## 
## SAMPLING FOR MODEL 'eac852e8eafd54ad026f0bd6c3c1dd36' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.45 seconds (Warm-up)
## Chain 1:                0.202 seconds (Sampling)
## Chain 1:                0.652 seconds (Total)
## Chain 1:
precis(m9.1.2, depth=2)
##             mean          sd        5.5%      94.5%    n_eff     Rhat4
## a[1]  0.88725329 0.016142766 0.860644695 0.91225169 395.2858 0.9981514
## a[2]  1.04799225 0.009377046 1.032572037 1.06281480 512.6367 0.9980999
## b[1]  0.14875048 0.068589783 0.038957672 0.25925547 319.9172 0.9980381
## b[2]  0.01813967 0.017310424 0.001030471 0.05098864 422.5725 1.0067248
## sigma 0.11373512 0.006049616 0.104636331 0.12422963 384.9223 1.0010652
compare(m9.1, m9.1.2)
##             WAIC       SE    dWAIC      dSE    pWAIC     weight
## m9.1   -259.8294 14.60313 0.000000       NA 4.666286 0.94886903
## m9.1.2 -253.9876 14.53729 5.841761 7.261685 3.188021 0.05113097
plot(coeftab(m9.1, m9.1.2), par=c("a[1]", "a[2]", "b[1]", "b[2]", "sigma"))
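
To see the truncation directly, here is a minimal sketch (again using extract.samples() and dens() from rethinking) that overlays the posterior of b[2] from the two models; under the exponential prior the density is piled up just above zero.

# Compare the posterior of b[2] under the Normal and exponential priors
post_norm = extract.samples(m9.1)    # b[cid] ~ dnorm(0, 0.3)
post_expb = extract.samples(m9.1.2)  # b[cid] ~ dexp(0.3)
dens(post_norm$b[,2], xlab="b[2]", main="Posterior of b[2]")
dens(post_expb$b[,2], add=TRUE, col="red")
abline(v=0, lty=2)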

9-3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?

Solution

The model performs dramatically worse when only 5 warmup iterations are used (m9.1.3): its WAIC is hundreds of points higher than the other fits. Raising the warmup to just 10 iterations (m9.1.4) already gives estimates much closer to the fit with 900 warmup iterations (m9.1.5), although the comparison below still clearly favors the longer warmup. For a simple model like this, a few hundred warmup iterations appear to be plenty; see the n_eff check after the comparison below.

m9.1.3 = ulam(
  alist(
    log_gdp_std ~ dnorm(mu, sigma),
    mu <- a[cid] + b[cid] * (rugged_std - 0.215),
    a[cid] ~ dnorm(1, 0.1),
    b[cid] ~ dnorm(0, 0.3),
    sigma ~ dexp(1)
  ), data=dat_slim, chains=1, log_lik=TRUE, iter=505, warmup=5
)
## 
## SAMPLING FOR MODEL 'bdaed75a9993557c6ccccabd1899b8a7' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: WARNING: No variance estimation is
## Chain 1:          performed for num_warmup < 20
## Chain 1: 
## Chain 1: Iteration:   1 / 505 [  0%]  (Warmup)
## Chain 1: Iteration:   6 / 505 [  1%]  (Sampling)
## Chain 1: Iteration:  55 / 505 [ 10%]  (Sampling)
## Chain 1: Iteration: 105 / 505 [ 20%]  (Sampling)
## Chain 1: Iteration: 155 / 505 [ 30%]  (Sampling)
## Chain 1: Iteration: 205 / 505 [ 40%]  (Sampling)
## Chain 1: Iteration: 255 / 505 [ 50%]  (Sampling)
## Chain 1: Iteration: 305 / 505 [ 60%]  (Sampling)
## Chain 1: Iteration: 355 / 505 [ 70%]  (Sampling)
## Chain 1: Iteration: 405 / 505 [ 80%]  (Sampling)
## Chain 1: Iteration: 455 / 505 [ 90%]  (Sampling)
## Chain 1: Iteration: 505 / 505 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0 seconds (Warm-up)
## Chain 1:                0.021 seconds (Sampling)
## Chain 1:                0.021 seconds (Total)
## Chain 1:
m9.1.4 = ulam(
  alist(
    log_gdp_std ~ dnorm(mu, sigma),
    mu <- a[cid] + b[cid] * (rugged_std - 0.215),
    a[cid] ~ dnorm(1, 0.1),
    b[cid] ~ dnorm(0, 0.3),
    sigma ~ dexp(1)
  ), data=dat_slim, chains=1, log_lik=TRUE, iter=510, warmup=10
)
## 
## SAMPLING FOR MODEL 'bdaed75a9993557c6ccccabd1899b8a7' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: WARNING: No variance estimation is
## Chain 1:          performed for num_warmup < 20
## Chain 1: 
## Chain 1: Iteration:   1 / 510 [  0%]  (Warmup)
## Chain 1: Iteration:  11 / 510 [  2%]  (Sampling)
## Chain 1: Iteration:  61 / 510 [ 11%]  (Sampling)
## Chain 1: Iteration: 112 / 510 [ 21%]  (Sampling)
## Chain 1: Iteration: 163 / 510 [ 31%]  (Sampling)
## Chain 1: Iteration: 214 / 510 [ 41%]  (Sampling)
## Chain 1: Iteration: 265 / 510 [ 51%]  (Sampling)
## Chain 1: Iteration: 316 / 510 [ 61%]  (Sampling)
## Chain 1: Iteration: 367 / 510 [ 71%]  (Sampling)
## Chain 1: Iteration: 418 / 510 [ 81%]  (Sampling)
## Chain 1: Iteration: 469 / 510 [ 91%]  (Sampling)
## Chain 1: Iteration: 510 / 510 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.002 seconds (Warm-up)
## Chain 1:                0.044 seconds (Sampling)
## Chain 1:                0.046 seconds (Total)
## Chain 1:
m9.1.5 = ulam(
  alist(
    log_gdp_std ~ dnorm(mu, sigma),
    mu <- a[cid] + b[cid] * (rugged_std - 0.215),
    a[cid] ~ dnorm(1, 0.1),
    b[cid] ~ dnorm(0, 0.3),
    sigma ~ dexp(1)
  ), data=dat_slim, chains=1, log_lik=TRUE, iter=1400, warmup=900
)
## 
## SAMPLING FOR MODEL 'bdaed75a9993557c6ccccabd1899b8a7' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 1400 [  0%]  (Warmup)
## Chain 1: Iteration:  140 / 1400 [ 10%]  (Warmup)
## Chain 1: Iteration:  280 / 1400 [ 20%]  (Warmup)
## Chain 1: Iteration:  420 / 1400 [ 30%]  (Warmup)
## Chain 1: Iteration:  560 / 1400 [ 40%]  (Warmup)
## Chain 1: Iteration:  700 / 1400 [ 50%]  (Warmup)
## Chain 1: Iteration:  840 / 1400 [ 60%]  (Warmup)
## Chain 1: Iteration:  901 / 1400 [ 64%]  (Sampling)
## Chain 1: Iteration: 1040 / 1400 [ 74%]  (Sampling)
## Chain 1: Iteration: 1180 / 1400 [ 84%]  (Sampling)
## Chain 1: Iteration: 1320 / 1400 [ 94%]  (Sampling)
## Chain 1: Iteration: 1400 / 1400 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.251 seconds (Warm-up)
## Chain 1:                0.147 seconds (Sampling)
## Chain 1:                0.398 seconds (Total)
## Chain 1:
compare(m9.1.3, m9.1.4, m9.1.5)
##             WAIC       SE     dWAIC      dSE     pWAIC        weight
## m9.1.5 -260.2485 14.48725   0.00000       NA  4.387048  9.985317e-01
## m9.1.4 -247.2041 18.74635  13.04443 10.36013 10.077901  1.468253e-03
## m9.1.3  285.0362 21.33086 545.28472 23.30220  3.502362 3.911013e-119
plot(coeftab(m9.1.3, m9.1.4, m9.1.5), par=c("a[1]", "a[2]", "b[1]", "b[2]", "sigma"))
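
The question asks specifically about n_eff, which the WAIC table above does not show. A quick way to compare it is to inspect the precis() output of each fit (a sketch; expect much lower n_eff, and possibly Rhat above 1, for the 5-warmup fit):

# Compare effective sample sizes across the three warmup settings
precis(m9.1.3, depth=2)   # warmup = 5
precis(m9.1.4, depth=2)   # warmup = 10
precis(m9.1.5, depth=2)   # warmup = 900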

9-4. Run the model below and then inspect the posterior distribution and explain what it is accomplishing.

Solution

Because the model contains no likelihood for the data y, there is nothing for the data to update: the "posterior" is simply a set of samples from the joint prior, a ~ Normal(0, 1) and b ~ Cauchy(0, 1). This is why the sampled distributions match the prior plots; see the sketch after the sampling output below.

mp <- ulam(
  alist(
    a ~ dnorm(0, 1),
    b ~ dcauchy(0, 1)
  ), data=list(y=1), chains=1
)
## 
## SAMPLING FOR MODEL 'bcf56ee89f6cf2a4224a4139ff01c7d4' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.011 seconds (Warm-up)
## Chain 1:                0.04 seconds (Sampling)
## Chain 1:                0.051 seconds (Total)
## Chain 1:
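
As a quick check of this claim, the posterior samples can be plotted against the prior densities (a minimal sketch using extract.samples() and dens(); the plot for b is restricted to a finite range because of the heavy Cauchy tails):

post = extract.samples(mp)
par(mfrow=c(1,2))
dens(post$a, main="a ~ dnorm(0, 1)")
curve(dnorm(x, 0, 1), add=TRUE, col="blue")    # prior for a
dens(post$b, main="b ~ dcauchy(0, 1)", xlim=c(-10, 10))
curve(dcauchy(x, 0, 1), add=TRUE, col="blue")  # prior for b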

Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?

Solution

The trace plot for a looks healthy: the samples wander steadily around 0, consistent with its Normal(0, 1) prior. The trace plot for b, in contrast, stays near 0 most of the time but shows occasional extreme spikes far from it. The Cauchy distribution is so heavy-tailed that its expected value and variance are undefined, so the chain keeps making large excursions and the running mean of the samples never settles down. That is exactly what the spiky trace plot for b reflects; the plots can be reproduced as sketched below.
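
The trace plots themselves can be drawn with rethinking's traceplot() (a minimal sketch; trankplot() gives the rank-histogram version):

traceplot(mp)   # a: steady wandering around 0; b: occasional extreme spikes from the heavy Cauchy tails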

9-5. Recall the divorce rate example from Chapter 5. Repeat that analysis, using ulam this time, fitting models m5.1, m5.2, and m5.3. Use compare to compare the models on the basis of WAIC or PSIS. To use WAIC or PSIS with ulam, you need to add the argument log_lik=TRUE. Explain the model comparison results.

Solution

The model with median age at marriage as the only predictor (m5.1) has the lowest WAIC, and the model with both age and marriage rate (m5.3) is essentially tied with it: once age is in the model, marriage rate carries almost no additional predictive information, so the extra parameter only adds a small penalty. The model with marriage rate alone (m5.2) does clearly worse. This matches the causal story from Chapter 5, in which marriage rate is associated with divorce mainly through age at marriage, and it is consistent with the quap results from that chapter.

data("WaffleDivorce")
d = data.frame(
  D=standardize(WaffleDivorce$Divorce),
  M=standardize(WaffleDivorce$Marriage),
  A=standardize(WaffleDivorce$MedianAgeMarriage)
)

m5.1 = ulam(
  alist(
    D ~ dnorm(mu, sigma),
    mu <- a + bA * A,
    a ~ dnorm(0, 0.2),
    bA ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ), data=d, chains=1, log_lik=TRUE
)
## 
## SAMPLING FOR MODEL '5605ea5f0769a3b35ef3557a0659786d' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.037 seconds (Warm-up)
## Chain 1:                0.016 seconds (Sampling)
## Chain 1:                0.053 seconds (Total)
## Chain 1:
m5.2 = ulam(
  alist(
    D ~ dnorm(mu, sigma),
    mu <- a + bM * M,
    a ~ dnorm(0, 0.2),
    bM ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ), data=d, chains=1, log_lik=TRUE
)
## 
## SAMPLING FOR MODEL '5562557d1eb945f85e38992afb87f838' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.068 seconds (Warm-up)
## Chain 1:                0.054 seconds (Sampling)
## Chain 1:                0.122 seconds (Total)
## Chain 1:
m5.3 = ulam(
  alist(
    D ~ dnorm(mu, sigma),
    mu <- a + bM * M + bA * A,
    a ~ dnorm(0, 0.2),
    bA ~ dnorm(0, 0.5),
    bM ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ), data=d, chains=1, log_lik=TRUE
)
## 
## SAMPLING FOR MODEL '2c20f1358db2f3b1a0834492a0529c0f' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
## Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
## Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
## Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
## Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
## Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.062 seconds (Warm-up)
## Chain 1:                0.058 seconds (Sampling)
## Chain 1:                0.12 seconds (Total)
## Chain 1:
compare(m5.1, m5.2, m5.3)
##          WAIC        SE     dWAIC       dSE    pWAIC      weight
## m5.1 126.4034 12.907649  0.000000        NA 3.987601 0.677345224
## m5.3 127.8943 12.697266  1.490878 0.7993251 4.893329 0.321417945
## m5.2 139.0147  9.603453 12.611258 9.4662594 2.747102 0.001236831
plot(coeftab(m5.1, m5.2, m5.3), par=c("a", "bA", "bM", "sigma"))
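
The question allows either WAIC or PSIS; the same comparison can also be run with PSIS (a sketch assuming compare()'s func argument, available in current versions of rethinking), which should give the same ordering:

compare(m5.1, m5.2, m5.3, func=PSIS)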