This chapter has been an introduction to the motivation, implementation, and interpretation of basic multilevel models. It focused on varying intercepts, which achieve better estimates of baseline differences among clusters in the data. They achieve better estimates because they simultaneously model the population of clusters and use inferences about the population to pool information among parameters. From another perspective, varying intercepts are adaptively regularized parameters, relying upon a prior that is itself learned from the data.
Place each answer inside a code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Make sure to include plots if the question requests them.
Finally, upon completion, name your final output .html file as: YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas.
Each question is worth 5 points.
13-1. Revisit the Reed frog survival data, data(reedfrogs), and add the predation and size treatment variables to the varying intercepts model. Consider models with either main effect alone, both main effects, as well as a model including both and their interaction. Instead of focusing on inferences about these two predictor variables, focus on the inferred variation across tanks. Explain why it changes as it does across models. Plot the sigma estimates.
data(reedfrogs)
d <- reedfrogs
dat <- list(
surv_ = d$surv,
dens_ = d$density,
tank_ = 1:nrow(d),
pred_ = ifelse(d$pred == "no", 0L, 1L),
size_ = ifelse(d$size == "small", 1L, 2L)
)
# varying intercepts only
m_tank <- ulam(
alist(
surv_ ~ dbinom(dens_, p),
logit(p) <- a[tank_],
a[tank_] ~ dnorm(a_1, sigma),
a_1 ~ dnorm(0, 1.5),
sigma ~ dexp(1)
),
data = dat, chains = 4, cores = 4, log_lik = TRUE, iter = 2e3
)
# predation model
m_pred <- ulam(
alist(
surv_ ~ dbinom(dens_, p),
logit(p) <- a[tank_] + bp * pred_,
a[tank_] ~ dnorm(a_1, sigma),
bp ~ dnorm(-0.5, 1),
a_1 ~ dnorm(0, 1.5),
sigma ~ dexp(1)
),
data = dat, chains = 4, cores = 4, log_lik = TRUE, iter = 2e3
)
# size model
m_size <- ulam(
alist(
surv_ ~ dbinom(dens_, p),
logit(p) <- a[tank_] + s[size_],
a[tank_] ~ dnorm(a_1, sigma),
s[size_] ~ dnorm(0, 0.5),
a_1 ~ dnorm(0, 1.5),
sigma ~ dexp(1)
),
data = dat, chains = 4, cores = 4, log_lik = TRUE, iter = 2e3
)
# both main effects
m_pred_size <- ulam(
alist(
surv_ ~ dbinom(dens_, p),
logit(p) <- a[tank_] + bp * pred_ + s[size_],
a[tank_] ~ dnorm(a_1, sigma),
bp ~ dnorm(-0.5, 1),
s[size_] ~ dnorm(0, 0.5),
a_1 ~ dnorm(0, 1.5),
sigma ~ dexp(1)
),
data = dat, chains = 4, cores = 4, log_lik = TRUE, iter = 2e3
)
# interaction model (tank intercepts written in non-centered form)
m_Interaction <- ulam(
alist(
surv_ ~ dbinom(dens_, p),
logit(p) <- a_1 + a[tank_] * sigma + bp[size_] * pred_ + s[size_],
a[tank_] ~ dnorm(0, 1),
bp[size_] ~ dnorm(-0.5, 1),
s[size_] ~ dnorm(0, 0.5),
a_1 ~ dnorm(0, 1.5),
sigma ~ dexp(1)
),
data = dat, chains = 4, cores = 4, log_lik = TRUE, iter = 2e3
)
plot(coeftab(m_tank, m_pred, m_size, m_pred_size, m_Interaction),
pars = "sigma",
labels = c("Tank", "Predation", "Size", "Pred+Size", "Interaction")
)
## Based on the plot, the variation across tanks (sigma) is largest in the
## models without predation and drops substantially once predation is
## included (m_pred, m_pred_size, and the interaction model), while adding
## size alone changes it very little. Predation explains much of the
## baseline difference among tanks, so once it enters the model there is
## less residual variation left for the tank intercepts to absorb.
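As a quick numeric check on the plot (a sketch, assuming the five fits above sampled cleanly), we can tabulate the posterior mean of sigma for each model:
sapply(
  list(tank = m_tank, pred = m_pred, size = m_size,
       pred_size = m_pred_size, interaction = m_Interaction),
  function(m) mean(extract.samples(m)$sigma)
)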
13-2. Compare the models you fit just above, using WAIC. Can you reconcile the differences in WAIC with the posterior distributions of the models? Show the WAIC table.
compare(m_tank, m_pred, m_size, m_pred_size, m_Interaction)
## WAIC SE dWAIC dSE pWAIC weight
## m_pred 198.0546 8.835002 0.000000 NA 18.75039 0.41312207
## m_Interaction 199.2590 9.090490 1.204458 3.266913 18.94728 0.22622140
## m_size 200.1715 7.101450 2.116922 5.524933 20.91280 0.14334897
## m_tank 200.2533 7.089931 2.198756 5.521168 21.01920 0.13760195
## m_pred_size 201.3454 8.760122 3.290806 1.890514 19.82232 0.07970561
precis(m_pred, 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 2.4647727 0.6952376 1.40388526 3.599102 3987.9499 1.0003282
## a[2] 2.9174495 0.7379080 1.77255084 4.169775 3306.1781 1.0003499
## a[3] 1.6803599 0.6226609 0.71354629 2.683554 2640.3650 1.0003996
## a[4] 2.9161439 0.7249866 1.84098580 4.096411 2907.4196 0.9998941
## a[5] 2.4545632 0.6847622 1.41593283 3.557498 3007.5057 0.9996122
## a[6] 2.4590547 0.6749440 1.43196418 3.564047 3404.0209 0.9995543
## a[7] 2.9420699 0.7238278 1.81721381 4.137820 3225.9000 1.0001161
## a[8] 2.4573084 0.6668489 1.40287302 3.523000 3790.4824 0.9997666
## a[9] 2.1757528 0.5819694 1.26131546 3.094829 1653.4430 1.0017691
## a[10] 3.5287248 0.6106075 2.58871672 4.517267 1752.0054 1.0007389
## a[11] 2.9670618 0.5692518 2.09850107 3.905784 1787.6390 1.0031904
## a[12] 2.7007000 0.5546406 1.80135924 3.589798 1608.8656 1.0037648
## a[13] 2.9745846 0.5948429 2.04220653 3.947679 1857.2577 1.0012179
## a[14] 2.4338278 0.5625507 1.52076374 3.316742 1843.1397 1.0016549
## a[15] 3.5438764 0.6236512 2.58860810 4.551370 1877.2528 1.0029979
## a[16] 3.5313294 0.6206718 2.57782671 4.568920 1772.4874 1.0014523
## a[17] 2.8690758 0.6171239 1.92048049 3.897605 3236.1705 0.9994930
## a[18] 2.5528366 0.5599126 1.70340666 3.482916 3267.2513 0.9995323
## a[19] 2.2546344 0.5200254 1.48008313 3.119140 3747.8313 1.0002098
## a[20] 3.2434261 0.6648689 2.24145288 4.376025 2984.7512 1.0005459
## a[21] 2.5365219 0.5428454 1.70921011 3.453511 3345.0420 1.0000038
## a[22] 2.5329612 0.5736882 1.64257419 3.460167 3730.7313 0.9998396
## a[23] 2.5396344 0.5690498 1.66667827 3.499366 2838.7888 1.0010272
## a[24] 1.9844232 0.4905518 1.20809151 2.784993 3078.3076 0.9994759
## a[25] 1.5210218 0.4863898 0.73611202 2.293023 1222.0251 1.0035503
## a[26] 2.4757382 0.4484098 1.75948420 3.200779 1231.0092 1.0027788
## a[27] 1.2022970 0.4978464 0.40003873 1.973584 1281.1909 1.0027294
## a[28] 1.9499827 0.4571650 1.21452215 2.652172 1118.9236 1.0032457
## a[29] 2.4801602 0.4530589 1.75674816 3.204157 1238.0795 1.0030389
## a[30] 3.4837794 0.4898281 2.70666041 4.278047 1170.0479 1.0042474
## a[31] 1.8182245 0.4559256 1.09862705 2.548220 1154.9053 1.0041712
## a[32] 2.0958279 0.4465247 1.37364312 2.797921 1148.0583 1.0025095
## a[33] 3.0504555 0.5957357 2.16897337 4.020487 2704.5744 0.9991879
## a[34] 2.7544391 0.5445857 1.93592668 3.659010 3012.2109 1.0004761
## a[35] 2.7385767 0.5355668 1.94532011 3.667404 2998.0470 1.0009770
## a[36] 2.2484590 0.4646935 1.55130618 3.002345 3699.1504 0.9994173
## a[37] 2.2432852 0.4635065 1.52940521 3.008207 3643.9825 0.9998991
## a[38] 3.3994574 0.6544312 2.44295694 4.504340 2815.5431 1.0006661
## a[39] 2.7451185 0.5333200 1.93581002 3.657719 3501.4691 0.9996611
## a[40] 2.4824544 0.4913317 1.72299540 3.314589 3636.3034 1.0000735
## a[41] 0.8823506 0.4890579 0.08922273 1.623627 1146.0041 1.0032341
## a[42] 1.8630045 0.4156786 1.19126262 2.518868 1109.7167 1.0035627
## a[43] 1.9693255 0.4187114 1.30628215 2.627660 1064.0231 1.0030943
## a[44] 2.0601231 0.4134340 1.39889262 2.711463 1066.0233 1.0041084
## a[45] 2.8625014 0.4165446 2.19670704 3.528870 993.6674 1.0032353
## a[46] 1.8569271 0.4205728 1.17960791 2.537196 1045.2417 1.0050631
## a[47] 3.9675244 0.4673035 3.24372679 4.739112 1154.4145 1.0044854
## a[48] 2.3622325 0.4083493 1.70632860 3.016891 1045.8550 1.0043405
## bp -2.4001849 0.2841143 -2.84602919 -1.942607 546.8515 1.0084440
## a_1 2.5100216 0.2228753 2.15733221 2.865181 667.3396 1.0056375
## sigma 0.8223410 0.1436861 0.61267314 1.068078 1102.9394 1.0016381
precis(m_size, 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## a[1] 2.15350623 0.9506941 0.6918847 3.74789851 1889.4919 1.0002546
## a[2] 3.03958525 1.1196331 1.4293091 4.96793272 1994.0802 1.0004102
## a[3] 1.04002858 0.7888208 -0.2117637 2.30698222 1510.8369 1.0002437
## a[4] 3.04685775 1.1492970 1.3538683 4.96219474 2164.1801 0.9994377
## a[5] 1.94977758 0.9488701 0.5175946 3.53464761 1950.8698 1.0001678
## a[6] 1.93257883 0.9570296 0.5038159 3.55556394 2165.7644 0.9998081
## a[7] 2.87150426 1.1645867 1.1661866 4.86356088 2441.2464 1.0007904
## a[8] 1.94066417 0.9538229 0.5061137 3.50568880 1839.5046 1.0017211
## a[9] -0.13182850 0.7147069 -1.2630464 0.99616412 1278.6048 0.9999972
## a[10] 2.15746710 0.9328504 0.7813469 3.77618116 1928.0012 0.9995884
## a[11] 1.02709741 0.7596550 -0.1523939 2.26113883 1668.5405 0.9995788
## a[12] 0.61986190 0.7430263 -0.5367493 1.83199209 1365.2365 0.9997463
## a[13] 0.78306857 0.7660577 -0.4003882 2.05303295 1605.4339 1.0011496
## a[14] -0.01467788 0.7114808 -1.1513377 1.11571214 1425.7326 1.0004373
## a[15] 1.95725898 0.9581614 0.4659517 3.55576028 2157.7747 0.9992512
## a[16] 1.92578589 0.9342534 0.5263456 3.43794574 1761.9210 1.0001213
## a[17] 2.92097970 0.8804404 1.6522853 4.38766250 1583.0795 1.0000710
## a[18] 2.44955724 0.7864458 1.2402323 3.74781178 1361.7679 1.0000818
## a[19] 2.03809065 0.6891830 0.9533577 3.17184384 1328.4849 0.9995486
## a[20] 3.64641202 1.0550736 2.1290554 5.48307516 1998.7110 0.9994412
## a[21] 2.17571778 0.7402165 1.0411386 3.40546175 1472.4838 1.0006146
## a[22] 2.16644315 0.7635976 1.0185431 3.44760983 1414.5502 1.0007397
## a[23] 2.18192767 0.7528796 1.0245086 3.44798839 1511.5361 1.0015097
## a[24] 1.48148314 0.6304587 0.5067721 2.51096075 1299.5973 1.0006467
## a[25] -0.95084195 0.5930326 -1.9086760 -0.01167525 1013.6566 1.0011908
## a[26] 0.21349638 0.5540452 -0.6734043 1.08994246 940.6885 1.0001170
## a[27] -1.37484398 0.6123505 -2.3862718 -0.41171838 1176.4883 1.0002358
## a[28] -0.41926273 0.5768794 -1.3294399 0.50003448 926.3098 0.9995138
## a[29] -0.06771548 0.5510259 -0.9630942 0.81079291 954.3601 1.0017146
## a[30] 1.22849484 0.6276272 0.2432128 2.24722805 1189.7381 1.0006895
## a[31] -0.86427911 0.5622919 -1.7697180 0.03546810 1013.9180 1.0007418
## a[32] -0.54685700 0.5513997 -1.4529879 0.35393563 948.4523 1.0014124
## a[33] 3.23013832 0.8556184 1.9825947 4.70740233 1528.3446 0.9997092
## a[34] 2.73491246 0.7426734 1.5822994 3.94823737 1364.7399 0.9998977
## a[35] 2.73318290 0.7523553 1.5709613 4.01316354 1279.0826 0.9997346
## a[36] 2.09914740 0.6253119 1.1374455 3.10939887 1016.7786 1.0017142
## a[37] 1.83441079 0.6303126 0.8303983 2.83843426 1089.8487 1.0006321
## a[38] 3.69443366 1.0273200 2.1734710 5.41956836 1957.3306 0.9999286
## a[39] 2.47405765 0.7379777 1.3616386 3.73924284 1420.3897 1.0011226
## a[40] 2.12029140 0.6788875 1.0726166 3.25467119 1211.9759 1.0009930
## a[41] -1.74772598 0.6131866 -2.7434170 -0.83048738 1124.1202 0.9998356
## a[42] -0.52107569 0.5333362 -1.3847914 0.33343719 858.5482 0.9999227
## a[43] -0.40921992 0.5222458 -1.2472690 0.43246029 801.7959 1.0003393
## a[44] -0.28442571 0.5155253 -1.1097773 0.52809015 820.9239 0.9997273
## a[45] 0.35585146 0.5231063 -0.4714386 1.18853496 935.8958 1.0010909
## a[46] -0.80098564 0.5134331 -1.6271718 0.01711221 976.7330 1.0008262
## a[47] 1.85086248 0.6500004 0.8615543 2.92936478 1130.8331 1.0001705
## a[48] -0.22891628 0.5060946 -1.0496086 0.56072130 815.3710 1.0017250
## s[1] 0.24002096 0.3891843 -0.3888330 0.87399655 581.4511 1.0021021
## s[2] -0.05694931 0.3987763 -0.6965580 0.58313618 547.7227 1.0004049
## a_1 1.25614806 0.4091379 0.6066271 1.91702049 555.8332 1.0002572
## sigma 1.60900395 0.2099483 1.2932389 1.96507186 2191.1137 1.0009183
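## Reconciling the table with the posteriors: all five models have very
## similar WAIC, and the dWAIC values are small relative to their dSE, so
## none is clearly better. This makes sense, because the tank intercepts
## can absorb the same variation in every model. What does differ is the
## effective number of parameters: m_pred has the smallest pWAIC (about
## 18.8 versus about 21.0 for m_tank), which matches its smaller sigma
## (about 0.82, versus about 1.61 in m_size). Including predation shrinks
## the tank intercepts toward the mean, so they behave like fewer free
## parameters, even though predictive accuracy barely changes.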
13-3. Re-estimate the basic Reed frog varying intercept model, but now using a Cauchy distribution in place of the Gaussian distribution for the varying intercepts. That is, fit this model: \[\begin{align} s_i ∼ Binomial(n_i, p_i) \\ logit(p_i) = α_{tank[i]} \\ α_{tank} ∼ Cauchy(α, σ) \\ α ∼ Normal(0, 1) \\ σ ∼ Exponential(1) \\ \end{align}\]
(You are likely to see many divergent transitions for this model. Can you figure out why? Can you fix them?) Plot and compare the posterior means of the intercepts, α_tank, to the posterior means produced in the chapter, using the customary Gaussian prior. Can you explain the pattern of differences? Take note of any change in the mean α as well.
m_tank_cauchy <- ulam(
alist(
surv_ ~ dbinom(dens_, p),
logit(p) <- a[tank_],
a[tank_] ~ dcauchy(a_1, sigma),
a_1 ~ dnorm(0, 1.5),
sigma ~ dexp(1)
),
data = dat, chains = 4, cores = 4, log_lik = TRUE,
iter = 2e3, control = list(adapt_delta = 0.99) # raise adapt_delta to reduce divergences
)
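The divergences arise because the heavy-tailed Cauchy prior creates difficult posterior geometry for the tank intercepts. Raising adapt_delta helps, but a more robust fix is a non-centered parameterization. Here is a minimal sketch (the model name m_tank_cauchy_nc and the z parameter are my own additions; the gq> line just recovers the intercepts on the original scale):
m_tank_cauchy_nc <- ulam(
  alist(
    surv_ ~ dbinom(dens_, p),
    logit(p) <- a_1 + sigma * z[tank_], # implies a[tank] ~ Cauchy(a_1, sigma)
    z[tank_] ~ dcauchy(0, 1),
    a_1 ~ dnorm(0, 1.5),
    sigma ~ dexp(1),
    gq> vector[tank_]:a <<- a_1 + sigma * z
  ),
  data = dat, chains = 4, cores = 4, log_lik = TRUE, iter = 2e3
)
This separates the location a_1 and scale sigma from the individual intercepts, which usually removes most (though, with Cauchy tails, not always all) of the divergent transitions.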
s_tank <- apply(extract.samples(m_tank)$a, 2, mean)
s_tank_cauchy <- apply(extract.samples(m_tank_cauchy)$a, 2, mean)
plot(s_tank, s_tank_cauchy,
pch = 16, col = "purple",
xlab = "Gaussian intercept", ylab = "Cauchy intercept"
)
abline(a = 0, b = 1, lty = 2)
## For most tanks the two models give similar posterior means. For the
## tanks with extreme Gaussian intercepts, however, the Cauchy estimates
## become far more extreme. These are the tanks in which every tadpole
## survived, so the likelihood is nearly flat for large α_tank: the
## thin-tailed Gaussian prior pulls those intercepts strongly back toward
## the mean, while the heavy-tailed Cauchy prior applies almost no
## shrinkage and lets them drift upward.
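Consistent with that explanation, the tanks that escape shrinkage under the Cauchy prior should be exactly those with perfect survival; a quick check (assuming d still holds the reedfrogs data at this point):
which(d$surv == d$density)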
13-4. Now use a Student-t distribution with ν = 2 for the intercepts: \[\begin{align} α_{tank} ∼ Student(2, α, σ) \end{align}\]
Refer back to the Student-t example in Chapter 7 (page 234), if necessary. Plot and compare the resulting posterior to both the original model and the Cauchy model in 13-3. Can you explain the differences and similarities in shrinkage in terms of the properties of these distributions?
m_student <- map2stan(
alist(
surv_ ~ dbinom(dens_, p),
logit(p) <- a[tank_],
a[tank_] ~ dstudent(2, a_1, sigma),
a_1 ~ dnorm(0,1),
sigma ~ dcauchy(0,1)
),
data=dat, chains=4, cores=4, warmup=1000,
iter=3000, control=list(adapt_delta=0.99)
)
s_student <- apply(extract.samples(m_student)$a, 2, mean)
plot(s_tank, s_student,
pch = 16, col = "purple",
xlab = "Gaussian intercept", ylab = "student-t intercept"
)
abline(a = 0, b = 1, lty = 2)
plot(s_student, s_tank_cauchy,
pch = 16, col = "purple",
xlab = "student-t intercept", ylab = "Cauchy intercept"
)
abline(a = 0, b = 1, lty = 2)
## The two plots show that for most tanks the Student-t model yields
## nearly the same posterior means as the other two models. For the extreme
## α_tank values, the Student-t estimates are more extreme than the
## Gaussian ones but less extreme than the Cauchy ones. This matches the
## tails of the three distributions: the Student-t with ν = 2 has heavier
## tails than the Gaussian but lighter tails than the Cauchy (which is
## Student-t with ν = 1). So the Gaussian prior shrinks extreme values the
## most, the Student-t less, and the Cauchy least.
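To see why, it can help to compare the tails of the three priors directly on a log scale; a minimal sketch using base R and rethinking's dstudent density:
curve(dnorm(x, 0, 1), from = 0.1, to = 6, log = "y",
      xlab = "x", ylab = "density (log scale)")
curve(dstudent(x, 2, 0, 1), add = TRUE, col = "blue")
curve(dcauchy(x, 0, 1), add = TRUE, col = "red")
legend("topright", legend = c("Normal", "Student-t (nu = 2)", "Cauchy"),
       col = c("black", "blue", "red"), lty = 1)
The Normal density falls off fastest, so values far from the mean are most improbable under it and get pulled in hardest.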
13-5. Sometimes the prior and the data (through the likelihood) are in conflict, because they concentrate around different regions of parameter space. What happens in these cases depends a lot upon the shape of the tails of the distributions. Likewise, the tails of distributions strongly influence whether outliers are shrunk toward the mean or not. I want you to consider four different models fit to one observation at y = 0; this is your data, so do not use any other data set. The models differ only in the distributions assigned to the likelihood and prior. Here are the four models:
\[\begin{align} Model \;NN: y &∼ Normal(μ, 1) & Model \;TN: y &∼ Student(2, μ, 1) \\ μ &∼ Normal(10, 1) & μ &∼ Normal(10, 1) \\ Model \;NT: y &∼ Normal(μ, 1) & Model \;TT: y &∼ Student(2, μ, 1) \\ μ &∼ Student(2, 10, 1) & μ &∼ Student(2, 10, 1) \\ \end{align}\]
Estimate and plot the posterior distributions against the likelihoods for these models and compare them. Can you explain the results, using the properties of the distributions?
# the data are the single observation y = 0 (no other data set is used)
dat_y <- list(y = 0)
m_NN <- ulam(
  alist(
    y ~ dnorm(mu, 1),
    mu ~ dnorm(10, 1)
  ),
  data = dat_y, iter = 2000, chains = 1, cores = 1
)
m_TN <- ulam(
  alist(
    y ~ dstudent(2, mu, 1),
    mu ~ dnorm(10, 1)
  ),
  data = dat_y, iter = 2000, chains = 1, cores = 1
)
m_NT <- ulam(
  alist(
    y ~ dnorm(mu, 1),
    mu ~ dstudent(2, 10, 1)
  ),
  data = dat_y, iter = 2000, chains = 1, cores = 1
)
m_TT <- ulam(
  alist(
    y ~ dstudent(2, mu, 1),
    mu ~ dstudent(2, 10, 1)
  ),
  data = dat_y, iter = 2000, chains = 1, cores = 1
)
precis(m_NN)
precis(m_TN)
precis(m_NT)
precis(m_TT)
s_m_NN <- extract.samples(m_NN)
s_m_TN <- extract.samples(m_TN)
s_m_NT <- extract.samples(m_NT)
s_m_TT <- extract.samples(m_TT)
dens(s_m_NN$mu, col = "purple", xlim = c(-3, 13)) # shared axis so all four posteriors are visible
dens(s_m_TN$mu, col = "blue", add = TRUE)
dens(s_m_NT$mu, col = "red", add = TRUE)
dens(s_m_TT$mu, col = "green", add = TRUE)
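The question also asks for the likelihoods. Viewed as functions of mu given the single observation y = 0, the Normal likelihood is dnorm(mu, 0, 1) and the Student-t likelihood is dstudent(mu, 2, 0, 1), so they can be overlaid on the same plot (a sketch; dashed lines are the likelihoods):
curve(dnorm(x, 0, 1), add = TRUE, lty = 2)                     # Normal likelihood in mu
curve(dstudent(x, 2, 0, 1), add = TRUE, lty = 2, col = "blue") # Student-t likelihood in mu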
## Model NN, with thin tails on both likelihood and prior, compromises:
## its posterior for mu sits near 5, halfway between the observation at 0
## and the prior mean at 10. Model TN follows the prior: the heavy-tailed
## Student-t likelihood treats y = 0 as an outlier, so the posterior
## concentrates near 10. Model NT is the mirror image: the heavy-tailed
## prior gives way to the thin-tailed likelihood, and the posterior
## concentrates near 0. Model TT, with heavy tails on both, yields a
## bimodal posterior with modes near 0 and 10, because neither distribution
## can overrule the other. In general, when prior and likelihood conflict,
## the thin-tailed distribution dominates, since its density falls off much
## faster away from its center.