This chapter introduced interactions, which allow for the association between a predictor and an outcome to depend upon the value of another predictor. While you can’t see them in a DAG, interactions can be important for making accurate inferences. Interactions can be difficult to interpret, and so the chapter also introduced triptych plots that help in visualizing the effect of an interaction. No new coding skills were introduced, but the statistical models considered were among the most complicated so far in the book.
Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity. Be sure to include plots when a question asks for them. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
8E1. For each of the causal relationships below, name a hypothetical third variable that would lead to an interaction effect:
# 1. Bread dough rises because of yeast: temperature (or the quality of the dough and the baker's expertise).
# 2. Education leads to higher income: the sector or type of job, or years of experience.
# 3. Gasoline makes a car go: the power (and presence) of a working engine.
8E2. Which of the following explanations invokes an interaction?
# 1. Onion caramelization invokes an interaction between heat and dryness: caramelization occurs only when the heat is low and the onions do not dry out, so neither condition works on its own.
# 2. This explanation invokes main effects of the number of cylinders and of fuel-injector quality on car speed, but no interaction: adding cylinders and improving the fuel injector are treated as independent routes to a faster car.
# 3. This explanation implies two types of people: those who acquire their political beliefs from their parents and those who acquire them from their friends. The implied model predicts beliefs from a combination of the interaction between type and parents' beliefs and the interaction between type and friends' beliefs.
# 4. This implies an interaction between sociality and the possession of manipulative appendages in predicting a species' intelligence: intelligent species tend to be either highly social or to have manipulative appendages, but not both.
8E3. For each of the explanations in 8E2, write a linear model that expresses the stated relationship.
# 1. μ_i = α + β_H*H_i + β_D*D_i + β_HD*H_i*D_i
# (H = heat, D = dryness)
# 2. μ_i = α + β_C*C_i + β_F*F_i
# (C = cylinders, F = fuel injector quality)
# 3. μ_i = α + β_TP*T_i*P_i + β_TF*T_i*F_i
# (T = type of person, P = parents' beliefs, F = friends' beliefs)
# 4. μ_i = α + β_S*S_i + β_M*M_i + β_SM*S_i*M_i
# (S = sociality, M = manipulative appendages)
8M1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?
# With three predictors (water, shade, and temperature), the model can include three two-way interactions (WS, WT, ST) and one three-way interaction (WST). The observed result implies that temperature interacts with both water and shade (and with their interaction): under the hot condition blooms are zero no matter how much water or shade the plants get, so the hot temperature completely switches off the effects of water, shade, and the water-by-shade interaction.
8M2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?
# Following the chapter, the full regression with main effects, two-way interactions, and a three-way interaction is:
# μ_i = α + β_W*W_i + β_S*S_i + β_T*T_i + β_WT*W_i*T_i + β_ST*S_i*T_i + β_WS*W_i*S_i + β_WST*W_i*S_i*T_i
# The first three terms are the main effects of water, shade, and temperature; the next three are the two-way interactions (water × temperature, shade × temperature, and water × shade); the last is the three-way interaction.
# To force bloom size to zero (μ_i = 0) whenever the temperature is hot, code hot as T_i = 1 and cold as T_i = 0, and pair every term with a matching negative T_i interaction:
# μ_i = α - α*T_i + β_W*W_i - β_W*W_i*T_i + β_S*S_i - β_S*S_i*T_i + β_WS*W_i*S_i - β_WS*W_i*S_i*T_i
# When T_i = 1 every term cancels and μ_i = 0; when T_i = 0 the model reduces to the original two-way interaction model. Equivalently, μ_i = (1 - T_i)*(α + β_W*W_i + β_S*S_i + β_WS*W_i*S_i).
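# A minimal numerical check of the equation above (the parameter values below are arbitrary placeholders, not estimates):
mu_hot_switch <- function(W, S, T, a = 0.5, bW = 0.2, bS = -0.1, bWS = -0.1) {
  a - a*T + bW*W - bW*W*T + bS*S - bS*S*T + bWS*W*S - bWS*W*S*T
}
mu_hot_switch(W = 1, S = -1, T = 1)   # hot: always 0, whatever the water and shade values
mu_hot_switch(W = 1, S = -1, T = 0)   # cold: reduces to the original two-way interaction model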
8M4. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation. What do these prior assumptions mean for the interaction prior, if anything? Visualize the prior simulation.
library(rethinking)
data(tulips)
d <- tulips
str(d)
## 'data.frame': 27 obs. of 4 variables:
## $ bed : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 2 ...
## $ water : int 1 1 1 2 2 2 3 3 3 1 ...
## $ shade : int 1 2 3 1 2 3 1 2 3 1 ...
## $ blooms: num 0 0 111 183.5 59.2 ...
d$blooms_std <- d$blooms / max(d$blooms)
d$water_cent <- d$water - mean(d$water)
d$shade_cent <- d$shade - mean(d$shade)
m8.4 <- quap(
alist(
blooms_std ~ dnorm( mu , sigma ) ,
mu <- a + bw*water_cent + bs*shade_cent + bws*water_cent*shade_cent,
a ~ dnorm( 0.5, 0.25) ,
bw ~ dnorm( 1 , 0.25 ) ,
bs ~ dnorm( -1 , 0.25 ) ,
bws ~ dnorm( 0 , 0.25 ) ,
sigma ~ dexp( 1 )
),
data = d)
precis(m8.4)
## mean sd 5.5% 94.5%
## a 0.3579833 0.02404916 0.31954805 0.39641845
## bw 0.2205116 0.02953110 0.17331520 0.26770801
## bs -0.1272626 0.02956872 -0.17451918 -0.08000611
## bws -0.1431157 0.03587218 -0.20044635 -0.08578502
## sigma 0.1255314 0.01721940 0.09801146 0.15305131
# Because water is constrained to help (bw positive) and shade to hurt (bs negative), the interaction prior should place most of its mass at or below zero: increasing shade can only diminish the benefit of water, never amplify it. Consistent with this, the posterior mean of bws is negative.
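# The question also asks to visualize the prior predictive simulation. A minimal sketch (assuming m8.4 above has been fit): draw lines implied by the priors over water, one panel per shade level, as in the chapter's triptych plots.
set.seed(7)
prior <- extract.prior(m8.4, n = 20)
par(mfrow = c(1, 3))
for (s in -1:1) {
  plot(NULL, xlim = c(-1, 1), ylim = c(-0.5, 1.5),
       xlab = "water (centered)", ylab = "blooms (std)",
       main = paste("shade =", s))
  abline(h = 0, lty = 2)  # observable range of standardized blooms
  abline(h = 1, lty = 2)
  mu <- link(m8.4, post = prior,
             data = data.frame(water_cent = -1:1, shade_cent = s))
  for (i in 1:20) lines(-1:1, mu[i, ], col = col.alpha("black", 0.3))
}
# With bw centered on +1 and bs on -1, most prior lines slope upward in water, and the overall level drops as shade increases.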
8H1. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don’t interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to either construct dummy variables or rather an index variable, as explained in Chapter 5.
d <- tulips
d$shade.c <- d$shade - mean(d$shade)
d$water.c <- d$water - mean(d$water)
# Dummy variables
d$bedb <- d$bed == "b"
d$bedc <- d$bed == "c"
# Index variable
d$bedx <- coerce_index(d$bed)
m_dummy <- map(
alist(
blooms ~ dnorm(mu, sigma),
mu <- a + bW*water.c + bS*shade.c + bWS*water.c*shade.c + bBb*bedb + bBc*bedc,
a ~ dnorm(130, 100),
bW ~ dnorm(0, 100),
bS ~ dnorm(0, 100),
bWS ~ dnorm(0, 100),
bBb ~ dnorm(0, 100),
bBc ~ dnorm(0, 100),
sigma ~ dunif(0, 100)
),
data = d,
start = list(a = mean(d$blooms), bW = 0, bS = 0, bWS = 0, bBb = 0, bBc = 0, sigma = sd(d$blooms))
)
precis(m_dummy)
## mean sd 5.5% 94.5%
## a 99.36068 12.757566 78.97163 119.74973
## bW 75.12448 9.199779 60.42146 89.82750
## bS -41.23111 9.198508 -55.93210 -26.53012
## bWS -52.15047 11.242982 -70.11893 -34.18201
## bBb 42.41229 18.039329 13.58195 71.24262
## bBc 47.03233 18.040183 18.20063 75.86403
## sigma 39.18975 5.337957 30.65867 47.72084
m_index <- map(
alist(
blooms ~ dnorm(mu, sigma),
mu <- a[bedx] + bW*water.c + bS*shade.c + bWS*water.c*shade.c,
a[bedx] ~ dnorm(130, 100),
bW ~ dnorm(0, 100),
bS ~ dnorm(0, 100),
bWS ~ dnorm(0, 100),
sigma ~ dunif(0, 200)
),
data = d
)
precis(m_index, depth = 2)
## Warning in sqrt(diag(vcov(model))): NaNs produced
## Warning in sqrt(diag(vcov(model))): NaNs produced
## Warning in sqrt(diag(vcov(model))): NaNs produced
## mean sd 5.5% 94.5%
## a[1] 120.05330 53.91550 33.88592 206.22067
## a[2] -14.87777 30.43138 -63.51300 33.75746
## a[3] 108.22639 53.15382 23.27632 193.17645
## bW 35.35975 39.32414 -27.48781 98.20732
## bS 61.77349 23.01852 24.98545 98.56153
## bWS -55.29408 48.86608 -133.39151 22.80335
## sigma 194.05002 NaN NaN NaN
coeftab(m_dummy, m_index)
## Warning in sqrt(diag(vcov(model))): NaNs produced
## m_dummy m_index
## a 99.36 NA
## bW 75.12 35.36
## bS -41.23 61.77
## bWS -52.15 -55.29
## bBb 42.41 NA
## bBc 47.03 NA
## sigma 39.19 194.05
## a[1] NA 120.05
## a[2] NA -14.88
## a[3] NA 108.23
## nobs 27 27
# In principle the dummy-variable and index-variable parameterizations are equivalent; the only real difference is how the bed-specific intercepts are reported (an overall intercept plus offsets for beds b and c versus one intercept per bed). Here, however, the index-variable fit did not converge well: note the NaN warnings, the implausible intercepts, and sigma stuck near the upper bound of its uniform prior. Supplying start values, as was done for the dummy-variable model, should stabilize it (see the sketch below); once both models converge, their predictions agree.
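# A sketch of one possible fix (assuming the objects above are still available): refit the index model with explicit start values, mirroring the dummy-variable model, to keep the optimizer away from the degenerate solution.
m_index2 <- map(
  alist(
    blooms ~ dnorm(mu, sigma),
    mu <- a[bedx] + bW*water.c + bS*shade.c + bWS*water.c*shade.c,
    a[bedx] ~ dnorm(130, 100),
    bW ~ dnorm(0, 100),
    bS ~ dnorm(0, 100),
    bWS ~ dnorm(0, 100),
    sigma ~ dunif(0, 200)
  ),
  data = d,
  # partial start list; remaining parameters are initialized from their priors
  start = list(bW = 0, bS = 0, bWS = 0, sigma = sd(d$blooms))
)
precis(m_index2, depth = 2)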
8H5. Consider the data(Wines2012) data table. These data are expert ratings of 20 different French and American wines by 9 different French and American judges. Your goal is to model score, the subjective rating assigned by each judge to each wine. I recommend standardizing it. In this problem, consider only variation among judges and wines. Construct index variables of judge and wine and then use these index variables to construct a linear regression model. Justify your priors. You should end up with 9 judge parameters and 20 wine parameters. Plot the parameter estimates. How do you interpret the variation among individual judges and individual wines? Do you notice any patterns, just by plotting the differences? Which judges gave the highest/lowest ratings? Which wines were rated worst/best on average?
library(rethinking)
data(Wines2012)
d2 <- Wines2012
d2_list = list(s = standardize(d2$score),
wine = as.integer(d2$wine),
judge = as.integer(d2$judge))
str(d2_list)
## List of 3
## $ s : num [1:180] -1.5766 -0.4505 -0.0751 0.3003 -2.3274 ...
## ..- attr(*, "scaled:center")= num 14.2
## ..- attr(*, "scaled:scale")= num 2.66
## $ wine : int [1:180] 1 3 5 7 9 11 13 15 17 19 ...
## $ judge: int [1:180] 4 4 4 4 4 4 4 4 4 4 ...
H8.5 <- ulam(alist(
s ~ dnorm(mu, sigma),
mu <- j[judge] + w[wine] ,
w[wine] ~ dnorm(0, 0.5),
j[judge] ~ dnorm(0, 0.5),
sigma ~ dexp(1)),
data = d2_list,
chains = 4,
cores = 4)
precis(H8.5, 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## w[1] 0.11464000 0.25891324 -0.30512169 0.52577632 3125.681 0.9998306
## w[2] 0.08184778 0.25266261 -0.31926113 0.49124380 2569.019 0.9994241
## w[3] 0.22634948 0.25615684 -0.18565723 0.62909266 3468.725 0.9988208
## w[4] 0.47234518 0.27232028 0.03922349 0.90210602 2766.750 0.9994782
## w[5] -0.10451486 0.26135178 -0.52451667 0.32340289 3843.006 0.9984652
## w[6] -0.30826554 0.25720488 -0.72627677 0.09751010 3082.250 0.9998306
## w[7] 0.24559748 0.25470635 -0.17212450 0.66686715 3925.696 0.9986416
## w[8] 0.22571098 0.25710617 -0.17545982 0.63554555 3180.271 0.9987226
## w[9] 0.07373405 0.26850701 -0.36334535 0.49663949 3245.244 0.9991817
## w[10] 0.10117866 0.25851998 -0.30322081 0.50636261 3881.518 0.9986017
## w[11] -0.01055810 0.25571139 -0.40474020 0.41205759 2746.600 0.9986935
## w[12] -0.02899840 0.25573783 -0.43412327 0.38771186 3112.023 1.0007827
## w[13] -0.08424936 0.25508481 -0.50329361 0.31714781 2894.699 0.9988535
## w[14] 0.01158246 0.25705414 -0.40308510 0.42812600 2889.560 0.9997721
## w[15] -0.18055458 0.25495020 -0.58838558 0.23226101 3041.013 0.9991459
## w[16] -0.16603894 0.25387078 -0.57349025 0.24095997 3642.262 0.9994447
## w[17] -0.11730263 0.25069299 -0.50866016 0.28479316 3028.586 0.9992629
## w[18] -0.72222692 0.26042580 -1.13779357 -0.30699875 3272.013 0.9993257
## w[19] -0.13376276 0.24978918 -0.53514946 0.26889622 2940.832 0.9992389
## w[20] 0.33148337 0.25648734 -0.08029634 0.74214973 3110.292 0.9991587
## j[1] -0.28581940 0.19050973 -0.58730987 0.01870905 2678.797 1.0007403
## j[2] 0.20759519 0.20125177 -0.10914025 0.54120688 2404.961 0.9989973
## j[3] 0.20626988 0.19647114 -0.10872228 0.51225285 2575.394 0.9999364
## j[4] -0.54076000 0.18685741 -0.84436478 -0.23810008 1768.186 1.0037441
## j[5] 0.79614061 0.18981590 0.49488162 1.09891129 2370.022 0.9995588
## j[6] 0.47279829 0.19495182 0.15134785 0.77908813 1983.194 1.0013017
## j[7] 0.13089834 0.19057023 -0.17116042 0.44235021 2300.238 1.0001139
## j[8] -0.65267058 0.19050203 -0.96578865 -0.35002946 2174.846 0.9998715
## j[9] -0.34224040 0.19741709 -0.65902469 -0.02619530 2669.544 0.9999309
## sigma 0.84751932 0.04847837 0.77373685 0.92505611 3269.870 0.9989662
traceplot(H8.5)
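# The question also asks for a plot of the parameter estimates; plotting the precis object shows all 20 wine and 9 judge parameters with their intervals in one figure.
plot(precis(H8.5, depth = 2))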
# From the estimates above, wine 4 has the highest mean rating and wine 18 the lowest, while judge 5 gives the highest average ratings and judge 8 the lowest. The judge parameters are spread at least as widely as the wine parameters and are estimated more precisely (each judge scored all 20 wines, while each wine was scored by only 9 judges), so differences among judges explain at least as much of the variation in scores as differences among the wines themselves.