This chapter introduced interactions, which allow for the association between a predictor and an outcome to depend upon the value of another predictor. While you can’t see them in a DAG, interactions can be important for making accurate inferences. Interactions can be difficult to interpret, and so the chapter also introduced triptych plots that help in visualizing the effect of an interaction. No new coding skills were introduced, but the statistical models considered were among the most complicated so far in the book.
Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
8E1. For each of the causal relationships below, name a hypothetical third variable that would lead to an interaction effect:
# 1. Temperature: yeast makes bread dough rise, but only within a suitable temperature range.
# 2. Job type: the income return on education depends on the kind of job a person holds.
# 3. Age of the car: how far gasoline makes a car go depends on the age (condition) of the engine.
8E2. Which of the following explanations invokes an interaction?
# 1. Caramelizing invokes an interaction between heat and dryness: onions caramelize only when heated AND kept from drying out.
# 2. There is no interaction between cylinders and fuel injectors; each acts independently on speed.
# 3. There is an interaction between acquiring political beliefs from parents and acquiring them from friends.
# 4. There is an interaction between social level and manipulative appendages in predicting the intelligence of animal species.
8E3. For each of the explanations in 8E2, write a linear model that expresses the stated relationship.
# 1. mu_i = alpha + beta_H*H_i + beta_D*D_i + beta_HD*H_i*D_i   (H = heat, D = dryness)
# 2. mu_i = alpha + beta_C*C_i + beta_F*F_i                     (C = cylinders, F = fuel injector efficiency)
# 3. mu_i = alpha + beta_P*P_i + beta_F*F_i + beta_PF*P_i*F_i   (P = parents' beliefs, F = friends' beliefs)
# 4. mu_i = alpha + beta_S*S_i + beta_A*A_i + beta_SA*S_i*A_i   (S = social level, A = manipulative appendages)
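# As a sketch of how model 1 would be written in rethinking's alist syntax
# (hypothetical variables; the names H and D and the priors are illustrative,
# not fit to any data):
m_caramel <- alist(
  y ~ dnorm(mu, sigma),
  mu <- a + bH*H + bD*D + bHD*H*D,   # the interaction lets heat's effect depend on dryness
  a ~ dnorm(0, 1),
  bH ~ dnorm(0, 0.5),
  bD ~ dnorm(0, 0.5),
  bHD ~ dnorm(0, 0.5),
  sigma ~ dexp(1)
)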
8M1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?
# Both water and shade interact with temperature: under hot temperature, no
# combination of water and shade produces any blooms. That implies one three-way
# interaction (water x shade x temperature) plus the three two-way interactions.
8M2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?
# Bloom_Size = mu + beta_W*Water + beta_S*Shade + beta_T*Temp + beta_WS*Water*Shade
#              + beta_WT*Water*Temp + beta_ST*Shade*Temp + beta_WST*Water*Shade*Temp
# Code temperature as binary (hot = 0, cold = 1); water and shade are also binary
# (water present = 1, shade present = 1).
# When the temperature is hot (Temp = 0), every term containing Temp drops out:
# Bloom_Size = mu + beta_W*Water + beta_S*Shade + beta_WS*Water*Shade
# When water and shade are both present (Water = Shade = 1):
# Bloom_Size = mu + beta_W + beta_S + beta_WS
# For the bloom size to be zero, the interaction coefficient must satisfy
# beta_WS = -(mu + beta_W + beta_S)
# When that holds, bloom size is zero under hot temperature with water and shade
# present; see the sketch below for a form that is zero at every water/shade level.
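# A compact alternative (a minimal sketch; the coefficient values are illustrative,
# roughly the posterior means from the 8M4 fit below): multiply the whole linear
# predictor by a cold indicator, so the mean is zero at ANY water/shade level when hot.
bloom_mu <- function(W, S, Cold, a = 0.36, bW = 0.21, bS = -0.12, bWS = -0.14) {
  Cold * (a + bW*W + bS*S + bWS*W*S)   # Cold = 0 (hot) forces the mean to zero
}
bloom_mu(W = 1, S = 1, Cold = 0)   # hot: 0 regardless of water and shade
bloom_mu(W = 1, S = 1, Cold = 1)   # cold: ordinary interaction model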
8M4. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation. What do these prior assumptions mean for the interaction prior, if anything? Visualize the prior simulation.
library(rethinking)
data(tulips)
data_new <- tulips
str(data_new)
## 'data.frame': 27 obs. of 4 variables:
## $ bed : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 2 ...
## $ water : int 1 1 1 2 2 2 3 3 3 1 ...
## $ shade : int 1 2 3 1 2 3 1 2 3 1 ...
## $ blooms: num 0 0 111 183.5 59.2 ...
data_new$blooms_std <- data_new$blooms / max(data_new$blooms)
data_new$water_cent <- data_new$water - mean(data_new$water)
data_new$shade_cent <- data_new$shade - mean(data_new$shade)
# Prior draws implementing the constraints: the water effect is folded to be
# strictly positive and the shade effect strictly negative (half-normal draws).
bw_d <- abs(rnorm(nrow(data_new), 0, 0.25))
bs_d <- -abs(rnorm(nrow(data_new), 0, 0.25))
# These main-effect constraints do not force a sign on the interaction prior,
# but they make a negative interaction the natural expectation: if water helps
# and shade hurts, more shade plausibly weakens the benefit of water.
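# A minimal prior-predictive visualization; the seed, number of draws, and axis
# ranges are illustrative choices, not part of the original analysis.
set.seed(505)
n_sim <- 50
a_sim  <- rnorm(n_sim, 0.5, 0.25)
bw_sim <- abs(rnorm(n_sim, 0, 0.25))    # positive water effects
bs_sim <- -abs(rnorm(n_sim, 0, 0.25))   # negative shade effects
water_seq <- seq(-1, 1, length.out = 30)
plot(NULL, xlim = c(-1, 1), ylim = c(-0.5, 1.5),
     xlab = "water (centered)", ylab = "blooms (std)",
     main = "Prior predictive: blooms vs. water at mean shade")
abline(h = c(0, 1), lty = 2)   # observable range of blooms_std
for (i in 1:n_sim) {
  # at mean shade (shade_cent = 0) the interaction term drops out of mu,
  # so every prior line slopes upward: water can only help
  lines(water_seq, a_sim[i] + bw_sim[i]*water_seq, col = col.alpha("black", 0.3))
}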
m_tulip <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- a + bw*water_cent + bs*shade_cent + bws*water_cent*shade_cent,
    a ~ dnorm(0.5, 0.25),
    # dnorm() needs an explicit mean and sd; a vector of prior draws (bw_d, bs_d)
    # is not a valid prior specification. The sign constraints are used in the
    # prior simulation above; to enforce them in the fit itself, one option is
    # dlnorm(0, 0.25) priors with shade entering the mean negatively.
    bw ~ dnorm(0, 0.25),
    bs ~ dnorm(0, 0.25),
    bws ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ),
  data = data_new )
precis(m_tulip)
## mean sd 5.5% 94.5%
## a 0.3579812 0.02391446 0.3197612 0.39620107
## bw 0.2099591 0.02908310 0.1634787 0.25643955
## bs -0.1163893 0.02908516 -0.1628730 -0.06990558
## bws -0.1431615 0.03567322 -0.2001742 -0.08614880
## sigma 0.1248220 0.01693116 0.0977627 0.15188122
8H1. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don’t interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to construct either dummy variables or an index variable, as explained in Chapter 5.
data(tulips)
data_new <- tulips
str(data_new)
## 'data.frame': 27 obs. of 4 variables:
## $ bed : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 2 ...
## $ water : int 1 1 1 2 2 2 3 3 3 1 ...
## $ shade : int 1 2 3 1 2 3 1 2 3 1 ...
## $ blooms: num 0 0 111 183.5 59.2 ...
data_new$bed_id <- coerce_index(data_new$bed)
data_new$blooms_std <- data_new$blooms / max(data_new$blooms)
data_new$water_cent <- data_new$water - mean(data_new$water)
data_new$shade_cent <- data_new$shade - mean(data_new$shade)
m_bed <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    # bb*bed_id treats the bed index as a numeric 1/2/3 score, which assumes the
    # beds are ordered and equally spaced; the index-variable version below is
    # the treatment the question intends
    mu <- a + bb*bed_id + bw*water_cent + bs*shade_cent + bws*water_cent*shade_cent,
    a ~ dnorm(0.5, 0.25),
    bw ~ dnorm(0, 0.25),
    bs ~ dnorm(0, 0.25),
    bws ~ dnorm(0, 0.25),
    bb ~ dnorm(0, 0.25),
    sigma ~ dunif(0, 100)
  ),
  data = data_new
)
precis(m_bed,depth = 2)
## mean sd 5.5% 94.5%
## a 0.23320633 0.05535578 0.14473710 0.32167555
## bw 0.20729486 0.02619457 0.16543087 0.24915884
## bs -0.11377112 0.02618943 -0.15562689 -0.07191535
## bws -0.14374383 0.03199149 -0.19487242 -0.09261525
## bb 0.06271942 0.02569110 0.02166008 0.10377876
## sigma 0.11171755 0.01524537 0.08735251 0.13608259
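# A sketch of the index-variable parameterization (the model name m_bed_idx is
# illustrative): one intercept per bed, with no assumption that the beds are
# ordered or equally spaced.
m_bed_idx <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- a[bed_id] + bw*water_cent + bs*shade_cent + bws*water_cent*shade_cent,
    a[bed_id] ~ dnorm(0.5, 0.25),
    bw ~ dnorm(0, 0.25),
    bs ~ dnorm(0, 0.25),
    bws ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ),
  data = data_new
)
precis(m_bed_idx, depth = 2)   # reports a separate intercept a[1], a[2], a[3] per bed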
8H5. Consider the data(Wines2012) data table. These data are expert ratings of 20 different French and American wines by 9 different French and American judges. Your goal is to model score, the subjective rating assigned by each judge to each wine. I recommend standardizing it. In this problem, consider only variation among judges and wines. Construct index variables of judge and wine and then use these index variables to construct a linear regression model. Justify your priors. You should end up with 9 judge parameters and 20 wine parameters. Plot the parameter estimates. How do you interpret the variation among individual judges and individual wines? Do you notice any patterns, just by plotting the differences? Which judges gave the highest/lowest ratings? Which wines were rated worst/best on average?
data(Wines2012)   # the dataset name is capitalized
d2 <- Wines2012
d2_list <- list(
  s = standardize(d2$score),
  wine = as.integer(d2$wine),
  judge = as.integer(d2$judge)
)
str(d2_list)
## List of 3
## $ s : num [1:180] -1.5766 -0.4505 -0.0751 0.3003 -2.3274 ...
## ..- attr(*, "scaled:center")= num 14.2
## ..- attr(*, "scaled:scale")= num 2.66
## $ wine : int [1:180] 1 3 5 7 9 11 13 15 17 19 ...
## $ judge: int [1:180] 4 4 4 4 4 4 4 4 4 4 ...
H85 <- ulam(
  alist(
    s ~ dnorm(mu, sigma),
    mu <- j[judge] + w[wine],   # one parameter per judge plus one per wine
    w[wine] ~ dnorm(0, 0.5),    # score is standardized, so N(0, 0.5) priors keep
    j[judge] ~ dnorm(0, 0.5),   # each effect within roughly one sd of the mean score
    sigma ~ dexp(1)
  ),
  data = d2_list,
  chains = 4,
  cores = 4
)
precis(H85, 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## w[1] 0.11017298 0.24900401 -0.28176852 0.50882826 2825.894 1.0010570
## w[2] 0.08461476 0.26143117 -0.32927120 0.50913250 2498.443 0.9993383
## w[3] 0.22802363 0.25459280 -0.17201594 0.63998740 2601.841 1.0018377
## w[4] 0.46937305 0.26144393 0.05004750 0.88313280 2809.849 1.0015355
## w[5] -0.10332045 0.25708779 -0.50886632 0.30121285 3309.484 0.9987763
## w[6] -0.31389180 0.25748885 -0.72787697 0.08153446 2402.235 0.9993366
## w[7] 0.23670967 0.25375373 -0.17225349 0.63864201 2330.937 1.0004170
## w[8] 0.22569406 0.25475083 -0.17232972 0.62234088 2923.688 0.9983443
## w[9] 0.06702592 0.25290998 -0.34588986 0.47353603 2613.171 1.0006635
## w[10] 0.09329389 0.26132712 -0.33023257 0.50786283 3350.156 0.9996573
## w[11] -0.01330851 0.26170811 -0.43965804 0.39320554 2673.416 0.9998124
## w[12] -0.02622285 0.26025286 -0.45095340 0.38414129 2747.362 1.0014497
## w[13] -0.08565545 0.26565730 -0.49434101 0.33947079 3442.561 0.9992226
## w[14] 0.00515425 0.27635829 -0.43302783 0.43876506 2955.706 0.9992185
## w[15] -0.18739973 0.25051318 -0.58235587 0.21480789 3157.620 0.9996541
## w[16] -0.17143975 0.24744012 -0.56897268 0.20832878 2181.679 1.0035791
## w[17] -0.12672646 0.26652970 -0.54478914 0.30448453 3139.911 0.9989989
## w[18] -0.72831570 0.26687125 -1.14985281 -0.31465554 2705.657 0.9994618
## w[19] -0.14062477 0.25764988 -0.55955532 0.27900962 2674.784 1.0001049
## w[20] 0.32491049 0.25817713 -0.08146508 0.74495877 2722.944 0.9984537
## j[1] -0.27457953 0.19601496 -0.57974961 0.03311338 2133.646 1.0031277
## j[2] 0.21251929 0.20043054 -0.10516399 0.53500930 2121.571 0.9989435
## j[3] 0.21070317 0.20122131 -0.12449820 0.52301503 2524.648 1.0017025
## j[4] -0.54110549 0.19157841 -0.84911435 -0.22927894 2127.199 1.0020699
## j[5] 0.79723341 0.19364847 0.48863613 1.10871189 2143.353 1.0005698
## j[6] 0.47983632 0.19027203 0.18030217 0.79703295 2418.978 1.0001154
## j[7] 0.13333399 0.19871940 -0.18806679 0.45529984 2457.411 1.0011701
## j[8] -0.65168892 0.19619094 -0.96473567 -0.32397513 2350.166 1.0002905
## j[9] -0.34194450 0.19289910 -0.65717011 -0.03382371 2229.479 0.9996678
## sigma 0.84846243 0.04904651 0.77631609 0.92987110 2842.853 0.9984444
traceplot(H85)
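# The question asks for a plot of the parameter estimates themselves; traceplot()
# above checks chain mixing, while plot(precis(...)) shows the posterior means
# and intervals for all 29 judge and wine parameters at once.
plot(precis(H85, depth = 2))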
# From the estimates above, wine 4 has the highest average rating and wine 18 the lowest.
# Judge 5 gave the highest ratings on average and judge 8 the lowest.
# The judge effects spread at least as widely as the wine effects and are estimated
# more precisely (each judge scored all 20 wines, while each wine was scored by only
# 9 judges), so individual judges account for much of the variation in score.