Assignment #7
your name
2021-03-23

Chapter 8 - Conditional Manatees

This chapter introduced interactions, which allow the association between a predictor and an outcome to depend upon the value of another predictor. While you can't see them in a DAG, interactions can be important for making accurate inferences. Interactions can be difficult to interpret, and so the chapter also introduced triptych plots that help in visualizing the effect of an interaction. No new coding skills were introduced, but the statistical models considered were among the most complicated so far in the book.
Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes/answers the question or activity requested. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file as: YourName_ANLY505-Year-Semester.html and publish the assignment to your R Pubs account and submit the link to Canvas. Each question is worth 5 points.
Questions

8E1. For each of the causal relationships below, name a hypothetical third variable that would lead to an interaction effect:
1. Bread dough rises because of yeast.
2. Education leads to higher income.
3. Gasoline makes a car go.
# 1. Yeast only makes dough rise when the oven temperature is right, so temperature would be the third variable.
# 2. Income differs a lot across fields of study, so field of major would be the third variable.
# 3. Gasoline only makes a car go if the engine works, so engine condition would be the third variable.
8E2. Which of the following explanations invokes an interaction?
1. Caramelizing onions requires cooking over low heat and making sure the onions do not dry out.
2. A car will go faster when it has more cylinders or when it has a better fuel injector.
3. Most people acquire their political beliefs from their parents, unless they get them instead from their friends.
4. Intelligent animal species tend to be either highly social or have manipulative appendages (hands, tentacles, etc.).
# 1. Invokes an interaction between heat and dryness in predicting the caramelization of onions.
# 2. Does not invoke an interaction.
# 3. Does not invoke an interaction.
# 4. Invokes an interaction between being social and possessing manipulative appendages in predicting the intelligence of animal species.
8E3. For each of the explanations in 8E2, write a linear model that expresses the stated relationship.
# Writing mu for the predicted mean in each case:
# 1. mu_caramelization = a + bH*Heat + bD*Dryness + bHD*Heat*Dryness   (interaction term)
# 2. mu_speed = a + bC*Cylinders + bF*Injector                        (no interaction)
# 3. mu_beliefs = a + bP*Parents + bF*Friends                         (no interaction, matching 8E2)
# 4. mu_intelligence = a + bS*Social + bM*Appendages + bSM*Social*Appendages   (interaction term)
8M1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?
# Temperature regulates whether water and shade matter at all: under hot temperature no plants
# bloom regardless of water or shade, so the effects of water, shade, and their two-way
# interaction all depend on temperature. With water (W), shade (S), and temperature (T) as
# predictors, this implies a three-way interaction WST on top of the two-way interactions
# WS, WT, and ST.
8M2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?
# Start from the full model, coding hot as T=1 and cold as T=0:
# B = a + bW + cS + dT + eWS + fWT + gST + hWST
# When T=1 this reduces to B = (a+d) + (b+f)W + (c+g)S + (e+h)WS, which must equal 0
# for every combination of W and S, so each coefficient must vanish:
# a + d = 0, b + f = 0, c + g = 0, e + h = 0
# Substituting d = -a, f = -b, g = -c, h = -e:
# B = a + bW + cS - aT + eWS - bWT - cST - eWST = (1 - T)(a + bW + cS + eWS)
# so bloom size is exactly zero whenever the temperature is hot.
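A quick numerical check of the factored equation (the function name and coefficient values here are arbitrary, chosen only for illustration):
blooms_mu <- function( W , S , T , a=0.3 , b=0.2 , c=0.1 , e=-0.15 ) {
  (1 - T) * ( a + b*W + c*S + e*W*S )   # zero whenever T=1 (hot)
}
blooms_mu( W=1 , S=0 , T=1 )   # hot: returns 0 regardless of W and S
blooms_mu( W=1 , S=0 , T=0 )   # cold: returns a + b = 0.5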
8M4. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation. What do these prior assumptions mean for the interaction prior, if anything? Visualize the prior simulation.
library(rethinking)
data(tulips)
df <- tulips
str(df)
## 'data.frame': 27 obs. of 4 variables:
## $ bed : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 2 ...
## $ water : int 1 1 1 2 2 2 3 3 3 1 ...
## $ shade : int 1 2 3 1 2 3 1 2 3 1 ...
## $ blooms: num 0 0 111 183.5 59.2 ...
df$blooms_std <- df$blooms / max(df$blooms)   # scale by the maximum so 0 still means "no blooms"
df$water_cent <- df$water - mean(df$water)    # center water at 0
df$shade_cent <- df$shade - mean(df$shade)    # center shade at 0
# how much prior mass for the intercept falls outside the valid outcome range [0, 1]?
a <- rnorm( 1e4 , 0.5 , 1 )
sum( a < 0 | a > 1 ) / length( a )   # Normal(0.5, 1) is too vague: most mass is out of range
## [1] 0.6105
a <- rnorm( 1e4 , 0.5 , 0.25 )
sum( a < 0 | a > 1 ) / length( a )   # Normal(0.5, 0.25) leaves only ~5% outside
## [1] 0.0448
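The question also asks to constrain the signs of the main effects. A log-normal prior keeps a coefficient strictly positive, so the sketch below checks its implications before it goes into the model; the dlnorm(-1.5, 0.5) parameters are one reasonable choice, not the only one:
bw_prior <- rlnorm( 1e4 , -1.5 , 0.5 )        # every draw is positive by construction
mean( bw_prior )                              # typical water effect near 0.25
sum( bw_prior > 1 ) / length( bw_prior )      # essentially no mass beyond the full blooms range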
m8.6 <- quap(
alist(
blooms_std ~ dnorm( mu , sigma ) ,
mu <- a + bw*water_cent - bs*shade_cent ,
a ~ dnorm( 0.5 , 0.25 ) ,
bw ~ dlnorm( -1.5 , 0.5 ) ,   # log-normal: water effect constrained positive
bs ~ dlnorm( -1.5 , 0.5 ) ,   # enters mu with a minus sign, so the shade effect is constrained negative
sigma ~ dexp( 1 )
) ,
data=df )
m8.7 <- quap(
alist(
blooms_std ~ dnorm( mu , sigma ) ,
mu <- a + bw*water_cent - bs*shade_cent + bws*water_cent*shade_cent ,
a ~ dnorm( 0.5 , 0.25 ) ,
bw ~ dlnorm( -1.5 , 0.5 ) ,
bs ~ dlnorm( -1.5 , 0.5 ) ,
bws ~ dnorm( 0 , 0.25 ) ,
sigma ~ dexp( 1 )
) ,
data=df )
set.seed(8)
prior <- extract.prior( m8.7 )   # sample from the priors, ignoring the data
par(mfrow=c(1,3)) # triptych: one panel per shade level
for ( s in -1:1 ) {
idx <- which( df$shade_cent==s )
plot( df$water_cent[idx] , df$blooms_std[idx] , xlim=c(-1,1) , ylim=c(0,1) ,
xlab="water" , ylab="blooms" , pch=16 , col=rangi2 , main=paste("shade =",s) )
mu <- link( m8.7 , post=prior , data=data.frame( shade_cent=s , water_cent=-1:1 ) )
for ( i in 1:20 ) lines( -1:1 , mu[i,] , col=col.alpha("black",0.3) )
}
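# With these priors, every prior line rises with water when shade is at its mean, and the
# intercepts fall as shade increases. The sign constraints do not fix the sign of the
# interaction bws by themselves; but if we also believe the water effect should stay positive
# at every shade level (shade_cent in [-1, 1]), then bw + bws*shade_cent > 0 requires
# |bws| < bw, so the main-effect constraints indirectly bound the magnitude of the
# interaction prior.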
8H1. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don't interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to construct either dummy variables or, better, an index variable, as explained in Chapter 5.
data(tulips)
df <- tulips
df$blooms_std <- df$blooms / max(df$blooms)
df$water_cent <- df$water - mean(df$water)
df$shade_cent <- df$shade - mean(df$shade)
# build an index variable for bed (1 = a, 2 = b, 3 = c)
df$bed_index <- coerce_index(df$bed)
bed_var <- quap(
alist(
blooms_std ~ dnorm( mu , sigma ) ,
mu <- a[bed_index] + bw*water_cent + bs*shade_cent + bws*water_cent*shade_cent,
a[bed_index] ~ dnorm(0.5, 0.25),
bw ~ dnorm(0, 0.25),
bs ~ dnorm(0, 0.25),
bws ~ dnorm(0, 0.25),
sigma ~ dunif(0, 100)   # very vague for a [0, 1] outcome; dexp(1) as in the chapter would also work
),
data = df
)
precis(bed_var, depth = 2 )
## mean sd 5.5% 94.5%
## a[1] 0.2732816 0.03578503 0.21609017 0.33047295
## a[2] 0.3964034 0.03576737 0.33924027 0.45356659
## a[3] 0.4091217 0.03576632 0.35196022 0.46628319
## bw 0.2074257 0.02542516 0.16679142 0.24806004
## bs -0.1138374 0.02542048 -0.15446422 -0.07321055
## bws -0.1438843 0.03105683 -0.19351911 -0.09424948
## sigma 0.1084031 0.01476785 0.08480118 0.13200492
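# Bed a (a[1] ~ 0.27) produced fewer blooms on average than beds b and c (~ 0.40 and 0.41),
# while the water, shade, and interaction estimates are essentially unchanged by adding bed.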
8H5. Consider the data(Wines2012) data table. These data are expert ratings of 20 different French and American wines by 9 different French and American judges. Your goal is to model score, the subjective rating assigned by each judge to each wine. I recommend standardizing it. In this problem, consider only variation among judges and wines. Construct index variables of judge and wine and then use these index variables to construct a linear regression model. Justify your priors. You should end up with 9 judge parameters and 20 wine parameters. Plot the parameter estimates. How do you interpret the variation among individual judges and individual wines? Do you notice any patterns, just by plotting the differences? Which judges gave the highest/lowest ratings? Which wines were rated worst/best on average?
data(Wines2012)
df <- Wines2012
df_list <- list( s = standardize(df$score),
                 wine = as.integer(df$wine),
                 judge = as.integer(df$judge) )
str(df_list)
## List of 3
## $ s : num [1:180] -1.5766 -0.4505 -0.0751 0.3003 -2.3274 ...
## ..- attr(*, "scaled:center")= num 14.2
## ..- attr(*, "scaled:scale")= num 2.66
## $ wine : int [1:180] 1 3 5 7 9 11 13 15 17 19 ...
## $ judge: int [1:180] 4 4 4 4 4 4 4 4 4 4 ...
wines <- ulam(alist(
s ~ dnorm(mu, sigma),
mu <- j[judge] + w[wine] ,
# with score standardized, Normal(0, 0.5) priors keep each judge and wine effect
# mostly within about one standard deviation of the average score: mildly
# regularizing without ruling out large differences
w[wine] ~ dnorm(0, 0.5),
j[judge] ~ dnorm(0, 0.5),
sigma ~ dexp(1)),
data = df_list,
chains = 4,
cores = 4)
precis(wines, 2)
## mean sd 5.5% 94.5% n_eff Rhat4
## w[1] 0.117359005 0.25999090 -0.28971929 0.53008134 3361.886 0.9996232
## w[2] 0.087570498 0.25904787 -0.30813249 0.51122272 3544.496 0.9984219
## w[3] 0.227885671 0.26881919 -0.20488315 0.66754489 3361.762 0.9986353
## w[4] 0.473276421 0.25165543 0.07007106 0.86474826 2950.267 0.9990671
## w[5] -0.100196553 0.25549134 -0.51061469 0.30832757 2997.449 0.9993494
## w[6] -0.302829056 0.25968114 -0.72895965 0.10971296 3567.788 0.9994512
## w[7] 0.247395346 0.26223729 -0.18246402 0.65547726 3292.989 0.9987164
## w[8] 0.231409681 0.25604941 -0.16609046 0.64496598 3371.671 0.9987906
## w[9] 0.079590957 0.25897520 -0.32088386 0.50112908 3452.575 0.9986973
## w[10] 0.105343522 0.25106286 -0.29818309 0.49851855 3215.571 0.9991943
## w[11] -0.008064817 0.25676010 -0.43208870 0.39043162 3319.771 0.9989845
## w[12] -0.020050825 0.24776285 -0.42851480 0.37528236 3379.271 0.9983621
## w[13] -0.093028187 0.26148578 -0.50567139 0.32592249 3010.139 0.9986693
## w[14] 0.009257117 0.26210303 -0.41339584 0.44069417 3343.489 0.9980920
## w[15] -0.177471479 0.26031218 -0.58255054 0.24042045 3002.692 0.9983077
## w[16] -0.162308369 0.25739425 -0.57511728 0.24244495 2957.631 0.9983829
## w[17] -0.112123076 0.24874448 -0.50393745 0.28057635 3324.923 0.9990728
## w[18] -0.713827157 0.24448928 -1.10910730 -0.33898457 3739.535 0.9986701
## w[19] -0.131699346 0.24894488 -0.53043789 0.25933870 2950.892 0.9983928
## w[20] 0.326243237 0.25538985 -0.08098607 0.72319909 2706.150 0.9986219
## j[1] -0.280357531 0.19686520 -0.58482923 0.03563549 2609.419 0.9989351
## j[2] 0.212538091 0.19796038 -0.09785776 0.52759339 3023.427 0.9989963
## j[3] 0.206482763 0.19306371 -0.09946174 0.51055058 2326.019 0.9993552
## j[4] -0.541353459 0.19726945 -0.84945422 -0.23255904 2749.220 0.9983466
## j[5] 0.789027007 0.20026952 0.48202081 1.10985650 2820.399 0.9989695
## j[6] 0.473686013 0.19947772 0.16975150 0.79245357 2512.638 0.9998294
## j[7] 0.128196211 0.18960441 -0.17917903 0.42746603 2403.676 0.9983802
## j[8] -0.660642973 0.19153088 -0.96066587 -0.35666839 2422.276 0.9991906
## j[9] -0.349472189 0.19465162 -0.65921200 -0.04467002 2154.699 0.9994129
## sigma 0.846665817 0.04723546 0.77581863 0.92789106 2964.820 0.9988632
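The question also asks for a plot of the parameter estimates; the plot method for precis objects in rethinking draws a forest plot of all 29 parameters:
plot( precis( wines , depth=2 ) )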
# Wine 4 has the highest average rating and wine 18 the lowest; they are the only wines whose
# 89% intervals exclude zero. Judge 5 gave the highest ratings on average and judge 8 the
# lowest. The judge parameters are estimated more precisely than the wine parameters (each
# judge scored all 20 wines, while each wine was scored by only 9 judges), and the spread
# among judges is at least as large as the spread among wines, so judge identity accounts
# for much of the variation in scores.