Chapter 8 - Conditional Manatees

This chapter introduced interactions, which allow for the association between a predictor and an outcome to depend upon the value of another predictor. While you can’t see them in a DAG, interactions can be important for making accurate inferences. Interactions can be difficult to interpret, and so the chapter also introduced triptych plots that help in visualizing the effect of an interaction. No new coding skills were introduced, but the statistical models considered were among the most complicated so far in the book.

Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes or answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file YourName_ANLY505-Year-Semester.html, publish the assignment to your R Pubs account, and submit the link in Canvas. Each question is worth 5 points.

Questions

8E1. For each of the causal relationships below, name a hypothetical third variable that would lead to an interaction effect:

  1. Bread dough rises because of yeast.
  2. Education leads to higher income.
  3. Gasoline makes a car go.
#1. Bread dough rises because of yeast.
# Hypothetical third variable: temperature.
# Yeast ferments faster at warmer temperatures, so the effect of yeast on how much
# the dough rises depends on temperature. That dependence is the interaction effect.
#2. Education leads to higher income.
# Hypothetical third variable: IQ.
# Higher IQ may amplify the income return on each additional year of education, so the
# effect of education on income depends on IQ. This is an interaction effect.
#3. Gasoline makes a car go.
# Hypothetical third variable: road condition.
# Gasoline only makes the car go if the route is passable (no construction, no snow in
# winter, no flooding in summer). The effect of gasoline on distance traveled therefore
# depends on road condition, which is an interaction effect.
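A toy illustration of answer 1, with invented numbers: the rise added by yeast depends on temperature, so the combined effect exceeds the sum of the two separate effects.

# expected rise as a function of yeast (0/1) and warm temperature (0/1);
# the 1.5 term is the hypothetical interaction
rise <- function(yeast, warm) 0.2 + 1.0*yeast + 0.2*warm + 1.5*yeast*warm
outer(0:1, 0:1, rise)  # rows: no yeast/yeast; cols: cold/warm
# the yeast+warm cell (2.9) exceeds the additive prediction (0.2 + 1.0 + 0.2 = 1.4)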

8E2. Which of the following explanations invokes an interaction?

  1. Caramelizing onions requires cooking over low heat and making sure the onions do not dry out.
  2. A car will go faster when it has more cylinders or when it has a better fuel injector.
  3. Most people acquire their political beliefs from their parents, unless they get them instead from their friends.
  4. Intelligent animal species tend to be either highly social or have manipulative appendages (hands, tentacles, etc.).
# The following explanations invoke an interaction:
# 3. Most people acquire their political beliefs from their parents, unless they get them
# instead from their friends. How much parents' beliefs matter depends on friends'
# beliefs, so the two sources interact.
# 4. Intelligent animal species tend to be either highly social or have manipulative
# appendages (hands, tentacles, etc.). Sociality and manipulative appendages interact
# in their association with intelligence.

8E3. For each of the explanations in 8E2, write a linear model that expresses the stated relationship.

# For outcome Z with predictors X and Y:

# 1. Z_i = α + β_1*X_i + β_2*Y_i   (X = heat, Y = moisture; no interaction)

# 2. Z_i = α + β_1*X_i + β_2*Y_i   (X = cylinders, Y = fuel-injector quality; no interaction)

# 3. Z_i = α + β_1*X_i + β_2*Y_i + β_12*X_i*Y_i   (X = parents' beliefs, Y = friends' beliefs)

# 4. Z_i = α + β_1*X_i + β_2*Y_i + β_12*X_i*Y_i   (X = sociality, Y = manipulative appendages)
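A sketch of how model 3 could be fit with R's formula interface, on simulated data; all numbers below are hypothetical and chosen only to show the model structure.

set.seed(8)
parents <- rnorm(100)
friends <- rnorm(100)
belief  <- 0.6*parents + 0.6*friends - 0.9*parents*friends + rnorm(100)
# lm expands 'parents * friends' to both main effects plus their product;
# the 'parents:friends' coefficient estimates β_12
coef(lm(belief ~ parents * friends))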

8M1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?

# Bloom size responds to water and shade only at the cold temperature; at the hot
# temperature, blooms are zero regardless of water and shade. So the effect of water,
# the effect of shade, and even the water-by-shade interaction all depend on temperature.
# That is a three-way interaction among water, shade, and temperature, which also implies
# the two-way interactions of temperature with water and of temperature with shade.

8M2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?

# Let T_i = 1 when the greenhouse is hot and T_i = 0 when cold; W = water, S = shade.
# μ_i = α + β_T*T_i + β_W*W_i + β_S*S_i + β_TW*T_i*W_i + β_TS*T_i*S_i + β_WS*W_i*S_i + β_TWS*T_i*W_i*S_i
# Setting β_T = −α, β_TW = −β_W, β_TS = −β_S, and β_TWS = −β_WS cancels every term
# whenever T_i = 1, so bloom size is zero when it is hot. Equivalently, the model factors as:
# μ_i = (1 − T_i) * (α + β_W*W_i + β_S*S_i + β_WS*W_i*S_i)
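A quick numerical check of the factored form above, using hypothetical coefficient values:

a <- 0.3; bW <- 0.2; bS <- -0.1; bWS <- -0.15
mu_bloom <- function(W, S, hot) (1 - hot) * (a + bW*W + bS*S + bWS*W*S)
mu_bloom(W = 1, S = -1, hot = 0)   # nonzero when cold
mu_bloom(W = 1, S = -1, hot = 1)   # exactly zero when hot
mu_bloom(W = -1, S = 1, hot = 1)   # zero for any water/shade when hot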

8M3. In parts of North America, ravens depend upon wolves for their food. This is because ravens are carnivorous but cannot usually kill or open carcasses of prey. Wolves however can and do kill and tear open animals, and they tolerate ravens co-feeding at their kills. This species relationship is generally described as a “species interaction.” Can you invent a hypothetical set of data on raven population size in which this relationship would manifest as a statistical interaction? Do you think the biological interaction could be linear? Why or why not?

N <- 200       # number of simulated observations
P_W <- 0.7     # correlation between prey and wolf abundance
P <- 0.3       # simulated regression coefficient for prey
W <- 0.1       # simulated regression coefficient for wolves
P_W1 <- 0.5    # simulated coefficient for the prey-by-wolf interaction
# Simulate data
prey <- rnorm(
  n = N, 
  mean = 0, 
  sd = 1
)
wolf <- rnorm(
  n = N, 
  mean = P_W * prey, 
  sd = sqrt(1 - P_W^2)
)
raven <- rnorm(
  n = N, 
  mean = P*prey + W*wolf + P_W1*prey*wolf, 
  sd = 1
)
df <- data.frame(raven, prey, wolf)
str(df)
## 'data.frame':    200 obs. of  3 variables:
##  $ raven: num  0.4971 -0.0354 0.3255 -1.501 -0.0904 ...
##  $ prey : num  0.3456 -1.1629 -0.6763 -0.0253 0.4332 ...
##  $ wolf : num  -1.004 -0.982 -1.168 0.685 -0.708 ...
library(rethinking)
## Loading required package: rstan
## Loading required package: StanHeaders
## Loading required package: ggplot2
## rstan (Version 2.21.2, GitRev: 2e1f913d3ca3)
## For execution on a local, multicore CPU with excess RAM we recommend calling
## options(mc.cores = parallel::detectCores()).
## To avoid recompilation of unchanged Stan programs, we recommend calling
## rstan_options(auto_write = TRUE)
## Loading required package: parallel
## rethinking (Version 2.12)
## 
## Attaching package: 'rethinking'
## The following object is masked from 'package:stats':
## 
##     rstudent
# parameter names (bP, bW, bPW) differ from the simulation constants above to avoid confusion
model <- quap(
  alist(
    raven ~ dnorm(mu, sigma),
    mu <- alpha + bP*prey + bW*wolf + bPW*prey*wolf,
    alpha ~ dnorm(0, 1),
    bP ~ dnorm(0, 1),
    bW ~ dnorm(0, 1),
    bPW ~ dnorm(0, 1),
    sigma ~ dunif(0, 5)
  ),
  data = df,
  start = list(alpha = 0, bP = 0, bW = 0, bPW = 0, sigma = 1)
)
precis(model)
# The fit should recover the generating values approximately (bP near 0.3, bW near 0.1,
# bPW near 0.5), with bPW capturing the statistical interaction between prey and wolves.
# Could the biological interaction be linear? Probably not, except as a local
# approximation. Population sizes cannot go below zero, and the benefit of prey to ravens
# should saturate: with no wolves, additional prey provides almost no raven food, and with
# abundant wolves and prey the gain per extra carcass levels off.
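A small sketch of a more biologically plausible saturating form (hypothetical, not from the chapter): the prey-by-wolf product drives raven food, but with diminishing returns toward an asymptote k.

sat <- function(prey, wolf, k = 2) k * (prey * wolf) / (1 + prey * wolf)
sat(1, 1)    # well below the asymptote
sat(2, 2)    # closer to k
sat(10, 10)  # nearly flat: additional prey or wolves add little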

8M4. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation. What do these prior assumptions mean for the interaction prior, if anything?

data(tulips)
df1 <- tulips
str(df1)
## 'data.frame':    27 obs. of  4 variables:
##  $ bed   : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 2 ...
##  $ water : int  1 1 1 2 2 2 3 3 3 1 ...
##  $ shade : int  1 2 3 1 2 3 1 2 3 1 ...
##  $ blooms: num  0 0 111 183.5 59.2 ...
df1$blooms_std <- df1$blooms / max(df1$blooms)
df1$water_cent <- df1$water - mean(df1$water)
df1$shade_cent <- df1$shade - mean(df1$shade)
# Prior predictive check for alpha ~ Normal(0.5, 1): most of the prior mass falls outside
# the observable range of standardized blooms [0, 1], so the tighter sd of 0.25 is used below.
results <- rnorm( 1e4 , 0.5 , 1 )
sum( results < 0 | results > 1 ) / length( results )
## [1] 0.6116
model1 <- quap(
    alist(
        blooms_std ~ dnorm( mu , sigma ) ,
        # the minus sign on S builds the negative shade effect into the model
        mu <- alpha + W*water_cent - S*shade_cent ,
        alpha ~ dnorm( 0.5 , 0.25 ) ,
        W ~ dlnorm( 0 , 0.25 ) ,  # log-normal constrains the water effect to be positive
        S ~ dlnorm( 0 , 0.25 ) ,  # with the minus sign above, the shade effect is negative
        sigma ~ dexp( 1 )
) , data=df1 )

model2 <- quap(
    alist(
        blooms_std ~ dnorm( mu , sigma ) ,
        mu <- alpha + W*water_cent - S*shade_cent + W_S*water_cent*shade_cent ,
        alpha ~ dnorm( 0.5 , 0.25 ) ,
        W ~ dlnorm( 0 , 0.25 ) ,
        S ~ dlnorm( 0 , 0.25 ) ,
        W_S ~ dnorm( 0 , 0.25 ) ,
        sigma ~ dexp( 1 )
) , data=df1 )


# Prior predictive simulation: plot regression lines implied by the priors
# (not the posterior) for the interaction model at each shade level.
prior <- extract.prior( model2 )
par(mfrow=c(1,3))
for ( s in -1:1 ) {
    idx <- which( df1$shade_cent==s )
    plot( df1$water_cent[idx] , df1$blooms_std[idx] , xlim=c(-1,1) , ylim=c(0,1) ,
        xlab="Water" , ylab="Blooms" , pch=16 , col=rangi2 , main=paste("shade =",s) )
    mu <- link( model2 , post=prior , data=data.frame( shade_cent=s , water_cent=-1:1 ) )
    for ( i in 1:20 ) lines( -1:1 , mu[i,] , col=col.alpha("blue",0.3) )
}
# What do these assumptions mean for the interaction prior? If water can only help and
# shade can only hurt, then more shade should weaken, never strengthen, the benefit of
# water, so the interaction should be negative. The symmetric dnorm(0, 0.25) prior on W_S
# does not encode that; a sign-constrained version (e.g., subtracting
# W_S*water_cent*shade_cent with W_S ~ dlnorm) would.

8H1. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don’t interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to either construct dummy variables or rather an index variable, as explained in Chapter 5.

data(tulips)
df2 <- tulips
df2$shade.c <- df2$shade - mean(df2$shade)
df2$water.c <- df2$water - mean(df2$water)
df2$bed.idx <- coerce_index(df2$bed)  # index variable: bed a = 1, b = 2, c = 3

# Baseline interaction model without bed, for comparison with the bed model below.
model3 <- quap(
  alist(
    blooms ~ dnorm(mu, sigma),
    mu <- alpha + W*water.c + S*shade.c + W_S*water.c*shade.c,
    alpha ~ dnorm(130, 100),
    W ~ dnorm(0, 100),
    S ~ dnorm(0, 100),
    W_S ~ dnorm(0, 100),
    sigma ~ dunif(0, 100)
  ),
  data = df2,
  start = list(alpha = mean(df2$blooms), W = 0, S = 0, W_S = 0, sigma = sd(df2$blooms))
)
precis(model3)
##            mean        sd      5.5%     94.5%
## alpha 129.00797  8.670771 115.15041 142.86554
## W      74.95946 10.601997  58.01542  91.90350
## S     -41.14054 10.600309 -58.08188 -24.19920
## W_S   -51.87265 12.948117 -72.56625 -31.17906
## sigma  45.22497  6.152982  35.39132  55.05863
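To include bed as a main effect, each bed gets its own intercept through the bed.idx index variable. A sketch of that model (the model4 name and the reuse of model3's priors are my own choices):

model4 <- quap(
  alist(
    blooms ~ dnorm(mu, sigma),
    # one intercept per bed via the index variable; bed is not interacted with the others
    mu <- a[bed.idx] + W*water.c + S*shade.c + W_S*water.c*shade.c,
    a[bed.idx] ~ dnorm(130, 100),
    W ~ dnorm(0, 100),
    S ~ dnorm(0, 100),
    W_S ~ dnorm(0, 100),
    sigma ~ dunif(0, 100)
  ),
  data = df2
)
precis(model4, depth = 2)  # depth = 2 displays the three bed intercepts

Comparing the two fits, e.g. with compare(model3, model4), would show how much the bed-specific intercepts improve expected predictive accuracy.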